WO2021042655A1 - Sound and picture synchronization processing method and display device - Google Patents

Sound and picture synchronization processing method and display device Download PDF

Info

Publication number
WO2021042655A1
WO2021042655A1 · PCT/CN2020/071103 · CN2020071103W
Authority
WO
WIPO (PCT)
Prior art keywords
audio
time
processing
sound
video
Prior art date
Application number
PCT/CN2020/071103
Other languages
French (fr)
Chinese (zh)
Inventor
Chen Junning (陈俊宁)
Li Huijuan (李慧娟)
Chu Dejin (初德进)
Original Assignee
Hisense Visual Technology Co., Ltd. (海信视像科技股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co., Ltd. (海信视像科技股份有限公司)
Publication of WO2021042655A1 publication Critical patent/WO2021042655A1/en

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/4104 Peripherals receiving signals from specially adapted client devices
    • H04N21/4122 Peripherals receiving signals from specially adapted client devices: additional display device, e.g. video projector
    • H04N21/426 Internal components of the client; Characteristics thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/439 Processing of audio elementary streams
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/443 OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N21/485 End-user interface for client configuration
    • H04N21/4852 End-user interface for client configuration for modifying audio parameters, e.g. switching between mono and stereo
    • H04N21/4854 End-user interface for client configuration for modifying image parameters, e.g. image brightness, contrast

Definitions

  • This application relates to the technical field of display devices, and in particular, to a method for audio and picture synchronization processing and a display device.
  • FIG. 1 is a video processing flowchart of a dual-system display device.
  • The dual system includes a first chip (A chip) and a second chip (N chip).
  • The first chip and the second chip can communicate with each other through an interface circuit, such as HDMI (High-Definition Multimedia Interface), a network port, or USB (Universal Serial Bus).
  • Video signals from sources such as the network or local media are decoded in the A chip, which separates the sound data and the image data in the video. The A chip performs PQ (Picture Quality) processing on the image data and sound processing on the sound data, and then transmits the processed image data and sound data to the second chip through HDMI.
  • After the N chip receives the data sent by the A chip, it performs PQ processing on the image data again and outputs the processed image data to the display screen to show the video image; it also performs sound processing on the sound data and outputs the processed sound data to the speaker to play the video sound, thereby completing audio and picture playback.
  • PQ processing stages mainly include brightness, contrast, chroma, hue, sharpness, image noise reduction, dynamic contrast, gamma, color temperature, white balance, color correction, dynamic brightness range, and motion picture compensation; sound processing stages mainly include sound noise reduction, AVC (Advanced Video Coding), DTS (Digital Theater System) sound effects, Dolby (Dolby Atmos) sound effects, GEQ (graphic equalizer) processing, and PEQ (parametric equalizer) processing.
  • The present application provides an audio-visual synchronization processing method and a display device to solve the problem that sound and picture are not synchronized when an existing dual-system display device plays a video.
  • The present application provides a display device including a display and a sound player; the display is used for displaying video images and the sound player is used for playing audio. The display device further includes a first chip and a second chip connected in communication. The first chip is provided with a first video processor, a first audio processor, and a controller; the second chip is provided with a second video processor and a second audio processor;
  • the first video processor is configured to receive a video signal through an input interface, perform image-quality processing on the video signal, and obtain the first image-quality processing duration;
  • the first audio processor receives an audio signal through an input interface, performs sound processing on the audio signal, and obtains the first sound-processing duration;
  • the second video processor receives the video signal output by the first chip through a communication interface, performs image-quality processing on it, and obtains the second image-quality processing duration;
  • the second audio processor receives the audio signal output by the first chip through a communication interface, performs sound processing on it, and obtains the second sound-processing duration;
  • the controller is configured to:
  • if the audio-visual synchronization time difference is not within the threshold range, the video signal and audio signal output by the second chip are synchronously compensated, the compensated video signal is transmitted to the display, and the compensated audio signal is output to the sound player;
  • if the time difference is within the threshold range, the video signal output by the second chip is transmitted to the display and the audio signal output by the second chip is output to the sound player.
  • the audio-visual synchronization time difference is calculated according to the following formula:
  • N = T1 + T2 - (T3 + T4)
  • N is the audio-visual synchronization time difference;
  • T1 is the time consumed by the first image-quality processing;
  • T2 is the time consumed by the second image-quality processing;
  • T3 is the time consumed by the first sound processing;
  • T4 is the time consumed by the second sound processing.
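The time-difference computation and threshold check above can be sketched in Python. The numeric threshold bounds below are illustrative assumptions only; the patent defines the bounds as the allowed image-late and image-early times without fixing values:

```python
def av_sync_difference(t1, t2, t3, t4):
    """N = T1 + T2 - (T3 + T4): total video-path delay minus total audio-path delay (ms)."""
    return (t1 + t2) - (t3 + t4)

def needs_compensation(n_ms, lower=-45.0, upper=125.0):
    """Return which compensation, if any, is required.

    The bounds are assumed values: viewers tolerate audio lagging video more
    than the reverse, so the range is asymmetric by assumption.
    """
    if n_ms > upper:      # image would appear too late: drop frames or delay audio
        return "drop_frames_or_delay_audio"
    if n_ms < lower:      # image would appear too early: interpolate frames
        return "interpolate_frames"
    return None           # within threshold: pass signals through unchanged

# Example: video path costs 60 + 80 ms, audio path 20 + 30 ms -> N = 90 ms.
print(av_sync_difference(60, 80, 20, 30))  # 90
print(needs_compensation(90))              # None (within the assumed range)
```

With these assumed bounds, a 90 ms video lag needs no compensation, while a 200 ms lag would trigger frame dropping or audio delay.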
  • the first chip detects whether an image setting operation or a sound setting operation is received
  • if the first chip receives an image setting operation or a sound setting operation, the first video processor reacquires the first image-quality processing duration, the first audio processor reacquires the first sound-processing duration, the second video processor reacquires the second image-quality processing duration, and the second audio processor reacquires the second sound-processing duration, so that the controller can correct the audio-visual synchronization time difference;
  • the controller is further configured to perform synchronous compensation on the video signal and the audio signal output by the second chip according to the corrected audio-visual synchronization time difference if the corrected time difference is not within the threshold range.
  • the first video processor reacquires the first image-quality processing duration every preset interval;
  • the first audio processor reacquires the first sound-processing duration every preset interval, the second video processor reacquires the second image-quality processing duration every preset interval, and the second audio processor reacquires the second sound-processing duration every preset interval, so that the controller corrects the audio-visual synchronization time difference;
  • the controller is further configured to perform synchronous compensation on the video signal and the audio signal output by the second chip according to the corrected audio-visual synchronization time difference if the corrected time difference is not within the threshold range.
  • the first image quality processing time and the second image quality processing time are calculated according to the following formula:
  • T1 is the time consumed by the first image-quality processing;
  • T2 is the time consumed by the second image-quality processing;
  • Tn is the time consumed by image-quality processing for each frame;
  • Td is the delay generated when image quality is processed for each frame;
  • f is the refresh frequency in Hz;
  • R is the frame processing threshold, which indicates that the image data of the current frame and the subsequent R-1 frames must be read to complete the image-quality processing of the current frame;
  • S is the number of frames for video playback.
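The formula itself is not reproduced in this excerpt, but the per-frame quantities relate as follows. This sketch derives the frame period from f and treats the latency of the R-frame read window as R-1 frame periods; that latency model is an assumption made for illustration, not the patent's exact formula:

```python
def frame_period_ms(f_hz):
    """At refresh frequency f (Hz), one frame lasts 1000 / f milliseconds."""
    return 1000.0 / f_hz

def read_window_latency_ms(f_hz, r):
    """Assumed latency of an R-frame read window: the current frame cannot finish
    image-quality processing until the following R-1 frames have arrived."""
    return (r - 1) * frame_period_ms(f_hz)

# Example: 60 Hz panel, R = 4 -> each frame lasts ~16.7 ms and the processor
# must wait ~50 ms of input before the current frame can complete.
print(round(frame_period_ms(60), 1))            # 16.7
print(round(read_window_latency_ms(60, 4), 1))  # 50.0
```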
  • the controller performs synchronous compensation on the video signal and audio signal output by the second chip according to the following steps:
  • if the audio-visual synchronization time difference is greater than the upper limit of the threshold range, frame dropping or audio delay is performed so that the sound and the picture are brought into synchronization; the upper limit of the threshold range is the maximum time the image is allowed to appear later than the sound.
  • the controller performs synchronous compensation on the video signal and audio signal output by the second chip according to the following steps:
  • if the audio-visual synchronization time difference is less than the lower limit of the threshold range, frame interpolation is performed to synchronize the sound and the picture; the lower limit of the threshold range is the maximum time the image is allowed to appear earlier than the sound.
  • the performing frame dropping includes:
  • the interpolating frame includes:
  • If the compensation mode that brings sound and picture into synchronization within a preset threshold time is adopted, calculate the number of interval frames JZ2, JZ2 = Z/
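The frame-dropping and frame-interpolation steps above compute a spacing so the adjusted frames are spread evenly across the playback window rather than applied as one visible jump. Since the JZ2 formula is truncated in this excerpt, the arithmetic below is a plausible sketch only; the names JZ and Z follow the source, and the division is assumed:

```python
def frames_to_adjust(n_ms, f_hz):
    """Whole frames by which video leads or lags audio.
    Positive -> video late (drop frames); negative -> video early (interpolate)."""
    frame_ms = 1000.0 / f_hz
    return round(n_ms / frame_ms)

def interval_frames(total_frames_z, frames_to_change):
    """JZ-style spacing: distribute the adjusted frames evenly across Z frames
    so the correction is not perceptible (arithmetic assumed, not from source)."""
    if frames_to_change == 0:
        return None
    return max(1, total_frames_z // abs(frames_to_change))

# Example: video is 100 ms late on a 60 Hz panel -> drop ~6 frames,
# spaced every 50 frames across a 300-frame window.
drop = frames_to_adjust(100, 60)
print(drop)                        # 6
print(interval_frames(300, drop))  # 50
```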
  • the present application also provides an audio-visual synchronization processing method, which is used in the above-mentioned display device, and the method includes:
  • the first video processor receives a video signal through an input interface, performs image-quality processing on the video signal, and obtains the first image-quality processing duration;
  • the first audio processor receives an audio signal through the input interface, performs sound processing on the audio signal, and obtains the first sound-processing duration;
  • the second video processor receives the video signal output by the first chip through the communication interface, performs image-quality processing on it, and obtains the second image-quality processing duration;
  • the second audio processor receives the audio signal output by the first chip through the communication interface, performs sound processing on it, and obtains the second sound-processing duration;
  • the controller calculates the audio-visual synchronization time difference from the first image-quality processing duration, the second image-quality processing duration, the first sound-processing duration, and the second sound-processing duration;
  • the controller judges whether the audio-visual synchronization time difference is within a threshold range
  • if the time difference is not within the threshold range, the controller performs synchronous compensation on the video signal and audio signal output by the second chip, transmits the compensated video signal to the display, and outputs the compensated audio signal to the sound player;
  • if the time difference is within the threshold range, the controller transmits the video signal output by the second chip to the display and outputs the audio signal output by the second chip to the sound player.
  • the embodiment of the present application also proposes a display device, including
  • Display used to display video images
  • the first video processor is configured to receive a video signal, perform first image-quality processing, and obtain the first image-quality processing duration;
  • the first audio processor is configured to receive an audio signal, perform first sound processing, and obtain the first sound-processing duration;
  • the second video processor is configured to receive the processed video signal output by the first video processor, perform second image-quality processing, and obtain the second image-quality processing duration;
  • the second audio processor is configured to receive the processed audio signal output by the first audio processor, perform second sound processing, and obtain the second sound-processing duration;
  • the controller is configured to:
  • the audio signal is compensated and transmitted to the sound player.
  • the controller is also used to:
  • if the audio-visual synchronization time difference is not within the threshold range, compensate the processed video signal output by the second video processor according to the time difference and transmit it to the display, and compensate the processed audio signal output by the second audio processor according to the time difference and transmit it to the sound player;
  • otherwise, transmit the processed video signal output by the second video processor to the display and output the processed audio signal output by the second audio processor to the sound player.
  • the controller is also used to:
  • the first video processor reacquires the first image-quality processing duration, the first audio processor reacquires the first sound-processing duration, the second video processor reacquires the second image-quality processing duration, and the second audio processor reacquires the second sound-processing duration;
  • the controller corrects the audio-visual synchronization time difference accordingly.
  • the first video processor reacquires the first image-quality processing duration every preset interval;
  • the first audio processor reacquires the first sound-processing duration every preset interval;
  • the second video processor reacquires the second image-quality processing duration every preset interval;
  • the second audio processor reacquires the second sound-processing duration every preset interval.
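The periodic re-acquisition described in the items above can be sketched as a polling loop. The `probe` callable here is a hypothetical stand-in for the real per-processor timing hooks, which this excerpt does not name:

```python
import time

def measure_durations(probe):
    """Re-acquire the four processing durations (T1..T4) from a probe callable."""
    return tuple(probe(k) for k in ("t1", "t2", "t3", "t4"))

def periodic_correction(probe, interval_s, rounds):
    """Every preset interval, re-measure T1..T4 and recompute the sync difference."""
    history = []
    for _ in range(rounds):
        t1, t2, t3, t4 = measure_durations(probe)
        history.append((t1 + t2) - (t3 + t4))  # N = T1 + T2 - (T3 + T4)
        time.sleep(interval_s)
    return history

# Example with a fake probe returning fixed durations (ms).
fake = {"t1": 60, "t2": 80, "t3": 20, "t4": 30}.get
print(periodic_correction(fake, 0.0, 3))  # [90, 90, 90]
```

In a real device the probe would wrap the chips' timing registers or driver callbacks, and the recomputed N would feed the same threshold check used for the initial compensation.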
  • the controller is also used to:
  • the audio-visual synchronization time difference is greater than the upper limit of the threshold range, frame dropping or audio delay is performed; wherein, the upper limit of the threshold range is the time allowed for the image to appear later than the sound.
  • the controller is also used to:
  • the audio-visual synchronization time difference is less than the lower limit of the threshold range, frame interpolation is performed; wherein the lower limit of the threshold range is the time allowed for the image to appear ahead of the sound.
  • the number of interpolated frames CZ = f ×
  • the embodiment of the present application also proposes a method for audio-visual synchronization processing, which includes:
  • the first video processor is configured to receive a video signal, perform first image-quality processing, and obtain the first image-quality processing duration;
  • the first audio processor is configured to receive an audio signal, perform first sound processing, and obtain the first sound-processing duration;
  • the second video processor is configured to receive the processed video signal output by the first video processor, perform second image-quality processing, and obtain the second image-quality processing duration;
  • the second audio processor is configured to receive the processed audio signal output by the first audio processor, perform second sound processing, and obtain the second sound-processing duration;
  • the controller is configured to:
  • the audio signal is compensated and transmitted to the sound player.
  • the embodiment of the present application also proposes a display device, including
  • the display is configured to display image content
  • a sound reproducer configured to reproduce sound signals
  • the first processing chip includes a first video processor and a first audio processor, and receives external audio and video signals through an input interface; the first audio processor is used to process the audio signal, the first video processor is used to process the video signal, and a first time delay occurs when the audio signal and the video signal are processed;
  • the second processing chip is configured to receive the audio signal and the video signal output by the first chip through a connecting line; the second processing chip includes a second video processor and a second audio processor, where the second audio processor reprocesses the audio signal received from the first processing chip and the second video processor reprocesses the video signal received from the first processing chip, and a second time delay occurs when the audio signal and the video signal are reprocessed;
  • time delay compensation is performed on the reprocessed video signal and/or audio signal, and the compensated video signal and audio signal are output to the display and the sound reproducer, respectively.
  • the first chip detects whether an image setting operation or a sound setting operation is received
  • the first audio processor reprocesses the audio signal
  • the first video processor reprocesses the video signal
  • a third time delay occurs when the audio signal and the video signal are reprocessed
  • the second audio processor is used for reprocessing the audio signal received by the first processing chip
  • the second video processor is used for reprocessing the video signal received by the first processing chip
  • a fourth time delay occurs during the reprocessing of the audio signal and the video signal; according to the third time delay and the fourth time delay, delay compensation is performed on the reprocessed video signal and/or audio signal.
  • Every predetermined time, the first audio processor processes the audio signal and the first video processor processes the video signal, and the first time delay occurring during this processing is obtained; likewise, every predetermined time, the second audio processor reprocesses the audio signal received from the first processing chip and the second video processor reprocesses the video signal received from the first processing chip, and the second time delay occurring during this reprocessing is obtained.
  • the controller performs time delay compensation on the video signal and/or the audio signal after reprocessing according to the following steps:
  • if the audio-visual synchronization time difference is greater than the upper limit of the threshold range, frame dropping or audio delay is performed; the upper limit of the threshold range is the maximum time the image is allowed to appear later than the sound;
  • the audio-visual synchronization time difference is equal to the sum of the first time delay and the second time delay.
  • the controller performs time delay compensation on the video signal and/or the audio signal after reprocessing according to the following steps:
  • if the audio-visual synchronization time difference is less than the lower limit of the threshold range, frame interpolation is performed; the lower limit of the threshold range is the maximum time the image is allowed to appear earlier than the sound;
  • the audio-visual synchronization time difference is equal to the sum of the first time delay and the second time delay.
  • the performing frame dropping includes:
  • the interpolating frame includes:
  • If the compensation mode that brings sound and picture into synchronization within a preset threshold time is adopted, calculate the number of interval frames JZ2, JZ2 = Z/
  • the audio-visual synchronization time difference is calculated according to the following formula:
  • N = T1 + T2 - (T3 + T4)
  • N is the time difference between audio and video synchronization
  • T1 is the time consumed by the first image-quality processing generated when the first video processor processes the video signal;
  • T2 is the time consumed by the second image-quality processing generated when the second video processor reprocesses the video signal;
  • T3 is the time consumed by the first sound processing generated when the first audio processor processes the audio signal;
  • T4 is the time consumed by the second sound processing generated when the second audio processor reprocesses the audio signal.
  • the first image quality processing time and the second image quality processing time are calculated according to the following formula:
  • T1 is the time consumed by the first image-quality processing generated when the first video processor processes the video signal;
  • T2 is the time consumed by the second image-quality processing generated when the second video processor reprocesses the video signal;
  • Tn is the time consumed by image-quality processing for each frame of the image;
  • Td is the delay generated when image quality is processed for each frame of the image;
  • f is the refresh frequency in Hz;
  • R is the frame processing threshold, which indicates that the image data of the current frame and the subsequent R-1 frames must be read to complete the image-quality processing of the current frame;
  • S is the number of frames for video playback.
  • The technical solution provided by this application has the following beneficial effects. For a display device with a dual-system structure, video decoding is performed on the first chip; after the image data and the sound data are separated, the first image-quality processing duration and the first sound-processing duration are obtained, along with the second image-quality processing duration and the second sound-processing duration, so that the time each chip spends on sound and image-quality processing is measured accurately. These parameters are then used to calculate the audio-visual synchronization time difference, that is, the time difference between the image played by the display and the sound played by the sound playback device. If this time difference is not within the threshold range, the sound and the image are considered out of sync, whether the image leads or lags the sound, and the video signal and audio signal output by the second chip are synchronously compensated to achieve audio and picture synchronization, thereby improving the video playback effect of the display device.
  • Figure 1 is a video processing flowchart of a dual-system display device
  • FIG. 2 is a schematic diagram of an operation scenario between a display device and a control device shown in an embodiment of the application;
  • FIG. 3 is a block diagram of the hardware configuration of the control device 100 shown in an embodiment of the application;
  • FIG. 4 is a block diagram of the hardware configuration of the display device 200 shown in an embodiment of the application.
  • FIG. 5 is a block diagram of the hardware architecture of the display device 200 shown in an embodiment of the application.
  • FIG. 6 is a schematic diagram of a functional configuration of a display device 200 shown in an embodiment of the application.
  • FIG. 7(a) is a schematic diagram of software configuration in the display device 200 shown in an embodiment of the application.
  • FIG. 7(b) is a schematic diagram of the configuration of an application program in the display device 200 shown in an embodiment of the application;
  • FIG. 8 is a schematic diagram of a user interface in a display device 200 according to an embodiment of the application.
  • FIG. 9 is a flowchart of a method for processing audio and video synchronization according to an embodiment of the application.
  • FIG. 10 is a schematic diagram of a method for acquiring frames during image quality processing according to an embodiment of the application.
  • FIG. 11 is a schematic diagram of each stage of the sound processing performed by the first chip/second chip according to an embodiment of the application;
  • FIG. 12 is a flowchart of acquiring the time consumed by the image quality processing and the sound processing of the first chip/second chip according to an embodiment of the application;
  • FIG. 13 is a flowchart of another audio-visual synchronization processing method shown in an embodiment of the application.
  • FIG. 14 is a flowchart of a method for performing audio-visual synchronization on a display device according to an embodiment of the application.
  • This application is mainly aimed at the audio and video synchronization processing of a display device with a dual system structure, that is, a first chip (first hardware system, A chip) and a second chip (second hardware system, N chip).
  • various external device interfaces are usually provided on the display device to facilitate the connection of different peripheral devices or cables to achieve corresponding functions.
  • When a high-resolution camera is connected to the interface of the display device, if the hardware system of the display device lacks a hardware interface capable of receiving the source data of the high-pixel camera, the data received by the camera cannot be presented on the screen of the display device.
  • the hardware system of traditional display devices supports only one hard decoding resource, and usually supports at most 4K-resolution video decoding. Therefore, to realize video chat while watching Internet TV without reducing the definition of the network video picture, the hard decoding resource (usually the GPU in the hardware system) must be used to decode the network video.
  • in this case, the video chat picture can only be processed by soft decoding using a general-purpose processor such as the CPU.
  • Using soft decoding to process the video chat screen will greatly increase the data processing burden on the CPU.
  • when the data processing burden on the CPU is too heavy, the picture may freeze or become unsmooth.
  • moreover, when CPU soft decoding is used to process the video chat picture, multi-channel video calls usually cannot be achieved; when the user wants to video chat simultaneously with multiple other users in the same chat scene, access will be blocked.
  • this application discloses a dual hardware system architecture to realize multiple channels of video chat data (at least one local video).
  • the term "circuit" used in the various embodiments of this application may refer to any known or later-developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code capable of executing the function related to the component.
  • remote control used in the various embodiments of this application refers to a component of an electronic device (such as the display device disclosed in this application), which can generally control the electronic device wirelessly within a short distance.
  • This component can generally use infrared and/or radio frequency (RF) signals and/or Bluetooth to connect with electronic devices, and can also include functional circuits such as WiFi, wireless USB, Bluetooth, and motion sensors.
  • a handheld touch remote control replaces most of the physical built-in hard keys in general remote control devices with the user interface in the touch screen.
  • gesture used in the embodiments of the present application refers to a user's behavior through a change of hand shape or hand movement to express expected ideas, actions, goals, and/or results.
  • the term "hardware system” used in the various embodiments of this application may refer to an integrated circuit (IC), printed circuit board (Printed circuit board, PCB) and other mechanical, optical, electrical, and magnetic devices with computing , Control, storage, input and output functions of the physical components.
  • the hardware system is usually also referred to as a motherboard or a chip.
  • Fig. 2 exemplarily shows a schematic diagram of an operation scenario between the display device and the control device according to the embodiment. As shown in FIG. 2, the user can operate the display device 200 by controlling the device 100.
  • the control device 100 may be a remote controller 100A, which can communicate with the display device 200 through infrared protocol communication, Bluetooth protocol communication, ZigBee protocol communication, or other short-distance communication methods to control the display device 200 wirelessly or through other short-distance means.
  • alternatively, the display device 200 may be controlled in a wired manner.
  • the user can control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, etc.
  • the user can input corresponding control commands through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input keys, menu keys, and power on/off keys on the remote control to realize the functions of controlling the display device 200.
  • the control device 100 can also be a smart device, such as a mobile terminal 100B, a tablet computer, a computer, or a notebook computer, which can communicate with the display device 200 through a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), or other networks, and control the display device 200 through an application program corresponding to the display device 200.
  • the application can provide users with various controls through an intuitive user interface (UI, User Interface) on the screen associated with the smart device.
  • both the mobile terminal 100B and the display device 200 can install software applications, so that the connection and communication between the two can be realized through a network communication protocol, thereby achieving the purpose of one-to-one control operation and data communication.
  • For example, the mobile terminal 100B can establish a control command protocol with the display device 200, the remote control keyboard can be synchronized to the mobile terminal 100B, and the function of controlling the display device 200 can be realized by controlling the user interface of the mobile terminal 100B; alternatively, the audio and video content displayed on the mobile terminal 100B can be transmitted to the display device 200 to realize a synchronous display function.
  • the display device 200 can also communicate with the server 300 through multiple communication methods.
  • the display device 200 may be allowed to communicate with the server 300 via a local area network, a wireless local area network, or other networks.
  • the server 300 may provide various contents and interactions to the display device 200.
  • the display device 200 transmits and receives information, interacts with an Electronic Program Guide (EPG, Electronic Program Guide), receives software program updates, or accesses a remotely stored digital media library.
  • the server 300 may be a group or multiple groups, and may be one or more types of servers.
  • the server 300 provides other network service content such as video-on-demand and advertising services.
  • the display device 200 may be a liquid crystal display, an OLED (Organic Light Emitting Diode) display, or a projection display device; on the other hand, the display device may be a smart TV or a display system composed of a display and a set-top box.
  • the display device 200 may make some changes in performance and configuration as required.
  • the display device 200 may additionally provide a smart network TV function that provides a computer support function. In some embodiments, it includes Internet TV, Smart TV, Internet Protocol TV (IPTV), and the like. In some embodiments, the display device may not have the function of broadcasting and receiving TV.
  • the display device may be connected or provided with a camera, which is used to present the picture captured by the camera on the display interface of the display device or other display devices, so as to realize interactive chat between users.
  • the image captured by the camera can be displayed on the display device in full screen, half screen, or in any selectable area.
  • the camera is connected to the rear case of the display through a connecting plate and is fixedly installed on the upper middle part of the rear case of the display. Alternatively, it can be fixedly installed at any position on the rear case of the display, as long as the image capture area is not blocked by the rear case; for example, the image capture area and the display device have the same orientation.
  • the camera can be connected to the display rear shell through a connecting plate or other conceivable connectors.
  • a lifting motor is installed on the connector.
  • the camera used in this application may have 16 megapixels to achieve the purpose of ultra-high-definition display. In actual use, a camera with a resolution higher or lower than 16 megapixels can also be used.
  • the content displayed in different application scenarios of the display device can be merged in a variety of different ways, so as to achieve functions that cannot be achieved by traditional display devices.
  • the user may have a video chat with at least one other user while watching a video program.
  • the presentation of the video program can be used as the background screen, and the video chat window is displayed on the background screen.
  • At least one video chat is performed across terminals.
  • the user can have a video chat with at least one other user while entering the education application for learning. For example, students can realize remote interaction with teachers while learning content in educational applications. This function can be vividly called "learning while chatting".
  • a video chat may be conducted while playing a game. For example, when a player enters a game application to participate in a game, remote interaction with other players can be realized. This function can be vividly called "watch and play".
  • the game scene and the video image are merged, and the portrait in the video image is cut out and displayed on the game image, which improves the user experience.
  • somatosensory games such as ball games, boxing games, running games, dancing games, etc.
  • human body posture and movement data are acquired through a camera, along with limb detection and tracking and detection of key point data of the human skeleton.
  • Animations are integrated in the game to realize games such as sports, dance and other scenes.
  • the user can interact with at least one other user in video and voice in a karaoke application. This function can be vividly called "watch and sing". In some embodiments, when at least one user enters the application in a chat scene, multiple users can jointly complete the recording of a song.
  • the user can turn on the camera locally to obtain pictures and videos; this function can be vividly called "mirror".
  • more functions may be added or the above functions may be reduced. This application does not specifically limit the function of the display device.
  • Fig. 3 exemplarily shows a configuration block diagram of the control device 100 according to an exemplary embodiment.
  • the control device 100 includes a controller 110, a communicator 130, a user input/output interface 140, a memory 190, and a power supply 180.
  • the control device 100 is configured to control the display device 200; it can receive user input operation instructions and convert the operation instructions into instructions that the display device 200 can recognize and respond to, playing the role of an intermediary in the interaction between the user and the display device 200.
  • the user operates the channel plus and minus key on the control device 100, and the display device 200 responds to the channel plus and minus operation.
  • control device 100 may be a smart device.
  • control device 100 can install various applications for controlling the display device 200 according to user requirements.
  • the mobile terminal 100B or other smart electronic devices can perform similar functions to the control device 100 after installing an application for controlling the display device 200.
  • by installing applications, the user can use various function keys or virtual buttons of the graphical user interface provided on the mobile terminal 100B or other smart electronic devices to realize the functions of the physical keys of the control device 100.
  • the controller 110 includes a processor 112, a RAM 113 and a ROM 114, a communication interface, and a communication bus.
  • the controller 110 is used to control the operation and operation of the control device 100, as well as communication and cooperation between internal components, and external and internal data processing functions.
  • the communicator 130 realizes the communication of control signals and data signals with the display device 200 under the control of the controller 110. For example, the received user input signal is sent to the display device 200.
  • the communicator 130 may include at least one of communication circuits such as a WIFI circuit 131, a Bluetooth circuit 132, and an NFC circuit 133.
  • the user input/output interface 140 wherein the input interface includes at least one of input interfaces such as a microphone 141, a touch panel 142, a sensor 143, and a button 144.
  • the user can implement the user instruction input function through voice, touch, gesture, pressing and other actions.
  • the input interface converts the received analog signal into a digital signal and the digital signal into a corresponding instruction signal, and sends it to the display device 200.
  • the output interface includes an interface for sending the received user instruction to the display device 200.
  • it may be an infrared interface or a radio frequency interface.
  • when an infrared signal interface is used, the user input instruction needs to be converted into an infrared control signal according to the infrared control protocol and sent to the display device 200 via the infrared sending circuit.
  • when a radio frequency signal interface is used, the user input instruction needs to be converted into a digital signal, modulated according to the radio frequency control signal modulation protocol, and then sent to the display device 200 by the radio frequency sending terminal.
  • control device 100 includes at least one of a communicator 130 and an output interface.
  • the control device 100 is configured with a communicator 130, such as WIFI, Bluetooth, and NFC circuits, which can encode user input instructions through the WIFI protocol, Bluetooth protocol, or NFC protocol and send them to the display device 200.
  • the memory 190 is used to store various operating programs, data, and applications for driving and controlling the control device 100 under the control of the controller 110.
  • the memory 190 can store various control signal instructions input by the user.
  • the power supply 180 is used to provide operating power support for the electrical components of the control device 100 under the control of the controller 110. The power supply may be a battery and its related control circuit.
  • FIG. 4 exemplarily shows a hardware configuration block diagram of the hardware system in the display device 200 according to the exemplary embodiment.
  • the structural relationship of the hardware system can be shown in Figure 4.
  • one hardware system in the dual hardware system architecture is referred to as the first hardware system or A system, A chip, and the other hardware system is referred to as the second hardware system or N system, N chip.
  • the A chip includes the controller of the A chip and various circuits connected to the controller of the A chip through various interfaces
  • the N chip includes the controller of the N chip and various circuits connected to the controller of the N chip through various interfaces.
  • a relatively independent operating system can be installed in the A chip and the N chip.
  • the operating system of the A chip and the operating system of the N chip can communicate with each other through a communication protocol. For example, the framework layer of the A chip's operating system and the framework layer of the N chip's operating system can communicate to transmit commands and data, so that there are two independent but interrelated subsystems in the display device 200.
  • the A chip and the N chip can realize connection, communication and power supply through multiple different types of interfaces.
  • the interface type of the interface between the A chip and the N chip may include general-purpose input/output (GPIO), USB interface, HDMI interface, UART interface, and the like.
  • One or more of these interfaces can be used between the A chip and the N chip for communication or power transmission.
  • the N chip can be powered by an external power source, and the A chip can be powered by the N chip instead of the external power source.
  • the external power supply can also be connected to the N chip and the A chip respectively to supply power to the N chip and the A chip.
  • the A chip may also include interfaces for connecting other devices or components, such as the MIPI interface for connecting to a camera (Camera) shown in FIG. 4, a Bluetooth interface, etc.
  • the N chip can also include a VBY interface for connecting to the display screen TCON (Timer Control Register), an I2S interface for connecting a power amplifier (AMP) and a speaker (Speaker), as well as an IR/Key interface, a USB interface, a Wifi interface, a Bluetooth interface, an HDMI interface, a Tuner interface, and the like.
  • FIG. 5 is only an exemplary description of the dual hardware system architecture of the present application, and does not represent a limitation to the present application. In practical applications, both hardware systems can contain more or less hardware or interfaces as required.
  • FIG. 5 exemplarily shows a block diagram of the hardware architecture of the display device 200 according to FIG. 4.
  • the hardware system of the display device 200 may include an A chip and an N chip, and circuits connected to the A chip or the N chip through various interfaces.
  • the N chip may include a tuner and demodulator 220, a communicator 230, an external device interface 250, a controller 210, a memory 290, a user input interface 260-3, a video processor 260-1, an audio processor 260-2, a display 280, Audio output interface 270, power supply circuit 240.
  • the N chip may also include more or fewer circuits.
  • the tuner and demodulator 220 is used to perform modulation and demodulation processing such as amplification, mixing, and resonance on broadcast and television signals received through wired or wireless means, so as to demodulate, from multiple wireless or cable broadcast television signals, the audio and video signals carried in the frequency of the TV channel selected by the user, as well as additional information (such as EPG data signals).
  • the signal paths of the tuner and demodulator 220 can be of many kinds, such as terrestrial broadcasting, cable broadcasting, satellite broadcasting, or Internet broadcasting; according to different modulation types, the signal modulation method may be digital modulation or analog modulation; and according to the different types of received television signals, the tuner demodulator 220 may demodulate analog signals and/or digital signals.
  • the tuner and demodulator 220 is also used to respond, according to the user's selection and under the control of the controller 210, to the TV channel frequency selected by the user and the TV signal carried by that frequency.
  • the tuner demodulator 220 may also be in an external device, such as an external set-top box.
  • the set-top box outputs TV audio and video signals through modulation and demodulation, and inputs them to the display device 200 through the external device interface 250.
  • the communicator 230 is a component for communicating with external devices or external servers according to various communication protocol types.
  • the communicator 230 may include a WIFI circuit 231, a Bluetooth communication protocol circuit 232, a wired Ethernet communication protocol circuit 233, and an infrared communication protocol circuit and other network communication protocol circuits or near field communication protocol circuits (not shown in the figure).
  • the display device 200 may establish a control signal and a data signal connection with an external control device or content providing device through the communicator 230.
  • the communicator may receive the control signal of the remote controller 100 according to the control of the controller.
  • the external device interface 250 is a component that provides data transmission between the N chip controller 210 and the A chip and other external devices.
  • the external device interface 250 can be connected to external devices such as set-top boxes, game devices, notebook computers, etc. in a wired/wireless manner, and can receive external devices such as video signals (such as moving images), audio signals (such as music), and additional information (such as EPG) and other data.
  • the external device interface 250 may include any one or more of the following: a high-definition multimedia interface (HDMI) terminal, also referred to as HDMI 251; a composite video blanking synchronization (CVBS) terminal, also referred to as AV 252; an analog or digital component terminal, also referred to as component 253; a universal serial bus (USB) terminal 254; a red, green, and blue (RGB) terminal (not shown in the figure); and the like.
  • the controller 210 controls the operation of the display device 200 and responds to user operations by running various software control programs (such as an operating system and/or various application programs) stored on the memory 290.
  • the controller 210 includes a random access memory RAM 213, a read-only memory ROM 214, a graphics processor 216, a CPU processor 212, a communication interface 218, and a communication bus.
  • the RAM 213 and the ROM 214, the graphics processor 216, the CPU processor 212, and the communication interface 218 are connected by a bus.
  • the graphics processor 216 is used to generate various graphics objects, such as icons, operation menus, and graphics displayed in response to user input instructions. It includes an arithmetic unit, which performs operations by receiving various interactive commands input by the user and displays various objects according to their display attributes, and a renderer, which generates the various objects obtained by the arithmetic unit and displays the rendering result on the display 280.
  • the CPU processor 212 is configured to execute operating system and application program instructions stored in the memory 290, and to execute various application programs, data, and content according to various interactive instructions received from the outside, so as to finally display and play various audio and video content.
  • the CPU processor 212 may include multiple processors.
  • the multiple processors may include one main processor and multiple or one sub-processors.
  • the main processor is used to perform some operations of the display device 200 in the pre-power-on mode, and/or to display images in the normal mode.
  • the communication interface 218 may include a first interface 218-1 to an nth interface 218-n. These interfaces may be network interfaces connected to external devices via a network.
  • the controller 210 may control the overall operation of the display device 200. For example, in response to receiving a user command for selecting a UI object to be displayed on the display 280, the controller 210 may perform an operation related to the object selected by the user command.
  • the object may be any one of the selectable objects, such as a hyperlink or an icon.
  • operations related to the selected object include, for example, displaying a hyperlinked page, document, or image, or executing the operation corresponding to the icon.
  • the user command for selecting the UI object may be a command input through various input devices (for example, a mouse, a keyboard, a touch pad, etc.) connected to the display device 200 or a voice command corresponding to the voice spoken by the user.
  • the memory 290 stores various software circuits used for driving and controlling the display device 200.
  • various software circuits stored in the memory 290 include: basic circuits, detection circuits, communication circuits, display control circuits, browser circuits, and various service circuits (not shown in the figure).
  • the basic circuit is a low-level software circuit used for signal communication between various hardware in the display device 200 and sending processing and control signals to the upper-level circuit.
  • the detection circuit is a management circuit used to collect various information from various sensors or user input interfaces, and perform digital-to-analog conversion, analysis and management.
  • the voice recognition circuit includes a voice analysis circuit and a voice command database circuit.
  • the display control circuit is a circuit used to control the display 280 to display image content, and can be used to play information such as multimedia image content and UI interfaces.
  • the communication circuit is a circuit used for control and data communication with external devices.
  • the browser circuit is a circuit used to perform data communication between browsing servers.
  • the service circuit is a circuit used to provide various services and various applications.
  • the memory 290 is also used to store and receive external data and user data, images of various items in various user interfaces, and visual effect diagrams of focus objects, and the like.
  • the user input interface 260-3 is used to send a user's input signal to the controller 210, or to transmit a signal output from the controller 210 to the user.
  • the control device (such as a mobile terminal or a remote control) can send input signals input by the user, such as a power switch signal, a channel selection signal, and a volume adjustment signal, to the user input interface 260-3, which then forwards the input signals to the controller 210.
  • the control device may receive output signals such as audio, video, or data output from the user input interface 260-3 after processing by the controller 210, and display the received output signals or output them in the form of audio or vibration.
  • the user may input a user command on a graphical user interface (GUI) displayed on the display 280, and the user input interface 260-3 receives the user input command through the graphical user interface (GUI).
  • the user may input a user command by inputting a specific sound or gesture, and the user input interface 260-3 recognizes the sound or gesture through a sensor to receive the user input command.
  • the video processor 260-1 is used to receive video signals, and perform video data processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to the standard codec protocol of the input signal.
  • the video signal displayed or played directly on the display 280.
  • the video processor 260-1 includes a demultiplexing circuit, a video decoding circuit, an image synthesis circuit, a frame rate conversion circuit, a display formatting circuit, etc. (not shown in the figure).
  • the demultiplexing circuit is used to demultiplex the input audio and video data stream. For example, if MPEG-2 is input, the demultiplexing circuit will demultiplex into a video signal and an audio signal.
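The routing performed by the demultiplexing circuit can be illustrated with a minimal sketch. The packet representation below is a hypothetical simplification (real MPEG-2 streams carry packet identifiers and headers), used only to show how one input stream is split into two elementary streams:

```python
# Illustrative sketch (hypothetical packet format): a stream interleaves
# video and audio packets, and the demultiplexer routes each packet to
# the matching elementary stream by its stream type.

def demultiplex(packets):
    """Split a list of (stream_type, payload) packets into a video
    stream and an audio stream."""
    video, audio = [], []
    for stream_type, payload in packets:
        if stream_type == "video":
            video.append(payload)
        elif stream_type == "audio":
            audio.append(payload)
    return video, audio
```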
  • the video decoding circuit is used to process the demultiplexed video signal, including decoding and scaling.
  • an image synthesis circuit, such as an image synthesizer, is used to superimpose and mix the GUI signal generated by the graphics generator, according to user input or automatically, with the zoomed video image, to generate an image signal for display.
  • the frame rate conversion circuit is used to convert the frame rate of the input video, for example, converting an input frame rate of 24 Hz, 25 Hz, 30 Hz, or 60 Hz to a frame rate of 60 Hz, 120 Hz, or 240 Hz, where the input frame rate may be related to the source video stream and the output frame rate may be related to the refresh rate of the display.
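One common frame rate conversion technique is frame repetition, sketched below. This is an illustrative simplification: real conversion circuits may also interpolate new intermediate frames, and the integer-ratio restriction here is an assumption of the sketch, not of the application:

```python
# Sketch of frame rate conversion by frame repetition: converting a
# 30 Hz sequence to 60 Hz repeats each frame twice. Only integer
# upconversion ratios are handled in this simplified example.

def convert_frame_rate(frames, in_hz, out_hz):
    """Upconvert a frame sequence by integer frame repetition;
    out_hz must be an integer multiple of in_hz."""
    if out_hz % in_hz != 0:
        raise ValueError("only integer upconversion is sketched here")
    repeat = out_hz // in_hz
    return [frame for frame in frames for _ in range(repeat)]
```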
  • the display formatting circuit is used to change the signal output by the frame rate conversion circuit into a signal that conforms to the display format of a display, such as format conversion of the signal output by the frame rate conversion circuit to output RGB data signals.
  • the display 280 is configured to receive image signals input from the video processor 260-1, display video content and images, and a menu control interface.
  • the display 280 includes a display component for presenting a picture and a driving component for driving image display.
  • the displayed video content can be from the video in the broadcast signal received by the tuner and demodulator 220, or from the video content input by the communicator or the interface of an external device.
  • the display 280 simultaneously displays a user manipulation interface UI generated in the display device 200 and used to control the display device 200.
  • the display 280 also includes a driving component for driving the display.
  • if the display 280 is a projection display, it may also include a projection device and a projection screen.
  • the audio processor 260-2 is used to receive audio signals and perform decompression and decoding according to the standard codec protocol of the input signal, as well as audio data processing such as noise reduction, digital-to-analog conversion, and amplification, to obtain an audio signal that can be played in the speaker 272.
  • the audio output interface 270 is used to receive the audio signal output by the audio processor 260-2 under the control of the controller 210.
  • the audio output interface may include a speaker 272, or may output to the sound generator of an external device via an external audio output terminal 274, such as an external audio terminal or a headphone output terminal.
  • the video processor 260-1 may include one or more chips.
  • the audio processor 260-2 may also include one or more chips.
  • the video processor 260-1 and the audio processor 260-2 may be separate chips, or may be integrated with the controller 210 in one or more chips.
  • the power supply circuit 240 is configured to provide power supply support for the display device 200 with power input from an external power supply under the control of the controller 210.
  • the power supply circuit 240 may include a built-in power supply circuit installed inside the display device 200, or may be a power supply installed outside the display device 200, such as a power interface for providing an external power supply in the display device 200.
  • the A chip may include a controller 310, a communicator 330, a detector 340, and a memory 390. In some embodiments, it may also include a user input interface, a video processor 360-1, an audio processor 360-2, a display, and an audio output interface (not shown in the figure). In some embodiments, there may also be a power supply circuit (not shown in the figure) that independently supplies power to the A chip.
  • the communicator 330 is a component for communicating with external devices or external servers according to various communication protocol types.
  • the communicator 330 may include a WIFI circuit 331, a Bluetooth communication protocol circuit 332, a wired Ethernet communication protocol circuit 333, and an infrared communication protocol circuit and other network communication protocol circuits or near field communication protocol circuits. (Not shown in the picture)
  • the communicator 330 of the A chip and the communicator 230 of the N chip also interact with each other.
  • the WiFi circuit 231 in the N chip hardware system is used to connect to an external network, and to generate network communication with an external server or the like.
  • the WiFi circuit 331 in the hardware system of the A chip is used to connect to the WiFi circuit 231 of the N chip, rather than connecting directly to an external network; the A chip connects to the external network through the N chip. Therefore, for the user, a display device as in the above embodiment may display one WiFi account to the outside.
  • the detector 340 is a component used by the A chip of the display device to collect signals from the external environment or interact with the outside.
  • the detector 340 may include a light receiver 342, a sensor used to collect the intensity of ambient light, which can adaptively change display parameters based on the collected ambient light; it may also include an image collector 341, such as a camera, which can be used to collect the external environment scene and the user's attributes or gestures, so as to adaptively change display parameters and to recognize the user's gestures to realize interaction with the user.
  • the external device interface 350 provides components for data transmission between the controller 310 and the N chip or other external devices.
  • the external device interface can be connected to external devices such as set-top boxes, game devices, notebook computers, etc., in a wired/wireless manner.
  • the video processor 360-1 is used to process related video signals.
  • the controller 310 controls the work of the display device 200 and responds to user operations by running various software control programs (such as installed third-party applications, etc.) stored on the memory 390 and interacting with the N chip.
  • the controller 310 includes a read-only memory ROM 313, a random access memory RAM 314, a graphics processor 316, a CPU processor 312, a communication interface 318, and a communication bus.
  • the ROM 313 and the RAM 314, the graphics processor 316, the CPU processor 312, and the communication interface 318 are connected by a bus.
  • the CPU processor 312 runs the system startup instruction in the ROM and copies the operating system stored in the memory 390 to the RAM 314 to start running the operating system. After the operating system is started, the CPU processor 312 copies the various application programs in the memory 390 to the RAM 314 and then starts running the various application programs.
  • the CPU processor 312 is used to execute the operating system and application instructions stored in the memory 390, communicate with the N chip, transmit and interact with signals, data, instructions, etc., and execute according to various interactive instructions received from external inputs.
  • Communication interfaces 318 may include a first interface 318-1 to an nth interface 318-n. These interfaces may be network interfaces connected to external devices via the network, or network interfaces connected to the N chip via the network.
  • the controller 310 may control the overall operation of the display device 200. For example, in response to receiving a user command for selecting a UI object to be displayed on the display 280, the controller 310 may perform an operation related to the object selected by the user command.
  • the graphics processor 316 is used to generate various graphics objects, such as icons, operation menus, and graphics displayed in response to user input instructions. It includes an arithmetic unit, which performs operations on the various interactive commands input by the user and displays the various objects according to their display attributes, and a renderer, which generates the various objects produced by the arithmetic unit and displays the rendering result on the display 280.
  • Both the graphics processor 316 of the A chip and the graphics processor 216 of the N chip can generate various graphics objects. The difference is that, if application 1 is installed on the A chip and application 2 is installed on the N chip, when the user is in the interface of application 1 and inputs instructions in application 1, the graphics object is generated by the graphics processor 316 of the A chip; when the user is in the interface of application 2 and inputs instructions in application 2, the graphics object is generated by the graphics processor 216 of the N chip.
  • Fig. 6 exemplarily shows a schematic diagram of a functional configuration of a display device according to an exemplary embodiment.
  • the memory 390 of the A chip and the memory 290 of the N chip are used to store operating systems, application programs, content, and user data, respectively, and under the control of the controller 310 of the A chip and the controller 210 of the N chip, perform system operations that drive the display device 200 and respond to various operations of the user.
  • the memory 390 of the A chip and the memory 290 of the N chip may include volatile and/or nonvolatile memory.
  • the memory 290 is specifically used to store the operating program that drives the controller 210 in the display device 200, as well as the various application programs built into the display device 200 and the various application programs downloaded by the user from external devices.
  • the memory 290 is used to store system software such as an operating system (OS) kernel, middleware, and applications, as well as to store input video data and audio data, and other user data.
  • the memory 290 is specifically used to store the driver programs and related data of components such as the video processor 260-1, the audio processor 260-2, the display 280, the communicator 230, the tuner and demodulator 220, and the input/output interface.
  • the memory 290 may store software and/or programs.
  • the software programs used to represent an operating system (OS) include, for example, kernels, middleware, application programming interfaces (APIs), and/or application programs.
  • the kernel may control or manage system resources, or functions implemented by other programs (such as the middleware, APIs, or application programs), and the kernel may provide interfaces to allow the middleware, APIs, or applications to access the controller, in order to achieve control or management of system resources.
  • the memory 290 includes a broadcast receiving circuit 2901, a channel control circuit 2902, a volume control circuit 2903, an image control circuit 2904, a display control circuit 2905, a first audio control circuit 2906, an external command recognition circuit 2907, and a communication control circuit.
  • the controller 210 executes various software programs in the memory 290 to provide functions such as: broadcast and television signal reception and demodulation, TV channel selection control, volume selection control, image control, display control, audio control, external command recognition, communication control, optical signal reception, power control, a software control platform supporting these various functions, and browser functions.
  • the memory 390 stores various software circuits for driving and controlling the display device 200.
  • various software circuits stored in the memory 390 include: basic circuits, detection circuits, communication circuits, display control circuits, browser circuits, and various service circuits (not shown in the figure). Since the functions of the memory 390 and the memory 290 are relatively similar, please refer to the memory 290 for related parts, which will not be repeated here.
  • the memory 390 includes an image control circuit 3904, a second audio control circuit 3906, an external command recognition circuit 3907, a communication control circuit 3908, a light receiving circuit 3909, an operating system 3911, and an application program 3912, a browser circuit 3913 and so on.
  • the controller 310 executes various software programs in the memory 390 to provide functions such as: image control, display control, audio control, external command recognition, communication control, optical signal reception, power control, a software control platform supporting these various functions, and browser functions.
  • the external command recognition circuit 2907 of the N chip and the external command recognition circuit 3907 of the A chip can recognize different commands.
  • the external command recognition circuit 3907 of the A chip may include a graphic recognition circuit 2907-1.
  • the graphic recognition circuit 3907-1 stores a graphic database; when the camera receives external graphics instructions, they are matched against the instructions in the graphic database to control the display device.
  • since the voice receiving device and the remote control are connected to the N chip, the external command recognition circuit 2907 of the N chip may include a voice recognition circuit 2907-2.
  • the voice recognition circuit 2907-2 stores a voice database; external voice commands received by the voice receiving device or the like are matched against the commands in the voice database to control the display device.
  • a control device 100 such as a remote controller is connected to the N chip, and the key command recognition circuit 2907-3 interacts with the control device 100 in command.
  • FIG. 7(a) exemplarily shows a configuration block diagram of a software system in the display device 200 according to an exemplary embodiment.
  • the operating system 2911 includes operating software for processing various basic system services and implementing hardware-related tasks, acting as a medium between application programs and hardware components for completing data processing.
  • part of the operating system kernel may include a series of software to manage the hardware resources of the display device and provide services for other programs or software codes.
  • part of the operating system kernel may include one or more device drivers, and the device driver may be a set of software codes in the operating system to help operate or control the device or hardware associated with the display device.
  • the driver may contain code to manipulate video, audio, and/or other multimedia components. In some embodiments, it includes display, camera, Flash, WiFi, and audio drivers.
  • the accessibility circuit 2911-1 is used to modify or access the application program to realize the accessibility of the application program and the operability of its display content.
  • the communication circuit 2911-2 is used to connect to other peripherals via the relevant communication interface and communication network.
  • the user interface circuit 2911-3 is used to provide objects that display the user interface for access by various applications, and can realize user operability.
  • the control application 2911-4 is used to control process management, including runtime applications.
  • the event transmission system 2914 can be implemented in the operating system 2911 or in the application 2912. In some embodiments, it is implemented both in the operating system 2911 and in the application program 2912, for monitoring various user input events, responding according to the recognition results of various events or sub-events, and implementing one or more sets of predefined operation procedures.
  • the event monitoring circuit 2914-1 is used to monitor input events or sub-events of the user input interface.
  • the event recognition circuit 2914-2 is used to apply the definitions of various events to the various user input interfaces, recognize various events or sub-events, and transmit them to the handlers that execute their corresponding one or more sets of processing programs.
  • the event or sub-event refers to the input detected by one or more sensors in the display device 200 and the input of an external control device (such as the control device 100, etc.).
  • one or more sub-events in the remote control include various forms, including but not limited to one or a combination of the up/down/left/right keys, the confirm key, and key presses, as well as operations of non-physical keys, such as moving, pressing, and releasing.
  • the interface layout management circuit 2913 directly or indirectly receives the user-input events or sub-events monitored by the event transmission system 2914, and is used to update the layout of the user interface, including but not limited to the position of controls or sub-controls in the interface and the size, position, and level of the container, as well as various other execution operations related to the interface layout.
  • the application programs of the display device include various application programs that can be executed on the display device 200.
  • the application program 2912 of the N chip may include, but is not limited to, one or more application programs, such as: video-on-demand application, application center, game application, and so on.
  • the application 3912 of the A chip may include, but is not limited to, one or more applications, such as a live TV application, a media center application, and so on. It should be noted that the application programs contained on the A chip and the N chip are determined according to the operating system and other designs. This application does not need to specifically limit and divide the application programs contained on the A chip and the N chip.
  • Live TV applications can provide live TV through different sources.
  • a live TV application can provide a TV signal using input from cable TV, wireless broadcasting, satellite services, or other types of live TV services.
  • the live TV application can display the video of the live TV signal on the display device 200.
  • Video-on-demand applications can provide videos from different storage sources. Unlike live TV applications, VOD provides video display from certain storage sources. For example, video on demand can come from the server side of cloud storage, and from the local hard disk storage that contains stored video programs.
  • Media center applications can provide various multimedia content playback applications.
  • the media center can provide services that are different from live TV or video on demand, and users can access various images or audio through the media center application.
  • Application center can provide storage of various applications.
  • the application program can be a game, an application program, or some other application program that is related to a computer system or other device but can be run on a display device.
  • the application center can obtain these applications from different sources, store them in the local storage, and then run on the display device 200.
  • FIG. 8 exemplarily shows a schematic diagram of a user interface in the display device 200 according to an exemplary embodiment.
  • the user interface includes multiple view display areas.
  • For example, the user interface includes a first view display area 201 and a play screen 202, where the play screen includes the layout of one or more different items.
  • the user interface also includes a selector indicating that the item is selected, and the position of the selector can be moved through user input to change the selection of different items.
  • multiple view display areas can present display screens of different levels.
  • the display area of the first view may present the content of the video chat item
  • the display area of the second view may present the content of the application layer item (eg, webpage video, VOD display, application program screen, etc.).
  • the presentation of different view display areas has different priorities, and the display behavior differs between view display areas with different priorities.
  • the priority of the system layer is higher than the priority of the application layer.
  • the screen display in the view display area of the system layer is not blocked; when the size and position of the view display area of the application layer change according to the user's choice, the size and position of the view display area of the system layer are not affected.
  • the same level of display screen can also be presented.
  • the selector can switch between the display area of the first view and the display area of the second view, and when the size and position of the display area of the first view change, the size and position of the display area of the second view may be changed at any time.
  • Since an independent operating system may be installed in each of the A chip and the N chip, there are two independent but related subsystems in the display device 200.
  • both the A chip and the N chip can be independently installed with Android and various APPs, so that each chip can realize a certain function, and the A chip and the N chip can cooperate to realize a certain function.
  • an embodiment of the present application provides a method for audio-visual synchronization processing, and the method includes the following steps:
  • Step S10: Obtain the first image quality processing time, the first sound processing time, the second image quality processing time, and the second sound processing time.
  • the first chip (i.e., the A chip) mainly plays videos from sources such as the network and local media;
  • the second chip (i.e., the N chip) can be connected to a set-top box and other devices through HDMI 2.0 to realize live TV playback; the second chip performs image quality processing and outputs the processed image data to the display screen to display the video picture, and also outputs the processed sound data to the sound player (that is, the audio output interface 270 in Figure 5) to play the video sound.
  • the first image quality processing time is the time the first chip spends performing image quality processing on the video signal, and can be obtained by the first video processor in the first chip;
  • the first sound processing time is the time the first chip spends performing sound processing on the audio signal, and can be obtained by the first audio processor in the first chip;
  • the second chip receives the video signal output by the first chip through a communication interface (such as HDMI) and performs image quality processing on it, which produces the second image quality processing time in the second chip; the second image quality processing time can be obtained by the second video processor in the second chip;
  • the second chip receives the audio signal output by the first chip through a communication interface (such as HDMI) and performs sound processing on it, which produces the second sound processing time in the second chip; the second sound processing time can be obtained by the second audio processor in the second chip.
  • the image quality processing includes processing links such as brightness, contrast, chroma, hue, sharpness, image noise reduction, dynamic contrast, gamma, color temperature, white balance, color correction, dynamic range of brightness, motion picture compensation, and so on.
  • the image quality processing time mainly includes two aspects. On the one hand, there is the time required to complete conventional image processing tasks so that the image achieves a specific image quality effect, that is, the total time consumed by processing links such as brightness, contrast, chroma, hue, sharpness, image noise reduction, dynamic contrast, gamma, color temperature, white balance, color correction, brightness dynamic range, and motion picture compensation. On the other hand, there is the PQ delay generated during the processing of each frame of image: because the first chip and the second chip process the image frame by frame, when processing the current frame they read the current frame together with the next few frames of image data, and complete the image quality processing of the current frame by referring to those next few frames. Because the image data of the next few frames must be read during image quality processing, the PQ processing of each frame incurs a delay, that is, the above-mentioned PQ delay.
  • the image quality processing time in this embodiment is the sum of the image processing time in the first aspect and the PQ delay in the second aspect. Therefore, the following formula can be used to calculate the first image quality processing time and the second image quality processing time, where:
  • T1 is the first image quality processing time;
  • T2 is the second image quality processing time;
  • Tn is the image quality processing time for each frame;
  • Td is the duration of each frame;
  • f is the refresh frequency, in Hz;
  • R is the frame processing threshold, which instructs the chip to read the image data of the current frame and the subsequent R-1 frames in order to complete the image quality processing of the current frame;
  • S is the number of frames of video playback.
  • Take the case where the frame processing threshold is equal to 4 as an example.
  • the frame processing threshold is greater than or equal to 2. The smaller the frame processing threshold, the smaller the PQ delay; the larger the frame processing threshold, the better the image quality processing effect. Therefore, it can be set according to actual application requirements, and this embodiment does not limit it.
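  The relationship described above can be sketched in a few lines. This is a minimal illustration, not the patent's exact formula (which is not reproduced in this text): it assumes the PQ delay equals (R-1) frame periods at refresh rate f, with frame period Td = 1000/f milliseconds, and all function names are illustrative.

```python
def frame_period_ms(f_hz: float) -> float:
    """Duration Td of one frame in milliseconds at refresh rate f (Hz)."""
    return 1000.0 / f_hz

def picture_quality_time_ms(tn_ms: float, f_hz: float, r: int) -> float:
    """Estimated image quality processing time for one frame: the
    conventional per-frame processing time Tn plus the PQ delay incurred
    by waiting for the next R-1 frames of image data."""
    assert r >= 2, "frame processing threshold is at least 2"
    pq_delay_ms = (r - 1) * frame_period_ms(f_hz)
    return tn_ms + pq_delay_ms

# Example: Tn = 5 ms per frame, 60 Hz refresh, threshold R = 4:
# the PQ delay is 3 frame periods (about 50 ms), about 55 ms in total.
t = picture_quality_time_ms(5.0, 60.0, 4)
```

  The example also shows why a larger R trades latency for quality: every extra look-ahead frame adds one full frame period to the delay.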
  • sound processing includes a noise reduction (Noise Reduction) processing circuit, a sound signal amplitude (Prescale) processing circuit, an AVC (Auto Volume Control) circuit, a sound effect processing circuit, a GEQ (Graphic Equalizer) circuit, a PEQ (Parametric Equalizer) circuit, and so on.
  • the sound effect processing circuit can process the sound into DTS (Digital Theater System) or Dolby (Dolby Atmos) and other sound effects.
  • The sound processing time of the first chip and the second chip, that is, the first sound processing time and the second sound processing time, is the total time consumed for sound processing by each circuit in the processing path in Figure 11.
  • the noise reduction (Noise Reduction) process is used to eliminate the noise caused by the PCM (Pulse Code Modulation) board, and the noise reduction helps to improve the sound quality.
  • the sound signal amplitude (Prescale) processing adjusts the amplitude of the sound signal so that different signal sources enter subsequent processing with the same signal amplitude.
  • AVC can realize automatic volume control and limit the sound output amplitude of the signal source.
  • the display device can automatically adjust the output volume level according to the volume level of the video input, to maintain the stability of the sound, reduce or eliminate popping sounds, and amplify smaller sounds to a suitable range; DTS sound effects and Dolby sound effects both process the sound to improve the sound playback effect.
  • the GEQ circuit can intuitively reflect the balance compensation curve that is called through the distribution of push-pull keys on the panel.
  • the increase and attenuation of each frequency are clear at a glance. It uses constant-Q technology, with each frequency point equipped with a push-pull potentiometer; whether a certain frequency is boosted or attenuated, the frequency bandwidth of the filter always remains the same.
  • the commonly used professional equalizer divides the 20Hz~20kHz signal into 10, 15, 27, or 31 segments for adjustment, so that frequency equalizers with different numbers of segments can be selected according to different requirements. Generally speaking, the frequency points of the 10-band equalizer are distributed at octave intervals.
  • the 15-band equalizer is a 2/3-octave equalizer used in professional sound reinforcement, and the 31-band equalizer is a 1/3-octave equalizer, mostly used in more important occasions where fine compensation is required.
  • the parametric equalizer can finely adjust various parameters of the equalization adjustment, and it is mostly attached to the mixer, but there is also an independent parametric equalizer.
  • the adjusted parameters include the frequency band, frequency point, gain, and quality factor Q value, etc., which beautify and modify the sound to make the sound style more vivid, prominent, and rich, so as to achieve the required artistic effect.
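  The octave spacing mentioned above can be illustrated with a short sketch. The 31.25 Hz starting frequency and the function name are illustrative choices; the point is only that in a 10-band equalizer each center frequency is double the previous one.

```python
def octave_band_centers(start_hz: float = 31.25, n: int = 10) -> list:
    """Center frequencies spaced one octave apart: each band's center
    frequency is double the previous one."""
    return [start_hz * (2 ** i) for i in range(n)]

# 10 octave-spaced bands starting at 31.25 Hz:
# 31.25, 62.5, 125, 250, 500, 1000, 2000, 4000, 8000, 16000 Hz
bands = octave_band_centers()
```

  A 15-band (2/3-octave) or 31-band (1/3-octave) equalizer simply uses a smaller frequency ratio between adjacent bands, giving finer compensation over the same 20Hz~20kHz range.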
  • Step S20: Calculate the audio-visual synchronization time difference according to the first image quality processing time, the first sound processing time, the second image quality processing time, and the second sound processing time.
  • After step S10, the audio-visual synchronization time difference can be calculated according to the following formula:
  • N = T1 + T2 - (T3 + T4)
  • N is the audio-visual synchronization time difference;
  • T1 is the first image quality processing time;
  • T2 is the second image quality processing time;
  • T3 is the first sound processing time;
  • T4 is the second sound processing time.
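  The formula above can be written directly as code. The numbers in the example are illustrative placeholders, not values from the embodiment.

```python
def av_sync_difference_ms(t1: float, t2: float, t3: float, t4: float) -> float:
    """N = T1 + T2 - (T3 + T4): total image-path processing time minus
    total sound-path processing time across both chips, in milliseconds.
    A positive N means the picture path is slower than the sound path."""
    return (t1 + t2) - (t3 + t4)

# Example with illustrative values: picture path 55 + 60 ms,
# sound path 10 + 8 ms, so N = 97 ms and the image lags the sound.
n = av_sync_difference_ms(55.0, 60.0, 10.0, 8.0)
```

  Because image quality processing (including the PQ delay) is typically much slower than sound processing, N is usually positive in practice.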
  • Before the video is decoded by the first chip, the network video or local media application of the A chip can send an instruction to obtain the PQ processing time and sound processing time to the N chip through the communication circuit of the A chip. The communication circuit of the N chip receives and parses the instruction sent by the A chip, and calculates the second image quality processing time and the second sound processing time of the N chip; the communication circuit of the N chip then packages the second image quality processing time and the second sound processing time and sends them to the A chip; the communication circuit of the A chip receives the parameter data sent by the N chip, and the A chip parses the relevant data to obtain the second image quality processing time and the second sound processing time of the N chip.
  • The A chip obtains its own first image quality processing time and first sound processing time, which together with the acquired second image quality processing time and second sound processing time of the N chip yield T1, T2, T3, and T4, and sends the data of T1, T2, T3, and T4 to the controller in the A chip. After the controller in the A chip receives T1, T2, T3, and T4, that is, after the execution of step S10 is completed, the audio-visual synchronization time difference can be calculated according to step S20.
  • The time-consuming parameters of the A chip and the N chip can be obtained at the same time, or the A chip's parameters can be obtained first and then the N chip's, or the N chip's first and then the A chip's; the order of obtaining the time-consuming parameters is not limited in this embodiment.
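  The parameter exchange between the two chips described above can be sketched as follows. This is a minimal illustration under stated assumptions: the command name, the dictionary-based message format, and the timing values are all placeholders, since the patent does not specify the actual inter-chip protocol.

```python
# Hypothetical N-chip side: answers the A chip's timing query.
def n_chip_handle(request: dict) -> dict:
    if request.get("cmd") == "get_pq_and_sound_time":
        # In a real device, T2 and T4 would be measured by the N chip's
        # second video processor and second audio processor.
        return {"t2_ms": 60.0, "t4_ms": 8.0}
    raise ValueError("unknown command")

# Hypothetical A-chip side: combines its own T1/T3 with the N chip's reply
# so the A chip's controller can compute the sync difference in step S20.
def a_chip_collect(t1_ms: float, t3_ms: float) -> dict:
    reply = n_chip_handle({"cmd": "get_pq_and_sound_time"})
    return {"t1": t1_ms, "t2": reply["t2_ms"],
            "t3": t3_ms, "t4": reply["t4_ms"]}
```

  As the text notes, the order of collection does not matter; the controller only needs all four values before step S20 runs.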
  • Step S30: Judge whether the audio-visual synchronization time difference is within a threshold range.
  • the threshold range is -30ms to +20ms
  • 20ms is the upper limit of the threshold range
  • the upper limit of the threshold range is the time allowed for the image to appear later than the sound, that is, the image appears 20ms later than the sound at most
  • -30ms is the lower limit of the threshold range
  • the lower limit of the threshold range is the time allowed for the image to appear ahead of the sound, that is, the image appears 30ms earlier than the sound at most.
  • If the audio-visual synchronization time difference is greater than the upper limit of the threshold range, the dual-chip image quality processing time is considered greater than the dual-chip audio processing time, causing the image to lag behind the sound; if the audio-visual synchronization time difference is less than the lower limit of the threshold range, the dual-chip image quality processing time is considered less than the dual-chip sound processing time, causing the image to be ahead of the sound. Therefore, by comparing the audio-visual synchronization time difference with the threshold range, the out-of-sync state can be accurately identified, whether the picture appears later than the sound or earlier than the sound, so that a targeted synchronization adjustment method can be adopted.
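  The classification above can be sketched as a few lines of code. This is a minimal illustration, not the patent's implementation: the formula N = T1 + T2 - (T3 + T4) and the threshold range of -30 ms to +20 ms come from the text; the function names and the example processing times are assumptions.

```python
THRESHOLD_LOWER_MS = -30  # image may lead the sound by at most 30 ms
THRESHOLD_UPPER_MS = 20   # image may lag the sound by at most 20 ms

def sync_time_difference(t1, t2, t3, t4):
    """N = T1 + T2 - (T3 + T4): total image-quality time minus total sound time (ms)."""
    return t1 + t2 - (t3 + t4)

def classify_sync(n_ms):
    """Compare the sync time difference against the threshold range."""
    if n_ms > THRESHOLD_UPPER_MS:
        return "image lags sound"   # compensate by dropping frames or delaying audio
    if n_ms < THRESHOLD_LOWER_MS:
        return "image leads sound"  # compensate by interpolating frames
    return "in sync"

# Hypothetical example: A chip PQ 40 ms, N chip PQ 35 ms, sound 10 ms + 5 ms
n = sync_time_difference(40, 35, 10, 5)  # 60 ms -> image lags sound
```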
  • Step S40 is executed to synchronously compensate the video signal and the audio signal output by the second chip, so that the sound and the picture played by the display device are adjusted into synchronization.
  • If the audio-visual synchronization time difference is greater than the upper limit of the threshold range, the audio delay method can be used, or the frame-dropping method can be used to discard the image frames that are not synchronized with the sound. If the audio-visual synchronization time difference is less than the lower limit of the threshold range, the image will appear earlier than the sound, and frame interpolation can be used to intervene in image playback, thereby bringing the sound and picture into synchronization.
  • Two synchronization compensation modes can be included: one adjusts the sound and picture into synchronization immediately, and the other adjusts them into synchronization within a preset threshold time Z.
  • the method may further include:
  • Step S401: Determine whether the audio-visual synchronization time difference is greater than the upper limit of the threshold range. If it is greater than the upper limit, indicating that the sound is ahead of the image, step S402, step S403, or step S404 can be performed; if it is not greater than the upper limit, that is, the audio-visual synchronization time difference is less than the lower limit of the threshold range, indicating that the image is ahead of the sound, step S405 or step S406 can be executed.
  • f is the refresh frequency, indicating the number of image frames displayed per second;
  • N is the audio-visual synchronization time difference, in milliseconds.
  • Step S404: The Audio Delay compensation mode is executed to delay audio playback, thereby bringing the sound and picture into synchronization.
  • f is the refresh frequency, indicating the number of frames displayed per second;
  • |N| is the absolute value of the audio-visual synchronization time difference, in milliseconds.
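  The conversion from a time offset to a frame count can be sketched as follows. The patent's exact drop-frame and interpolated-frame formulas are garbled in this extraction, so the conversion below (frames = f × |N| / 1000) is an assumption consistent with the stated units: f in frames per second, N in milliseconds.

```python
def frames_for_offset(f_hz: float, n_ms: float) -> int:
    """Approximate number of frames to drop (image lags the sound)
    or to insert (image leads the sound) for a sync offset of n_ms."""
    return round(f_hz * abs(n_ms) / 1000.0)

# Hypothetical example: 60 Hz panel, image lags sound by 50 ms -> about 3 frames
assert frames_for_offset(60, 50) == 3
```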
  • the threshold time Z can be set according to actual application conditions, which is not limited in this embodiment.
  • The user may adjust the image settings or sound settings. For example, the user switches the image mode to game mode in the image settings; to ensure that images are displayed quickly so that the user gets a better gaming experience, game mode requires the picture delay to be kept as low as possible.
  • Alternatively, the user turns off advanced sound effects in the sound settings, or chooses to enter the karaoke (K song) low-latency mode, in which case the sound delay needs to be as low as possible to ensure a better karaoke experience.
  • When game mode requires a low image-delay state, some aspects of image quality processing will be set to the minimum, such as reading only 1 frame of image for image quality processing, so the image quality processing time will be reduced.
  • The sound processing time may also change, so the audio-visual synchronization time difference must be recalculated to ensure audio-visual synchronization.
  • The method further includes: during video playback, detecting whether an image setting operation is received; if an image setting operation is received, reacquiring the first image quality processing time, first sound processing time, second image quality processing time, and second sound processing time to correct the audio-visual synchronization time difference; if the corrected audio-visual synchronization time difference is not within the threshold range, synchronously compensating the video signal and the audio signal output by the second chip according to the corrected audio-visual synchronization time difference, so as to bring the sound and picture into synchronization.
  • The method further includes: during video playback, detecting whether a sound setting operation is received; if a sound setting operation is received, reacquiring the first image quality processing time, first sound processing time, second image quality processing time, and second sound processing time to correct the audio-visual synchronization time difference; if the corrected audio-visual synchronization time difference is not within the threshold range, synchronously compensating the video signal and the audio signal output by the second chip according to the corrected audio-visual synchronization time difference, so as to bring the sound and picture into synchronization.
  • The image setting operation and the sound setting operation are the user's operations on the display device, such as using a remote control, mouse, or touch screen to open the "image setting" and "sound setting" options in the display interface, so that the user can adaptively set the playback status of images and sounds according to usage requirements.
  • the image setting operation and the sound setting operation are detected by the first chip.
  • The method further includes: during video playback, acquiring the first image quality processing time, the first sound processing time, the second image quality processing time, and the second sound processing time every preset interval to correct the audio-visual synchronization time difference; if the corrected audio-visual synchronization time difference is not within the threshold range, synchronously compensating the video signal and audio signal output by the second chip according to the corrected audio-visual synchronization time difference, so as to bring the sound and picture into synchronization. By regularly updating and correcting the audio-visual synchronization time difference every preset interval, such as 2 seconds, video playback keeps the audio and video in a synchronized playback state as much as possible.
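  The periodic correction described above can be sketched as a simple loop. This is an illustrative skeleton only: the interval (2 s) and the threshold range come from the text, while `read_times` and `compensate` are hypothetical callbacks standing in for reading T1..T4 from the two chips and for the synchronization compensation step.

```python
import time

def periodic_sync_correction(read_times, compensate, interval_s=2.0,
                             lower_ms=-30, upper_ms=20, cycles=3):
    """Every preset interval, re-read T1..T4, recompute N, and compensate
    when N falls outside the threshold range."""
    for _ in range(cycles):            # a real device would loop while playing
        t1, t2, t3, t4 = read_times()  # T1..T4 from both chips (milliseconds)
        n = t1 + t2 - (t3 + t4)        # corrected sync time difference
        if not (lower_ms <= n <= upper_ms):
            compensate(n)              # synchronously compensate the A/V output
        time.sleep(interval_s)
```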
  • This application also provides an embodiment of a display device for implementing the audio-visual synchronization processing method described above; the device should include at least a first chip, a second chip, a display screen, and an audio output interface.
  • The first chip is used to decode the playback data to separate the video signal and the audio signal, perform image quality processing on the video signal and sound processing on the audio signal, and then transmit the processed video signal and audio signal to the second chip.
  • the display device also includes:
  • a memory, used to store program instructions;
  • a processor, configured to call and execute the program instructions in the memory, performing all the steps in the foregoing audio-visual synchronization processing method embodiments.
  • the memory and the processor may be integrated or connected through a bus.
  • The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), or an application-specific integrated circuit (ASIC).
  • The memory may be a high-speed RAM memory, disk storage, read-only memory, USB flash drive, hard disk, flash memory, or non-volatile memory.
  • The method steps involved in the embodiments of the present application may be directly executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor.
  • After the first chip performs video decoding and separates the image data from the sound data, the first image quality processing time and the first sound processing time are obtained, and the second image quality processing time and the second sound processing time are obtained, so as to accurately measure the time consumed by the two chips when processing sound effects and image quality. These parameters are then used to calculate the audio-visual synchronization time difference, that is, the time difference between the display playing the image and the sound playback device playing the sound. If the audio-visual synchronization time difference is not within the threshold range, the sound and the picture are considered not to be playing synchronously.
  • According to the audio-visual synchronization time difference, it can be determined whether the image is ahead of or lagging behind the sound.
  • The video signal and audio signal output by the second chip are then synchronously compensated in a targeted manner to achieve audio-visual synchronization, thereby improving the video playback effect of the display device.
  • The display device includes a first chip, a second chip, a display (i.e., the display 280 in FIG. 5), and a sound player. The chips are communicatively connected; the display is used to display video images and the sound player is used to play audio. The first chip is provided with a first video processor (i.e., the video processor 360-1 in FIG. 5), a first audio processor (i.e., the audio processor 360-2 in FIG. 5), and a controller, and the second chip is provided with a second video processor (i.e., the video processor 260-1 in FIG. 5) and a second audio processor (i.e., the audio processor 260-2 in FIG. 5). In each embodiment of the present application, the sound player is the audio output interface 270 in FIG. 5; the audio output interface 270 is the speaker 272 of the display device, or includes an external audio output terminal 274 for external audio equipment to realize sound playback.
  • In the display device:
  • the first video processor is configured to receive a video signal through an input interface, perform image quality processing on the video signal, and obtain the first image quality processing time;
  • the first audio processor receives an audio signal through an input interface, performs sound processing on the audio signal, and obtains the first sound processing time;
  • the second video processor receives the video signal output by the first chip through a communication interface, performs image quality processing on it, and obtains the second image quality processing time;
  • the second audio processor receives the audio signal output by the first chip through a communication interface, performs sound processing on it, and obtains the second sound processing time;
  • the controller in the first chip is configured as:
  • if the audio-visual synchronization time difference is not within the threshold range, synchronously compensate the video signal and audio signal output by the second chip, transmit the synchronously compensated video signal to the display, and output the synchronously compensated audio signal to the sound player;
  • if the audio-visual synchronization time difference is within the threshold range, transmit the video signal output by the second chip to the display, and output the audio signal output by the second chip to the sound player.
  • In step S10, in the synchronization correction when an image setting operation or a sound setting operation is received, and in the synchronization correction performed every preset interval, the first video processor obtains the first image quality processing time, the first audio processor obtains the first sound processing time, the second video processor obtains the second image quality processing time, and the second audio processor obtains the second sound processing time. After the controller receives the first image quality processing time, the first sound processing time, the second image quality processing time, and the second sound processing time, it simultaneously holds the image quality processing times and sound processing times of both chips, so that the controller can calculate the audio-visual synchronization time difference. Steps S20 to S40 and the corresponding refinement steps/formulas are all executed by the controller provided in the first chip.
  • the embodiment of the present application also proposes a display device, including
  • a display, used to display video images;
  • the first video processor is configured to receive a video signal, perform first image quality processing, and obtain the first image quality processing time;
  • the first audio processor is configured to receive an audio signal, perform first sound processing, and obtain the first sound processing time;
  • the second video processor is configured to receive the processed video signal output by the first video processor, perform second image quality processing, and obtain the second image quality processing time;
  • the second audio processor is configured to receive the processed audio signal output by the first audio processor, perform second sound processing, and obtain the second sound processing time;
  • the controller is configured to:
  • the audio signal is compensated and transmitted to the sound player.
  • The first video processor and the first audio processor can be arranged on the first chip, and the second video processor and the second audio processor can be arranged on the second chip;
  • alternatively, the first video processor, the first audio processor, the second video processor, and the second audio processor may be provided on the same chip.
  • The controller may be provided on the first chip, or may be provided on the second chip.
  • the controller is also used to:
  • if the audio-visual synchronization time difference is not within the threshold range, compensate the processed video signal output by the second video processor according to the audio-visual synchronization time difference and transmit it to the display, and compensate the processed audio signal output by the second audio processor according to the audio-visual synchronization time difference and transmit it to the sound player;
  • otherwise, transmit the video signal processed by the second video processor to the display, and output the audio signal processed by the second audio processor to the sound player.
  • the controller is also used to:
  • the first video processor reacquires the first image quality processing time, the first audio processor reacquires the first sound processing time, the second video processor reacquires the second image quality processing time, and the second audio processor reacquires the second sound processing time;
  • the controller then corrects the audio-visual synchronization time difference.
  • the first video processor acquires the first image quality processing time every preset interval;
  • the first audio processor acquires the first sound processing time every preset interval;
  • the second video processor acquires the second image quality processing time every preset interval;
  • the second audio processor acquires the second sound processing time every preset interval.
  • the controller is also used to:
  • If the audio-visual synchronization time difference is greater than the upper limit of the threshold range, frame dropping or audio delay is performed; wherein the upper limit of the threshold range is the time the image is allowed to appear later than the sound.
  • the controller is also used to:
  • the audio-visual synchronization time difference is less than the lower limit of the threshold range, frame interpolation is performed; wherein the lower limit of the threshold range is the time allowed for the image to appear ahead of the sound.
  • The first compensation mode is the compensation mode, mentioned in the foregoing embodiments, in which the sound and picture are adjusted into synchronization immediately.
  • the number of interpolated frames CZ = f × |N| / 1000, where f is the refresh frequency and |N| is the absolute value of the audio-visual synchronization time difference in milliseconds
  • the embodiment of the present application also proposes a method for audio-visual synchronization processing, which includes:
  • the first video processor is configured to receive a video signal, perform first image quality processing, and obtain the first image quality processing time;
  • the first audio processor is configured to receive an audio signal, perform first sound processing, and obtain the first sound processing time;
  • the second video processor is configured to receive the processed video signal output by the first video processor, perform second image quality processing, and obtain the second image quality processing time;
  • the second audio processor is configured to receive the processed audio signal output by the first audio processor, perform second sound processing, and obtain the second sound processing time;
  • the controller is configured to:
  • the audio signal is compensated and transmitted to the sound player.
  • the embodiment of the present application also proposes a display device, including
  • the display is configured to display image content
  • a sound reproducer configured to reproduce sound signals
  • the first processing chip includes a first video processor and a first audio processor and receives external audio signals and video signals through an input interface; the first audio processor is used to process the audio signal, the first video processor is used to process the video signal, and a first time delay occurs when the audio signal and the video signal are processed;
  • the second processing chip is configured to receive the audio signal and the video signal output by the first processing chip through a connecting line; the second processing chip includes a second video processor and a second audio processor, the second audio processor is used to reprocess the audio signal received from the first processing chip, the second video processor is used to reprocess the video signal received from the first processing chip, and a second time delay occurs when the audio signal and the video signal are reprocessed;
  • time delay compensation is performed on the reprocessed video signal and/or audio signal, and the compensated video signal and audio signal are output to the display and the sound reproducer, respectively.
  • the first chip detects whether an image setting operation or a sound setting operation is received
  • the first audio processor reprocesses the audio signal and the first video processor reprocesses the video signal, and a third time delay occurs when the audio signal and the video signal are reprocessed;
  • the second audio processor reprocesses the audio signal received from the first processing chip and the second video processor reprocesses the video signal received from the first processing chip, and a fourth time delay occurs during this reprocessing;
  • according to the third time delay and the fourth time delay, time delay compensation is performed on the reprocessed video signal and/or audio signal.
  • the first audio processor processes the audio signal and the first video processor processes the video signal, and a first time delay occurs when the audio signal and the video signal are processed;
  • the second audio processor reprocesses the audio signal received from the first processing chip and the second video processor reprocesses the video signal received from the first processing chip, and a second time delay occurs when the audio signal and the video signal are reprocessed.
  • the controller performs time delay compensation on the video signal and/or the audio signal after reprocessing according to the following steps:
  • the audio-visual synchronization time difference is greater than the upper limit of the threshold range, then perform frame drop or audio delay; wherein the upper limit of the threshold range is the time allowed for the image to appear later than the sound;
  • the audio-visual synchronization time difference is equal to the sum of the first time delay and the second time delay.
  • the controller performs time delay compensation on the video signal and/or the audio signal after reprocessing according to the following steps:
  • the audio-visual synchronization time difference is less than the lower limit of the threshold range, frame interpolation is performed; wherein the lower limit of the threshold range is the time allowed for the image to appear ahead of the sound;
  • the audio-visual synchronization time difference is equal to the sum of the first time delay and the second time delay.
  • the performing frame dropping includes:
  • the interpolating frame includes:
  • the audio-visual synchronization time difference is calculated according to the following formula:
  • N = T1 + T2 - (T3 + T4)
  • N is the time difference between audio and video synchronization
  • T1 is the first image quality processing time generated when the first video processor processes the video signal;
  • T2 is the second image quality processing time generated when the second video processor reprocesses the video signal;
  • T3 is the first sound processing time generated when the first audio processor processes the audio signal;
  • T4 is the second sound processing time generated when the second audio processor reprocesses the audio signal.
  • the first image quality processing time and the second image quality processing time are calculated according to the following formula:
  • T1 is the first image quality processing time generated when the first video processor processes the video signal;
  • T2 is the second image quality processing time generated when the second video processor reprocesses the video signal;
  • Tn is the time consumed when image quality processing is performed on each frame of the image;
  • Td is the delay generated when image quality processing is performed on each frame of the image;
  • f is the refresh frequency in Hz;
  • R is the frame processing threshold, used to indicate that the image data of the current frame and the subsequent R-1 frames must be read to complete the image quality processing of the current frame;
  • S is the number of frames for video playback.
  • the present application also provides a computer storage medium, where the computer storage medium may store a program, and the program may include some or all of the steps in the audio-visual synchronization processing method embodiment provided in the present application when the program is executed.
  • The computer storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), etc.

Abstract

The present application relates to a sound and picture synchronization processing method and display device. The display device comprises a first chip and a second chip, the first chip being provided with a first video processor, a first audio processor, and a controller, and the second chip being provided with a second video processor and a second audio processor. The first video processor acquires a first picture quality processing time; the first audio processor acquires a first sound processing time; the second video processor acquires a second picture quality processing time; the second audio processor acquires a second sound processing time. According to the first picture quality processing time, the second picture quality processing time, the first sound processing time, and the second sound processing time, the controller calculates a sound and picture synchronization time difference and determines whether it is within a threshold range; if not, the video signal and audio signal output by the second chip undergo synchronization compensation. The present application can achieve synchronization of sound and picture, so as to improve the video playing effect of a display device. In particular, the solution may be applied to social television.

Description

Audio and picture synchronization processing method and display device
This patent application claims priority to Chinese patent application No. 201910832295.1, filed on September 4, 2019, the entire content of which is incorporated herein by reference.
Technical Field
This application relates to the technical field of display devices, and in particular, to an audio and picture synchronization processing method and a display device.
Background
图1为一种双***显示设备视频处理流程图,双***包括第一芯片(A芯片)和第二芯片(N芯片),第一芯片与第二芯片之间可通过接口电路进行通信,比如HDMI(High Definition Multimedia Interface,高清多媒体接口)、网口和USB(Universal Serial Bus,通用串行总线)等。网络片源和本地媒体等视频信号在A芯片中解码,以将视频中的声音数据和图像数据进行分离,由A芯片对视频中的图像数据进行PQ(Picture-Quality,画质)处理,以及对视频中的声音数据进行声音处理,然后由第一芯片将处理后的图像数据和声音数据通过HDMI传输给第二芯片,N芯片接收A芯片发送的数据后,再次对图像数据进行PQ处理,并将处理后的图像数据输出至显示屏,以显示视频图像;以及,对声音数据进行声音处理,并将处理后的声音数据输出至扬声器,以播放视频声音,从而完成视频音画播放。Figure 1 is a video processing flowchart of a dual-system display device. The dual-system includes a first chip (A chip) and a second chip (N chip). The first chip and the second chip can communicate with each other through an interface circuit, such as HDMI (High Definition Multimedia Interface), network port and USB (Universal Serial Bus), etc. Video signals such as network sources and local media are decoded in the A chip to separate the sound data and image data in the video, and the A chip performs PQ (Picture-Quality) processing on the image data in the video, and Perform sound processing on the sound data in the video, and then the first chip transmits the processed image data and sound data to the second chip through HDMI. After the N chip receives the data sent by the A chip, it performs PQ processing on the image data again. And output the processed image data to the display screen to display the video image; and, perform sound processing on the sound data, and output the processed sound data to the speaker to play the video sound, thereby completing the video, audio and picture playback.
Generally speaking, PQ processing mainly includes brightness, contrast, chroma, hue, sharpness, image noise reduction, dynamic contrast, gamma, color temperature, white balance, color correction, brightness dynamic range, motion picture compensation, etc.; sound processing mainly includes sound noise reduction, AVC (Advanced Video Coding), DTS (Digital Theater System) sound effects, Dolby Atmos sound effects, GEQ (Graphic Equalizer) processing, PEQ (Parametric Equalizer), etc. However, when a dual-system display device performs video playback, the PQ processing takes longer than the sound processing, so the image data reaches the screen later than the sound data reaches the speaker, causing the sound and the picture to be out of sync when the display device plays a video.
Summary
The present application provides an audio and picture synchronization processing method and a display device, to solve the problem that the sound and picture are out of sync when an existing dual-system display device plays a video.
In some embodiments, the present application provides a display device including a display and a sound player, where the display is used to display video images and the sound player is used to play audio. The display device further includes a first chip and a second chip in communication connection with each other; the first chip is provided with a first video processor, a first audio processor, and a controller, and the second chip is provided with a second video processor and a second audio processor;

the first video processor is configured to receive a video signal through an input interface, perform image quality processing on the video signal, and obtain a first image quality processing time;

the first audio processor receives an audio signal through an input interface, performs sound processing on the audio signal, and obtains a first sound processing time;

the second video processor receives the video signal output by the first chip through a communication interface, performs image quality processing on the video signal output by the first chip, and obtains a second image quality processing time;

the second audio processor receives the audio signal output by the first chip through a communication interface, performs sound processing on the audio signal output by the first chip, and obtains a second sound processing time;
所述控制器被配置为:The controller is configured to:
根据所述第一画质处理耗时、所述第二画质处理耗时、所述第一声音处理耗时和所述第二声音处理耗时,计算音画同步时差;Calculating the audio-visual synchronization time difference according to the time-consuming processing of the first image quality, the time-consuming processing of the second image quality, the time-consuming processing of the first sound, and the time-consuming processing of the second sound;
判断所述音画同步时差是否在阈值范围内;Judging whether the audio-visual synchronization time difference is within a threshold range;
如果所述音画同步时差不在阈值范围内,则对所述第二芯片输出的视频信号和音频信号进行同步补偿,并将同步补偿后的视频信号传输至所述显示器,以及,将同步补偿后的音频信号输出至所述声音播放器;If the audio-visual synchronization time difference is not within the threshold range, the video signal and audio signal output by the second chip are synchronously compensated, and the synchronously compensated video signal is transmitted to the display, and the synchronously compensated Output the audio signal of to the sound player;
如果所述音画同步时差在阈值范围内,则将所述第二芯片输出的视频信号传输至所述显示器,以及,将所述第二芯片输出的音频信号输出至所述声音播放器。If the audio-visual synchronization time difference is within the threshold range, the video signal output by the second chip is transmitted to the display, and the audio signal output by the second chip is output to the sound player.
In some embodiments, the audio-visual synchronization time difference is calculated according to the following formula:
N = T1 + T2 - (T3 + T4)
where N is the audio-visual synchronization time difference, T1 is the first image quality processing time, T2 is the second image quality processing time, T3 is the first sound processing time, and T4 is the second sound processing time.
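Read literally, the formula subtracts the total audio-path time from the total video-path time, so a positive N means the picture lags the sound and a negative N means it leads. A minimal sketch of the controller's calculation and threshold check (the function names and the example threshold bounds are illustrative, not taken from the specification):

```python
def av_sync_difference(t1_ms, t2_ms, t3_ms, t4_ms):
    """N = (T1 + T2) - (T3 + T4): total video-path time minus total
    audio-path time. Positive N: picture lags sound; negative N: picture
    leads sound. All inputs in milliseconds."""
    return (t1_ms + t2_ms) - (t3_ms + t4_ms)

def needs_compensation(n_ms, lower_ms=-45, upper_ms=125):
    """The threshold range [lower, upper] bounds how far the picture may
    drift from the sound before compensation is applied; these particular
    bounds are illustrative defaults, not values from the specification."""
    return not (lower_ms <= n_ms <= upper_ms)
```

For example, with T1 = 40 ms, T2 = 60 ms, T3 = 20 ms, T4 = 30 ms, the difference is N = 50 ms, which falls inside the illustrative threshold range and so would need no compensation.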
In some embodiments, during video playback, the first chip detects whether an image setting operation or a sound setting operation is received;
if the first chip receives an image setting operation or a sound setting operation, the first video processor re-obtains the first image quality processing time, the first audio processor re-obtains the first sound processing time, the second video processor re-obtains the second image quality processing time, and the second audio processor re-obtains the second sound processing time, so that the controller corrects the audio-visual synchronization time difference;
the controller is further configured to, if the corrected audio-visual synchronization time difference is not within the threshold range, perform synchronization compensation on the video signal and the audio signal output by the second chip according to the corrected audio-visual synchronization time difference.
In some embodiments, during video playback, the first video processor obtains the first image quality processing time at preset intervals, the first audio processor obtains the first sound processing time at preset intervals, the second video processor obtains the second image quality processing time at preset intervals, and the second audio processor obtains the second sound processing time at preset intervals, so that the controller corrects the audio-visual synchronization time difference;
the controller is further configured to, if the corrected audio-visual synchronization time difference is not within the threshold range, perform synchronization compensation on the video signal and the audio signal output by the second chip according to the corrected audio-visual synchronization time difference.
In some embodiments, the first image quality processing time and the second image quality processing time are calculated according to the following formula:
Figure PCTCN2020071103-appb-000001
where T1 is the first image quality processing time; T2 is the second image quality processing time; Tn is the time consumed by each stage of image quality processing for each frame; Td is the delay generated when performing image quality processing on each frame; f is the refresh rate, in Hz; R is a frame processing threshold, indicating that the image data of the current frame and the R-1 frames after it must be read to complete the image quality processing of the current frame; and S is the number of frames of the video being played.
In some embodiments, the controller performs synchronization compensation on the video signal and the audio signal output by the second chip as follows:
if the audio-visual synchronization time difference is greater than the upper limit of the threshold range, frame dropping or audio delay is performed so that the sound and the picture are brought into synchronization, where the upper limit of the threshold range is the time by which the picture is allowed to lag behind the sound.
In some embodiments, the controller performs synchronization compensation on the video signal and the audio signal output by the second chip as follows:
if the audio-visual synchronization time difference is less than the lower limit of the threshold range, frame interpolation is performed so that the sound and the picture are brought into synchronization, where the lower limit of the threshold range is the time by which the picture is allowed to lead the sound.
In some embodiments, the frame dropping includes:
if a compensation mode that synchronizes the sound and the picture immediately is adopted, the number of frames to drop immediately is DZ = f × N / 1000, where f is the refresh rate, i.e., the number of frames displayed per second, and N is the audio-visual synchronization time difference in milliseconds;
if a compensation mode that synchronizes the sound and the picture within a preset threshold time is adopted, an interval frame count JZ1 = Z / N is calculated, where Z is the threshold time, and one frame is dropped every JZ1 frames.
In some embodiments, the frame interpolation includes:
if a compensation mode that synchronizes the sound and the picture immediately is adopted, the number of frames to insert immediately is CZ = f × |N| / 1000, where f is the refresh rate, i.e., the number of frames displayed per second, and |N| is the absolute value of the audio-visual synchronization time difference in milliseconds;
if a compensation mode that synchronizes the sound and the picture within a preset threshold time is adopted, an interval frame count JZ2 = Z / |N| is calculated, where Z is the threshold time, and one frame is inserted every JZ2 frames.
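The two compensation modes above can be sketched as follows. The function names are illustrative, and the frame counts are rounded to the nearest integer because the formulas generally yield non-integral values (the specification does not state a rounding rule):

```python
def immediate_compensation(n_ms, refresh_hz):
    """Immediate mode: DZ = f*N/1000 frames are dropped when N > 0
    (picture lags sound), or CZ = f*|N|/1000 frames are inserted when
    N < 0 (picture leads sound)."""
    frames = round(refresh_hz * abs(n_ms) / 1000)
    action = "drop" if n_ms > 0 else "insert"
    return action, frames

def gradual_interval(n_ms, threshold_time_z):
    """Gradual mode: one frame is dropped (or inserted) every JZ = Z/|N|
    frames, spreading the correction over the preset threshold time Z
    instead of applying it all at once."""
    return max(1, round(threshold_time_z / abs(n_ms)))
```

For instance, at a 60 Hz refresh rate with N = 100 ms, the immediate mode drops 6 frames at once, whereas the gradual mode with Z = 200 would drop one frame every 20 frames until the drift is absorbed.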
In some embodiments, the present application further provides an audio-visual synchronization processing method for the above display device, the method including:
a first video processor receiving a video signal through an input interface, performing image quality processing on the video signal, and obtaining a first image quality processing time;
a first audio processor receiving an audio signal through an input interface, performing sound processing on the audio signal, and obtaining a first sound processing time;
a second video processor receiving the video signal output by a first chip through a communication interface, performing image quality processing on that video signal, and obtaining a second image quality processing time;
a second audio processor receiving the audio signal output by the first chip through a communication interface, performing sound processing on that audio signal, and obtaining a second sound processing time;
a controller calculating an audio-visual synchronization time difference according to the first image quality processing time, the second image quality processing time, the first sound processing time, and the second sound processing time;
the controller determining whether the audio-visual synchronization time difference is within a threshold range;
if the audio-visual synchronization time difference is not within the threshold range, the controller performing synchronization compensation on the video signal and the audio signal output by the second chip, transmitting the compensated video signal to the display, and outputting the compensated audio signal to the sound player;
if the audio-visual synchronization time difference is within the threshold range, the controller transmitting the video signal output by the second chip to the display, and outputting the audio signal output by the second chip to the sound player.
An embodiment of the present application further proposes a display device, including:
a display configured to display video images;
a sound player configured to play audio;
a first video processor configured to receive a video signal, perform first image quality processing, and obtain a first image quality processing time;
a first audio processor configured to receive an audio signal, perform first sound processing, and obtain a first sound processing time;
a second video processor configured to receive the processed video signal output by the first video processor, perform second image quality processing, and obtain a second image quality processing time;
a second audio processor configured to receive the processed audio signal output by the first audio processor, perform second sound processing, and obtain a second sound processing time;
wherein the controller is configured to:
calculate an audio-visual synchronization time difference according to the first image quality processing time, the second image quality processing time, the first sound processing time, and the second sound processing time;
compensate the processed video signal output by the second video processor according to the audio-visual synchronization time difference and transmit it to the display, and compensate the processed audio signal output by the second audio processor according to the audio-visual synchronization time difference and transmit it to the sound player.
In some embodiments, after calculating the audio-visual synchronization time difference, the controller is further configured to:
determine whether the audio-visual synchronization time difference is within a threshold range;
if the audio-visual synchronization time difference is not within the threshold range, compensate the processed video signal output by the second video processor according to the audio-visual synchronization time difference and transmit it to the display, and compensate the processed audio signal output by the second audio processor according to the audio-visual synchronization time difference and transmit it to the sound player;
if the audio-visual synchronization time difference is within the threshold range, transmit the processed video signal output by the second video processor to the display, and output the processed audio signal output by the second audio processor to the sound player.
In some embodiments, the controller is further configured to:
during video playback, detect whether an image setting operation or a sound setting operation is received;
if an image setting operation or a sound setting operation is detected, the first video processor re-obtains the first image quality processing time, the first audio processor re-obtains the first sound processing time, the second video processor re-obtains the second image quality processing time, and the second audio processor re-obtains the second sound processing time;
the controller corrects the audio-visual synchronization time difference according to the re-obtained first image quality processing time, first sound processing time, second image quality processing time, and second sound processing time.
In some embodiments, during video playback, the first video processor obtains the first image quality processing time at preset intervals, the first audio processor obtains the first sound processing time at preset intervals, the second video processor obtains the second image quality processing time at preset intervals, and the second audio processor obtains the second sound processing time at preset intervals.
In some embodiments, the controller is further configured to:
if the audio-visual synchronization time difference is greater than the upper limit of the threshold range, perform frame dropping or audio delay, where the upper limit of the threshold range is the time by which the picture is allowed to lag behind the sound.
In some embodiments, the controller is further configured to:
if the audio-visual synchronization time difference is less than the lower limit of the threshold range, perform frame interpolation, where the lower limit of the threshold range is the time by which the picture is allowed to lead the sound.
In some embodiments, when the display device adopts a first compensation mode, the number of dropped frames is DZ = f × N / 1000, where f is the refresh rate, i.e., the number of frames displayed per second, and N is the audio-visual synchronization time difference in milliseconds.
In some embodiments, when the display device adopts the first compensation mode, the number of inserted frames is CZ = f × |N| / 1000, where f is the refresh rate, i.e., the number of frames displayed per second, and |N| is the absolute value of the audio-visual synchronization time difference in milliseconds.
An embodiment of the present application further proposes an audio-visual synchronization processing method, the method including:
a first video processor receiving a video signal, performing first image quality processing, and obtaining a first image quality processing time;
a first audio processor receiving an audio signal, performing first sound processing, and obtaining a first sound processing time;
a second video processor receiving the processed video signal output by the first video processor, performing second image quality processing, and obtaining a second image quality processing time;
a second audio processor receiving the processed audio signal output by the first audio processor, performing second sound processing, and obtaining a second sound processing time;
wherein the controller is configured to:
calculate an audio-visual synchronization time difference according to the first image quality processing time, the second image quality processing time, the first sound processing time, and the second sound processing time;
compensate the processed video signal output by the second video processor according to the audio-visual synchronization time difference and transmit it to the display, and compensate the processed audio signal output by the second audio processor according to the audio-visual synchronization time difference and transmit it to the sound player.
An embodiment of the present application further proposes a display device, including:
a display configured to display image content;
a sound reproducer configured to reproduce sound signals;
a first processing chip including a first video processor and a first audio processor, the first processing chip receiving external audio and video signals through an input interface, the first audio processor being configured to process the audio signal and the first video processor being configured to process the video signal, a first time delay occurring when the audio signal and the video signal are processed;
a second processing chip configured to receive the audio signal and the video signal output by the first chip through a connecting line, the second processing chip including a second video processor and a second audio processor, the second audio processor being configured to reprocess the audio signal received from the first processing chip and the second video processor being configured to reprocess the video signal received from the first processing chip, a second time delay occurring when the audio signal and the video signal are reprocessed;
wherein, according to the first time delay and the second time delay, time delay compensation is performed on the reprocessed video signal and/or audio signal, and the compensated video signal and audio signal are output to the display and the sound reproducer, respectively.
In some embodiments, during video playback, the first chip detects whether an image setting operation or a sound setting operation is received;
if the first chip receives an image setting operation or a sound setting operation, the first audio processor reprocesses the audio signal and the first video processor reprocesses the video signal, a third time delay occurring during this reprocessing;
the second audio processor then reprocesses the audio signal received from the first processing chip and the second video processor reprocesses the video signal received from the first processing chip, a fourth time delay occurring during this reprocessing; and time delay compensation is performed on the reprocessed video signal and/or audio signal according to the third time delay and the fourth time delay.
In some embodiments, during video playback, at predetermined intervals, the first audio processor processes the audio signal and the first video processor processes the video signal, and the first time delay occurring during this processing is obtained; and, at predetermined intervals, the second audio processor reprocesses the audio signal received from the first processing chip and the second video processor reprocesses the video signal received from the first processing chip, a second time delay occurring during this reprocessing.
In some embodiments, the controller performs time delay compensation on the reprocessed video signal and/or audio signal as follows:
if the audio-visual synchronization time difference is greater than the upper limit of the threshold range, frame dropping or audio delay is performed, where the upper limit of the threshold range is the time by which the picture is allowed to lag behind the sound;
where the audio-visual synchronization time difference is equal to the sum of the first time delay and the second time delay.
In some embodiments, the controller performs time delay compensation on the reprocessed video signal and/or audio signal as follows:
if the audio-visual synchronization time difference is less than the lower limit of the threshold range, frame interpolation is performed, where the lower limit of the threshold range is the time by which the picture is allowed to lead the sound;
where the audio-visual synchronization time difference is equal to the sum of the first time delay and the second time delay.
In some embodiments, the frame dropping includes:
if a compensation mode that synchronizes the sound and the picture immediately is adopted, the number of frames to drop immediately is DZ = f × N / 1000, where f is the refresh rate, i.e., the number of frames displayed per second, and N is the audio-visual synchronization time difference in milliseconds;
if a compensation mode that synchronizes the sound and the picture within a preset threshold time is adopted, an interval frame count JZ1 = Z / N is calculated, where Z is the threshold time, and one frame is dropped every JZ1 frames.
In some embodiments, the frame interpolation includes:
if a compensation mode that synchronizes the sound and the picture immediately is adopted, the number of frames to insert immediately is CZ = f × |N| / 1000, where f is the refresh rate, i.e., the number of frames displayed per second, and |N| is the absolute value of the audio-visual synchronization time difference in milliseconds;
if a compensation mode that synchronizes the sound and the picture within a preset threshold time is adopted, an interval frame count JZ2 = Z / |N| is calculated, where Z is the threshold time, and one frame is inserted every JZ2 frames.
In some embodiments, the audio-visual synchronization time difference is calculated according to the following formula:
N = T1 + T2 - (T3 + T4)
where N is the audio-visual synchronization time difference, T1 is the first image quality processing time generated when the first video processor processes the video signal, T2 is the second image quality processing time generated when the second video processor reprocesses the video signal, T3 is the first sound processing time generated when the first audio processor processes the audio signal, and T4 is the second sound processing time generated when the second audio processor reprocesses the audio signal.
In some embodiments, the first image quality processing time and the second image quality processing time are calculated according to the following formula:
Figure PCTCN2020071103-appb-000002
where T1 is the first image quality processing time generated when the first video processor processes the video signal, and T2 is the second image quality processing time generated when the second video processor reprocesses the video signal; Tn is the time consumed by each stage of image quality processing for each frame; Td is the delay generated when performing image quality processing on each frame; f is the refresh rate, in Hz; R is a frame processing threshold, indicating that the image data of the current frame and the R-1 frames after it must be read to complete the image quality processing of the current frame; and S is the number of frames of the video being played.
The technical solution provided by the present application has the following beneficial effects. For a display device with a dual-system structure, after video decoding is performed on the first chip and the image data and sound data are separated, the first image quality processing time and the first sound processing time are obtained, along with the second image quality processing time and the second sound processing time, so that the time each of the two chips spends on sound and picture processing is accurately captured. These parameters are then used to calculate the audio-visual synchronization time difference, that is, the time difference between the image played by the display and the sound played by the sound playback device. If this time difference is not within the threshold range, the sound and the picture are considered out of synchronization; from the sign of the time difference, it can be determined whether the picture leads or lags the sound, so that targeted synchronization compensation can be applied to the video signal and the audio signal output by the second chip to restore audio-visual synchronization and thereby improve the video playback performance of the display device.
Description of the Drawings
In order to explain the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Figure 1 is a video processing flowchart of a dual-system display device;
Figure 2 is a schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment of the application;
Figure 3 is a hardware configuration block diagram of the control apparatus 100 according to an embodiment of the application;
Figure 4 is a hardware configuration block diagram of the display device 200 according to an embodiment of the application;
Figure 5 is a hardware architecture block diagram of the display device 200 according to an embodiment of the application;
Figure 6 is a schematic diagram of the functional configuration of the display device 200 according to an embodiment of the application;
Figure 7(a) is a schematic diagram of the software configuration in the display device 200 according to an embodiment of the application;
Figure 7(b) is a schematic diagram of the configuration of application programs in the display device 200 according to an embodiment of the application;
Figure 8 is a schematic diagram of a user interface in the display device 200 according to an embodiment of the application;
Figure 9 is a flowchart of an audio-visual synchronization processing method according to an embodiment of the application;
Figure 10 is a schematic diagram of the manner of acquiring frames during image quality processing according to an embodiment of the application;
Figure 11 is a circuit schematic diagram of each stage of sound processing performed by the first chip/second chip according to an embodiment of the application;
Figure 12 is a flowchart of obtaining the image quality processing time and the sound processing time of the first chip/second chip according to an embodiment of the application;
Figure 13 is a flowchart of another audio-visual synchronization processing method according to an embodiment of the application;
Figure 14 is a flowchart of a display device executing an audio-visual synchronization method according to an embodiment of the application.
具体实施方式Detailed Description
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整的描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。The following describes the technical solutions in the embodiments of the present application clearly and completely with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, rather than all the embodiments. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of this application.
本申请主要针对具有双系统结构，即具有第一芯片（第一硬件系统、A芯片）和第二芯片（第二硬件系统、N芯片）的显示设备的音画同步处理，下面首先对具有双系统硬件结构的显示设备的结构、功能和实现方式等方面进行详细说明。This application is mainly directed at the audio and video synchronization processing of a display device with a dual-system structure, that is, a display device having a first chip (first hardware system, A chip) and a second chip (second hardware system, N chip). The structure, functions, and implementation of a display device with a dual-system hardware structure are first described in detail below.
为便于用户使用，显示设备上通常会设置各种外部装置接口，以便于连接不同的外设设备或线缆以实现相应的功能。而在显示设备的接口上连接有高清晰度的摄像头时，如果显示设备的硬件系统没有接收源码的高像素摄像头的硬件接口，那么就会导致无法将摄像头接收到的数据呈现到显示设备的显示屏上。For the convenience of users, various external device interfaces are usually provided on the display device to facilitate the connection of different peripheral devices or cables to achieve corresponding functions. When a high-definition camera is connected to an interface of the display device, if the hardware system of the display device has no hardware interface for receiving the source data of the high-pixel camera, the data received by the camera cannot be presented on the display screen of the display device.
并且，受制于硬件结构，传统显示设备的硬件系统仅支持一路硬解码资源，且通常最大仅能支持4K分辨率的视频解码，因此当要实现边观看网络电视边进行视频聊天时，为了不降低网络视频画面清晰度，就需要使用硬解码资源（通常是硬件系统中的GPU）对网络视频进行解码，而在此情况下，只能采取由硬件系统中的通用处理器（例如CPU）对视频进行软解码的方式处理视频聊天画面。Moreover, limited by the hardware structure, the hardware system of a traditional display device supports only one hard-decoding resource, and usually supports video decoding at a maximum of 4K resolution. Therefore, to enable video chat while watching Internet TV without reducing the definition of the network video picture, the hard-decoding resource (usually the GPU in the hardware system) must be used to decode the network video; in this case, the video chat picture can only be processed by soft decoding on a general-purpose processor (such as the CPU) in the hardware system.
采用软解码处理视频聊天画面，会大大增加CPU的数据处理负担，当CPU的数据处理负担过重时，可能会出现画面卡顿或者不流畅的问题。在一些实施例中，受制于CPU的数据处理能力，当采用CPU软解码处理视频聊天画面时，通常无法实现多路视频通话，当用户想要在同一聊天场景同时与多个其他用户进行视频聊天时，会出现接入受阻的情况。Using soft decoding to process the video chat picture greatly increases the data processing burden on the CPU; when this burden is too heavy, the picture may freeze or play unsmoothly. In some embodiments, limited by the data processing capability of the CPU, multi-channel video calls usually cannot be achieved when CPU soft decoding is used for the video chat picture, so access is blocked when the user wants to video chat with multiple other users simultaneously in the same chat scene.
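上述“单路硬解码资源”的限制可以用如下示意代码说明。该示例为假设性的示意，类名与参数均为虚构，并非本申请的实际实现。The above limitation of a single hard-decoding resource can be illustrated by the following sketch; this is a hypothetical illustration, and the class and parameter names are invented rather than the actual implementation of this application.

```python
# Hypothetical sketch: with only one hard decoder available, the first
# stream (the network video, to keep its definition) gets the hard
# decoder, and any further stream falls back to CPU soft decoding.

class DecodeScheduler:
    def __init__(self, hard_decoder_count=1):
        # Traditional hardware systems described above expose one hard decoder.
        self.free_hard_decoders = hard_decoder_count

    def assign(self, stream):
        """Return 'hard' (GPU hard decoding) or 'soft' (CPU soft decoding)."""
        if self.free_hard_decoders > 0:
            self.free_hard_decoders -= 1
            return "hard"
        return "soft"

scheduler = DecodeScheduler()
network_tv = scheduler.assign({"name": "network_tv", "resolution": "4K"})
video_chat = scheduler.assign({"name": "video_chat", "resolution": "1080p"})
```

这一分配结果正对应上文所述：网络视频占用GPU硬解码，视频聊天只能由CPU软解码。This assignment matches the situation described above: the network video occupies the GPU hard decoder, leaving only CPU soft decoding for the video chat.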
基于上述各方面的考虑,为克服上述缺陷,本申请公开了一种双硬件***架构,以实现多路视频聊天数据(至少一路本地视频)。Based on the above considerations, in order to overcome the above shortcomings, this application discloses a dual hardware system architecture to realize multiple channels of video chat data (at least one local video).
下面首先结合附图对本申请所涉及的概念进行说明。在此需要指出的是,以下对各个概念的说明,仅为了使本申请的内容更加容易理解,并不表示对本申请保护范围的限定。The concepts involved in the present application will be described below in conjunction with the drawings. It should be pointed out here that the following description of each concept is only to make the content of this application easier to understand, and does not mean to limit the scope of protection of this application.
本申请各实施例中使用的术语“电路”，可以是指任何已知或后来开发的硬件、软件、固件、人工智能、模糊逻辑或硬件或/和软件代码的组合，能够执行与该元件相关的功能。The term "circuit" used in the embodiments of this application may refer to any known or later-developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code capable of performing the function associated with that element.
本申请各实施例中使用的术语“遥控器”,是指电子设备(如本申请中公开的显示设备)的一个组件,该组件通常可在较短的距离范围内无线控制电子设备。该组件一般可以使用红外线和/或射频(RF)信号和/或蓝牙与电子设备连接,也可以包括WiFi、无线USB、蓝牙、动作传感器等功能电路。例如:手持式触摸遥控器,是以触摸屏中用户界面取代一般遥控装置中的大部分物理内置硬键。The term "remote control" used in the various embodiments of this application refers to a component of an electronic device (such as the display device disclosed in this application), which can generally control the electronic device wirelessly within a short distance. This component can generally use infrared and/or radio frequency (RF) signals and/or Bluetooth to connect with electronic devices, and can also include functional circuits such as WiFi, wireless USB, Bluetooth, and motion sensors. For example, a handheld touch remote control replaces most of the physical built-in hard keys in general remote control devices with the user interface in the touch screen.
本申请各实施例中使用的术语“手势”,是指用户通过一种手型的变化或手部运动等动作,用于表达预期想法、动作、目的/或结果的用户行为。The term "gesture" used in the embodiments of the present application refers to a user's behavior through a change of hand shape or hand movement to express expected ideas, actions, goals, and/or results.
本申请各实施例中使用的术语“硬件系统”，可以是指由集成电路(Integrated Circuit,IC)、印刷电路板(Printed circuit board,PCB)等机械、光、电、磁器件构成的具有计算、控制、存储、输入和输出功能的实体部件。在本申请各个实施例中，硬件系统通常也会被称为主板(motherboard)或芯片。The term "hardware system" used in the embodiments of this application may refer to a physical component with computing, control, storage, input, and output functions, composed of mechanical, optical, electrical, and magnetic devices such as integrated circuits (IC) and printed circuit boards (PCB). In the embodiments of this application, the hardware system is usually also referred to as a motherboard or a chip.
图2中示例性示出了根据实施例中显示设备与控制装置之间操作场景的示意图。如图2所示,用户可通过控制装置100来操作显示设备200。Fig. 2 exemplarily shows a schematic diagram of an operation scenario between the display device and the control device according to the embodiment. As shown in FIG. 2, the user can operate the display device 200 by controlling the device 100.
其中，控制装置100可以是遥控器100A，其可与显示设备200之间通过红外协议通信、蓝牙协议通信、紫蜂(ZigBee)协议通信或其他短距离通信方式进行通信，用于通过无线或其他有线方式来控制显示设备200。用户可以通过遥控器上按键、语音输入、控制面板输入等输入用户指令，来控制显示设备200。如：用户可以通过遥控器上音量加减键、频道控制键、上/下/左/右的移动按键、语音输入按键、菜单键、开关机按键等输入相应控制指令，来实现控制显示设备200的功能。The control device 100 may be a remote controller 100A, which can communicate with the display device 200 through infrared protocol communication, Bluetooth protocol communication, ZigBee protocol communication, or other short-distance communication methods, and is used to control the display device 200 wirelessly or in other wired manners. The user can control the display device 200 by inputting user instructions through keys on the remote controller, voice input, control panel input, and the like. For example, the user can input corresponding control instructions through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input keys, menu key, and power key on the remote controller to realize the function of controlling the display device 200.
控制装置100也可以是智能设备，如移动终端100B、平板电脑、计算机、笔记本电脑等，其可以通过本地网(LAN,Local Area Network)、广域网(WAN,Wide Area Network)、无线局域网(WLAN,Wireless Local Area Network)或其他网络与显示设备200之间通信，并通过与显示设备200相应的应用程序实现对显示设备200的控制。例如，使用在智能设备上运行的应用程序控制显示设备200。该应用程序可以在与智能设备关联的屏幕上通过直观的用户界面(UI,User Interface)为用户提供各种控制。The control device 100 may also be a smart device, such as a mobile terminal 100B, a tablet computer, a computer, or a notebook computer, which can communicate with the display device 200 through a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), or other networks, and control the display device 200 through an application program corresponding to the display device 200. For example, the display device 200 is controlled by an application program running on the smart device, and the application program can provide the user with various controls through an intuitive user interface (UI) on a screen associated with the smart device.
在一些实施例中，移动终端100B与显示设备200均可安装软件应用，从而可通过网络通信协议实现二者之间的连接通信，进而实现一对一控制操作的和数据通信的目的。如：可以使移动终端100B与显示设备200建立控制指令协议，将遥控控制键盘同步到移动终端100B上，通过控制移动终端100B上用户界面，实现控制显示设备200的功能；也可以将移动终端100B上显示的音视频内容传输到显示设备200上，实现同步显示功能。In some embodiments, software applications may be installed on both the mobile terminal 100B and the display device 200, so that connection and communication between the two can be realized through a network communication protocol, thereby achieving one-to-one control operation and data communication. For example, a control instruction protocol can be established between the mobile terminal 100B and the display device 200, and the remote-control keyboard can be synchronized to the mobile terminal 100B, so that the function of controlling the display device 200 is realized by controlling the user interface on the mobile terminal 100B; the audio and video content displayed on the mobile terminal 100B can also be transmitted to the display device 200 to realize a synchronous display function.
如图2所示,显示设备200还可与服务器300通过多种通信方式进行数据通信。在本申请各个实施例中,可允许显示设备200通过局域网、无线局域网或其他网络与服务器300进行通信连接。服务器300可以向显示设备200提供各种内容和互动。As shown in FIG. 2, the display device 200 can also communicate with the server 300 through multiple communication methods. In various embodiments of the present application, the display device 200 may be allowed to communicate with the server 300 via a local area network, a wireless local area network, or other networks. The server 300 may provide various contents and interactions to the display device 200.
在一些实施例中,显示设备200通过发送和接收信息,以及电子节目指南(EPG,Electronic Program Guide)互动,接收软件程序更新,或访问远程储存的数字媒体库。服务器300可以是一组,也可以是多组,可以是一类或多类服务器。通过服务器300提供视频点播和广告服务等其他网络服务内容。In some embodiments, the display device 200 transmits and receives information, interacts with an Electronic Program Guide (EPG, Electronic Program Guide), receives software program updates, or accesses a remotely stored digital media library. The server 300 may be a group or multiple groups, and may be one or more types of servers. The server 300 provides other network service content such as video-on-demand and advertising services.
显示设备200，一方面讲，可以是液晶显示器、OLED(Organic Light Emitting Diode)显示器、投影显示设备；另一方面讲，显示设备可以是智能电视或显示器和机顶盒组成的显示系统。具体显示设备类型，尺寸大小和分辨率等不作限定，本领域技术人员可以理解的是，显示设备200可以根据需要做性能和配置上的一些改变。The display device 200, on the one hand, may be a liquid crystal display, an OLED (Organic Light Emitting Diode) display, or a projection display device; on the other hand, the display device may be a smart TV, or a display system composed of a display and a set-top box. The specific display device type, size, and resolution are not limited, and those skilled in the art will understand that the display device 200 may make some changes in performance and configuration as required.
显示设备200除了提供广播接收电视功能之外,还可以附加提供计算机支持功能的智能网络电视功能。在一些实施例中包括,网络电视、智能电视、互联网协议电视(IPTV)等。在一些实施例中,显示设备可以不具备广播接收电视功能。In addition to providing the broadcast receiving TV function, the display device 200 may additionally provide a smart network TV function that provides a computer support function. In some embodiments, it includes Internet TV, Smart TV, Internet Protocol TV (IPTV), and the like. In some embodiments, the display device may not have the function of broadcasting and receiving TV.
如图2所述,显示设备上可以连接或设置有摄像头,用于将摄像头拍摄到的画面呈现在本显示设备或其他显示设备的显示界面上,以实现用户之间的交互聊天。具体的,摄像头拍摄到的画面可在显示设备上全屏显示、半屏显示、或者显示任意可选区域。As shown in Figure 2, the display device may be connected or provided with a camera, which is used to present the picture captured by the camera on the display interface of the display device or other display devices, so as to realize interactive chat between users. Specifically, the image captured by the camera can be displayed on the display device in full screen, half screen, or in any selectable area.
在一些实施例中，摄像头通过连接板与显示器后壳连接，固定安装在显示器后壳的上侧中部，作为可安装的方式，可以固定安装在显示器后壳的任意位置，能保证其图像采集区域不被后壳遮挡即可，例如，图像采集区域与显示设备的显示朝向相同。In some embodiments, the camera is connected to the rear case of the display through a connecting plate and is fixedly installed in the upper middle portion of the rear case; as an installable manner, it can be fixedly installed at any position on the rear case, as long as its image capture area is not blocked by the rear case, for example, the image capture area has the same orientation as the display of the display device.
在一些实施例中，摄像头通过连接板或者其他可想到的连接器可升降地与显示器后壳连接，连接器上安装有升降马达，当用户要使用摄像头或者有应用程序要使用摄像头时，再升出显示器之上，当不需要使用摄像头时，其可内嵌到后壳之后，以达到保护摄像头免受损坏和保护用户的隐私安全。In some embodiments, the camera is connected to the rear case of the display in a liftable manner through a connecting plate or another conceivable connector on which a lifting motor is installed; when the user wants to use the camera, or an application program wants to use the camera, the camera rises above the display, and when the camera is not needed it can be retracted behind the rear case, so as to protect the camera from damage and protect the user's privacy.
在一些实施例中,本申请所采用的摄像头可以为1600万像素,以达到超高清显示目的。在实际使用中,也可采用比1600万像素更高或更低的摄像头。In some embodiments, the camera used in this application may be 16 million pixels to achieve the purpose of ultra-high-definition display. In actual use, a camera with higher or lower than 16 million pixels can also be used.
当显示设备上安装有摄像头以后,显示设备不同应用场景所显示的内容可得到多种不同方式的融合,从而达到传统显示设备无法实现的功能。When a camera is installed on the display device, the content displayed in different application scenarios of the display device can be merged in a variety of different ways, so as to achieve functions that cannot be achieved by traditional display devices.
示例性的,用户可以在边观看视频节目的同时,与至少一位其他用户进行视频聊天。视频节目的呈现可作为背景画面,视频聊天的窗口显示在背景画面之上。形象的,可以称该功能为“边看边聊”。Exemplarily, the user may have a video chat with at least one other user while watching a video program. The presentation of the video program can be used as the background screen, and the video chat window is displayed on the background screen. Visually, you can call this function "watch and chat".
在一些实施例中,在“边看边聊”的场景中,在观看直播视频或网络视频的同时,跨终端的进行至少一路的视频聊天。In some embodiments, in the scenario of “watching while chatting”, while watching live video or online video, at least one video chat is performed across terminals.
在一些实施例中,用户可以在边进入教育应用学习的同时,与至少一位其他用户进行视频聊天。例如,学生在学习教育应用程序中内容的同时,可实现与老师的远程互动。形象的,可以称该功能为“边学边聊”。In some embodiments, the user can have a video chat with at least one other user while entering the education application for learning. For example, students can realize remote interaction with teachers while learning content in educational applications. Visually, you can call this function "learning and chatting".
在一些实施例中,用户在玩纸牌游戏时,与进入游戏的玩家进行视频聊天。例如,玩家在进入游戏应用参与游戏时,可实现与其他玩家的远程互动。形象的,可以称该功能为“边看边玩”。In some embodiments, when a user is playing a card game, a video chat is conducted with players entering the game. For example, when a player enters a game application to participate in a game, it can realize remote interaction with other players. Visually, you can call this function "watch and play".
在一些实施例中,游戏场景与视频画面进行融合,将视频画面中人像进行抠图,显示在游戏画面中,提升用户体验。In some embodiments, the game scene and the video image are merged, and the portrait in the video image is cut out and displayed on the game image, which improves the user experience.
在一些实施例中，在体感类游戏中（如打球类、拳击类、跑步类、跳舞类等），通过摄像头获取人体姿势和动作，进行肢体检测和追踪、人体骨骼关键点数据的检测，再与游戏中动画进行融合，实现如体育、舞蹈等场景的游戏。In some embodiments, in somatosensory games (such as ball games, boxing, running, dancing, etc.), human body postures and movements are acquired through the camera, limb detection and tracking and detection of key point data of the human skeleton are performed, and the results are then fused with the animations in the game to realize games in scenes such as sports and dancing.
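作为示意，下面的代码演示了如何将检测到的人体骨骼关键点映射为游戏动画状态。关键点的数据格式与判断规则均为本示例的假设，并非本申请限定的实现。As an illustration, the following code shows how detected human skeleton key points might be mapped to a game animation state; the key point data format and the decision rule are assumptions of this example, not an implementation defined by this application.

```python
# Hypothetical sketch: map skeleton key points (name -> (x, y) pixel
# coordinates, with y increasing downwards as in images) to a simple
# animation state used by the game scene.

def animation_state(keypoints: dict) -> str:
    # If the wrist is detected above the shoulder, treat the arm as raised.
    if keypoints["wrist"][1] < keypoints["shoulder"][1]:
        return "arm_raised"
    return "idle"

state = animation_state({"shoulder": (100, 200), "wrist": (110, 120)})
```

实际体感游戏会使用完整的骨骼关键点集合和更复杂的姿态分类，此处仅示意“检测结果驱动游戏动画”的融合思路。A real somatosensory game would use the full skeleton key point set and richer pose classification; this only sketches the idea of detection results driving the game animation.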
在一些实施例中,用户可以在K歌应用中,与至少一位其他用户进行视频和语音的交互。形象的,可以称该功能为“边看边唱”。在一些实施例中,当至少一位用户在聊天场景进入该应用时,可多个用户共同完成一首歌的录制。In some embodiments, the user can interact with at least one other user in video and voice in the K song application. Visually, you can call this function "watch and sing". In some embodiments, when at least one user enters the application in a chat scene, multiple users can jointly complete the recording of a song.
在一些实施例中,用户可在本地打开摄像头获取图片和视频,形象的,可 以称该功能为“照镜子”。In some embodiments, the user can turn on the camera locally to obtain pictures and videos, which is vivid, and this function can be called "mirror".
在一些实施例中,还可以再增加更多功能或减少上述功能。本申请对该显示设备的功能不作具体限定。In some embodiments, more functions may be added or the above functions may be reduced. This application does not specifically limit the function of the display device.
图3中示例性示出了根据示例性实施例中控制装置100的配置框图。如图3所示,控制装置100包括控制器110、通信器130、用户输入/输出接口140、存储器190、供电电源180。Fig. 3 exemplarily shows a configuration block diagram of the control device 100 according to an exemplary embodiment. As shown in FIG. 3, the control device 100 includes a controller 110, a communicator 130, a user input/output interface 140, a memory 190, and a power supply 180.
控制装置100被配置为可控制所述显示设备200，以及可接收用户的输入操作指令，且将操作指令转换为显示设备200可识别和响应的指令，起到用户与显示设备200之间交互中介作用。如：用户通过操作控制装置100上频道加减键，显示设备200响应频道加减的操作。The control device 100 is configured to control the display device 200, receive the user's input operation instructions, and convert the operation instructions into instructions that the display device 200 can recognize and respond to, thereby acting as an intermediary for interaction between the user and the display device 200. For example, the user operates the channel up/down keys on the control device 100, and the display device 200 responds to the channel up/down operation.
在一些实施例中，控制装置100可以是一种智能设备。如：控制装置100可根据用户需求安装控制显示设备200的各种应用。In some embodiments, the control device 100 may be a smart device. For example, the control device 100 can install various applications for controlling the display device 200 according to user requirements.
在一些实施例中，如图2所示，移动终端100B或其他智能电子设备，可在安装操控显示设备200的应用之后，起到控制装置100类似功能。如：用户可以通过安装应用，使用移动终端100B或其他智能电子设备上可提供的图形用户界面的各种功能键或虚拟按钮，以实现控制装置100实体按键的功能。In some embodiments, as shown in FIG. 2, the mobile terminal 100B or another smart electronic device can perform functions similar to those of the control device 100 after installing an application for operating the display device 200. For example, by installing the application, the user can use various function keys or virtual buttons of the graphical user interface provided on the mobile terminal 100B or other smart electronic device to realize the functions of the physical keys of the control device 100.
控制器110包括处理器112、RAM113和ROM114、通信接口以及通信总线。控制器110用于控制控制装置100的运行和操作,以及内部各部件之间通信协作以及外部和内部的数据处理功能。The controller 110 includes a processor 112, a RAM 113 and a ROM 114, a communication interface, and a communication bus. The controller 110 is used to control the operation and operation of the control device 100, as well as communication and cooperation between internal components, and external and internal data processing functions.
通信器130在控制器110的控制下，实现与显示设备200之间控制信号和数据信号的通信。如：将接收到的用户输入信号发送至显示设备200上。通信器130可包括WIFI电路131、蓝牙电路132、NFC电路133等通信电路中至少一种。The communicator 130 realizes the communication of control signals and data signals with the display device 200 under the control of the controller 110. For example, the received user input signal is sent to the display device 200. The communicator 130 may include at least one of communication circuits such as a WIFI circuit 131, a Bluetooth circuit 132, and an NFC circuit 133.
用户输入/输出接口140,其中,输入接口包括麦克风141、触摸板142、传感器143、按键144等输入接口中至少一者。如:用户可以通过语音、触摸、手势、按压等动作实现用户指令输入功能,输入接口通过将接收的模拟信号转换为数字信号,以及数字信号转换为相应指令信号,发送至显示设备200。The user input/output interface 140, wherein the input interface includes at least one of input interfaces such as a microphone 141, a touch panel 142, a sensor 143, and a button 144. For example, the user can implement the user instruction input function through voice, touch, gesture, pressing and other actions. The input interface converts the received analog signal into a digital signal and the digital signal into a corresponding instruction signal, and sends it to the display device 200.
输出接口包括将接收的用户指令发送至显示设备200的接口。在一些实施例中,可以是红外接口,也可以是射频接口。如:红外信号接口时,需要将用户输入指令按照红外控制协议转化为红外控制信号,经红外发送电路进行发送至显示设备200。再如:射频信号接口时,需将用户输入指令转化为数字信号,然后按照射频控制信号调制协议进行调制后,由射频发送端子发送至显示设备200。The output interface includes an interface for sending the received user instruction to the display device 200. In some embodiments, it may be an infrared interface or a radio frequency interface. For example, in the case of an infrared signal interface, the user input instruction needs to be converted into an infrared control signal according to the infrared control protocol, and sent to the display device 200 via the infrared sending circuit. For another example: in the case of a radio frequency signal interface, a user input instruction needs to be converted into a digital signal, which is then modulated according to the radio frequency control signal modulation protocol, and then sent to the display device 200 by the radio frequency sending terminal.
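以红外信号接口为例，下面给出一个按NEC风格红外控制协议将用户按键指令编码为红外控制帧的示意。其中的按键码表为虚构示例，实际遥控器各自定义自己的码表；发送电路的调制部分此处从略。Taking the infrared signal interface as an example, the following sketch encodes a user key instruction into an NEC-style infrared control frame; the key code table is an invented example (real remote controllers define their own code tables), and the modulation performed by the infrared sending circuit is omitted here.

```python
# Illustrative NEC-style frame: address, inverted address, command,
# inverted command. Each byte is followed by its bitwise inverse so the
# receiver can check the frame for errors.

KEY_CODES = {"volume_up": 0x18, "volume_down": 0x19, "power": 0x45}  # invented

def nec_frame(address: int, key: str) -> bytes:
    command = KEY_CODES[key]
    return bytes([address, address ^ 0xFF, command, command ^ 0xFF])

frame = nec_frame(0x00, "volume_up")
```

这一字节序列随后才由红外发送电路调制为载波脉冲发出，对应上文“按照红外控制协议转化为红外控制信号”的步骤。This byte sequence would then be modulated into carrier pulses by the infrared sending circuit, corresponding to the step of "converting the user input instruction into an infrared control signal according to the infrared control protocol" described above.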
在一些实施例中,控制装置100包括通信器130和输出接口中至少一者。控制装置100中配置通信器130,如:WIFI、蓝牙、NFC等电路,可将用户输入指令通过WIFI协议、或蓝牙协议、或NFC协议编码,发送至显示设备200.In some embodiments, the control device 100 includes at least one of a communicator 130 and an output interface. The control device 100 is configured with a communicator 130, such as WIFI, Bluetooth, NFC and other circuits, which can encode user input instructions through the WIFI protocol, or Bluetooth protocol, or NFC protocol, and send to the display device 200.
存储器190,用于在控制器110的控制下存储驱动和控制控制装置100的各种运行程序、数据和应用。存储器190,可以存储用户输入的各类控制信号指令。The memory 190 is used to store various operating programs, data, and applications for driving and controlling the control device 100 under the control of the controller 110. The memory 190 can store various control signal instructions input by the user.
供电电源180，用于在控制器110的控制下为控制装置100各电器元件提供运行电力支持。可以是电池及相关控制电路。The power supply 180 is used to provide operating power support for the electrical components of the control device 100 under the control of the controller 110. It may be a battery and related control circuits.
图4中示例性示出了根据示例性实施例中显示设备200中硬件系统的硬件配置框图。FIG. 4 exemplarily shows a block diagram of the hardware configuration of the hardware system in the display device 200 according to the exemplary embodiment.
在采用双硬件系统架构时，硬件系统的结构关系可以如图4所示。为便于表述，以下将双硬件系统架构中的一个硬件系统称为第一硬件系统或A系统、A芯片，并将另一个硬件系统称为第二硬件系统或N系统、N芯片。A芯片包含A芯片的控制器及通过各类接口与A芯片的控制器相连的各类电路，N芯片则包含N芯片的控制器及通过各类接口与N芯片的控制器相连的各类电路。A芯片及N芯片中可以各自安装有相对独立的操作系统，A芯片的操作系统和N芯片的操作系统可以通过通信协议相互通信，示例性的：A芯片的操作系统的framework层和N芯片的操作系统的framework层可以进行通信进行命令和数据的传输，从而使显示设备200中存在两个独立但又相互关联的子系统。When the dual hardware system architecture is adopted, the structural relationship of the hardware systems may be as shown in FIG. 4. For ease of description, one hardware system in the dual hardware system architecture is referred to below as the first hardware system or the A system/A chip, and the other hardware system is referred to as the second hardware system or the N system/N chip. The A chip includes the controller of the A chip and various circuits connected to this controller through various interfaces, and the N chip includes the controller of the N chip and various circuits connected to this controller through various interfaces. A relatively independent operating system may be installed on each of the A chip and the N chip, and the operating system of the A chip and that of the N chip can communicate with each other through a communication protocol; for example, the framework layer of the A chip's operating system and the framework layer of the N chip's operating system can communicate to transfer commands and data, so that two independent but interrelated subsystems exist in the display device 200.
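两个操作系统framework层之间的命令传输可以用如下示意代码说明。消息格式（带长度前缀的JSON字节流）是本示例的假设，本申请仅说明两个framework层通过通信协议传输命令和数据，并未限定具体格式。The command transfer between the framework layers of the two operating systems can be illustrated by the following sketch; the message format (a length-prefixed JSON byte stream) is an assumption of this example, since the application only states that the two framework layers transfer commands and data through a communication protocol without fixing a concrete format.

```python
import json

# Hypothetical framing for commands exchanged between the A chip and the
# N chip over a byte link (e.g. UART): a 4-byte big-endian length prefix
# followed by a JSON body.

def pack_command(target: str, command: str, payload: dict) -> bytes:
    body = json.dumps({"target": target, "command": command,
                       "payload": payload}).encode("utf-8")
    return len(body).to_bytes(4, "big") + body

def unpack_command(data: bytes) -> dict:
    length = int.from_bytes(data[:4], "big")
    return json.loads(data[4:4 + length].decode("utf-8"))

msg = pack_command("N", "set_volume", {"value": 30})
decoded = unpack_command(msg)
```

长度前缀使接收方芯片能够在字节流上切分出完整消息，这是串行链路上常见的成帧方式。The length prefix lets the receiving chip delimit complete messages on a byte stream, a common framing choice on serial links.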
如图4所示，A芯片与N芯片之间可以通过多个不同类型的接口实现连接、通信及供电。A芯片与N芯片之间接口的接口类型可以包括通用输入输出接口(General-purpose input/output,GPIO)、USB接口、HDMI接口、UART接口等。A芯片与N芯片之间可以使用这些接口中的一个或多个进行通信或电力传输。例如图4所示，在双硬件系统架构下，可以由外接的电源(power)为N芯片供电，而A芯片则可以不由外接电源，而由N芯片供电。在一些实施例中，外接电源也可以分别连接N芯片和A芯片，给N芯片和A芯片供电。As shown in FIG. 4, the A chip and the N chip can realize connection, communication, and power supply through multiple different types of interfaces. The interface types between the A chip and the N chip may include a general-purpose input/output (GPIO) interface, a USB interface, an HDMI interface, a UART interface, and the like. One or more of these interfaces can be used between the A chip and the N chip for communication or power transmission. For example, as shown in FIG. 4, in the dual hardware system architecture, the N chip can be powered by an external power source, while the A chip can be powered by the N chip instead of an external power source. In some embodiments, the external power supply can also be connected to the N chip and the A chip respectively to supply power to them.
除用于与N芯片进行连接的接口之外,A芯片还可以包含用于连接其他设备或组件的接口,例如图4中所示的用于连接摄像头(Camera)的MIPI接口,蓝牙接口等。In addition to the interface for connecting with the N chip, the A chip may also include interfaces for connecting other devices or components, such as the MIPI interface for connecting to a camera (Camera) shown in FIG. 4, a Bluetooth interface, etc.
类似的，除用于与A芯片进行连接的接口之外，N芯片还可以包含用于连接显示屏TCON(Timer Control Register)的VBY接口，用于连接功率放大器(Amplifier,AMP)及扬声器(Speaker)的I2S接口；以及IR/Key接口，USB接口，Wifi接口，蓝牙接口，HDMI接口，Tuner接口等。Similarly, in addition to the interface for connecting with the A chip, the N chip may also include a VBY interface for connecting the display TCON (Timer Control Register), an I2S interface for connecting a power amplifier (AMP) and a speaker, as well as an IR/Key interface, a USB interface, a Wifi interface, a Bluetooth interface, an HDMI interface, a Tuner interface, and the like.
下面结合图5对本申请双硬件系统架构进行说明。需要说明的是，图5仅仅是对本申请双硬件系统架构的一个示例性说明，并不表示对本申请的限定。在实际应用中，两个硬件系统均可根据需要包含更多或更少的硬件或接口。The dual hardware system architecture of the present application will be described below with reference to FIG. 5. It should be noted that FIG. 5 is only an exemplary illustration of the dual hardware system architecture of the present application and does not represent a limitation on the present application. In practical applications, both hardware systems may include more or less hardware or interfaces as required.
图5中示例性示出了根据图4显示设备200的硬件架构框图。如图5所示，显示设备200的硬件系统可以包括A芯片和N芯片，以及通过各类接口与A芯片或N芯片相连接的电路。FIG. 5 exemplarily shows a block diagram of the hardware architecture of the display device 200 according to FIG. 4. As shown in FIG. 5, the hardware system of the display device 200 may include an A chip and an N chip, as well as circuits connected to the A chip or the N chip through various interfaces.
N芯片可以包括调谐解调器220、通信器230、外部装置接口250、控制器210、存储器290、用户输入接口260-3、视频处理器260-1、音频处理器260-2、显示器280、音频输出接口270、供电电路240。在其他实施例中N芯片也可以包括更多或更少的电路。The N chip may include a tuner and demodulator 220, a communicator 230, an external device interface 250, a controller 210, a memory 290, a user input interface 260-3, a video processor 260-1, an audio processor 260-2, a display 280, Audio output interface 270, power supply circuit 240. In other embodiments, the N chip may also include more or fewer circuits.
其中，调谐解调器220，用于对通过有线或无线方式接收的广播电视信号进行放大、混频和谐振等调制解调处理，从而从多个无线或有线广播电视信号中解调出用户所选择电视频道的频率中所携带的音视频信号，以及附加信息（例如EPG数据信号）。根据电视信号广播制式不同，调谐解调器220的信号途径可以有很多种，诸如：地面广播、有线广播、卫星广播或互联网广播等；以及根据调制类型不同，所述信号的调制方式可以是数字调制方式，也可以是模拟调制方式；以及根据接收电视信号种类不同，调谐解调器220可以解调模拟信号和/或数字信号。The tuner and demodulator 220 is used to perform modulation/demodulation processing such as amplification, mixing, and resonance on broadcast television signals received in a wired or wireless manner, so as to demodulate, from multiple wireless or cable broadcast television signals, the audio and video signals carried in the frequency of the television channel selected by the user, as well as additional information (such as an EPG data signal). Depending on the television signal broadcasting system, the signal path of the tuner and demodulator 220 may be of many kinds, such as terrestrial broadcasting, cable broadcasting, satellite broadcasting, or Internet broadcasting; depending on the modulation type, the signal modulation method may be digital or analog; and depending on the type of received television signal, the tuner and demodulator 220 may demodulate analog signals and/or digital signals.
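调谐解调的选台过程可以用如下简化示意说明：按用户所选频率从多路信号中取出该频道携带的音视频信号和附加信息。其中的频率与数据布局均为虚构示例，仅示意“按频率分离信号”的逻辑。The channel-selection logic of tuning and demodulation can be illustrated by the following simplified sketch, which takes out the audio/video signal and additional information carried on the frequency selected by the user; the frequencies and data layout are invented examples that only illustrate separating signals by frequency.

```python
# Hypothetical multiplex: frequency (Hz) -> signals carried on that
# frequency (audio/video plus additional EPG information).

MULTIPLEX = {
    474_000_000: {"av": "channel-21-av", "epg": "channel-21-epg"},
    482_000_000: {"av": "channel-22-av", "epg": "channel-22-epg"},
}

def demodulate(frequency_hz: int):
    """Return the (audio/video, EPG) signals for the selected frequency."""
    signals = MULTIPLEX[frequency_hz]
    return signals["av"], signals["epg"]

av, epg = demodulate(474_000_000)
```

实际的调谐解调器在射频域完成放大、混频、谐振等处理，此处仅抽象出"频率到信号"的选择关系。A real tuner performs amplification, mixing, and resonance in the RF domain; only the frequency-to-signal selection relationship is abstracted here.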
调谐解调器220,还用于根据用户选择,以及由控制器210控制,响应用户选择的电视频道频率以及该频率所携带的电视信号。The tuner and demodulator 220 is also used to respond to the TV channel frequency selected by the user and the TV signal carried by the frequency according to the user's selection and control by the controller 210.
在一些实施例中，调谐解调器220也可在外置设备中，如外置机顶盒等。这样，机顶盒通过调制解调后输出电视音视频信号，经过外部装置接口250输入至显示设备200中。In some embodiments, the tuner and demodulator 220 may also be in an external device, such as an external set-top box. In this way, the set-top box outputs television audio and video signals after modulation and demodulation, which are input into the display device 200 through the external device interface 250.
通信器230是用于根据各种通信协议类型与外部设备或外部服务器进行通信的组件。例如:通信器230可以包括WIFI电路231,蓝牙通信协议电路232,有线以太网通信协议电路233,及红外通信协议电路等其他网络通信协议电路或近场通信协议电路(图中未示出)。The communicator 230 is a component for communicating with external devices or external servers according to various communication protocol types. For example, the communicator 230 may include a WIFI circuit 231, a Bluetooth communication protocol circuit 232, a wired Ethernet communication protocol circuit 233, and an infrared communication protocol circuit and other network communication protocol circuits or near field communication protocol circuits (not shown in the figure).
显示设备200可以通过通信器230与外部控制设备或内容提供设备之间建立控制信号和数据信号的连接。例如,通信器可根据控制器的控制接收遥控器100的控制信号。The display device 200 may establish a control signal and a data signal connection with an external control device or content providing device through the communicator 230. For example, the communicator may receive the control signal of the remote controller 100 according to the control of the controller.
外部装置接口250,是提供N芯片控制器210和A芯片及外部其他设备间数据传输的组件。外部装置接口250可按照有线/无线方式与诸如机顶盒、游戏装置、笔记本电脑等的外部设备连接,可接收外部设备的诸如视频信号(例如运动图像)、音频信号(例如音乐)、附加信息(例如EPG)等数据。The external device interface 250 is a component that provides data transmission between the N chip controller 210 and the A chip and other external devices. The external device interface 250 can be connected to external devices such as set-top boxes, game devices, notebook computers, etc. in a wired/wireless manner, and can receive external devices such as video signals (such as moving images), audio signals (such as music), and additional information (such as EPG) and other data.
其中，外部装置接口250可以包括：高清多媒体接口(HDMI)端子也称之为HDMI251、复合视频消隐同步(CVBS)端子也称之为AV252、模拟或数字分量端子也称之为分量253、通用串行总线(USB)端子254、红绿蓝(RGB)端子（图中未示出）等任一个或多个。本申请不对外部装置接口的数量和类型进行限制。The external device interface 250 may include any one or more of a High-Definition Multimedia Interface (HDMI) terminal, also referred to as HDMI 251; a composite video blanking synchronization (CVBS) terminal, also referred to as AV 252; an analog or digital component terminal, also referred to as component 253; a Universal Serial Bus (USB) terminal 254; a red-green-blue (RGB) terminal (not shown in the figure); and the like. This application does not limit the number and types of external device interfaces.
控制器210,通过运行存储在存储器290上的各种软件控制程序(如操作系统和/或各种应用程序),来控制显示设备200的工作和响应用户的操作。The controller 210 controls the operation of the display device 200 and responds to user operations by running various software control programs (such as an operating system and/or various application programs) stored on the memory 290.
如图5所示,控制器210包括只读存储器ROM213、随机存取存储器RAM214、图形处理器216、CPU处理器212、通信接口218、以及通信总线。其中,ROM213和RAM214以及图形处理器216、CPU处理器212、通信接口218通过总线相连接。As shown in FIG. 5, the controller 210 includes a read-only memory ROM 213, a random access memory RAM 214, a graphics processor 216, a CPU processor 212, a communication interface 218, and a communication bus. Among them, the ROM 213, the RAM 214, the graphics processor 216, the CPU processor 212, and the communication interface 218 are connected by a bus.
ROM213,用于存储各种系统启动的指令。如在收到开机信号时,显示设备200电源开始启动,CPU处理器212运行ROM中系统启动指令,将存储在存储器290的操作系统拷贝至RAM214中,以开始运行启动操作系统。当操作系统启动完成后,CPU处理器212再将存储器290中各种应用程序拷贝至RAM214中,然后,开始运行启动各种应用程序。ROM 213 is used to store various system startup instructions. For example, when a power-on signal is received, the power supply of the display device 200 starts up, and the CPU processor 212 executes the system startup instructions in the ROM and copies the operating system stored in the memory 290 to the RAM 214 to start the operating system. After the operating system has started, the CPU processor 212 copies the various application programs in the memory 290 to the RAM 214 and then starts to run the various application programs.
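上述两阶段启动流程(先将操作系统拷贝至RAM并启动,再拷贝并启动各应用程序)可用如下Python代码示意;其中`boot`等名称与数据结构仅为说明用的假设,并非本专利的实际接口。The two-stage startup flow above (copy the OS to RAM and start it first, then copy and launch the applications) can be sketched as follows; the name `boot` and the data layout are illustrative assumptions, not actual interfaces from this patent.

```python
def boot(memory):
    """Hypothetical sketch of the two-stage startup described above."""
    ram = {}
    started = []
    # Stage 1: the ROM boot instructions copy the OS from memory to RAM.
    ram["os"] = memory["os"]
    started.append(ram["os"])          # the operating system runs first
    # Stage 2: only after the OS is up are the applications copied and launched.
    ram["apps"] = list(memory["apps"])
    started.extend(ram["apps"])
    return started

print(boot({"os": "operating-system", "apps": ["launcher", "player"]}))
# → ['operating-system', 'launcher', 'player']
```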
图形处理器216,用于产生各种图形对象,如:图标、操作菜单、以及用户输入指令显示图形等。图形处理器216包括运算器,通过接收用户输入各种交互指令进行运算,根据显示属性显示各种对象;以及包括渲染器,产生基于运算器得到的各种对象,渲染的结果显示在显示器280上。The graphics processor 216 is used to generate various graphics objects, such as icons, operation menus, and graphics displayed in response to user input instructions. It includes an arithmetic unit, which performs operations on the various interactive commands input by the user and displays various objects according to their display attributes, and a renderer, which renders the various objects obtained from the arithmetic unit; the rendering result is displayed on the display 280.
CPU处理器212,用于执行存储在存储器290中操作系统和应用程序指令,以及根据接收外部输入的各种交互指令,来执行各种应用程序、数据和内容,以便最终显示和播放各种音视频内容。The CPU processor 212 is configured to execute the operating system and application program instructions stored in the memory 290, and to execute various application programs, data, and content according to the various interactive instructions received from external input, so as to finally display and play various audio and video content.
在一些实施例中,CPU处理器212,可以包括多个处理器。所述多个处理器中可包括一个主处理器以及多个或一个子处理器。主处理器,用于在预加电模式中执行显示设备200一些操作,和/或在正常模式下显示画面的操作。多个或一个子处理器,用于执行在待机模式等状态下的一种操作。In some embodiments, the CPU processor 212 may include multiple processors. The multiple processors may include one main processor and multiple or one sub-processors. The main processor is used to perform some operations of the display device 200 in the pre-power-on mode, and/or to display images in the normal mode. Multiple or one sub-processor, used to perform an operation in the standby mode and other states.
通信接口218,可包括第一接口218-1到第n接口218-n。这些接口可以是经由网络被连接到外部设备的网络接口。The communication interface 218 may include a first interface 218-1 to an nth interface 218-n. These interfaces may be network interfaces connected to external devices via a network.
控制器210可以控制显示设备200的整体操作。例如:响应于接收到用于选择在显示器280上显示UI对象的用户命令,控制器210便可以执行与由用户命令选择的对象有关的操作。The controller 210 may control the overall operation of the display device 200. For example, in response to receiving a user command for selecting a UI object to be displayed on the display 280, the controller 210 may perform an operation related to the object selected by the user command.
其中,所述对象可以是可选对象中的任何一个,例如超链接或图标。与所选择的对象有关操作,例如:显示连接到超链接页面、文档、图像等操作,或者执行与图标相对应程序的操作。用于选择UI对象用户命令,可以是通过连接到显示设备200的各种输入装置(例如,鼠标、键盘、触摸板等)输入命令或者与由用户说出语音相对应的语音命令。Wherein, the object may be any one of the selectable objects, such as a hyperlink or an icon. Operations related to the selected object, for example: display operations connected to hyperlink pages, documents, images, etc., or perform operations corresponding to the icon. The user command for selecting the UI object may be a command input through various input devices (for example, a mouse, a keyboard, a touch pad, etc.) connected to the display device 200 or a voice command corresponding to the voice spoken by the user.
存储器290,存储用于驱动和控制显示设备200的各种软件电路。如:存储器290中存储的各种软件电路,包括:基础电路、检测电路、通信电路、显示控制电路、浏览器电路、和各种服务电路等(图中未示出)。The memory 290 stores various software circuits for driving and controlling the display device 200. For example, the various software circuits stored in the memory 290 include: basic circuits, detection circuits, communication circuits, display control circuits, browser circuits, and various service circuits (not shown in the figure).
其中,基础电路是用于显示设备200中各个硬件之间信号通信、并向上层电路发送处理和控制信号的底层软件电路。检测电路是用于从各种传感器或用户输入接口中收集各种信息,并进行数模转换以及分析管理的管理电路。语音识别电路中包括语音解析电路和语音指令数据库电路。显示控制电路是用于控制显示器280进行显示图像内容的电路,可以用于播放多媒体图像内容和UI界面等信息。通信电路,是用于与外部设备之间进行控制和数据通信的电路。浏览器电路,是用于执行浏览服务器之间数据通信的电路。服务电路,是用于提供各种服务以及各类应用程序在内的电路。Among them, the basic circuit is a low-level software circuit used for signal communication between various hardware in the display device 200 and sending processing and control signals to the upper-level circuit. The detection circuit is a management circuit used to collect various information from various sensors or user input interfaces, and perform digital-to-analog conversion, analysis and management. The voice recognition circuit includes a voice analysis circuit and a voice command database circuit. The display control circuit is a circuit used to control the display 280 to display image content, and can be used to play information such as multimedia image content and UI interfaces. The communication circuit is a circuit used for control and data communication with external devices. The browser circuit is a circuit used to perform data communication between browsing servers. The service circuit is a circuit used to provide various services and various applications.
同时,存储器290还用于存储接收外部数据和用户数据、各种用户界面中各个项目的图像以及焦点对象的视觉效果图等。At the same time, the memory 290 is also used to store and receive external data and user data, images of various items in various user interfaces, and visual effect diagrams of focus objects, and the like.
用户输入接口260-3,用于将用户的输入信号发送给控制器210,或者,将从控制器210输出的信号传送给用户。示例性的,控制装置(例如移动终端或遥控器)可将用户输入的诸如电源开关信号、频道选择信号、音量调节信号等输入信号发送至用户输入接口,再由用户输入接口260-3转送至控制器210;或者,控制装置可接收经控制器210处理从用户输入接口260-3输出的音频、视频或数据等输出信号,并且显示接收的输出信号或将接收的输出信号输出为音频或振动形式。The user input interface 260-3 is used to send the user's input signals to the controller 210, or to transmit signals output from the controller 210 to the user. Exemplarily, a control device (such as a mobile terminal or a remote control) may send input signals input by the user, such as a power switch signal, a channel selection signal, and a volume adjustment signal, to the user input interface, which the user input interface 260-3 then forwards to the controller 210; alternatively, the control device may receive output signals such as audio, video, or data processed by the controller 210 and output from the user input interface 260-3, and display the received output signals or output them in audio or vibration form.
在一些实施例中,用户可在显示器280上显示的图形用户界面(GUI)输入用户命令,则用户输入接口260-3通过图形用户界面(GUI)接收用户输入命令。或者,用户可通过输入特定的声音或手势进行输入用户命令,则用户输入接口260-3通过传感器识别出声音或手势,来接收用户输入命令。In some embodiments, the user may input a user command on a graphical user interface (GUI) displayed on the display 280, and the user input interface 260-3 receives the user input command through the graphical user interface (GUI). Alternatively, the user may input a user command by inputting a specific sound or gesture, and the user input interface 260-3 recognizes the sound or gesture through a sensor to receive the user input command.
视频处理器260-1,用于接收视频信号,根据输入信号的标准编解码协议,进行解压缩、解码、缩放、降噪、帧率转换、分辨率转换、图像合成等视频数据处理,可得到直接在显示器280上显示或播放的视频信号。The video processor 260-1 is used to receive video signals and perform video data processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to the standard codec protocol of the input signal, to obtain a video signal that can be displayed or played directly on the display 280.
在一些实施例中,视频处理器260-1,包括解复用电路、视频解码电路、图像合成电路、帧率转换电路、显示格式化电路等(图中未示出)。In some embodiments, the video processor 260-1 includes a demultiplexing circuit, a video decoding circuit, an image synthesis circuit, a frame rate conversion circuit, a display formatting circuit, etc. (not shown in the figure).
其中,解复用电路,用于对输入音视频数据流进行解复用处理,如输入MPEG-2,则解复用电路进行解复用成视频信号和音频信号等。Among them, the demultiplexing circuit is used to demultiplex the input audio and video data stream. For example, if MPEG-2 is input, the demultiplexing circuit will demultiplex into a video signal and an audio signal.
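解复用的核心逻辑可示意如下:将交织的音视频包按类型拆分为独立的基本流。该示例为假设性的简化;真实的MPEG-2解复用还需解析PID、PES包头等。The core of demultiplexing can be sketched as follows: interleaved audio/video packets are split by type into separate elementary streams. This is a hypothetical simplification; real MPEG-2 demultiplexing also parses PIDs, PES headers, and so on.

```python
def demultiplex(packets):
    """Split an interleaved packet list into per-type elementary streams."""
    streams = {"video": [], "audio": []}
    for kind, payload in packets:
        streams[kind].append(payload)   # route each packet by its type tag
    return streams

ts = [("video", "V0"), ("audio", "A0"), ("video", "V1"), ("audio", "A1")]
print(demultiplex(ts))
# → {'video': ['V0', 'V1'], 'audio': ['A0', 'A1']}
```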
视频解码电路,用于对解复用后的视频信号进行处理,包括解码和缩放处理等。The video decoding circuit is used to process the demultiplexed video signal, including decoding and scaling.
图像合成电路,如图像合成器,其用于将图形生成器根据用户输入或自身生成的GUI信号,与缩放处理后视频画面进行叠加混合处理,以生成可供显示的图像信号。An image synthesis circuit, such as an image synthesizer, is used to superimpose and mix the GUI signal generated by the graphics generator with the zoomed video image according to user input or itself to generate an image signal for display.
帧率转换电路,用于对输入视频的帧率进行转换,如将输入的24Hz、25Hz、30Hz、60Hz视频的帧率转换为60Hz、120Hz或240Hz的帧率,其中,输入帧率可以与源视频流有关,输出帧率可以与显示器的刷新率有关。显示格式化电路,用于将帧率转换电路输出的信号,改变为符合诸如显示器显示格式的信号,如将帧率转换电路输出的信号进行格式转换以输出RGB数据信号。The frame rate conversion circuit is used to convert the frame rate of the input video, such as converting the frame rate of input 24Hz, 25Hz, 30Hz, or 60Hz video to a frame rate of 60Hz, 120Hz, or 240Hz, where the input frame rate may be related to the source video stream, and the output frame rate may be related to the refresh rate of the display. The display formatting circuit is used to change the signal output by the frame rate conversion circuit into a signal that conforms to the display format of the display, for example, performing format conversion on the signal output by the frame rate conversion circuit to output RGB data signals.
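例如,24Hz转60Hz可按时间对齐重复源帧,形成经典的3:2节奏。以下示例仅演示最简单的帧重复方案,为说明用的假设;实际电路还可能采用运动补偿插帧等技术。For example, converting 24Hz to 60Hz can repeat source frames in a time-aligned way, producing the classic 3:2 cadence. The sketch below shows only the simplest frame-repetition scheme, as an illustrative assumption; a real circuit may also use motion-compensated interpolation.

```python
def convert_frame_rate(frames, src_hz, dst_hz):
    """Map each output instant i/dst_hz back to the source frame active then."""
    n_out = len(frames) * dst_hz // src_hz
    return [frames[i * src_hz // dst_hz] for i in range(n_out)]

print(convert_frame_rate(["A", "B", "C", "D"], 24, 60))
# → ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']  (3:2 cadence)
```

对于25Hz转60Hz等非整数倍转换,同一公式会产生不均匀的重复节奏。For non-integer ratios such as 25Hz to 60Hz, the same formula yields an uneven repetition cadence.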
显示器280,用于接收源自视频处理器260-1输入的图像信号,进行显示视频内容和图像以及菜单操控界面。显示器280包括用于呈现画面的显示器组件以及驱动图像显示的驱动组件。显示视频内容,可以来自调谐解调器220接收的广播信号中的视频,也可以来自通信器或外部设备接口输入的视频内容。显示器280,同时显示显示设备200中产生且用于控制显示设备200的用户操控界面UI。The display 280 is configured to receive image signals input from the video processor 260-1, display video content and images, and a menu control interface. The display 280 includes a display component for presenting a picture and a driving component for driving image display. The displayed video content can be from the video in the broadcast signal received by the tuner and demodulator 220, or from the video content input by the communicator or the interface of an external device. The display 280 simultaneously displays a user manipulation interface UI generated in the display device 200 and used to control the display device 200.
以及,根据显示器280类型不同,还包括用于驱动显示的驱动组件。或者,倘若显示器280为一种投影显示器,还可以包括一种投影装置和投影屏幕。And, depending on the type of the display 280, it also includes a driving component for driving the display. Alternatively, if the display 280 is a projection display, it may also include a projection device and a projection screen.
音频处理器260-2,用于接收音频信号,根据输入信号的标准编解码协议,进行解压缩和解码,以及降噪、数模转换、和放大处理等音频数据处理,得到可以在扬声器272中播放的音频信号。The audio processor 260-2 is used to receive audio signals and perform decompression and decoding according to the standard codec protocol of the input signal, as well as audio data processing such as noise reduction, digital-to-analog conversion, and amplification, to obtain an audio signal that can be played in the speaker 272.
音频输出接口270,用于在控制器210的控制下接收音频处理器260-2输出的音频信号,音频输出接口可包括扬声器272,或输出至外接设备的发声装置的外接音响输出端子274,如:外接音响端子或耳机输出端子等。The audio output interface 270 is used to receive the audio signal output by the audio processor 260-2 under the control of the controller 210. The audio output interface may include a speaker 272, or an external audio output terminal 274 that outputs to the sound-producing device of an external device, such as an external audio terminal or a headphone output terminal.
在其他一些示例性实施例中,视频处理器260-1可以包括一个或多个芯片组成。音频处理器260-2,也可以包括一个或多个芯片组成。In some other exemplary embodiments, the video processor 260-1 may include one or more chips. The audio processor 260-2 may also include one or more chips.
以及,在其他一些示例性实施例中,视频处理器260-1和音频处理器260-2,可以为单独的芯片,也可以与控制器210一起集成在一个或多个芯片中。And, in some other exemplary embodiments, the video processor 260-1 and the audio processor 260-2 may be separate chips, or may be integrated with the controller 210 in one or more chips.
供电电路240,用于在控制器210控制下,将外部电源输入的电力为显示设备200提供电源供电支持。供电电路240可以包括安装在显示设备200内部的内置电源电路,也可以是安装在显示设备200外部的电源,如在显示设备200中提供外接电源的电源接口。The power supply circuit 240 is configured to provide power supply support for the display device 200 with power input from an external power supply under the control of the controller 210. The power supply circuit 240 may include a built-in power supply circuit installed inside the display device 200, or may be a power supply installed outside the display device 200, such as a power interface in the display device 200 that provides an external power supply.
与N芯片相类似,如图5所示,A芯片可以包括控制器310、通信器330、检测器340、存储器390。在某些实施例中还可以包括用户输入接口、视频处理器360-1、音频处理器360-2、显示器、音频输出接口(图中未示出)。在某些实施例中,也可以存在独立为A芯片供电的供电电路(图中未示出)。Similar to the N chip, as shown in FIG. 5, the A chip may include a controller 310, a communicator 330, a detector 340, and a memory 390. In some embodiments, it may also include a user input interface, a video processor 360-1, an audio processor 360-2, a display, and an audio output interface (not shown in the figure). In some embodiments, there may also be a power supply circuit (not shown in the figure) that independently supplies power to the A chip.
通信器330是用于根据各种通信协议类型与外部设备或外部服务器进行通信的组件。例如:通信器330可以包括WIFI电路331,蓝牙通信协议电路332,有线以太网通信协议电路333,及红外通信协议电路等其他网络通信协议电路或近场通信协议电路(图中未示出)。The communicator 330 is a component for communicating with external devices or external servers according to various communication protocol types. For example, the communicator 330 may include a WIFI circuit 331, a Bluetooth communication protocol circuit 332, a wired Ethernet communication protocol circuit 333, and an infrared communication protocol circuit and other network communication protocol circuits or near field communication protocol circuits (not shown in the figure).
A芯片的通信器330和N芯片的通信器230也有相互交互。例如,N芯片硬件系统内的WiFi电路231用于连接外部网络,与外部服务器等产生网络通信。A芯片硬件系统内的WiFi电路331用于连接至N芯片的WiFi电路231,而不与外界网络等产生直接连接,A芯片通过N芯片连接外部网络。因此,对于用户而言,一个如上述实施例中的显示设备只对外显示一个WiFi账号。The communicator 330 of the A chip and the communicator 230 of the N chip also interact with each other. For example, the WiFi circuit 231 in the N chip hardware system is used to connect to an external network and communicate with external servers and the like. The WiFi circuit 331 in the A chip hardware system is used to connect to the WiFi circuit 231 of the N chip, without a direct connection to the external network; the A chip connects to the external network through the N chip. Therefore, for the user, a display device as in the above embodiment presents only one WiFi account to the outside.
检测器340,是显示设备A芯片用于采集外部环境或与外部交互的信号的组件。检测器340可以包括光接收器342,用于采集环境光线强度的传感器,可以通过采集环境光来自适应显示参数变化等;还可以包括图像采集器341,如相机、摄像头等,可以用于采集外部环境场景,以及用于采集用户的属性或与用户交互手势,可以自适应变化显示参数,也可以识别用户手势,以实现与用户之间互动的功能。The detector 340 is a component used by the chip of the display device A to collect signals from the external environment or interact with the outside. The detector 340 may include a light receiver 342, a sensor used to collect the intensity of ambient light, which can adaptively display parameter changes by collecting ambient light, etc.; it may also include an image collector 341, such as a camera, a camera, etc., which can be used to collect external The environment scene, as well as the user's attributes or gestures used to interact with the user, can adaptively change the display parameters, and can also recognize the user's gestures to achieve the function of interaction with the user.
外部装置接口350,提供控制器310与N芯片或外部其他设备间数据传输的组件。外部装置接口可按照有线/无线方式与诸如机顶盒、游戏装置、笔记本电脑等的外部设备连接。The external device interface 350 is a component that provides data transmission between the controller 310 and the N chip or other external devices. The external device interface can be connected to external devices such as set-top boxes, game devices, and notebook computers in a wired/wireless manner.
视频处理器360-1,用于处理相关视频信号。The video processor 360-1 is used to process related video signals.
控制器310,通过运行存储在存储器390上的各种软件控制程序(如用安装的第三方应用等),以及与N芯片的交互,来控制显示设备200的工作和响应用户的操作。The controller 310 controls the work of the display device 200 and responds to user operations by running various software control programs (such as installed third-party applications, etc.) stored on the memory 390 and interacting with the N chip.
如图5所示,控制器310包括只读存储器ROM313、随机存取存储器RAM314、图形处理器316、CPU处理器312、通信接口318、以及通信总线。其中,ROM313和RAM314以及图形处理器316、CPU处理器312、通信接口318通过总线相连接。As shown in FIG. 5, the controller 310 includes a read-only memory ROM 313, a random access memory RAM 314, a graphics processor 316, a CPU processor 312, a communication interface 318, and a communication bus. Among them, the ROM 313 and the RAM 314, the graphics processor 316, the CPU processor 312, and the communication interface 318 are connected by a bus.
ROM313,用于存储各种系统启动的指令。CPU处理器312运行ROM中系统启动指令,将存储在存储器390的操作系统拷贝至RAM314中,以开始运行启动操作系统。当操作系统启动完成后,CPU处理器312再将存储器390中各种应用程序拷贝至RAM314中,然后,开始运行启动各种应用程序。ROM 313 is used to store various system startup instructions. The CPU processor 312 runs the system startup instructions in the ROM and copies the operating system stored in the memory 390 to the RAM 314 to start the operating system. After the operating system has started, the CPU processor 312 copies the various application programs in the memory 390 to the RAM 314 and then starts to run the various application programs.
CPU处理器312,用于执行存储在存储器390中操作系统和应用程序指令,和与N芯片进行通信、信号、数据、指令等传输与交互,以及根据接收外部输入的各种交互指令,来执行各种应用程序、数据和内容,以便最终显示和播放各种音视频内容。The CPU processor 312 is used to execute the operating system and application program instructions stored in the memory 390, to communicate with the N chip and exchange signals, data, instructions, and the like, and to execute various application programs, data, and content according to the various interactive instructions received from external input, so as to finally display and play various audio and video content.
通信接口318为多个,可包括第一接口318-1到第n接口318-n。这些接口可以是经由网络被连接到外部设备的网络接口,也可以是经由网络被连接到N芯片的网络接口。There are multiple communication interfaces 318, which may include a first interface 318-1 to an nth interface 318-n. These interfaces may be network interfaces connected to external devices via the network, or network interfaces connected to the N chip via the network.
控制器310可以控制显示设备200的整体操作。例如:响应于接收到用于选择在显示器280上显示UI对象的用户命令,控制器310便可以执行与由用户命令选择的对象有关的操作。The controller 310 may control the overall operation of the display device 200. For example, in response to receiving a user command for selecting a UI object to be displayed on the display 280, the controller 310 may perform an operation related to the object selected by the user command.
图形处理器316,用于产生各种图形对象,如:图标、操作菜单、以及用户输入指令显示图形等。图形处理器316包括运算器,通过接收用户输入各种交互指令进行运算,根据显示属性显示各种对象;以及包括渲染器,产生基于运算器得到的各种对象,渲染的结果显示在显示器280上。The graphics processor 316 is used to generate various graphics objects, such as icons, operation menus, and graphics displayed in response to user input instructions. It includes an arithmetic unit, which performs operations on the various interactive commands input by the user and displays various objects according to their display attributes, and a renderer, which renders the various objects obtained from the arithmetic unit; the rendering result is displayed on the display 280.
A芯片的图形处理器316与N芯片的图形处理器216均能产生各种图形对象。区别性的,若应用1安装于A芯片,应用2安装在N芯片,当用户在应用1的界面,且在应用1内进行用户输入的指令时,由A芯片图形处理器316产生图形对象。当用户在应用2的界面,且在应用2内进行用户输入的指令时,由N芯片的图形处理器216产生图形对象。Both the graphics processor 316 of the A chip and the graphics processor 216 of the N chip can generate various graphics objects. The difference is that if application 1 is installed on the A chip and application 2 is installed on the N chip, then when the user is in the interface of application 1 and inputs instructions within application 1, the graphics object is generated by the graphics processor 316 of the A chip; when the user is in the interface of application 2 and inputs instructions within application 2, the graphics object is generated by the graphics processor 216 of the N chip.
图6中示例性示出了根据示例性实施例中显示设备的功能配置示意图。Fig. 6 exemplarily shows a schematic diagram of a functional configuration of a display device according to an exemplary embodiment.
如图6所示,A芯片的存储器390和N芯片的存储器290分别用于存储操作系统、应用程序、内容和用户数据等,在A芯片的控制器310和N芯片的控制器210的控制下执行驱动显示设备200的系统运行以及响应用户的各种操作。A芯片的存储器390和N芯片的存储器290可以包括易失性和/或非易失性存储器。As shown in FIG. 6, the memory 390 of the A chip and the memory 290 of the N chip are used to store operating systems, application programs, content, and user data, respectively, and, under the control of the controller 310 of the A chip and the controller 210 of the N chip, to perform the system operations that drive the display device 200 and to respond to various user operations. The memory 390 of the A chip and the memory 290 of the N chip may include volatile and/or nonvolatile memory.
对于N芯片,存储器290,具体用于存储驱动显示设备200中控制器210的运行程序,以及存储显示设备200内置各种应用程序,以及用户从外部设备下载的各种应用程序、以及与应用程序相关的各种图形用户界面,以及与图形用户界面相关的各种对象,用户数据信息,以及各种支持应用程序的内部数据。存储器290用于存储操作系统(OS)内核、中间件和应用等系统软件,以及存储输入的视频数据和音频数据、及其他用户数据。For the N chip, the memory 290 is specifically used to store the operating program that drives the controller 210 in the display device 200, the various application programs built into the display device 200, the various application programs downloaded by the user from external devices, the various graphical user interfaces related to the applications, the various objects related to the graphical user interfaces, user data information, and various internal data supporting the applications. The memory 290 is used to store system software such as an operating system (OS) kernel, middleware, and applications, as well as input video data and audio data, and other user data.
存储器290,具体用于存储视频处理器260-1和音频处理器260-2、显示器280、通信器230、调谐解调器220、输入/输出接口等驱动程序和相关数据。The memory 290 is specifically used to store driver programs and related data such as the video processor 260-1 and the audio processor 260-2, the display 280, the communicator 230, the tuner and demodulator 220, and the input/output interface.
在一些实施例中,存储器290可以存储软件和/或程序,用于表示操作系统(OS)的软件程序包括,例如:内核、中间件、应用编程接口(API)和/或应用程序。示例性的,内核可控制或管理系统资源,或其它程序所实施的功能(如所述中间件、API或应用程序),以及内核可以提供接口,以允许中间件和API或应用访问控制器,以实现控制或管理系统资源。In some embodiments, the memory 290 may store software and/or programs. The software programs representing an operating system (OS) include, for example, a kernel, middleware, application programming interfaces (APIs), and/or application programs. Exemplarily, the kernel may control or manage system resources, or functions implemented by other programs (such as the middleware, APIs, or application programs), and the kernel may provide interfaces to allow the middleware, APIs, or applications to access the controller, in order to control or manage system resources.
在一些实施例中,存储器290,包括广播接收电路2901、频道控制电路2902、音量控制电路2903、图像控制电路2904、显示控制电路2905、第一音频控制电路2906、外部指令识别电路2907、通信控制电路2908、光接收电路2909、电力控制电路2910、操作系统2911、以及应用程序2912、浏览器电路(图中未示出)等等。控制器210通过运行存储器290中各种软件程序,来执行诸如:广播电视信号接收解调功能、电视频道选择控制功能、音量选择控制功能、图像控制功能、显示控制功能、音频控制功能、外部指令识别功能、通信控制功能、光信号接收功能、电力控制功能、支持各种功能的软件操控平台、以及浏览器功能等各类功能。In some embodiments, the memory 290 includes a broadcast receiving circuit 2901, a channel control circuit 2902, a volume control circuit 2903, an image control circuit 2904, a display control circuit 2905, a first audio control circuit 2906, an external command recognition circuit 2907, a communication control circuit 2908, a light receiving circuit 2909, a power control circuit 2910, an operating system 2911, application programs 2912, a browser circuit (not shown in the figure), and so on. The controller 210 runs the various software programs in the memory 290 to implement functions such as: broadcast and television signal reception and demodulation, TV channel selection control, volume selection control, image control, display control, audio control, external command recognition, communication control, optical signal reception, power control, a software control platform supporting various functions, and browser functions.
存储器390,存储用于驱动和控制显示设备200的各种软件电路。如:存储器390中存储的各种软件电路,包括:基础电路、检测电路、通信电路、显示控制电路、浏览器电路、和各种服务电路等(图中未示出)。由于存储器390与存储器290的功能比较相似,相关之处参见存储器290即可,在此就不再赘述。The memory 390 stores various software circuits for driving and controlling the display device 200. For example, the various software circuits stored in the memory 390 include: basic circuits, detection circuits, communication circuits, display control circuits, browser circuits, and various service circuits (not shown in the figure). Since the functions of the memory 390 and the memory 290 are relatively similar, please refer to the memory 290 for related parts, which will not be repeated here.
在一些实施例中,存储器390,包括图像控制电路3904、第二音频控制电路3906、外部指令识别电路3907、通信控制电路3908、光接收电路3909、操作系统3911、以及应用程序3912、浏览器电路3913等等。控制器310通过运行存储器390中各种软件程序,来执行诸如:图像控制功能、显示控制功能、音频控制功能、外部指令识别功能、通信控制功能、光信号接收功能、电力控制功能、支持各种功能的软件操控平台、以及浏览器功能等各类功能。In some embodiments, the memory 390 includes an image control circuit 3904, a second audio control circuit 3906, an external command recognition circuit 3907, a communication control circuit 3908, a light receiving circuit 3909, an operating system 3911, an application program 3912, a browser circuit 3913, and so on. The controller 310 runs the various software programs in the memory 390 to implement functions such as: image control, display control, audio control, external command recognition, communication control, optical signal reception, power control, a software control platform supporting various functions, and browser functions.
区别性的,N芯片的外部指令识别电路2907和A芯片的外部指令识别电路3907可识别不同的指令。Differently, the external command recognition circuit 2907 of the N chip and the external command recognition circuit 3907 of the A chip can recognize different commands.
在一些实施例中,由于摄像头等图像接收设备与A芯片连接,因此,A芯片的外部指令识别电路3907可包括图形识别电路3907-1,图形识别电路3907-1内存储有图形数据库,摄像头接收到外界的图形指令时,与图形数据库中的指令进行对应关系,以对显示设备作出指令控制。而由于语音接收设备以及遥控器与N芯片连接,因此,N芯片的外部指令识别电路2907可包括语音识别电路2907-2,语音识别电路2907-2内存储有语音数据库,语音接收设备等接收到外界的语音指令时,与语音数据库中的指令进行对应关系,以对显示设备作出指令控制。同样的,遥控器等控制装置100与N芯片连接,由按键指令识别电路2907-3与控制装置100进行指令交互。In some embodiments, because image receiving devices such as cameras are connected to the A chip, the external command recognition circuit 3907 of the A chip may include a graphic recognition circuit 3907-1. The graphic recognition circuit 3907-1 stores a graphic database; when the camera receives a graphic instruction from the outside, the instruction is matched against the instructions in the graphic database to control the display device. Since the voice receiving device and the remote control are connected to the N chip, the external command recognition circuit 2907 of the N chip may include a voice recognition circuit 2907-2. The voice recognition circuit 2907-2 stores a voice database; when the voice receiving device receives an external voice instruction, it is matched against the instructions in the voice database to control the display device. Similarly, a control device 100 such as a remote controller is connected to the N chip, and the key command recognition circuit 2907-3 interacts with the control device 100 in commands.
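上述"接收外界指令、在数据库中匹配、对显示设备作出指令控制"的流程可示意如下;数据库条目与指令名称均为假设示例,并非专利中的实际数据。The flow above (receive an external instruction, match it in a database, control the display device) can be sketched as follows; the database entries and command names are hypothetical examples, not actual data from this patent.

```python
# Hypothetical databases, standing in for the graphic / voice databases above.
GRAPHIC_DB = {"palm_up": "volume_up", "palm_down": "volume_down"}
VOICE_DB = {"mute": "audio_mute", "next channel": "channel_next"}

def recognize(source, token):
    """Match an external instruction against the database of its input source."""
    db = GRAPHIC_DB if source == "camera" else VOICE_DB
    return db.get(token)  # None means no matching control instruction

print(recognize("camera", "palm_up"))   # → volume_up
print(recognize("voice", "mute"))       # → audio_mute
```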
图7(a)中示例性示出了根据示例性实施例中显示设备200中软件系统的配置框图。FIG. 7(a) exemplarily shows a configuration block diagram of a software system in the display device 200 according to an exemplary embodiment.
对N芯片,如图7(a)中所示,操作系统2911,包括用于处理各种基础系统服务和用于实施硬件相关任务的执行操作软件,充当应用程序和硬件组件之间完成数据处理的媒介。For the N chip, as shown in Figure 7(a), the operating system 2911 includes operating software for processing various basic system services and implementing hardware-related tasks, serving as the medium for completing data processing between application programs and hardware components.
在一些实施例中,部分操作系统内核可以包含一系列软件,用以管理显示设备硬件资源,并为其他程序或软件代码提供服务。In some embodiments, part of the operating system kernel may include a series of software to manage the hardware resources of the display device and provide services for other programs or software code.
在一些实施例中,部分操作系统内核可包含一个或多个设备驱动器,设备驱动器可以是操作系统中的一组软件代码,帮助操作或控制显示设备关联的设备或硬件。驱动器可以包含操作视频、音频和/或其他多媒体组件的代码。在一些实施例中,包括显示器、摄像头、Flash、WiFi和音频驱动器。In some embodiments, part of the operating system kernel may include one or more device drivers, and a device driver may be a set of software code in the operating system that helps operate or control a device or hardware associated with the display device. A driver may contain code to manipulate video, audio, and/or other multimedia components. In some embodiments, these include display, camera, Flash, WiFi, and audio drivers.
其中,可访问性电路2911-1,用于修改或访问应用程序,以实现应用程序的可访问性和对其显示内容的可操作性。Among them, the accessibility circuit 2911-1 is used to modify or access the application program to realize the accessibility of the application program and the operability of its display content.
通信电路2911-2,用于经由相关通信接口和通信网络与其他外设的连接。The communication circuit 2911-2 is used to connect to other peripherals via the relevant communication interface and communication network.
用户界面电路2911-3,用于提供显示用户界面的对象,以供各应用程序访问,可实现用户可操作性。The user interface circuit 2911-3 is used to provide objects that display the user interface for access by various applications, and can realize user operability.
控制应用程序2911-4,用于控制进程管理,包括运行时间应用程序等。The control application 2911-4 is used to control process management, including runtime applications.
事件传输系统2914,可在操作系统2911内或应用程序2912中实现。一些实施例中,一方面在操作系统2911内实现,同时在应用程序2912中实现,用于监听各种用户输入事件,将根据各种事件指代响应各类事件或子事件的识别结果,而实施一组或多组预定义的操作的处理程序。The event transmission system 2914 can be implemented in the operating system 2911 or in the application program 2912. In some embodiments, it is implemented both in the operating system 2911 and in the application program 2912, to monitor the various user input events and, according to the recognition results that respond to the various events or sub-events, to implement one or more sets of predefined operation handlers.
其中,事件监听电路2914-1,用于监听用户输入接口输入事件或子事件。Among them, the event monitoring circuit 2914-1 is used to monitor input events or sub-events of the user input interface.
事件识别电路2914-2,用于对各种用户输入接口输入各类事件的定义,识别出各种事件或子事件,且将其传输给相应的处理程序,用以执行其相应一组或多组的处理。The event recognition circuit 2914-2 is used to define the various types of events input through the various user input interfaces, recognize the various events or sub-events, and transmit them to the corresponding handlers to execute their corresponding one or more sets of processing.
其中,事件或子事件,是指显示设备200中一个或多个传感器检测的输入,以及外界控制设备(如控制装置100等)的输入。如:语音输入各种子事件,手势识别的手势输入子事件,以及控制装置的遥控按键指令输入的子事件等。在一些实施例中,遥控器中一个或多个子事件包括多种形式,包括但不限于按键按上/下/左/右、确定键、按键按住等中一个或组合,以及非实体按键的操作,如移动、按住、释放等操作。The event or sub-event refers to input detected by one or more sensors in the display device 200, as well as input from an external control device (such as the control device 100). For example: various sub-events of voice input, gesture input sub-events of gesture recognition, and sub-events of remote control key command input of the control device. In some embodiments, one or more sub-events of the remote control take various forms, including but not limited to one or a combination of pressing the up/down/left/right keys, the OK key, a key long-press, etc., as well as operations of non-physical keys, such as moving, pressing, and releasing.
界面布局管理电路2913，直接或间接接收来自于事件传输***2914监听到的各用户输入事件或子事件，用于更新用户界面的布局，包括但不限于界面中各控件或子控件的位置，以及容器的大小或位置、层级等与界面布局相关的各种执行操作。The interface layout management circuit 2913 directly or indirectly receives the user input events or sub-events monitored by the event transmission system 2914, and is used to update the layout of the user interface, including but not limited to the positions of the controls or sub-controls in the interface, as well as various operations related to the interface layout, such as the size, position, and hierarchy of containers.
由于A芯片的操作***3911与N芯片的操作***2911的功能比较相似,相关之处参见操作***2911即可,在此就不再赘述。Since the functions of the operating system 3911 of the A chip and the operating system 2911 of the N chip are relatively similar, please refer to the operating system 2911 for related details, and will not be repeated here.
如图7(b)中所示,显示设备的应用程序包含可在显示设备200执行的各种应用程序。As shown in FIG. 7(b), the application programs of the display device include various application programs that can be executed on the display device 200.
N芯片的应用程序2912可包含但不限于一个或多个应用程序,如:视频点播应用程序、应用程序中心、游戏应用等。A芯片的应用程序3912可包含但不限于一个或多个应用程序,如:直播电视应用程序、媒体中心应用程序等。需要说明的是,A芯片和N芯片上分别包含什么应用程序是根据操作***和其他设计确定的,本申请无需对A芯片和N芯片上所包含的应用程序做具体的限定和划分。The application program 2912 of the N chip may include, but is not limited to, one or more application programs, such as: video-on-demand application, application center, game application, and so on. The application 3912 of the A chip may include, but is not limited to, one or more applications, such as a live TV application, a media center application, and so on. It should be noted that the application programs contained on the A chip and the N chip are determined according to the operating system and other designs. This application does not need to specifically limit and divide the application programs contained on the A chip and the N chip.
直播电视应用程序，可以通过不同的信号源提供直播电视。例如，直播电视应用程序可以使用来自有线电视、无线广播、卫星服务或其他类型的直播电视服务的输入提供电视信号。以及，直播电视应用程序可在显示设备200上显示直播电视信号的视频。The live TV application can provide live TV through different signal sources. For example, the live TV application can provide a TV signal using input from cable TV, wireless broadcasting, satellite services, or other types of live TV services. The live TV application can also display the video of the live TV signal on the display device 200.
视频点播应用程序,可以提供来自不同存储源的视频。不同于直播电视应用程序,视频点播提供来自某些存储源的视频显示。例如,视频点播可以来自云存储的服务器端、来自包含已存视频节目的本地硬盘储存器。Video-on-demand applications can provide videos from different storage sources. Unlike live TV applications, VOD provides video display from certain storage sources. For example, video on demand can come from the server side of cloud storage, and from the local hard disk storage that contains stored video programs.
媒体中心应用程序,可以提供各种多媒体内容播放的应用程序。例如,媒体中心,可以为不同于直播电视或视频点播,用户可通过媒体中心应用程序访问各种图像或音频所提供服务。Media center applications can provide various multimedia content playback applications. For example, the media center can provide services that are different from live TV or video on demand, and users can access various images or audio through the media center application.
应用程序中心，可以提供储存各种应用程序。应用程序可以是一种游戏、应用程序，或某些和计算机***或其他设备相关但可以在显示设备中运行的其他应用程序。应用程序中心可从不同来源获得这些应用程序，将它们储存在本地储存器中，然后在显示设备200上可运行。The application center can store various application programs. An application program may be a game, an application, or some other application that is related to a computer system or other devices but can run on the display device. The application center can obtain these applications from different sources, store them in local storage, and then run them on the display device 200.
图8中示例性示出了根据示例性实施例中显示设备200中用户界面的示意图。如图8所示，用户界面包括多个视图显示区，在一些实施例中，第一视图显示区201和播放画面202，其中，播放画面包括布局一个或多个不同项目。以及用户界面中还包括指示项目被选择的选择器，可通过用户输入而移动选择器的位置，以改变选择不同的项目。FIG. 8 exemplarily shows a schematic diagram of a user interface in the display device 200 according to an exemplary embodiment. As shown in FIG. 8, the user interface includes multiple view display areas, in some embodiments a first view display area 201 and a play screen 202, where the play screen includes one or more different items laid out in it. The user interface also includes a selector indicating that an item is selected; the position of the selector can be moved through user input to change the selection to a different item.
需要说明的是,多个视图显示区可以呈现不同层级的显示画面。如,第一视图显示区可呈现视频聊天项目内容,第二视图显示区可呈现应用层项目内容(如,网页视频、VOD展示、应用程序画面等)。It should be noted that multiple view display areas can present display screens of different levels. For example, the display area of the first view may present the content of the video chat item, and the display area of the second view may present the content of the application layer item (eg, webpage video, VOD display, application program screen, etc.).
在一些实施例中，不同视图显示区的呈现存在优先级区别，优先级不同的视图显示区之间，视图显示区的显示优先级不同。如，***层的优先级高于应用层的优先级，当用户在应用层使用选择器和画面切换时，不遮挡***层的视图显示区的画面展示；以及，根据用户的选择使应用层的视图显示区的大小和位置发生变化时，***层的视图显示区的大小和位置不受影响。In some embodiments, the presentation of different view display areas has different priorities, and view display areas with different priorities have different display priorities. For example, the priority of the system layer is higher than that of the application layer: when the user uses the selector and switches screens in the application layer, the picture displayed in the view display area of the system layer is not blocked; and when the size and position of the view display area of the application layer change according to the user's selection, the size and position of the view display area of the system layer are not affected.
也可以呈现相同层级的显示画面，此时，选择器可以在第一视图显示区和第二视图显示区之间做切换，以及当第一视图显示区的大小和位置发生变化时，第二视图显示区的大小和位置可随之发生改变。Display screens of the same level can also be presented. In this case, the selector can switch between the first view display area and the second view display area, and when the size and position of the first view display area change, the size and position of the second view display area can change accordingly.
由于A芯片及N芯片中可能分别安装有独立的操作***，从而使显示设备200中存在两个独立但又相互关联的子***。例如，A芯片和N芯片均可以独立安装有安卓（Android）及各类APP，使得每个芯片均可以实现一定的功能，并且使A芯片和N芯片协同实现某项功能。Since independent operating systems may be installed in the A chip and the N chip respectively, there are two independent but interrelated subsystems in the display device 200. For example, both the A chip and the N chip can be independently installed with Android and various APPs, so that each chip can realize certain functions, and the A chip and the N chip can cooperate to realize a certain function.
在前述双***显示设备的基础上,如图9所示,本申请实施例提供一种音画同步处理方法,所述方法包括如下步骤:On the basis of the aforementioned dual-system display device, as shown in FIG. 9, an embodiment of the present application provides a method for audio-visual synchronization processing, and the method includes the following steps:
步骤S10,获取第一画质处理耗时、第一声音处理耗时、第二画质处理耗时和第二声音处理耗时。Step S10: Obtain the time-consuming processing of the first image quality, the time-consuming processing of the first sound, the time-consuming processing of the second image quality, and the time-consuming processing of the second sound.
第一芯片(即A芯片)主要播放网络片源和本地媒体等视频；第二芯片(即N芯片)可以通过HDMI 2.0外接机顶盒等设备，实现直播电视的播放，第二芯片用于将画质处理后的图像数据输出至显示屏，以显示视频图像，第二芯片还用于将声音处理后的声音数据输出至声音播放器（即图5中的音频输出接口270），用于播放视频的声音。The first chip (that is, the A chip) mainly plays videos such as network sources and local media; the second chip (that is, the N chip) can be connected to external devices such as a set-top box through HDMI 2.0 to play live TV. The second chip is used to output the image data after image quality processing to the display screen to display the video image; the second chip is also used to output the sound data after sound processing to the sound player (that is, the audio output interface 270 in FIG. 5) to play the sound of the video.
其中，第一画质处理耗时是第一芯片对视频信号进行画质处理所产生的耗时，可以由第一芯片中的第一视频处理器获取；第一声音处理耗时是第一芯片对音频信号进行声音处理所产生的耗时，可以由第一芯片中的第一音频处理器获取；第二芯片通过通信接口（比如HDMI）接收第一芯片输出的视频信号，并对第一芯片输出的视频信号进行画质处理，由此在第二芯片中产生了第二画质处理耗时，第二画质处理耗时可以由第二芯片中的第二视频处理器获取；第二芯片通过通信接口（比如HDMI）接收第一芯片输出的音频信号，并对第一芯片输出的音频信号进行声音处理，由此在第二芯片中产生了第二声音处理耗时，第二声音处理耗时可以由第二芯片中的第二音频处理器获取。The first image quality processing time is the time taken by the first chip to perform image quality processing on the video signal, and can be obtained by the first video processor in the first chip; the first sound processing time is the time taken by the first chip to perform sound processing on the audio signal, and can be obtained by the first audio processor in the first chip. The second chip receives the video signal output by the first chip through a communication interface (such as HDMI) and performs image quality processing on it, which produces the second image quality processing time in the second chip; the second image quality processing time can be obtained by the second video processor in the second chip. The second chip also receives the audio signal output by the first chip through the communication interface (such as HDMI) and performs sound processing on it, which produces the second sound processing time in the second chip; the second sound processing time can be obtained by the second audio processor in the second chip.
本实施例中,画质处理包括亮度、对比度、色度、色调、清晰度、图像降噪、动态对比度、伽玛、色温、白平衡、色彩校正、亮度动态范围、运动画面补偿等处理环节。In this embodiment, the image quality processing includes processing links such as brightness, contrast, chroma, hue, sharpness, image noise reduction, dynamic contrast, gamma, color temperature, white balance, color correction, dynamic range of brightness, motion picture compensation, and so on.
对于第一芯片和第二芯片，画质处理耗时主要包括两方面：一方面是完成常规图像处理任务，以使图像达到特定画质效果所需的时间，比如完成亮度、对比度、色度、色调、清晰度、图像降噪、动态对比度、伽玛、色温、白平衡、色彩校正、亮度动态范围、运动画面补偿等各项处理环节总共所消耗的时间；另一方面是对每帧图像进行处理时产生的PQ延迟。由于第一芯片和第二芯片是对图像进行一帧一帧处理，在处理当前帧时，会读取当前帧及其之后的几帧图像数据，通过参照当前帧之后的几帧图像数据，完成对当前帧的画质处理。由于画质处理过程中需要读取后几帧的图像数据，从而会导致每帧图像进行PQ处理时产生延迟，即上述PQ延迟。For the first chip and the second chip, the image quality processing time mainly includes two parts. One part is the time required to complete conventional image processing tasks so that the image achieves a specific image quality effect, that is, the total time consumed by processing links such as brightness, contrast, chroma, hue, sharpness, image noise reduction, dynamic contrast, gamma, color temperature, white balance, color correction, brightness dynamic range, and motion picture compensation. The other part is the PQ delay produced when each frame of image is processed. Since the first chip and the second chip process images frame by frame, when processing the current frame they read the image data of the current frame and several subsequent frames, and complete the image quality processing of the current frame by referring to those subsequent frames. Because the image data of several subsequent frames must be read during image quality processing, a delay is produced when each frame undergoes PQ processing, namely the PQ delay described above.
本实施例中所述的画质处理耗时即为第一方面图像处理耗时与第二方面PQ延迟的总和。因此可以通过如下公式来计算获取第一画质处理耗时和第二画质处理耗时:The time-consuming image quality processing in this embodiment is the sum of the time-consuming image processing in the first aspect and the PQ delay in the second aspect. Therefore, the following formula can be used to calculate the processing time for the first image quality and the processing time for the second image quality:
[公式图像：计算第一画质处理耗时T1与第二画质处理耗时T2的公式（原文为图像 PCTCN2020071103-appb-000003）] [Formula image: the equations for calculating the first image quality processing time T1 and the second image quality processing time T2 (image PCTCN2020071103-appb-000003 in the original)]
式中，T1为所述第一画质处理耗时；T2为所述第二画质处理耗时；Tn为对每帧图像进行各环节画质处理时产生的耗时；Td为对每帧图像进行画质处理时产生的PQ延迟；f为刷新频率，单位为Hz；R为帧处理阈值，所述帧处理阈值用于指示读取当前帧及其之后的R-1帧的图像数据，来完成当前帧的画质处理；S为视频播放的帧数。In the formulas, T1 is the first image quality processing time; T2 is the second image quality processing time; Tn is the time taken to perform each image quality processing link on each frame of image; Td is the PQ delay produced when each frame of image undergoes image quality processing; f is the refresh rate, in Hz; R is the frame processing threshold, which indicates that the image data of the current frame and the following R-1 frames is read to complete the image quality processing of the current frame; S is the number of frames of the video being played.
如图10所示，以帧处理阈值等于4为例，当需要对第一帧图像进行PQ处理时，需要读取四帧图像，即第一帧及其后的第二帧、第三帧和第四帧，参照第二帧、第三帧和第四帧进行PQ处理，然后将处理后的图像数据写入第一帧，从而完成第一帧图像的PQ处理。假设刷新频率为60Hz，则处理一帧图像产生的PQ延迟Td=4/60s≈66.7ms。其中，帧处理阈值大于或等于2，帧处理阈值越小则产生的PQ延迟越小，帧处理阈值越大则画质处理效果越好，因此可以根据实际应用要求进行设定，本实施例不作限定。As shown in FIG. 10, taking a frame processing threshold of 4 as an example, when PQ processing needs to be performed on the first frame of image, four frames of images need to be read, namely the first frame and the second, third, and fourth frames after it. PQ processing is performed with reference to the second, third, and fourth frames, and the processed image data is then written into the first frame, completing the PQ processing of the first frame. Assuming a refresh rate of 60 Hz, the PQ delay produced by processing one frame is Td = 4/60 s ≈ 66.7 ms. The frame processing threshold is greater than or equal to 2; the smaller the frame processing threshold, the smaller the PQ delay, and the larger the frame processing threshold, the better the image quality processing effect. It can therefore be set according to actual application requirements, which is not limited in this embodiment.
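上述PQ延迟的计算可用如下Python片段示意；其中函数名为说明性假设，非专利原文内容。The PQ-delay arithmetic above (Td = R / f) can be sketched as follows; the function name is a hypothetical illustration and not part of the patent text.

```python
def pq_delay_ms(frame_threshold: int, refresh_hz: float) -> float:
    """PQ延迟 Td = R / f，换算为毫秒。
    PQ delay Td = R / f, converted to milliseconds."""
    if frame_threshold < 2:
        # 文中规定帧处理阈值大于或等于2 / the text requires R >= 2
        raise ValueError("frame threshold must be >= 2")
    return frame_threshold / refresh_hz * 1000.0

# 文中示例：阈值为4帧、刷新率60Hz时，Td ≈ 66.7ms
# Example from the text: threshold 4 frames at 60 Hz gives Td ≈ 66.7 ms
td = pq_delay_ms(4, 60.0)
```

减小阈值R会线性减小PQ延迟，与文中"帧处理阈值越小则产生的PQ延迟越小"的结论一致。Reducing R shrinks the PQ delay linearly, matching the trade-off stated in the text.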
如图11所示，声音处理包括降噪（Noise Reduction）处理电路、声音信号幅度（Prescale）处理电路、AVC（Auto Volume Control，自动音量控制）电路、音效处理电路、GEQ（Graphic Equalizer，图示均衡器）电路和PEQ（Parametric Equalizer，参量均衡器）电路等。其中，音效处理电路可以将声音处理为DTS（Digital Theater System，数字化影院***）或杜比（Dolby Atmos）等音效。按照图11所示的各环节和流程对音频进行声音处理，第一芯片和第二芯片的声音处理耗时，即第一声音处理耗时和第二声音处理耗时，为图11中处理通路各个电路进行声音处理时总共消耗的时间。As shown in FIG. 11, the sound processing includes a noise reduction processing circuit, a sound signal amplitude (Prescale) processing circuit, an AVC (Auto Volume Control) circuit, a sound effect processing circuit, a GEQ (Graphic Equalizer) circuit, a PEQ (Parametric Equalizer) circuit, and so on. The sound effect processing circuit can process the sound into sound effects such as DTS (Digital Theater System) or Dolby Atmos. The audio is processed according to the links and flow shown in FIG. 11; the sound processing time of the first chip and the second chip, that is, the first sound processing time and the second sound processing time, is the total time consumed by each circuit in the processing path in FIG. 11 when performing sound processing.
降噪（Noise Reduction）处理用于消除由PCM（Pulse Code Modulation，脉冲编码调制）板引起的噪声，通过去噪有利于提高声音质量。声音信号幅度（Prescale）处理是对声音信号幅度进行处理，让不同的信号源进入处理，保持相同的信号幅度。AVC可以实现自动音量控制，限制信号源的声音输出幅度，显示设备可以根据视频输入的音量水平，自动调节输出音量水平，保持声音的稳定，减少或消除爆音，同时放大较小的声音至适宜的范围；DTS音效和杜比音效都是对声音的音效进行处理，改善声音的播放效果。The noise reduction processing is used to eliminate the noise caused by the PCM (Pulse Code Modulation) board; noise reduction helps to improve the sound quality. The sound signal amplitude (Prescale) processing processes the amplitude of the sound signal, so that different signal sources enter processing with the same signal amplitude. AVC realizes automatic volume control and limits the sound output amplitude of the signal source: the display device can automatically adjust the output volume level according to the volume level of the video input, keeping the sound stable, reducing or eliminating popping, and amplifying quieter sounds to a suitable range. DTS sound effects and Dolby sound effects both process the sound to improve the playback effect.
GEQ电路通过面板上推拉键的分布，可直观地反映出所调出的均衡补偿曲线，各个频率的提升和衰减情况一目了然。它采用恒定Q值技术，每个频点设有一个推拉电位器，无论提升或衰减某频率，滤波器的频带宽始终不变。常用的专业均衡器则是将20Hz~20kHz的信号分成10段、15段、27段、31段来进行调节，这样可根据不同的要求分别选择不同段数的频率均衡器。一般来说，10段均衡器的频率点以倍频程间隔分布，使用在一般场合下；15段均衡器是2/3倍频程均衡器，使用在专业扩声上；31段均衡器是1/3倍频程均衡器，多用在比较重要的需要精细补偿的场合下。参量均衡器对均衡调节的各种参数都可细致调节，多附设在调音台上，但也有独立的参量均衡器，调节的参数内容包括频段、频点、增益和品质因数Q值等，可以美化和修饰声音，使声音风格更加鲜明突出、丰富多彩，达到所需要的艺术效果。Through the arrangement of sliders on its panel, the GEQ circuit intuitively reflects the equalization compensation curve being applied, so the boost and attenuation at each frequency are clear at a glance. It uses constant-Q technology: each frequency point has a slide potentiometer, and the filter bandwidth remains the same no matter how much a frequency is boosted or attenuated. Commonly used professional equalizers divide the 20 Hz–20 kHz signal into 10, 15, 27, or 31 bands for adjustment, so equalizers with different numbers of bands can be selected for different requirements. Generally, the frequency points of a 10-band equalizer are spaced one octave apart and it is used in general situations; a 15-band equalizer is a 2/3-octave equalizer used in professional sound reinforcement; a 31-band equalizer is a 1/3-octave equalizer, mostly used in more demanding situations requiring fine compensation. A parametric equalizer allows fine adjustment of all equalization parameters. It is usually attached to a mixing console, but standalone parametric equalizers also exist. The adjustable parameters include frequency band, frequency point, gain, and quality factor Q, which can beautify and shape the sound, making the sound style more distinctive and colorful to achieve the desired artistic effect.
本申请中,画质处理和声音处理的类型、具体处理过程等内容可以参照现有相关技术,本实施例不再赘述。In this application, the types and specific processing procedures of image quality processing and sound processing can be referred to the related art, which will not be repeated in this embodiment.
步骤S20,根据第一画质处理耗时、第一声音处理耗时、第二画质处理耗时和第二声音处理耗时,计算音画同步时差。Step S20: Calculate the audio-visual synchronization time difference according to the time-consuming first image quality processing, the time-consuming first sound processing, the time-consuming second image quality processing, and the time-consuming second sound processing.
在步骤S10执行完成后,具体可按照如下公式计算音画同步时差:After the execution of step S10 is completed, the audio-visual synchronization time difference can be calculated according to the following formula:
N=T1+T2-(T3+T4)N=T1+T2-(T3+T4)
式中，N为音画同步时差，T1为第一画质处理耗时，T2为第二画质处理耗时，T3为第一声音处理耗时，T4为第二声音处理耗时。计算双芯片总的画质处理耗时与双芯片总的声音处理耗时之间的差值，即为所述音画同步时差。In the formula, N is the audio-visual synchronization time difference, T1 is the first image quality processing time, T2 is the second image quality processing time, T3 is the first sound processing time, and T4 is the second sound processing time. The difference between the total image quality processing time of the two chips and the total sound processing time of the two chips is the audio-visual synchronization time difference.
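上述公式可用如下Python片段示意；其中函数与变量命名为说明性假设。The formula N = T1 + T2 - (T3 + T4) can be sketched as follows; the function and variable names are hypothetical illustrations.

```python
def av_sync_diff_ms(t1: float, t2: float, t3: float, t4: float) -> float:
    """音画同步时差 N = T1 + T2 - (T3 + T4)，单位毫秒。
    A/V sync time difference in milliseconds; N > 0 means the picture
    path is slower (sound ahead of image), N < 0 the opposite."""
    return (t1 + t2) - (t3 + t4)

# 双芯片画质耗时共90ms、声音耗时共50ms时，N = 40ms（声音超前图像）。
# With 90 ms total picture time and 50 ms total sound time, N = 40 ms.
n = av_sync_diff_ms(50.0, 40.0, 20.0, 30.0)
```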
如图12所示，在本实施例可能的实现方式中，A芯片的网络视频或者本地媒体应用刚起播前，在第一芯片进行视频解码后，可以通过A芯片的通信电路向N芯片发送获取PQ耗时和声音耗时的指令；N芯片的通信电路接收并解析A芯片发送的指令，计算N芯片的第二画质处理耗时和第二声音处理耗时；然后N芯片通信电路将第二画质处理耗时和第二声音处理耗时打包发送至A芯片；A芯片通信电路接收到N芯片发送的参数数据，A芯片解析相关数据，即可获取到N芯片的第二画质处理耗时和第二声音处理耗时。As shown in FIG. 12, in a possible implementation of this embodiment, just before the network video or local media application of the A chip starts playing, after the first chip performs video decoding, an instruction for obtaining the PQ time consumption and the sound time consumption can be sent to the N chip through the communication circuit of the A chip. The communication circuit of the N chip receives and parses the instruction sent by the A chip and calculates the second image quality processing time and the second sound processing time of the N chip; the communication circuit of the N chip then packages the second image quality processing time and the second sound processing time and sends them to the A chip. When the communication circuit of the A chip receives the parameter data sent by the N chip, the A chip parses the data and thereby obtains the second image quality processing time and the second sound processing time of the N chip.
在网络视频或者本地媒体应用起播时，A芯片获取自身的第一画质处理耗时和第一声音处理耗时，加之已获取的N芯片的第二画质处理耗时和第二声音处理耗时，从而得到T1、T2、T3和T4，并将T1、T2、T3和T4的数据发送给A芯片中的控制器；A芯片中的控制器接收T1、T2、T3和T4后，即步骤S10执行完成后，根据步骤S20即可计算出音画同步时差。本申请中，A芯片和N芯片的耗时参数可以同时获取，或者先获取A芯片的耗时参数后获取N芯片的耗时参数，又或者先获取N芯片的耗时参数然后获取A芯片的耗时参数，本实施例对此不作限定。When the network video or local media application starts playing, the A chip obtains its own first image quality processing time and first sound processing time, and together with the already obtained second image quality processing time and second sound processing time of the N chip, obtains T1, T2, T3, and T4, and sends the data of T1, T2, T3, and T4 to the controller in the A chip. After the controller in the A chip receives T1, T2, T3, and T4, that is, after step S10 is completed, the audio-visual synchronization time difference can be calculated according to step S20. In this application, the time consumption parameters of the A chip and the N chip may be obtained at the same time, or the time consumption parameters of the A chip may be obtained first and then those of the N chip, or vice versa, which is not limited in this embodiment.
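图12所述的双芯片耗时参数交互可用如下Python片段示意；其中JSON报文格式及全部字段名均为说明性假设，专利并未规定具体的报文格式。The exchange of timing parameters between the A chip and the N chip described with FIG. 12 can be sketched as follows; the JSON message format and all field names are illustrative assumptions, as the patent does not specify a concrete message format.

```python
import json

def build_query() -> str:
    """A芯片请求N芯片的画质/声音处理耗时（假设的报文）。
    Hypothetical request from the A chip asking the N chip for its timings."""
    return json.dumps({"cmd": "get_pq_audio_cost"})

def parse_reply(payload: str):
    """A芯片解析N芯片打包返回的第二画质处理耗时和第二声音处理耗时。
    Parse the packed reply carrying the second image quality processing
    time (T2) and the second sound processing time (T4), in milliseconds."""
    data = json.loads(payload)
    return data["t2_pq_ms"], data["t4_audio_ms"]

# N芯片打包回复示例 / example packed reply from the N chip
reply = json.dumps({"t2_pq_ms": 40.0, "t4_audio_ms": 25.0})
t2, t4 = parse_reply(reply)
```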
步骤S30,判断所述音画同步时差是否在阈值范围内。Step S30: It is judged whether the time difference of the audio-visual synchronization is within a threshold range.
在一些实施例中，所述阈值范围为-30ms~+20ms。20ms为阈值范围的上限值，阈值范围的上限值为允许图像比声音延迟出现的时间，即图像比声音最多晚20ms出现；-30ms为阈值范围的下限值，阈值范围的下限值为允许图像比声音超前出现的时间，即图像比声音最多早30ms出现。如果音画同步时差在阈值范围内，则认为声音和画面同步；反之，如果音画同步时差未在阈值范围内，则认为声音和画面出现不同步的现象。In some embodiments, the threshold range is -30 ms to +20 ms. The upper limit, 20 ms, is the time by which the image is allowed to appear later than the sound, that is, the image may appear at most 20 ms after the sound; the lower limit, -30 ms, is the time by which the image is allowed to appear earlier than the sound, that is, the image may appear at most 30 ms before the sound. If the audio-visual synchronization time difference is within the threshold range, the sound and the picture are considered synchronized; otherwise, if it is not within the threshold range, the sound and the picture are considered out of sync.
如果所述音画同步时差大于阈值范围的上限值,则认为双芯片的画质处理耗时要大于双芯片的声音处理耗时,导致图像会滞后于声音;如果音画同步时差小于阈值范围的下限值,则认为双芯片的画质处理耗时要小于双芯片的声音处理耗时,导致图像会超前于声音。因此,通过音画同步时差与阈值范围的比较,可以准确获知画面与声音不同步的状态,可能是图像比声音晚出现,或者是图像比声音早出现,从而有针对性地采取同步调节手段。If the audio and video synchronization time difference is greater than the upper limit of the threshold range, it is considered that the image quality processing time of the dual-chip is greater than the audio processing time of the dual-chip, causing the image to lag behind the sound; if the audio and video synchronization time difference is less than the threshold range The lower limit of, it is considered that the image quality processing time of the dual-chip is less than the sound processing time of the dual-chip, which causes the image to be ahead of the sound. Therefore, by comparing the time difference between audio and video synchronization and the threshold range, it is possible to accurately know the state of the picture and the sound being out of sync. It may be that the picture appears later than the sound, or the picture appears earlier than the sound, so that the synchronization adjustment method can be adopted in a targeted manner.
如果所述音画同步时差未在阈值范围内,则执行步骤S40,则对所述第二芯片输出的视频信号和音频信号进行同步补偿,从而将显示设备播放的声音和画面调节同步。If the audio-visual synchronization time difference is not within the threshold range, step S40 is executed to synchronously compensate the video signal and the audio signal output by the second chip, so as to synchronize the sound played by the display device with the picture adjustment.
如果所述音画同步时差大于阈值范围的上限值,说明声音会比图像早出现,则可以采用音频延迟(Audio Delay)的方式;或者采用丢帧的方式,通过丢弃与声音不同步的一部分图像帧,从而将声音与图像调节同步。如果所述音画同步时差小于阈值范围的下限值,说明图像会比声音早出现,则可采用***帧的方式,干预图像播放,从而调节声音与画面同步。If the audio-visual synchronization time difference is greater than the upper limit of the threshold range, indicating that the sound will appear earlier than the image, the audio delay method can be used; or the frame loss method can be used to discard the part that is not synchronized with the sound Image frames to synchronize sound and image adjustments. If the time difference between the audio and picture synchronization is less than the lower limit of the threshold range, it means that the image will appear earlier than the sound, and the frame can be inserted to intervene in the image playback, thereby adjusting the synchronization of the sound and the picture.
采用丢帧或插帧的方式来调节声音与画面同步时,可以包括两种同步的补偿模式,一种是将声音与图像调节立即同步,另一种是在预设的阈值时间Z内将声音与画面调节同步。When adjusting the sound and picture synchronization by dropping frames or inserting frames, two synchronization compensation modes can be included. One is to synchronize the sound and image adjustment immediately, and the other is to synchronize the sound within a preset threshold time Z. Synchronize with screen adjustment.
在执行步骤S10~步骤S30后,如果音画同步时差不在阈值范围内,则在步骤S40中,如图13所示,所述方法还可包括:After performing steps S10 to S30, if the audio-visual synchronization time difference is not within the threshold range, in step S40, as shown in FIG. 13, the method may further include:
步骤S401，判断音画同步时差是否大于阈值范围的上限。如果音画同步时差大于阈值范围的上限，说明声音超前图像播放，则可执行步骤S402、步骤S403或者步骤S404；如果音画同步时差不大于阈值范围的上限，即音画同步时差小于阈值范围的下限值，说明图像超前声音播放，则可执行步骤S405或者步骤S406。Step S401: determine whether the audio-visual synchronization time difference is greater than the upper limit of the threshold range. If it is greater than the upper limit, the sound is ahead of the image playback, and step S402, step S403, or step S404 can be performed; if it is not greater than the upper limit, that is, the audio-visual synchronization time difference is less than the lower limit of the threshold range, the image is ahead of the sound playback, and step S405 or step S406 can be performed.
当声音超前图像播放时，则有N=T1+T2-(T3+T4)大于0，即双芯片的画质处理耗时T1+T2大于双芯片的声音处理耗时T3+T4。这种情况下，在步骤S402中，如果采用将声音和画面立即调节同步的补偿模式，立即丢帧数DZ=f×N/1000，则在判断出音画同步时差大于阈值范围的上限值时，立即丢弃DZ帧的图像。其中，f为刷新频率，用于表示每秒钟（s）图像显示的帧数；N为音画同步时差，单位为毫秒。When the sound is ahead of the image playback, N = T1 + T2 - (T3 + T4) is greater than 0, that is, the total image quality processing time T1 + T2 of the two chips is greater than the total sound processing time T3 + T4. In this case, in step S402, if the compensation mode that synchronizes the sound and the picture immediately is adopted, the number of frames to drop at once is DZ = f × N / 1000; when it is determined that the audio-visual synchronization time difference is greater than the upper limit of the threshold range, DZ frames of images are dropped immediately. Here, f is the refresh rate, indicating the number of image frames displayed per second; N is the audio-visual synchronization time difference, in milliseconds.
当声音超前图像播放时,在步骤S403中,如果采用将声音和画面在预设的阈值时间Z内调节同步的补偿模式,计算间隔帧数JZ1,JZ1=Z/N,则在判断出音画同步时差大于阈值范围的上限值时,每间隔JZ1帧就丢弃一帧图像。When the sound is played in advance of the image, in step S403, if the compensation mode is used to adjust the synchronization of the sound and the picture within the preset threshold time Z, and the interval frame number JZ1 is calculated, JZ1=Z/N, then the sound and picture are judged When the synchronization time difference is greater than the upper limit of the threshold range, one frame of image is discarded every JZ1 frame.
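步骤S402与步骤S403中的丢帧计算可用如下Python片段示意；其中函数名为说明性假设。The frame-drop arithmetic of steps S402 and S403 (DZ = f × N / 1000 and JZ1 = Z / N) can be sketched as follows; the function names are hypothetical.

```python
def immediate_drop_frames(refresh_hz: float, sync_diff_ms: float) -> int:
    """立即丢帧数 DZ = f × N / 1000（适用于N>0，声音超前的情形）。
    Number of frames to drop at once; applies when N > 0 (sound ahead)."""
    return round(refresh_hz * sync_diff_ms / 1000.0)

def gradual_drop_interval(threshold_time_ms: float, sync_diff_ms: float) -> float:
    """间隔帧数 JZ1 = Z / N：每隔JZ1帧丢弃一帧，在阈值时间Z内逐步追平。
    Drop one frame every JZ1 frames to catch up within threshold time Z."""
    return threshold_time_ms / sync_diff_ms

# 例如刷新率60Hz、N=100ms时，立即丢帧数为6帧。
# For example, at 60 Hz with N = 100 ms, 6 frames are dropped at once.
dz = immediate_drop_frames(60.0, 100.0)
```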
或者,当声音超前图像播放时,在步骤S404中,执行Audio Delay的补偿模式,使音频延迟播放,从而将声音与画面调节同步。Or, when the sound is played before the image, in step S404, the Audio Delay compensation mode is executed to delay the playback of the audio, thereby synchronizing the sound with the picture adjustment.
当图像超前声音播放时，则有N=T1+T2-(T3+T4)小于0，即双芯片的画质处理耗时T1+T2小于双芯片的声音处理耗时T3+T4。这种情况下，在步骤S405中，如果采用将声音和画面立即调节同步的补偿模式，则立即插帧数CZ=f×|N|/1000，即在判断出音画同步时差小于阈值范围的下限值时，立即***CZ帧图像。其中，f为刷新频率，用于表示每秒钟图像显示的帧数；|N|为音画同步时差的绝对值，单位为毫秒。When the image is ahead of the sound playback, N = T1 + T2 - (T3 + T4) is less than 0, that is, the total image quality processing time T1 + T2 of the two chips is less than the total sound processing time T3 + T4. In this case, in step S405, if the compensation mode that synchronizes the sound and the picture immediately is adopted, the number of frames to insert at once is CZ = f × |N| / 1000; when it is determined that the audio-visual synchronization time difference is less than the lower limit of the threshold range, CZ frames of images are inserted immediately. Here, f is the refresh rate, indicating the number of image frames displayed per second; |N| is the absolute value of the audio-visual synchronization time difference, in milliseconds.
当图像超前声音播放时，在步骤S406中，如果采用将声音和画面在预设的阈值时间Z内调节同步的补偿模式，则计算间隔帧数JZ2，JZ2=Z/|N|，则在判断出音画同步时差小于阈值范围的下限值时，每间隔JZ2帧***一帧图像。本实施例中，阈值时间Z可根据实际应用情况进行设置，本实施例不作限定。When the image is ahead of the sound playback, in step S406, if the compensation mode that synchronizes the sound and the picture within the preset threshold time Z is adopted, the interval frame number JZ2 = Z / |N| is calculated; when it is determined that the audio-visual synchronization time difference is less than the lower limit of the threshold range, one frame of image is inserted every JZ2 frames. In this embodiment, the threshold time Z can be set according to actual application conditions, which is not limited in this embodiment.
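步骤S405与步骤S406中的插帧计算可用如下Python片段示意；其中函数名为说明性假设。The frame-insertion arithmetic of steps S405 and S406 (CZ = f × |N| / 1000 and JZ2 = Z / |N|) can be sketched as follows; the function names are hypothetical.

```python
def immediate_insert_frames(refresh_hz: float, sync_diff_ms: float) -> int:
    """立即插帧数 CZ = f × |N| / 1000（适用于N<0，图像超前的情形）。
    Number of frames to insert at once; applies when N < 0 (image ahead)."""
    return round(refresh_hz * abs(sync_diff_ms) / 1000.0)

def gradual_insert_interval(threshold_time_ms: float, sync_diff_ms: float) -> float:
    """间隔帧数 JZ2 = Z / |N|：每隔JZ2帧***一帧，在阈值时间Z内逐步追平。
    Insert one frame every JZ2 frames to catch up within threshold time Z."""
    return threshold_time_ms / abs(sync_diff_ms)

# 例如刷新率60Hz、N=-50ms时，立即插帧数为3帧。
# For example, at 60 Hz with N = -50 ms, 3 frames are inserted at once.
cz = immediate_insert_frames(60.0, -50.0)
```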
通过音画同步时差与阈值范围的比较，获取声音与画面不同步的状态后，不限于本实施例所列举的几种同步调节手段，本领域技术人员还可采用其他方式来调节声音与画面的同步。在视频播放过程中，用户可能会调节图像设置或声音设置。比如用户在图像设置中将图像模式调节为游戏模式，为了保证图像快速显示从而使用户获得更好的游戏体验，在游戏模式下需要将画面延迟尽可能降低。After obtaining the out-of-sync state of the sound and the picture by comparing the audio-visual synchronization time difference with the threshold range, the synchronization adjustment means are not limited to those listed in this embodiment; those skilled in the art can also adopt other methods to synchronize the sound and the picture. During video playback, the user may adjust the image settings or sound settings. For example, if the user switches the image mode to game mode in the image settings, the picture delay needs to be reduced as much as possible in game mode to ensure that images are displayed quickly and the user gets a better gaming experience.
又比如，用户在声音设置中将高级音效关闭，或者选择进入K歌低延迟模式，这时需要将声音延迟尽可能低，以保证用户具有更好的K歌体验。由于游戏模式需要图像低延迟状态，所以画质处理有些方面会设置为最小，比如读取1帧图像进行画质处理，这样画质处理耗时就会降低；用户在调节声音设置后，声音处理耗时可能也会发生变化，因此需要重新计算音画同步时差，从而保证音画同步。For another example, the user turns off advanced sound effects in the sound settings, or chooses to enter the karaoke low-latency mode; in this case the sound delay needs to be as low as possible to ensure a better karaoke experience. Since game mode requires low image delay, some aspects of the image quality processing are set to the minimum, such as reading only one frame of image for image quality processing, which reduces the image quality processing time. After the user adjusts the sound settings, the sound processing time may also change, so the audio-visual synchronization time difference needs to be recalculated to ensure audio-visual synchronization.
在本实施例可能的实现方式中，所述方法还包括：在视频播放过程中，检测是否接收到图像设置操作；如果接收到图像设置操作，则重新获取第一画质处理耗时、第一声音处理耗时、第二画质处理耗时和第二声音处理耗时，以对所述音画同步时差进行修正；如果修正后的音画同步时差不在阈值范围内，则根据修正后的音画同步时差对所述第二芯片输出的视频信号和音频信号进行同步补偿，从而将声音和画面调节同步。In a possible implementation of this embodiment, the method further includes: during video playback, detecting whether an image setting operation is received; if an image setting operation is received, re-obtaining the first image quality processing time, the first sound processing time, the second image quality processing time, and the second sound processing time to correct the audio-visual synchronization time difference; and if the corrected audio-visual synchronization time difference is not within the threshold range, performing synchronization compensation on the video signal and the audio signal output by the second chip according to the corrected audio-visual synchronization time difference, thereby synchronizing the sound and the picture.
在本实施例可能的实现方式中,所述方法还包括:在视频播放过程中,检测是否接收到声音设置操作;如果接收到声音设置操作,则重新获取第一画质 处理耗时、第一声音处理耗时、第二画质处理耗时和第二声音处理耗时,以对所述音画同步时差进行修正;如果修正后的音画同步时差不在阈值范围内,则根据修正后的音画同步时差对所述第二芯片输出的视频信号和音频信号进行同步补偿,从而将声音和画面调节同步。In a possible implementation of this embodiment, the method further includes: during the video playback process, detecting whether a sound setting operation is received; if the sound setting operation is received, reacquiring the first image quality processing time-consuming, first Time-consuming sound processing, time-consuming second image quality processing, and time-consuming second sound processing are used to correct the audio-visual synchronization time difference; if the corrected audio-visual synchronization time difference is not within the threshold range, it will be based on the corrected audio and video synchronization time difference. The picture synchronization time difference synchronously compensates the video signal and the audio signal output by the second chip, so as to synchronize the sound and picture adjustment.
Here, the image setting operation and the sound setting operation are operations performed by the user on the display device, for example with a remote control, a mouse, or a touch screen, by activating the "Image Settings" and "Sound Settings" options in the display interface, so that the user can adaptively set the playback state of image and sound according to usage needs. The image setting operation and the sound setting operation are detected by the first chip.
In a possible implementation of this embodiment, the method further includes: during video playback, acquiring the first image quality processing duration, the first sound processing duration, the second image quality processing duration, and the second sound processing duration at preset intervals, so as to correct the audio-video sync time difference; if the corrected sync time difference is not within the threshold range, performing synchronization compensation on the video signal and the audio signal output by the second chip according to the corrected sync time difference, thereby bringing sound and picture into sync. At each preset interval, for example every 2 seconds, the audio-video sync time difference is updated and corrected, so that video playback remains in an audio-video synchronized state as much as possible.
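As an illustrative sketch of this periodic correction loop (the probe function, the ±10 ms threshold, and all names are assumptions for illustration, not taken from the patent), the re-measurement and conditional compensation can be written as:

```python
def periodic_sync_correction(measure, compensate, lower=-10.0, upper=10.0, cycles=1):
    """Every preset interval, re-measure the four processing durations and
    re-derive the sync time difference; compensate only when it leaves the
    assumed threshold range [lower, upper] (in milliseconds).

    `measure` returns (T1, T2, T3, T4) in ms; `compensate` receives the
    corrected sync time difference N."""
    corrections = []
    for _ in range(cycles):
        t1, t2, t3, t4 = measure()
        n = (t1 + t2) - (t3 + t4)  # N = T1 + T2 - (T3 + T4)
        if not (lower <= n <= upper):
            compensate(n)
            corrections.append(n)
        # a real device would now wait for the preset time, e.g. 2 seconds
    return corrections
```

With a probe that reports T1 = 80, T2 = 40, T3 = 30, T4 = 20 ms, N = 70 ms falls outside the assumed ±10 ms range, so compensation is triggered.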
The present application also provides a display device embodiment for implementing the audio-video synchronization processing method described above, which should include at least a first chip, a second chip, a display screen, and an audio output interface; for details, refer to the foregoing detailed description of the dual-system architecture. The first chip is used to decode the playback data so as to separate the video signal and the audio signal, perform image quality processing on the video signal, perform sound processing on the audio signal, and then send the once-processed video signal and audio signal to the second chip through a communication interface (such as HDMI). The second chip is used to perform image quality processing on the video signal sent by the first chip and output the twice-processed video signal to the display screen, and to perform sound processing on the audio signal sent by the first chip and output the twice-processed audio signal to the audio output interface. The display device further includes:
a memory, configured to store program instructions;
a processor, configured to call and execute the program instructions in the memory, so as to execute all the steps in the foregoing audio-video synchronization processing method embodiments.
In this embodiment, the memory and the processor may be integrated, or connected through a bus. The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit, or the like. The memory may be a high-speed RAM, a disk memory, a read-only memory, a USB flash drive, a hard disk, a flash memory, a non-volatile memory, or the like. The method steps involved in the embodiments of the present application may be executed and completed directly by a hardware processor, or by a combination of hardware and software circuits in the processor.
For a display device with a dual-system structure, after the first chip performs video decoding and separates the image data from the sound data, the first image quality processing duration and the first sound processing duration are acquired, as well as the second image quality processing duration and the second sound processing duration, so that the time consumed by each of the two chips for sound-effect and image-quality processing is accurately obtained. These parameters are then used to calculate the audio-video sync time difference, that is, the time difference between the image played on the display and the sound played by the sound playback device. If the sync time difference is not within the threshold range, sound and picture are considered to be playing out of sync; from the sync time difference it can be determined whether the image leads or lags the sound, so that targeted synchronization compensation can be applied to the video signal and the audio signal output by the second chip to achieve audio-video synchronization, thereby improving the video playback quality of the display device.
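As a minimal sketch of this decision (function and variable names are illustrative, and the ±10 ms threshold is an assumed value, not from the patent), the sign of the sync time difference selects the compensation direction:

```python
def sync_state(t1, t2, t3, t4, lower=-10.0, upper=10.0):
    """Classify audio-video sync from the four processing durations (ms):
    t1/t2 are the first/second image quality processing durations,
    t3/t4 are the first/second sound processing durations."""
    n = (t1 + t2) - (t3 + t4)  # N = T1 + T2 - (T3 + T4)
    if n > upper:
        return n, "image lags sound: drop frames or delay audio"
    if n < lower:
        return n, "image leads sound: insert frames"
    return n, "within threshold: no compensation needed"
```

A positive N means the video path took longer than the audio path, so the picture appears later than the sound; a negative N means the opposite.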
This embodiment shows one implementation in which a separate memory and processor are provided in the display device to execute the audio-video synchronization processing method. The present application also provides another display device embodiment. The display device includes a first chip, a second chip, a display (i.e., the display 280 in FIG. 5), and a sound player, with a communication connection between the first chip and the second chip; the display is used to display video images, and the sound player is used to play audio. The first chip is provided with a first video processor (i.e., the video processor 360-1 in FIG. 5), a first audio processor (i.e., the audio processor 360-2 in FIG. 5), and a controller; the second chip is provided with a second video processor (i.e., the video processor 260-1 in FIG. 5) and a second audio processor (i.e., the audio processor 260-2 in FIG. 5). In the embodiments of the present application, the sound player is the audio output interface 270 in FIG. 5; the audio output interface 270 is the speaker 272 of the display device, or includes an external audio output terminal 274 for connecting external audio equipment to play sound. As shown in FIG. 14, in this display device:
the first video processor is configured to receive a video signal through an input interface, perform image quality processing on the video signal, and acquire the first image quality processing duration;
the first audio processor receives an audio signal through an input interface, performs sound processing on the audio signal, and acquires the first sound processing duration;
the second video processor receives, through a communication interface, the video signal output by the first chip, performs image quality processing on it, and acquires the second image quality processing duration;
the second audio processor receives, through a communication interface, the audio signal output by the first chip, performs sound processing on it, and acquires the second sound processing duration;
the controller in the first chip is configured to:
calculate the audio-video sync time difference according to the first image quality processing duration, the second image quality processing duration, the first sound processing duration, and the second sound processing duration;
determine whether the audio-video sync time difference is within the threshold range;
if the audio-video sync time difference is not within the threshold range, perform synchronization compensation on the video signal and the audio signal output by the second chip, transmit the compensated video signal to the display, and output the compensated audio signal to the sound player;
if the audio-video sync time difference is within the threshold range, transmit the video signal output by the second chip to the display, and output the audio signal output by the second chip to the sound player.
Corresponding to the foregoing method embodiments, in step S10, in the synchronization correction performed when an image setting operation or a sound setting operation is received, and in the synchronization correction performed at every preset interval, the first video processor acquires the first image quality processing duration, the first audio processor acquires the first sound processing duration, the second video processor acquires the second image quality processing duration, and the second audio processor acquires the second sound processing duration. After the controller receives the four durations, it holds the image quality processing and sound processing timing data of both chips at the same time, so that the controller can calculate the audio-video sync time difference. Steps S20 to S40 and the corresponding refinement steps and formulas are all executed by the controller provided in the first chip.
An embodiment of the present application further provides a display device, including:
a display, configured to display video images;
a sound player, configured to play audio;
a first video processor, configured to receive a video signal, perform first image quality processing, and acquire the first image quality processing duration;
a first audio processor, configured to receive an audio signal, perform first sound processing, and acquire the first sound processing duration;
a second video processor, configured to receive the processed video signal output by the first video processor, perform second image quality processing, and acquire the second image quality processing duration;
a second audio processor, configured to receive the processed audio signal output by the first audio processor, perform second sound processing, and acquire the second sound processing duration;
wherein the controller is configured to:
calculate the audio-video sync time difference according to the first image quality processing duration, the second image quality processing duration, the first sound processing duration, and the second sound processing duration;
compensate the processed video signal output by the second video processor according to the audio-video sync time difference and transmit it to the display, and compensate the processed audio signal output by the second audio processor according to the audio-video sync time difference and transmit it to the sound player.
For the specific implementation steps, refer to the introduction of the foregoing embodiments. The first video processor and the first audio processor may be arranged on the first chip, and the second video processor and the second audio processor may be arranged on the second chip.
In some embodiments, the first video processor, the first audio processor, the second video processor, and the second audio processor may be provided on the same chip.
In some embodiments, the controller is provided on the first chip, or it may be provided on the second chip.
In some embodiments, after calculating the audio-video sync time difference, the controller is further configured to:
determine whether the audio-video sync time difference is within the threshold range;
if the audio-video sync time difference is not within the threshold range, compensate the processed video signal output by the second video processor according to the sync time difference and transmit it to the display, and compensate the processed audio signal output by the second audio processor according to the sync time difference and transmit it to the sound player;
if the audio-video sync time difference is within the threshold range, transmit the processed video signal output by the second video processor to the display, and output the processed audio signal output by the second audio processor to the sound player. For the specific implementation steps, refer to the introduction of the foregoing embodiments.
In some embodiments, the controller is further configured to:
during video playback, detect whether an image setting operation or a sound setting operation is received;
if an image setting operation or a sound setting operation is detected, the first video processor re-acquires the first image quality processing duration, the first audio processor re-acquires the first sound processing duration, the second video processor re-acquires the second image quality processing duration, and the second audio processor re-acquires the second sound processing duration;
the controller then corrects the audio-video sync time difference according to the re-acquired first image quality processing duration, first sound processing duration, second image quality processing duration, and second sound processing duration. For the specific implementation steps, refer to the introduction of the foregoing embodiments.
In some embodiments, during video playback, the first video processor acquires the first image quality processing duration at preset intervals, the first audio processor acquires the first sound processing duration at preset intervals, the second video processor acquires the second image quality processing duration at preset intervals, and the second audio processor acquires the second sound processing duration at preset intervals. For the specific implementation steps, refer to the introduction of the foregoing embodiments.
In some embodiments, the controller is further configured to:
if the audio-video sync time difference is greater than the upper limit of the threshold range, perform frame dropping or audio delay, where the upper limit of the threshold range is the allowed time by which the image may appear later than the sound. For the specific implementation steps, refer to the introduction of the foregoing embodiments.
In some embodiments, the controller is further configured to:
if the audio-video sync time difference is less than the lower limit of the threshold range, perform frame insertion, where the lower limit of the threshold range is the allowed time by which the image may appear ahead of the sound. For the specific implementation steps, refer to the introduction of the foregoing embodiments.
In some embodiments, when the display device adopts the first compensation mode, the number of dropped frames is DZ = f × N / 1000, where f is the refresh rate, indicating the number of image frames displayed per second, and N is the audio-video sync time difference in milliseconds. The first compensation mode is the compensation mode, mentioned in the foregoing embodiments, that brings sound and picture into sync immediately.
In some embodiments, when the display device adopts the first compensation mode, the number of inserted frames is CZ = f × |N| / 1000, where f is the refresh rate, indicating the number of image frames displayed per second, and |N| is the absolute value of the audio-video sync time difference in milliseconds. For the specific implementation, refer to the introduction of the foregoing embodiments.
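The two immediate-mode formulas above can be evaluated directly; a short sketch (function names are illustrative):

```python
def frames_to_drop(f_hz, n_ms):
    """DZ = f * N / 1000: frames to drop immediately when the image lags
    the sound by n_ms milliseconds at refresh rate f_hz."""
    return f_hz * n_ms / 1000

def frames_to_insert(f_hz, n_ms):
    """CZ = f * |N| / 1000: frames to insert immediately when the image
    leads the sound (n_ms is then negative, hence the absolute value)."""
    return f_hz * abs(n_ms) / 1000
```

For example, at a 60 Hz refresh rate and N = 100 ms, DZ = 6 frames are dropped; at N = −50 ms, CZ = 3 frames are inserted.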
An embodiment of the present application further provides an audio-video synchronization processing method, in which:
a first video processor receives a video signal, performs first image quality processing, and acquires the first image quality processing duration;
a first audio processor receives an audio signal, performs first sound processing, and acquires the first sound processing duration;
a second video processor receives the processed video signal output by the first video processor, performs second image quality processing, and acquires the second image quality processing duration;
a second audio processor receives the processed audio signal output by the first audio processor, performs second sound processing, and acquires the second sound processing duration;
the controller is configured to:
calculate the audio-video sync time difference according to the first image quality processing duration, the second image quality processing duration, the first sound processing duration, and the second sound processing duration;
compensate the processed video signal output by the second video processor according to the audio-video sync time difference and transmit it to the display, and compensate the processed audio signal output by the second audio processor according to the audio-video sync time difference and transmit it to the sound player.
An embodiment of the present application further provides a display device, including:
a display, configured to display image content;
a sound reproducer, configured to reproduce sound signals;
a first processing chip, including a first video processor and a first audio processor, and receiving an external audio signal and video signal through an input interface, the first audio processor being used to process the audio signal and the first video processor being used to process the video signal, a first time delay occurring when the audio signal and the video signal are processed;
a second processing chip, configured to receive, through a connecting line, the audio signal and the video signal output by the first chip, the second processing chip including a second video processor and a second audio processor, the second audio processor being used to reprocess the audio signal received from the first processing chip and the second video processor being used to reprocess the video signal received from the first processing chip, a second time delay occurring when the audio signal and the video signal are reprocessed;
wherein, according to the first time delay and the second time delay, delay compensation is performed on the reprocessed video signal and/or audio signal, and the delay-compensated video signal and audio signal are output to the display and the sound reproducer respectively. For the specific implementation steps, refer to the foregoing embodiments.
In some embodiments, during video playback, the first chip detects whether an image setting operation or a sound setting operation is received;
if the first chip receives an image setting operation or a sound setting operation, the first audio processor reprocesses the audio signal and the first video processor reprocesses the video signal, a third time delay occurring when the audio signal and the video signal are reprocessed;
and the second audio processor reprocesses the audio signal received from the first processing chip, and the second video processor reprocesses the video signal received from the first processing chip, a fourth time delay occurring when the audio signal and the video signal are reprocessed; according to the third time delay and the fourth time delay, delay compensation is performed on the reprocessed video signal and/or audio signal.
In some embodiments, during video playback, at predetermined intervals, the first audio processor processes the audio signal and the first video processor processes the video signal, and the first time delay occurring when the audio signal and the video signal are processed is acquired; likewise, at predetermined intervals, the second audio processor reprocesses the audio signal received from the first processing chip and the second video processor reprocesses the video signal received from the first processing chip, a second time delay occurring when the audio signal and the video signal are reprocessed. For the specific implementation steps, refer to the foregoing embodiments.
In some embodiments, the controller performs delay compensation on the reprocessed video signal and/or audio signal as follows:
if the audio-video sync time difference is greater than the upper limit of the threshold range, frame dropping or audio delay is performed, where the upper limit of the threshold range is the allowed time by which the image may appear later than the sound;
where the audio-video sync time difference equals the sum of the first time delay and the second time delay. For the specific implementation steps, refer to the foregoing embodiments.
In some embodiments, the controller performs delay compensation on the reprocessed video signal and/or audio signal as follows:
if the audio-video sync time difference is less than the lower limit of the threshold range, frame insertion is performed, where the lower limit of the threshold range is the allowed time by which the image may appear ahead of the sound;
where the audio-video sync time difference equals the sum of the first time delay and the second time delay. For the specific implementation steps, refer to the foregoing embodiments.
In some embodiments, the frame dropping includes:
if the compensation mode that brings sound and picture into sync immediately is adopted, the number of immediately dropped frames is DZ = f × N / 1000, where f is the refresh rate, indicating the number of image frames displayed per second, and N is the audio-video sync time difference in milliseconds;
if the compensation mode that brings sound and picture into sync within a preset threshold time is adopted, the interval frame number JZ1 is calculated as JZ1 = Z / N, where Z is the threshold time; one frame of image is dropped every JZ1 frames. For the specific implementation steps, refer to the foregoing embodiments.
In some embodiments, the frame insertion includes:
if the compensation mode that brings sound and picture into sync immediately is adopted, the number of immediately inserted frames is CZ = f × |N| / 1000, where f is the refresh rate, indicating the number of image frames displayed per second, and |N| is the absolute value of the audio-video sync time difference in milliseconds;
if the compensation mode that brings sound and picture into sync within a preset threshold time is adopted, the interval frame number JZ2 is calculated as JZ2 = Z / |N|, where Z is the threshold time; one frame of image is inserted every JZ2 frames. For the specific implementation steps, refer to the foregoing embodiments.
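The gradual mode spreads the correction over the threshold time Z rather than applying it at once; the interval formulas above can be sketched as follows (function names are illustrative, and the formulas are reproduced exactly as stated in this embodiment):

```python
def drop_interval(z_ms, n_ms):
    """JZ1 = Z / N: drop one frame of image every JZ1 frames, so that sound
    and picture converge within the preset threshold time Z."""
    return z_ms / n_ms

def insert_interval(z_ms, n_ms):
    """JZ2 = Z / |N|: insert one frame of image every JZ2 frames
    (n_ms is negative when the image leads the sound)."""
    return z_ms / abs(n_ms)
```

With Z = 1000 ms and N = 100 ms, one frame is dropped every 10 frames; with N = −50 ms, one frame is inserted every 20 frames.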
In some embodiments, the audio-video sync time difference is calculated according to the following formula:
N = T1 + T2 − (T3 + T4)
where N is the audio-video sync time difference, T1 is the first image quality processing duration produced when the first video processor processes the video signal, T2 is the second image quality processing duration produced when the second video processor reprocesses the video signal, T3 is the first sound processing duration produced when the first audio processor processes the audio signal, and T4 is the second sound processing duration produced when the second audio processor reprocesses the audio signal. For the specific implementation steps, refer to the foregoing embodiments.
In some embodiments, the first image quality processing duration and the second image quality processing duration are calculated according to the following formula:
[Formula image PCTCN2020071103-appb-000004]
式中,T1为所述第一视频处理器对所述视频信号进行处理产生的第一画质处理耗时,T2为第二视频处理器对所述视频信号进行再处理产生的第二画质处理耗时;Tn为对每帧图像进行各环节画质处理时产生的耗时;Td为对每帧图像进行画质处理时产生的延迟;f为刷新频率,单位为Hz;R为帧处理阈值,所述帧处理阈值用于指示读取当前帧及其之后的R-1帧的图像数据,来完成当前帧的画质处理;S为视频播放的帧数。In the formula, T1 is the time-consuming processing of the first image quality generated by the first video processor processing the video signal, and T2 is the second image quality generated by the second video processor reprocessing the video signal Processing time; Tn is the time consumed when the image quality is processed for each frame of the image; Td is the delay generated when the image quality is processed for each frame of the image; f is the refresh frequency in Hz; R is the frame processing Threshold, the frame processing threshold is used to instruct to read the image data of the current frame and subsequent R-1 frames to complete the image quality processing of the current frame; S is the number of frames for video playback.
For specific implementation steps, refer to the foregoing embodiments.
Those skilled in the art will clearly understand that the techniques in the embodiments of the present application can be implemented by software plus a necessary general-purpose hardware platform. In a specific implementation, the present application further provides a computer storage medium that may store a program; when executed, the program may perform some or all of the steps of the audio-visual synchronization processing method embodiments provided in the present application. The computer storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Other embodiments of the present application will readily occur to those skilled in the art after considering the specification and practicing the application disclosed herein. The present application is intended to cover any variations, uses, or adaptations thereof that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only and do not limit the scope of protection of the present application. The true scope and spirit of the present application are indicated by the appended claims.
For identical or similar parts among the embodiments in this specification, reference may be made to one another.

Claims (18)

  1. A display device, comprising:
    a display configured to display image content;
    a sound reproducer configured to reproduce sound signals;
    a first processing chip comprising a first video processor and a first audio processor, the first processing chip receiving an external audio signal and an external video signal through an input interface, wherein the first audio processor is configured to process the audio signal and the first video processor is configured to process the video signal, a first time delay occurring when the audio signal and the video signal are processed;
    a second processing chip configured to receive, through a connecting line, the audio signal and the video signal output by the first processing chip, the second processing chip comprising a second video processor and a second audio processor, wherein the second audio processor is configured to reprocess the audio signal received from the first processing chip and the second video processor is configured to reprocess the video signal received from the first processing chip, a second time delay occurring when the audio signal and the video signal are reprocessed;
    wherein, according to the first time delay and the second time delay, time delay compensation is performed on the reprocessed video signal and/or audio signal, and the time-delay-compensated video signal and audio signal are output to the display and the sound reproducer respectively.
  2. The display device according to claim 1, wherein:
    during video playback, the first processing chip detects whether an image setting operation or a sound setting operation is received;
    if the first processing chip receives an image setting operation or a sound setting operation, the first audio processor reprocesses the audio signal and the first video processor reprocesses the video signal, a third time delay occurring when the audio signal and the video signal are reprocessed;
    and the second audio processor reprocesses the audio signal received from the first processing chip and the second video processor reprocesses the video signal received from the first processing chip, a fourth time delay occurring during this reprocessing; according to the third time delay and the fourth time delay, time delay compensation is performed on the reprocessed video signal and/or audio signal.
  3. The display device according to claim 1, wherein:
    during video playback, at predetermined intervals, the first audio processor processes the audio signal and the first video processor processes the video signal, and the first time delay incurred in processing the audio signal and the video signal is obtained; and, at predetermined intervals, the second audio processor reprocesses the audio signal received from the first processing chip and the second video processor reprocesses the video signal received from the first processing chip, the second time delay occurring when the audio signal and the video signal are reprocessed.
  4. The display device according to any one of claims 1-3, wherein the controller performs time delay compensation on the reprocessed video signal and/or audio signal as follows:
    if the audio-visual synchronization time difference is greater than the upper limit of a threshold range, frame dropping or audio delay is performed, wherein the upper limit of the threshold range is the time by which the image is allowed to lag behind the sound;
    wherein the audio-visual synchronization time difference is equal to the sum of the first time delay and the second time delay.
  5. The display device according to any one of claims 1-3, wherein the controller performs time delay compensation on the reprocessed video signal and/or audio signal as follows:
    if the audio-visual synchronization time difference is less than the lower limit of a threshold range, frame interpolation is performed, wherein the lower limit of the threshold range is the time by which the image is allowed to lead the sound;
    wherein the audio-visual synchronization time difference is equal to the sum of the first time delay and the second time delay.
  6. The display device according to claim 4, wherein the frame dropping comprises:
    if a compensation mode that synchronizes the sound and the picture immediately is adopted, dropping DZ = f × N / 1000 frames immediately, where f is the refresh frequency, representing the number of frames displayed per second, and N is the audio-visual synchronization time difference, in milliseconds;
    if a compensation mode that synchronizes the sound and the picture within a preset threshold time is adopted, calculating an interval frame number JZ1 = Z / N, where Z is the threshold time, and dropping one frame of image every JZ1 frames.
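The two drop-frame modes above can be sketched as follows (function names are illustrative, and rounding to whole frames is an assumption, since the claim gives only the formulas):

```python
def immediate_drop_count(f_hz: float, n_ms: float) -> int:
    """Immediate compensation mode: DZ = f x N / 1000 frames dropped at once.

    f_hz: refresh frequency, i.e. frames displayed per second.
    n_ms: audio-visual synchronization time difference, in milliseconds.
    """
    return round(f_hz * n_ms / 1000)

def gradual_drop_interval(z_ms: float, n_ms: float) -> float:
    """Gradual compensation mode: JZ1 = Z / N; one frame is dropped
    every JZ1 frames so sync converges within the threshold time Z."""
    return z_ms / n_ms

# On a 60 Hz panel with a 50 ms image lag, 3 frames are dropped at once.
dz = immediate_drop_count(60, 50)
```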
  7. The display device according to claim 5, wherein the frame interpolation comprises:
    if a compensation mode that synchronizes the sound and the picture immediately is adopted, inserting CZ = f × |N| / 1000 frames immediately, where f is the refresh frequency, representing the number of frames displayed per second, and |N| is the absolute value of the audio-visual synchronization time difference, in milliseconds;
    if a compensation mode that synchronizes the sound and the picture within a preset threshold time is adopted, calculating an interval frame number JZ2 = Z / |N|, where Z is the threshold time, and inserting one frame of image every JZ2 frames.
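The interpolation branch mirrors the drop-frame branch; a sketch under the same assumptions (illustrative names; the absolute value is taken because N is negative when the image leads the sound):

```python
def immediate_insert_count(f_hz: float, n_ms: float) -> int:
    """Immediate compensation mode: CZ = f x |N| / 1000 frames inserted at once."""
    return round(f_hz * abs(n_ms) / 1000)

def gradual_insert_interval(z_ms: float, n_ms: float) -> float:
    """Gradual compensation mode: JZ2 = Z / |N|; one frame is inserted
    every JZ2 frames so sync converges within the threshold time Z."""
    return z_ms / abs(n_ms)

# 60 Hz panel, image leads sound by 50 ms (N = -50): insert 3 frames.
cz = immediate_insert_count(60, -50)
```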
  8. The display device according to claim 1, wherein the audio-visual synchronization time difference is calculated according to the following formula:
    N = T1 + T2 - (T3 + T4)
    where N is the audio-visual synchronization time difference, T1 is the first image quality processing time taken by the first video processor to process the video signal, T2 is the second image quality processing time taken by the second video processor to reprocess the video signal, T3 is the first sound processing time taken by the first audio processor to process the audio signal, and T4 is the second sound processing time taken by the second audio processor to reprocess the audio signal.
  9. The display device according to claim 1 or 8, wherein the first image quality processing time and the second image quality processing time are calculated according to the following formula:
    Figure PCTCN2020071103-appb-100001
    where T1 is the first image quality processing time taken by the first video processor to process the video signal, and T2 is the second image quality processing time taken by the second video processor to reprocess the video signal; Tn is the time consumed by each stage of image quality processing on each frame of image; Td is the delay incurred when performing image quality processing on each frame of image; f is the refresh frequency, in Hz; R is the frame processing threshold, which indicates that the image data of the current frame and the R-1 frames following it are read in order to complete the image quality processing of the current frame; and S is the number of frames of the video played.
  10. An audio-visual synchronization processing method for the display device according to any one of claims 1-9, the method comprising:
    receiving, by the first video processor, a video signal through an input interface and performing image quality processing on the video signal to obtain a first image quality processing time;
    receiving, by the first audio processor, an audio signal through an input interface and performing sound processing on the audio signal to obtain a first sound processing time;
    receiving, by the second video processor, the video signal output by the first chip through a communication interface and performing image quality processing on the video signal output by the first chip to obtain a second image quality processing time;
    receiving, by the second audio processor, the audio signal output by the first chip through the communication interface and performing sound processing on the audio signal output by the first chip to obtain a second sound processing time;
    calculating, by the controller, the audio-visual synchronization time difference according to the first image quality processing time, the second image quality processing time, the first sound processing time, and the second sound processing time;
    judging, by the controller, whether the audio-visual synchronization time difference is within a threshold range;
    if the audio-visual synchronization time difference is not within the threshold range, performing, by the controller, synchronization compensation on the video signal and the audio signal output by the second chip, transmitting the compensated video signal to the display, and outputting the compensated audio signal to the sound player;
    if the audio-visual synchronization time difference is within the threshold range, transmitting, by the controller, the video signal output by the second chip to the display, and outputting the audio signal output by the second chip to the sound player.
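The decision flow of this method claim can be sketched end to end as follows (the threshold range values below are illustrative assumptions; the application does not specify concrete limits):

```python
def choose_compensation(t1: float, t2: float, t3: float, t4: float,
                        lower_ms: float = -30.0,
                        upper_ms: float = 45.0) -> str:
    """Compute N = (T1 + T2) - (T3 + T4) and pick a compensation action.

    If N exceeds the upper limit, the image lags the sound: drop frames
    or delay the audio. If N is below the lower limit, the image leads
    the sound: insert frames. Otherwise pass the signals through as-is.
    """
    n = (t1 + t2) - (t3 + t4)
    if n > upper_ms:
        return "drop_frames_or_delay_audio"
    if n < lower_ms:
        return "insert_frames"
    return "no_compensation"
```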
  11. A display device, comprising:
    a display configured to display video images;
    a sound player configured to play audio;
    a first video processor configured to receive a video signal and perform first image quality processing, obtaining a first image quality processing time;
    a first audio processor configured to receive an audio signal and perform first sound processing, obtaining a first sound processing time;
    a second video processor configured to receive the processed video signal output by the first video processor and perform second image quality processing, obtaining a second image quality processing time;
    a second audio processor configured to receive the processed audio signal output by the first audio processor and perform second sound processing, obtaining a second sound processing time;
    and a controller configured to:
    calculate the audio-visual synchronization time difference according to the first image quality processing time, the second image quality processing time, the first sound processing time, and the second sound processing time;
    compensate the processed video signal output by the second video processor according to the audio-visual synchronization time difference and transmit it to the display, and compensate the processed audio signal output by the second audio processor according to the audio-visual synchronization time difference and transmit it to the sound player.
  12. The display device according to claim 11, wherein, after calculating the audio-visual synchronization time difference, the controller is further configured to:
    judge whether the audio-visual synchronization time difference is within a threshold range;
    if the audio-visual synchronization time difference is not within the threshold range, compensate the processed video signal output by the second video processor according to the audio-visual synchronization time difference and transmit it to the display, and compensate the processed audio signal output by the second audio processor according to the audio-visual synchronization time difference and transmit it to the sound player;
    if the audio-visual synchronization time difference is within the threshold range, transmit the processed video signal output by the second video processor to the display, and output the processed audio signal output by the second audio processor to the sound player.
  13. The display device according to claim 11, wherein the controller is further configured to:
    during video playback, detect whether an image setting operation or a sound setting operation is received;
    if an image setting operation or a sound setting operation is detected, cause the first video processor to re-obtain the first image quality processing time, the first audio processor to re-obtain the first sound processing time, the second video processor to re-obtain the second image quality processing time, and the second audio processor to re-obtain the second sound processing time;
    wherein the controller corrects the audio-visual synchronization time difference according to the re-obtained first image quality processing time, first sound processing time, second image quality processing time, and second sound processing time.
  14. The display device according to claim 13, wherein:
    during video playback, the first video processor obtains the first image quality processing time at preset intervals, the first audio processor obtains the first sound processing time at preset intervals, the second video processor obtains the second image quality processing time at preset intervals, and the second audio processor obtains the second sound processing time at preset intervals.
  15. The display device according to any one of claims 11-14, wherein the controller is further configured to:
    perform frame dropping or audio delay if the audio-visual synchronization time difference is greater than the upper limit of a threshold range, wherein the upper limit of the threshold range is the time by which the image is allowed to lag behind the sound.
  16. The display device according to any one of claims 11-14, wherein the controller is further configured to:
    perform frame interpolation if the audio-visual synchronization time difference is less than the lower limit of the threshold range, wherein the lower limit of the threshold range is the time by which the image is allowed to lead the sound.
  17. The display device according to claim 15, wherein, when the display device adopts a first compensation mode, the number of dropped frames is DZ = f × N / 1000, where f is the refresh frequency, representing the number of frames displayed per second, and N is the audio-visual synchronization time difference, in milliseconds.
  18. The display device according to claim 16, wherein, when the display device adopts the first compensation mode, the number of inserted frames is CZ = f × |N| / 1000, where f is the refresh frequency, representing the number of frames displayed per second, and |N| is the absolute value of the audio-visual synchronization time difference, in milliseconds.
  19. An audio-visual synchronization processing method for the display device according to any one of claims 11-18, the method comprising:
    receiving, by the first video processor, a video signal and performing first image quality processing, obtaining a first image quality processing time;
    receiving, by the first audio processor, an audio signal and performing first sound processing, obtaining a first sound processing time;
    receiving, by the second video processor, the processed video signal output by the first video processor and performing second image quality processing, obtaining a second image quality processing time;
    receiving, by the second audio processor, the processed audio signal output by the first audio processor and performing second sound processing, obtaining a second sound processing time;
    calculating, by the controller, the audio-visual synchronization time difference according to the first image quality processing time, the second image quality processing time, the first sound processing time, and the second sound processing time;
    compensating the processed video signal output by the second video processor according to the audio-visual synchronization time difference and transmitting it to the display, and compensating the processed audio signal output by the second audio processor according to the audio-visual synchronization time difference and transmitting it to the sound player.
PCT/CN2020/071103 2019-09-04 2020-01-09 Sound and picture synchronization processing method and display device WO2021042655A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910832295.1A CN112449229B (en) 2019-09-04 2019-09-04 Sound and picture synchronous processing method and display equipment
CN201910832295.1 2019-09-04

Publications (1)

Publication Number Publication Date
WO2021042655A1 true WO2021042655A1 (en) 2021-03-11

Family

ID=74734611

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/071103 WO2021042655A1 (en) 2019-09-04 2020-01-09 Sound and picture synchronization processing method and display device

Country Status (2)

Country Link
CN (1) CN112449229B (en)
WO (1) WO2021042655A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114189728A (en) * 2021-12-13 2022-03-15 深圳市日声数码科技有限公司 Playing system for converting digital video and audio input into analog format
CN114567813A (en) * 2022-03-08 2022-05-31 深圳创维-Rgb电子有限公司 Image quality improving method and device, playing equipment and computer readable storage medium

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114489556B (en) * 2021-05-21 2022-12-09 荣耀终端有限公司 Method and equipment for playing sound
CN113891131A (en) * 2021-09-29 2022-01-04 四川长虹电器股份有限公司 Video playing method and system
CN114598917B (en) * 2022-01-27 2024-03-29 海信视像科技股份有限公司 Display device and audio processing method
CN116017012A (en) * 2022-11-28 2023-04-25 深圳创维-Rgb电子有限公司 Multi-screen synchronization method, device, display equipment and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101453655A (en) * 2007-11-30 2009-06-10 深圳华为通信技术有限公司 Method, system and device for customer controllable audio and video synchronization regulation
CN103219029A (en) * 2013-03-25 2013-07-24 广东欧珀移动通信有限公司 Method and system for automatically adjusting synchronization of audio and video
US20160309213A1 (en) * 2014-08-27 2016-10-20 Shenzhen Tcl New Technology Co., Ltd Audio/video signal synchronization method and apparatus
CN109275008A (en) * 2018-09-17 2019-01-25 青岛海信电器股份有限公司 A kind of method and apparatus of audio-visual synchronization
CN109379619A (en) * 2018-11-20 2019-02-22 青岛海信电器股份有限公司 Sound draws synchronous method and device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009157078A1 (en) * 2008-06-26 2009-12-30 富士通マイクロエレクトロニクス株式会社 Video/audio data output device and video/audio data output method
KR102218908B1 (en) * 2014-05-07 2021-02-23 엘지전자 주식회사 Digital device and method of processing a service thereof
CN104243853A (en) * 2014-08-25 2014-12-24 青岛歌尔声学科技有限公司 High-definition multimedia interface (HDMI) double-display device and display method
CN104914580B (en) * 2015-04-24 2018-04-27 北京小鸟看看科技有限公司 A kind of head-mounted display
CN104902317A (en) * 2015-05-27 2015-09-09 青岛海信电器股份有限公司 Audio video synchronization method and device
CN105657489A (en) * 2015-08-21 2016-06-08 乐视致新电子科技(天津)有限公司 Audio/video playing equipment
CN105744358B (en) * 2016-03-18 2018-09-14 青岛海信电器股份有限公司 The processing method and processing device of video playing
TWI622018B (en) * 2017-09-13 2018-04-21 緯創資通股份有限公司 Method, device and system for editing video
US10643298B2 (en) * 2018-02-14 2020-05-05 Realtek Semiconductor Corporation Video processing system and processing chip
CN109144642B (en) * 2018-08-14 2022-02-18 Oppo广东移动通信有限公司 Display control method, display control device, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101453655A (en) * 2007-11-30 2009-06-10 深圳华为通信技术有限公司 Method, system and device for customer controllable audio and video synchronization regulation
CN103219029A (en) * 2013-03-25 2013-07-24 广东欧珀移动通信有限公司 Method and system for automatically adjusting synchronization of audio and video
US20160309213A1 (en) * 2014-08-27 2016-10-20 Shenzhen Tcl New Technology Co., Ltd Audio/video signal synchronization method and apparatus
CN109275008A (en) * 2018-09-17 2019-01-25 青岛海信电器股份有限公司 A kind of method and apparatus of audio-visual synchronization
CN109379619A (en) * 2018-11-20 2019-02-22 青岛海信电器股份有限公司 Sound draws synchronous method and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114189728A (en) * 2021-12-13 2022-03-15 深圳市日声数码科技有限公司 Playing system for converting digital video and audio input into analog format
CN114567813A (en) * 2022-03-08 2022-05-31 深圳创维-Rgb电子有限公司 Image quality improving method and device, playing equipment and computer readable storage medium
CN114567813B (en) * 2022-03-08 2024-03-22 深圳创维-Rgb电子有限公司 Image quality improving method, device, playing equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN112449229B (en) 2022-01-28
CN112449229A (en) 2021-03-05

Similar Documents

Publication Publication Date Title
WO2021042655A1 (en) Sound and picture synchronization processing method and display device
WO2020248668A1 (en) Display and image processing method
CN111526415B (en) Double-screen display equipment and HDMI switching method thereof
CN112073797B (en) Volume adjusting method and display device
CN112153446B (en) Display device and streaming media video audio and video synchronization method
CN111405221B (en) Display device and display method of recording file list
WO2020248680A1 (en) Video data processing method and apparatus, and display device
WO2021189358A1 (en) Display device and volume adjustment method
WO2021031629A1 (en) Display apparatus, and multi-function button application method for control device
CN112214189A (en) Image display method and display device
CN111464840B (en) Display device and method for adjusting screen brightness of display device
CN112788422A (en) Display device
WO2021031589A1 (en) Display device and dynamic color gamut space adjustment method
CN112399243A (en) Playing method and display device
WO2021031620A1 (en) Display device and backlight brightness adjustment method
WO2020248699A1 (en) Sound processing method and display apparatus
WO2020248681A1 (en) Display device and method for displaying bluetooth switch states
CN111385631B (en) Display device, communication method and storage medium
WO2021031598A1 (en) Self-adaptive adjustment method for video chat window position, and display device
CN110602540B (en) Volume control method of display equipment and display equipment
CN113132769A (en) Display device and sound and picture synchronization method
CN112788423A (en) Display device and display method of menu interface
CN112783380A (en) Display apparatus and method
CN112104950B (en) Volume control method and display device
CN112218156B (en) Method for adjusting video dynamic contrast and display equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20860689

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20860689

Country of ref document: EP

Kind code of ref document: A1