CN111935531A - Integrated display system graph processing method based on embedded platform - Google Patents
- Publication number
- CN111935531A (application CN202010771327.4A)
- Authority
- CN
- China
- Prior art keywords
- module
- video
- class
- display system
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/643—Communication protocols
- H04N21/6437—Real-time Transport Protocol [RTP]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4316—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/268—Signal distribution or switching
Abstract
The invention relates to a graphics processing method for an integrated display system based on an embedded platform, technically characterized as follows: 4K original video is captured through an HDMI input interface, encoded with H.264, encapsulated as an RTSP code stream and pushed to the network; a VLC player receives the network video stream, the correct video stream coding algorithm and resolution information are obtained, and the measured delay of the encoding/decoding system does not exceed 100 ms. Three channels of 4K network video streams are received and decoded, the decoded video is processed by a fusion algorithm, and the result is superimposed on a background video for display. An image processing engine performs preprocessing, format conversion, image-quality enhancement, and fusion/superposition operations on the captured images; a hardware codec encodes the video and pushes it over the RTSP protocol, reducing the network transmission bandwidth of the video data: a transmission bit rate below 10 Mbps is achieved at 3840x2160 resolution and a 30 Hz frame rate.
Description
Technical Field
The invention belongs to the field of computer graphics processing, and in particular relates to a graphics processing method for an integrated display system based on an embedded platform.
Background
The integrated display system is an important component of military command-and-control systems, integrated navigation and other weapon equipment. It can display combat data comprehensively, accurately and in real time, present intuitive battlefield situation information, and provide strong support for operators to grasp battlefield dynamics in time, deploy attack and defense reasonably, and standardize strategic and tactical actions. With the rapid development of graphics and image processing technology, software technology and computer technology, advanced foreign weapon systems have been equipped with integrated display systems featuring powerful functions, complex display graphics, large screens, high resolution and a high level of integration, replacing older mechanical instruments and CRT displays. The graphics processing unit is the core component of the integrated display system; its functions include video encoding/decoding, video sharing and cross-display, and graphics fusion and superposition, so as to meet the requirements of high-speed data processing and display. The higher the image resolution, the larger the effective data volume it carries; and the faster the image processing speed, the earlier important battlefield information can be obtained. In particular, in a shipborne radar integrated display terminal, after radar image acquisition, transmission and coordinate conversion are completed, the graphics processing unit must fuse various kinds of information and superimpose them on an electronic map. Once the electronic map is fused with the radar image, the large volume of radar imagery can be shared, images can be generated in real time, and the data can be displayed by drawing a situation map for integrated ship navigation. All of this requires strong graphics processing capability as a guarantee.
Some integrated display systems in military equipment are implemented with graphics processing chips from AMD, but the internal structure of such chips is unknown, which clearly fails the requirement that key components be autonomous and controllable. Although domestic graphics processing chips can implement high-resolution display functions, their performance cannot meet the requirements of complex computing tasks such as video encoding/decoding and real-time graphics fusion and superposition, and they have certain limitations in power-sensitive applications. At present, a relatively mature embedded scheme is based on a combined DSP+FPGA architecture: the DSP serves as the graphics computing core, responsible for executing complex graphics algorithms, while the FPGA serves as a coprocessor, responsible for the logic processing of low-level algorithms. However, DSPs encounter bottlenecks when processing video at resolutions above 1080p, and implementing video processing algorithms in an FPGA is complex with a long development cycle. In addition, the DSP+FPGA architecture reduces the integration level of the overall integrated display system and is not conducive to system upgrades and maintenance.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an integrated display system graphics processing method based on an embedded platform, which realizes network video transmission, real-time multi-channel video fusion and superposition, and dynamic video resolution adaptation.
The technical problem to be solved by the invention is realized by adopting the following technical scheme:
A graphics processing method for an integrated display system based on an embedded platform comprises: a method for realizing network video transmission using an RTSP/H.264 real-time streaming media transmission scheme; a method for realizing real-time multi-channel video fusion and superposition using the video output processing module of the integrated display system's graphics processing unit; and a method for realizing dynamic video resolution adaptation using a bottom-layer driver and hardware.
Moreover, the RTSP/H.264 real-time streaming media transmission scheme, the video input processing module of the integrated display system's graphics processing unit, and the bottom-layer driver are implemented on an ARM processor with a hardware video encoding/decoding core.
Moreover, the RTSP/H.264 real-time streaming media transmission scheme is implemented with the Live555 development framework, which comprises a UsageEnvironment module, a BasicUsageEnvironment module, a GroupSock module and a LiveMedia module; the UsageEnvironment module comprises a UsageEnvironment class, a TaskScheduler class and a HashTable class, and the BasicTaskScheduler class is a subclass of the TaskScheduler class; the GroupSock module comprises an RTSPServer class; the LiveMedia module comprises a ServerMediaSession class, a MediaSubsession class and a FramedSource class.
Moreover, realizing network video transmission with the RTSP/H.264 real-time streaming media transmission scheme comprises the following steps:
step 1, setting the TaskScheduler class as the task scheduling center, creating a BasicTaskScheduler class object, setting the UsageEnvironment module as the operating environment, and creating a BasicUsageEnvironment module object;
step 2, creating an RTSPServer class object;
step 3, calling the setStreamSocket function through the setUpSocket function to create a socket connection while listening on the port;
step 4, passing the handle of the subsequent processing function and the socket handle to the task scheduling center for association;
step 5, calling the select function to block while waiting for client connections;
step 6, creating a ServerMediaSession class object, and creating a MediaSubsession sub-session that performs RTP packetization in the H.264 compressed video format on the captured and encoded data source obtained from the FramedSource class;
step 7, adding the ServerMediaSession class object to the RTSPServer class object;
and step 8, entering the main-loop doEventLoop function.
Moreover, the video input processing module of the integrated display system's graphics processing unit comprises a VPSS module, a management module, a VO module, a VENC module, an SVP module, a VI module, a VDEC module and an AVS module, wherein the VPSS module is connected to the management module, the VO module, the VENC module, the SVP module, the VI module, the VDEC module and the AVS module, respectively.
Moreover, the method for realizing real-time multi-channel video fusion and superposition using the video input processing module of the integrated display system's graphics processing unit comprises the following steps:
step 1, the VO module actively reads video and graphics data from the corresponding memory locations and outputs them through the corresponding display device;
step 2, the channels of the high-definition video layer have scaling capability: the source image from the VI module, or decompressed by the VDEC decoder, is scaled by the VPSS module and then output to a channel of the VO module for display;
step 3, judging the size of the image output to the VO module's channel: if it exceeds the size of the VO channel region, the VO module scales the image; otherwise the image is left unchanged;
and step 4, connecting a channel output by the VPSS module to the VENC module and the SVP intelligent processing module, and performing real-time encoding and intelligent-algorithm processing on the video.
Moreover, realizing dynamic video resolution adaptation using the bottom-layer driver and hardware comprises the following steps:
step 1, the bottom-layer driver captures the GPIO interrupt and reports it to the ARM processor firmware;
step 2, during the interrupt, the ARM processor firmware reads the video output information through the I2C interface;
step 3, calculating the resolution and frame rate of the input video image from the line-valid signal, field-valid signal and pixel clock signal that were read;
and step 4, configuring the input channel according to the video image parameters.
The invention has the following advantages and positive effects:
The invention captures 4K original video through the HDMI input interface, encodes it with H.264, encapsulates the code stream as RTSP and pushes it to the network, while a VLC player receives the network video stream; the correct video stream coding algorithm and resolution information are obtained, and the measured delay of the encoding/decoding system does not exceed 100 ms. Three channels of 4K network video streams are received and decoded, the decoded video is processed by a fusion algorithm, and the result is superimposed on a background video for display. The image processing engine performs preprocessing, format conversion, image-quality enhancement, and fusion/superposition operations on the captured images; the hardware codec encodes the video and pushes it over the RTSP protocol so as to reduce the network transmission bandwidth of the video data, and a transmission bit rate below 10 Mbps is achieved at 3840x2160 resolution and a 30 Hz frame rate.
Drawings
FIG. 1 is a hardware block diagram of the present invention;
FIG. 2 is a software architecture level diagram of the present invention;
FIG. 3 is a Live555 overall code frame diagram;
FIG. 4 is a flow chart of the RTSP server of the present invention;
FIG. 5 is a block diagram of the connection of the video input/processing module of the graphics processing unit of the integrated display system according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
A graphics processing method for an integrated display system based on an embedded platform comprises: a method for realizing network video transmission using an RTSP/H.264 real-time streaming media transmission scheme; a method for realizing real-time multi-channel video fusion and superposition using the video output processing module of the integrated display system's graphics processing unit; and a method for realizing dynamic video resolution adaptation using a bottom-layer driver and hardware.
The RTSP/H.264 real-time streaming media transmission scheme, the video output processing module of the integrated display system's graphics processing unit, and the bottom-layer driver are implemented on a HiSilicon Hi3559AV100 chip. The HiSilicon Hi3559AV100 provides a dual-core ARM Cortex-A73@1.6GHz and a dual-core ARM Cortex-A53@1.2GHz processing unit, and integrates a hardware video encoding/decoding core: the maximum resolution is 8192x8640 for H.264 encoding and 16384x8640 for H.265 encoding, and H.264/H.265 video decoding is supported up to 7680x4320@30fps or 3840x2160@120fps. Because the Hi3559AV100 does not support HDMI video input, a Lontium LT6911UXC protocol conversion chip is used for signal conversion; it supports video input up to 4K@60fps and converts an HDMI video input signal into a MIPI output signal. As shown in fig. 1, a video signal can enter in two ways: either an HDMI signal is fed to the Lontium LT6911UXC protocol conversion chip, whose MIPI output serves as the MIPI video input of the Hi3559AV100; or video in H.264/H.265 compressed format is encapsulated into an RTSP-based video stream and transmitted over the network, and the Hi3559AV100 receives the stream, parses the protocol, decodes it, and extracts the original video data.
Fig. 2 shows the software architecture of the integrated display system's graphics processing unit; the functional hierarchy of the whole system is divided into a hardware layer, an operating-system adaptation layer, a driver layer, a media software processing platform and an application layer. The operating system is Ubuntu, the key code is implemented in standard C++, and the application is compiled with the GCC compiler dedicated to the HiSilicon Hi3559AV100; the operating-system adaptation layer comprises the Hi3559AV100 configuration, a trimmed device tree and the device drivers; the media software processing platform is a secondary development tool for modules such as the hardware encoding/decoding core, the graphics processing unit and the video interfaces in the HiSilicon processor; and the application layer calls the interface functions provided by the media software processing platform to realize the video encoding/decoding and real-time fusion/superposition functions required by the integrated display system.
In the network video transmission realized with the RTSP/H.264 real-time streaming media transmission scheme, RTSP provides an extensible framework that can transmit real-time data, including live data and stored data, in a controlled or on-demand manner. RTSP can control multiple data-sending sessions, can push and receive multiple channels of network video streams simultaneously, and flexibly encapsulates H.264- or H.265-compressed video into RTSP streams over the underlying Ethernet transmission protocol, thereby realizing video transmission over multiple physical channels. The RTSP/H.264 real-time streaming media transmission scheme is implemented with the Live555 development framework and can be conveniently applied in an embedded system. Live555 supports RTSP over various delivery modes including unicast, multicast and broadcast, and supports streaming, receiving and packetizing many audio/video formats, including H.264, DV, AC3, MP4V-ES, MPV and T140.
As shown in FIG. 3, the Live555 development framework comprises a UsageEnvironment module, a BasicUsageEnvironment module, a GroupSock module and a LiveMedia module; the UsageEnvironment module comprises a UsageEnvironment class, a TaskScheduler class and a HashTable class; the GroupSock module comprises an RTSPServer class; the LiveMedia module comprises a ServerMediaSession class, a MediaSubsession class and a FramedSource class. The UsageEnvironment class handles input/output operations and error information; the TaskScheduler class performs event task scheduling, including handling of asynchronous events and registration of callback functions, and implements delayed scheduling of events through a DelayQueue so as to control the packet sending rate. The BasicUsageEnvironment class is a concrete realization of the UsageEnvironment class, implementing the actual input/output and task-scheduling operations; the BasicTaskScheduler class is a subclass of the TaskScheduler class, responsible for scheduling basic user tasks; the GroupSock module encapsulates a series of network interfaces, including a network address class and a packet-forwarding class. The LiveMedia module implements resource consumption by RTPSink and FileSink; implements the various processing operations of the RTP (real-time transport protocol), RTCP (real-time control protocol) and RTSP (real-time streaming protocol) client and RTSP server; and implements resource extension for various audio/video coding formats. Video transmission realizes the streaming media server mainly through a task-scheduling mechanism and an RTSP service mechanism. The task-scheduling mechanism is implemented mainly through the TaskScheduler class, which completes the cyclic scheduling of three kinds of tasks, namely network socket tasks, delayed tasks and triggered events, thereby forming the system's operating framework.
The RTSP service mechanism is implemented through the class library under the liveMedia directory of the project; a streaming media server is realized by adding the RTSP protocol to the operating framework.
As shown in fig. 4, realizing network video transmission with the RTSP/H.264 real-time streaming media transmission scheme comprises the following steps:
step 1, setting the TaskScheduler class as the task scheduling center, creating a BasicTaskScheduler class object, setting the UsageEnvironment module as the operating environment, and creating a BasicUsageEnvironment module object;
step 2, creating an RTSPServer class object;
step 3, calling the setStreamSocket function through the setUpSocket function to create a socket connection while listening on the port;
step 4, passing the handle of the subsequent processing function and the socket handle to the task scheduling center for association;
step 5, calling the select function to block while waiting for client connections;
step 6, creating a ServerMediaSession class object, and creating a MediaSubsession sub-session that performs RTP packetization in the H.264 compressed video format on the captured and encoded data source obtained from the FramedSource class;
step 7, adding the ServerMediaSession class object to the RTSPServer class object;
and step 8, entering the main-loop doEventLoop function.
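The steps above correspond closely to the canonical Live555 server bring-up. The following is a condensed sketch, not the patent's implementation: for brevity it streams from a file via Live555's H264VideoFileServerMediaSubsession, whereas the invention would instead wrap its hardware-encoder output in a FramedSource subclass (not shown); the port number, stream name and file name are illustrative.

```cpp
#include <BasicUsageEnvironment.hh>
#include <liveMedia.hh>

int main() {
    // Step 1: create the task scheduler and the usage environment.
    TaskScheduler* scheduler = BasicTaskScheduler::createNew();
    UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

    // Steps 2-5: RTSPServer::createNew creates the server object, sets up
    // the listening socket, registers its handler with the scheduler, and
    // blocks in select() internally while awaiting client connections.
    RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
    if (rtspServer == nullptr) {
        *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
        return 1;
    }

    // Step 6: a session plus an H.264 subsession that RTP-packetizes frames.
    // (The invention's subsession would pull frames from the hardware encoder.)
    ServerMediaSession* sms = ServerMediaSession::createNew(
        *env, "stream", "stream", "4K HDMI capture");
    sms->addSubsession(
        H264VideoFileServerMediaSubsession::createNew(*env, "capture.264", True));

    // Step 7: register the session with the server.
    rtspServer->addServerMediaSession(sms);

    // Step 8: enter the main event loop (does not return).
    env->taskScheduler().doEventLoop();
    return 0;
}
```

A VLC client could then open `rtsp://<server-ip>:8554/stream`, matching the reception test described in the abstract.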
Real-time multi-channel video fusion and superposition superimposes and fuses multiple channels of video signals into one channel of video signal, which is output to a display device for display. The HiSilicon Hi3559AV100 chip supports fusion and superposition of up to 4 channels of 4K ultra-high-definition video: one channel serves as the background of the output video and is scaled to a size supported by the display, while the other 3 channels serve as the foreground, superimposed on the background and shown in small windows to achieve a picture-in-picture effect. The sources of the foreground and background videos can be selected and switched, and display parameters such as the overlay position, size and priority of the foreground videos can be adjusted at will, so the video superposition effect can be controlled in real time; the Hi3559AV100 supports video output at up to 4K resolution and 60 frames. Multiple videos can be displayed on one video layer; each video display region is called a channel, channels belong to video-layer management, the videos are confined to their channels, and the channels are confined to the video layer. The ultra-high-definition and high-definition display devices support simultaneous output and display of multiple channels in software, and the output images are superimposed in priority order: when the pictures of the channels overlap, the image with higher priority is displayed on top, and if the priorities of the channels are equal, the larger the channel number, the higher the default priority. Meanwhile, OSD region-overlay technology is applied to superimpose channel-number marks on the different network video windows, making it easy to distinguish the display channels.
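The stacking rule just described (higher priority on top; ties broken by the larger channel number) can be sketched as a simple comparator; the struct and function names here are illustrative, not HiSilicon SDK types.

```cpp
#include <algorithm>
#include <vector>

// Illustrative model of a VO channel on one video layer (not an SDK type).
struct Channel {
    int number;    // channel number within the video layer
    int priority;  // user-assigned overlay priority
};

// Returns true if a should be drawn beneath b (b ends up on top):
// higher priority wins; on a tie, the larger channel number is on top.
bool drawsBeneath(const Channel& a, const Channel& b) {
    if (a.priority != b.priority) return a.priority < b.priority;
    return a.number < b.number;
}

// Orders channels bottom-to-top for compositing overlapping regions.
void sortForCompositing(std::vector<Channel>& chans) {
    std::sort(chans.begin(), chans.end(), drawsBeneath);
}
```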
As shown in fig. 5, the video input processing module of the integrated display system's graphics processing unit comprises a VPSS module, a management module, a VO module, a VENC module, an SVP module, a VI module, a VDEC module and an AVS module, wherein the VPSS module is connected to the management module, the VO module, the VENC module, the SVP module, the VI module, the VDEC module and the AVS module, respectively. The method for realizing real-time multi-channel video fusion and superposition using this video input processing module comprises the following steps:
step 1, the VO module actively reads video and graphics data from the corresponding memory locations and outputs them through the corresponding display device;
step 2, the channels of the high-definition video layer have scaling capability: the source image from the VI module, or decompressed by the VDEC decoder, is scaled by the VPSS module and then output to a channel of the VO module for display;
step 3, judging the size of the image output to the VO module's channel: if it exceeds the size of the VO channel region, the VO module scales the image; otherwise the image is left unchanged;
and step 4, connecting a channel output by the VPSS module to the VENC module and the SVP intelligent processing module to realize real-time video encoding and intelligent-algorithm processing.
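The size judgment in step 3 can be expressed as a small helper. This is a sketch of the decision only; the types and the function name are illustrative, not HiSilicon SDK types, and the actual scaling is performed in hardware by the VO module.

```cpp
// Minimal size type for the VO channel-fit decision (not an SDK type).
struct Size {
    int width;
    int height;
};

// Returns the size at which the VO module displays the image: the image is
// scaled to the channel region only if it exceeds it in either dimension;
// otherwise it is left unchanged, per step 3 above.
Size fitToChannel(Size image, Size channel) {
    if (image.width > channel.width || image.height > channel.height) {
        return channel;  // VO scales the image down to the channel region
    }
    return image;        // image already fits; no scaling
}
```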
The HiSilicon Hi3559AV100 chip uses a MIPI video input channel. The MIPI Rx is a capture unit supporting various differential video input interfaces: it receives original video data as low-voltage differential signals, converts the received serial differential signals into DC timing, and passes them to the video capture module of the Hi3559AV100. The MIPI interface uses the short packets of the CSI-2 protocol for synchronization. The MIPI Rx supports MIPI D-PHY and LVDS serial video input signals, is compatible with a DC video interface, meets the data-transmission requirements of various rates and resolutions, and supports a variety of external input devices. The interrupt pin of the video conversion chip is connected to a GPIO pin of the Hi3559AV100, and a change of the external input resolution triggers an interrupt from the conversion chip. Since the MIPI transmission rate has a minimum limit, transmitting low-resolution video over 4 lanes is unstable, so the firmware of the conversion chip and of the Hi3559AV100 must be adjusted accordingly: when the pixel clock is below 80 MHz, only the single lane Lane0 is used; when the pixel clock is at least 80 MHz but below 150 MHz, the two lanes Lane0 and Lane1 are used; and when it is 150 MHz or more, 4 or 8 lanes are used. When the resolution of the external HDMI input video changes continually, the graphics processing unit quickly detects the change and performs adaptive input/output adjustment; the whole process completes autonomously without external configuration, and the video signal output remains stable.
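The lane-selection rule above can be written as a small firmware helper. This is a sketch: the function name is illustrative, and whether 4 or 8 lanes are used at or above 150 MHz depends on the conversion-chip configuration, so 4 is assumed here.

```cpp
// Number of MIPI lanes as a function of the pixel clock in MHz, following
// the thresholds given for the LT6911UXC/Hi3559AV100 firmware adjustment:
// < 80 MHz -> Lane0 only; 80-150 MHz -> Lane0+Lane1; >= 150 MHz -> 4 lanes
// (8 lanes is also possible at that rate; 4 is assumed here).
int mipiLaneCount(double pixelClockMHz) {
    if (pixelClockMHz < 80.0)  return 1;
    if (pixelClockMHz < 150.0) return 2;
    return 4;
}
```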
The method for realizing video dynamic-resolution adaptation using the bottom-layer driver and hardware comprises the following steps:
step 1, the bottom-layer driver captures the GPIO interrupt and reports it to the ARM processor firmware;
step 2, in the interrupt handler, the ARM processor firmware reads the video output information through the I2C interface;
step 3, the resolution and frame rate of the input video image are calculated from the line-valid signal, field-valid signal and pixel-clock signal that were read;
and step 4, the input channel is configured according to the video image parameters.
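The calculation in step 3 can be sketched as follows, under the assumption that the conversion chip reports active and total line/field lengths over I2C; the `VideoTiming` fields are a hypothetical readout, not a specific chip's register map:

```cpp
#include <cstdint>

// Hypothetical timing readout modeling the line-valid / field-valid /
// pixel-clock measurements of step 3 (not a real register map).
struct VideoTiming {
    uint32_t h_active, h_total;   // pixels per line (active / total)
    uint32_t v_active, v_total;   // lines per frame (active / total)
    uint64_t pixel_clock_hz;
};

struct VideoMode { uint32_t width, height, fps; };

VideoMode detect_mode(const VideoTiming& t) {
    VideoMode m;
    m.width  = t.h_active;   // resolution comes from the active periods
    m.height = t.v_active;
    // Frame rate = pixel clock divided by total pixels per frame.
    m.fps = static_cast<uint32_t>(
        t.pixel_clock_hz / (static_cast<uint64_t>(t.h_total) * t.v_total));
    return m;
}
```

For example, standard 1080p60 timing (148.5 MHz pixel clock, 2200 x 1125 total) yields 1920x1080 at 60 Hz.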
A simulation test of the embedded-platform integrated display system graphics processing method gave the following results. The graphics processing unit designed by the invention captures 4K raw video through the HDMI input interface, encodes it with H.264, encapsulates the code stream as RTSP and pushes it to the network. Receiving the network video stream with the VLC player yields the correct video-stream coding algorithm and resolution information, and the measured delay of the encoding and decoding system does not exceed 100 ms. At the same time, three 4K network video streams are received and decoded, the decoded video is processed by the fusion algorithm, and the result is superimposed on the background video for display. The image processing engine performs preprocessing, format conversion, image-quality enhancement and fusion-superposition operations on the captured images; the hardware codec core encodes the raw images and pushes the stream so as to reduce the network transmission bandwidth of the video data, achieving a transmission rate below 10 Mbps at 3840x2160 resolution and a 30 Hz frame rate.
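To put the sub-10 Mbps figure in perspective, some rough arithmetic (assuming YUV 4:2:0 at 12 bits per pixel for the uncompressed source; the function name is illustrative):

```cpp
#include <cstdint>

// Uncompressed bitrate of a video stream: width x height x frame rate x
// bits per pixel (12 for YUV 4:2:0). 64-bit math avoids overflow at 4K.
uint64_t raw_bps(uint32_t w, uint32_t h, uint32_t fps, uint32_t bits_per_px) {
    return static_cast<uint64_t>(w) * h * fps * bits_per_px;
}
```

For 3840x2160 at 30 Hz this gives about 2.99 Gbps uncompressed, so an encoded stream under 10 Mbps corresponds to a compression ratio on the order of 300:1, well within what H.264 achieves at this resolution.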
It should be emphasized that the embodiments described herein are illustrative rather than restrictive, and thus the present invention is not limited to the embodiments described in the detailed description, but also includes other embodiments that can be derived from the technical solutions of the present invention by those skilled in the art.
Claims (7)
1. An integrated display system graphics processing method based on an embedded platform, characterized in that: the method comprises a method for realizing network video transmission using an RTSP H.264 real-time streaming-media transmission scheme, a method for realizing real-time fusion and superposition of multi-channel video using the video input processing module of the integrated display system graphics processing unit, and a method for realizing video dynamic-resolution adaptation using a bottom-layer driver and hardware.
2. The integrated display system graphics processing method based on an embedded platform according to claim 1, characterized in that: the RTSP H.264 real-time streaming-media transmission scheme, the video output processing module of the integrated display system graphics processing unit and the bottom-layer driver are implemented on an ARM core and a hardware video encoding/decoding core.
3. The integrated display system graphics processing method based on an embedded platform according to claim 1, characterized in that: the RTSP H.264 real-time streaming-media transmission scheme is implemented with the Live555 development framework, which comprises a UsageEnvironment module, a BasicUsageEnvironment module, a GroupSock module and a LiveMedia module; the UsageEnvironment module comprises a UsageEnvironment class, a TaskScheduler class and a HashTable class, and the BasicTaskScheduler class is a subclass of the TaskScheduler class; the GroupSock module comprises an RTSPServer class; the LiveMedia module comprises a ServerMediaSession class, a MediaSubsession class and a FramedSource class.
4. The integrated display system graphics processing method based on an embedded platform according to claim 1, characterized in that: realizing network video transmission with the RTSP H.264 real-time streaming-media transmission scheme comprises the following steps:
step 1, setting the TaskScheduler class as the task scheduling center and creating a BasicTaskScheduler class object; setting the UsageEnvironment module as the operating environment and creating a BasicUsageEnvironment class object;
step 2, creating an RTSPServer class object;
step 3, calling the setStreamSocket function through the setUPSocket function to create a socket connection and listen on the port;
step 4, passing the handle of the subsequent processing function and the socket handle to the task scheduling center to be associated;
step 5, calling the select function to block and wait for client connections;
step 6, creating a ServerMediaSession class object, and creating a MediaSubsession sub-session that performs RTP packetization in the H.264 compressed-video format on the captured and encoded data source obtained from the FramedSource class;
step 7, adding the ServerMediaSession class object to the RTSPServer class object;
and step 8, entering the doEventLoop main-loop function.
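The association of a socket with its processing function in steps 3-5 can be modeled as a socket-to-handler table that a select()-driven loop dispatches from. This is a simplified sketch of the mechanism only, not the Live555 TaskScheduler API:

```cpp
#include <functional>
#include <map>
#include <utility>

// Simplified model: the scheduler keeps a socket -> handler table; when
// select() reports a socket as readable, the matching handler is invoked.
// In Live555 the select() loop runs inside doEventLoop(); here the caller
// tells the scheduler which socket became readable.
class MiniScheduler {
public:
    using Handler = std::function<void(int fd)>;

    // Associate a handler with a socket (step 4 of the claim).
    void set_background_handling(int fd, Handler h) {
        handlers_[fd] = std::move(h);
    }

    // Dispatch the handler for a readable socket (step 5's wakeup).
    void dispatch(int readable_fd) {
        auto it = handlers_.find(readable_fd);
        if (it != handlers_.end()) it->second(readable_fd);
    }

private:
    std::map<int, Handler> handlers_;
};
```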
5. The integrated display system graphics processing method based on an embedded platform according to claim 1, characterized in that: the video output processing module of the integrated display system graphics processing unit comprises a VPSS module, a management module, a VO module, a VENC module, an SVP module, a VI module, a VDEC module and an AVS module, wherein the VPSS module is connected to each of the management module, the VO module, the VENC module, the SVP module, the VI module, the VDEC module and the AVS module.
6. The integrated display system graphics processing method based on an embedded platform according to claim 1, characterized in that: realizing real-time fusion and superposition of multi-channel video with the video input processing module of the integrated display system graphics processing unit comprises the following steps:
step 1, the VO module actively reads video and graphics data from the corresponding positions in memory and outputs them through the corresponding display device;
step 2, the channels of the high-definition video layer have scaling capability: the source image from the VI module, or decompressed by the VDEC decoder, is scaled by the VPSS module and then output to a channel of the VO module for display;
step 3, judging the size of the image output to the VO module channel: if it exceeds the size of the VO channel region, the VO module scales the image; otherwise the image is kept unchanged;
and step 4, binding the output channel of the VPSS module to the VENC module and the SVP intelligent processing module, and performing real-time encoding and intelligent-algorithm processing on the video.
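The module connections in these steps form a small bind graph: sources (VI, VDEC) feed the VPSS, and one VPSS output fans out to the display (VO), the encoder (VENC) and the intelligent-processing module (SVP). The sketch below models that graph with descriptive strings; the channel labels are illustrative, not HiSilicon MPP handles:

```cpp
#include <string>
#include <vector>

// One directed binding between a source module/channel and a sink.
struct Bind { std::string src; std::string dst; };

// Illustrative bind graph for the fusion-and-encode pipeline of steps 1-4.
std::vector<Bind> build_pipeline() {
    return {
        {"VI:chn0",   "VPSS:grp0"},  // captured source into VPSS (step 2)
        {"VDEC:chn0", "VPSS:grp1"},  // decoded network stream into VPSS
        {"VPSS:grp0", "VO:chn0"},    // scaled video to a display channel
        {"VPSS:grp1", "VO:chn1"},
        {"VPSS:grp0", "VENC:chn0"},  // step 4: real-time encoding path
        {"VPSS:grp0", "SVP:chn0"},   // step 4: intelligent-algorithm path
    };
}
```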
7. The integrated display system graphics processing method based on an embedded platform according to claim 1, characterized in that: realizing video dynamic-resolution adaptation with the bottom-layer driver and hardware comprises the following steps:
step 1, the bottom-layer driver captures the GPIO interrupt and reports it to the ARM processor firmware;
step 2, in the interrupt handler, the ARM processor firmware reads the video output information through the I2C interface;
step 3, the resolution and frame rate of the input video image are calculated from the line-valid signal, field-valid signal and pixel-clock signal that were read;
and step 4, the input channel is configured according to the video image parameters.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010771327.4A CN111935531A (en) | 2020-08-04 | 2020-08-04 | Integrated display system graph processing method based on embedded platform |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111935531A (en) | 2020-11-13
Family
ID=73306880
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112333445A (en) * | 2020-11-23 | 2021-02-05 | 武汉华中天纬测控有限公司 | Embedded high-definition video compression transmission system |
CN112383732A (en) * | 2020-12-09 | 2021-02-19 | 上海移远通信技术股份有限公司 | Signal transmission system and method with adaptive resolution |
CN112492247A (en) * | 2020-11-30 | 2021-03-12 | 天津津航计算技术研究所 | Video display design method based on LVDS input |
CN113163137A (en) * | 2021-04-29 | 2021-07-23 | 众立智能科技(深圳)有限公司 | Method and system for realizing multi-picture superposition of Haisi coding and decoding chip |
CN113965711A (en) * | 2021-09-28 | 2022-01-21 | 天津七所精密机电技术有限公司 | 4K video display control device and method based on domestic Haisi platform |
CN114199464A (en) * | 2021-12-09 | 2022-03-18 | 湖北久之洋信息科技有限公司 | SF6 gas leakage detection handheld equipment for realizing video display control |
CN117411978A (en) * | 2023-12-13 | 2024-01-16 | 北京拓目科技有限公司 | MVPS (mechanical vapor compression system) series video processing system and method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN202406198U (en) * | 2011-12-28 | 2012-08-29 | 湖南大学 | Caption overlaying system facing to real-time audio/video stream |
CN205105318U (en) * | 2015-09-30 | 2016-03-23 | 武汉钢铁(集团)公司 | Real -time video device of imparting knowledge to students |
CN111064906A (en) * | 2019-11-27 | 2020-04-24 | 北京计算机技术及应用研究所 | Domestic processor and domestic FPGA multi-path 4K high-definition video comprehensive display method |
Non-Patent Citations (2)
Title |
---|
FLAOTER: "HiSilicon MPP", https://blog.csdn.net/flaoter/article/details/92402685 * |
Lu Yun: "Design of a real-time H.265 stream transmission system based on Hi3516A", Microcomputer & Its Applications * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20201113 |