CN116723282B - Ultrahigh-definition-to-high-definition multi-machine intelligent video generation method - Google Patents

Ultrahigh-definition-to-high-definition multi-machine intelligent video generation method

Info

Publication number
CN116723282B
Authority
CN
China
Prior art keywords
definition
interest
video image
image
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310984168.XA
Other languages
Chinese (zh)
Other versions
CN116723282A (en)
Inventor
查晓辉
佘俊
杨超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Zhuoyuan Science & Technology Co ltd
Original Assignee
Chengdu Zhuoyuan Science & Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Zhuoyuan Science & Technology Co ltd filed Critical Chengdu Zhuoyuan Science & Technology Co ltd
Priority to CN202310984168.XA priority Critical patent/CN116723282B/en
Publication of CN116723282A publication Critical patent/CN116723282A/en
Application granted granted Critical
Publication of CN116723282B publication Critical patent/CN116723282B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 — Details of television systems
    • H04N5/222 — Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628 — Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • H04N5/265 — Mixing
    • H04N7/00 — Television systems
    • H04N7/01 — Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0117 — Conversion of standards involving conversion of the spatial resolution of the incoming video signal
    • H04N7/0125 — Conversion of standards, one of the standards being a high definition standard

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a system for ultra-high definition-to-high definition multi-machine intelligent video generation. In the method, a signal acquisition module acquires an ultra-high definition video signal and sends each frame of video image of the acquired signal to an ultra-high definition-to-high definition multi-machine intelligent selection module. For each selected region of interest, the selection module generates a corresponding high-definition video image module, preprocesses each frame of video image according to the regions of interest, uniformly scales the results, and caches them in the high-definition video image modules of the corresponding regions of interest. It then performs a consistency correction on the output delays of those modules and, once the correction is complete, outputs the video images in parallel. With the method and the device, the different pictures required for output can be selected from a single panoramic image.

Description

Ultrahigh-definition-to-high-definition multi-machine intelligent video generation method
Technical Field
The invention relates to the field of image processing, and in particular to a method and a system for ultra-high definition-to-high definition multi-machine intelligent video generation.
Background
Multi-camera field production typically requires several cameras to capture pictures from multiple angles, plus a switcher that supports multiple signals to switch between them. Such large systems are labor-intensive to operate and expensive, and the domestic development of ultra-high definition-to-high definition multi-camera switching systems in China has been limited by technical barriers and other factors, so domestic substitutes for high-end products are urgently needed.
At present, a traditional multi-camera broadcast system captures pictures from the different angles of several camera positions and needs a multi-signal switcher to switch between them; the image standard is high-definition 1080p/1080i, and this technology cannot produce multiple broadcast output pictures from a single camera position. The technical difficulties facing an ultra-high definition multi-camera broadcast-directing system lie in applying image-standard technology, video coding, network communication protocols, dual-stream technology, and live-broadcast technology within such a system. A 4K+AI ultra-high definition-to-high definition multi-camera intelligent video processor needs only one or two 4K cameras; field video production then requires neither complex camera placement and scene scheduling nor an additional switcher. This also marks the directing industry's development from the traditional equipment-filled directing room to light, portable directing: the reduced size and refined functionality make live broadcasting possible anytime and anywhere, and effectively reduce the burden of complicated operating procedures and inconvenient moves.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an ultrahigh-definition-to-high-definition multi-machine intelligent video generation method, which comprises the following steps:
step one, a signal acquisition module acquires an ultra-high definition video signal and sends each frame of video image of the acquired signal to the ultra-high definition-to-high definition multi-machine intelligent selection module;
step two, the ultra-high definition-to-high definition multi-machine intelligent selection module generates, for each selected region of interest, a corresponding high-definition video image module; it preprocesses each frame of video image according to the regions of interest, uniformly scales the results, and caches them in the high-definition video image modules of the corresponding regions of interest; it then performs a consistency correction on the output delays of those modules and, once the correction is complete, outputs the video images in parallel;
step three, obtain the currently output high-definition video image; if it is the whole image, go to step four; if it is a region-of-interest image, go to step five;
step four, mix the high-definition video image of the selected region of interest with the currently output high-definition video image, and switch from the whole image to the mixed region-of-interest high-definition video image;
step five, mix the current region-of-interest image with the whole image and switch to the whole image; then mix the high-definition video image of the selected region of interest with the whole image and switch from the whole image to it, completing the image switch.
Further, the step in which the ultra-high definition-to-high definition multi-machine intelligent selection module generates a corresponding high-definition video image module for each selected region of interest comprises:
the regions of interest comprise several partial image areas of the selected whole image and a selected moving target; the ultra-high definition-to-high definition multi-machine intelligent selection module generates a corresponding high-definition video image module for each selected partial image area and for the selected moving target, and each of these modules is communicatively connected to the whole-image video module inside the selection module.
Further, the step of preprocessing each frame of video image, uniformly scaling it, and caching it in the high-definition video image module of the corresponding region of interest comprises:
for each of the selected regions of interest, generating from every frame of video image of the ultra-high definition video signal a video image of that region according to its image position and image size, and caching it in the corresponding high-definition video image module;
for the selected moving target, combining a Kalman filtering algorithm with the Hungarian algorithm for association, so as to track the selected moving target and generate a tracking video image.
Further, the step of performing a consistency correction on the output delays of the high-definition video image modules of the regions of interest, and outputting the video images in parallel after the correction is complete, comprises:
a delay monitoring module acquires the transmission delay of the ultra-high definition video signal transmission line and the output delay of the high-definition video image module of each region of interest, obtains the difference between each module's output delay and the line's transmission delay, and corrects the delay difference of each parallel transmission line so that every difference falls within a set threshold range.
The invention also provides an ultra-high definition-to-high definition multi-machine intelligent video generation system, which comprises a signal acquisition module, a remote control module, an ultra-high definition-to-high definition multi-machine intelligent selection module, a picture monitoring module and a transmission delay control module;
the signal acquisition module, the remote control module, the picture monitoring module and the transmission delay control module are respectively connected with the ultrahigh-definition-to-high-definition multi-machine intelligent selection module.
The beneficial effects of the invention are as follows: by creatively using a single panoramic 4K camera, the invention replaces the past practice of multiple operators shooting with multiple high-definition cameras; the different pictures required are selected from the panoramic image and output simultaneously to the master control. Staffing and equipment costs are reduced, and operation is more convenient.
Drawings
FIG. 1 is a flow chart of the ultra-high definition-to-high definition multi-machine intelligent video generation method;
FIG. 2 is a schematic diagram of the ultra-high definition-to-high definition multi-machine intelligent video generation system;
FIG. 3 is a schematic diagram of the smart coding technology;
FIG. 4 is a schematic diagram of the Faster-RCNN network;
FIG. 5 is a diagram of the RPN network structure;
FIG. 6 is a diagram of the RCNN network structure;
FIG. 7 is a schematic diagram of target tracking;
FIG. 8 is the algorithm flow chart.
Detailed Description
The technical solution of the present invention will be described in further detail with reference to the accompanying drawings, but the scope of the present invention is not limited to the following description.
For the purpose of making the technical solution and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the particular embodiments described herein are illustrative only and are not intended to limit the invention, i.e., the embodiments described are merely some, but not all, of the embodiments of the invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention. It is noted that relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The features and capabilities of the present invention are described in further detail below in connection with the examples.
As shown in fig. 1, the method for generating the ultra-high definition-to-high definition multi-machine intelligent video comprises the following steps:
step one, a signal acquisition module acquires an ultra-high definition video signal and sends each frame of video image of the acquired signal to the ultra-high definition-to-high definition multi-machine intelligent selection module;
step two, the ultra-high definition-to-high definition multi-machine intelligent selection module generates, for each selected region of interest, a corresponding high-definition video image module; it preprocesses each frame of video image according to the regions of interest, uniformly scales the results, and caches them in the high-definition video image modules of the corresponding regions of interest; it then performs a consistency correction on the output delays of those modules and, once the correction is complete, outputs the video images in parallel;
step three, obtain the currently output high-definition video image; if it is the whole image, go to step four; if it is a region-of-interest image, go to step five;
step four, mix the high-definition video image of the selected region of interest with the currently output high-definition video image, and switch from the whole image to the mixed region-of-interest high-definition video image;
step five, mix the current region-of-interest image with the whole image and switch to the whole image; then mix the high-definition video image of the selected region of interest with the whole image and switch from the whole image to it, completing the image switch.
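Steps four and five describe a mix-then-switch transition between the whole image and a region-of-interest picture. As an illustrative, stdlib-only Python sketch (the function names and the per-pixel linear-blend formulation are assumptions, not the patent's implementation), the mixing phase can be modeled as a crossfade between the outgoing and incoming high-definition frames:

```python
# Hypothetical sketch of the "mix, then switch" transition: frames are
# lists of rows of (R, G, B) tuples; the blend is a per-pixel linear mix.

def blend_frames(frame_a, frame_b, alpha):
    """Per-pixel linear mix: alpha=0.0 yields frame_a, alpha=1.0 yields frame_b."""
    return [
        [tuple(round((1 - alpha) * pa + alpha * pb) for pa, pb in zip(px_a, px_b))
         for px_a, px_b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

def crossfade(frame_a, frame_b, steps=5):
    """Generate the intermediate frames of a transition from frame_a to frame_b;
    the last generated frame equals frame_b, completing the switch."""
    return [blend_frames(frame_a, frame_b, i / steps) for i in range(1, steps + 1)]
```

The same helper covers both directions of the switch (whole image to ROI and back), since only the roles of the two input frames change.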
The step in which the ultra-high definition-to-high definition multi-machine intelligent selection module generates a corresponding high-definition video image module for each selected region of interest comprises:
the regions of interest comprise several partial image areas of the selected whole image and a selected moving target; the ultra-high definition-to-high definition multi-machine intelligent selection module generates a corresponding high-definition video image module for each selected partial image area and for the selected moving target, and each of these modules is communicatively connected to the whole-image video module inside the selection module.
The step of preprocessing each frame of video image, uniformly scaling it, and caching it in the high-definition video image module of the corresponding region of interest comprises:
for each of the selected regions of interest, generating from every frame of video image of the ultra-high definition video signal a video image of that region according to its image position and image size, and caching it in the corresponding high-definition video image module;
for the selected moving target, combining a Kalman filtering algorithm with the Hungarian algorithm for association, so as to track the selected moving target and generate a tracking video image.
The step of performing a consistency correction on the output delays of the high-definition video image modules of the regions of interest, and outputting the video images in parallel after the correction is complete, comprises:
a delay monitoring module acquires the transmission delay of the ultra-high definition video signal transmission line and the output delay of the high-definition video image module of each region of interest, obtains the difference between each module's output delay and the line's transmission delay, and corrects the delay difference of each parallel transmission line so that every difference falls within a set threshold range.
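The delay consistency correction above can be sketched as follows. This is a hypothetical illustration (the function name, millisecond units, and the pad-to-the-slowest-path policy are assumptions not stated in the patent), showing how much extra buffering each parallel path would need so that all outputs align within the set threshold:

```python
def delay_corrections(line_delay_ms, output_delays_ms, threshold_ms=1.0):
    """For each ROI output path, return the extra buffering (ms) needed so
    its total delay matches the slowest path to within threshold_ms.

    line_delay_ms    -- transmission delay of the UHD signal line (assumed shared)
    output_delays_ms -- per-ROI output delays of the HD video image modules
    """
    totals = [line_delay_ms + d for d in output_delays_ms]
    target = max(totals)  # slowest path sets the common output time
    # Pad every path whose lag from the target exceeds the threshold.
    return [target - t if target - t > threshold_ms else 0.0 for t in totals]
```

Paths already within the threshold of the slowest one are left untouched, matching the "within a set difference threshold range" criterion.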
As shown in fig. 2, the ultra-high definition-to-high definition multi-machine intelligent video generating system applies the ultra-high definition-to-high definition multi-machine intelligent video generating method, and comprises a signal acquisition module, a remote control module, an ultra-high definition-to-high definition multi-machine intelligent selection module, a picture monitoring module and a transmission delay control module;
the signal acquisition module, the remote control module, the picture monitoring module and the transmission delay control module are respectively connected with the ultrahigh-definition-to-high-definition multi-machine intelligent selection module.
Specifically, the technology focuses on video format analysis, video AI algorithms, automatic tracking, a special-purpose ASIC chip, and key 4K ultra-high definition technologies, and is realized through the intelligent image analysis of an ultra-high definition processor, the operating mode of a remote controller, and virtual multi-camera-position picture output. An ultra-high definition-to-high definition multi-machine intelligent video processor is developed, and the industrialization of multi-camera-position broadcast directing is gradually realized at a later stage. It mainly realizes the following functions: 4K monitoring output; virtual camera-position operation; camera control; slow-motion live switching; signal quality and performance metrics; AI-assisted picture selection; automatic tracking and overlay; and remote AI auto-tracking-assisted picture selection and output through an external remote control panel.
(2) Video format technology — focusing on the smart ROI coding technology, as shown in fig. 3.
Smart ROI (region-of-interest) video coding controls quantization parameters according to the different priority levels of regions in the image, allocating bit rate across those levels and effectively balancing coding bit rate against picture quality. Perceptual coding and optimization strategies for various scenarios are also derived from this technique. In a static scene, large same-color pixel areas that remain unchanged as a whole are merged; in a dynamic scene, based on the changes between adjacent pictures, the regions of interest and non-interest are separated and coded with different coding modes before being integrated. When the environment does not change, the background model is fixed for a period of time and only needs to be extracted once rather than repeatedly coded; by transmitting only the areas that change, the intelligent image algorithm greatly reduces the amount of data to process and shortens coding time. With intelligent analysis added, the smart coding technology adopts an advanced scene-adaptive rate-control algorithm to achieve higher coding efficiency than H.265/H.264: the bit rate can be reduced by more than 30% in the daytime and more than 70% at night while still providing high-quality video, markedly cutting storage and transmission costs.
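As a hedged illustration of the ROI-based rate allocation described above (the macroblock granularity, the QP values, and the function names here are assumptions for illustration, not the patent's actual coding parameters), a per-macroblock quantization-parameter map might be built like this: macroblocks inside a region of interest receive a lower QP, i.e. finer quantization and more bits:

```python
def qp_map(width_mb, height_mb, roi_rects, base_qp=32, roi_delta=-6):
    """Build a per-macroblock QP map for a frame of width_mb x height_mb
    macroblocks. Macroblocks inside any ROI rectangle (x, y, w, h, in
    macroblock units) get base_qp + roi_delta (a lower QP = higher quality)."""
    qmap = [[base_qp] * width_mb for _ in range(height_mb)]
    for (x, y, w, h) in roi_rects:
        for row in range(y, min(y + h, height_mb)):
            for col in range(x, min(x + w, width_mb)):
                qmap[row][col] = base_qp + roi_delta
    return qmap
```

A real encoder would feed such a map to its rate controller; here it only demonstrates how the bit-rate split between ROI and background levels can be expressed.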
(3) Video AI technology — video recognition algorithm, Faster-RCNN target detection. As shown in FIG. 4, the main task of stage 1 is to generate proposal boxes through the RPN network, and the main task of stage 2 is to classify and precisely locate those proposal boxes.
Stage 1: the original image is characterized by being extracted by a backup and a feature map is output. The backbone is vgg, downsampled by x 16 and followed by 512 filters output feature maps of 3 x 512. Each point on the Feature map acts as an anchor point and generates 3 scales and 3 size anchor boxes in some cases, so each point processing on each Feature map is responsible for predicting the class and offset of 9-n anchors. Extracting 18 features for 18 filters of 1 x 512 connected to the feature map, and predicting the probability that 9 anchors are foreground or background; 36 filters of 1×1×512 were connected to the feature map, 36 features were extracted, and 4 coordinates of 9 anchors were predicted. 9 anchor boxes are allocated to each point of our feature map. To train classification and regression in an RPN network, each anchor needs to be labeled, i.e. the coordinates of the 1 st anchor box being 1 (foreground) or 0 (background) and the ground trunk of each anchor box are labeled. For a feature map of 60×40, the number of anchor boxes generated is 60×40×9=21.6k, and the frame out of bounds at the boundary is removed, and the rest is about 6k through nms processing. Finally, 128 foreground anchor boxes are selected through score sorting of the 9 th anchor box, and the 256 anchor boxes are utilized for PRN training. The selection conditions of the positive samples are as follows: a) Anchor box with maximum IOU with groundtrunk; b) An anchor box with an IOU of any groudtluth greater than 0.7. Satisfying either a) or b) is that one conditional cap can be selected as a positive sample. The selection conditions of the negative sample are as follows: the IOU with all groundtrunk is less than 0.3. Anchor boxes with IOU between 0.3-0.7 ignore not participate in training. After the training of the RPN network line, 256 proposals can be output from the original image through the RPN network line. As shown in fig. 5.
Stage 2: the original image is passed through the RPN network to produce a series of proposal boxes. These proposal boxes extract corresponding features on the feature map generated by the backhaul, and since the size of each proposalbox is different, the subsequent network lines are connected to the full connection layer, so that the output size of each proposal box is required to be constant, for this purpose, each proposal box is followed by a ropooling module to convert the output of each proposal box into 7×7512 and then the full connection layer is used for classification and coordinate regression of the proposal box. As shown in fig. 6.
(4) Automatic tracking technology — the task of target tracking is to associate target identities over time; the main problems it solves are target data association and matching. As shown in fig. 7, two targets are detected at time 1 and denoted target a and target b; two targets are detected at time 2 and denoted target a' and target b'. Tracking means determining whether a' is the a or the b of the previous moment, and likewise whether b' is a or b. Each target is thus assigned a unique id; the same target keeps the same id, and over time the same targets are associated in space, each forming a track, upon which business functions can be applied and analyzed.
A multi-target tracking algorithm is adopted: based on the target detection results, a Kalman filtering algorithm is combined with the Hungarian algorithm to associate targets across frames and realize tracking. The algorithm flow is shown in fig. 8. First, create the corresponding Tracks from the detections of the first frame, initialize the Kalman filter, and predict the next frame's Tracks from this frame's Tracks. Second, perform IoU matching between the detections of the current frame and the Tracks predicted from the previous frame, obtaining a matching cost matrix. Third, using the cost matrix, the Hungarian algorithm matches all target detection boxes of the current frame with the tracks predicted from the previous frame; there are three kinds of matching result: a detection box matched with a track box (Matched Tracks), a detection box matched to no track (Unmatched Detections), and a track box matched to no detection box (Unmatched Tracks). Fourth, update the Kalman filter for the Matched Tracks and predict the next frame's Tracks; allocate new Tracks for the Unmatched Detections and initialize Kalman filtering to predict their next frame's Tracks; delete the Unmatched Tracks directly. Fifth, repeat steps two through four until the video ends.
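One association step of the loop above can be sketched as follows. Note a deliberate simplification: the patent specifies the Hungarian algorithm on the IoU cost matrix, while this stdlib-only sketch substitutes a greedy best-first IoU match; the function names and the 0.3 IoU gate are illustrative assumptions:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def match_detections(tracks, detections, iou_thr=0.3):
    """Pair predicted track boxes with current-frame detections by IoU.
    Returns (matched pairs, unmatched track indices, unmatched detection
    indices) — the three outcomes named in the text."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)), reverse=True)
    matched, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_thr:
            break  # remaining pairs overlap too little to associate
        if ti not in used_t and di not in used_d:
            matched.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    unmatched_tracks = [i for i in range(len(tracks)) if i not in used_t]
    unmatched_dets = [i for i in range(len(detections)) if i not in used_d]
    return matched, unmatched_tracks, unmatched_dets
```

The caller would then update the Kalman filter for each matched pair, open new tracks for the unmatched detections, and delete the unmatched tracks, as the flow describes.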
(5) Special-purpose ASIC chip — mainly researching a high-performance ASIC integrated circuit.
An ASIC chip highly integrates system, circuit and process into a high-performance integrated circuit. The ASIC chip is designed as a pure digital logic circuit, which helps reduce chip area; with a smaller die, a wafer of the same specification can be cut into a larger number of chips, helping to reduce wafer cost. Within the same power envelope, an ASIC chip achieves higher performance and can meet the energy-consumption limits of new intelligent products.
(6) 4K ultra-high definition technology — mainly researching image preprocessing at ultra-high definition resolution. Sorting out each image and submitting it to the recognition module for recognition is a process called image preprocessing. In image analysis, image quality directly influences the design and accuracy of the recognition algorithm, so the input image is processed before feature extraction, segmentation and matching. The system mainly uses two core image preprocessing techniques: image ROI and scaling (from image geometric transformation).
Image ROI refers to the region of interest in image analysis. Once the ROI is found, the image to be processed shrinks from the original large image to a small image region, which reduces processing time and increases processing accuracy. Taking face recognition as an example, the face to be detected in the image is the region of interest (ROI). Extracting ROI crops is an indispensable operation in machine vision.
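Extracting an ROI crop reduces the working image exactly as described. In Python, with an image held as a list of pixel rows, the crop is a pair of slices; this minimal sketch (function name assumed) produces the small region that later stages work on instead of the full ultra-HD frame:

```python
def crop_roi(image, x, y, w, h):
    """Extract the region of interest at (x, y) with size w x h as a new,
    smaller image; the original frame is left untouched."""
    return [row[x:x + w] for row in image[y:y + h]]
```

A face detector's bounding box, for instance, would supply (x, y, w, h) here.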
Image scaling-refers to enlarging or reducing an image to obtain a new image. A scaling operation can be performed on an image either by a scale factor or by specifying the target width and height of the image. Enlarging an image effectively expands the image matrix, and reducing it effectively compresses the image matrix. Enlarging an image increases the image file size, and reducing it decreases the file size.
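The expand/compress behaviour of the image matrix can be seen in a nearest-neighbour scaling sketch. This is only an illustration; a real pipeline would typically call a library resize routine (e.g. OpenCV's) with a better interpolation mode:

```python
import numpy as np

def resize_nearest(image, out_h, out_w):
    """Nearest-neighbour scaling: each output pixel samples the source
    pixel whose coordinates map onto it. Enlarging replicates pixels
    (a larger matrix); shrinking drops pixels (a smaller matrix)."""
    in_h, in_w = image.shape[:2]
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return image[rows[:, None], cols]

src = np.array([[0, 1], [2, 3]], dtype=np.uint8)
up = resize_nearest(src, 4, 4)    # 2x2 -> 4x4: the matrix expands
down = resize_nearest(up, 2, 2)   # 4x4 -> 2x2: the matrix compresses
```

Enlarging then shrinking by the same factor with nearest-neighbour sampling recovers the original matrix here, which makes the expand/compress symmetry easy to verify.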
The foregoing is merely a preferred embodiment of the invention. It is to be understood that the invention is not limited to the forms disclosed herein and is not to be construed as excluding other embodiments; it is capable of use in various other combinations, modifications, and environments, and is capable of changes within the scope of the inventive concept, whether through the teachings above or through the skill or knowledge of the relevant art. Modifications and variations that do not depart from the spirit and scope of the invention are intended to fall within the scope of the appended claims.

Claims (3)

1. The ultra-high definition-to-high definition multi-machine intelligent video generation method is characterized by comprising the following steps of:
step one, a signal acquisition module acquires an ultra-high-definition video signal and sends each frame of video image of the acquired ultra-high-definition video signal to an ultra-high-definition-to-high-definition multi-machine intelligent selection module;
step two, the ultra-high-definition-to-high-definition multi-machine intelligent selection module generates a high-definition video image module corresponding to each selected region of interest, preprocesses each frame of video image according to the regions of interest, uniformly scales it, and caches it in the high-definition video image module of the corresponding region of interest; output-delay consistency correction is performed on the output delays of the high-definition video image modules of the corresponding regions of interest, and the video images are output in parallel after the output-delay consistency correction is completed;
step three, obtaining a currently output high-definition video image, if the currently output high-definition video image is the whole image, entering a step four, and if the currently output high-definition video image is the region-of-interest image, entering a step five;
step four, mixing the selected high-definition video image of the region of interest with the currently output high-definition video image, and switching the mixed high-definition video image of the region of interest from the whole image;
step five, after mixing the current region of interest image and the whole image, switching to the whole image, mixing the high-definition video image of the selected region of interest with the switched whole image, switching to the high-definition video image of the selected region of interest from the switched whole image, and completing image switching;
the ultra-high definition-to-high definition multi-machine intelligent selection module respectively generates a high definition video image module corresponding to the region of interest according to the selected region of interest, and comprises the following steps:
the regions of interest comprise a plurality of partial image areas in the selected whole image and a selected moving target; the ultra-high-definition-to-high-definition multi-machine intelligent selection module generates a high-definition video image module for each region of interest according to the selected partial image areas and the selected moving target, wherein the high-definition video image modules of the corresponding regions of interest are each in communication connection with the whole-image video module in the ultra-high-definition-to-high-definition multi-machine intelligent selection module.
2. The method for generating the ultra-high-definition-to-high-definition multi-machine intelligent video according to claim 1, wherein preprocessing each frame of video image, uniformly scaling it, and caching it in the high-definition video image module of the corresponding region of interest comprises:
according to the selected multiple regions of interest, generating a video image of the corresponding region of interest from each frame of video image of the ultra-high-definition video signal according to the image position and the image size of the region of interest, and caching the video image of the corresponding region of interest to a high-definition video image module of the corresponding region of interest;
and (3) for the selected moving target, adopting a Kalman filtering algorithm to correlate with a Hungary algorithm to realize tracking of the selected moving target and generate a tracking video image.
3. The method for generating the ultra-high definition to high definition multi-machine intelligent video according to claim 2, wherein the step of performing the output delay consistency correction on the output delays of the high definition video image modules corresponding to the regions of interest, and performing the parallel output of the video images after the completion of the output delay consistency correction comprises the steps of:
the delay monitoring module acquires the transmission delay of the ultra-high-definition video signal transmission line and the output delay of the high-definition video image module of each corresponding region of interest; it obtains the delay difference between the transmission delay of the ultra-high-definition video signal transmission line and the output delay of the high-definition video image module of each corresponding region of interest, and corrects the transmission delay of each parallel transmission line so that the difference between the output delay of the high-definition video image module of each corresponding region of interest and the transmission delay of the ultra-high-definition video signal transmission line falls within a set threshold range.
CN202310984168.XA 2023-08-07 2023-08-07 Ultrahigh-definition-to-high-definition multi-machine intelligent video generation method Active CN116723282B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310984168.XA CN116723282B (en) 2023-08-07 2023-08-07 Ultrahigh-definition-to-high-definition multi-machine intelligent video generation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310984168.XA CN116723282B (en) 2023-08-07 2023-08-07 Ultrahigh-definition-to-high-definition multi-machine intelligent video generation method

Publications (2)

Publication Number Publication Date
CN116723282A CN116723282A (en) 2023-09-08
CN116723282B true CN116723282B (en) 2023-10-20

Family

ID=87868228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310984168.XA Active CN116723282B (en) 2023-08-07 2023-08-07 Ultrahigh-definition-to-high-definition multi-machine intelligent video generation method

Country Status (1)

Country Link
CN (1) CN116723282B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104660951A (en) * 2015-01-21 2015-05-27 上海交通大学 Super-resolution amplification method of ultra-high definition video image converted from high definition video image
CN111311645A (en) * 2020-02-25 2020-06-19 四川新视创伟超高清科技有限公司 Ultrahigh-definition video cut target tracking and identifying method
CN111741274A (en) * 2020-08-25 2020-10-02 北京中联合超高清协同技术中心有限公司 Ultrahigh-definition video monitoring method supporting local amplification and roaming of picture
CN111757162A (en) * 2020-06-19 2020-10-09 广州博冠智能科技有限公司 High-definition video playing method, device, equipment and storage medium
CN111818312A (en) * 2020-08-25 2020-10-23 北京中联合超高清协同技术中心有限公司 Ultra-high-definition video monitoring conversion device and system with variable vision field
CN112104866A (en) * 2020-08-05 2020-12-18 成都卓元科技有限公司 8K video transmission mode
CN115225973A (en) * 2022-05-11 2022-10-21 北京广播电视台 Ultra-high-definition video playing interaction method, system, electronic equipment and storage medium
CN115379068A (en) * 2022-07-15 2022-11-22 惠州市德赛西威智能交通技术研究院有限公司 Multi-camera synchronization method and device
CN116320214A (en) * 2023-02-08 2023-06-23 四川新视创伟超高清科技有限公司 Virtual multi-machine application method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI547177B (en) * 2015-08-11 2016-08-21 晶睿通訊股份有限公司 Viewing Angle Switching Method and Camera Therefor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Discussion on technical requirements for 4K programs and production; Wang Pei; Modern Television Technology (现代电视技术) (08); full text *

Similar Documents

Publication Publication Date Title
Acharya et al. Real-time image-based parking occupancy detection using deep learning.
TWI677826B (en) License plate recognition system and method
CN102567727B (en) Method and device for replacing background target
CN104063883B (en) A kind of monitor video abstraction generating method being combined based on object and key frame
Chen et al. Surrounding vehicle detection using an FPGA panoramic camera and deep CNNs
CN100542303C (en) A kind of method for correcting multi-viewpoint vedio color
CN103973969A (en) Electronic device and image composition method thereof
CN112207821B (en) Target searching method of visual robot and robot
US20210248427A1 (en) Method and system of neural network object recognition for image processing
CN111462155B (en) Motion detection method, device, computer equipment and storage medium
CN115810025A (en) Indoor pedestrian positioning method and system based on UWB and vision
CN110555377A (en) pedestrian detection and tracking method based on fisheye camera overlook shooting
CN104363426A (en) Traffic video monitoring system and method with target associated in multiple cameras
CN112380923A (en) Intelligent autonomous visual navigation and target detection method based on multiple tasks
CN112884803B (en) Real-time intelligent monitoring target detection method and device based on DSP
Dahirou et al. Motion Detection and Object Detection: Yolo (You Only Look Once)
CN116723282B (en) Ultrahigh-definition-to-high-definition multi-machine intelligent video generation method
CN115601791B (en) Unsupervised pedestrian re-identification method based on multi-former and outlier sample re-distribution
CN110633641A (en) Intelligent security pedestrian detection method, system and device and storage medium
Lee et al. Sub-optimal camera selection in practical vision networks through shape approximation
CN114640785A (en) Site model updating method and system
CN110769258A (en) Image compression method and system for multi-semantic region of specific scene
CN101127118A (en) Target extraction method using dynamic projection as background
RU2788301C1 (en) Object recognition method in video surveillance system
CN115767040B (en) 360-degree panoramic monitoring automatic cruising method based on interactive continuous learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant