CN113099182B - Multi-window real-time scaling method based on airborne parallel processing architecture - Google Patents

Multi-window real-time scaling method based on airborne parallel processing architecture

Publication number: CN113099182B
Authority
CN
China
Prior art keywords: resolution, video image, camera, window, original
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202110378208.7A
Other languages: Chinese (zh)
Other versions: CN113099182A
Inventor
徐晓枫
刘国栋
庞澜
张卫国
范鹏程
侯军占
黄维东
何鹏
韩琪
贾子庆
张夏疆
宁路锐
吴英春
东栋
Current Assignee: Xian institute of Applied Optics (the listed assignees may be inaccurate)
Original Assignee: Xian institute of Applied Optics
Application filed by Xian institute of Applied Optics
Priority to CN202110378208.7A
Publication of CN113099182A
Application granted; publication of CN113099182B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing
    • H04N23/951 Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio


Abstract

The invention belongs to the technical field of airborne photoelectric reconnaissance and monitoring, and particularly relates to a multi-window real-time scaling method based on an airborne parallel processing architecture. The method realizes fast window roaming with real-time zooming, supports simultaneous zooming of independent windows for multiple users, improves the reconnaissance efficiency of a wide-coverage photoelectric monitoring system, and meets different types of reconnaissance requirements. The technical scheme enables fast zooming and roaming over massive image data, overcomes the limitations of data-link bandwidth and software/hardware processing capacity, meets the requirements of real-time and smooth observation, and completes information reconnaissance efficiently, quickly, and accurately. Multiple independent user windows can be zoomed, roamed, and tracked simultaneously, so that different observers can each concentrate on several target areas under rapidly changing battlefield conditions; this greatly improves the monitoring efficiency of a wide-area high-resolution photoelectric monitoring system, which can be applied in many fields such as urban combat, anti-terrorism, border monitoring, and aerial detection.

Description

Multi-window real-time scaling method based on airborne parallel processing architecture
Technical Field
The invention belongs to the technical field of airborne photoelectric reconnaissance and monitoring, and particularly relates to a multi-window real-time scaling method based on an airborne parallel processing architecture.
Background
Fields such as urban combat and anti-terrorism require persistent real-time dynamic monitoring of wide areas, with large-field-of-view, high-resolution, long-range imaging. Because of the size limits of conventional imaging devices, such optoelectronic systems usually tile multiple cameras to achieve high-resolution wide-area staring, and adopt a distributed parallel image-processing architecture.
As a result, the total pixel count of such an optoelectronic system can exceed one billion, and the data volume is huge. Analyzing these data in time, so as to exploit the strengths of a wide-area high-resolution photoelectric monitoring system (large observation range, high ground resolution, good real-time dynamic monitoring, and strong target positioning and tracking capability), places strict demands on data-transmission performance and on the software and hardware implementation of the data-analysis algorithms.
Because of data-link bandwidth limits, the full-resolution global video is difficult to transmit in real time, so the real-time quality and fluency of observation cannot meet requirements, and the intelligence situation may change between data acquisition and analysis. Moreover, an observer's attention cannot cover multiple targets across the whole field of view, which can cause missed targets and lost combat opportunities; an efficient and intelligent image-processing method is therefore a key technology in urgent need of a solution.
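To make the bandwidth limitation concrete, compare the raw data rate of the full-resolution global video against a single user window. This is a rough illustrative calculation; the frame rate, bit depth, and camera count below are assumptions, not figures from the patent:

```python
# Assumed parameters for illustration only: 8-bit mono pixels, 25 fps,
# N = 16 cameras of 5472 x 3648 each (the patent fixes only the camera
# resolution; frame rate, bit depth and N are not specified here).
FPS, BITS, N = 25, 8, 16

global_bits = N * 5472 * 3648 * BITS * FPS   # full-resolution global video, bits/s
window_bits = 640 * 480 * BITS * FPS         # one 640 x 480 user window, bits/s

print(round(global_bits / 1e9, 1))  # 63.9 (Gbit/s, uncompressed)
print(round(window_bits / 1e6, 1))  # 61.4 (Mbit/s, uncompressed)
```

Even before compression, a single window is roughly three orders of magnitude cheaper than the global stream, which is what makes the window-retrieval scheme described below workable over a constrained data link.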
Disclosure of Invention
Technical problem to be solved
The technical problem to be solved by the invention is how to improve the reconnaissance efficiency of a wide-coverage photoelectric monitoring system and meet different types of reconnaissance requirements.
(II) technical scheme
To solve this technical problem, the invention provides a multi-window real-time scaling method based on an airborne parallel processing architecture, comprising the following steps:
Step one: build a wide-area high-resolution photoelectric monitoring system that adopts a camera parallel processing architecture. The photoelectric monitoring system comprises: N cameras with a resolution of 5472 × 3648 pixels, N image processing components, a 10-gigabit Ethernet switch, a window scheduling component, and a ground station;
An image processing component is deployed behind each camera, and the cameras and image processing components are numbered 1, 2, …, N. Each image processing component comprises: a copying unit, a scaling unit, a storage unit, and a data retrieval unit;
Step two: after the camera captures a video image, the copying unit replicates it into four copies, i.e., the original video image is split into four paths and sent to the scaling unit;
Step three: the scaling unit scales the four paths into four resolution layers: the original resolution, 1/4 of the original resolution, 1/16 of the original resolution, and 1/64 of the original resolution;
Step four: the storage unit allocates four shared memory spaces and stores the four resolution layers of video images in them respectively;
Step five: the ground station sends a window retrieval request; the data retrieval unit fetches the corresponding video image from the storage unit according to the request parameters and sends it to the window scheduling unit;
Step six: the window scheduling unit packs the data into complete window data, then compresses and transmits all window data;
Step seven: after the ground station receives the compressed video stream, the user software extracts the window data it needs, decompresses it, and displays it.
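The per-camera pipeline of steps two to four can be sketched as follows. This is a minimal illustration, not the patent's implementation: the captured frame is simulated with a blank NumPy array, plain pixel decimation stands in for the unspecified scaling method, and all names are illustrative.

```python
import numpy as np

ORIG_W, ORIG_H = 5472, 3648  # camera resolution from step one

def build_resolution_layers(frame):
    """Replicate the frame into four paths (step two) and scale each path
    into one resolution layer (step three): full, 1/4, 1/16 and 1/64 of
    the original pixel count, i.e. length and width divided by 1, 2, 4, 8."""
    paths = [frame.copy() for _ in range(4)]   # copying unit: four paths
    factors = [1, 2, 4, 8]                     # per-axis divisor per layer
    # Decimation by slicing is a stand-in for the unspecified scaler.
    return {level + 1: path[::f, ::f]
            for level, (path, f) in enumerate(zip(paths, factors))}

frame = np.zeros((ORIG_H, ORIG_W), dtype=np.uint8)  # simulated capture
layers = build_resolution_layers(frame)             # step four stores these
for level in sorted(layers):
    print(level, layers[level].shape)  # (rows, cols) of each stored layer
```

Layer 4 shrinks each camera image to 684 × 456 pixels, fewer rows than a 480-row window, which is why case 4 of step five always involves a vertically adjacent camera.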
In step one, the camera is a 5472 × 3648-pixel camera.
In step one, N is an even number not less than 4.
The cameras are arranged in a stitched array, with the edges of their fields of view partially overlapping, so a certain stitching loss exists.
In step one, a larger coverage area can be formed by increasing the number of cameras, according to the video-image acquisition requirements.
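The background puts the total pixel count of such a system in the billions; it grows linearly with the number of cameras. A quick illustrative calculation (the values of N below are examples, not from the patent):

```python
pixels_per_camera = 5472 * 3648        # one camera, from step one
for n in (4, 16, 64):                  # N must be an even number >= 4
    total = n * pixels_per_camera
    print(n, round(total / 1e9, 2))    # total sensor pixels, in gigapixels
```

At roughly 20 megapixels per camera, an array of a few dozen cameras already crosses the gigapixel mark.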
In step two, the camera captures the original video image in the required pixel format, resolution, and frame rate.
In step two, the camera captures video data at horizontal resolution x = 5472 and vertical resolution y = 3648, in the required pixel format and at the required frame rate.
In step three, the original-resolution video image is the original video image with length and width unchanged; the 1/4-resolution video image has length and width reduced to 1/2 of the original; the 1/16-resolution video image has length and width reduced to 1/4 of the original; the 1/64-resolution video image has length and width reduced to 1/8 of the original. The four layers of video images are numbered layers 1 to 4.
In step five, suppose a single ground-station user requests a 640 × 480-resolution window from the data retrieval unit. The request parameters include the number of the camera containing the video the user needs to observe, the resolution layer, and the starting pixel coordinates (j, k) of the window within that camera's video; the pixel at the upper-left corner of the camera video is defined as (0, 0), with coordinates increasing to the right and downward.
1) If the retrieval layer is layer 1, i.e., the window is cropped from the original-resolution video image:
if x ≥ j + 640 and y ≥ k + 480, i.e., the requested window lies within the field of view of one camera, a 640 × 480 video image is cropped from pixel position (j, k) of this camera's original-resolution video image;
if x < j + 640 and y ≥ k + 480, i.e., the requested window spans the fields of view of two cameras, an (x - j) × 480 video image is cropped from pixel position (j, k) of this camera's original-resolution video image, and a (640 - x + j) × 480 video image is cropped from pixel position (0, k) of the horizontally adjacent camera's original-resolution video image;
if x < j + 640 and y < k + 480, i.e., the requested window spans the fields of view of four cameras, an (x - j) × (y - k) video image is cropped from pixel position (j, k) of this camera's original-resolution video image; a (640 - x + j) × (y - k) video image is cropped from pixel position (0, k) of the horizontally adjacent camera's original-resolution video image; an (x - j) × (480 - y + k) video image is cropped from pixel position (j, 0) of the vertically adjacent camera's original-resolution video image; and a (640 - x + j) × (480 - y + k) video image is cropped from pixel position (0, 0) of the diagonally adjacent camera's original-resolution video image;
2) If the retrieval layer is layer 2, i.e., the window is cropped from the 1/4-resolution video image:
if x/2 ≥ j + 640 and y/2 ≥ k + 480, i.e., the requested window lies within the field of view of one camera, a 640 × 480 video image is cropped from pixel position (j, k) of this camera's 1/4-resolution video image;
if x/2 < j + 640 and y/2 ≥ k + 480, i.e., the requested window spans the fields of view of two cameras, an (x/2 - j) × 480 video image is cropped from pixel position (j, k) of this camera's 1/4-resolution video image, and a (640 - x/2 + j) × 480 video image is cropped from pixel position (0, k) of the horizontally adjacent camera's 1/4-resolution video image;
if x/2 < j + 640 and y/2 < k + 480, i.e., the requested window spans the fields of view of four cameras, an (x/2 - j) × (y/2 - k) video image is cropped from pixel position (j, k) of this camera's 1/4-resolution video image; a (640 - x/2 + j) × (y/2 - k) video image is cropped from pixel position (0, k) of the horizontally adjacent camera's 1/4-resolution video image; an (x/2 - j) × (480 - y/2 + k) video image is cropped from pixel position (j, 0) of the vertically adjacent camera's 1/4-resolution video image; and a (640 - x/2 + j) × (480 - y/2 + k) video image is cropped from pixel position (0, 0) of the diagonally adjacent camera's 1/4-resolution video image;
3) If the retrieval layer is layer 3, i.e., the window is cropped from the 1/16-resolution video image:
if x/4 ≥ j + 640 and y/4 ≥ k + 480, i.e., the requested window lies within the field of view of one camera, a 640 × 480 video image is cropped from pixel position (j, k) of this camera's 1/16-resolution video image;
if x/4 < j + 640 and y/4 ≥ k + 480, i.e., the requested window spans the fields of view of two cameras, an (x/4 - j) × 480 video image is cropped from pixel position (j, k) of this camera's 1/16-resolution video image, and a (640 - x/4 + j) × 480 video image is cropped from pixel position (0, k) of the horizontally adjacent camera's 1/16-resolution video image;
if x/4 < j + 640 and y/4 < k + 480, i.e., the requested window spans the fields of view of four cameras, an (x/4 - j) × (y/4 - k) video image is cropped from pixel position (j, k) of this camera's 1/16-resolution video image; a (640 - x/4 + j) × (y/4 - k) video image is cropped from pixel position (0, k) of the horizontally adjacent camera's 1/16-resolution video image; an (x/4 - j) × (480 - y/4 + k) video image is cropped from pixel position (j, 0) of the vertically adjacent camera's 1/16-resolution video image; and a (640 - x/4 + j) × (480 - y/4 + k) video image is cropped from pixel position (0, 0) of the diagonally adjacent camera's 1/16-resolution video image;
4) If the retrieval layer is layer 4, i.e., the window is cropped from the 1/64-resolution video image:
since y/8 = 456 < k + 480 always holds (k ≥ 0), the requested window always spans the fields of view of at least two vertically adjacent cameras;
if x/8 ≥ j + 640, a 640 × (y/8 - k) video image is cropped from pixel position (j, k) of this camera's 1/64-resolution video image, and a 640 × (480 - y/8 + k) video image is cropped from pixel position (j, 0) of the vertically adjacent camera's 1/64-resolution video image;
if x/8 < j + 640, i.e., the requested window spans the fields of view of four cameras, an (x/8 - j) × (y/8 - k) video image is cropped from pixel position (j, k) of this camera's 1/64-resolution video image; a (640 - x/8 + j) × (y/8 - k) video image is cropped from pixel position (0, k) of the horizontally adjacent camera's 1/64-resolution video image; an (x/8 - j) × (480 - y/8 + k) video image is cropped from pixel position (j, 0) of the vertically adjacent camera's 1/64-resolution video image; and a (640 - x/8 + j) × (480 - y/8 + k) video image is cropped from pixel position (0, 0) of the diagonally adjacent camera's 1/64-resolution video image;
The cropped video images are sent to the window scheduling unit. When multiple ground-station users request windows, each request is processed by the same steps.
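The four-layer case analysis of step five collapses into one rule: at retrieval layer L the per-axis divisor is f = 2^(L-1), each camera's layer image is (x/f) × (y/f), and whatever part of the 640 × 480 window does not fit inside this camera spills into the horizontal, vertical, or diagonal neighbour. The sketch below assumes this reading; the function name, the tuple encoding of each cropped piece, and the neighbour offsets are illustrative, not from the patent.

```python
WIN_W, WIN_H = 640, 480      # requested window size (step five)
CAM_W, CAM_H = 5472, 3648    # per-camera resolution (step one)

def window_pieces(layer, j, k):
    """Return the crops needed for a WIN_W x WIN_H window whose start
    pixel is (j, k) on the given resolution layer (1..4).  Each piece is
    (camera_offset, src_x, src_y, width, height), where camera_offset is
    (0, 0) for this camera, (1, 0) for the horizontal neighbour,
    (0, 1) for the vertical neighbour, and (1, 1) for the diagonal one."""
    f = 2 ** (layer - 1)               # per-axis divisor: 1, 2, 4, 8
    w, h = CAM_W // f, CAM_H // f      # this camera's layer image size
    right = min(WIN_W, w - j)          # columns available in this camera
    bottom = min(WIN_H, h - k)         # rows available in this camera
    pieces = [((0, 0), j, k, right, bottom)]
    if right < WIN_W:                  # spills into horizontal neighbour
        pieces.append(((1, 0), 0, k, WIN_W - right, bottom))
    if bottom < WIN_H:                 # spills into vertical neighbour
        pieces.append(((0, 1), j, 0, right, WIN_H - bottom))
    if right < WIN_W and bottom < WIN_H:   # diagonal neighbour
        pieces.append(((1, 1), 0, 0, WIN_W - right, WIN_H - bottom))
    return pieces

print(len(window_pieces(1, 5000, 3300)))  # 4: spans four cameras at layer 1
print(window_pieces(4, 0, 0))             # layer 4 always adds a vertical neighbour
```

Checking layer 1 with (j, k) = (5000, 3300) reproduces the four crops of the four-camera case above: 472 × 348 from this camera, then 168 × 348, 472 × 132, and 168 × 132 from the horizontal, vertical, and diagonal neighbours.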
In step six, after the video images converge at the window scheduling unit, if all pixels of a window requested by a single ground user come from the field of view of one camera, no stitching is needed; if the requested window's pixels span the pixel ranges of two or four cameras, the pieces obtained from the respective image processing components are stitched and packed into one window. Once the complete windows are obtained, the video images of every 4 user windows are packed into one data packet, compression-encoded by the compression unit, and the compressed video stream data is sent to the ground station.
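Step six's grouping of every 4 user windows into one compressed packet can be sketched as follows. The packet layout, pixel format, and codec are not specified by the patent; an 8-bit grayscale window and zlib are used here purely as placeholders for the compression unit.

```python
import zlib
import numpy as np

WIN_BYTES = 640 * 480  # one 8-bit grayscale window (illustrative format)

def pack_windows(windows):
    """Concatenate every 4 complete user windows into one payload and
    compress each payload into a packet (stand-in for the compression unit)."""
    packets = []
    for i in range(0, len(windows), 4):
        payload = b"".join(w.tobytes() for w in windows[i:i + 4])
        packets.append(zlib.compress(payload))
    return packets

windows = [np.zeros((480, 640), dtype=np.uint8) for _ in range(8)]
packets = pack_windows(windows)
print(len(packets))  # 2: eight user windows -> two packets
```

On the ground-station side, the user software would decompress a packet and slice its own window back out of the 4-window payload, mirroring step seven.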
(III) advantageous effects
Compared with the prior art, the invention provides a method for processing massive image data in a wide-area high-resolution photoelectric monitoring system with an airborne parallel processing architecture; it realizes fast window roaming with real-time zooming, supports simultaneous zooming of independent windows for multiple users, improves the reconnaissance efficiency of the wide-coverage photoelectric monitoring system, and meets different types of reconnaissance requirements.
The multi-window real-time scaling method based on the airborne parallel processing architecture provided by the technical scheme has the following advantages:
(1) Massive image data can be zoomed and roamed quickly, overcoming the limits of data-link bandwidth and software/hardware processing capacity, meeting the requirements of real-time and smooth observation, and completing information reconnaissance efficiently, quickly, and accurately.
(2) Independent user windows can be zoomed, roamed, and tracked simultaneously, so that different observers can each concentrate on several target areas under rapidly changing battlefield conditions; this greatly improves the monitoring efficiency of the wide-area high-resolution photoelectric monitoring system, which can be applied in many fields such as urban combat, anti-terrorism, border monitoring, and aerial detection.
Drawings
Fig. 1 is a schematic view of a camera array according to the present invention.
Fig. 2 is a system block diagram of the technical solution of the present invention.
Fig. 3 is a schematic diagram of video block splicing according to the technical solution of the present invention.
Fig. 4 is a schematic view of window packing according to the present invention.
Detailed Description
To make the objects, content, and advantages of the present invention clearer, embodiments of the invention are described in detail below with reference to the accompanying drawings and examples.
To solve the problems in the prior art, the present invention provides a multi-window real-time scaling method based on an airborne parallel processing architecture, as shown in figs. 1 to 4. Its steps are those set out under the Disclosure of Invention above; a concrete embodiment follows.
Example 1
The main task of this embodiment is to provide a method for processing massive video image data in a wide-area high-resolution photoelectric monitoring system with an airborne parallel processing architecture, realizing fast window roaming with real-time zooming and supporting simultaneous scaling of multiple independent windows; effective scheduling of the video data is thus the key technique of the invention.
As shown in fig. 1-4, the multi-window real-time scaling method based on the airborne parallel processing architecture of the present embodiment includes the following steps:
Step one: a wide-area high-resolution photoelectric monitoring system is built on a camera parallel processing architecture. It comprises N cameras with a resolution of 5472 × 3648 pixels (N is an even number, N ≥ 4), N video image processing components, a 10-gigabit broadband switch, a window scheduling component, and a ground station. One video image processing component is deployed behind each camera, and the cameras and components are numbered 1, 2, …, N. Each video image processing component consists of a copying unit, a scaling unit, a storage unit, and a data retrieval unit. The cameras are distributed in an array-splicing arrangement; the edges of their fields of view partially overlap, so a certain splicing loss exists. If desired, the number of cameras can be increased to cover a larger area.
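For illustration, the array-splicing layout in step one can be modeled as a row-major grid, from which the horizontally, vertically, and diagonally adjacent cameras used later for cross-camera window cropping follow directly. The grid shape and row-major numbering are assumptions; the patent only requires N even and N ≥ 4.

```python
# Hypothetical sketch of the camera-array topology: cameras are
# numbered 1..rows*cols in row-major order (an assumed convention).

def neighbors(cam, rows, cols):
    """Return the horizontal (right), vertical (below), and diagonal
    (below-right) neighbors of camera `cam`, or None where absent."""
    r, c = divmod(cam - 1, cols)
    right = cam + 1 if c + 1 < cols else None
    below = cam + cols if r + 1 < rows else None
    diag = cam + cols + 1 if (right and below) else None
    return right, below, diag

# 2 x 2 array (N = 4): camera 1 has neighbors 2 (right), 3 (below), 4 (diagonal)
```

A window that spills past a camera's right or bottom edge is completed from these neighbors, which is exactly the two-camera and four-camera cases of step five below.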
Step two: after the camera acquires video data at a horizontal resolution x = 5472 and a vertical resolution y = 3648, in the required pixel format and at the required frame rate, the copying unit duplicates the video data into four identical copies; that is, the original video is split into four paths and sent to the scaling unit.
Step three: the scaling unit scales each path of video to 1× the original resolution (length and width unchanged), 1/4 (length and width halved), 1/16 (length and width reduced to 1/4), and 1/64 (length and width reduced to 1/8), yielding four layers of video images at the original, 1/4-original, 1/16-original, and 1/64-original resolutions, numbered layers 1-4.
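A minimal sketch of the four-layer scaling in step three, using block averaging as the downscaling filter (the patent does not specify which filter the scaling unit uses):

```python
import numpy as np

def build_pyramid(frame):
    """Layers 1-4 of step three: full resolution, then length and width
    divided by 2, 4, and 8 (1/4, 1/16, 1/64 of the pixel count)."""
    layers = [frame]
    for d in (2, 4, 8):
        h, w = frame.shape[0] // d, frame.shape[1] // d
        # average each d x d block into one output pixel
        layers.append(frame[:h * d, :w * d].reshape(h, d, w, d).mean(axis=(1, 3)))
    return layers

frame = np.zeros((3648, 5472), dtype=np.float32)  # one 5472 x 3648 camera frame
layers = build_pyramid(frame)
# layer shapes: (3648, 5472), (1824, 2736), (912, 1368), (456, 684)
```

Note that the layer-4 height is 3648/8 = 456 < 480, which is why the level-4 retrieval in step five always involves a vertically adjacent camera.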
Step four: the storage unit allocates four shared memory spaces and stores the four resolution layers of video data in them respectively.
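Step four can be illustrated with Python's `multiprocessing.shared_memory`. The one-frame depth and 8-bit single-channel sizing are assumptions; the patent only states that four shared memory spaces are allocated, one per resolution layer.

```python
from multiprocessing import shared_memory

# Hedged sketch: one shared-memory block per layer, sized for a single
# 8-bit, single-channel 5472 x 3648 frame at that layer's scale.
W, H = 5472, 3648
sizes = {}
blocks = {}
for level, d in enumerate((1, 2, 4, 8), start=1):
    sizes[level] = (W // d) * (H // d)      # bytes per frame at this layer
    blocks[level] = shared_memory.SharedMemory(create=True, size=sizes[level])

# A writer would copy each scaled frame into blocks[level].buf; the data
# retrieval unit would read window crops out of the same buffer.
for shm in blocks.values():
    shm.close()
    shm.unlink()
```

The four layers together cost only 1 + 1/4 + 1/16 + 1/64 ≈ 1.33× the original frame's memory, which is what makes keeping all levels resident practical.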
Step five: a single ground-station user requests a window with a resolution of 640 × 480 from the data retrieval unit. The request parameters comprise the number of the camera containing the video the user wants to observe, the resolution level of that video, and the starting coordinates (j, k) of the window's pixel position within the camera video; the pixel position of the top-left corner of the camera video is defined as (0, 0), with coordinates increasing to the right and downward;
1) The retrieval level is level 1, i.e., the window is cropped from the original-resolution video image:
if x ≥ j+640 and y ≥ k+480, i.e., the requested window lies within the field of view of one camera, a video image of 640 × 480 resolution is cropped from the (j, k) pixel position of this camera's original-resolution video image;
if x < j+640 and y ≥ k+480, i.e., the requested window spans the fields of view of two cameras, a video image of (x-j) × 480 resolution is cropped from the (j, k) pixel position of this camera's original-resolution video image, and a video image of (640-x+j) × 480 resolution is cropped from the (0, k) pixel position of the horizontally adjacent camera's original-resolution video image;
if x < j+640 and y < k+480, i.e., the requested window spans the fields of view of four cameras, a video image of (x-j) × (y-k) resolution is cropped from the (j, k) pixel position of this camera's original-resolution video image; a video image of (640-x+j) × (y-k) resolution from the (0, k) pixel position of the horizontally adjacent camera's original-resolution video image; a video image of (x-j) × (480-y+k) resolution from the (j, 0) pixel position of the vertically adjacent camera's original-resolution video image; and a video image of (640-x+j) × (480-y+k) resolution from the (0, 0) pixel position of the diagonally adjacent camera's original-resolution video image;
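The three level-1 cases above collapse into one rule: the addressed camera contributes the part of the 640 × 480 window that fits inside its sensor, and any horizontal, vertical, or diagonal spill is taken from the corresponding neighbor starting at column 0 and/or row 0. A hedged sketch (function and key names are illustrative, not from the patent):

```python
def partition_level1(j, k, x=5472, y=3648, W=640, H=480):
    """Crop rectangles per camera for a level-1 window with origin
    (j, k), returned as {camera_role: (x0, y0, width, height)}."""
    w1 = min(W, x - j)        # width available in the addressed camera
    h1 = min(H, y - k)        # height available in the addressed camera
    crops = {"this": (j, k, w1, h1)}
    if w1 < W:                # spills right: horizontal neighbor
        crops["horizontal"] = (0, k, W - w1, h1)
    if h1 < H:                # spills down: vertical neighbor
        crops["vertical"] = (j, 0, w1, H - h1)
    if w1 < W and h1 < H:     # spills both ways: diagonal neighbor
        crops["diagonal"] = (0, 0, W - w1, H - h1)
    return crops
```

For example, a window at (5000, 3500) yields a 472 × 148 crop from the addressed camera plus 168 × 148, 472 × 332, and 168 × 332 crops from the horizontal, vertical, and diagonal neighbors, matching the four-camera formulas above.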
2) The retrieval level is level 2, i.e., the window is cropped from the 1/4-original-resolution video image:
if x/2 ≥ j+640 and y/2 ≥ k+480, i.e., the requested window lies within the field of view of one camera, a video image of 640 × 480 resolution is cropped from the (j, k) pixel position of this camera's 1/4-original-resolution video image.
If x/2 < j+640 and y/2 ≥ k+480, i.e., the requested window spans the fields of view of two cameras, a video image of (x/2-j) × 480 resolution is cropped from the (j, k) pixel position of this camera's 1/4-original-resolution video image, and a video image of (640-x/2+j) × 480 resolution is cropped from the (0, k) pixel position of the horizontally adjacent camera's 1/4-original-resolution video image.
If x/2 < j+640 and y/2 < k+480, i.e., the requested window spans the fields of view of four cameras, a video image of (x/2-j) × (y/2-k) resolution is cropped from the (j, k) pixel position of this camera's 1/4-original-resolution video image; a video image of (640-x/2+j) × (y/2-k) resolution from the (0, k) pixel position of the horizontally adjacent camera's 1/4-original-resolution video image; a video image of (x/2-j) × (480-y/2+k) resolution from the (j, 0) pixel position of the vertically adjacent camera's 1/4-original-resolution video image; and a video image of (640-x/2+j) × (480-y/2+k) resolution from the (0, 0) pixel position of the diagonally adjacent camera's 1/4-original-resolution video image.
3) The retrieval level is level 3, i.e., the window is cropped from the 1/16-original-resolution video image:
if x/4 ≥ j+640 and y/4 ≥ k+480, i.e., the requested window lies within the field of view of one camera, a video image of 640 × 480 resolution is cropped from the (j, k) pixel position of this camera's 1/16-original-resolution video image.
If x/4 < j+640 and y/4 ≥ k+480, i.e., the requested window spans the fields of view of two cameras, a video image of (x/4-j) × 480 resolution is cropped from the (j, k) pixel position of this camera's 1/16-original-resolution video image, and a video image of (640-x/4+j) × 480 resolution is cropped from the (0, k) pixel position of the horizontally adjacent camera's 1/16-original-resolution video image.
If x/4 < j+640 and y/4 < k+480, i.e., the requested window spans the fields of view of four cameras, a video image of (x/4-j) × (y/4-k) resolution is cropped from the (j, k) pixel position of this camera's 1/16-original-resolution video image; a video image of (640-x/4+j) × (y/4-k) resolution from the (0, k) pixel position of the horizontally adjacent camera's 1/16-original-resolution video image; a video image of (x/4-j) × (480-y/4+k) resolution from the (j, 0) pixel position of the vertically adjacent camera's 1/16-original-resolution video image; and a video image of (640-x/4+j) × (480-y/4+k) resolution from the (0, 0) pixel position of the diagonally adjacent camera's 1/16-original-resolution video image.
4) The retrieval level is level 4, i.e., the window is cropped from the 1/64-original-resolution video image; since y/8 = 456 < 480, the requested window always spans at least the fields of view of two vertically adjacent cameras.
If x/8 ≥ j+640, a video image of 640 × (y/8-k) resolution is cropped from the (j, k) pixel position of this camera's 1/64-original-resolution video image, and a video image of 640 × (480-y/8+k) resolution is cropped from the (j, 0) pixel position of the vertically adjacent camera's 1/64-original-resolution video image.
If x/8 < j+640, i.e., the requested window spans the fields of view of four cameras, a video image of (x/8-j) × (y/8-k) resolution is cropped from the (j, k) pixel position of this camera's 1/64-original-resolution video image; a video image of (640-x/8+j) × (y/8-k) resolution from the (0, k) pixel position of the horizontally adjacent camera's 1/64-original-resolution video image; a video image of (x/8-j) × (480-y/8+k) resolution from the (j, 0) pixel position of the vertically adjacent camera's 1/64-original-resolution video image; and a video image of (640-x/8+j) × (480-y/8+k) resolution from the (0, 0) pixel position of the diagonally adjacent camera's 1/64-original-resolution video image.
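Since levels 1-4 differ only in the divisor applied to the camera dimensions (1, 2, 4, 8), the whole retrieval rule of step five can be sketched as a single function. This is a sketch of the stated rule under assumed names, not the patent's literal implementation:

```python
def partition(level, j, k, x=5472, y=3648, W=640, H=480):
    """Crop rectangles per camera for retrieval levels 1-4; the level-L
    image has dimensions (x, y) divided by 2**(L-1)."""
    d = 2 ** (level - 1)
    xs, ys = x // d, y // d                  # scaled camera dimensions
    w1, h1 = min(W, xs - j), min(H, ys - k)  # part inside this camera
    crops = {"this": (j, k, w1, h1)}
    if w1 < W:
        crops["horizontal"] = (0, k, W - w1, h1)
    if h1 < H:
        crops["vertical"] = (j, 0, w1, H - h1)
    if w1 < W and h1 < H:
        crops["diagonal"] = (0, 0, W - w1, H - h1)
    return crops

# At level 4 the scaled height is 3648 // 8 = 456 < 480, so every request
# spills into the vertically adjacent camera, as the text notes.
```

For instance, `partition(4, 0, 0)` returns a 640 × 456 crop from the addressed camera plus a 640 × 24 crop from row 0 of the vertical neighbor, matching the 640 × (480-y/8+k) formula above.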
The resulting video data are sent to the window scheduling component. The processing steps of the video image processing components are the same when multiple ground-station users request windows.
Step six: after the video data are collected by the window scheduling component, if the window requested by a single ground user lies entirely within the field of view of one camera, the window's video image data need no stitching; if the requested window's pixels span the range of two or four cameras, the video data obtained from each processing component for that window are stitched and packed into one window. Once complete windows are obtained, the video data of every 4 user windows are packed into one data packet, compression-encoded by the compression unit, and the resulting compressed video stream is sent to the ground station.
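A hedged sketch of the per-packet grouping in step six: four complete user windows concatenated into one packet ahead of compression. The 2-byte window-id / 4-byte length header is an assumption, since the patent does not specify the packet format.

```python
import struct

def pack_windows(windows):
    """windows: list of exactly 4 (window_id, payload_bytes) tuples;
    returns one packet ready for the compression unit."""
    assert len(windows) == 4
    packet = bytearray()
    for wid, payload in windows:
        # assumed per-window header: big-endian u16 id + u32 byte length
        packet += struct.pack(">HI", wid, len(payload))
        packet += payload
    return bytes(packet)
```

On the ground-station side, the same header would let the user software pull out just the window it displays before decompressing, as in step seven.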
Step seven: after the ground station receives the compressed video data, the user software extracts the window data it needs, decompresses it, and displays it.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (9)

1. A multi-window real-time scaling method based on an airborne parallel processing architecture is characterized by comprising the following steps:
step one: constructing a wide-area high-resolution photoelectric monitoring system adopting a camera parallel processing architecture; the photoelectric monitoring system comprises: N cameras with a resolution of 5472 × 3648 pixels, N image processing components, a 10-gigabit broadband switch, a window scheduling component, and a ground station;
wherein one image processing component is arranged behind each camera, and the cameras and image processing components are numbered with serial numbers 1, 2, …, N; each image processing component comprises: a copying unit, a scaling unit, a storage unit, and a data retrieval unit;
step two: after the camera acquires a video image, the copying unit duplicates it into four copies, i.e., the original video image is split into four paths and sent to the scaling unit;
step three: the scaling unit scales each path of video into four layers of video images at the original, 1/4-original, 1/16-original, and 1/64-original resolutions respectively;
step four: the storage unit allocates four shared memory spaces and stores the four resolution layers of video images in them respectively;
step five: the ground station sends a window retrieval request, and the data retrieval unit acquires the corresponding video image from the storage unit according to the request parameters of the window retrieval request and sends it to the window scheduling component; the request parameters comprise the number of the camera containing the video the user needs to observe and the resolution level of that video;
step six: the window scheduling component packs the received data to form complete window data, then compresses and transmits all the window data;
step seven: after the ground station receives the compressed video image, the user software extracts the window data it needs, decompresses it, and displays it.
2. The method as claimed in claim 1, wherein in step one, N is an even number, and N ≧ 4.
3. The method as claimed in claim 1, wherein the cameras are distributed in an array splicing manner, the edges of the fields of view are partially overlapped, and certain splicing loss exists.
4. The method as claimed in claim 1, wherein in the step one, the number of cameras is increased to form a larger coverage area according to the video image capturing requirement.
5. The method according to claim 1, wherein in the second step, the camera captures the original video image according to the required pixel format, resolution and frame rate.
6. The method for multi-window real-time scaling based on airborne parallel processing architecture as claimed in claim 5, wherein in the second step, the camera acquires video data according to the horizontal resolution x =5472, the vertical resolution y =3648, and the required pixel format and frame rate.
7. The method as claimed in claim 6, wherein in step three, the original-resolution video image is the original video image with length and width unchanged; the 1/4-original-resolution video image has length and width reduced to 1/2 of the original; the 1/16-original-resolution video image has length and width reduced to 1/4 of the original; the 1/64-original-resolution video image has length and width reduced to 1/8 of the original; and the four layers of video images are numbered layers 1-4.
8. The method as claimed in claim 7, wherein in step five, a single ground-station user requests a window with a resolution of 640 × 480 from the data retrieval unit; the request parameters comprise the number of the camera containing the video the user needs to observe, the resolution level of that video, and the starting coordinates (j, k) of the window's pixel position within the camera video; the pixel position of the top-left corner of the camera video is defined as (0, 0), with coordinates increasing to the right and downward;
1) the retrieval level is level 1, i.e., the window is cropped from the original-resolution video image:
if x ≥ j+640 and y ≥ k+480, i.e., the requested window lies within the field of view of one camera, a video image of 640 × 480 resolution is cropped from the (j, k) pixel position of this camera's original-resolution video image;
if x < j+640 and y ≥ k+480, i.e., the requested window spans the fields of view of two cameras, a video image of (x-j) × 480 resolution is cropped from the (j, k) pixel position of this camera's original-resolution video image, and a video image of (640-x+j) × 480 resolution is cropped from the (0, k) pixel position of the horizontally adjacent camera's original-resolution video image;
if x < j+640 and y < k+480, i.e., the requested window spans the fields of view of four cameras, a video image of (x-j) × (y-k) resolution is cropped from the (j, k) pixel position of this camera's original-resolution video image; a video image of (640-x+j) × (y-k) resolution from the (0, k) pixel position of the horizontally adjacent camera's original-resolution video image; a video image of (x-j) × (480-y+k) resolution from the (j, 0) pixel position of the vertically adjacent camera's original-resolution video image; and a video image of (640-x+j) × (480-y+k) resolution from the (0, 0) pixel position of the diagonally adjacent camera's original-resolution video image;
2) the retrieval level is level 2, i.e., the window is cropped from the 1/4-original-resolution video image:
if x/2 ≥ j+640 and y/2 ≥ k+480, i.e., the requested window lies within the field of view of one camera, a video image of 640 × 480 resolution is cropped from the (j, k) pixel position of this camera's 1/4-original-resolution video image;
if x/2 < j+640 and y/2 ≥ k+480, i.e., the requested window spans the fields of view of two cameras, a video image of (x/2-j) × 480 resolution is cropped from the (j, k) pixel position of this camera's 1/4-original-resolution video image, and a video image of (640-x/2+j) × 480 resolution is cropped from the (0, k) pixel position of the horizontally adjacent camera's 1/4-original-resolution video image;
if x/2 < j+640 and y/2 < k+480, i.e., the requested window spans the fields of view of four cameras, a video image of (x/2-j) × (y/2-k) resolution is cropped from the (j, k) pixel position of this camera's 1/4-original-resolution video image; a video image of (640-x/2+j) × (y/2-k) resolution from the (0, k) pixel position of the horizontally adjacent camera's 1/4-original-resolution video image; a video image of (x/2-j) × (480-y/2+k) resolution from the (j, 0) pixel position of the vertically adjacent camera's 1/4-original-resolution video image; and a video image of (640-x/2+j) × (480-y/2+k) resolution from the (0, 0) pixel position of the diagonally adjacent camera's 1/4-original-resolution video image;
3) the retrieval level is level 3, i.e., the window is cropped from the 1/16-original-resolution video image:
if x/4 ≥ j+640 and y/4 ≥ k+480, i.e., the requested window lies within the field of view of one camera, a video image of 640 × 480 resolution is cropped from the (j, k) pixel position of this camera's 1/16-original-resolution video image;
if x/4 < j+640 and y/4 ≥ k+480, i.e., the requested window spans the fields of view of two cameras, a video image of (x/4-j) × 480 resolution is cropped from the (j, k) pixel position of this camera's 1/16-original-resolution video image, and a video image of (640-x/4+j) × 480 resolution is cropped from the (0, k) pixel position of the horizontally adjacent camera's 1/16-original-resolution video image;
if x/4 < j+640 and y/4 < k+480, i.e., the requested window spans the fields of view of four cameras, a video image of (x/4-j) × (y/4-k) resolution is cropped from the (j, k) pixel position of this camera's 1/16-original-resolution video image; a video image of (640-x/4+j) × (y/4-k) resolution from the (0, k) pixel position of the horizontally adjacent camera's 1/16-original-resolution video image; a video image of (x/4-j) × (480-y/4+k) resolution from the (j, 0) pixel position of the vertically adjacent camera's 1/16-original-resolution video image; and a video image of (640-x/4+j) × (480-y/4+k) resolution from the (0, 0) pixel position of the diagonally adjacent camera's 1/16-original-resolution video image;
4) the retrieval level is level 4, i.e., the window is cropped from the 1/64-original-resolution video image;
since y/8 < 480, the requested window always spans at least the fields of view of two vertically adjacent cameras;
if x/8 ≥ j+640, a video image of 640 × (y/8-k) resolution is cropped from the (j, k) pixel position of this camera's 1/64-original-resolution video image, and a video image of 640 × (480-y/8+k) resolution is cropped from the (j, 0) pixel position of the vertically adjacent camera's 1/64-original-resolution video image;
if x/8 < j+640, i.e., the requested window spans the fields of view of four cameras, a video image of (x/8-j) × (y/8-k) resolution is cropped from the (j, k) pixel position of this camera's 1/64-original-resolution video image; a video image of (640-x/8+j) × (y/8-k) resolution from the (0, k) pixel position of the horizontally adjacent camera's 1/64-original-resolution video image; a video image of (x/8-j) × (480-y/8+k) resolution from the (j, 0) pixel position of the vertically adjacent camera's 1/64-original-resolution video image; and a video image of (640-x/8+j) × (480-y/8+k) resolution from the (0, 0) pixel position of the diagonally adjacent camera's 1/64-original-resolution video image;
the obtained video images are sent to the window scheduling component; the processing steps are the same when multiple ground-station users request windows.
9. The method as claimed in claim 1, wherein in step six, after the video images are collected by the window scheduling component: if the window requested by a single ground user lies entirely within the field of view of one camera, the window's video images need no stitching; if the requested window's pixels span the range of two or four cameras, the video images obtained from each image processing component for that window are stitched and packed into one window; and once complete windows are obtained, the video images of every 4 user windows are packed into one data packet, compression-encoded by the compression unit, and the compressed video stream data are sent to the ground station.
CN202110378208.7A 2021-04-08 2021-04-08 Multi-window real-time scaling method based on airborne parallel processing architecture Active CN113099182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110378208.7A CN113099182B (en) 2021-04-08 2021-04-08 Multi-window real-time scaling method based on airborne parallel processing architecture


Publications (2)

Publication Number Publication Date
CN113099182A CN113099182A (en) 2021-07-09
CN113099182B true CN113099182B (en) 2022-11-22

Family

ID=76675469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110378208.7A Active CN113099182B (en) 2021-04-08 2021-04-08 Multi-window real-time scaling method based on airborne parallel processing architecture

Country Status (1)

Country Link
CN (1) CN113099182B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2318225C2 (en) * 2005-11-30 2008-02-27 Федеральное Государственное Унитарное Предприятие "Государственный оптический институт им. С.И. Вавилова" Airborne multi-spectral system
CN101421747A (en) * 2006-02-13 2009-04-29 索尼株式会社 System and method to combine multiple video streams
TW200939779A (en) * 2008-02-28 2009-09-16 Videoiq Inc Intelligent high resolution video system
CN102428492A (en) * 2009-05-13 2012-04-25 皇家飞利浦电子股份有限公司 A display apparatus and a method therefor
CN102801963A (en) * 2012-08-27 2012-11-28 北京尚易德科技有限公司 Electronic PTZ method and device based on high-definition digital camera monitoring
CN104317968A (en) * 2014-11-18 2015-01-28 苏州科达科技股份有限公司 Page self-adaptive adjusting method and system
CN105009057A (en) * 2013-02-20 2015-10-28 谷歌公司 Intelligent window placement with multiple windows using high DPI screens
CN107005648A (en) * 2015-07-30 2017-08-01 深圳市大疆创新科技有限公司 A kind of method, control device and the control system that control mobile device to shoot
CN110992260A (en) * 2019-10-15 2020-04-10 网宿科技股份有限公司 Method and device for reconstructing video super-resolution
CN112130667A (en) * 2020-09-25 2020-12-25 深圳市佳创视讯技术股份有限公司 Interaction method and system for ultra-high definition VR (virtual reality) video
CN112511896A (en) * 2020-11-05 2021-03-16 浙江大华技术股份有限公司 Video rendering method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6075926A (en) * 1997-04-21 2000-06-13 Hewlett-Packard Company Computerized method for improving data resolution
US7995652B2 (en) * 2003-03-20 2011-08-09 Utc Fire & Security Americas Corporation, Inc. Systems and methods for multi-stream image processing
US20150253974A1 (en) * 2014-03-07 2015-09-10 Sony Corporation Control of large screen display using wireless portable computer interfacing with display controller
CN110659571B (en) * 2019-08-22 2023-09-15 杭州电子科技大学 Streaming video face detection acceleration method based on frame buffer queue

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multi-stereo Matching for Light Field Camera Arrays;Ségolène Rogge;《2018 26th European Signal Processing Conference (EUSIPCO)》;20181203;全文 *
Application research of a multi-user multi-window airborne photoelectric surveillance system in urban counter-terrorism; Liu Guodong; Flight Control & Detection (《飞控与探测》); 20200717; full text *


Similar Documents

Publication Publication Date Title
CN109348119B (en) Panoramic monitoring system
CN103795976B (en) A kind of full-time empty 3 d visualization method
CN102148965B (en) Video monitoring system for multi-target tracking close-up shooting
US20170019605A1 (en) Multiple View and Multiple Object Processing in Wide-Angle Video Camera
CN101421747B (en) System and method to combine multiple video streams
US7542047B2 (en) Multi-dimensional texture drawing apparatus, compressing apparatus, drawing system, drawing method, and drawing program
CN102510474B (en) 360-degree panorama monitoring system
CN105554450B (en) Distributed video panorama display system
US20130021434A1 (en) Method and System of Simultaneously Displaying Multiple Views for Video Surveillance
US20040075738A1 (en) Spherical surveillance system architecture
CN102801963B (en) Electronic PTZ method and device based on high-definition digital camera monitoring
CN106534789A (en) Integrated intelligent security and protection video monitoring system
WO2012082127A1 (en) Imaging system for immersive surveillance
JP2010504711A (en) Video surveillance system and method for tracking moving objects in a geospatial model
CN109040601B (en) Multi-scale unstructured billion pixel VR panoramic photography system
US9418299B2 (en) Surveillance process and apparatus
CN107846623B (en) Video linkage method and system
CN109636763B (en) Intelligent compound eye monitoring system
CN110087032A (en) A kind of panorama type tunnel video monitoring devices and method
CN107018386A (en) A kind of video flowing multiresolution observation system
CN202841372U (en) Distribution type full-view monitoring system
CN114067235A (en) Data processing system and method based on cloud edge
CN112422909A (en) Video behavior analysis management system based on artificial intelligence
CN112040140A (en) Wide-view-field high-resolution hybrid imaging device based on light field
CN113099182B (en) Multi-window real-time scaling method based on airborne parallel processing architecture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant