CN111355928A - Video stitching method and system based on multi-camera content analysis - Google Patents

Video stitching method and system based on multi-camera content analysis

Info

Publication number
CN111355928A
Authority
CN
China
Prior art keywords
image
module
video
information
feature point
Prior art date
Legal status
Pending
Application number
CN202010126853.5A
Other languages
Chinese (zh)
Inventor
孙凯
李锐
金长新
Current Assignee
Jinan Inspur Hi Tech Investment and Development Co Ltd
Original Assignee
Jinan Inspur Hi Tech Investment and Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Jinan Inspur Hi Tech Investment and Development Co Ltd
2020-02-28: Priority to CN202010126853.5A
2020-06-30: Publication of CN111355928A
Legal status: Pending (current)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265: Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a video stitching method and system based on multi-camera content analysis, belonging to the fields of industrial protection, social monitoring, and community security. The system comprises a feature point module, an image homography matrix calculation module, an image fusion and cutting module, and a foreground matching module. The method and system quickly and efficiently remove the overlapping regions in multi-angle video to obtain the final image and video information, and use a single-frame feature point matching technique to analyze and process the content of multi-scene, multi-camera video. This saves storage space for the resulting image and video information, reduces information storage cost, reduces the image and video processing workload, and improves processing efficiency while preserving the integrity of the video content and the video quality.

Description

Video stitching method and system based on multi-camera content analysis
Technical Field
The invention discloses a video stitching method and system based on multi-camera content analysis, and relates to the technical fields of industrial protection, social monitoring, and community security.
Background
In recent years, science and technology have developed rapidly, and large numbers of surveillance cameras have been deployed in industry, public security, stadiums, and similar settings, including infrared cameras, monocular wide-angle cameras, binocular stereo cameras, binocular image-stitching cameras, and multi-lens panoramic stitching cameras. These cameras generate a huge volume of video every day, but most of it is repetitive and invalid, so the video processing workload becomes very large: when the relevant departments retrieve surveillance footage of key operations, they must sift through large amounts of invalid video, which seriously reduces efficiency and also increases the storage costs of enterprises and related organizations. Nevertheless, multi-camera deployment is very common; in relatively private places such as vehicle-mounted systems, garages, and theater venues, multiple cameras are routinely installed and adapted to the different scene types so that the required image and video information can be captured. Deploying many cameras meets the needs of multiple scenes without worry about lost or missing information, since the complete image and video information can be obtained simply by integrating the footage stored by the individual cameras; multiple cameras are therefore necessary.
However, multi-camera operation has an obvious disadvantage: much of the content is duplicated. The repeated image and video content therefore needs to be analyzed, the repeated and invalid information removed, the required image information stitched together with an appropriate technique, and the finally required video output. This not only saves storage space and storage cost, but also reduces the amount of computation required for image and video processing.
The image content analysis method in common use today is stitching and synthesis from multi-camera imaging. Its main process is to extract feature points from the multiple videos and then match and stitch them. However, this technique generally relies on pixel-level comparison over every image frame from the different cameras, which consumes considerable computing power and involves a relatively complex calculation process. The prior art therefore lacks an image content analysis method that can quickly compute the overlapping region and quickly stitch the images.
At present, the technical means for content analysis of multi-scene cameras are immature, and the scenes involved are relatively complex, which places very strict requirements on stitching multiple videos. A fast and effective video stitching method is therefore needed to solve these problems.
Disclosure of Invention
In view of the problems in the prior art, the invention provides a video stitching method and system based on multi-camera content analysis. The adopted technical scheme is a video stitching method based on multi-camera content analysis, which comprises the following specific steps:
S1, extracting and analyzing single-frame feature points in the image content shot by a plurality of cameras at the same moment;
S2, analyzing the single-frame feature points of the multiple images to obtain key frame feature point information;
S3, fusing and cutting the image key frame feature point information to obtain a standby image;
S4, stitching the standby images formed at different moments to obtain the final image and video information.
The specific steps for acquiring the key frame feature point information in S1 include:
S11, extracting feature points from the foreground of the region where the multiple camera views overlap, using a local feature method;
S12, matching the feature points using a nearest neighbor method and filtering the result.
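By way of illustration, one possible realization of S11 and S12 is sketched below in Python; the OpenCV ORB detector, the brute-force matcher, and the 0.75 ratio threshold are assumed choices, since the method itself specifies only a local feature method and a nearest neighbor method with filtering.

import cv2

def match_single_frame_features(frame_a, frame_b, ratio=0.75):
    """S11/S12 sketch: extract local features from two simultaneously captured
    frames and keep nearest-neighbour matches that pass a ratio-test filter."""
    # Assumed local feature method; the method only requires "a local feature method".
    # S11 restricts extraction to the foreground of the overlap region; a binary
    # mask could be passed as the second argument of detectAndCompute for that.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    if des_a is None or des_b is None:
        return [], kp_a, kp_b
    # S12: nearest-neighbour matching (k=2 so a ratio test can filter weak matches)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    good = [m[0] for m in knn if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return good, kp_a, kp_b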
In S2, feature point matching is performed using the single-frame images extracted from the multiple lenses, and the key frame feature information is stored at the relevant position.
The specific steps for obtaining the standby image in S3 include:
S31, accumulating the key frame feature information to obtain batch points, and filtering them to obtain the overlapping area;
S32, removing the overlapping area from the image area to obtain the standby image.
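As a rough illustration of S31 and S32, the sketch below accumulates matched point coordinates over several frames into a single batch of points and treats a percentile band of their x coordinates as the overlapping area to be cut away; the percentile filter, the minimum point count, and the assumption that the overlap lies on the left side of the second view are all illustrative choices not fixed by the method.

import numpy as np

def cut_overlap(frame_b, matched_pts_per_frame, min_count=50):
    """S31/S32 sketch. matched_pts_per_frame is a list of (N_i, 2) arrays of
    matched keypoint (x, y) coordinates in frame_b, one array per processed frame."""
    pts = [p for p in matched_pts_per_frame if len(p) > 0]
    if not pts:
        return frame_b
    # S31: accumulate matched points from several frames into one batch of points
    batch = np.vstack(pts)
    if len(batch) < min_count:  # assumed filter: too few points, no reliable overlap
        return frame_b
    # Robust right edge of the overlap: 95th percentile of the x coordinates (assumption)
    overlap_right = int(np.percentile(batch[:, 0], 95))
    # S32: remove the overlapping area (assumed to lie on the left of frame_b),
    # keeping the remaining region as the standby image
    return frame_b[:, overlap_right:]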
A video stitching system based on multi-camera content analysis comprises a feature point module, an image homography matrix calculation module, an image fusion and cutting module, and a foreground matching module;
the feature point module: extracts and analyzes single-frame feature points in the image content shot by a plurality of cameras at the same moment;
the image homography matrix calculation module: analyzes the single-frame feature points of the multiple images to obtain key frame feature point information (see the sketch after this list);
the image fusion and cutting module: fuses and cuts the image key frame feature point information to obtain a standby image;
the foreground matching module: stitches the standby images formed at different moments to obtain the final image and video information.
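The image homography matrix calculation module is named above but its estimation step is not spelled out; the sketch below shows one conventional way to compute a homography from the matched feature points with OpenCV, where the RANSAC reprojection threshold and the minimum match count are assumed values.

import cv2
import numpy as np

def estimate_homography(kp_a, kp_b, matches, min_matches=10):
    """Sketch: estimate the homography mapping frame_a coordinates to frame_b
    coordinates from matched keypoints (thresholds are illustrative)."""
    if len(matches) < min_matches:
        return None
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H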
The feature point module comprises a single-frame feature point extraction sub-module and a single-frame feature point matching sub-module;
the single-frame feature point extraction sub-module: extracts feature points from the foreground of the region where the multiple camera views overlap, using a local feature method;
the single-frame feature point matching sub-module: matches the feature points using a nearest neighbor method and filters the result.
The image fusion and cutting module performs feature point matching using the single-frame images extracted from the multiple lenses and stores the key frame feature information at the relevant positions.
The image fusion and cutting module comprises a multi-frame feature point pair fusion module and a batch point filtering module;
the multi-frame feature point pair fusion module: accumulates the key frame feature information to obtain batch points and filters them to obtain the overlapping area;
the batch point filtering module: removes the overlapping area from the image area to obtain the standby image.
The beneficial effects of the invention are as follows: the video stitching method based on multi-camera content analysis extracts image key frame feature point information, calculates the image regions covered by the contents of the multiple cameras, and fuses and cuts the images to obtain the final complete image and video information. The method and system quickly and efficiently remove the overlapping regions in multi-angle video to obtain the final image and video information, and use a single-frame feature point matching technique to analyze and process the content of multi-scene, multi-camera video; this saves storage space for the resulting image and video information, reduces information storage cost, reduces the image and video processing workload, and improves processing efficiency while preserving the integrity of the video content and the video quality.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic flow diagram of the method of the present invention; FIG. 2 is a schematic diagram of the system of the present invention.
Detailed Description
The present invention is further described below in conjunction with the following figures and specific examples so that those skilled in the art may better understand the present invention and practice it, but the examples are not intended to limit the present invention.
Embodiment 1:
a video splicing method based on multi-camera content analysis comprises the following specific steps:
S1, extracting and analyzing single-frame feature points in the image content shot by a plurality of cameras at the same moment;
S2, analyzing the single-frame feature points of the multiple images to obtain key frame feature point information;
S3, fusing and cutting the image key frame feature point information to obtain a standby image;
S4, stitching the standby images formed at different moments to obtain the final image and video information.
During shooting, a first image and a second image captured simultaneously by a first camera and a second camera are treated as one group of data and processed according to S1 to extract the single-frame feature point information. The single-frame feature points of the multiple images are analyzed according to S2 to obtain the key frame feature point information, a standby image is obtained through S3, and the standby images are then stitched into the final image and video information according to S4. In this way, content analysis and processing of multi-scene, multi-camera image and video using a single-frame feature point matching technique is realized, storage space and storage cost are saved, and the image and video processing workload is reduced.
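Putting the steps together, the following end-to-end sketch processes one pair of simultaneously captured frames; it reuses the illustrative helpers match_single_frame_features, estimate_homography, and cut_overlap sketched above, and the side-by-side composition at the end is an assumed way of forming the stitched output rather than a prescribed one.

import numpy as np

def process_frame_pair(frame_a, frame_b, history):
    """End-to-end sketch for one pair of simultaneously captured frames.
    history: list that accumulates matched point coordinates across frames (S31)."""
    # S1/S2: extract and match single-frame feature points, then estimate the homography
    matches, kp_a, kp_b = match_single_frame_features(frame_a, frame_b)
    H = estimate_homography(kp_a, kp_b, matches)
    if H is None:
        h = min(frame_a.shape[0], frame_b.shape[0])
        return np.hstack([frame_a[:h], frame_b[:h]])  # no reliable overlap; keep both views
    # Record the matched coordinates in frame_b for batch-point accumulation
    history.append(np.float32([kp_b[m.trainIdx].pt for m in matches]))
    # S3: cut the overlapping area out of frame_b to obtain the standby image
    standby_b = cut_overlap(frame_b, history)
    # S4 (per-frame contribution): compose frame_a with the non-overlapping part of frame_b
    h = min(frame_a.shape[0], standby_b.shape[0])
    return np.hstack([frame_a[:h], standby_b[:h]])

In a fuller implementation, the homography H could also be used to warp the second view into the first view's coordinate system before composition; that choice is left open here.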
Further, the specific steps for acquiring the key frame feature point information in S1 include:
S11, extracting feature points from the foreground of the region where the multiple camera views overlap, using a local feature method;
S12, matching the feature points using a nearest neighbor method and filtering the result.
Further, S2 performs feature point matching using the single-frame images extracted from the multiple lenses and stores the key frame feature information at the relevant position.
Still further, the steps for obtaining the standby image in S3 include:
S31, accumulating the key frame feature information to obtain batch points, and filtering them to obtain the overlapping area;
S32, removing the overlapping area from the image area to obtain the standby image.
Embodiment 2:
the invention also provides a video splicing system based on multi-camera content analysis corresponding to the method, wherein the system comprises a characteristic point module, an image homography matrix calculation module, an image fusion cutting module and a foreground matching module;
the feature point module: extracts and analyzes single-frame feature points in the image content shot by a plurality of cameras at the same moment;
the image homography matrix calculation module: analyzes the single-frame feature points of the multiple images to obtain key frame feature point information;
the image fusion and cutting module: fuses and cuts the image key frame feature point information to obtain a standby image;
the foreground matching module: stitches the standby images formed at different moments to obtain the final image and video information.
During shooting, a first image and a second image captured simultaneously by a first camera and a second camera are treated as one group of data and processed by the feature point module, which extracts the single-frame feature point information. The image homography matrix calculation module analyzes the single-frame feature points of the multiple images to obtain the key frame feature point information, the image fusion and cutting module produces a standby image, and the foreground matching module then forms the final image and video information. In this way, content analysis and processing of multi-scene, multi-camera image and video using a single-frame feature point matching technique is realized, storage space and storage cost are saved, and the image and video processing workload is reduced.
Furthermore, the feature point module comprises a single-frame feature point extraction sub-module and a single-frame feature point matching sub-module;
the single-frame feature point extraction sub-module: extracts feature points from the foreground of the region where the multiple camera views overlap, using a local feature method;
the single-frame feature point matching sub-module: matches the feature points using a nearest neighbor method and filters the result.
Furthermore, the image fusion and cutting module performs feature point matching using the single-frame images extracted from the multiple lenses and stores the key frame feature information at the relevant position.
Still further, the image fusion and cutting module comprises a multi-frame feature point pair fusion module and a batch point filtering module;
the multi-frame feature point pair fusion module: accumulates the key frame feature information to obtain batch points and filters them to obtain the overlapping area;
the batch point filtering module: removes the overlapping area from the image area to obtain the standby image.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A video stitching method based on multi-camera content analysis, characterized by comprising the following specific steps:
S1, extracting and analyzing single-frame feature points in the image content shot by a plurality of cameras at the same moment;
S2, analyzing the single-frame feature points of the multiple images to obtain key frame feature point information;
S3, fusing and cutting the image key frame feature point information to obtain a standby image;
S4, stitching the standby images formed at different moments to obtain the final image and video information.
2. The method for video stitching based on multi-camera content analysis according to claim 1, wherein the specific steps of obtaining the key frame feature point information in S1 comprise:
S11, extracting feature points from the foreground of the region where the multiple camera views overlap, using a local feature method;
S12, matching the feature points using a nearest neighbor method and filtering the result.
3. The method for video stitching based on multi-camera content analysis according to claim 2, wherein S2 performs feature point matching using the single-frame images extracted from the multiple lenses and stores the key frame feature information at the relevant position.
4. The method for video stitching based on multi-camera content analysis according to claim 3, wherein the steps of obtaining the standby image in S3 comprise:
S31, accumulating the key frame feature information to obtain batch points, and filtering them to obtain the overlapping area;
S32, removing the overlapping area from the image area to obtain the standby image.
5. A video stitching system based on multi-camera content analysis, characterized by comprising a feature point module, an image homography matrix calculation module, an image fusion and cutting module, and a foreground matching module;
the feature point module: extracts and analyzes single-frame feature points in the image content shot by a plurality of cameras at the same moment;
the image homography matrix calculation module: analyzes the single-frame feature points of the multiple images to obtain key frame feature point information;
the image fusion and cutting module: fuses and cuts the image key frame feature point information to obtain a standby image;
the foreground matching module: stitches the standby images formed at different moments to obtain the final image and video information.
6. The multi-camera content analysis-based video stitching system as recited in claim 5, wherein the feature point module comprises a single-frame feature point extraction sub-module and a single-frame feature point matching sub-module;
the single-frame feature point extraction sub-module: extracts feature points from the foreground of the region where the multiple camera views overlap, using a local feature method;
the single-frame feature point matching sub-module: matches the feature points using a nearest neighbor method and filters the result.
7. The multi-camera content analysis-based video stitching system as recited in claim 6, wherein the image fusion and cutting module performs feature point matching using the single-frame images extracted from the multiple lenses and stores the key frame feature information at the relevant location.
8. The multi-camera content analysis-based video stitching system as recited in claim 7, wherein the image fusion and cutting module comprises a multi-frame feature point pair fusion module and a batch point filtering module;
the multi-frame feature point pair fusion module: accumulates the key frame feature information to obtain batch points and filters them to obtain the overlapping area;
the batch point filtering module: removes the overlapping area from the image area to obtain the standby image.
CN202010126853.5A (filed 2020-02-28, priority date 2020-02-28, published as CN111355928A, status: Pending): Video stitching method and system based on multi-camera content analysis

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010126853.5A CN111355928A (en) 2020-02-28 2020-02-28 Video stitching method and system based on multi-camera content analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010126853.5A CN111355928A (en) 2020-02-28 2020-02-28 Video stitching method and system based on multi-camera content analysis

Publications (1)

Publication Number Publication Date
CN111355928A (en) 2020-06-30

Family

ID=71194174

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010126853.5A Pending CN111355928A (en) 2020-02-28 2020-02-28 Video stitching method and system based on multi-camera content analysis

Country Status (1)

Country Link
CN (1) CN111355928A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160088287A1 (en) * 2014-09-22 2016-03-24 Samsung Electronics Company, Ltd. Image stitching for three-dimensional video
CN104408701A (en) * 2014-12-03 2015-03-11 中国矿业大学 Large-scale scene video image stitching method
WO2016185556A1 (en) * 2015-05-19 2016-11-24 三菱電機株式会社 Composite image generation device, composite image generation method, and composite image generation program
CN106991644A (en) * 2016-01-20 2017-07-28 上海慧体网络科技有限公司 A kind of method that video-splicing is carried out based on sports ground multi-path camera
CN105761209A (en) * 2016-03-16 2016-07-13 武汉大学 Nuclear safety shell surface image fusion method and system
CN105915804A (en) * 2016-06-16 2016-08-31 恒业智能信息技术(深圳)有限公司 Video stitching method and system
CN107424181A (en) * 2017-04-12 2017-12-01 湖南源信光电科技股份有限公司 A kind of improved image mosaic key frame rapid extracting method
CN108038822A (en) * 2017-11-23 2018-05-15 极翼机器人(上海)有限公司 A kind of mobile phone holder distant view photograph joining method
US20190333187A1 (en) * 2018-04-30 2019-10-31 Tata Consultancy Services Limited Method and system for frame stitching based image construction in an indoor environment
CN109801220A (en) * 2019-01-23 2019-05-24 北京工业大学 Mapping parameters method in a kind of splicing of line solver Vehicular video
CN110096950A (en) * 2019-03-20 2019-08-06 西北大学 A kind of multiple features fusion Activity recognition method based on key frame

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112188163A (en) * 2020-09-29 2021-01-05 厦门汇利伟业科技有限公司 Method and system for automatic de-duplication splicing of real-time video images
CN113014882A (en) * 2021-03-08 2021-06-22 中国铁塔股份有限公司黑龙江省分公司 Multi-source multi-protocol video fusion monitoring system
CN113810665A (en) * 2021-09-17 2021-12-17 北京百度网讯科技有限公司 Video processing method, device, equipment, storage medium and product
CN114449130A (en) * 2022-03-07 2022-05-06 北京拙河科技有限公司 Multi-camera video fusion method and system

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20200630)