CN112565630B - Video frame synchronization method for video stitching


Info

Publication number
CN112565630B
CN112565630B (application CN202011422728.5A)
Authority
CN
China
Prior art keywords
time
video
frame
image
sequence
Prior art date
Legal status
Active
Application number
CN202011422728.5A
Other languages
Chinese (zh)
Other versions
CN112565630A (en)
Inventor
贾刚勇
宋子伟
李尤慧子
殷昱煜
蒋从锋
张纪林
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Application filed by Hangzhou Dianzi University
Priority to CN202011422728.5A
Publication of CN112565630A
Application granted
Publication of CN112565630B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; studio devices; studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04N 5/265: Mixing
    • H04N 5/04: Synchronising

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a video frame synchronization method for video stitching, comprising the following steps: (1) before stitching, the edge devices used for image stitching capture a millisecond-level clock; (2) each edge end collects video and caches every frame of image together with the system time at which that frame was obtained; (3) video information is read from the caches of the different edge ends and processed; (4) the time shown by the millisecond-level clock in each frame of the acquired video is recognized; (5) the system time difference between the different edge ends is calculated from the recognition results and the acquisition time of each frame; (6) when stitching the video images, the times of the obtained video frames are adjusted by the computed system time difference, so that the frames used for stitching were shot at the same moment, reducing errors during image stitching. The method effectively calibrates the video sequences, avoids ghosting, object splitting, blurring, and similar artifacts in the stitched images, and ensures the quality of the stitched video.

Description

Video frame synchronization method for video stitching
Technical Field
The invention relates to the technical field of image stitching, in particular to a video frame synchronization method for video stitching.
Background
With the development of digital image technology, research on image stitching has become increasingly active and is now an important direction in computer vision and computer graphics. Video stitching has important applications in virtual reality, security monitoring, aerospace, and other fields. Although image stitching techniques are relatively mature, video stitching requires registration and stitching of entire video sequences, which demands considerable computing power, so real-time online video stitching remains difficult and many problems are still unsolved. Video stitching also differs from image stitching in kind: image stitching joins two planar images, whereas video stitching additionally carries time information and involves the motion of foreground objects. Because the time dimension must be considered, video stitching cannot be treated as still-image stitching.
Traditional video stitching ignores errors in image acquisition time and uses the obtained images for stitching directly, so the results can suffer from target tearing, splitting, ghosting, and similar artifacts. Various influences during video acquisition introduce time deviations between different videos: for example, the cameras may not start at the same time, and image transmission is subject to network delay and congestion. To solve this problem, the obtained videos must be synchronized in time, which guarantees the video stitching effect.
Disclosure of Invention
The invention aims to provide a video frame alignment method for multi-video-sequence stitching, such that the images selected for stitching are video frames that coincide in time. This eliminates the ghosting, object splitting, misalignment, and similar image artifacts that time differences cause in video stitching, and improves the quality of the stitched video.
The main conception of the invention is as follows: before video stitching, the image acquisition devices at the different edge ends capture images of a millisecond-level clock and record the time of each image frame, and the edge devices send the acquired image information to a server. The server recognizes the clock time shown in each video sequence, matches images whose clock times agree, and computes the system time offset from the difference of their recorded acquisition times. Introducing a millisecond-level clock thus unifies the time dimension across the different systems, and the offset is used to calibrate the times of the video frames.
A method for video frame synchronization for video stitching, comprising the steps of:
(1) Image acquisition devices at different edge ends acquire video sequences of the same millisecond-level clock;
(2) The edge end records the shooting time of each frame image of the video sequence and transmits the video sequence to the server end;
(3) The server recognizes the obtained video sequence to obtain the time information of the millisecond-level clock on the video frame images;
(4) Video frames of different video sequences are matched according to the time information obtained from the video frame images;
(5) The system time difference is calculated from the matched video frames;
(6) The obtained system time difference is used as a calibration offset during stitching: when stitching the video sequences, the system time difference is added to the original times of the images, unifying the times of the different systems in one dimension, and video frame images with the same corrected time are then selected for stitching.
To ensure the accuracy of the system time difference, it is calculated repeatedly until one of the video sequences ends, and the median of all the obtained system time differences is taken as the final system time difference.
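The following is a minimal sketch of this median step in Python (the function and variable names are illustrative, not taken from the invention): the per-pair differences collected while walking both sequences are reduced to one robust estimate, so that an occasional misrecognized clock digit cannot skew the final offset.

```python
from statistics import median

def final_system_time_difference(offsets_ms):
    """offsets_ms: one system-time difference (in ms) per matched frame pair,
    collected repeatedly until one video sequence ends."""
    if not offsets_ms:
        raise ValueError("no matched frame pairs; cannot estimate the offset")
    # The median discards outliers caused by occasional misrecognized digits.
    return median(offsets_ms)

# Example: one corrupted sample (120) does not disturb the estimate.
assert final_system_time_difference([33, 34, 33, 120, 33]) == 33
```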
The invention has the following beneficial effects: because of hardware differences between the cameras and the influence of the network, the cameras cannot capture images at exactly the same moment. Applying the millisecond-level offset reduces the time error between the video frames used for stitching to the millisecond level, which largely guarantees that the frames used for stitching were obtained at the same time.
Drawings
FIG. 1 is a flow chart of a video frame synchronization method of the present invention;
FIG. 2 is a schematic diagram of different cameras shooting the millisecond-level clock;
FIG. 3 is a flow chart of image stitching with time alignment;
FIG. 4 is an effect diagram of the system time difference before removing identical and blurred frames;
FIG. 5 is an effect diagram of the system time difference after removing identical and blurred frames.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but embodiments of the present invention are not limited thereto. Before describing the embodiments, some basic concepts are defined:
(1) System time: the internal time of an edge device; the system times of different edge devices may be inconsistent;
(2) Clock time: the time of the millisecond-level clock captured by the camera.
Conventional multi-video processing assumes that the video sequences from different video sources acquire images at the same time, ignoring timing errors such as network delay and inconsistent camera start-up. Because the images are in fact acquired at different times, stitching them directly often produces ghosting, image splitting, and similar defects.
The cameras in this embodiment have a frame rate of 30 fps and an image resolution of 680×480, and the millisecond time is shown by a millisecond-resolution clock displayed on a computer screen. The video frame synchronization flow of this embodiment is shown in fig. 1. Different edge devices capture video of a common area, with the cameras placed as shown in fig. 2, and the shooting time of each video frame is recorded at acquisition. The processed video sequence is sent to the server in binary-encoded form; after receiving it, the server decodes the images and performs digit recognition to obtain the time of the millisecond clock in each image.
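The following is a minimal sketch of the edge-side acquisition step, assuming an OpenCV camera source; the function and variable names are illustrative, not taken from the invention:

```python
import time
import cv2

def capture_with_timestamps(device_index=0, num_frames=300):
    """Capture frames and pair each with the edge device's own system time."""
    cap = cv2.VideoCapture(device_index)
    frames = []  # list of (system_time_ms, frame) pairs, in capture order
    try:
        for _ in range(num_frames):
            ok, frame = cap.read()
            if not ok:
                break
            # Record this edge end's local system time for the frame.
            frames.append((int(time.time() * 1000), frame))
    finally:
        cap.release()
    return frames
```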
The digit recognition proceeds as follows: the obtained image is first preprocessed to remove noise; corner detection is then applied to the processed image to extract the shapes it contains. Because the millisecond time is displayed inside a rectangle, the region holding the time information is among the extracted shape areas, and several thresholds are applied to screen the shapes and isolate the region containing the time. Python's tesseract OCR library is then applied to the selected image region to extract the time information.
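The following is a minimal sketch of this recognition step, using contour screening in place of the corner detection described above and pytesseract for the OCR; all threshold values here are illustrative assumptions, not the invention's:

```python
import cv2
import pytesseract

def read_clock_time(frame):
    """Return the raw digit string of the millisecond clock shown in a frame,
    or None if no plausible clock region is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (3, 3), 0)  # preprocessing: remove noise
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Screen shapes by size and aspect ratio: the clock is displayed in
        # a rectangle, so keep only plausibly clock-shaped regions.
        if w < 80 or h < 20 or not 2.0 < w / h < 10.0:
            continue
        roi = binary[y:y + h, x:x + w]
        text = pytesseract.image_to_string(
            roi, config="--psm 7 -c tessedit_char_whitelist=0123456789:.")
        digits = "".join(ch for ch in text if ch.isdigit())
        if digits:
            return digits  # caller converts the digit string to milliseconds
    return None
```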
After all video frames have been processed and their time information obtained, the clock times of the different video sequences are matched, and the system time deviation Δt is computed from the matching result. The specific calculation is as follows (a sketch of the walk follows the list):
(1) Detect the first frame image of video sequence A and video sequence B to obtain their clock times.
(2) Convert the obtained times and compute the difference between them.
(3) If the difference satisfies the set threshold, go to step (5).
(4) If the difference does not satisfy the threshold, proceed as follows:
1) if the clock time of sequence A is greater than that of sequence B, select the next frame image of sequence B, re-detect its clock time, and repeat step (2) with the current frame of sequence A;
2) if the clock time of sequence A is less than that of sequence B, select the next frame image of sequence A, re-detect its clock time, and repeat step (2) with the current frame of sequence B.
(5) Record the obtained system time difference and repeat the process until one of the video sequences ends.
(6) Take the median of all recorded system time differences as the time difference Δt between the different edge systems.
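The following is a minimal sketch of this matching walk, assuming the recognized clock times have already been converted to milliseconds; the matching threshold and all names are illustrative assumptions:

```python
from statistics import median

def estimate_offset(seq_a, seq_b, match_threshold_ms=5):
    """seq_a, seq_b: lists of (system_time_ms, clock_time_ms) per frame,
    already cleaned of duplicate-time and ghosted frames."""
    diffs, i, j = [], 0, 0
    while i < len(seq_a) and j < len(seq_b):
        sys_a, clk_a = seq_a[i]
        sys_b, clk_b = seq_b[j]
        if abs(clk_a - clk_b) <= match_threshold_ms:
            # Frames shot at (nearly) the same clock instant: their
            # system-time difference is one sample of the offset.
            diffs.append(sys_a - sys_b)
            i, j = i + 1, j + 1
        elif clk_a > clk_b:
            j += 1  # A's clock is ahead: advance sequence B
        else:
            i += 1  # B's clock is ahead: advance sequence A
    return median(diffs) if diffs else None
```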
Regarding the time information in the image: the millisecond time displayed by the computer is tied to the refresh rate of the computer screen, and because the millisecond clock changes rapidly, partially rendered (ghosted) digits can appear during camera imaging; these cause the time extracted by digit recognition to disagree with the original time. Moreover, when the screen refresh rate is lower than the camera frame rate, several consecutive video frames capture exactly the same clock reading. Before matching, therefore, consecutive video frames with identical extracted time information and video frames whose detected time information is inconsistent are eliminated.
These two cases are handled by corresponding rules. For consecutive video frames whose recognized clock times coincide, a frame is rejected if its time information satisfies:
t_{i-1} = t_i or t_i = t_{i+1}
where t_i denotes the clock time recognized in the i-th frame image.
For the case where ghosted digits cause misrecognition: the video frame sequence is coherent in time, so the acquisition time of one frame must be smaller than that of the next frame, and the difference between consecutive frames must lie within a bounded range. A frame is considered ghosted and is removed if its time information satisfies:
t_i − t_{i-1} > τ
where τ is a set time threshold; in this example the camera frame rate is 30 fps, and τ = 50 ms is chosen to allow for fluctuations of the camera itself. This condition ensures that ghosted video frames are never selected for the calculation.
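The following is a minimal sketch combining both rejection rules; the names, and the additional rejection of non-increasing clock times, are illustrative assumptions:

```python
def filter_frames(frames, tau_ms=50):
    """frames: list of (system_time_ms, clock_time_ms) pairs in capture order.
    Drops frames caught by either rejection rule above."""
    kept = []
    for i, (sys_t, clk_t) in enumerate(frames):
        prev_clk = frames[i - 1][1] if i > 0 else None
        next_clk = frames[i + 1][1] if i + 1 < len(frames) else None
        # Rule 1: t_{i-1} = t_i or t_i = t_{i+1} (screen refresh slower
        # than the camera, so identical clock readings repeat).
        if clk_t == prev_clk or clk_t == next_clk:
            continue
        # Rule 2: the clock must advance, and by at most tau (assumed to
        # also reject non-increasing times from misread ghosted digits).
        if prev_clk is not None and not (0 < clk_t - prev_clk <= tau_ms):
            continue
        kept.append((sys_t, clk_t))
    return kept
```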
After the system time difference Δt of the edge ends is obtained, video frame stitching proceeds as shown in fig. 3: the video frame sequences of the edge ends are calibrated by the obtained system time difference, so that the two stitched video frames are images obtained at the same moment.
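The following is a minimal sketch of this calibration step; the nearest-frame selection and the tolerance value (about half of a 30 fps frame period) are illustrative assumptions:

```python
def align_for_stitching(seq_a, seq_b, delta_t_ms, max_gap_ms=17):
    """seq_a, seq_b: lists of (system_time_ms, frame) pairs; delta_t_ms is
    the median offset, so that t_B + delta_t_ms lies on A's time axis."""
    pairs = []
    for t_a, frame_a in seq_a:
        # Pick the B frame whose corrected timestamp is closest to t_a.
        t_b, frame_b = min(seq_b, key=lambda fb: abs(fb[0] + delta_t_ms - t_a))
        # Accept only pairs within about half of a 30 fps frame period.
        if abs(t_b + delta_t_ms - t_a) <= max_gap_ms:
            pairs.append((frame_a, frame_b))
    return pairs
```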
Fig. 4 and fig. 5 show the results before and after removing the interfering frames; the stability of the result improves markedly once identical and ghosted frames are removed.
Parts of the invention not described in detail are known to those skilled in the art.
The foregoing embodiments merely illustrate the technical concept and features of the present invention and are intended to enable those skilled in the art to understand and implement it; they do not limit the scope of protection. All changes, modifications, substitutions, combinations, and simplifications made according to the spirit and principles of the invention are equivalents and fall within the scope of the invention.

Claims (3)

1. A video frame synchronization method for video stitching, comprising the steps of:
(1) Image acquisition devices at different edge ends acquire video sequences of the same millisecond-level clock;
(2) The edge end records the shooting time of each frame image of the video sequence and transmits the video sequence to the server end;
(3) The server recognizes the obtained video sequence to obtain the time information on the images;
(4) Video frames of different video sequences are matched according to the time information on the obtained images;
(5) The system time difference is calculated from the matched video frames;
steps (4)-(5) are specifically as follows:
5-1, detecting the first frame image of video sequence A and video sequence B to obtain their clock times;
5-2, converting the obtained clock times and computing the difference between them;
5-3, if the difference satisfies the set threshold, going to 5-5;
5-4, if the difference does not satisfy the threshold, performing the following operations:
if the clock time of sequence A is greater than that of sequence B, selecting the next frame image of sequence B, re-detecting the clock time, and performing 5-2 again with the video frame of the current sequence A;
if the clock time of sequence A is smaller than that of sequence B, selecting the next frame image of sequence A, re-detecting the clock time, and performing 5-2 again with the video frame of the current sequence B;
5-5, recording the obtained time difference and repeating the process until one of the video sequences ends;
5-6, taking the median of all the time differences as the time difference between the different edge systems;
(6) After the system time difference Δt of the edge ends is obtained, calibrating the video frame sequences of the edge ends according to the obtained system time difference, so that the two stitched video frames are images obtained at the same moment;
for the time information in the image, the millisecond time displayed by the computer is related to the refresh rate of the computer screen, and because the time of the millisecond clock changes rapidly, partially rendered images appear during camera imaging, causing the time information extracted from the video sequence after digit recognition to be inconsistent with the original time; when the refresh rate of the computer screen is lower than the frame rate of the camera, the clock information acquired by several consecutive video frames is identical; consecutive video frames with identical extracted time information and video frames with inconsistent detected time information are eliminated before matching;
the two cases are handled by corresponding rules; for the case where the recognized clock times of several consecutive video frames coincide, a frame is rejected if its time information satisfies:
t_{i-1} = t_i or t_i = t_{i+1}
where t_i denotes the clock time in the i-th frame image;
for the case where ghosted digits cause misrecognition, the video frame sequence is coherent in time, the acquisition time of one frame is necessarily smaller than that of the next frame, and the difference between them lies within a range; a frame is considered ghosted and is removed if its time information satisfies:
t_i − t_{i-1} > τ
where τ is a set time threshold, with τ = 50 ms set in consideration of fluctuations of the camera itself, thereby ensuring that ghosted video frames are not selected for the calculation.
2. The video frame synchronization method of claim 1, wherein step (3) uses OCR digit recognition to obtain the time of the millisecond-level clock in the image.
3. The video frame synchronization method of claim 2, wherein the OCR digit recognition intercepts a region of interest from the video frame according to the characteristics of the photographed clock, namely the area containing the displayed time, and performs digit recognition on the region of interest to obtain the time information of the clock at the moment of image capture.
CN202011422728.5A (priority and filing date 2020-12-08) Video frame synchronization method for video stitching, Active, granted as CN112565630B

Priority Applications (1)

Application Number: CN202011422728.5A
Priority Date / Filing Date: 2020-12-08
Title: Video frame synchronization method for video stitching (granted as CN112565630B)


Publications (2)

Publication Number Publication Date
CN112565630A (en) 2021-03-26
CN112565630B (en) 2023-05-05

Family

ID=75059640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011422728.5A, Video frame synchronization method for video stitching, 2020-12-08, 2020-12-08 (Active, granted as CN112565630B)

Country Status (1)

Country Link
CN (1) CN112565630B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1594315A1 (en) * 2004-05-05 2005-11-09 MacroSystem Digital Video AG Method and apparatus for synchronising videosignals
CN110290287A * 2019-06-27 2019-09-27 上海玄彩美科网络科技有限公司 Multi-cam frame synchronization method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102857704B (en) * 2012-09-12 2015-08-19 天津大学 With the multisource video joining method of time-domain synchronous calibration technology
CN108206966B (en) * 2016-12-16 2020-07-03 杭州海康威视数字技术股份有限公司 Video file synchronous playing method and device
CN107135330B (en) * 2017-07-04 2020-04-28 广东工业大学 Method and device for video frame synchronization


Also Published As

Publication number Publication date
CN112565630A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
EP2326091B1 (en) Method and apparatus for synchronizing video data
CN103460248B (en) Image processing method and device
KR101524548B1 (en) Apparatus and method for alignment of images
EP1665808A1 (en) Temporal interpolation of a pixel on basis of occlusion detection
CN110991287A (en) Real-time video stream face detection tracking method and detection tracking system
US10096114B1 (en) Determining multiple camera positions from multiple videos
CN111383204A (en) Video image fusion method, fusion device, panoramic monitoring system and storage medium
CN112771843A (en) Information processing method, device and imaging system
US20170116741A1 (en) Apparatus and Methods for Video Foreground-Background Segmentation with Multi-View Spatial Temporal Graph Cuts
US11250581B2 (en) Information processing apparatus, information processing method, and storage medium
CN112565630B (en) Video frame synchronization method for video stitching
CN111160340B (en) Moving object detection method and device, storage medium and terminal equipment
US11044399B2 (en) Video surveillance system
CN115830064B (en) Weak and small target tracking method and device based on infrared pulse signals
CN116823611A (en) Multi-focus image-based referenced super-resolution method
CN112396639A (en) Image alignment method
CN116188535A (en) Video tracking method, device, equipment and storage medium based on optical flow estimation
CN114998283A (en) Lens blocking object detection method and device
CN112991419B (en) Parallax data generation method, parallax data generation device, computer equipment and storage medium
Low et al. Frame Based Object Detection--An Application for Traffic Monitoring
CN114422777A (en) Image recognition-based time delay testing method and device and storage medium
KR20100118811A (en) Shot change detection method, shot change detection reliability calculation method, and software for management of surveillance camera system
CN110248182B (en) Scene segment shot detection method
Hosen et al. An Effective Multi-Camera Dataset and Hybrid Feature Matcher for Real-Time Video Stitching
Chen et al. Revisiting Event-based Video Frame Interpolation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant