CN112565630A - Video frame synchronization method for video splicing


Info

Publication number
CN112565630A
Authority
CN
China
Prior art keywords
video
time
image
sequence
splicing
Prior art date
Legal status
Granted
Application number
CN202011422728.5A
Other languages
Chinese (zh)
Other versions
CN112565630B (en)
Inventor
贾刚勇
宋子伟
李尤慧子
殷昱煜
蒋从锋
张纪林
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN202011422728.5A
Publication of CN112565630A
Application granted
Publication of CN112565630B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/04 Synchronising

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a video frame synchronization method for video splicing, comprising the following steps: (1) before image splicing, the edge devices used for splicing shoot a millisecond-level clock; (2) each edge terminal collects video information and caches each frame of image together with the system time at which it was obtained; (3) the video information cached by the different edge terminals is read and processed; (4) the time of the millisecond-level clock captured in each frame of the acquired video is recognized; (5) the time difference between the different edge ends is calculated from the recognition result and the acquisition time of each frame; (6) when the video images are spliced, the times of the obtained video frames are adjusted according to the obtained system time difference, so that the video frames used for splicing were shot at the same moment, thereby reducing the error during image splicing. The method can effectively calibrate the video sequences, avoid ghosting, object fracture, blurring, and similar defects in the spliced image, and ensure the quality of the spliced video.

Description

Video frame synchronization method for video splicing
Technical Field
The invention relates to the technical field of image splicing, in particular to a video frame synchronization method for video splicing.
Background
With the development of digital image technology, image stitching has become an increasingly active research direction in computer vision and computer graphics, and video splicing has important applications in fields such as virtual reality, security monitoring, and aerospace. Although image stitching technology is now fairly mature, video stitching requires the registration and splicing of entire video sequences, which demands enormous computing power; real-time online video stitching therefore remains difficult, and many problems are still unsolved. Video splicing also differs from image splicing in kind: image splicing only joins two planar images, whereas video splicing additionally carries time information and involves the motion of foreground objects. Because the time dimension must be considered, the processing of video image stitching cannot be equated with still image stitching.
Traditional video splicing ignores errors in image acquisition time and uses the obtained images for splicing directly, so the results exhibit target tearing, segmentation, ghosting, and similar defects. Various influences during video acquisition introduce time deviations between different videos: for example, the cameras cannot be started at exactly the same moment, and image transmission suffers network delay and stalls. To solve this problem, the obtained videos must be calibrated for time synchronization so as to guarantee the quality of the video splicing.
Disclosure of Invention
The invention aims to provide a video frame calibration method for multi-video-sequence splicing, so that the images selected from the video sequences during splicing are video frames that are consistent in time. This solves the ghosting, object segmentation, dislocation, and similar image defects in video splicing caused by time differences, and improves the quality of the spliced video.
The main conception of the invention is as follows: before video splicing, the image acquisition devices at the different edge ends first capture images of a millisecond-level clock while recording the time at which each image frame was obtained, and the edge devices send the captured image information to a server. The server recognizes the clock time shot in each video sequence, computes the time offset from the difference in acquisition times of images whose clock times agree, unifies the time dimensions of the different systems through the shared millisecond-level clock, and calibrates the times of the video frames by this offset.
A method of video frame synchronization for video splicing, comprising the steps of:
(1) acquiring a video sequence by using image acquisition equipment at different edge ends to the same millisecond-level clock;
(2) the edge end records the shooting time of each frame of image of the video sequence and transmits the video sequence to the server end;
(3) the server side identifies the obtained video sequence to obtain the time information of a millisecond clock on the video frame image;
(4) matching the video frames of different video sequences according to the time information obtained from the video frame images;
(5) calculating the system time difference between video frames according to the matched video frames;
(6) the obtained system time difference is used as a calibration offset during splicing: when splicing the video sequences, the obtained system time difference is added to the original time of the images, unifying the times of the different systems under the same dimension, and on this basis video frame images with the same time are selected for splicing.
To ensure the accuracy of the system time difference, it is calculated multiple times until one of the video sequences ends, and the median of all the obtained system time differences is selected as the final system time difference.
The invention has the following beneficial effects: before video splicing, the time of a millisecond-level clock is used to unify the times of the cameras at the different edges, and the millisecond-level offset between the cameras' start-up times is calculated to correct the inconsistent start-up caused by hardware differences during shooting. This millisecond-level offset reduces the time error between the video frames used for splicing to the millisecond level, so that the frames joined during video frame splicing were obtained at the same moment.
Drawings
FIG. 1 is a flow chart of a video frame synchronization method according to the present invention;
FIG. 2 is a schematic diagram of different cameras shooting the same millisecond-level clock;
FIG. 3 is a flow chart of image stitching with time alignment;
FIG. 4 is a graph of the system time differences obtained without removing duplicate frames and ghosted (blurred) frames;
FIG. 5 is a graph of the system time differences obtained after removing duplicate frames and ghosted (blurred) frames.
Detailed Description
The present invention will be described in further detail with reference to the following examples and drawings, but the embodiments of the present invention are not limited thereto; before describing the embodiments of the present invention, some basic concepts are first described:
(1) system time: the time of different edge end systems may not be consistent;
(2) clock time: the time of a millisecond clock shot by the camera.
In the conventional multi-video processing field, it is assumed that the video sequences from different video sources all acquire images at the same moment, and timing errors such as network delay and inconsistent camera start-up are ignored. Because of these differences in image acquisition time, directly splicing the obtained images often produces ghosting, image splitting, and similar defects.
The cameras in this embodiment have a frame rate of 30 fps and an image resolution of 680 × 480, and a millisecond display on a computer provides the millisecond time. The video frame synchronization flow of this embodiment is shown in fig. 1: different edge devices capture video of a common area, with the cameras placed as shown in fig. 2, and the shooting time of each video frame is recorded as the image is acquired. The processed video sequences are sent to the server in binary encoding; after receiving a video sequence, the server performs digit recognition on the decoded images to obtain the time of the millisecond clock in each image.
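As an illustration of the acquisition step just described, below is a minimal Python sketch, assuming OpenCV is available on the edge device; the function name, device index, and frame count are hypothetical choices, not part of the patent:

```python
import time

import cv2  # assumed available on the edge device


def capture_with_timestamps(device_index=0, max_frames=300):
    """Grab frames from the camera and record the local system time
    (in milliseconds) at which each frame was obtained."""
    cap = cv2.VideoCapture(device_index)
    frames = []
    while len(frames) < max_frames:
        ok, img = cap.read()
        if not ok:
            break
        # Pair every frame with the edge system time at acquisition.
        frames.append((time.time() * 1000.0, img))
    cap.release()
    return frames
```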
The digit recognition proceeds as follows: the obtained image is first preprocessed to remove noise, and corner detection is applied to the processed image to extract the shapes it contains. Because the millisecond-level time used is displayed inside a rectangle, the extracted shape regions contain the time information, and the shapes are screened with a few thresholds to obtain the image region carrying the time. The time information is then extracted from that region using the Tesseract OCR engine through its Python interface.
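A hedged sketch of this recognition stage follows. It substitutes contour detection for the corner-detection step described above and assumes the pytesseract wrapper for Tesseract; the size thresholds and character whitelist are illustrative, not values from the patent:

```python
import cv2
import pytesseract  # Python wrapper for the Tesseract OCR engine


def read_clock_time(frame):
    """Denoise the frame, isolate the rectangular clock region,
    and OCR the digits it contains."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (3, 3), 0)  # noise removal
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w > 100 and h > 30:  # plausible size for the clock rectangle
            roi = binary[y:y + h, x:x + w]
            text = pytesseract.image_to_string(
                roi, config="--psm 7 -c tessedit_char_whitelist=0123456789")
            digits = "".join(ch for ch in text if ch.isdigit())
            if digits:
                return int(digits)  # concatenated digits of the clock reading
    return None
```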
After all the video frames have been processed to obtain their time information, the clock times of the different video sequences are matched, and the deviation Δt of the system times is calculated from the matching result. The specific calculation method is as follows:
(1) Detect the first frame images of video sequence A and video sequence B to obtain the clock time of each.
(2) Convert the two obtained times and subtract them to calculate the difference between them.
(3) If the difference satisfies the set threshold, go to step (5).
(4) If the difference is not satisfied, one of the following cases applies:
1) if the time of sequence A is greater than that of sequence B, select the next frame image of sequence B, re-detect its clock time, and repeat step (2) with the current video frame of sequence A;
2) if the time of sequence A is less than that of sequence B, select the next frame image of sequence A, re-detect its clock time, and repeat step (2) with the current video frame of sequence B.
(5) Record the obtained system time difference and repeat the above process until one of the video sequences ends.
(6) Take the median of all the recorded system time differences as the time difference Δt between the different edge-end systems.
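The steps above can be summarized in the following minimal Python sketch, under simplifying assumptions: each sequence is given as a list of (system_time_ms, clock_time_ms) pairs already produced by the recognition stage, and the matching threshold is a hypothetical value:

```python
from statistics import median


def system_time_offset(seq_a, seq_b, threshold_ms=5):
    """Walk both sequences, pair frames whose recognized clock times
    agree within threshold_ms, and return the median system-time offset."""
    diffs = []
    i, j = 0, 0
    while i < len(seq_a) and j < len(seq_b):
        sys_a, clk_a = seq_a[i]
        sys_b, clk_b = seq_b[j]
        if abs(clk_a - clk_b) <= threshold_ms:
            # Matched pair: the gap between system times is one offset sample.
            diffs.append(sys_a - sys_b)
            i += 1
            j += 1
        elif clk_a > clk_b:
            j += 1  # sequence B is behind; advance it and re-detect
        else:
            i += 1  # sequence A is behind; advance it and re-detect
    return median(diffs) if diffs else None
```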
Regarding the time information in the images: the millisecond time displayed by a computer is tied to the refresh rate of its screen, and because a millisecond clock changes very rapidly, some frames are captured mid-refresh and show ghosted digits; such ghosting can make the time extracted by digit recognition inconsistent with the original time. Conversely, when the refresh rate of the computer screen is lower than the frame rate of the camera, several consecutive video frames yield exactly the same clock reading. Consecutive video frames with identical extracted time information, and video frames whose detected time is inconsistent, are excluded before matching.
These two cases are handled by corresponding rules. For the case where consecutive video frames yield the same recognized clock time, a frame is considered to require elimination if its time information satisfies:
tᵢ₋₁ = tᵢ or tᵢ = tᵢ₊₁
where tᵢ represents the clock time in the i-th frame image.
For the misrecognition caused by ghosting in the image: a video frame sequence is continuous in time, so the acquisition time of the previous frame must be less than that of the next frame, and the difference between two consecutive frames must lie within a bounded range. A video frame is therefore considered free of ghosting, and retained, only if its time information satisfies the following formula; otherwise it is eliminated.
0 < tᵢ − tᵢ₋₁ < τ
where τ is a set time threshold; the frame rate of the camera in this example is 30 fps, and τ is set to 50 ms to allow for fluctuations of the camera itself. This formula ensures that ghosted video frames are never selected for the calculation.
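A minimal sketch of these two exclusion rules, assuming each frame is represented as a (system_time_ms, clock_time_ms) pair and using the τ = 50 ms threshold from this example:

```python
def filter_frames(frames, tau_ms=50):
    """Drop frames whose recognized clock time repeats a neighbour's
    (screen refresh slower than the camera) and frames violating
    0 < t_i - t_{i-1} < tau_ms (ghosted or misrecognized digits)."""
    kept = []
    for k, (sys_t, clk_t) in enumerate(frames):
        prev_clk = frames[k - 1][1] if k > 0 else None
        next_clk = frames[k + 1][1] if k + 1 < len(frames) else None
        if clk_t == prev_clk or clk_t == next_clk:
            continue  # duplicate clock reading: exclude
        if prev_clk is not None:
            delta = clk_t - prev_clk
            if not (0 < delta < tau_ms):
                continue  # out-of-order or ghosted reading: exclude
        kept.append((sys_t, clk_t))
    return kept
```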
After the system time difference Δt of the edge ends is obtained, the video frame splicing flow is as shown in fig. 3: the video sequences from the edge ends are calibrated by the obtained system time difference, so that the two video frames being spliced are images obtained at the same moment.
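As a final illustration, here is a hedged sketch of this calibration step: it shifts one sequence's recorded system times by Δt and then selects frame pairs acquired at the same moment. The half-frame tolerance of about 17 ms at 30 fps is an assumption of this sketch, not a value given in the embodiment:

```python
def align_for_stitching(seq_a, seq_b, delta_t_ms, tolerance_ms=17):
    """Shift sequence B's system times by the measured offset, then pair
    each frame of A with the closest corrected frame of B.
    Both sequences are lists of (system_time_ms, image) tuples."""
    pairs = []
    corrected_b = [(t + delta_t_ms, img) for t, img in seq_b]
    for t_a, img_a in seq_a:
        t_b, img_b = min(corrected_b, key=lambda fb: abs(fb[0] - t_a))
        if abs(t_b - t_a) <= tolerance_ms:
            pairs.append((img_a, img_b))  # same-moment frames to stitch
    return pairs
```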
Fig. 4 and fig. 5 show the results before and after the removal of the interfering frames; it can be seen that the stability of the computed system time difference improves markedly once the duplicate and ghosted frames are removed.
Parts of the invention not described in detail are well known to those skilled in the art.
The above embodiments are merely illustrative of the technical ideas and features of the present invention, and the purpose of the embodiments is to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All changes, modifications, substitutions, combinations, and simplifications which may be made without departing from the spirit or scope of the present invention are to be interpreted as being equivalent in all respects.

Claims (4)

1. A video frame synchronization method for video splicing, comprising the steps of:
(1) acquiring a video sequence by using image acquisition equipment at different edge ends to the same millisecond-level clock;
(2) the edge end records the shooting time of each frame of image of the video sequence and transmits the video sequence to the server end;
(3) the server side identifies the obtained video sequence to obtain time information on the image;
(4) matching the video frames of different video sequences according to the time information of the obtained image;
(5) calculating the system time difference between video frames according to the matched video frames;
(6) taking the obtained time difference as a calibration offset during splicing: when splicing the video sequences, adding the obtained system time difference to the original times of the images, unifying the times of the different systems under the same dimension, and on this basis selecting video frame images with the same time for splicing.
2. The video frame synchronization method of claim 1, wherein in step (3), OCR digit recognition is used to obtain the time of the millisecond-level clock in the image.
3. The video frame synchronization method of claim 2, wherein the OCR digit recognition intercepts a region of interest, namely the region containing the displayed time, from the video frame according to the characteristics of the photographed clock, and performs digit recognition on the region of interest to obtain the time information of the clock captured in the image.
4. The video frame synchronization method according to claim 1, wherein the step (5) is specifically:
5-1, detecting the first frame images of the video sequence A and the sequence B to obtain the clock time of the first frame images;
5-2, converting the two obtained times and subtracting them to calculate the difference between them;
5-3, if the difference satisfies the set threshold, performing step 5-5;
5-4, when the difference is not satisfied, performing the following operations:
if the time of sequence A is greater than that of sequence B, selecting the next frame image of sequence B, re-detecting its clock time, and repeating step 5-2 with the current video frame of sequence A;
if the time of sequence A is less than that of sequence B, selecting the next frame image of sequence A, re-detecting its clock time, and repeating step 5-2 with the current video frame of sequence B;
5-5, recording the obtained time difference and repeating the above process until one of the video sequences ends;
5-6, taking the median of all the time differences as the time difference between the different edge-end systems.
CN202011422728.5A (priority date 2020-12-08, filing date 2020-12-08): Video frame synchronization method for video stitching, Active, granted as CN112565630B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011422728.5A CN112565630B (en) 2020-12-08 2020-12-08 Video frame synchronization method for video stitching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011422728.5A CN112565630B (en) 2020-12-08 2020-12-08 Video frame synchronization method for video stitching

Publications (2)

Publication Number Publication Date
CN112565630A (en) 2021-03-26
CN112565630B (en) 2023-05-05

Family

ID=75059640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011422728.5A Active CN112565630B (en) 2020-12-08 2020-12-08 Video frame synchronization method for video stitching

Country Status (1)

Country Link
CN (1) CN112565630B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1594315A1 (en) * 2004-05-05 2005-11-09 MacroSystem Digital Video AG Method and apparatus for synchronising videosignals
CN102857704A (en) * 2012-09-12 2013-01-02 天津大学 Multisource video stitching method with time domain synchronization calibration technology
CN107135330A (en) * 2017-07-04 2017-09-05 广东工业大学 A kind of method and apparatus of video frame synchronization
CN108206966A (en) * 2016-12-16 2018-06-26 杭州海康威视数字技术股份有限公司 A kind of video file synchronous broadcast method and device
CN110290287A (en) * 2019-06-27 2019-09-27 上海玄彩美科网络科技有限公司 Multi-camera frame synchronization method


Also Published As

Publication number Publication date
CN112565630B (en) 2023-05-05

Similar Documents

Publication Publication Date Title
EP2326091B1 (en) Method and apparatus for synchronizing video data
US9760999B2 (en) Image processing apparatus, imaging apparatus, and image processing method
US9247139B2 (en) Method for video background subtraction using factorized matrix completion
KR101524548B1 (en) Apparatus and method for alignment of images
JPH0799660A (en) Motion compensation predicting device
CN111340749B (en) Image quality detection method, device, equipment and storage medium
CN109714623B (en) Image display method and device, electronic equipment and computer readable storage medium
CN111696044B (en) Large-scene dynamic visual observation method and device
EP2296095B1 (en) Video descriptor generator
EP0632915B1 (en) A machine method for compensating for non-linear picture transformations, e.g. zoom and pan, in a video image motion compensation system
US11250581B2 (en) Information processing apparatus, information processing method, and storage medium
CN112330618B (en) Image offset detection method, device and storage medium
CN111696143B (en) Event data registration method and system
CN115830064B (en) Weak and small target tracking method and device based on infrared pulse signals
CN112565630B (en) Video frame synchronization method for video stitching
CN111160340A (en) Moving target detection method and device, storage medium and terminal equipment
CN108198204B (en) Zero-threshold nuclear density estimation moving target detection method
CN112396639A (en) Image alignment method
CN116188535A (en) Video tracking method, device, equipment and storage medium based on optical flow estimation
CN112991419B (en) Parallax data generation method, parallax data generation device, computer equipment and storage medium
CN114998283A (en) Lens blocking object detection method and device
CN114422777A (en) Image recognition-based time delay testing method and device and storage medium
CN111369592B (en) Newton interpolation-based rapid global motion estimation method
CN114821075A (en) Space target capturing method and device, terminal equipment and storage medium
Hosen et al. An Effective Multi-Camera Dataset and Hybrid Feature Matcher for Real-Time Video Stitching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant