CN112637488A - Edge fusion method and device for audio and video synchronous playing system - Google Patents

Edge fusion method and device for audio and video synchronous playing system

Info

Publication number
CN112637488A
CN112637488A
Authority
CN
China
Prior art keywords
audio
video
playing
projectors
panoramic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011501339.1A
Other languages
Chinese (zh)
Other versions
CN112637488B (en)
Inventor
杨培春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qidebao Technology Co ltd
Original Assignee
Shenzhen Puhui Zhilian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Puhui Zhilian Technology Co ltd filed Critical Shenzhen Puhui Zhilian Technology Co ltd
Priority to CN202011501339.1A priority Critical patent/CN112637488B/en
Publication of CN112637488A publication Critical patent/CN112637488A/en
Application granted granted Critical
Publication of CN112637488B publication Critical patent/CN112637488B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/04Synchronising
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/268Signal distribution or switching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides an edge fusion method and device of an audio and video synchronous playing system, wherein the edge fusion method comprises the following steps: controlling a plurality of cameras to obtain corresponding video images within a video range, and transmitting data streams corresponding to the video images to an edge fusion processor for edge fusion to obtain panoramic video images subjected to edge fusion processing; segmenting the panoramic image according to the number of projectors, and correspondingly projecting the image by using the projectors; and extracting audio data corresponding to the video, monitoring the video and audio playing progress in real time, and performing consistency adjustment on the video and the audio according to the progress. The apparatus comprises modules corresponding to the steps of the method.

Description

Edge fusion method and device for audio and video synchronous playing system
Technical Field
The invention provides an edge fusion method and device of an audio and video synchronous playing system, and belongs to the technical field of video processing.
Background
With the development of science and technology, people increasingly pursue display effects that are bright, ultra-large, pure in color and high in resolution; this is a latent requirement of human visual perception. The desire for large-picture, rich-color, high-brightness and high-resolution display grows ever stronger, while traditional seamed splicing approaches such as television walls, hard-spliced projection screens and box-type splicing walls lack picture integrity as well as brightness and color uniformity, so they can hardly meet this requirement. Edge blending, a seamless splicing technique that has emerged and grown rapidly in recent years, can markedly improve the visual effect of spliced images and has become the most effective way to meet this requirement. The edge fusion technique overlaps the edges of the pictures projected by a group of projectors and, through fusion processing, displays a single brighter, oversized, high-resolution picture without gaps, as if the whole picture were projected by one projector. When two or more projectors jointly project adjacent parts of a frame, part of the projected light overlaps; the principal function of edge fusion is to gradually attenuate the light in the overlapped portions of the two projectors so that the brightness and contrast of the overlap region are consistent with the surrounding image, making the whole picture complete and uniform rather than visibly the result of multi-projector splicing. However, in existing edge blending for video playing, two problems often occur: poor projection imaging caused by an overlap region that does not reach the standard, and asynchronous audio and video playing.
Disclosure of Invention
The invention provides an edge fusion method and device for an audio and video synchronous playing system, which are used to solve the problems of poor projection imaging effect caused by a substandard overlap region and of asynchronous audio and video playing:
the invention provides an edge fusion method of an audio and video synchronous playing system, which comprises the following steps:
controlling a plurality of cameras to obtain corresponding video images within a video range, and transmitting data streams corresponding to the video images to an edge fusion processor for edge fusion to obtain panoramic video images subjected to edge fusion processing;
segmenting the panoramic image according to the number of projectors, and correspondingly projecting the image by using the projectors;
and extracting audio data corresponding to the video, monitoring the video and audio playing progress in real time, and performing consistency adjustment on the video and the audio according to the progress.
Further, the segmenting the panoramic image according to the number of projectors and correspondingly projecting the image by using the projectors includes:
determining the number of the segmented image blocks to be obtained according to the number of the projectors, wherein the number of the segmented image blocks is consistent with the number of the projectors;
determining the range of an overlapping area between two adjacent projectors according to the proportion and the number of the divided image blocks;
dividing the panoramic image according to the number and the overlapping area of the divided image blocks to obtain corresponding divided image blocks; performing image fusion processing on the edge parts of every two adjacent segmented image blocks through an edge fusion processor to obtain segmented image blocks after the edge fusion processing;
adjusting the distance and the included angle between every two adjacent projectors by using the overlapping area range to enable the overlapping part of each divided image block projected by the projectors to be consistent with the overlapping area range;
and sending the segmented image blocks subjected to the edge fusion processing to corresponding projectors for projection.
Further, the overlapping area range between two adjacent projectors is obtained by the following formula:
Figure BDA0002843698490000021
wherein S iscArea not representing the overlap region range; sfThe area of the divided image block is represented, S represents the area corresponding to the panoramic image, and D represents the width of the divided image block; l representsDividing the length of the image block; λ represents an area adjustment coefficient, and the value range of the area adjustment coefficient is 0.78-0.87, preferably 0.81.
Further, the extracting of the audio data corresponding to the video, the real-time monitoring of the video and audio playing progress, and the video and audio consistency adjustment according to the progress comprise:
audio data corresponding to the video is extracted in real time,
simultaneously playing the audio and the panoramic video projected after fusion;
monitoring the playing progress of the panoramic video and the audio in real time, and judging whether the playing time points of the panoramic video and the audio are consistent or not;
if the audio playing time point is not consistent with the panoramic video playing time point, automatically adjusting the audio playing speed to keep the audio playing speed consistent with the panoramic video playing time, wherein the audio playing speed is adjusted through an audio speed adjusting model.
Further, the audio playing speed adjustment model is as follows:
Figure BDA0002843698490000031
wherein v istIndicating the adjusted audio playing speed; v. of0Indicating the audio playing speed before adjustment; v. of1Indicating the speed of video playback, T1Indicating the time length of the played video; t represents the overall duration of the video; t is0Representing the playing time of the audio; t issIndicating a preset standard deviation of time duration, TsThe value range of (A) is 3s-6s, preferably 5 s.
An edge blending device of an audio and video synchronous playing system, the edge blending device comprises:
the panoramic acquisition module is used for controlling the plurality of cameras to acquire corresponding video images in a video range and transmitting data streams corresponding to the video images to the edge fusion processor for edge fusion to obtain panoramic video images after the edge fusion processing;
the segmentation module is used for segmenting the panoramic image according to the number of the projectors and correspondingly projecting the image by using the projectors;
and the adjusting module is used for extracting audio data corresponding to the video, monitoring the video and audio playing progress in real time and carrying out video and audio consistency adjustment according to the progress.
Further, the segmentation module includes:
the image segmentation module is used for determining the number of segmented image blocks to be obtained according to the number of the projectors, and the number of the segmented image blocks is consistent with the number of the projectors;
the overlapping area determining module is used for determining the overlapping area range between two adjacent projectors according to the proportion and the number of the divided image blocks;
the image segmentation block edge processing module is used for segmenting the panoramic image according to the number and the overlapping area of the image segmentation blocks to obtain corresponding image segmentation blocks; performing image fusion processing on the edge parts of every two adjacent segmented image blocks through an edge fusion processor to obtain segmented image blocks after the edge fusion processing;
the angle adjusting module is used for adjusting the distance and the included angle between every two adjacent projectors by utilizing the overlapping area range so that the overlapping part of each divided image block projected by the projectors is consistent with the overlapping area range;
and the projection module is used for sending the segmented image blocks subjected to the edge fusion processing to the corresponding projectors for projection.
Further, the overlapping area range between two adjacent projectors is obtained by the following formula:
Figure BDA0002843698490000041
wherein S iscArea not representing the overlap region range; sfRepresenting the area of a divided image blockS represents the area corresponding to the panoramic image, and D represents the width of the divided image block; l represents the length of the divided image block; λ represents an area adjustment coefficient, and the value range of the area adjustment coefficient is 0.78-0.87, preferably 0.81.
Further, the adjustment module includes:
the extraction module is used for extracting the audio data corresponding to the video in real time,
the playing module is used for simultaneously playing the audio and the fused panoramic video for projection;
the monitoring module is used for monitoring the playing progress of the panoramic video and the audio in real time and judging whether the playing time points of the panoramic video and the audio are consistent or not;
and the audio adjusting module is used for automatically adjusting the audio playing speed when the audio playing time point is inconsistent with the panoramic video playing time point, so that the audio playing speed is consistent with the panoramic video playing time, wherein the audio playing speed is adjusted through an audio speed adjusting model.
Further, the audio playing speed adjustment model is as follows:
Figure BDA0002843698490000051
wherein v istIndicating the adjusted audio playing speed; v. of0Indicating the audio playing speed before adjustment; v. of1Indicating the speed of video playback, T1Indicating the time length of the played video; t represents the overall duration of the video; t is0Representing the playing time of the audio; t issIndicating a preset standard deviation of time duration, TsThe value range of (A) is 3s-6s, preferably 5 s.
The invention has the beneficial effects that:
the edge fusion method and the device of the audio and video synchronous playing system can acquire the area of the overlapping area between the video images to be played by two adjacent projectors effectively at one time according to the panoramic images actually shot by the camera and the number of the projectors, do not need to adjust the area of the overlapping area subsequently, improve the image processing efficiency effectively, and reduce the time wasted by repeated adjustment due to the fact that the overlapping area does not reach the standard effectively. Meanwhile, the audio playing speed and the video playing speed can be kept integrally consistent by adjusting the audio playing speed, and the synchronism of audio playing and video playing is improved.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
fig. 2 is a system block diagram of the apparatus of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
As shown in fig. 1, the edge fusion method of an audio and video synchronous playing system provided by the present invention includes:
s1, controlling a plurality of cameras to obtain corresponding video images in a video range, and transmitting data streams corresponding to the video images to an edge fusion processor for edge fusion to obtain panoramic video images after edge fusion processing;
s2, segmenting the panoramic image according to the number of projectors, and projecting the image correspondingly by using the projectors;
and S3, extracting audio data corresponding to the video, monitoring the video and audio playing progress in real time, and performing consistency adjustment of the video and the audio according to the progress.
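As a rough illustration of step S1, the sketch below stands in for the edge-fusion processor by averaging the shared columns of neighbouring camera frames before concatenating them into a panorama. The function name and the simple averaging rule are assumptions made for illustration only; the patent does not specify the fusion arithmetic of its processor.

```python
import numpy as np

def fuse_panorama(frames, overlap_px):
    """Hypothetical stand-in for the edge-fusion processor: average the
    shared overlap_px columns of neighbouring camera frames, then
    concatenate everything into one panoramic frame."""
    fused = frames[0].astype(float)
    for nxt in frames[1:]:
        nxt = nxt.astype(float)
        # Average the seam region shared by the two adjacent frames.
        seam = (fused[:, -overlap_px:] + nxt[:, :overlap_px]) / 2.0
        fused = np.hstack([fused[:, :-overlap_px], seam, nxt[:, overlap_px:]])
    return fused
```

For two 8-pixel-wide frames with a 2-pixel overlap, the result is a 14-pixel panorama whose seam columns hold the average of the two inputs.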
The working principle of the technical scheme is as follows: firstly, controlling a plurality of cameras to obtain corresponding video images in a video range, and transmitting data streams corresponding to the video images to an edge fusion processor for edge fusion to obtain panoramic video images after the edge fusion processing; then, dividing the panoramic image according to the number of the projectors, and correspondingly projecting the image by using the projectors; and finally, extracting audio data corresponding to the video, monitoring the video and audio playing progress in real time, and performing consistency adjustment on the video and the audio according to the progress.
The effect of the above technical scheme is as follows: the method can obtain, in a single pass, the overlap-region area between the video images to be played by two adjacent projectors according to the panoramic image actually shot by the cameras and the number of projectors, with no subsequent adjustment of the overlap area required; this effectively improves image processing efficiency and reduces the time wasted on repeated adjustment when the overlap region does not reach the standard. Meanwhile, by adjusting the audio playing speed, the audio and video playing speeds remain consistent overall, improving the synchronism of audio and video playing.
In an embodiment of the present invention, the segmenting a panoramic image according to the number of projectors and projecting the image by using the projectors correspondingly comprises:
s201, determining the number of segmented image blocks to be obtained according to the number of projectors, wherein the number of the segmented image blocks is consistent with the number of the projectors;
s202, determining the range of an overlapping area between two adjacent projectors according to the proportion and the number of the divided image blocks;
s203, segmenting the panoramic image according to the number and the overlapping area of the segmented image blocks to obtain corresponding segmented image blocks; performing image fusion processing on the edge parts of every two adjacent segmented image blocks through an edge fusion processor to obtain segmented image blocks after the edge fusion processing;
s204, adjusting the distance and the included angle between every two adjacent projectors by using the overlapping area range to enable the overlapping part of each divided image block projected by the projectors to be consistent with the overlapping area range;
and S205, sending the segmented image blocks subjected to the edge fusion processing to corresponding projectors for projection.
Wherein, the overlapping area range between two adjacent projectors is obtained by the following formula:
Figure BDA0002843698490000071
wherein S iscArea not representing the overlap region range; sfThe area of the divided image block is represented, S represents the area corresponding to the panoramic image, and D represents the width of the divided image block; l represents the length of the divided image block; λ represents an area adjustment coefficient, and the value range of the area adjustment coefficient is 0.78-0.87, preferably 0.81.
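Because the patent's overlap-area formula is reproduced only as a figure, the sketch below uses an assumed overlap fraction to illustrate how steps S201-S203 could cut a panorama into per-projector blocks whose neighbours share an overlap region. The function and its `overlap_ratio` parameter are hypothetical, not the patent's formula.

```python
def split_with_overlap(panorama_w, n_projectors, overlap_ratio=0.15):
    """Return (left, right) pixel column ranges, one block per projector.

    Each block is widened by half the overlap on each inner edge so
    adjacent projections share an overlap region.  overlap_ratio is an
    assumed fraction of the base block width, used only for illustration.
    """
    base = panorama_w / n_projectors
    overlap = base * overlap_ratio
    ranges = []
    for i in range(n_projectors):
        left = max(0.0, i * base - overlap / 2)
        right = min(panorama_w, (i + 1) * base + overlap / 2)
        ranges.append((int(round(left)), int(round(right))))
    return ranges
```

For a 1920-pixel panorama and three projectors, each inner boundary yields a shared band of columns that both neighbouring blocks contain.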
The working principle of the technical scheme is as follows: firstly, determining the number of divided image blocks to be obtained according to the number of projectors, wherein the number of the divided image blocks is consistent with the number of the projectors; then, determining the range of an overlapping area between two adjacent projectors according to the proportion and the number of the divided image blocks; then, the panoramic image is divided according to the number and the overlapping area of the divided image blocks to obtain corresponding divided image blocks; performing image fusion processing on the edge parts of every two adjacent segmented image blocks through an edge fusion processor to obtain segmented image blocks after the edge fusion processing; then, adjusting the distance and the included angle between every two adjacent projectors by using the overlapping area range to enable the overlapping part of each divided image block projected by the projectors to be consistent with the overlapping area range; and finally, sending the segmented image blocks subjected to the edge fusion processing to a corresponding projector for projection.
The effect of the above technical scheme is as follows: according to the method and the formula, the overlap-region area between the video images to be played by two adjacent projectors can be obtained in a single pass from the panoramic image actually shot by the cameras and the number of projectors, with no subsequent adjustment of the overlap area required; this effectively improves image processing efficiency and reduces the time wasted on repeated adjustment when the overlap region does not reach the standard. Meanwhile, because the overlap-region range is set from the length and width of the divided image blocks, the obtained overlap region matches the actual size of the video image projected by the projector, so it is applicable to projectors of various models and specifications without tailoring the acquisition method to each projection image specification, which improves the universality of the method.
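The image fusion processing applied to the edge parts of adjacent divided image blocks (step S203) is commonly implemented with a linear brightness ramp across the overlap, so that the summed projected brightness stays roughly constant. The sketch below shows that standard technique as a stand-in; the patent gives its own fusion formula only as a figure, so this is not the patent's exact math.

```python
import numpy as np

def ramp_blend(left_block, right_block, overlap_px):
    """Cross-fade the shared overlap_px columns of two adjacent blocks:
    the left block fades out linearly while the right block fades in."""
    alpha = np.linspace(1.0, 0.0, overlap_px)          # weight of left block
    lo = left_block[:, -overlap_px:].astype(float)
    ro = right_block[:, :overlap_px].astype(float)
    blended = alpha * lo + (1.0 - alpha) * ro
    out_left = left_block.astype(float).copy()
    out_right = right_block.astype(float).copy()
    out_left[:, -overlap_px:] = blended                # both blocks carry the
    out_right[:, :overlap_px] = blended                # same blended seam
    return out_left, out_right
```

When both projectors display the blended seam, the two ramps sum to full brightness at every overlap column, which is why the overlap is not visible as a bright band.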
In an embodiment of the present invention, the extracting audio data corresponding to a video, monitoring video and audio playing progress in real time, and performing consistency adjustment of the video and the audio according to the progress includes:
s301, extracting audio data corresponding to the video in real time,
s302, simultaneously playing the audio and the fused panoramic video for projection;
s303, monitoring the playing progress of the panoramic video and the audio in real time, and judging whether the playing time points of the panoramic video and the audio are consistent;
s304, if the audio playing time point is not consistent with the panoramic video playing time point, automatically adjusting the audio playing speed to keep the audio playing speed consistent with the panoramic video playing time, wherein the audio playing speed is adjusted through an audio speed adjusting model.
Wherein, the audio playing speed adjusting model is as follows:
Figure BDA0002843698490000081
wherein v istIndicating the adjusted audio playing speed; v. of0Indicating the audio playing speed before adjustment; v. of1Indicating the speed of video playback, T1Indicating the time length of the played video; t represents the overall duration of the video; t is0Representing the playing time of the audio; t issIndicating a preset standard deviation of time duration, TsThe value range of (A) is 3s-6s, preferably 5 s.
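Since the speed adjustment model itself appears only as a figure, the function below is an illustrative stand-in rather than the patent's formula: it chooses a playback rate that closes the audio/video gap within the standard duration Ts, and clamps the rate to a comfortable band so the change is not audible as a glitch. The gap-closing rule and the clamp limits are assumptions.

```python
def adjusted_audio_speed(v0, video_pos_s, audio_pos_s, t_s=5.0,
                         min_rate=0.9, max_rate=1.1):
    """Illustrative audio-rate model: close the audio/video position gap
    within t_s seconds, clamped to [min_rate, max_rate] of normal speed."""
    gap = video_pos_s - audio_pos_s          # seconds the audio must make up
    rate = v0 + gap / t_s                    # spread the correction over t_s
    return max(min_rate, min(max_rate, rate))
```

For example, audio 0.25 s behind video with t_s = 5 s yields a rate of about 1.05x; a very large gap is clamped rather than producing an uncomfortably fast playback.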
The working principle of the technical scheme is as follows: firstly, audio data corresponding to a video is extracted in real time, and the audio and a panoramic video which is projected after fusion are played simultaneously; then, monitoring the playing progress of the panoramic video and the audio in real time, and judging whether the playing time points of the panoramic video and the audio are consistent or not; if the audio playing time point is not consistent with the panoramic video playing time point, automatically adjusting the audio playing speed to keep the audio playing speed consistent with the panoramic video playing time, wherein the audio playing speed is adjusted through an audio speed adjusting model.
The effect of the above technical scheme is as follows: by adjusting the audio playing speed, the audio and video playing speeds remain consistent overall, improving the synchronism of audio and video playing. Adjusting the audio playing speed through the above model also effectively improves the match between the audio adjustment and the overall audio and video playing progress, and avoids the situation in which, when audio and video fall out of sync near the end of playback, the adjustment cannot finish before the video ends and the two can never be brought back into synchronization. Meanwhile, adjusting the audio speed in this way improves the listener's comfort during the adjustment: it prevents audio glitches caused by overly fast or jumpy speed changes, and prevents dropped or skipped audio caused by too large a difference between the adjusted and original playing speeds, thereby effectively improving both listener comfort and audio playing integrity during speed adjustment.
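The monitoring-and-judgment step (S303) can be reduced to a pure decision function over the two playback clocks; the tolerance below (roughly one frame at 25 fps) is an assumed threshold, not a value taken from the patent.

```python
def sync_action(video_pos_s, audio_pos_s, tolerance_s=0.04):
    """Decide how to nudge the audio toward the video clock.

    Returns 'speed_up' when audio lags video, 'slow_down' when it leads,
    and 'in_sync' when the drift is within the assumed tolerance."""
    drift = audio_pos_s - video_pos_s
    if abs(drift) <= tolerance_s:
        return 'in_sync'
    return 'speed_up' if drift < 0 else 'slow_down'
```

A playback loop would call this each time it samples the two progress counters, triggering the speed adjustment model only when the result is not 'in_sync'.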
The embodiment of the invention provides an edge fusion device of an audio and video synchronous playing system, as shown in fig. 2, the edge fusion device comprises:
the panoramic acquisition module is used for controlling the plurality of cameras to acquire corresponding video images in a video range and transmitting data streams corresponding to the video images to the edge fusion processor for edge fusion to obtain panoramic video images after the edge fusion processing;
the segmentation module is used for segmenting the panoramic image according to the number of the projectors and correspondingly projecting the image by using the projectors;
and the adjusting module is used for extracting audio data corresponding to the video, monitoring the video and audio playing progress in real time and carrying out video and audio consistency adjustment according to the progress.
The working principle of the technical scheme is as follows: firstly, a panorama acquisition module is used for controlling a plurality of cameras to acquire corresponding video images in a video range, and data streams corresponding to the video images are transmitted to an edge fusion processor for edge fusion, so that a panorama video image after edge fusion processing is obtained; then, segmenting the panoramic image according to the number of projectors through a segmentation module, and correspondingly projecting the image by using the projectors; and finally, extracting audio data corresponding to the video by adopting an adjusting module, monitoring the video and audio playing progress in real time, and carrying out video and audio consistency adjustment according to the progress.
The effect of the above technical scheme is as follows: the device can obtain, in a single pass, the overlap-region area between the video images to be played by two adjacent projectors according to the panoramic image actually shot by the cameras and the number of projectors, with no subsequent adjustment of the overlap area required; this effectively improves image processing efficiency and reduces the time wasted on repeated adjustment when the overlap region does not reach the standard. Meanwhile, by adjusting the audio playing speed, the audio and video playing speeds remain consistent overall, improving the synchronism of audio and video playing.
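The three-module structure of the device can be wired together as in the sketch below, where the callables are hypothetical stand-ins for the panorama acquisition module, the segmentation module and the adjustment module; the class and parameter names are assumptions for illustration.

```python
class EdgeFusionDevice:
    """Sketch of the device: three pluggable modules run in sequence,
    mirroring the acquisition -> segmentation -> adjustment pipeline."""

    def __init__(self, acquire, segment, adjust):
        self.acquire = acquire    # panorama acquisition module
        self.segment = segment    # segmentation module
        self.adjust = adjust      # audio/video adjustment module

    def run(self, scene, n_projectors):
        panorama = self.acquire(scene)              # fused panoramic image
        blocks = self.segment(panorama, n_projectors)
        return self.adjust(blocks)                  # synchronized playback
```

Any concrete implementations of the three modules with matching call signatures can be injected, which keeps the device description independent of specific camera or projector hardware.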
In one embodiment of the present invention, the segmentation module includes:
the image segmentation module is used for determining the number of segmented image blocks to be obtained according to the number of the projectors, and the number of the segmented image blocks is consistent with the number of the projectors;
the overlapping area determining module is used for determining the overlapping area range between two adjacent projectors according to the proportion and the number of the divided image blocks;
the image segmentation block edge processing module is used for segmenting the panoramic image according to the number and the overlapping area of the image segmentation blocks to obtain corresponding image segmentation blocks; performing image fusion processing on the edge parts of every two adjacent segmented image blocks through an edge fusion processor to obtain segmented image blocks after the edge fusion processing;
the angle adjusting module is used for adjusting the distance and the included angle between every two adjacent projectors by utilizing the overlapping area range so that the overlapping part of each divided image block projected by the projectors is consistent with the overlapping area range;
and the projection module is used for sending the segmented image blocks subjected to the edge fusion processing to the corresponding projectors for projection.
Wherein, the overlapping area range between two adjacent projectors is obtained by the following formula:
Figure BDA0002843698490000101
wherein S iscArea not representing the overlap region range; sfThe area of the divided image block is represented, S represents the area corresponding to the panoramic image, and D represents the width of the divided image block; l represents the length of the divided image block; λ represents an area adjustment coefficient, and the value range of the area adjustment coefficient is 0.78-0.87, preferably 0.81.
The working principle of the technical scheme is as follows: firstly, determining the number of divided image blocks to be obtained according to the number of projectors through an image dividing module, wherein the number of the divided image blocks is consistent with the number of the projectors; then, an overlapping area determining module is used for determining the overlapping area range between two adjacent projectors according to the proportion and the number of the divided image blocks; then, a segmentation image block edge processing module is adopted to segment the panoramic image according to the number and the overlapping area of the segmentation image blocks to obtain corresponding segmentation image blocks; performing image fusion processing on the edge parts of every two adjacent segmented image blocks through an edge fusion processor to obtain segmented image blocks after the edge fusion processing; then, the distance and the included angle between every two adjacent projectors are adjusted by utilizing the overlapping area range through an angle adjusting module, so that the overlapping part of each divided image block projected by the projectors is consistent with the overlapping area range; and finally, sending the segmented image blocks subjected to edge fusion processing to corresponding projectors for projection through a projection module.
The effect of the above technical scheme is as follows: according to the method and the formula, the obtained overlapping area can obtain the area of the overlapping area between the video images to be played by two adjacent projectors effectively at one time according to the panoramic image actually shot by the camera and the number of the projectors, the subsequent adjustment of the area of the overlapping area is not needed, the image processing efficiency is effectively improved, and the time wasted by repeated adjustment due to the fact that the overlapping area does not reach the standard is effectively reduced. Meanwhile, the overlapping area obtained by the formula can be effectively combined with the actual situation of the size of the actual projection video image of the projector, namely, the overlapping area range is set according to the length and the width of the segmentation image block, so that the overlapping area can be obtained and suitable for various projector models and specifications, the obtaining mode of the overlapping area is not needed to be smoother according to different specifications of the projection video image of the projector, and the universality of the steps of the method are improved.
In one embodiment of the present invention, the adjusting module includes:
the extraction module is used for extracting the audio data corresponding to the video in real time,
the playing module is used for simultaneously playing the audio and the fused panoramic video for projection;
the monitoring module is used for monitoring the playing progress of the panoramic video and the audio in real time and judging whether the playing time points of the panoramic video and the audio are consistent or not;
and the audio adjusting module is used for automatically adjusting the audio playing speed when the audio playing time point is inconsistent with the panoramic video playing time point, so that the audio playing progress is kept consistent with that of the panoramic video, wherein the audio playing speed is adjusted through an audio speed adjusting model.
Wherein, the audio playing speed adjusting model is as follows:
[Formula image (BDA0002843698490000121) — defines the adjusted audio playing speed vt as a function of v0, v1, T1, T, T0 and Ts; not reproduced in the text.]
wherein vt represents the adjusted audio playing speed; v0 represents the audio playing speed before adjustment; v1 represents the video playing speed; T1 represents the duration of the video already played; T represents the overall duration of the video; T0 represents the duration of the audio already played; and Ts represents a preset standard duration deviation, the value range of which is 3s-6s, preferably 5s.
The working principle of the technical scheme is as follows: firstly, the extraction module extracts the audio data corresponding to the video in real time, and the playing module then plays the audio and the fused, projected panoramic video simultaneously; next, the monitoring module monitors the playing progress of the panoramic video and the audio in real time and judges whether their playing time points are consistent; finally, when the audio playing time point is inconsistent with the panoramic video playing time point, the audio adjusting module automatically adjusts the audio playing speed so that the audio playing progress stays consistent with that of the panoramic video, the audio playing speed being adjusted through the audio speed adjusting model.
The effect of the above technical scheme is as follows: by adjusting the audio playing speed, the audio playing speed and the video playing speed can keep integral consistency, and the synchronism of the audio playing and the video playing is improved. Meanwhile, the audio playing speed is adjusted through the formula, the matching performance of audio adjustment and the overall video and audio playing progress can be effectively improved, and the problem that when the audio and video are asynchronous at the position close to the end of the overall video playing and before the video is completely played, the audio adjustment is not finished, so that the video and the audio cannot be synchronized and consistent before the overall video playing is avoided. Meanwhile, the speed of the audio is adjusted through the formula and the mode, so that the comfort degree of the audio progress in the ears of listeners in the audio adjusting process can be effectively improved, the problems of audio faults caused by too fast or jumping audio speed adjustment or audio faults and missing playing caused by too large difference between the audio playing speed and the original playing speed are prevented, and the comfort degree of the listeners and the audio playing integrity in the audio speed adjusting process are effectively improved.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. An edge fusion method of an audio and video synchronous playing system is characterized in that the edge fusion method comprises the following steps:
controlling a plurality of cameras to obtain corresponding video images within a video range, and transmitting data streams corresponding to the video images to an edge fusion processor for edge fusion to obtain panoramic video images subjected to edge fusion processing;
segmenting the panoramic image according to the number of projectors, and correspondingly projecting the image by using the projectors;
and extracting audio data corresponding to the video, monitoring the video and audio playing progress in real time, and performing consistency adjustment on the video and the audio according to the progress.
2. The edge blending method according to claim 1, wherein the segmenting the panoramic image according to the number of projectors and projecting the panoramic image by using the projectors correspondingly comprises:
determining the number of the segmented image blocks to be obtained according to the number of the projectors, wherein the number of the segmented image blocks is consistent with the number of the projectors;
determining the range of an overlapping area between two adjacent projectors according to the proportion and the number of the divided image blocks;
dividing the panoramic image according to the number and the overlapping area of the divided image blocks to obtain corresponding divided image blocks; performing image fusion processing on the edge parts of every two adjacent segmented image blocks through an edge fusion processor to obtain segmented image blocks after the edge fusion processing;
adjusting the distance and the included angle between every two adjacent projectors by using the overlapping area range to enable the overlapping part of each divided image block projected by the projectors to be consistent with the overlapping area range;
and sending the segmented image blocks subjected to the edge fusion processing to corresponding projectors for projection.
3. The edge blending method according to claim 2, wherein the overlapping region range between two adjacent projectors is obtained by the following formula:
[Formula image (FDA0002843698480000011) — defines the overlap-region area Sc as a function of Sf, S, D, L and λ; not reproduced in the text.]
wherein Sc represents the area of the overlap region range; Sf represents the area of a divided image block; S represents the area corresponding to the panoramic image; D represents the width of a divided image block; L represents the length of a divided image block; and λ represents an area adjustment coefficient, the value range of which is 0.78-0.87.
4. The edge fusion method according to claim 1, wherein the extracting of audio data corresponding to the video, monitoring video and audio playing progress in real time, and performing consistency adjustment of the video and the audio according to the progress comprises:
audio data corresponding to the video is extracted in real time,
simultaneously playing the audio and the panoramic video projected after fusion;
monitoring the playing progress of the panoramic video and the audio in real time, and judging whether the playing time points of the panoramic video and the audio are consistent or not;
if the audio playing time point is inconsistent with the panoramic video playing time point, automatically adjusting the audio playing speed so that the audio playing progress is kept consistent with that of the panoramic video, wherein the audio playing speed is adjusted through an audio speed adjusting model.
5. The edge blending method according to claim 4, wherein the audio playback speed adjustment model is:
[Formula image (FDA0002843698480000021) — defines the adjusted audio playing speed vt as a function of v0, v1, T1, T, T0 and Ts; not reproduced in the text.]
wherein vt represents the adjusted audio playing speed; v0 represents the audio playing speed before adjustment; v1 represents the video playing speed; T1 represents the duration of the video already played; T represents the overall duration of the video; T0 represents the duration of the audio already played; and Ts represents a preset standard duration deviation, the value range of which is 3s-6s.
6. An edge blending device of an audio and video synchronous playing system is characterized in that the edge blending device comprises:
the panoramic acquisition module is used for controlling the plurality of cameras to acquire corresponding video images in a video range and transmitting data streams corresponding to the video images to the edge fusion processor for edge fusion to obtain panoramic video images after the edge fusion processing;
the segmentation module is used for segmenting the panoramic image according to the number of the projectors and correspondingly projecting the image by using the projectors;
and the adjusting module is used for extracting audio data corresponding to the video, monitoring the video and audio playing progress in real time and carrying out video and audio consistency adjustment according to the progress.
7. The edge blending device of claim 6, wherein the segmentation module comprises:
the image segmentation module is used for determining the number of segmented image blocks to be obtained according to the number of the projectors, and the number of the segmented image blocks is consistent with the number of the projectors;
the overlapping area determining module is used for determining the overlapping area range between two adjacent projectors according to the proportion and the number of the divided image blocks;
the image segmentation block edge processing module is used for segmenting the panoramic image according to the number and the overlapping area of the image segmentation blocks to obtain corresponding image segmentation blocks; performing image fusion processing on the edge parts of every two adjacent segmented image blocks through an edge fusion processor to obtain segmented image blocks after the edge fusion processing;
the angle adjusting module is used for adjusting the distance and the included angle between every two adjacent projectors by utilizing the overlapping area range so that the overlapping part of each divided image block projected by the projectors is consistent with the overlapping area range;
and the projection module is used for sending the segmented image blocks subjected to the edge fusion processing to the corresponding projectors for projection.
8. The edge blending device of claim 7, wherein the range of the overlapping region between two adjacent projectors is obtained by the following formula:
[Formula image (FDA0002843698480000031) — defines the overlap-region area Sc as a function of Sf, S, D, L and λ; not reproduced in the text.]
wherein Sc represents the area of the overlap region range; Sf represents the area of a divided image block; S represents the area corresponding to the panoramic image; D represents the width of a divided image block; L represents the length of a divided image block; and λ represents an area adjustment coefficient, the value range of which is 0.78-0.87.
9. The edge blending device of claim 6, wherein the adjustment module comprises:
the extraction module is used for extracting the audio data corresponding to the video in real time,
the playing module is used for simultaneously playing the audio and the fused panoramic video for projection;
the monitoring module is used for monitoring the playing progress of the panoramic video and the audio in real time and judging whether the playing time points of the panoramic video and the audio are consistent or not;
and the audio adjusting module is used for automatically adjusting the audio playing speed when the audio playing time point is inconsistent with the panoramic video playing time point, so that the audio playing progress is kept consistent with that of the panoramic video, wherein the audio playing speed is adjusted through an audio speed adjusting model.
10. The edge blending device of claim 9, wherein the audio playback speed adjustment model is:
[Formula image (FDA0002843698480000041) — defines the adjusted audio playing speed vt as a function of v0, v1, T1, T, T0 and Ts; not reproduced in the text.]
wherein vt represents the adjusted audio playing speed; v0 represents the audio playing speed before adjustment; v1 represents the video playing speed; T1 represents the duration of the video already played; T represents the overall duration of the video; T0 represents the duration of the audio already played; and Ts represents a preset standard duration deviation, the value range of which is 3s-6s.
CN202011501339.1A 2020-12-17 2020-12-17 Edge fusion method and device for audio and video synchronous playing system Active CN112637488B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011501339.1A CN112637488B (en) 2020-12-17 2020-12-17 Edge fusion method and device for audio and video synchronous playing system


Publications (2)

Publication Number Publication Date
CN112637488A true CN112637488A (en) 2021-04-09
CN112637488B CN112637488B (en) 2022-02-22

Family

ID=75317399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011501339.1A Active CN112637488B (en) 2020-12-17 2020-12-17 Edge fusion method and device for audio and video synchronous playing system

Country Status (1)

Country Link
CN (1) CN112637488B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005117431A1 (en) * 2004-05-26 2005-12-08 Vividas Technologies Pty Ltd Method for synchronising video and audio data
CN101303880A (en) * 2008-06-30 2008-11-12 北京中星微电子有限公司 Method and apparatus for recording and playing audio-video document
CN102081284A (en) * 2010-12-08 2011-06-01 苏州创捷传媒展览有限公司 Edge blending method for splicing multiple projection images
CN201887871U (en) * 2010-12-24 2011-06-29 苏州创捷传媒展览有限公司 Control system for large screen projection display
CN201947404U (en) * 2010-04-12 2011-08-24 范治江 Panoramic video real-time splice display system
CN103442309A (en) * 2013-08-01 2013-12-11 珠海全志科技股份有限公司 Method and device for keeping audio and video synchronization by using speed conversion algorithm
CN110324689A (en) * 2019-07-08 2019-10-11 广州酷狗计算机科技有限公司 Method, apparatus, terminal and the storage medium that audio-visual synchronization plays
CN110418183A (en) * 2019-08-05 2019-11-05 北京字节跳动网络技术有限公司 Audio and video synchronization method, device, electronic equipment and readable medium


Also Published As

Publication number Publication date
CN112637488B (en) 2022-02-22

Similar Documents

Publication Publication Date Title
WO2018121333A1 (en) Real-time generation method for 360-degree vr panoramic graphic image and video
CN107087123B (en) Real-time high-definition image matting method based on cloud processing
CN105072314A (en) Virtual studio implementation method capable of automatically tracking objects
US20060165310A1 (en) Method and apparatus for a virtual scene previewing system
CN107358577B (en) Rapid splicing method of cubic panoramic image
US20070247518A1 (en) System and method for video processing and display
WO2017113534A1 (en) Method, device, and system for panoramic photography processing
CN105407254A (en) Real-time video and image synthesis method, real-time video and image synthesis device and global shooting equipment
CN111277764B (en) 4K real-time video panorama stitching method based on GPU acceleration
CN114979689B (en) Multi-machine-position live broadcast guide method, equipment and medium
CN113923377A (en) Virtual film-making system of LED (light emitting diode) circular screen
US8933990B2 (en) Method for 3D visual mapping using 3D stereoscopic video content
CN116071471A (en) Multi-machine-position rendering method and device based on illusion engine
CN111083368A (en) Simulation physics cloud platform panoramic video display system based on high in clouds
CN116781958B (en) XR-based multi-machine-position presentation system and method
CN112637488B (en) Edge fusion method and device for audio and video synchronous playing system
CN107277467A (en) A kind of monitor video joining method
CN112887589A (en) Panoramic shooting method and device based on unmanned aerial vehicle
CN109361897A (en) The joining method of monitor video
CN112581416B (en) Edge fusion processing and control system and method for playing video
WO2022105584A1 (en) Method and apparatus for creating panoramic picture on basis of large screen, and intelligent terminal and medium
CN112019747B (en) Foreground tracking method based on holder sensor
CN111901579A (en) Large-scene projection display splicing method
CN114762353A (en) Device and method for playing virtual reality images input by multiple cameras in real time
CN115866311B (en) Virtual screen surrounding atmosphere rendering method for intelligent glasses

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: 518000 Room 201, building 4, software industry base, No. 19, 17 and 18, Haitian 1st Road, Binhai community, Yuehai street, Nanshan District, Shenzhen, Guangdong

Patentee after: Shenzhen qidebao Technology Co.,Ltd.

Address before: 518000 1705, satellite building, 61 Gaoxin South 9th Road, Gaoxin high tech Zone community, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen Puhui Zhilian Technology Co.,Ltd.