TWI773047B - Multi-video image setting method and multi-video processing method - Google Patents

Multi-video image setting method and multi-video processing method Download PDF

Info

Publication number
TWI773047B
Authority
TW
Taiwan
Prior art keywords
image
partial area
corrected
feature
images
Prior art date
Application number
TW109145832A
Other languages
Chinese (zh)
Other versions
TW202226837A (en)
Inventor
劉建村
Original Assignee
宏正自動科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 宏正自動科技股份有限公司
Priority to TW109145832A priority Critical patent/TWI773047B/en
Priority to CN202111287535.8A priority patent/CN114666635B/en
Publication of TW202226837A publication Critical patent/TW202226837A/en
Application granted granted Critical
Publication of TWI773047B publication Critical patent/TWI773047B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43072Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of multiple content streams on the same device
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

A multi-video image processing method includes the following steps. First, a first video stream including a plurality of first images is received. Then, a second video stream including a plurality of second images is received. Next, a step of capturing overlapping image areas is performed, which includes: obtaining a first partial area of each first image; and obtaining a second partial area of each second image, wherein the image of the first partial area and the image of the second partial area are the same and overlap. When the content of the image of the first partial area or the image of the second partial area changes, the first video stream and the second video stream are synchronized.

Description

Multi-video image setting method and multi-video image processing method

The present invention relates to an image setting method and an image processing method, and more particularly to a multi-video image setting method and a multi-video image processing method.

In video post-production, one of the problems to be solved is that video and audio are out of sync. There are currently two main approaches to video synchronization: wireless (RF) synchronous transmission and audio feature comparison. In wireless synchronous transmission, each camera is electrically connected to a wireless receiver, and every receiver receives the signal of the same wireless transmitter, thereby synchronizing the videos. In audio feature comparison, the audio track of each video file is obtained, and the videos are synchronized by analyzing the audio features of the tracks. However, audio feature analysis requires sampling over a period of time, which causes a long delay; moreover, when shooting outdoors, the recording quality of a camera is usually poor, so the synchronization achieved by audio feature comparison is often unsatisfactory. There is therefore an urgent need for a technique that improves on these conventional approaches.

Accordingly, the present invention provides a multi-video image setting method and a multi-video image processing method that improve on the conventional approaches.

An embodiment of the present invention provides a multi-video image processing method, which includes the following steps: receiving a first video stream including a plurality of first images; receiving a second video stream including a plurality of second images; and performing an overlapping image area capturing step, which includes obtaining a first partial area of each first image and obtaining a second partial area of each second image, wherein the image of the first partial area and the image of the second partial area are the same and overlap. When the content of the image of the first partial area or of the second partial area changes, the first video stream and the second video stream are synchronized.
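The synchronization idea in the steps above can be sketched in a few lines of Python. This is a toy illustration under simplified assumptions (frames as 2D lists, partial areas as hand-chosen rectangles, a content change detected by exact pixel comparison); the names `crop`, `first_change_index`, and `sync_offset` are illustrative helpers, not part of the claimed method.

```python
def crop(frame, region):
    """Extract a rectangular partial area (top, left, bottom, right) from a frame."""
    t, l, b, r = region
    return [row[l:r] for row in frame[t:b]]

def first_change_index(frames, region):
    """Index of the first frame whose partial-area content differs from the
    previous frame's, or None if the shared content never changes."""
    prev = crop(frames[0], region)
    for i, frame in enumerate(frames[1:], start=1):
        cur = crop(frame, region)
        if cur != prev:
            return i
        prev = cur
    return None

def sync_offset(stream1, region1, stream2, region2):
    """Frame offset between the two streams, derived from when the shared
    (overlapping) content first changes in each stream."""
    return first_change_index(stream1, region1) - first_change_index(stream2, region2)
```

Because both partial areas show the same scene content, the change appears in both streams at the same real-world instant, so the difference of the two change indices gives the frame offset needed to align them.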

Another embodiment of the present invention provides a multi-video image setting method, which includes the following steps: receiving a first setting image; receiving a second setting image; performing a feature analysis step, which includes obtaining a plurality of first feature points of the first setting image and obtaining a plurality of second feature points of the second setting image; and performing an overlapping image area analysis step, which includes: comparing the first feature points with the second feature points to obtain at least one matched first matching feature point and at least one matched second matching feature point, wherein the at least one first matching feature point is at least one of the first feature points and the at least one second matching feature point is at least one of the second feature points; obtaining, according to the first matching feature points, a first partial area of the first setting image, wherein the first partial area contains the at least one first matching feature point; and obtaining, according to the second matching feature points, a second partial area of the second setting image, wherein the second partial area contains the at least one second matching feature point. The image of the first partial area overlaps the image of the second partial area.

For a better understanding of the above and other aspects of the present invention, embodiments are described in detail below with reference to the accompanying drawings.

10: First camera
20: Second camera
30: Subject
100: Multi-video image processing device
110: First input port
120: Second input port
130: Image processing unit
B11: First boundary
B12: Second boundary
B13: Third boundary
B14: Fourth boundary
B21: Fifth boundary
B22: Sixth boundary
B23: Seventh boundary
B24: Eighth boundary
BP1: First boundary point
BP2: Second boundary point
CM1, CM1': First corrected image
CM2, CM2': Second corrected image
CM11, CM11', CM21, CM12': Corrected images
D1: Direction
D2: Direction
F1: First feature point
FC11, FC12, FC13: First feature set
F2: Second feature point
FC21, FC22, FC23: Second feature set
MF11~MF14: First matching feature points
MF21~MF24: Second matching feature points
CM: Stitched image
RE1, RE1': First partial area
RE2, RE2': Second partial area
RE3': Third partial area
RE4': Fourth partial area
S1: First distance
S2: Second distance
SM1: First setting image
SM11: Image
SM2: Second setting image
SM21: Image
VS1: First video stream
VS2: Second video stream
VM1: First image
VM2: Second image
△T: Time difference
S110, S120~S130, S131~S132, S140, S141~S142, S150, S151~S153, S210, S220, S230, S231~S232, S240, S241~S242, S250, S260, S261~S262, S270, S271~S272, S280, S371~S372: Steps

FIG. 1 is a functional block diagram of a multi-video image processing apparatus according to an embodiment of the present invention.

FIG. 2 is a flowchart of a multi-video image setting method according to an embodiment of the present invention.

FIGS. 3A to 3D illustrate the process of performing the multi-video image setting method with the multi-video image processing apparatus of FIG. 1.

FIG. 4 is a flowchart of a multi-video image processing method according to an embodiment of the present invention.

FIG. 5 is a schematic diagram of the multi-video image processing apparatus of FIG. 1 receiving video streams.

FIGS. 6A to 6G illustrate the process of performing the multi-video image processing method with the multi-video image processing apparatus of FIG. 1.

FIG. 7 is a flowchart of a synchronization step according to another embodiment of the present invention.

Please refer to FIG. 1, which shows a functional block diagram of a multi-video image processing apparatus 100 according to an embodiment of the present invention. The multi-video image processing apparatus 100 includes a first input port 110, a second input port 120, and an image processing unit 130. The image processing unit 130 is coupled to the first input port 110 and the second input port 120.

As shown in FIG. 1, the first input port 110 receives the first setting image SM1, and the second input port 120 receives the second setting image SM2. In one embodiment, the first setting image SM1 comes from the first camera 10 and the second setting image SM2 comes from the second camera 20, where the first camera 10 and the second camera 20 both face the same subject 30 to capture images of the subject 30. The embodiment does not limit the type of the subject 30; it may be a person, an object, a landscape, an indoor scene, an outdoor scene, and so on.

The image processing unit 130 obtains a corresponding first partial area RE1 from the first setting image SM1 and a corresponding second partial area RE2 from the second setting image SM2, wherein the image of the first partial area RE1 overlaps the image of the second partial area RE2 (see FIG. 3D, described in detail later). With the overlap information of the image of the first partial area RE1 and the image of the second partial area RE2, the two video streams in the subsequent multi-video image processing flow can be synchronized quickly.

The flow of the multi-video image setting method according to an embodiment of the present invention is described below with reference to FIGS. 2 and 3A to 3D.

Please refer to FIG. 2 and FIGS. 3A to 3D. FIG. 2 is a flowchart of the multi-video image setting method according to an embodiment of the present invention, and FIGS. 3A to 3D illustrate the process of performing the method with the multi-video image processing apparatus 100 of FIG. 1.

In step S110, the first input port 110 receives the first setting image SM1. In this embodiment, the subject 30 includes, for example, a chair and a window; therefore, the image SM11 in the first setting image SM1 is, for example, an image containing a chair and a window (as shown in FIG. 3A).

In step S120, the second input port 120 receives the second setting image SM2. The image SM21 in the second setting image SM2 is, for example, an image containing a chair and a window (as shown in FIG. 3A).

In step S130, the image processing unit 130 performs an image correction step. The following is only one example. In detail, before performing the image correction step, the image processing unit 130 first determines whether the viewing angles of the first camera 10 and the second camera 20 facing the subject 30 are the same (e.g., parallel) or different (e.g., not parallel), and/or whether the scales of the image SM11 and the image SM21 are the same (e.g., a 1:1 size ratio) or different (e.g., not 1:1). When the viewing angles differ, as shown in FIG. 3A, the image SM11 and the image SM21 are presented at different scales. In that case, when the determined scales of the first setting image SM1 and the second setting image SM2 differ, step S130 is performed to correct the first setting image SM1 and the second setting image SM2 so that the corrected images have the same viewing angle. If the viewing angles of the first camera 10 and the second camera 20 facing the subject 30 are the same, the scales of the first setting image SM1 and the second setting image SM2 are the same, and step S130 can be omitted.

The image correction step may include steps S131 to S132.

In step S131, the image processing unit 130 corrects the first setting image SM1 into a first corrected image CM1 (as shown in FIG. 3B), and may store the first correction parameters required for this correction. The embodiment does not limit the content of the first correction parameters; they may be horizontal deflection angles, vertical deflection angles, scaling ratios, resolution adjustment ratios, chromatic aberration adjustment ratios, or any other parameters that can adjust an image. In other words, any parameters that allow the first setting image SM1 to be corrected into the first corrected image CM1 fall within the scope of the first correction parameters. In the subsequent multi-video image processing method, the image processing unit 130 can quickly correct the first images of the first video stream according to the first correction parameters (described later). The correction parameters herein can be obtained by the image processing unit 130 after analyzing the images.

In step S132, the image processing unit 130 corrects the second setting image SM2 into a second corrected image CM2 (as shown in FIG. 3B), and may store the second correction parameters required for this correction. The embodiment does not limit the content of the second correction parameters, as long as they allow the second setting image SM2 to be corrected into the second corrected image CM2. In the subsequent multi-video image processing method, the image processing unit 130 can quickly correct the second images of the second video stream according to the second correction parameters (described later).
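As a rough illustration of how stored correction parameters can be replayed on later frames, the sketch below reduces the correction to a single scale factor applied by nearest-neighbour resampling. A real implementation would apply the full set of parameters named above (deflection angles, resolution and colour adjustments); the function name and the scale-only parameterization are assumptions for illustration.

```python
def apply_correction(image, scale):
    """Nearest-neighbour rescale standing in for replaying a stored correction
    parameter. `image` is a 2D list of pixel values; `scale` is the single
    stored parameter assumed here."""
    h, w = len(image), len(image[0])
    nh, nw = int(h * scale), int(w * scale)
    # Each output pixel samples the nearest source pixel.
    return [[image[int(y / scale)][int(x / scale)] for x in range(nw)]
            for y in range(nh)]
```

Once the parameter is stored during the setting phase, every incoming frame of the stream can be corrected by the same cheap call, which is the point of caching the correction parameters.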

In one embodiment, the image processing unit 130 may store the first correction parameters and the second correction parameters in a storage medium with a storage function, such as a register, a memory, or a hard disk; the invention is not limited thereto. The image processing unit 130 can later read the first and second correction parameters from the storage medium as needed.

After the image correction step, the viewing angle of the first corrected image CM1 is the same as that of the second corrected image CM2. In this way, the ratio of the image CM11 of the first corrected image CM1 to the image CM21 of the second corrected image CM2 can be 1:1. For example, the first corrected image CM1 has a corrected image CM11 and the second corrected image CM2 has a corrected image CM21, where the ratio of the corrected image CM11 to the corrected image CM21 is equal or close to 1:1.

In step S140, the image processing unit 130 performs a feature analysis step, which includes steps S141 to S142.

In step S141, the image processing unit 130 obtains a number of first feature points F1 of the first corrected image CM1 (as shown in FIG. 3C), using any known image analysis technique. In another embodiment, if the first setting image SM1 does not need correction (step S130 is omitted), the image processing unit 130 obtains the first feature points F1 of the first setting image SM1 instead.

In step S142, the image processing unit 130 obtains a number of second feature points F2 of the second corrected image CM2 (as shown in FIG. 3C), again using any known image analysis technique. The feature points F1 and F2 are, for example, points, line segments (e.g., contour lines of the subject 30), or points on line segments (e.g., corners of the subject 30). In another embodiment, if the second setting image SM2 does not need correction (step S130 is omitted), the image processing unit 130 obtains the second feature points F2 of the second setting image SM2 instead.
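A minimal stand-in for such a feature detector is sketched below: it marks a pixel as a feature point when its value differs from at least two of its 4-connected neighbours, which picks out corner-like points on a toy image. A real system would use an established detector (e.g., SIFT or ORB); this sketch only illustrates the role of steps S141~S142 and its threshold is an arbitrary assumption.

```python
def feature_points(image):
    """Toy corner-like detector: return (y, x) coordinates of pixels whose
    value differs from at least two 4-connected neighbours."""
    h, w = len(image), len(image[0])
    points = []
    for y in range(h):
        for x in range(w):
            diffs = sum(
                1
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                if 0 <= y + dy < h and 0 <= x + dx < w
                and image[y + dy][x + dx] != image[y][x]
            )
            if diffs >= 2:
                points.append((y, x))
    return points
```

On an image containing a bright 2x2 block, this returns exactly the block's four corner pixels, which matches the intuition that corners of objects (window corners, chair corners) make good feature points.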

In step S150, the image processing unit 130 performs an overlapping image area analysis step, which includes steps S151 to S153.

In step S151, the image processing unit 130 compares the first feature points F1 with the second feature points F2 (as shown in FIG. 3D) to obtain the matched pairs among them: at least one first matching feature point (e.g., MF11~MF14) and at least one second matching feature point (e.g., MF21~MF24).

For example, the first matching feature point MF11 in the first corrected image CM1 and the second matching feature point MF21 in the second corrected image CM2 correspond to the same feature of the scene (e.g., one corner of the window), so the image processing unit 130 defines MF11 and MF21 as "matching feature points" (as shown in FIG. 3D). Likewise, the first matching feature point MF12 in CM1 and the second matching feature point MF22 in CM2 correspond to the same feature (e.g., another corner of the window), so the image processing unit 130 defines MF12 and MF22 as "matching feature points" (as shown in FIG. 3D).

Similarly, the first matching feature point MF13 in CM1 and the second matching feature point MF23 in CM2 correspond to the same feature (e.g., one corner of the chair), and the first matching feature point MF14 in CM1 and the second matching feature point MF24 in CM2 correspond to the same feature (e.g., another corner of the chair); the image processing unit 130 therefore defines MF13/MF23 and MF14/MF24 as "matching feature points" as well (as shown in FIG. 3D).

The number of matching feature points in this embodiment is four for the purpose of illustration; in practice it depends on the image SM11 of the first setting image SM1 and the image SM21 of the second setting image SM2, and is not limited by the embodiment. In addition, since "matching feature points" come in pairs, the number of first matching feature points equals the number of second matching feature points.
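The pairing in step S151 can be illustrated with a toy descriptor: each feature point is described by its 3x3 neighbourhood, and points whose descriptors are identical across the two images are declared a matched pair. The descriptor choice and function names are assumptions for illustration only; practical systems compare SIFT/ORB descriptors by distance rather than exact equality.

```python
def descriptor(image, point):
    """3x3 neighbourhood around a point, padded with None at the borders --
    a toy descriptor standing in for real local descriptors."""
    y, x = point
    h, w = len(image), len(image[0])
    return tuple(
        image[y + dy][x + dx] if 0 <= y + dy < h and 0 <= x + dx < w else None
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    )

def match_points(img1, pts1, img2, pts2):
    """Pair up feature points whose descriptors are identical."""
    desc2 = {descriptor(img2, p): p for p in pts2}
    return [(p, desc2[descriptor(img1, p)]) for p in pts1
            if descriptor(img1, p) in desc2]
```

On two views of the same 2x2 block shifted by one column, each corner in the first image pairs with the same corner in the second, mirroring how MF11~MF14 pair with MF21~MF24.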

In step S152, the image processing unit 130 obtains, according to the first matching feature points, the first partial area RE1 of the first corrected image CM1 (the corrected first partial area), where the first partial area RE1 contains all of the first matching feature points, e.g., MF11~MF14 (as shown in FIG. 3D).

In step S153, the image processing unit 130 obtains, according to the second matching feature points, the second partial area RE2 of the second corrected image CM2 (the corrected second partial area), where the second partial area RE2 contains all of the second matching feature points, e.g., MF21~MF24 (as shown in FIG. 3D).

The image of the first partial area RE1 overlaps the image of the second partial area RE2; that is, the two images have the same content and a ratio equal or close to 1:1. In other words, the image of the first partial area RE1 is the same as, and overlaps, the image of the second partial area RE2.
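A simple sanity check of this "same and overlapping" property is to measure the fraction of identical pixels between the two partial-area crops. The helper below is hypothetical (exact pixel equality is assumed; a real check would tolerate noise with a similarity threshold):

```python
def overlap_score(crop1, crop2):
    """Fraction of identical pixels between two equally sized partial-area
    crops; 1.0 means the areas are the same and fully overlap."""
    total = agree = 0
    for row1, row2 in zip(crop1, crop2):
        for a, b in zip(row1, row2):
            total += 1
            agree += (a == b)
    return agree / total
```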

As shown in FIG. 3D, the first partial area RE1 has a first boundary B11, a second boundary B12, a third boundary B13, and a fourth boundary B14, where the first boundary B11, the second boundary B12, and the third boundary B13 are boundaries of the first corrected image CM1 itself. The second partial area RE2 has a fifth boundary B21, a sixth boundary B22, a seventh boundary B23, and an eighth boundary B24, where the sixth boundary B22, the seventh boundary B23, and the eighth boundary B24 are boundaries of the second corrected image CM2 itself.

The fourth boundary B14 of the first partial area RE1 can be determined from the second corrected image CM2. Specifically, the distance between a first matching feature point of CM1 and the fourth boundary B14 equals the distance between the matched second matching feature point of CM2 and the corresponding boundary. In detail, the second matching feature point MF21 of CM2 travels a first distance S1 along a direction D1 to the eighth boundary B24; therefore, the first matching feature point MF11 of CM1 (which matches MF21) also travels the same first distance S1 along the same direction D1 to reach the fourth boundary B14, which determines B14. After determining the fourth boundary B14, the image processing unit 130 determines all boundaries of the first partial area RE1 (drawn with bold dashed lines in FIG. 3D) from the fourth boundary B14 and the boundaries of CM1 itself (the first boundary B11, the second boundary B12, and the third boundary B13).

Similarly, the fifth boundary B21 of the second partial area RE2 can be determined from the first corrected image CM1. Specifically, the distance between a second matching feature point of CM2 and the fifth boundary B21 equals the distance between the matched first matching feature point of CM1 and the corresponding boundary. In detail, the first matching feature point MF11 of CM1 travels a second distance S2 along a direction D2 to the first boundary B11; therefore, the second matching feature point MF21 of CM2 (which matches MF11) also travels the same second distance S2 along the same direction D2 to reach the fifth boundary B21, which determines B21. After determining the fifth boundary B21, the image processing unit 130 determines all boundaries of the second partial area RE2 (drawn with bold dashed lines in FIG. 3D) from the fifth boundary B21 and the boundaries of CM2 itself (the sixth boundary B22, the seventh boundary B23, and the eighth boundary B24).
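Under the simplifying assumption of a purely horizontal camera offset (the second camera's view shifted to the right), the two derived boundaries reduce to simple arithmetic on one matched feature point pair. The sketch below is an illustration of the distance construction, not the patented procedure; the geometry (which edges play the roles of B11 and B24) is an assumption.

```python
def overlap_regions(width1, width2, mx1, mx2):
    """Horizontal extents (left, right) of the partial areas, assuming a purely
    horizontal offset with image 2 shifted right, where (mx1, mx2) are the
    x-coordinates of one matched feature point pair in images 1 and 2."""
    s1 = mx2                  # distance from the matched point to image 2's near edge
    s2 = width1 - mx1         # distance from the matched point to image 1's far edge
    b14 = mx1 - s1            # derived boundary of the first partial area
    b21 = mx2 + s2            # derived boundary of the second partial area
    return (b14, width1), (0, b21)
```

Note that both partial areas come out with the same width, consistent with the requirement that the two overlap images match 1:1.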

After obtaining the boundaries of the first partial region RE1 and the second partial region RE2, the image processing unit 130 may record a plurality of first coordinates of a plurality of first boundary points BP1 of the first partial region RE1 and a plurality of second coordinates of a plurality of second boundary points BP2 of the second partial region RE2. In the subsequent multi-video image processing method, the image processing unit 130 can then quickly determine, from the recorded first and second coordinates, the overlapping region of a first image of the first video stream and a second image of the second video stream.
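One way to picture the recorded boundary points is a small lookup structure built once during set-up and reused per frame. The keys, coordinates, and rectangular shape below are illustrative assumptions, not the patent's actual data layout:

```python
# Hypothetical record of the overlap boundaries found during set-up.
overlap_record = {
    "first_region":  [(800, 0), (1920, 0), (1920, 1080), (800, 1080)],  # BP1 corners
    "second_region": [(0, 0), (1120, 0), (1120, 1080), (0, 1080)],      # BP2 corners
}

def bounding_box(points):
    """Axis-aligned box (x0, y0, x1, y1) spanned by recorded boundary points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

box1 = bounding_box(overlap_record["first_region"])   # → (800, 0, 1920, 1080)
box2 = bounding_box(overlap_record["second_region"])  # → (0, 0, 1120, 1080)
```

Because the cameras do not move, the boxes never need to be recomputed during live processing.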

Although the image setting flow of the foregoing embodiment is described with the first setting image SM1 and the second setting image SM2 requiring correction, in another embodiment, if the first setting image SM1 and the second setting image SM2 do not require correction (for example, the viewing angles and scales of the images of SM1 and SM2 are the same), step S130 can be omitted; the first partial region RE1 is then a partial region of the first setting image SM1, and the second partial region RE2 is a partial region of the second setting image SM2. In the flow of FIG. 2, the steps performed on the corrected images are instead performed on the setting images.

After the aforementioned multi-video image setting method is completed, the multi-video image processing apparatus 100 can execute the multi-video image processing method online (e.g., during a live broadcast). Further examples are given below.

Please refer to FIGS. 4, 5, and 6A-6G. FIG. 4 shows a flowchart of a multi-video image processing method according to an embodiment of the invention, FIG. 5 is a schematic diagram of the multi-video image processing apparatus 100 of FIG. 1 receiving video streams, and FIGS. 6A-6G illustrate the process of executing the multi-video image processing method with the apparatus 100 of FIG. 1. After the multi-video image setting method is completed, the multi-video image processing method can be executed with the positions of the first camera 10 and the second camera 20 maintained. Because these positions remain unchanged, the correction parameters and coordinates recorded by the multi-video image processing apparatus 100 in the earlier flow remain applicable to the processing of the first video stream VS1 and the second video stream VS2.

In step S210, the first input port 110 receives a first video stream VS1 that includes a plurality of first images VM1 (as shown in FIGS. 5 and 6A).

In step S220, the second input port 120 receives a second video stream VS2 that includes a plurality of second images VM2 (as shown in FIGS. 5 and 6A).

In step S230, the image processing unit 130 performs an image correction step, which includes the following steps S231-S232.

In step S231, each first image VM1 is corrected into a corresponding first corrected image CM1' according to the first correction parameter (as shown in FIG. 6B). Because the first correction parameter was stored in the image processing unit 130 during the aforementioned multi-video image setting method, the image processing unit 130 can quickly correct the first image VM1 into the first corrected image CM1'. In another embodiment, the image processing unit 130 may obtain the first correction parameter from a storage medium.

In step S232, each second image VM2 is corrected into a corresponding second corrected image CM2' according to the second correction parameter (as shown in FIG. 6B). Because the second correction parameter was stored in the image processing unit 130 during the aforementioned multi-video image setting method, the image processing unit 130 can quickly correct the second image VM2 into the second corrected image CM2'. In another embodiment, the image processing unit 130 may obtain the second correction parameter from a storage medium.
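The patent does not specify the form of the stored correction parameters; a common choice for aligning viewing angle and scale is a 3x3 planar homography. As an assumed illustration of "applying a stored parameter quickly", here is how such a matrix maps one pixel (the matrix value is invented — a pure 2x scale with no perspective):

```python
def warp_point(h, x, y):
    """Apply a stored 3x3 correction homography h to pixel (x, y)."""
    denom = h[2][0] * x + h[2][1] * y + h[2][2]
    xp = (h[0][0] * x + h[0][1] * y + h[0][2]) / denom
    yp = (h[1][0] * x + h[1][1] * y + h[1][2]) / denom
    return xp, yp

# Hypothetical stored parameter: uniform 2x scaling.
H2 = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]
corrected = warp_point(H2, 10, 20)  # → (20.0, 40.0)
```

In practice a library routine (e.g., OpenCV's perspective warp) would apply the same mapping to every pixel of VM1/VM2 to produce CM1'/CM2'.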

After correction, the viewing angle and scale of each first corrected image CM1' are the same as those of the corresponding second corrected image CM2'; for example, the image CM11' of the first corrected image CM1' and the image CM12' of the second corrected image CM2' have a 1:1 scale ratio.

In step S240, the image processing unit 130 performs an overlapping-image-region capturing step, which includes the following steps S241-S242.

In step S241, the image processing unit 130 obtains the first partial region RE1' (the corrected first partial region) of each first corrected image CM1' according to the recorded first coordinates (as shown in FIG. 6C). Because the first coordinates were stored in the image processing unit 130 during the aforementioned multi-video image setting method, the image processing unit 130 can quickly determine the first partial region RE1' of each first corrected image CM1'.

In step S242, the image processing unit 130 obtains the second partial region RE2' (the corrected second partial region) of each second corrected image CM2' according to the recorded second coordinates (as shown in FIG. 6C), wherein the image of the first partial region RE1' is identical to and overlaps the image of the second partial region RE2'. Because the second coordinates were stored in the image processing unit 130 during the aforementioned multi-video image setting method, the image processing unit 130 can quickly determine the second partial region RE2' of each second corrected image CM2'.
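Extracting a partial region from recorded coordinates is a plain crop. A minimal sketch with a toy row-major "image" (the 6x4 frame and the cropped column range are invented for illustration):

```python
def crop(frame, x0, y0, x1, y1):
    """Cut the recorded partial region out of a frame stored row-major."""
    return [row[x0:x1] for row in frame[y0:y1]]

frame = [[x + 10 * y for x in range(6)] for y in range(4)]  # toy 6x4 "image"
region = crop(frame, 4, 0, 6, 4)  # right-hand overlap strip
# → [[4, 5], [14, 15], [24, 25], [34, 35]]
```

Because the crop bounds come from the recorded coordinates, no per-frame feature search is needed at this stage.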

In step S250, the image processing unit 130 determines whether the content of the image of the first partial region RE1' or of the image of the second partial region RE2' has changed, i.e., whether there is any alteration in the content of these images. For example, the positions of the images of the first partial region RE1' and the second partial region RE2' may change, a new object or person may enter the first partial region RE1' or the second partial region RE2', or an existing object or person may move. If so, the flow proceeds to step S244; if not, the flow returns to step S230 and continues to obtain the first partial region RE1' of the first corrected image CM1' and the second partial region RE2' of the second corrected image CM2'.
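The patent does not prescribe how the change in step S250 is detected; a simple assumed realization is frame differencing over the overlap region — any nonzero pixel difference between consecutive frames flags a change:

```python
def region_changed(prev_region, curr_region, threshold=0):
    """Flag any content change between consecutive overlap regions
    by summing absolute per-pixel differences (threshold is tunable)."""
    diff = sum(abs(a - b)
               for prev_row, curr_row in zip(prev_region, curr_region)
               for a, b in zip(prev_row, curr_row))
    return diff > threshold

prev = [[0, 0], [0, 0]]
curr = [[0, 9], [0, 0]]       # a "new object" appears in one pixel
region_changed(prev, curr)    # → True
region_changed(prev, prev)    # → False
```

A nonzero `threshold` would make the check robust to sensor noise, at the cost of missing very small motions.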

In step S260, the image processing unit 130 performs a feature-set analysis step. Step S260 includes steps S261-S262.

In step S261, the image processing unit 130 obtains a first feature set of the first partial region RE1' of each first corrected image CM1'. The first feature set is, for example, a set of first matching feature points, such as the set of first matching feature points MF11'-MF14' (as shown in FIG. 6D). The image processing unit 130 may use any known image analysis technique to obtain the first matching feature points of the first partial region RE1'.

In step S262, the image processing unit 130 obtains a second feature set of the second partial region RE2' of each second corrected image CM2'. The second feature set is, for example, a set of matching feature points, such as the set of second matching feature points MF21'-MF24' (as shown in FIG. 6D). The image processing unit 130 may use any known image analysis technique to obtain the second matching feature points of the second partial region RE2'.
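The patent deliberately leaves the detector open ("any known image analysis technique" — ORB or SIFT from OpenCV would be typical real-world choices). As a toy stand-in showing what a per-frame "feature set" signature could look like, here the set of bright-pixel coordinates plays that role (the threshold and data are invented):

```python
def feature_set(region, threshold=128):
    """Toy stand-in for a feature detector: the set of bright-pixel
    coordinates acts as the region's 'feature set' signature."""
    return {(x, y)
            for y, row in enumerate(region)
            for x, v in enumerate(row)
            if v > threshold}

r1 = [[0, 200], [0, 0]]
r2 = [[0, 200], [0, 0]]
feature_set(r1) == feature_set(r2)  # → True: the two frames show the same scene
```

Whatever detector is used, the only property step S270 relies on is that the same scene content in RE1' and RE2' produces the same (comparable) feature set.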

In step S270, the image processing unit 130 synchronizes the first video stream VS1 with the second video stream VS2. Step S270 includes steps S271-S272.

In step S271, the image processing unit 130 compares the first feature sets of the first corrected images CM1' with the second feature sets of the second corrected images CM2' to obtain a time difference between the first feature sets and the second feature sets.

For example, the first corrected images CM1' of the first video stream VS1 have first feature sets FC11, FC12, and FC13 (as shown in FIG. 6E), where the first feature sets FC11, FC12, and FC13 differ from one another, while the second corrected images CM2' of the second video stream VS2 have second feature sets FC21, FC22, and FC23, where the second feature sets FC21, FC22, and FC23 differ from one another. The first feature set FC12 is identical to the second feature set FC22. By comparing the positions of the identical feature sets FC12 and FC22 on the time axis, the image processing unit 130 can obtain the time difference ΔT between them.

In step S272, the image processing unit 130 synchronizes the first video stream VS1 with the second video stream VS2 according to the time difference ΔT. For example, the image processing unit 130 shifts the playback time of the first video stream VS1 forward or backward so that the time difference ΔT shrinks to 0, or shifts the playback time of the second video stream VS2 forward or backward so that ΔT shrinks to 0. When the time difference ΔT reaches 0, the synchronization of the first video stream VS1 and the second video stream VS2 is completed.
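Finding ΔT amounts to sliding one stream's per-frame feature-set sequence against the other until an identical signature (like FC12/FC22) lines up. A minimal sketch with invented signature labels:

```python
def time_offset(sig_a, sig_b):
    """Return the shift (in frames) that aligns two per-frame feature-set
    signature sequences; a positive result means stream B lags stream A.
    Returns None if no alignment is found."""
    for shift in range(len(sig_a)):
        if sig_a[shift:] == sig_b[:len(sig_a) - shift]:
            return shift
    return None

a = ["FC11", "FC12", "FC13", "FC14"]
b = ["FC12", "FC13", "FC14", "FC15"]  # same scene, one frame late
time_offset(a, b)  # → 1
```

Multiplying the frame shift by the frame period converts it into the time difference ΔT used to advance or delay playback.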

In step S280, the synchronized first video stream VS1 and second video stream VS2 are stitched together.

For example, the image processing unit 130 is configured to: separate the first partial region RE1' of each synchronized first corrected image CM1' from a third partial region RE3' outside the first partial region RE1' (as shown in FIG. 6F); separate the second partial region RE2' of each synchronized second corrected image CM2' from a fourth partial region RE4' outside the second partial region RE2' (as shown in FIG. 6F); and stitch one of the synchronized first partial region RE1' and second partial region RE2', the third partial region RE3', and the fourth partial region RE4' into a stitched image CM, as shown in FIG. 6G. The resolution of the stitched image CM is higher than that of the first corrected image CM1' and/or the second corrected image CM2'; for example, the stitched image CM may reach 8K resolution, though it may also be higher or lower.
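Because RE1' and RE2' show identical content, the stitch keeps only one copy of the overlap between the two non-overlapping regions. A minimal sketch with toy row-major frames (the tiny 2-row data is invented):

```python
def stitch(non_overlap_left, overlap, non_overlap_right):
    """Concatenate the third region, one copy of the shared overlap,
    and the fourth region row by row into one wider frame."""
    return [l + o + r
            for l, o, r in zip(non_overlap_left, overlap, non_overlap_right)]

re3 = [[1, 2], [5, 6]]   # CM1' outside the overlap (third region)
re1 = [[3], [7]]         # shared overlap (RE1' == RE2')
re4 = [[4], [8]]         # CM2' outside the overlap (fourth region)
stitch(re3, re1, re4)    # → [[1, 2, 3, 4], [5, 6, 7, 8]]
```

The output width is the sum of the three region widths, which is how two lower-resolution inputs yield one higher-resolution stitched image.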

The image processing unit 130 can then transmit a video stream (not shown) formed from the stitched images CM (as shown in FIG. 6G) to an external electronic device, such as a computer, a handheld communication device, and/or a display screen, through a wired or wireless communication technology.

In summary, embodiments of the invention can use cameras of lower resolution (e.g., Full HD resolution), such as the first camera 10 and the second camera 20, yet output a stitched image CM of higher resolution (e.g., 8K resolution). Moreover, without extra equipment, the multi-video image processing apparatus can synchronize multiple video streams using the video signals alone and output a synchronized picture of high resolution (e.g., 8K resolution).

Another synchronization method is described below.

Please refer to FIG. 7, which shows a flowchart of a synchronization step according to another embodiment of the invention.

Steps S271 and S272 of the flowchart of the multi-video image processing method in FIG. 4 can be replaced with steps S371 and S372 of FIG. 7.

In step S371, the image processing unit 130 compares the first feature sets of the first corrected images CM1' with the second feature sets of the second corrected images CM2' to obtain a frame-rate difference between the first feature sets and the second feature sets.

For example, the first corrected images CM1' of the first video stream VS1 have first feature sets FC11, FC12, and FC13, while the second corrected images CM2' of the second video stream VS2 have second feature sets FC21, FC22, and FC23 (as shown in FIG. 6E), where the first feature set FC12 is identical to the second feature set FC22. By comparing the frame rates (frames per second, FPS) of the identical feature sets FC12 and FC22, the image processing unit 130 can obtain the frame-rate difference between them.

In step S372, the image processing unit 130 synchronizes the first video stream VS1 with the second video stream VS2 according to the frame-rate difference. For example, the frame rate of the first corrected images CM1' having the first feature set FC12 in the first video stream VS1 is 60 FPS, while the frame rate of the second corrected images CM2' having the second feature set FC22 in the second video stream VS2 is 59 FPS, giving a frame-rate difference of 1 FPS. Depending on the actual situation, the frame rate of the first video stream VS1 may be lower or higher than that of the second video stream VS2.

In step S372, the image processing unit 130 synchronizes the first video stream VS1 with the second video stream VS2 according to the frame-rate difference. For example, when the frame rate of the first video stream VS1 is greater than that of the second video stream VS2, the image processing unit 130 inserts at least one additional second corrected image CM2' having the second feature set FC22 among the second corrected images CM2' having the second feature set FC22 in the second video stream VS2, so that the frame rate of the second video stream VS2 matches that of the first video stream VS1; alternatively, the image processing unit 130 deletes at least one first corrected image CM1' having the first feature set FC12 from the first corrected images CM1' having the first feature set FC12 in the first video stream VS1, so that the frame rate of the first video stream VS1 matches that of the second video stream VS2. The synchronization of the first video stream VS1 and the second video stream VS2 is then completed.
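The insert-or-drop idea above can be sketched as a simple frame-list resampler; duplicating frames raises the effective rate, dropping frames lowers it. The frame labels and rates are invented for illustration:

```python
def equalize_rate(frames, src_fps, dst_fps):
    """Resample a frame list from src_fps to dst_fps by duplicating
    (upsampling) or dropping (downsampling) frames."""
    n_out = round(len(frames) * dst_fps / src_fps)
    return [frames[int(i * src_fps / dst_fps)] for i in range(n_out)]

slow = ["f0", "f1", "f2"]      # 3 frames captured at 30 FPS
equalize_rate(slow, 30, 60)    # → ['f0', 'f0', 'f1', 'f1', 'f2', 'f2']
```

Real frame-rate conversion (e.g., FFmpeg's fps filter) follows the same duplicate-or-drop principle, optionally with interpolation instead of plain duplication.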

Although the image processing flow of the foregoing embodiment is described with the first image VM1 and the second image VM2 requiring correction, in another embodiment, if the first image VM1 and the second image VM2 do not require correction (for example, the viewing angles and scales of the images of VM1 and VM2 are the same), step S230 can be omitted; the first partial region RE1' is then a partial region of the first image VM1, and the second partial region RE2' is a partial region of the second image VM2. In the flow of FIG. 4, the steps performed on the corrected images are instead performed on the images of the video streams.

In summary, before synchronizing the first video stream with the second video stream, the multi-video image processing apparatus of the embodiments of the invention can first analyze a first setting image and a second setting image to obtain an overlapping image region of the two, and record information about this region, such as its coordinates. After the overlapping image region is obtained, and while the arrangement of the apparatus is maintained (for example, without changing the position of any camera of the apparatus), a multi-video image processing method (for example, during a live broadcast) can quickly obtain, from the recorded information, the overlapping image region of each first image of the first video stream and each second image of the second video stream, synchronize the first video stream with the second video stream according to the feature-set differences of the overlapping region, and then stitch the first images of the synchronized first video stream with the second images of the synchronized second video stream into stitched images.

While the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Those with ordinary knowledge in the technical field to which the invention belongs may make various changes and modifications without departing from the spirit and scope of the invention. The scope of protection of the invention is therefore defined by the appended claims.

10: first camera

20: second camera

30: imaged subject

100: multi-video image processing apparatus

110: first input port

120: second input port

130: image processing unit

RE1: first partial region

RE2: second partial region

Claims (8)

1. A multi-video image processing method, comprising: receiving a first video stream comprising a plurality of first images; receiving a second video stream comprising a plurality of second images; and performing an overlapping-image-region capturing step, comprising: obtaining a first partial region of each of the first images; and obtaining a second partial region of each of the second images, wherein the image of the first partial region is identical to and overlaps the image of the second partial region; wherein, when the content of the image of the first partial region or of the image of the second partial region changes, the first video stream and the second video stream are synchronized; wherein the multi-video image processing method further comprises: obtaining, after synchronization, the first partial region of each of the first images and a third partial region outside the first partial region; obtaining, after synchronization, the second partial region of each of the second images and a fourth partial region outside the second partial region; and stitching one of the synchronized first partial region and second partial region, the third partial region, and the fourth partial region into a stitched image.

2. The multi-video image processing method according to claim 1, wherein the first partial region of each of the first images is obtained according to a plurality of first coordinates, and the second partial region of each of the second images is obtained according to a plurality of second coordinates.

3. The multi-video image processing method according to claim 1, further comprising, before performing the overlapping-image-region capturing step: performing an image correction step, comprising: correcting each of the first images into a corresponding first corrected image according to a first correction parameter; and correcting each of the second images into a corresponding second corrected image according to a second correction parameter; wherein each of the first corrected images includes the corrected first partial region, and each of the second corrected images includes the corrected second partial region.

4. The multi-video image processing method according to claim 1, further comprising: performing a feature-set analysis step, comprising: obtaining a first feature set of each of the first partial regions; and obtaining a second feature set of each of the second partial regions; wherein the step of synchronizing the first video stream and the second video stream comprises: comparing the first feature sets with the second feature sets to obtain a time difference between the first feature sets and the corresponding second feature sets; and synchronizing the first video stream and the second video stream according to the time difference.

5. The multi-video image processing method according to claim 1, further comprising: performing a feature-set analysis step, comprising: obtaining a first feature set of each of the first partial regions; and obtaining a second feature set of each of the second partial regions; wherein the step of synchronizing the first video stream and the second video stream comprises: comparing the first feature sets with the second feature sets to obtain a frame-rate difference between the first feature sets and the corresponding second feature sets; and synchronizing the first video stream and the second video stream according to the frame-rate difference.

6. A multi-video image setting method, comprising: receiving a first setting image; receiving a second setting image; performing a feature analysis step, comprising: obtaining a plurality of first feature points of the first setting image; and obtaining a plurality of second feature points of the second setting image; and performing an overlapping-image-region analysis step, comprising: comparing the first feature points with the second feature points to obtain at least one matched first matching feature point and at least one matched second matching feature point, wherein the at least one first matching feature point is at least one of the first feature points, and the at least one second matching feature point is at least one of the second feature points; obtaining a first partial region of the first setting image according to the first matching feature points, wherein the first partial region includes the at least one first matching feature point; and obtaining a second partial region of the second setting image according to the second matching feature points, wherein the second partial region includes the at least one second matching feature point; wherein the image of the first partial region overlaps the image of the second partial region; wherein the multi-video image setting method further comprises: performing an image correction step, comprising: correcting the first setting image into a first corrected image; and correcting the second setting image into a second corrected image; wherein the viewing angle of the first corrected image is the same as the viewing angle of the second corrected image.

7. The multi-video image setting method according to claim 6, wherein the number of the first matching feature points is the same as the number of the second matching feature points.

8. The multi-video image setting method according to claim 6, further comprising: recording a plurality of first coordinates of a plurality of first boundary points of the first partial region; and recording a plurality of second coordinates of a plurality of second boundary points of the second partial region.
TW109145832A 2020-12-23 2020-12-23 Multi-video image setting method and multi-video processing method TWI773047B (en)


Publications (2)

Publication Number Publication Date
TW202226837A 2022-07-01
TWI773047B (en) 2022-08-01

Family

ID=82026231


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107135330A (en) * 2017-07-04 2017-09-05 广东工业大学 A kind of method and apparatus of video frame synchronization
TW201737692A (en) * 2016-04-08 2017-10-16 晶睿通訊股份有限公司 Image capture system and sychronication method thereof

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103065129B (en) * 2012-12-30 2016-06-29 信帧电子技术(北京)有限公司 Giant panda is known method for distinguishing
CN110049345A (en) * 2019-03-11 2019-07-23 北京河马能量体育科技有限公司 A kind of multiple video strems director method and instructor in broadcasting's processing system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201737692A (en) * 2016-04-08 2017-10-16 晶睿通訊股份有限公司 Image capture system and sychronication method thereof
CN107135330A (en) * 2017-07-04 2017-09-05 广东工业大学 Method and apparatus for video frame synchronization

Also Published As

Publication number Publication date
TW202226837A (en) 2022-07-01
CN114666635A (en) 2022-06-24
CN114666635B (en) 2024-01-30

Similar Documents

Publication Publication Date Title
US20230419437A1 (en) Systems and methods for fusing images
US10762653B2 (en) Generation apparatus of virtual viewpoint image, generation method, and storage medium
US9286680B1 (en) Computational multi-camera adjustment for smooth view switching and zooming
EP2134080B1 (en) Information processing apparatus, image-capturing system, reproduction control method, recording control method, and program
US9998702B2 (en) Image processing device, development apparatus, image processing method, development method, image processing program, development program and raw moving image format
US7145947B2 (en) Video data processing apparatus and method, data distributing apparatus and method, data receiving apparatus and method, storage medium, and computer program
JP6317577B2 (en) Video signal processing apparatus and control method thereof
US9832373B2 (en) Systems and methods for automatically capturing digital images based on adaptive image-capturing templates
US20220174340A1 (en) Display apparatus, display method, and display system
BRPI0807594A2 Data processing device, data processing method and storage
TWI552600B (en) Image calibrating method for stitching images and related camera and image processing system with image calibrating function
US11342001B2 (en) Audio and video processing
WO2019149066A1 (en) Video playback method, terminal apparatus, and storage medium
US7983454B2 (en) Image processing apparatus and image processing method for processing a flesh-colored area
TWI773047B (en) Multi-video image setting method and multi-video processing method
KR20130044062A (en) Remote video transmission system
US20230217084A1 (en) Image capture apparatus, control method therefor, image processing apparatus, and image processing system
US10783670B2 (en) Method for compression of 360 degree content and electronic device thereof
US10681327B2 (en) Systems and methods for reducing horizontal misalignment in 360-degree video
JP5885025B2 (en) Signal processing apparatus, signal processing method, program, and electronic apparatus
KR102599664B1 System operating method for transferring multiview video and system thereof
US20170257679A1 (en) Multi-audio annotation
US11877055B2 (en) System and method to operate a set remotely
JP2009065323A (en) Image processing device, image processing method, imaging apparatus, and imaging method
CN114125178A (en) Video splicing method, device and readable medium