WO2015139453A1 - Method and device for foreground and background segmentation

Method and device for foreground and background segmentation

Info

Publication number
WO2015139453A1
WO2015139453A1 (PCT/CN2014/088698)
Authority
WO
WIPO (PCT)
Prior art keywords
image
frame image
region
frame
foreground
Prior art date
Application number
PCT/CN2014/088698
Other languages
English (en)
Chinese (zh)
Inventor
杜馨瑜
杜志军
王栋
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2015139453A1 (published in French)

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; edge detection
    • G06T 7/194 Segmentation involving foreground-background segmentation
    • G06T 7/20 Analysis of motion
    • G06T 7/215 Motion-based segmentation
    • G06T 7/223 Analysis of motion using block-matching
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20076 Probabilistic image processing
    • G06T 2207/20081 Training; learning
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20152 Watershed segmentation
    • G06T 2207/30 Subject of image; context of image processing
    • G06T 2207/30196 Human being; person
    • G06T 2207/30201 Face

Definitions

  • Embodiments of the present invention relate to the field of image processing and, more particularly, to a method and apparatus for foreground-background segmentation.
  • Foreground-background segmentation is an important part of video processing, but existing segmentation methods are computationally complex, which makes them time-consuming and inefficient.
  • Embodiments of the present invention provide a foreground-background segmentation method that is computationally simple and can run in real time.
  • A foreground-background segmentation method includes: determining a first region of the t-th frame image from the foreground region of the (t-1)-th frame image by using a block matching method, where t is a positive integer greater than 1; and determining the foreground region of the t-th frame image from the pixels in the first region of the t-th frame image that satisfy a first condition.
  • Determining the first region of the t-th frame image from the foreground region of the (t-1)-th frame image by using a block matching method includes: dividing the t-th frame image into m image blocks, m being a positive integer greater than 2; determining, for the i-th of the m image blocks, where i is a positive integer not greater than m, a matching block in the (t-1)-th frame image; and, when the matching block contains pixels belonging to the foreground region of the (t-1)-th frame image, determining that the i-th image block belongs to the first region of the t-th frame image.
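The block-matching step above can be sketched in pure Python. This is an illustrative toy, not the patented implementation: the block size, the one-pixel search window, and the sum-of-absolute-differences (SAD) matching criterion are all assumptions.

```python
# Toy block-matching propagation of the foreground region (assumed details:
# SAD matching, small search window, frame dimensions divisible by `size`).

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(x - y) for row_a, row_b in zip(a, b)
               for x, y in zip(row_a, row_b))

def block(img, r, c, size):
    """Extract a size x size block whose top-left corner is (r, c)."""
    return [row[c:c + size] for row in img[r:r + size]]

def first_region(frame_t, frame_prev, fg_prev, size=2, search=1):
    """Return a mask of frame t's candidate foreground ("first") region."""
    h, w = len(frame_t), len(frame_t[0])
    mask = [[0] * w for _ in range(h)]
    for r in range(0, h, size):
        for c in range(0, w, size):
            cur = block(frame_t, r, c, size)
            best, best_pos = None, (r, c)
            # Search a small window around (r, c) in the previous frame.
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr <= h - size and 0 <= cc <= w - size:
                        d = sad(cur, block(frame_prev, rr, cc, size))
                        if best is None or d < best:
                            best, best_pos = d, (rr, cc)
            rr, cc = best_pos
            # The block joins the first region if its best match contains
            # any pixel of the previous frame's foreground region.
            if any(fg_prev[rr + i][cc + j]
                   for i in range(size) for j in range(size)):
                for i in range(size):
                    for j in range(size):
                        mask[r + i][c + j] = 1
    return mask
```

On a static 4x4 frame whose top-left 2x2 block is foreground, only that block is carried into the first region.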
  • Determining the foreground region of the t-th frame image from the pixels in the first region that satisfy the first condition includes: determining the area of the (t-1)-th frame image other than its foreground region to be the background region of the (t-1)-th frame image; determining the area of the t-th frame image other than the first region to be the second region of the t-th frame image; calculating a historical cumulative color histogram of the foreground region and of the background region of the (t-1)-th frame image; calculating a color histogram of the first region and of the second region of the t-th frame image; calculating a color histogram of each image block in the first region of the t-th frame image; and determining the first condition from these histograms.
  • The device further includes a third determining unit and a fourth determining unit. The third determining unit is configured to use a second image that satisfies a second condition as the background image of the first frame image. The second condition is that the variance of the value (V) channel of the second image in hue-saturation-value (HSV) space is less than a first threshold, and the variance of the hue (H) channel of the HSV space of the second image is less than a second threshold.
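The second condition reduces to two variance tests. A minimal sketch, taking a scene as a flat list of (h, s, v) tuples; the threshold values are placeholders, since the patent presets its thresholds but does not state values here.

```python
# Assumed-threshold check of the "second condition": a candidate background
# scene qualifies only if the variances of its V and H channels (HSV space)
# both fall below preset thresholds.

def variance(values):
    """Population variance of a list of numbers."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def is_valid_background(hsv_pixels, h_threshold=100.0, v_threshold=100.0):
    """hsv_pixels: flat list of (h, s, v) tuples for one captured scene."""
    h_var = variance([p[0] for p in hsv_pixels])
    v_var = variance([p[2] for p in hsv_pixels])
    return v_var < v_threshold and h_var < h_threshold
```

A uniform wall passes the test; a scene with strongly varying hue and brightness fails, prompting the user to point the camera elsewhere.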
  • the fourth determining unit is configured to determine a foreground area of the first frame image by using a face detection and clustering method or a watershed method.
  • The first determining unit includes: a dividing subunit, configured to divide the t-th frame image into m image blocks, m being a positive integer greater than 2; a first determining subunit, configured to determine, for the i-th of the m image blocks divided by the dividing subunit, where i is a positive integer not greater than m, a matching block in the (t-1)-th frame image; and a second determining subunit, configured to determine, when the matching block determined by the first determining subunit contains pixels belonging to the foreground region of the (t-1)-th frame image, that the i-th image block belongs to the first region of the t-th frame image.
  • The second determining unit includes: a third determining subunit, configured to determine that the area of the (t-1)-th frame image other than its foreground region is the background region of the (t-1)-th frame image, and that the area of the t-th frame image other than the first region determined by the first determining unit is the second region of the t-th frame image; and a calculation subunit, configured to calculate the historical cumulative color histogram of the foreground region of the (t-1)-th frame image, the historical cumulative color histogram of the background region of the (t-1)-th frame image determined by the third determining subunit, the color histogram of the first region of the t-th frame image determined by the first determining unit, the color histogram of the second region of the t-th frame image determined by the third determining subunit, and the color histogram of each image block in the first region of the t-th frame image.
  • In a fifth possible implementation manner of the second aspect, the device further includes: a fifth determining unit, configured to determine a first image to be used as the background image of the t-th frame image; and a synthesizing unit, configured to synthesize the foreground region of the t-th frame image determined by the second determining unit with the first image determined by the fifth determining unit.
  • FIG. 2 is a diagram showing an example of a second image of an embodiment of the present invention.
  • FIG. 4 is a diagram showing another example of foreground-background segmentation of the initial frame in an embodiment of the present invention.
  • the foreground area of the first frame image can be determined by the face detection and clustering method.
  • FIG. 3(b) shows the foreground region of the first frame image determined by the face detection and clustering method.
  • the first condition may be determined based on a color histogram.
  • the first condition may be determined based on a color histogram of the first region of the t-th frame image and a color histogram of the foreground region of the t-1th frame image.
  • the first condition may be determined according to a color histogram of the first region of the t-th frame image and a color histogram of the foreground region of the first frame image to the t-1th frame image.
  • the invention is not limited thereto.
  • The first condition may be expressed as a formula (not reproduced in this text) relating the color histogram of the foreground region of the first frame image, the color histogram of the first region of the t-th frame image, and the historical cumulative color histogram of the foreground region of the (t-1)-th frame image.
  • It may also involve the color histogram of the background region of the first frame image, where the background region of the first frame image is the area of the first frame image other than its foreground region.
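Since the exact inequality of the first condition is not reproduced in this text, the decision rule below is an assumed stand-in: it keeps a pixel of the first region as foreground when its color bin is more frequent in the historically accumulated foreground histogram than in the background histogram. The bin count, the exponential decay factor, and the use of scalar (single-channel) pixel values are all simplifications.

```python
# Assumed stand-in for the histogram-based "first condition".
# Pixel values are assumed to satisfy 0 <= p < max_value.

def histogram(pixels, bins=8, max_value=256):
    """Normalized color histogram of a list of scalar pixel values."""
    hist = [0.0] * bins
    for p in pixels:
        hist[p * bins // max_value] += 1
    n = len(pixels)
    return [h / n for h in hist]

def accumulate(prev_cum, new_hist, decay=0.9):
    """Exponentially weighted historical cumulative histogram (assumed form)."""
    return [decay * a + (1 - decay) * b for a, b in zip(prev_cum, new_hist)]

def foreground_pixels(region_pixels, fg_hist, bg_hist, bins=8, max_value=256):
    """Keep pixels whose bin is likelier under the foreground histogram."""
    return [p for p in region_pixels
            if fg_hist[p * bins // max_value] > bg_hist[p * bins // max_value]]
```

With a bright foreground and a dark background, only the bright pixels of the first region survive the test.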
  • the foreground area of the t-th frame image determined after step 102.
  • The image obtained by smoothing the edge of the foreground region of the t-th frame image is moderately dilated; the dilated image can be understood as an enlarged foreground region. Edge detection is then performed on the dilated image, and the resulting edge lies in the background region of the t-th frame image, so the pixels on the edge of the dilated image can serve as background seed pixels for the subsequent watershed segmentation.
  • Watershed segmentation may then be performed based on the pixels on the edge of the eroded image and the pixels on the edge of the dilated image, thereby determining the foreground region of the t-th frame image.
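The erosion and dilation used to derive the two seed sets can be illustrated with 3x3 binary morphology in pure Python. This is a stand-in for the "appropriately eroded/dilated" images in the text; the 3x3 structuring element is an assumption.

```python
# Toy 3x3 binary morphology for preparing watershed seeds: the eroded
# mask's edge gives sure-foreground seeds, the dilated mask's edge
# (which lies in the background) gives sure-background seeds.

def shift_ok(mask, r, c):
    """True if (r, c) is inside the mask and set."""
    return 0 <= r < len(mask) and 0 <= c < len(mask[0]) and mask[r][c]

def erode(mask):
    """3x3 erosion: keep a pixel only if it and all 8 neighbours are set."""
    h, w = len(mask), len(mask[0])
    return [[1 if all(shift_ok(mask, r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)) else 0
             for c in range(w)] for r in range(h)]

def dilate(mask):
    """3x3 dilation: set a pixel if it or any 8-neighbour is set."""
    h, w = len(mask), len(mask[0])
    return [[1 if any(shift_ok(mask, r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)) else 0
             for c in range(w)] for r in range(h)]
```

Eroding a 3x3 foreground square in a 5x5 mask leaves only its centre, while dilating it fills the whole mask, which is exactly the "shrunken inside / grown outside" pair the seeds are taken from.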
  • One video processing task is background replacement, that is, replacing the background in the video with a new background to generate a new video.
  • The foreground region determined in step 102 is synthesized with the new background image.
  • the foreground region of the t-th frame image and the first image may be synthesized by an image synthesis algorithm.
  • The image synthesis algorithm can be Alpha-channel fusion.
  • The foreground region of the t-th frame image may first be moderately eroded; the edge of the eroded image is then Gaussian-smoothed, and the smoothed result is used as the Alpha-channel value.
  • the Alpha channel can be used to fuse the foreground region of the t-th frame image with the first image, so that the background of the t-th frame image can be replaced with the new image.
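The Alpha-channel fusion step is a per-pixel convex blend. In this sketch, alpha is given directly as a float mask, whereas the text derives it by Gaussian-smoothing the edge of the eroded foreground; single-channel pixels are used for brevity.

```python
# Alpha blending: out = alpha * foreground + (1 - alpha) * background,
# computed per pixel. `alpha` is a 2-D float mask in [0, 1].

def alpha_blend(foreground, background, alpha):
    """Blend two same-sized single-channel images using an alpha matte."""
    return [[a * f + (1 - a) * b
             for f, b, a in zip(frow, brow, arow)]
            for frow, brow, arow in zip(foreground, background, alpha)]
```

An alpha of 1.0 keeps the foreground pixel, 0.0 keeps the new background, and intermediate values along the smoothed edge hide the segmentation boundary.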
  • Background replacement may combine the foreground region of the t-th frame image obtained with temporal-domain processing with the first image. In this way, synthesis is faster and the computation time shorter, so the background can be replaced in real time while the video is in progress.
  • Alternatively, background replacement may combine the foreground region of the t-th frame image obtained with spatial-domain processing with the first image.
  • The synthesized image is then of higher quality, and the computation time remains short, allowing high-quality background replacement in real time during the video.
  • In this way, the foreground region of each frame of the video can be determined, achieving foreground-background segmentation of each frame.
  • The foreground-background segmentation algorithm is relatively simple and its running time short, so it can run in real time during the video.
  • The method is applicable not only to video captured with a fixed camera but also to video captured with a moving camera.
  • FIG. 6 is a flow chart of a method of video processing according to an embodiment of the present invention. This video processing is a real-time replacement of the background during the video.
  • the flow chart shown in Figure 6 includes:
  • Before the video starts, the image of the background that the far end of the video call should see can be determined; this image is the first image. The second image is determined by scene judgment, specifically by texture judgment and color judgment.
  • The mobile phone can capture a first scene before the video starts. If the variance of the V channel of the HSV space of the first scene is less than the first threshold, and the variance of the H channel of the HSV space of the first scene is less than the second threshold, the first scene may be determined to be the second image. Otherwise, the user of the mobile phone may point the phone at a second scene, and it is re-determined whether the second scene can serve as the second image. The first threshold and the second threshold are preset. For this step 201, refer to the description of FIG. 2 above.
  • The user of the mobile phone conducting the video can manually draw two strokes, one on each side of the boundary between the foreground region and the background region of the first frame image.
  • The pixels of the two strokes can be used as the foreground seed pixels and the background seed pixels of the subsequent watershed algorithm, so that the watershed algorithm can separate the foreground region of the first frame image from its background region and thereby determine the foreground region of the first frame image. Specifically, reference may be made to the foregoing descriptions of FIGS. 3 and 4.
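Marker-based watershed, as seeded by the two strokes, can be approximated by priority flooding: each stroke's label grows outward, always expanding through the pixel of lowest edge strength first, so labels meet along high-gradient boundaries. This is a simplified sketch with 4-connectivity and a scalar gradient map; production code would typically call a library routine such as OpenCV's `cv2.watershed`.

```python
import heapq

# Minimal marker-based watershed by priority flooding. `markers` uses
# 0 for unknown pixels and positive integers (e.g. 1 = foreground stroke,
# 2 = background stroke) for seeds.

def watershed(gradient, markers):
    """gradient: 2-D list of edge strengths; returns a full label map."""
    h, w = len(gradient), len(gradient[0])
    labels = [row[:] for row in markers]
    heap = [(gradient[r][c], r, c)
            for r in range(h) for c in range(w) if markers[r][c]]
    heapq.heapify(heap)
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and labels[rr][cc] == 0:
                labels[rr][cc] = labels[r][c]   # inherit the flooding label
                heapq.heappush(heap, (gradient[rr][cc], rr, cc))
    return labels
```

On a gradient map with a strong vertical ridge, seeds placed left and right of the ridge flood their own sides before crossing it.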
  • the foreground region of the first frame image and the first image may be combined using Alpha channel fusion.
  • step 205 of the embodiment of the present invention reference may be made to step 102 in FIG. 1. To avoid repetition, details are not described herein again.
  • step 206 of the embodiment of the present invention reference may be made to the foregoing description of the spatial domain segmentation. To avoid repetition, details are not described herein again.
  • The foreground region of the t-th frame image and the first image may be synthesized by using Alpha-channel fusion.
  • The background of the video seen by the far end participating in the video is then the first image.
  • In this way, the block matching method and the color histogram can be used to determine the foreground region of each frame of the video and to segment the foreground and background of each frame, so that the background region of each frame can be replaced with a new image, enabling background replacement during the video.
  • The foreground-background segmentation algorithm is relatively simple and its running time short, so it can run in real time during the video.
  • The method is applicable not only to video captured with a fixed camera but also to video captured with a moving camera.
  • The foreground-background segmentation method of the embodiment of the present invention is highly stable; mis-segmentation does not occur when the foreground suddenly disappears or appears in the video.
  • The embodiments of the present invention can also be applied when there is a strong light source in the background; that is, even if the original background of the video contains a strong light source, the foreground and background can still be segmented.
  • As shown in FIG. 7, there is a strong light source in the original background of the t-th frame image of the video in FIG. 7(a); FIG. 7(b) shows the t-th frame image after background replacement using the method of FIG. 6.
  • Figure 8 is a block diagram of an apparatus for foreground-background segmentation in accordance with one embodiment of the present invention.
  • The device 300 of Figure 8 includes a first determining unit 301 and a second determining unit 302.
  • the first determining unit 301 is configured to determine, according to the foreground region of the t-1th frame image, a first region of the t-th frame image by using a block matching method, where t is a positive integer greater than 1.
  • the second determining unit 302 is configured to determine, according to the pixel that satisfies the first condition in the first region of the t-th frame image determined by the first determining unit 301, the foreground region of the t-th frame image.
  • In this way, the foreground region of each frame of the video can be determined, achieving foreground-background segmentation of each frame.
  • The foreground-background segmentation algorithm is relatively simple and its running time short, so it can run in real time during the video.
  • The method is applicable not only to video captured with a fixed camera but also to video captured with a moving camera.
  • the first determining unit 301 may include a dividing subunit, a first determining subunit, and a second determining subunit.
  • the second determining unit 302 may include a third determining subunit, a calculating subunit, a fourth determining subunit, and a fifth determining subunit.
  • The third determining subunit is configured to determine that the area of the (t-1)-th frame image other than the foreground region is the background region of the (t-1)-th frame image, and that the area of the t-th frame image other than the first region is the second region of the t-th frame image.
  • The calculation subunit is configured to calculate the historical cumulative color histogram of the foreground region of the (t-1)-th frame image, the historical cumulative color histogram of the background region of the (t-1)-th frame image determined by the third determining subunit, the color histogram of the first region of the t-th frame image determined by the first determining unit 301, the color histogram of the second region of the t-th frame image determined by the third determining subunit, and the color histogram of each image block in the first region of the t-th frame image determined by the first determining unit 301.
  • the apparatus 300 shown in FIG. 8 may further include a fifth determining unit and a synthesizing unit.
  • a fifth determining unit configured to determine a first image, where the first image is used as a background image of the t-th frame image.
  • a synthesizing unit configured to synthesize the foreground region of the t-th frame image determined by the second determining unit 302 and the first image determined by the fifth determining unit.
  • FIG. 9 is a block diagram of an apparatus for front and rear scene segmentation in accordance with another embodiment of the present invention.
  • the device 400 shown in FIG. 9 includes a processor 401, a memory 402, and a transceiver circuit 403.
  • the processor 401 is configured to determine, according to a foreground region of the t-1th frame image, a first region of the t-th frame image by using a block matching method, where t is a positive integer greater than 1. And further determining, according to the determined pixel in the first region of the t-th frame image that satisfies the first condition, a foreground region of the t-th frame image.
  • The block matching method and the color histogram can thus be used to determine the foreground region of each frame of the video, enabling foreground-background segmentation of each frame.
  • The foreground-background segmentation algorithm is relatively simple and its running time short, so it can run in real time during the video.
  • The method is applicable not only to video captured with a fixed camera but also to video captured with a moving camera.
  • The general-purpose processor may be a microprocessor or any conventional processor.
  • the steps of the method disclosed in the embodiments of the present invention may be directly implemented by the hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • The software module can be located in a random access memory (RAM), a flash memory, a read-only memory (ROM), a programmable read-only memory, an electrically erasable programmable memory, a register, or another storage medium mature in the art.
  • the storage medium is located in the memory 402, and the processor 401 reads the information in the memory 402 and performs the steps of the above method in combination with its hardware.
  • The processor 401 is further configured to use a second image that satisfies the second condition as the background image of the first frame image; the second condition is that the variance of the value (V) channel of the second image in hue-saturation-value (HSV) space is less than the first threshold, and the variance of the hue (H) channel of the HSV space of the second image is less than the second threshold.
  • The processor 401 may use a face detection and clustering method or a watershed method to determine the foreground region of the first frame image.
  • The processor 401 may be specifically configured to divide the t-th frame image into m image blocks, where m is a positive integer greater than 2, and to determine, for the i-th of the m image blocks, where i is a positive integer not greater than m, the matching block in the (t-1)-th frame image. Further, when the matching block contains pixels belonging to the foreground region of the (t-1)-th frame image, the processor determines that the i-th image block belongs to the first region of the t-th frame image.
  • The processor 401 may be specifically configured to determine that the area of the (t-1)-th frame image other than the foreground region is the background region of the (t-1)-th frame image, and that the area of the t-th frame image other than the first region is the second region of the t-th frame image. It may then calculate the historical cumulative color histogram of the foreground region of the (t-1)-th frame image, the historical cumulative color histogram of the background region of the (t-1)-th frame image, the color histogram of the first region of the t-th frame image, the color histogram of the second region of the t-th frame image, and the color histogram of each image block in the first region of the t-th frame image.
  • The first condition is determined from these histograms, including the color histogram of the second region of the t-th frame image and the color histogram of each image block in the first region. The region composed of the set of pixels in the first region that satisfy the first condition is then determined to be the foreground region of the t-th frame image.
  • The first condition may be expressed as a formula (not reproduced in this text) relating the color histogram of the foreground region of the first frame image, the color histogram of the first region of the t-th frame image, the historical cumulative color histogram of the foreground region of the (t-1)-th frame image, and the color histogram of the background region of the first frame image, where the background region of the first frame image is the area of the first frame image other than its foreground region.
  • the processor 401 is further configured to determine a first image, where the first image is used as a background image of the t-th frame image. And further synthesizing the foreground region of the t-th frame image and the determined first image.
  • the device 400 can implement the various processes implemented by the device in the embodiments of FIG. 1 and FIG. 6. To avoid repetition, details are not described herein again.
  • the disclosed systems, devices, and methods may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the unit is only a logical function division.
  • There may be another division manner; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the functions may be stored in a computer readable storage medium if implemented in the form of a software functional unit and sold or used as a standalone product.
  • The technical solution of the present invention, in essence or in the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program codes, such as a USB flash drive, a mobile hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a foreground-background segmentation method, comprising: determining, from a foreground region of the (t-1)-th frame of a video by using a block matching method, a first region of the t-th frame, t being a positive integer greater than 1; and determining, from the pixels satisfying a first condition in the first region of the t-th frame, a foreground region of the t-th frame. By means of this method, based on a learning policy and using block matching and a color histogram, the foreground region of each frame of a video can be determined, achieving foreground-background segmentation of every frame; the segmentation algorithm is simple and its running time short, so segmentation can be performed in real time during a video. The method is applicable not only to real-time video recording with a fixed camera but also to video recording with a moving camera.
PCT/CN2014/088698 2014-03-17 2014-10-16 Method and device for foreground and background segmentation WO2015139453A1

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410098165.7 2014-03-17
CN201410098165.7A CN104933694A (zh) 2014-03-17 2014-03-17 Method and device for foreground-background segmentation

Publications (1)

Publication Number Publication Date
WO2015139453A1 (fr) 2015-09-24

Family

ID=54120849

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/088698 WO2015139453A1 (fr) 2014-10-16 Method and device for foreground and background segmentation

Country Status (2)

Country Link
CN (1) CN104933694A (fr)
WO (1) WO2015139453A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107612869A (zh) * 2016-07-11 2018-01-19 ZTE Corporation Image processing method and device
CN107240073B (zh) * 2017-05-12 2020-04-24 Hangzhou Dianzi University 3D video image inpainting method based on gradient fusion and clustering
CN108171719B (zh) * 2017-12-25 2021-07-23 Beijing Qihoo Technology Co., Ltd. Video traversal processing method and device based on adaptive tracking-box segmentation
CN109741249A (zh) 2018-12-29 2019-05-10 Lenovo (Beijing) Co., Ltd. Data processing method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001313914A (ja) * 2001-02-13 2001-11-09 Toshiba Corp Moving picture coding apparatus
CN101686338A (zh) * 2008-09-26 2010-03-31 Sony Corporation *** and method for segmenting foreground and background in a video
CN101777180A (zh) * 2009-12-23 2010-07-14 Institute of Automation, Chinese Academy of Sciences Real-time replacement method for complex backgrounds based on background modeling and energy minimization
CN102567727A (zh) * 2010-12-13 2012-07-11 ZTE Corporation Background object replacement method and device
CN103607558A (zh) * 2013-11-04 2014-02-26 Shenzhen Zhongyingxin Technology Co., Ltd. Video surveillance *** and object matching method and device therefor

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6757428B1 (en) * 1999-08-17 2004-06-29 National Instruments Corporation System and method for color characterization with applications in color measurement and color matching
JP5160547B2 (ja) * 2007-08-15 2013-03-13 株式会社リバック Image processing device, image processing method, image processing program, and imaging device
CN102388391B (zh) * 2009-02-10 2014-01-22 Thomson Licensing Video matting based on foreground-background constraint propagation


Also Published As

Publication number Publication date
CN104933694A (zh) 2015-09-23

Similar Documents

Publication Publication Date Title
US9196071B2 (en) Image splicing method and apparatus
JP6889417B2 Image processing apparatus and method for stabilizing object boundaries in images of an image sequence
US10121256B2 (en) Temporal saliency map
CN108965740B Real-time video face-swapping method, apparatus, device, and storage medium
JP7226851B2 Image processing method and apparatus, and device
WO2019134504A1 Method and device for blurring an image background, storage medium, and electronic apparatus
KR101952569B1 (ko) 디바이스를 위한 이미지 편집 기법들
US10620826B2 (en) Object selection based on region of interest fusion
CN107749062B Image processing method and device
EP2863362B1 Method and apparatus for scene segmentation from focal-stack images
WO2016165060A1 Skin detection based on online discriminative modeling
US8879835B2 (en) Fast adaptive edge-aware matting
WO2015139453A1 Method and device for foreground and background segmentation
US10249029B2 (en) Reconstruction of missing regions of images
CN105243371A Detection method, ***, and photographing terminal for the degree of face beautification
JP2016095854A Image processing method and apparatus
JP2017091298A Image processing apparatus, image processing method, and image processing program
CN111161299A Image segmentation method, computer program, storage medium, and electronic device
US9171357B2 (en) Method, apparatus and computer-readable recording medium for refocusing photographed image
EP2930687B1 Image segmentation using color and blur
WO2023019910A1 Video processing method and apparatus, electronic device, storage medium, computer program, and computer program product
JP2014230283A Method and apparatus for processing a picture
CN114998115A Image beautification processing method, apparatus, and electronic device
WO2017084011A1 Method and apparatus for video smoothing
CN108875692B Thumbnail video generation method, medium, and computing device based on key-frame processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14885997

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14885997

Country of ref document: EP

Kind code of ref document: A1