CN110390293A - Video object segmentation algorithm based on high-order energy constraint - Google Patents

Video object segmentation algorithm based on high-order energy constraint

Info

Publication number
CN110390293A
Authority
CN
China
Prior art keywords
super
pixel
segmentation
item
order energy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910649351.8A
Other languages
Chinese (zh)
Other versions
CN110390293B (en)
Inventor
陈亚当
征煜
金子龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN201910649351.8A priority Critical patent/CN110390293B/en
Publication of CN110390293A publication Critical patent/CN110390293A/en
Application granted granted Critical
Publication of CN110390293B publication Critical patent/CN110390293B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/49 Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a video object segmentation algorithm based on a high-order energy constraint, comprising the following steps: performing super-pixel segmentation on the video frame sequence; according to the super-pixel segmentation result, modeling the color features of the foreground and background regions of a pre-labeled frame with Gaussian mixture models, respectively, to obtain the segmentation data term; building a spatio-temporal smoothness term from the segmentation data term and multiple features; adding a high-order energy term on the basis of the segmentation data term and the spatio-temporal smoothness term; approximating the high-order energy term by a data term and a smoothness term; and completing the segmentation with a graph-cut algorithm. On the basis of the data term and the smoothness term, the invention adds a high-order energy constraint based on SIFT features to guarantee the global consistency of the video segmentation. This addresses the poor segmentation results obtained when the video object moves, has an irregular shape and the inter-frame optical flow is strongly disturbed, as well as the excessive computational cost of optimizing the high-order energy term.

Description

Video object segmentation algorithm based on high-order energy constraint
Technical field
The present invention relates to the field of image and video processing technology, and in particular to a video object segmentation algorithm based on a high-order energy constraint.
Background technique
Video object segmentation refers to the process of separating foreground objects from the background in a video frame sequence. Many methods currently address this binary segmentation problem; they can be divided into unsupervised and supervised methods. The former require no manual intervention and take the video data directly as input; the latter require additional, manually provided label data for initialization. Typically, the segmentation result of the first frame of the video sequence is given to the algorithm as known data. The present patent belongs to the latter category.
Among existing video segmentation methods, graph-cut-based methods effectively guarantee that the video object is propagated across frames. These methods decompose each frame into spatio-temporal nodes, converting the segmentation problem into a two-class node labeling problem in a Markov random field (MRF). Most of these methods search for the optimal foreground/background labeling by minimizing the data term and smoothness term over the video frame sequence. Although their results are relatively good, these methods still suffer from serious problems: for example, when the video object is moving and irregular in shape, the inter-frame optical flow is strongly disturbed and the segmentation result is far from ideal.
In general, a high-order energy term involves a very complicated optimization process, so it is difficult to derive an analytical expression for the energy optimum. The usual solution is to alternate: optimize the high-order energy term while the data term and smoothness term are fixed, then optimize the data term and smoothness term while the high-order energy term is fixed. However, this leads to huge time and computation costs.
Summary of the invention
Object of the invention: in order to overcome the shortcomings of the background art, the invention discloses a video object segmentation algorithm based on a high-order energy constraint.
Technical solution: the video object segmentation algorithm based on a high-order energy constraint according to the invention comprises the following steps:
(1) performing super-pixel segmentation on the video frame sequence;
(2) according to the super-pixel segmentation result, modeling the color features of the foreground and background regions of a pre-labeled frame with Gaussian mixture models, respectively, to obtain the segmentation data term;
(3) according to the segmentation data term, building a spatio-temporal smoothness term that combines multiple features: color, edge strength, optical-flow direction and mapping ratio;
(4) according to the segmentation data term and the spatio-temporal smoothness term, adding a high-order energy term based on SIFT features;
(5) approximating the high-order energy term by a data term and a smoothness term through adding auxiliary nodes to the MRF graph model;
(6) completing the segmentation with a graph-cut algorithm.
Here, the super-pixel segmentation method in step (1) uses the SLIC algorithm to divide each frame of the video frame sequence into a number of super-pixels.
Further, in step (2) the number of super-pixels on the i-th frame is F_i and s_j^i denotes the j-th of them, with label l_j^i: l_j^i = 0 indicates that s_j^i is a background super-pixel and l_j^i = 1 that it is a foreground super-pixel. The expectation-maximization algorithm is used to fit a foreground Gaussian mixture model G_f(c_j^i) and a background Gaussian mixture model G_b(c_j^i), where c_j^i denotes the RGB value of the super-pixel. The data term of each super-pixel is the negative log-likelihood of c_j^i under the Gaussian mixture model selected by its label.
Further, the spatio-temporal smoothness term in step (3) consists of a spatial smoothness term and a temporal smoothness term, used respectively for the smoothing of spatial and temporal nodes. The spatial smoothness term penalizes assigning different labels to spatially adjacent super-pixels, weighted by the similarity of their color histograms and the average edge strength between them; the temporal smoothness term penalizes assigning different labels to temporally connected super-pixels, weighted by the similarity of their concatenated color and optical-flow-direction histograms and by the optical-flow mapping ratio of the pixels inside the temporal pair. The total spatio-temporal smoothness term is the linear combination of the two over all spatial and temporal pairs.
Here ε_t is the set of all temporal pairs, ε_s is the set of all spatial pairs, and λ_s, λ_t are the weight parameters of the linear combination; the concatenated histogram of color and optical-flow direction and the color histogram both refer to the j-th super-pixel in the i-th frame; the remaining quantities are the average edge strength between two super-pixels and the optical-flow mapping ratio of the pixels inside a temporal pair; δ is the standard Kronecker delta, i.e. δ(u, v) = 1 when v = u and δ(u, v) = 0 when v ≠ u.
Further, the specific method of step (5) is as follows: the SIFT features are clustered into 100 classes. Each super-pixel can be represented as a node; each node contains several SIFT features and can therefore be represented by a SIFT-feature histogram. The k-th bin of the histogram of super-pixel s_j^i is the number of feature points of the k-th SIFT feature class falling inside s_j^i, and H denotes the number of histogram bins. Ω_k^f denotes the sum of the k-th bin over the foreground super-pixels, Ω_k^b the sum over the background super-pixels, and Ω_k the sum over all super-pixels, so that Ω_k = Ω_k^f + Ω_k^b. The foreground and background probability values of the k-th SIFT feature class are computed as Ω_k^f / Ω_k and Ω_k^b / Ω_k, respectively. From these, the foreground and background probabilities of each super-pixel are obtained, and the final high-order energy term E_h is built from these probabilities.
Finally, the global optimization formula used with the graph-cut algorithm in step (6) is:
E(S, L) = φ_u(S, L) + α × φ_p(S, L) + β × E_h(S, L)
where L is the set of all super-pixel labels, S is the set of all super-pixels, φ_u is the data term, φ_p is the smoothness term, E_h is the high-order energy term, and α, β are the weight parameters of the linear combination.
Beneficial effects: compared with the prior art, the advantages of the invention are as follows. First, on the basis of the data term and the smoothness term, the invention adds a high-order energy constraint based on SIFT features to guarantee the global consistency of the video segmentation, which gives strong robustness to problems that appear in video such as random noise, random motion and foreground/background ambiguity. Second, by adding auxiliary nodes to the MRF model, the high-order energy term is approximated by a data term and a smoothness term, which makes it much easier to compute and optimize.
Detailed description of the invention
Fig. 1 is a flow chart of the algorithm of the invention.
Specific embodiment
The technical solution of the invention is described in further detail below with reference to the accompanying drawing and embodiments.
As shown in Fig. 1, the video object segmentation algorithm based on a high-order energy constraint according to the invention comprises the following steps:
(1) performing super-pixel segmentation on the video frame sequence;
Considering that a pixel-based segmentation method would lead to huge memory and computation costs when processing video, the method of this patent operates on super-pixels. Thanks to the good properties of the SLIC algorithm, the result preserves most object boundaries, so the SLIC algorithm is first used to divide each frame of the video frame sequence into a number of super-pixels.
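As an illustration of step (1), the following Python sketch uses the SLIC implementation from scikit-image to decompose each frame into super-pixels. The segment count and compactness values are assumptions chosen for illustration; the patent does not fix them.

```python
# Minimal sketch of step (1): SLIC super-pixel segmentation of a frame sequence.
import numpy as np
from skimage.segmentation import slic

def superpixel_segment(frames, n_segments=1000, compactness=10.0):
    """Return one SLIC label map per frame (labels start at 0)."""
    label_maps = []
    for frame in frames:                          # frame: H x W x 3 RGB image
        labels = slic(frame, n_segments=n_segments,
                      compactness=compactness, start_label=0)
        label_maps.append(labels)
    return label_maps
```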
(2) according to the super-pixel segmentation result, modeling the color features of the foreground and background regions of the pre-labeled frame with Gaussian mixture models, respectively, to obtain the segmentation data term;
Suppose the number of super-pixels on the i-th frame is F_i, s_j^i denotes the j-th of them, and its label is l_j^i; l_j^i = 0 indicates that s_j^i is a background super-pixel and l_j^i = 1 that it is a foreground super-pixel. The expectation-maximization (EM) algorithm is used to fit a foreground Gaussian mixture model G_f(c_j^i) and a background Gaussian mixture model G_b(c_j^i), where c_j^i denotes the RGB value of the super-pixel, approximated by the average RGB value of all pixels inside it. The data term of each super-pixel is then the negative log-likelihood of c_j^i under the Gaussian mixture model selected by its label.
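A minimal sketch of step (2) follows, assuming the standard negative log-likelihood form of the data term. The use of scikit-learn's GaussianMixture for the EM fitting and the number of mixture components (5) are assumptions for illustration.

```python
# Minimal sketch of step (2): fit foreground/background GMMs with EM on mean
# super-pixel RGB values and build a negative log-likelihood data term.
import numpy as np
from sklearn.mixture import GaussianMixture

def mean_rgb_per_superpixel(frame, labels):
    """c_j: average RGB value of the pixels inside each super-pixel."""
    n = labels.max() + 1
    colors = np.zeros((n, 3))
    for j in range(n):
        colors[j] = frame[labels == j].mean(axis=0)
    return colors

def fit_color_models(fg_colors, bg_colors, n_components=5):
    gmm_f = GaussianMixture(n_components).fit(fg_colors)   # foreground model G_f
    gmm_b = GaussianMixture(n_components).fit(bg_colors)   # background model G_b
    return gmm_f, gmm_b

def data_term(colors, gmm_f, gmm_b):
    """phi_u[j, l]: cost of giving super-pixel j label l (0 = background, 1 = foreground)."""
    cost_bg = -gmm_b.score_samples(colors)   # -log p(c | background)
    cost_fg = -gmm_f.score_samples(colors)   # -log p(c | foreground)
    return np.stack([cost_bg, cost_fg], axis=1)
```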
(3) according to the segmentation data term, building a spatio-temporal smoothness term that combines multiple features: color, edge strength, optical-flow direction and mapping ratio;
The spatio-temporal smoothness term consists of a spatial smoothness term and a temporal smoothness term, used respectively for spatial and temporal nodes. A spatial connection means that two super-pixel nodes share an edge; a temporal connection means that pixels inside the two super-pixels are linked by an optical-flow mapping. Local similarity is computed here by combining multiple features: edge strength, color, optical-flow direction and mapping ratio. Denoting the sets of all temporal and spatial pairs by ε_t and ε_s, the total spatio-temporal smoothness term can be expressed as a linear combination of the spatial and temporal terms, with λ_s and λ_t as the weight parameters.
The spatial smoothness term penalizes different labels on spatially adjacent super-pixels according to the similarity of their color histograms and the average edge strength between them; the temporal smoothness term penalizes different labels on temporally connected super-pixels according to the similarity of their concatenated color and optical-flow-direction histograms and the optical-flow mapping ratio of the pixels inside the temporal pair. Here δ is the standard Kronecker delta, i.e. δ(u, v) = 1 when v = u and δ(u, v) = 0 when v ≠ u.
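The sketch below illustrates one plausible way to combine color-histogram similarity and average edge strength into a spatial smoothness weight for a pair of adjacent super-pixels. The exponential form, the L1 histogram distance and the sigma parameter are assumptions; the patent lists the features that are combined, but its exact formula is not reproduced here.

```python
# Sketch of a spatial smoothness weight for one adjacent super-pixel pair (p, q).
import numpy as np

def color_histogram(frame, labels, j, bins=16):
    """Normalized per-channel color histogram of super-pixel j."""
    pixels = frame[labels == j]
    hist = [np.histogram(pixels[:, c], bins=bins, range=(0, 255))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / (hist.sum() + 1e-8)

def spatial_weight(hist_p, hist_q, avg_edge_strength, sigma=0.5):
    """Large weight -> strong penalty for giving p and q different labels."""
    color_sim = np.exp(-np.abs(hist_p - hist_q).sum() / sigma)
    edge_term = np.exp(-avg_edge_strength)   # weak shared boundary -> smoother
    return color_sim * edge_term
```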
(4) according to the segmentation data term and the spatio-temporal smoothness term, adding a high-order energy term based on SIFT features;
Considering that RGB color features alone are not very robust for video object segmentation, and that the results can even be very poor when segmenting certain objects, SIFT features are used here as a high-order energy constraint to enhance the appearance consistency of the segmented object. SIFT features are invariant to rotation, scaling, brightness changes and the like, and form a highly stable kind of local feature.
(5) approximating the high-order energy term by a data term and a smoothness term through adding auxiliary nodes to the MRF graph model;
The SIFT features are clustered into 100 classes. Each super-pixel can be represented as a node; each node contains several SIFT features and can therefore be represented by a SIFT-feature histogram. The k-th bin of the histogram of super-pixel s_j^i is the number of feature points of the k-th SIFT feature class falling inside s_j^i, and H denotes the number of histogram bins. Ω_k^f denotes the sum of the k-th bin over the foreground super-pixels, Ω_k^b the sum over the background super-pixels, and Ω_k the sum over all super-pixels, so that Ω_k = Ω_k^f + Ω_k^b. The foreground and background probability values of the k-th SIFT feature class are computed as Ω_k^f / Ω_k and Ω_k^b / Ω_k, respectively. From these, the foreground and background probabilities of each super-pixel are obtained, and the final high-order energy term E_h is built from these probabilities.
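A sketch of the visual-word statistics behind step (5): SIFT descriptors are clustered into 100 classes with k-means, a word histogram is accumulated per super-pixel, and per-class foreground/background probabilities are computed from the labeled frame as Ω_k^f/Ω_k and Ω_k^b/Ω_k. The use of OpenCV and scikit-learn, and the assignment of keypoints to super-pixels by their coordinates, are implementation assumptions; the construction of the final high-order term from these probabilities is not shown.

```python
# Sketch of step (5): per-super-pixel SIFT word histograms and per-class
# foreground/background probabilities.
import numpy as np
import cv2
from sklearn.cluster import KMeans

def sift_word_histograms(frame_gray, labels, kmeans, n_words=100):
    """One n_words-bin histogram per super-pixel."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(frame_gray, None)
    n_sp = labels.max() + 1
    hists = np.zeros((n_sp, n_words))
    if descriptors is None:
        return hists
    words = kmeans.predict(descriptors.astype(np.float32))
    for kp, w in zip(keypoints, words):
        x = min(int(round(kp.pt[0])), labels.shape[1] - 1)
        y = min(int(round(kp.pt[1])), labels.shape[0] - 1)
        hists[labels[y, x], w] += 1               # count word w inside super-pixel
    return hists

def class_probabilities(hists, sp_labels):
    """Foreground/background probability of each SIFT word class."""
    omega_f = hists[sp_labels == 1].sum(axis=0)   # Omega_k^f
    omega_b = hists[sp_labels == 0].sum(axis=0)   # Omega_k^b
    omega = omega_f + omega_b + 1e-8              # Omega_k
    return omega_f / omega, omega_b / omega
```

The kmeans model would be fitted beforehand on all descriptors of the labeled frame, for example KMeans(n_clusters=100).fit(descriptors).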
(6) completing the segmentation with a graph-cut algorithm.
The global optimization formula is:
E(S, L) = φ_u(S, L) + α × φ_p(S, L) + β × E_h(S, L)
where L is the set of all super-pixel labels, S is the set of all super-pixels, φ_u is the data term, φ_p is the smoothness term, E_h is the high-order energy term, and α, β are the weight parameters of the linear combination.
A rigorous mathematical derivation shows that the high-order energy term can be approximately represented as a data term and a smoothness term, so the graph-cut algorithm can be used to complete the segmentation.
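A minimal sketch of step (6) using PyMaxflow as one possible graph-cut solver; the patent does not name a specific implementation. The unary costs are assumed to already include the contribution of the approximated high-order term, and the pairwise edges carry the spatio-temporal smoothness weights.

```python
# Sketch of step (6): binary labeling of super-pixels by max-flow/min-cut.
import numpy as np
import maxflow

def graph_cut_segment(data_term, pairwise):
    """data_term: (n, 2) unary costs; pairwise: list of (p, q, weight) edges."""
    n = data_term.shape[0]
    g = maxflow.Graph[float]()
    nodes = g.add_nodes(n)
    for j in range(n):
        # Cutting the source link costs data_term[j, 1] (label 1),
        # cutting the sink link costs data_term[j, 0] (label 0).
        g.add_tedge(nodes[j], data_term[j, 1], data_term[j, 0])
    for p, q, w in pairwise:
        g.add_edge(nodes[p], nodes[q], w, w)      # symmetric smoothness edge
    g.maxflow()
    # get_segment: 0 = source side (background), 1 = sink side (foreground)
    return np.array([g.get_segment(nodes[j]) for j in range(n)])
```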

Claims (6)

1. A video object segmentation algorithm based on a high-order energy constraint, characterized by comprising the following steps:
(1) performing super-pixel segmentation on the video frame sequence;
(2) according to the super-pixel segmentation result, modeling the color features of the foreground and background regions of a pre-labeled frame with Gaussian mixture models, respectively, to obtain the segmentation data term;
(3) according to the segmentation data term, building a spatio-temporal smoothness term that combines multiple features: color, edge strength, optical-flow direction and mapping ratio;
(4) according to the segmentation data term and the spatio-temporal smoothness term, adding a high-order energy term based on SIFT features;
(5) approximating the high-order energy term by a data term and a smoothness term through adding auxiliary nodes to the MRF graph model;
(6) completing the segmentation with a graph-cut algorithm.
2. The video object segmentation algorithm based on a high-order energy constraint according to claim 1, characterized in that the super-pixel segmentation method in step (1) uses the SLIC algorithm to divide each frame of the video frame sequence into a number of super-pixels.
3. The video object segmentation algorithm based on a high-order energy constraint according to claim 1, characterized in that in step (2), s_j^i denotes the j-th super-pixel on the i-th frame and its label is l_j^i; l_j^i = 0 indicates that s_j^i is a background super-pixel and l_j^i = 1 that it is a foreground super-pixel; the expectation-maximization algorithm is used to fit a foreground Gaussian mixture model G_f(c_j^i) and a background Gaussian mixture model G_b(c_j^i), where c_j^i denotes the RGB value of the super-pixel; the data term of each super-pixel is the negative log-likelihood of c_j^i under the Gaussian mixture model selected by its label.
4. The video object segmentation algorithm based on a high-order energy constraint according to claim 1 or 3, characterized in that the spatio-temporal smoothness term in step (3) consists of a spatial smoothness term and a temporal smoothness term, used respectively for the smoothing of spatial and temporal nodes; the spatial smoothness term penalizes different labels on spatially adjacent super-pixels according to the similarity of their color histograms and the average edge strength between them; the temporal smoothness term penalizes different labels on temporally connected super-pixels according to the similarity of their concatenated color and optical-flow-direction histograms and the optical-flow mapping ratio of the pixels inside the temporal pair; the spatio-temporal smoothness term is the linear combination of the two, where ε_t is the set of all temporal pairs, ε_s is the set of all spatial pairs, λ_s and λ_t are the weight parameters of the linear combination, and δ is the standard Kronecker delta, i.e. δ(u, v) = 1 when v = u and δ(u, v) = 0 when v ≠ u.
5. The video object segmentation algorithm based on a high-order energy constraint according to claim 1, characterized in that the specific method of step (5) is: clustering the SIFT features into several classes; each super-pixel can be represented as a node, each node contains several SIFT features, and each node can be represented by a SIFT-feature histogram; the k-th bin of the histogram of super-pixel s_j^i is the number of feature points of the k-th SIFT feature class falling inside s_j^i, and H denotes the number of histogram bins; Ω_k^f denotes the sum of the k-th bin over the foreground super-pixels, Ω_k^b the sum over the background super-pixels, and Ω_k the sum over all super-pixels, with Ω_k = Ω_k^f + Ω_k^b; the foreground and background probability values of the k-th SIFT feature class are computed as Ω_k^f / Ω_k and Ω_k^b / Ω_k; from these, the foreground and background probabilities of each super-pixel are obtained, and the final high-order energy term is built from these probabilities.
6. The video object segmentation algorithm based on a high-order energy constraint according to claim 1, characterized in that the global optimization formula used with the graph-cut algorithm in step (6) is:
E(S, L) = φ_u(S, L) + α × φ_p(S, L) + β × E_h(S, L)
where L is the set of all super-pixel labels, S is the set of all super-pixels, φ_u is the data term, φ_p is the smoothness term, E_h is the high-order energy term, and α, β are the weight parameters of the linear combination.
CN201910649351.8A 2019-07-18 2019-07-18 Video object segmentation algorithm based on high-order energy constraint Active CN110390293B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910649351.8A CN110390293B (en) 2019-07-18 2019-07-18 Video object segmentation algorithm based on high-order energy constraint

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910649351.8A CN110390293B (en) 2019-07-18 2019-07-18 Video object segmentation algorithm based on high-order energy constraint

Publications (2)

Publication Number Publication Date
CN110390293A (en) 2019-10-29
CN110390293B (en) 2023-04-25

Family

ID=68285129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910649351.8A Active CN110390293B (en) 2019-07-18 2019-07-18 Video object segmentation algorithm based on high-order energy constraint

Country Status (1)

Country Link
CN (1) CN110390293B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111800609A (en) * 2020-06-29 2020-10-20 中国矿业大学 Mine roadway video splicing method based on multi-plane multi-perception suture line
CN113255493A (en) * 2021-05-17 2021-08-13 南京信息工程大学 Video target segmentation method fusing visual words and self-attention mechanism
CN114862725A (en) * 2022-07-07 2022-08-05 广州光锥元信息科技有限公司 Method and device for realizing motion perception fuzzy special effect based on optical flow method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2011265383A1 (en) * 2011-12-20 2013-07-04 Canon Kabushiki Kaisha Geodesic superpixel segmentation
CN104134217A (en) * 2014-07-29 2014-11-05 中国科学院自动化研究所 Video salient object segmentation method based on super voxel graph cut
AU2014271236A1 (en) * 2014-12-02 2016-06-16 Canon Kabushiki Kaisha Video segmentation method
CN107644429A (en) * 2017-09-30 2018-01-30 华中科技大学 A kind of methods of video segmentation based on strong goal constraint saliency
CN107657625A (en) * 2017-09-11 2018-02-02 南京信息工程大学 Merge the unsupervised methods of video segmentation that space-time multiple features represent
US20180211393A1 (en) * 2017-01-24 2018-07-26 Beihang University Image guided video semantic object segmentation method and apparatus

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2011265383A1 (en) * 2011-12-20 2013-07-04 Canon Kabushiki Kaisha Geodesic superpixel segmentation
CN104134217A (en) * 2014-07-29 2014-11-05 中国科学院自动化研究所 Video salient object segmentation method based on super voxel graph cut
AU2014271236A1 (en) * 2014-12-02 2016-06-16 Canon Kabushiki Kaisha Video segmentation method
US20180211393A1 (en) * 2017-01-24 2018-07-26 Beihang University Image guided video semantic object segmentation method and apparatus
CN107657625A (en) * 2017-09-11 2018-02-02 南京信息工程大学 Merge the unsupervised methods of video segmentation that space-time multiple features represent
CN107644429A (en) * 2017-09-30 2018-01-30 华中科技大学 A kind of methods of video segmentation based on strong goal constraint saliency

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111800609A (en) * 2020-06-29 2020-10-20 中国矿业大学 Mine roadway video splicing method based on multi-plane multi-perception suture line
CN111800609B (en) * 2020-06-29 2021-05-25 中国矿业大学 Mine roadway video splicing method based on multi-plane multi-perception suture line
CN113255493A (en) * 2021-05-17 2021-08-13 南京信息工程大学 Video target segmentation method fusing visual words and self-attention mechanism
CN113255493B (en) * 2021-05-17 2023-06-30 南京信息工程大学 Video target segmentation method integrating visual words and self-attention mechanism
CN114862725A (en) * 2022-07-07 2022-08-05 广州光锥元信息科技有限公司 Method and device for realizing motion perception fuzzy special effect based on optical flow method

Also Published As

Publication number Publication date
CN110390293B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
CN109636905B (en) Environment semantic mapping method based on deep convolutional neural network
CN109598268B (en) RGB-D (Red Green blue-D) significant target detection method based on single-stream deep network
Grady et al. Random walks for interactive alpha-matting
CN110163239B (en) Weak supervision image semantic segmentation method based on super-pixel and conditional random field
CN110866896B (en) Image saliency target detection method based on k-means and level set super-pixel segmentation
CN110390293A (en) A kind of Video object segmentation algorithm based on high-order energy constraint
Cherabier et al. Learning priors for semantic 3d reconstruction
Roa'a et al. Generation of high dynamic range for enhancing the panorama environment
CN104599290B (en) Video sensing node-oriented target detection method
EP2856425A1 (en) Segmentation of a foreground object in a 3d scene
López-Rubio et al. Stochastic approximation for background modelling
CN111462149A (en) Example human body analysis method based on visual saliency
CN110163873B (en) Bilateral video target segmentation method and system
CN104616026A (en) Monitor scene type identification method for intelligent video monitor
Zhou et al. An efficient two-stage region merging method for interactive image segmentation
CN110390724B (en) SLAM method with instance segmentation
CN107657276B (en) Weak supervision semantic segmentation method based on searching semantic class clusters
CN107316324B (en) Method for realizing real-time stereo matching and optimization based on CUDA
CN111160099B (en) Intelligent segmentation method for video image target
Wang et al. Overview of image colorization and its applications
Gao et al. Shot-based video retrieval with optical flow tensor and HMMs
CN112465837B (en) Image segmentation method for sparse subspace fuzzy clustering by utilizing spatial information constraint
CN114022371B (en) Defogging device and defogging method based on space and channel attention residual error network
Campbell et al. Automatic Interpretation of Outdoor Scenes.
CN108376390B (en) Dynamic perception smoothing filtering algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant