CN115393585A - Moving target detection method based on super-pixel fusion network

Moving target detection method based on super-pixel fusion network

Info

Publication number
CN115393585A
Authority
CN
China
Prior art keywords
layer
pixel
super
image
histogram
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210962818.6A
Other languages
Chinese (zh)
Other versions
CN115393585B (en)
Inventor
李阳 (Li Yang)
张先玉 (Zhang Xianyu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Botu Electrical Engineering Co., Ltd.
Original Assignee
Jiangsu Vocational College of Information Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Vocational College of Information Technology
Priority to CN202210962818.6A
Publication of CN115393585A
Application granted
Publication of CN115393585B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/36 Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Nonlinear Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of target detection, in particular to a moving target detection method based on a super-pixel fusion network. The method extracts a candidate foreground by median filtering (the pixel features) and performs super-pixel segmentation on the image sequence; it then extracts histogram features of the candidate foreground super-pixels (the super-pixel features), and takes the pixel features and the super-pixel features respectively as the inputs of a super-pixel fusion network. The whole process has high operation speed and strong robustness.

Description

Moving target detection method based on super-pixel fusion network
Technical Field
The invention relates to the technical field of target detection, in particular to a moving target detection method based on a super-pixel fusion network.
Background
Moving object detection is one of the applications of image processing. Generally speaking, a background model is obtained by a statistical method and updated in real time to adapt to illumination changes and changes of the scene; a morphological method and connected-domain area checks are used for post-processing to eliminate the influence of noise and background disturbance; and shadows are detected in the HSV chromaticity space to obtain an accurate moving object. In complex scenes, moving object detection remains a challenging task. Existing deep-learning-based methods mainly adopt a U-Net network and achieve impressive results; however, they ignore the local continuity between pixels, so the detection performance still needs to be further improved. In addition, such a network encodes information of the scene, so its generalization capability also needs to be further improved.
Disclosure of Invention
In view of the above situation, an object of the present invention is to provide a moving object detection method based on a super-pixel fusion network.
The technical purpose of the invention is realized by the following technical scheme:
a moving target detection method based on a super-pixel fusion network comprises two parts: (1) a training stage; (2) a detection stage;
the training stage comprises:
step 1, inputting a color image sequence R1, averaging 3 channel numerical values to perform image graying, and obtaining a grayed image sequence G1;
step 2, performing median filtering on the image sequence to obtain a background image B1, and performing difference on the image sequence G1 and the background image B1 to obtain a candidate foreground sequence which is marked as a pixel characteristic F1;
step 3, performing superpixel segmentation on the color image sequence R1 to obtain region information C1;
step 4, calculating, according to the region information, a histogram of the candidate foreground pixels in each super-pixel region, the histogram covering the range [-1, 1] with a bin interval of 0.1;
step 5, taking the histogram of each region as the characteristics of all pixels in the region, and recording the histogram as a super-pixel characteristic F2;
step 6, constructing a network;
step 7, training the model;
step 8, outputting the trained network model M;
the detection phase comprises:
step 9, inputting an image sequence R2; if the images are color images, averaging the values of the 3 channels to perform graying and obtain a grayed image sequence G2; if the images are grayscale images, directly setting G2 = R2;
step 10, performing median filtering on the image sequence to obtain a background image B2, and performing difference on the image sequence G2 and the background image B2 to obtain a candidate foreground sequence which is recorded as a pixel characteristic F3;
step 11, performing superpixel segmentation on the color image sequence R2 to obtain region information C2;
step 12, calculating, according to the region information, a histogram of the candidate foreground pixels in each super-pixel region, the histogram covering the range [-1, 1] with a bin interval of 0.1;
step 13, taking the histogram of each region as the characteristics of all pixels in the region, and recording the histogram as a super-pixel characteristic F4;
step 14, taking the super pixel characteristics F4 and the pixel characteristics F3 as the input of the trained network model M;
and step 15, outputting a detection result.
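
The patent does not fix a particular super-pixel algorithm, background-filter window, or histogram normalization. A minimal sketch of the feature-extraction pipeline (steps 1-5 of the training stage, equally steps 9-13 of the detection stage) might therefore look as follows, assuming a temporal median for the background model and SLIC from scikit-image for super-pixel segmentation. Note also that a 0.1 bin interval over [-1, 1] gives 20 bins, while the super-pixel encoder described below expects 21 channels; the sketch uses 21 bins of width ≈ 0.1 to match the channel count:

```python
# Sketch of the feature-extraction pipeline (steps 1-5 / 9-13).
# Assumptions: temporal median background, SLIC superpixels, 21 bins.
import numpy as np
from skimage.segmentation import slic

def extract_features(frames_rgb, n_superpixels=400):
    """frames_rgb: (T, H, W, 3) float array with values in [0, 1]."""
    # Step 1: gray the images by averaging the 3 channels
    gray = frames_rgb.mean(axis=-1)                        # (T, H, W)
    # Step 2: temporal median filter gives the background; the difference
    # is the candidate foreground (pixel features F1/F3), values in [-1, 1]
    background = np.median(gray, axis=0)                   # (H, W)
    pixel_feat = gray - background
    # Steps 3-5: per-frame superpixels, then one histogram per region,
    # copied to every pixel of that region (superpixel features F2/F4)
    bin_edges = np.linspace(-1.0, 1.0, 22)                 # 21 bins, width ~0.1
    sp_feat = np.zeros(pixel_feat.shape + (21,), dtype=np.float32)
    for t, frame in enumerate(frames_rgb):
        labels = slic(frame, n_segments=n_superpixels)     # region info C1/C2
        for lab in np.unique(labels):
            mask = labels == lab
            hist, _ = np.histogram(pixel_feat[t][mask], bins=bin_edges)
            sp_feat[t][mask] = hist / max(int(mask.sum()), 1)  # normalized
    return pixel_feat, sp_feat
```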
Further, the specific method of step 6 is as follows:
(1) constructing an encoder:
the convolutional neural network comprises an input layer, a hidden layer and an output layer;
the input layer comprises two inputs, each with a resolution of 240 × 320; the number of channels of the encoder input corresponding to the super-pixel characteristic F2 is 21, and the number of channels of the encoder input corresponding to the pixel characteristic F1 is 1; all convolutions in the convolutional neural network are of size 3 × 3;
layer 1 of the hidden layers adopts convolution, batch normalization, an activation layer and a pooling layer (Conv + BN + Relu + Maxpool), using 8 convolutions to generate 8 feature maps;
layer 2 of the hidden layers adopts convolution, batch normalization, an activation layer and a pooling layer (Conv + BN + Relu + Maxpool), using 16 convolutions to generate 16 feature maps;
layer 3 of the hidden layers adopts convolution, batch normalization, an activation layer and a pooling layer (Conv + BN + Relu + Maxpool), using 32 convolutions to generate 32 feature maps;
layer 4 of the hidden layers adopts convolution, batch normalization, an activation layer and a pooling layer (Conv + BN + Relu + Maxpool), using 64 convolutions to generate 64 feature maps;
(2) constructing a connecting layer:
the 5th layer of the hidden layers is a connection layer, which connects the outputs of the two encoders by concatenation;
(3) constructing a decoder:
layer 6 of the hidden layers adopts convolution, batch normalization and an activation layer (Conv + BN + Relu), using 128 convolutions to generate 64 feature maps;
layer 7 of the hidden layers adopts deconvolution, batch normalization and an activation layer (Deconv + BN + Relu), using 64 convolutions to generate 32 feature maps;
layer 8 of the hidden layers adopts deconvolution, batch normalization and an activation layer (Deconv + BN + Relu), using 32 convolutions to generate 16 feature maps;
layer 9 of the hidden layers adopts deconvolution, batch normalization and an activation layer (Deconv + BN + Relu), using 8 convolutions to generate 8 feature maps;
layer 10 of the hidden layers adopts deconvolution, batch normalization and a clipped activation layer (Deconv + BN + ClippedRelu), using 1 convolution to generate 1 feature map;
the output layer comprises a regression layer;
The super-pixel features and the pixel features are taken as the input of the network, and the output is the ground truth of the corresponding input image.
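
A sketch of this two-encoder architecture in PyTorch is given below. The layer widths (8/16/32/64 encoder maps, concatenation to 128 channels, 64/32/16/8/1 decoder maps) follow the description above, while the deconvolution kernel size, the stride-2 upsampling, and the use of Hardtanh as the clipped ReLU are assumptions:

```python
# Sketch of the super-pixel fusion network (step 6); layer widths follow
# the patent, kernel/stride choices for "Deconv" are assumptions.
import torch
import torch.nn as nn

def enc_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True), nn.MaxPool2d(2))

def dec_block(cin, cout, act=None):
    return nn.Sequential(
        nn.ConvTranspose2d(cin, cout, 2, stride=2), nn.BatchNorm2d(cout),
        act if act is not None else nn.ReLU(inplace=True))

class SuperpixelFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # one encoder per input: superpixel features (21 ch), pixel features (1 ch)
        self.enc_sp = nn.Sequential(enc_block(21, 8), enc_block(8, 16),
                                    enc_block(16, 32), enc_block(32, 64))
        self.enc_px = nn.Sequential(enc_block(1, 8), enc_block(8, 16),
                                    enc_block(16, 32), enc_block(32, 64))
        # layer 6: Conv + BN + ReLU on the concatenated 128-channel tensor
        self.fuse = nn.Sequential(nn.Conv2d(128, 64, 3, padding=1),
                                  nn.BatchNorm2d(64), nn.ReLU(inplace=True))
        # layers 7-10: Deconv + BN + ReLU; the last uses a clipped ReLU
        self.dec = nn.Sequential(
            dec_block(64, 32), dec_block(32, 16), dec_block(16, 8),
            dec_block(8, 1, act=nn.Hardtanh(0.0, 1.0)))   # clip output to [0, 1]

    def forward(self, f_sp, f_px):   # (B,21,240,320) and (B,1,240,320)
        z = torch.cat([self.enc_sp(f_sp), self.enc_px(f_px)], dim=1)  # layer 5
        return self.dec(self.fuse(z))                     # (B,1,240,320)
```

With 240 × 320 inputs, the four max-pooling layers reduce the feature maps to 15 × 20, and the four stride-2 deconvolutions restore the original resolution.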
In conclusion, the invention has the following beneficial effects:
the invention firstly uses median filtering to obtain candidate foreground, then judges whether the pixel is a foreground pixel or not through a super-pixel fusion network, and only relates to simple multiplication of a matrix when detecting, so the invention has small time complexity, and the processing speed of a training stage and a detection stage is high.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, are not intended to limit the invention:
FIG. 1 is a schematic diagram of the steps of the present invention.
Detailed Description
The foregoing and other technical and scientific aspects, features and utilities of the present invention will be apparent from the following detailed description of the embodiments, which is to be read in conjunction with the accompanying drawing of FIG. 1. The structural contents mentioned in the following embodiments are all described with reference to the accompanying drawings of the specification.
Exemplary embodiments of the present invention will be described below with reference to the accompanying drawings.
Example 1: a moving target detection method based on a super-pixel fusion network comprises two parts: (1) a training stage; (2) a detection stage;
the training stage comprises:
step 1, inputting a color image sequence R1, averaging 3-channel numerical values to perform image graying, and obtaining a grayed image sequence G1;
step 2, performing median filtering on the image sequence to obtain a background image B1, and performing difference on the image sequence G1 and the background image B1 to obtain a candidate foreground sequence which is marked as a pixel characteristic F1;
step 3, performing superpixel segmentation on the color image sequence R1 to obtain region information C1;
step 4, calculating, according to the region information, a histogram of the candidate foreground pixels in each super-pixel region, the histogram covering the range [-1, 1] with a bin interval of 0.1;
step 5, taking the histogram of each region as the characteristics of all pixels in the region, and recording the histogram of each region as a super-pixel characteristic F2;
step 6, constructing a network:
(1) constructing an encoder:
the convolutional neural network comprises an input layer, a hidden layer and an output layer;
the input layer comprises two inputs, each with a resolution of 240 × 320; the number of channels of the encoder input corresponding to the super-pixel characteristic F2 is 21, and the number of channels of the encoder input corresponding to the pixel characteristic F1 is 1; all convolutions in the convolutional neural network are of size 3 × 3;
layer 1 of the hidden layers adopts convolution, batch normalization, an activation layer and a pooling layer (Conv + BN + Relu + Maxpool), using 8 convolutions to generate 8 feature maps;
layer 2 of the hidden layers adopts convolution, batch normalization, an activation layer and a pooling layer (Conv + BN + Relu + Maxpool), using 16 convolutions to generate 16 feature maps;
layer 3 of the hidden layers adopts convolution, batch normalization, an activation layer and a pooling layer (Conv + BN + Relu + Maxpool), using 32 convolutions to generate 32 feature maps;
layer 4 of the hidden layers adopts convolution, batch normalization, an activation layer and a pooling layer (Conv + BN + Relu + Maxpool), using 64 convolutions to generate 64 feature maps;
(2) constructing a connecting layer:
the 5th layer of the hidden layers is a connection layer, which connects the outputs of the two encoders by concatenation;
(3) constructing a decoder:
layer 6 of the hidden layers adopts convolution, batch normalization and an activation layer (Conv + BN + Relu), using 128 convolutions to generate 64 feature maps;
layer 7 of the hidden layers adopts deconvolution, batch normalization and an activation layer (Deconv + BN + Relu), using 64 convolutions to generate 32 feature maps;
layer 8 of the hidden layers adopts deconvolution, batch normalization and an activation layer (Deconv + BN + Relu), using 32 convolutions to generate 16 feature maps;
layer 9 of the hidden layers adopts deconvolution, batch normalization and an activation layer (Deconv + BN + Relu), using 8 convolutions to generate 8 feature maps;
layer 10 of the hidden layers adopts deconvolution, batch normalization and a clipped activation layer (Deconv + BN + ClippedRelu), using 1 convolution to generate 1 feature map;
the output layer comprises a regression layer;
the super-pixel features and the pixel features are taken as the input of the network, and the output is the ground truth of the corresponding input image;
step 7, training the model;
step 8, outputting the trained network model M;
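
Step 7 is not elaborated in the patent. Since the output layer is a regression layer, a plausible reading is pixel-wise regression of the ground-truth mask; the sketch below reuses the SuperpixelFusionNet sketch from step 6 and assumes an MSE loss and Adam optimizer (the learning rate, epoch count, and data loader are illustrative only):

```python
# Hypothetical training loop for steps 7-8; loss/optimizer are assumptions.
import torch
import torch.nn as nn

model = SuperpixelFusionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed optimizer
criterion = nn.MSELoss()          # regression output -> mean-squared error

def train_model(loader, epochs=20):
    """loader yields (superpixel features F2, pixel features F1, ground truth)."""
    model.train()
    for _ in range(epochs):
        for f_sp, f_px, gt in loader:      # gt: (B, 1, 240, 320) in {0, 1}
            optimizer.zero_grad()
            loss = criterion(model(f_sp, f_px), gt)
            loss.backward()
            optimizer.step()
    return model                           # step 8: trained network model M
```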
the detection phase comprises:
step 9, inputting an image sequence R2; if the images are color images, averaging the values of the 3 channels to perform graying and obtain a grayed image sequence G2; if the images are grayscale images, directly setting G2 = R2;
step 10, performing median filtering on the image sequence to obtain a background image B2, and performing difference on the image sequence G2 and the background image B2 to obtain a candidate foreground sequence which is marked as a pixel characteristic F3;
step 11, performing superpixel segmentation on the color image sequence R2 to obtain region information C2;
step 12, calculating, according to the region information, a histogram of the candidate foreground pixels in each super-pixel region, the histogram covering the range [-1, 1] with a bin interval of 0.1;
step 13, taking the histogram of each region as the characteristics of all pixels in the region, and recording the histogram as a super-pixel characteristic F4;
step 14, taking the super pixel characteristics F4 and the pixel characteristics F3 as the input of the trained network model M;
and step 15, outputting a detection result.
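
The patent does not state how the regression output is binarized into the final detection result; one plausible choice, shown here purely as an assumption, is a fixed threshold of 0.5 on the clipped output:

```python
# Hypothetical detection step (steps 14-15); the 0.5 threshold is an assumption.
import torch

@torch.no_grad()
def detect(model, f_sp, f_px, threshold=0.5):
    """f_sp: superpixel features F4, f_px: pixel features F3."""
    model.eval()
    pred = model(f_sp, f_px)               # clipped-ReLU output in [0, 1]
    return (pred > threshold).float()      # binary foreground mask
```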
The invention extracts the candidate foreground by median filtering (called the pixel features), performs super-pixel segmentation on the image sequence, then extracts the histogram features of the candidate foreground super-pixels (called the super-pixel features), and finally takes the pixel features and the super-pixel features respectively as the inputs of the super-pixel fusion network.
The whole detection process involves only simple matrix multiplications, so the time complexity is low and the processing speed of both the training stage and the detection stage is high; moreover, because dynamic characteristics are taken into account, the super-pixel fusion features can effectively suppress dynamic backgrounds.
Experiments show that the super-pixel fusion network performs well on 34 image sequences from CDnet 2014: super-pixel fusion removes more background noise, and the network has stronger representational capacity than a plain network of the same depth.
While the invention has been described above in further detail with reference to specific embodiments, the invention is not limited to those specific embodiments. For those skilled in the art to which the invention pertains, extensions, changes of operation method, and data replacements made on the basis of the technical solution of the invention shall all fall within the protection scope of the invention.

Claims (2)

1. A moving target detection method based on a super-pixel fusion network is characterized by comprising two stages: (1) a training stage; (2) a detection stage;
the training stage comprises the following steps:
step 1, inputting a color image sequence R1, averaging 3-channel numerical values to perform image graying, and obtaining a grayed image sequence G1;
step 2, performing median filtering on the image sequence to obtain a background image B1, and performing difference on the image sequence G1 and the background image B1 to obtain a candidate foreground sequence which is marked as a pixel characteristic F1;
step 3, performing superpixel segmentation on the color image sequence R1 to obtain region information C1;
step 4, calculating, according to the region information, a histogram of the candidate foreground pixels in each super-pixel region, the histogram covering the range [-1, 1] with a bin interval of 0.1;
step 5, taking the histogram of each region as the characteristics of all pixels in the region, and recording the histogram as a super-pixel characteristic F2;
step 6, constructing a network;
step 7, training the model;
step 8, outputting the trained network model M;
the detection phase comprises:
step 9, inputting an image sequence R2; if the images are color images, averaging the values of the 3 channels to perform graying and obtain a grayed image sequence G2; if the images are grayscale images, directly setting G2 = R2;
step 10, performing median filtering on the image sequence to obtain a background image B2, and performing difference on the image sequence G2 and the background image B2 to obtain a candidate foreground sequence which is marked as a pixel characteristic F3;
step 11, performing superpixel segmentation on the color image sequence R2 to obtain region information C2;
step 12, calculating, according to the region information, a histogram of the candidate foreground pixels in each super-pixel region, the histogram covering the range [-1, 1] with a bin interval of 0.1;
step 13, taking the histogram of each region as the characteristics of all pixels in the region, and recording the histogram as a super-pixel characteristic F4;
step 14, using the super pixel characteristics F4 and the pixel characteristics F3 as the input of the trained network model M;
and step 15, outputting a detection result.
2. The moving object detection method based on the super-pixel fusion network according to claim 1, wherein the specific method in step 6 is as follows:
(1) constructing an encoder:
the convolutional neural network comprises an input layer, a hidden layer and an output layer;
the input layer comprises two inputs, each with a resolution of 240 × 320; the number of channels of the encoder input corresponding to the super-pixel characteristic F2 is 21, and the number of channels of the encoder input corresponding to the pixel characteristic F1 is 1; all convolutions in the convolutional neural network are of size 3 × 3;
layer 1 of the hidden layers adopts convolution, batch normalization, an activation layer and a pooling layer (Conv + BN + Relu + Maxpool), using 8 convolutions to generate 8 feature maps;
layer 2 of the hidden layers adopts convolution, batch normalization, an activation layer and a pooling layer (Conv + BN + Relu + Maxpool), using 16 convolutions to generate 16 feature maps;
layer 3 of the hidden layers adopts convolution, batch normalization, an activation layer and a pooling layer (Conv + BN + Relu + Maxpool), using 32 convolutions to generate 32 feature maps;
layer 4 of the hidden layers adopts convolution, batch normalization, an activation layer and a pooling layer (Conv + BN + Relu + Maxpool), using 64 convolutions to generate 64 feature maps;
(2) constructing a connecting layer:
the 5th layer of the hidden layers is a connection layer, which connects the outputs of the two encoders by concatenation;
(3) constructing a decoder:
layer 6 of the hidden layers adopts convolution, batch normalization and an activation layer (Conv + BN + Relu), using 128 convolutions to generate 64 feature maps;
layer 7 of the hidden layers adopts deconvolution, batch normalization and an activation layer (Deconv + BN + Relu), using 64 convolutions to generate 32 feature maps;
layer 8 of the hidden layers adopts deconvolution, batch normalization and an activation layer (Deconv + BN + Relu), using 32 convolutions to generate 16 feature maps;
layer 9 of the hidden layers adopts deconvolution, batch normalization and an activation layer (Deconv + BN + Relu), using 8 convolutions to generate 8 feature maps;
layer 10 of the hidden layers adopts deconvolution, batch normalization and a clipped activation layer (Deconv + BN + ClippedRelu), using 1 convolution to generate 1 feature map;
the output layer comprises a regression layer;
the super-pixel features and the pixel features are taken as the input of the network, and the output is the ground truth of the corresponding input image.
CN202210962818.6A 2022-08-11 2022-08-11 Moving object detection method based on super-pixel fusion network Active CN115393585B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210962818.6A 2022-08-11 2022-08-11 Moving object detection method based on super-pixel fusion network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210962818.6A 2022-08-11 2022-08-11 Moving object detection method based on super-pixel fusion network

Publications (2)

Publication Number Publication Date
CN115393585A 2022-11-25
CN115393585B 2023-05-12

Family

ID=84119085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210962818.6A 2022-08-11 2022-08-11 Moving object detection method based on super-pixel fusion network

Country Status (1)

Country Link
CN (1) CN115393585B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102637253A (en) * 2011-12-30 2012-08-15 清华大学 Video foreground object extracting method based on visual saliency and superpixel division
CN103578119A (en) * 2013-10-31 2014-02-12 苏州大学 Target detection method in Codebook dynamic scene based on superpixels
CN105809716A (en) * 2016-03-07 2016-07-27 南京邮电大学 Superpixel and three-dimensional self-organizing background subtraction algorithm-combined foreground extraction method
US20170140231A1 (en) * 2015-11-13 2017-05-18 Honda Motor Co., Ltd. Method and system for moving object detection with single camera
CN111881915A (en) * 2020-07-15 2020-11-03 武汉大学 Satellite video target intelligent detection method based on multiple prior information constraints
CN112561949A (en) * 2020-12-23 2021-03-26 江苏信息职业技术学院 Fast moving target detection algorithm based on RPCA and support vector machine
CN112802054A (en) * 2021-02-04 2021-05-14 重庆大学 Mixed Gaussian model foreground detection method fusing image segmentation
CN112926466A (en) * 2021-03-02 2021-06-08 江苏信息职业技术学院 Moving target detection method based on differential convolutional neural network
CN114841941A (en) * 2022-04-24 2022-08-02 江苏信息职业技术学院 Moving target detection algorithm based on depth and color image fusion


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ZHI LIU et al.: "Superpixel-Based Spatiotemporal Saliency Detection" *
于洪洋 (Yu Hongyang) et al.: "Video moving target detection algorithm based on superpixel-consistent saliency" *
云红全 (Yun Hongquan) et al.: "Moving target detection algorithm based on superpixel spatiotemporal saliency" *
李阳 (Li Yang): "Fast moving target detection algorithm based on RPCA and SVM" *
邢晴 (Xing Qing): "Research on maritime target detection method based on visual saliency" *

Also Published As

Publication number Publication date
CN115393585B (en) 2023-05-12

Similar Documents

Publication Publication Date Title
US20080112606A1 (en) Method for moving cell detection from temporal image sequence model estimation
US20100128789A1 (en) Method and apparatus for processing video sequences
CN110782477A (en) Moving target rapid detection method based on sequence image and computer vision system
CN111145209A (en) Medical image segmentation method, device, equipment and storage medium
US11042986B2 (en) Method for thinning and connection in linear object extraction from an image
CN112364865B (en) Method for detecting small moving target in complex scene
Xue et al. Boundary-induced and scene-aggregated network for monocular depth prediction
Boiangiu et al. Voting-based image segmentation
CN113395415A (en) Camera data processing method and system based on noise reduction technology
Katkar et al. A novel approach for medical image segmentation using PCA and K-means clustering
CN109785357B (en) Robot intelligent panoramic photoelectric reconnaissance method suitable for battlefield environment
CN111028263A (en) Moving object segmentation method and system based on optical flow color clustering
Zhang et al. Local stereo matching: An adaptive weighted guided image filtering-based approach
CN115830064B (en) Weak and small target tracking method and device based on infrared pulse signals
CN115393585A (en) Moving target detection method based on super-pixel fusion network
CN110880183A (en) Image segmentation method, device and computer-readable storage medium
CN111145121B (en) Confidence term filter target tracking method for strengthening multi-feature fusion
CN110490877B (en) Target segmentation method for binocular stereo image based on Graph Cuts
CN110942420B (en) Method and device for eliminating image captions
CN113989263A (en) Image area saliency detection method based on super-pixel segmentation
CN108510525B (en) Template method for tracing, device, augmented reality system and storage medium
Singh et al. Performance Evaluation of the masking based Watershed Segmentation
KR101711929B1 (en) Method and apparatus for extraction of edge in image based on multi-color and multi-direction
Tajudin et al. Microbleeds detection using watershed-driven active contour
CN111476821B (en) Target tracking method based on online learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231024

Address after: 215000 Rooms 7 # 211 and 213, No. 200 Shenhu Road, Suzhou Industrial Park, Suzhou Area, China (Jiangsu) Pilot Free Trade Zone, Wuxi City, Jiangsu Province

Patentee after: Jiangsu Botu Electrical Engineering Co., Ltd.

Address before: 214000 No.1 qianou Road, Wuxi City, Jiangsu Province

Patentee before: Jiangsu Vocational College of Information Technology
