CN110288538A - Multi-feature-fusion method for detecting and eliminating moving-target shadows - Google Patents
- Publication number
- CN110288538A (application CN201910435299.6A)
- Authority
- CN
- China
- Prior art keywords
- shadow
- image
- moving target
- region
- shade
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications (all under G06T — image data processing or generation, in general)
- G06T5/77 — Retouching; inpainting; scratch removal
- G06T7/0002 — Inspection of images, e.g. flaw detection
- G06T7/12 — Edge-based segmentation
- G06T7/20 — Analysis of motion
- G06T7/507 — Depth or shape recovery from shading
- G06T7/579 — Depth or shape recovery from multiple images, from motion
- G06T2207/10016 — Video; image sequence
- G06T2207/20221 — Image fusion; image merging
Abstract
The invention discloses a multi-feature-fusion method for detecting and eliminating moving-target shadows. The method reads a video image sequence containing moving targets and establishes a background model of the image; separates the foreground from the background to obtain the moving-target foreground region; extracts the Color Name feature of the moving-target region to obtain a shadow candidate region based on the Color Name feature; extracts the edge feature of the moving-target foreground to obtain a shadow candidate region based on the edge feature; fuses the Color Name feature and the edge feature of the moving-target region to obtain the final shadow region; constructs a shadow-assessment submodel to evaluate the shadow condition of the current frame, eliminates the shadow according to the assessment result, and updates the shadow-elimination result; then reads the next frame and repeats the above process until all image frames have been read. The invention improves the accuracy of moving-target shadow elimination and the accuracy and real-time performance of subsequent moving-target detection and tracking in images.
Description
Technical field
The invention belongs to the field of moving-target detection, and in particular relates to a multi-feature-fusion method for detecting and eliminating moving-target shadows.
Background art
In recent years, with the development of image-processing technology, machine-vision-based target detection and tracking systems have been widely applied. Moving-target detection, one of the primary research topics in machine vision, draws on theory from several disciplines, including pattern recognition, automatic control, image processing and feature fusion. However, because a moving shadow shares the same motion characteristics as the moving target, background subtraction often misclassifies the shadow as part of the target when separating target from background. This distorts the target's shape and reduces the accuracy of subsequent detection and tracking. Accurately detecting and eliminating moving-target shadows is therefore a difficult point of current target-detection research.
Before a moving shadow can be detected, the moving target itself must first be detected accurately. Researchers worldwide continue to study moving-target detection algorithms in pursuit of better detection results. The mainstream algorithms are the optical-flow method, the frame-difference method and background subtraction. The optical-flow method, first proposed by Gibson in 1950, uses the temporal variation of pixels and the similarity between adjacent frames to obtain the motion information of the target between the previous and current frames. It is stable and accurate, but computationally expensive and difficult to run in real time. The frame-difference method thresholds the difference between corresponding pixels of adjacent frames, with the threshold set empirically, to extract the motion region; common variants are the two-frame and three-frame difference methods. Frame differencing is simple to compute and versatile, but tends to leave holes in the detection result and struggles to detect highly similar targets accurately. Background subtraction first builds a parametric model of the background pixels in the video image, then differences the current frame against the background frame to separate foreground from background. Its key step is constructing a suitable background model; common modeling methods include the codebook algorithm, the W4 model, the Gaussian-mixture-model method, non-parametric kernel density estimation and statistical averaging. Background subtraction offers high real-time performance but is susceptible to image noise and readily mistakes the target's shadow for the moving target. Evidently every single detection algorithm has shortcomings, and obtaining more accurate detection results requires combining several detection methods.
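As a concrete illustration of the two-frame differencing described above, the sketch below binarizes the absolute difference of two consecutive grayscale frames. The threshold value and the toy frames are illustrative assumptions, not values from the patent.

```python
import numpy as np

def two_frame_difference(prev, curr, threshold=30):
    """Binarize the absolute difference of two consecutive grayscale
    frames; pixels whose change exceeds the threshold count as moving.
    The threshold value here is illustrative, not from the patent."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy frames: a bright block shifts one column to the right.
f0 = np.zeros((4, 4), dtype=np.uint8)
f1 = np.zeros((4, 4), dtype=np.uint8)
f0[1:3, 0:2] = 200
f1[1:3, 1:3] = 200
mask = two_frame_difference(f0, f1)
```

Only the pixels the block vacated and the pixels it newly covers are marked; the overlap region changes too little to exceed the threshold, which is exactly the "hole" phenomenon the text mentions.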
Meanwhile, many scholars have studied moving shadows under varying illumination conditions and produced a substantial body of theory. Common shadow detection and elimination methods fall into two broad classes: model-based and feature-based. Model-based shadow-detection algorithms mainly use prior information such as illumination, target contour and area to build a mathematical model of the moving shadow, then match pixels against the shadow model to judge whether they belong to shadow. Such algorithms must be modeled for a specific scene, so they lack generality and cannot meet the demands of detecting moving-target shadows in complex scenes. Feature-based shadow-detection algorithms, by contrast, compare feature information between the current video image and the background image, and separate shadow from target by exploiting the differences among the shadow region, the background and the moving target in geometric, color, texture and physical features. These algorithms are less affected by the environment and the target object, and are the current mainstream for target shadow elimination. Among them, geometry-based algorithms detect moving-target shadows mainly from features such as target shape and area; color-based algorithms detect moving shadows in color spaces such as RGB, HSV, YUV and HSI; texture-based algorithms usually take gradient features and LBP features as texture descriptors for shadow detection. A shadow-detection method based on a single feature, however, is often limited: it can only meet the detection demand of targets with one particular characteristic, lacks generality, and struggles to detect and eliminate target shadows accurately against a complex background.
Summary of the invention
The object of the present invention is to provide a multi-feature-fusion method for detecting and eliminating moving-target shadows that detects the shadows present in moving targets accurately and in real time, with good robustness, high accuracy and high real-time performance.

The technical solution that realizes the aim of the invention is a multi-feature-fusion method for detecting and eliminating moving-target shadows, characterized by comprising the following steps:
Step 1, establish a background model: read the video image sequence and establish the background model of the image with the Gaussian-mixture-model method;

Step 2, obtain the moving-target region: separate the foreground and background of the image with the three-frame difference method to obtain the moving-target foreground region, filter the noise in the moving-target foreground region, and perform dilation and erosion operations, obtaining the moving-target foreground region S_f;

Step 3, extract the color feature: extract the Color Name feature of the moving-target foreground region S_f, obtaining eleven one-dimensional color features (black, blue, brown, grey, green, orange, purple, red, pink, white and yellow), then adaptively reduce the eleven-dimensional color feature to three dimensions by principal component analysis, obtaining the shadow candidate region S_CN based on the Color Name feature;

Step 4, extract the edge feature: extract the edge feature of the moving-target foreground region S_f with the Canny edge algorithm, obtaining the shadow candidate region S_E based on the edge feature;

Step 5, parallel fusion: fuse the Color Name shadow candidate region S_CN obtained in step 3 and the edge-feature shadow candidate region S_E obtained in step 4 in a parallel manner, obtaining the final shadow region S, i.e. S = S_CN ∪ S_E;

Step 6, establish the shadow-assessment submodel: combining the illumination intensity E_a, the shadow intensity E_b and the shadow factor Z, construct a shadow-assessment submodel and assess the final shadow region S in the image;

Step 7, eliminate the shadow: according to the shadow-region assessment result, decide whether to perform the shadow-elimination operation; if needed, fill the pixels of the current frame's moving-target foreground region S_f that lie in the final shadow region S with background pixels, realizing the shadow elimination of the current frame, and update the shadow-elimination result; otherwise, retain the shadow-elimination result of the previous frame;

Step 8, read the next frame and repeat steps 3 to 7 until the image sequence has been read completely.
Compared with the prior art, the invention has these notable advantages: (1) it combines the Gaussian-mixture-model method with the three-frame difference method and sets the binarization threshold with the Otsu algorithm, which helps extract a more accurate background template and foreground motion region and improves the accuracy of foreground-background separation; (2) it fuses the Color Name feature with the edge feature for shadow detection of the moving target, overcoming the limitations of a single feature and helping to resolve same-color interference and uncertain illumination intensity in moving-target detection; (3) it fuses the Color Name feature with the edge feature using a parallel strategy, improving the comprehensiveness of shadow detection and reducing the chance of missed detections, so that shadow regions are detected more accurately; (4) alongside shadow detection, it constructs a shadow-assessment submodel to evaluate the shadow condition present in each frame and updates the shadow-elimination result in time, giving high real-time performance.
Brief description of the drawings

Fig. 1 is the flow diagram of the multi-feature-fusion moving-target shadow detection and elimination method of the invention.

Fig. 2 is the flow diagram of the Gaussian-mixture-model method in the invention.

Fig. 3 is the flow diagram of the three-frame difference method in the invention.

Fig. 4 is the flow diagram of combining the Gaussian-mixture-model method with the three-frame difference method in the invention.

Fig. 5 is the flow diagram of shadow assessment by the shadow-assessment submodel in the invention.

Fig. 6 shows four groups of simulation results in an embodiment of the invention, for the original images of frames 10, 25, 66 and 105 and their processing results; in each group, (a), (b), (c) and (d) are, in order, the original image of the frame, the background image, the foreground image (shadow-detection result) and the shadow-elimination result.
Specific embodiment
The present invention will be further described in detail below with reference to the accompanying drawings and specific embodiments.
A laboratory video stream sequence with unstable illumination and obstacle occlusion was chosen, and the video image sequence was put through a series of processing steps on the Matlab 2017a and Visual Studio 2013 platforms.
As shown in Fig. 1, the multi-feature-fusion moving-target shadow detection and elimination method of the invention comprises the following steps:

Step 1, establish a background model: read the video image sequence, first apply filtering, denoising and similar operations to the images, then establish the background model of the image with the Gaussian-mixture-model method. With reference to Fig. 2, the specific steps are as follows:
Step 1.1, modeling pretreatment: define K Gaussian functions to represent the value of each pixel, and assume every sample point obeys a Gaussian mixture distribution; then a single sample point x_i obeys the mixture probability density

p(x_i) = Σ_{j=1}^{K} w_{j,i} · η(x_i, μ_{j,i}, Y_{j,i}),  with Y_{j,i} = σ_{j,i}² I

where p(x_i) is the probability of the Gaussian mixture distribution obeyed by the single sample point x_i; η(x_i, μ_{j,i}, Y_{j,i}) is the probability density function of the j-th Gaussian distribution at time i; w_{j,i} is the weight of the j-th Gaussian distribution at time i; μ_{j,i} is its mean; Y_{j,i} is its covariance; x_i is the pixel value of the sample point at time i; σ_{j,i}² is the variance of the j-th Gaussian distribution at time i; and I is the three-dimensional identity matrix;
Step 1.2, match the Gaussian distribution model: let each new pixel value be A_i, and match A_i against the K Gaussian models until a distribution model matching the new pixel value is found, i.e. one for which

|A_i − μ_{j,i−1}| ≤ 2.5 σ_{j,i−1}

where μ_{j,i−1} is the mean of the j-th Gaussian distribution at time i−1 and σ_{j,i−1} is its standard deviation.

If the matched mode satisfies the background requirement, the pixel belongs to the background; otherwise it belongs to the foreground.
Step 1.3, update the weights: update the weight of each mode according to

w_{k,i} = (1 − θ) · w_{k,i−1} + θ · Q_{k,i}

where w_{k,i} is the weight of the k-th Gaussian distribution at time i, w_{k,i−1} is its weight at time i−1, θ is the learning rate, and Q_{k,i} indicates whether the mode of the k-th Gaussian distribution at time i matches: Q_{k,i} = 1 if the mode matches and Q_{k,i} = 0 otherwise; the weights of all modes are then normalized again:
Step 1.4, generate the Gaussian model: for unmatched modes, the mean and standard deviation remain unchanged; the parameters of the matched mode are updated according to

ρ = θ · η(A_i | μ_k, σ_k)
μ_i = (1 − ρ) μ_{i−1} + ρ · A_i
σ_i² = (1 − ρ) σ_{i−1}² + ρ · (A_i − μ_i)ᵀ(A_i − μ_i)

where ρ is the update rate, A_i is the new pixel value at time i, μ_k and σ_k are the mean and standard deviation of the k-th Gaussian distribution, η(A_i | μ_k, σ_k) is the probability density of the k-th Gaussian distribution at time i, μ_{i−1} and μ_i are the means at times i−1 and i, and σ_{i−1}² and σ_i² are the variances at times i−1 and i.
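The per-pixel mixture update of steps 1.1-1.4 can be sketched for a single scalar pixel as follows. K = 2 components, the learning rate θ and all numeric values are illustrative assumptions; the full method applies this update at every pixel of every frame.

```python
import numpy as np

def gmm_update(A, w, mu, sigma, theta=0.05):
    """One update of a per-pixel mixture-of-Gaussians background model
    (scalar pixel value A, K components). Returns updated (w, mu, sigma)
    and whether A matched an existing mode (i.e. is a background point).
    theta and the demo values below are illustrative assumptions."""
    matched = None
    for k in range(len(w)):
        if abs(A - mu[k]) <= 2.5 * sigma[k]:   # step 1.2 match test
            matched = k
            break
    Q = np.zeros(len(w))
    if matched is not None:
        Q[matched] = 1.0
    w = (1 - theta) * w + theta * Q            # step 1.3 weight update
    w = w / w.sum()                            # renormalize the weights
    if matched is not None:
        k = matched
        # step 1.4: rho = theta * eta(A | mu_k, sigma_k), then update
        # the matched mode's mean first and its variance with the new mean.
        eta = np.exp(-0.5 * ((A - mu[k]) / sigma[k]) ** 2) / (
            sigma[k] * np.sqrt(2 * np.pi))
        rho = theta * eta
        mu[k] = (1 - rho) * mu[k] + rho * A
        sigma[k] = np.sqrt((1 - rho) * sigma[k] ** 2 + rho * (A - mu[k]) ** 2)
    return w, mu, sigma, matched is not None

# One pixel observed near the first mode's mean: it should be background.
w, mu, sigma, is_bg = gmm_update(
    105.0,
    np.array([0.5, 0.5]),
    np.array([100.0, 200.0]),
    np.array([10.0, 10.0]),
)
```

The matched mode's weight grows at the expense of the other modes, and its mean drifts toward the observed value, which is how the model absorbs gradual background change.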
Step 2, obtain the moving-target region: separate the foreground and background of the image with the three-frame difference method to obtain the moving-target foreground region, filter the noise in the moving-target foreground region, and perform dilation and erosion operations, obtaining a clean moving-target foreground region S_f. As shown in Figs. 3 and 4, the specific steps are as follows:
Step 2.1, for the three consecutive frames F_{i−1}(x, y), F_i(x, y), F_{i+1}(x, y), first difference each pair of adjacent frames, then binarize with a threshold set by the Otsu algorithm, obtaining two difference images N_i(x, y) and N_{i+1}(x, y);

Step 2.2, AND the two frame-difference images obtained in step 2.1, obtaining the three-frame difference image G_i(x, y) of the moving target, i.e.

G_i(x, y) = N_i(x, y) ∩ N_{i+1}(x, y)
Step 2.3, apply morphological operations to the three-frame difference image to remove image noise, then, from the differences between the three-frame difference images, detect the relative motion region Φ_s of the image target, the background region Φ_b covered in the previous frame, and the background region Φ_bc covered in the current frame;

Step 2.4, match the pixels inside the motion region Φ_s against their first N Gaussian distributions with the pattern-matching formula, and further determine the foreground and background of the image: a pixel that matches a Gaussian model is a background point, and a pixel that matches none of the Gaussian models is a foreground point; repeat this operation and set the set of all foreground points as the moving-target foreground region S_f.
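Steps 2.1-2.2 (adjacent-frame differencing, Otsu binarization, and the AND combination) can be sketched as follows; the toy frames are illustrative, and the morphological cleanup of step 2.3 is omitted.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method on an 8-bit image: choose the threshold that
    maximizes the between-class variance of the histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                   # class-0 probability
    mu = np.cumsum(prob * np.arange(256))     # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b2)))

def three_frame_difference(f_prev, f_curr, f_next):
    """Steps 2.1-2.2: binarize the two adjacent-frame differences with
    Otsu thresholds, then AND them into the three-frame difference mask."""
    d1 = np.abs(f_curr.astype(np.int16) - f_prev.astype(np.int16)).astype(np.uint8)
    d2 = np.abs(f_next.astype(np.int16) - f_curr.astype(np.int16)).astype(np.uint8)
    n1 = (d1 > otsu_threshold(d1)).astype(np.uint8)
    n2 = (d2 > otsu_threshold(d2)).astype(np.uint8)
    return n1 & n2

# Toy sequence: a 2x2 block appears, then shifts one column right.
f_prev = np.zeros((8, 8), dtype=np.uint8)
f_curr = np.zeros((8, 8), dtype=np.uint8)
f_next = np.zeros((8, 8), dtype=np.uint8)
f_curr[2:4, 2:4] = 200
f_next[2:4, 3:5] = 200
mask = three_frame_difference(f_prev, f_curr, f_next)
```

The AND keeps only pixels that change in both frame pairs, which localizes the target in the middle frame better than either two-frame difference alone.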
Step 3, extract the color feature: extract the Color Name feature of the moving-target foreground region S_f, obtaining eleven one-dimensional color features (black, blue, brown, grey, green, orange, purple, red, pink, white and yellow), then adaptively reduce the eleven-dimensional color feature to three dimensions by principal component analysis, obtaining the shadow candidate region S_CN based on the Color Name feature. Specifically:
Step 3.1, use a color mapping to convert the RGB image into the eleven one-dimensional color-probability representation (black, blue, brown, grey, green, orange, purple, red, pink, white and yellow), then apply a normalization operation, obtaining the eleven-dimensional color representation of the image;
Step 3.2, to reduce computation time, apply principal component analysis to adaptively reduce the dimensionality of this color space, i.e. lower its dimension further while preserving the basic color features of the image. The basic principle of the principal-component analysis is to first compute the least-cost function ε_i_cost, then use it to find the U_1 × U_2 projection matrix L_i of the current frame with orthogonal column vectors, which must satisfy L_iᵀ L_i = I. In the cost function, β_i is a weighting function, C × V is the neighborhood matrix of the foreground motion region, the eleven-dimensional color-feature representation of the image is evaluated at (c, v) ∈ {0, …, C−1} × {0, …, V−1} with 0 < j < i, and the weight of each vector of L_i is determined by the dimensionality-reduction coefficients of the j-th frame;
Step 3.3, linearly project the foreground motion region with L_i, reducing the U_1-dimensional Color Name color feature to a U_2-dimensional color feature, where U_1 = 11 and U_2 = 3; the result is the three-dimensional color-feature representation of the image.
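The dimensionality reduction of steps 3.2-3.3 can be illustrated with plain PCA. Note this substitutes standard SVD-based PCA for the patent's adaptive, cost-weighted projection, and the random 11-D color-name probability vectors are synthetic stand-ins.

```python
import numpy as np

def pca_reduce(features, out_dim=3):
    """Reduce per-pixel 11-D Color Name probability vectors to out_dim
    dimensions with plain PCA (SVD of the centered data). The patent's
    adaptive, cost-weighted projection is replaced here by standard PCA
    purely for illustration.
    features: (n_pixels, 11) array -> (n_pixels, out_dim) array."""
    centered = features - features.mean(axis=0)
    # Rows of Vt are the principal directions; keep the first out_dim.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:out_dim].T

rng = np.random.default_rng(0)
cn = rng.random((100, 11))
cn = cn / cn.sum(axis=1, keepdims=True)   # rows sum to 1, like probabilities
reduced = pca_reduce(cn)
```

Because the singular values come out in descending order, the three retained axes carry the most color variance, which is the sense in which "basic color features" are preserved.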
Step 4, extract the edge feature: extract the edge feature of the moving-target foreground region S_f with the Canny edge algorithm, obtaining the shadow candidate region S_E based on the edge feature. Specifically:

Step 4.1, choose a 3 × 3 matrix A, perform an erosion operation on the moving-target foreground region S_f, subtract the eroded region from S_f, and then apply the Canny edge algorithm to detect the edge contour in the foreground region, denoted E_1, i.e.

E_1 = Canny(S_f − (S_f ⊖ A))

where S_f is the moving-target foreground region, ⊖ is the erosion operator, and E_1 is the edge contour obtained by the Canny edge-detection algorithm;
Step 4.2, extract the contour enclosed by the edge E_1, i.e. the contour of the moving target present in the region, denoted E_2;

Step 4.3, fill the interior of the E_2 region with pixels along the horizontal and vertical directions respectively, and denote the result E_3, where VK_i(x, y) = 1 indicates that the E_2 region is filled horizontally first and then vertically, and HK_i(x, y) = 1 indicates that the E_2 region is filled vertically first and then horizontally;
Step 4.4, remove the pixel-filled moving-target region E_3 from the moving-target foreground region S_f, obtaining the shadow candidate region S_E based on the edge feature.
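Step 4.1's erode-and-subtract operation can be sketched without OpenCV as follows. As an illustrative simplification, the patent's Canny detector is replaced by the boundary ring itself, which for a clean binary mask coincides with the foreground contour.

```python
import numpy as np

def erode3x3(mask):
    """Binary erosion with a 3x3 square structuring element:
    a pixel survives only if its whole 3x3 neighborhood is foreground."""
    padded = np.pad(mask, 1, mode="constant")
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy : 1 + dy + mask.shape[0],
                          1 + dx : 1 + dx + mask.shape[1]]
    return out

def boundary_ring(mask):
    """Step 4.1 sketch: subtract the eroded region from the foreground
    mask, leaving the one-pixel boundary ring E1 on which the patent
    then runs the Canny detector."""
    return mask & ~erode3x3(mask)

fg = np.zeros((7, 7), dtype=bool)
fg[1:6, 1:6] = True          # a 5x5 foreground blob
ring = boundary_ring(fg)
```

For the 5x5 blob, erosion leaves its 3x3 core, so the ring is exactly the 16 perimeter pixels.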
Step 5, parallel fusion: fuse the Color Name shadow candidate region S_CN obtained in step 3 and the edge-feature shadow candidate region S_E obtained in step 4 in a parallel manner, obtaining the final shadow region S = S_CN ∪ S_E.
Step 6, establish the shadow-assessment submodel: combining the illumination intensity E_a, the shadow intensity E_b and the shadow factor Z, construct a shadow-assessment submodel and assess the final shadow region S in the image. As shown in Fig. 5, the specific operations are as follows:

Step 6.1, define the illumination intensity E_a, the shadow intensity E_b and the shadow factor Z, where j ∈ {a, b} indicates whether the value of j is a or b (a denotes lit, b denotes shadow), P_a and P_b are the lit region and the shadow region respectively, n_a is the number of pixels of the lit region affected by the illumination intensity E_a, n_b is the number of pixels of the shadow region, and e_i is the energy intensity of a pixel;
Step 6.2, set the intensity threshold z_1 and the shadow-factor threshold z_2 to assess the shadow condition of the current frame; according to the simulation experiments, set the intensity threshold z_1 = 300 and the shadow-factor threshold z_2 = 0.25;

Step 6.3, judge whether the illumination intensity in the image reaches the threshold: if the illumination intensity is below z_1, the shadow in the image is indistinct and the shadow state of the image is maintained; otherwise there is an obvious shadow and the shadow factor of the image must be computed further;

Step 6.4, if the shadow factor is below z_2, the shadow in the current frame is not eliminated and the shadow state is maintained; otherwise, shadow elimination is performed.
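A sketch of the assessment logic of steps 6.2-6.4 follows. The exact expressions for E_a, E_b and Z are not legible in this text, so, as stated assumptions, E_a is taken here as the mean pixel energy of the lit region and Z as the shadow pixel fraction; only the thresholds z_1 = 300 and z_2 = 0.25 follow the patent's simulation settings.

```python
def assess_shadow(lit_energies, shadow_energies, z1=300.0, z2=0.25):
    """Step 6 sketch: decide whether shadow elimination should run.
    ASSUMPTIONS (the patent's formulas are not reproduced in this text):
    E_a = mean pixel energy over the lit region, and the shadow factor
    Z = fraction of assessed pixels that lie in shadow.
    Thresholds z1=300, z2=0.25 are the patent's simulation settings."""
    n_a, n_b = len(lit_energies), len(shadow_energies)
    E_a = sum(lit_energies) / n_a if n_a else 0.0
    Z = n_b / (n_a + n_b) if (n_a + n_b) else 0.0
    if E_a < z1:      # illumination too weak: shadow indistinct, keep state
        return False
    return Z >= z2    # eliminate only when the shadow area is significant
```

Under these assumptions a bright frame with a sizeable shadow region triggers elimination, while a dim frame, or one with only a sliver of shadow, leaves the previous result in place.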
Step 7, eliminate the shadow: according to the shadow-region assessment result of step 6, decide whether to perform the shadow-elimination operation; if needed, fill the pixels of the current frame's moving-target foreground region S_f that lie in the final shadow region S with background pixels, realizing the shadow elimination of the current frame, and update the shadow-elimination result; otherwise, retain the shadow-elimination result of the previous frame.

Step 8, read the next frame and repeat steps 3 to 7 until the image sequence has been read completely.
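Step 7's background fill can be sketched as follows: pixels of the current frame inside the final shadow region S are replaced with the corresponding background-model pixels (the toy arrays are for illustration only).

```python
import numpy as np

def eliminate_shadow(frame, background, shadow_mask):
    """Step 7: replace the current frame's pixels inside the final
    shadow region S with the corresponding background-model pixels."""
    out = frame.copy()
    out[shadow_mask] = background[shadow_mask]
    return out

# Toy data: a dark frame, a brighter background, a shadow in one corner.
frame = np.full((4, 4), 50, dtype=np.uint8)
background = np.full((4, 4), 120, dtype=np.uint8)
S = np.zeros((4, 4), dtype=bool)
S[2:4, 2:4] = True
result = eliminate_shadow(frame, background, S)
```

Working on a copy keeps the original frame intact, so the previous frame's elimination result can be retained when step 6 decides against elimination.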
The simulation results of Fig. 6 show, in order, the original images of frames 10, 25, 66 and 105 and their processing results; in each group, (a), (b), (c) and (d) are the original image of the frame, the background image, the foreground image (shadow-detection result) and the shadow-elimination result. As can be seen, the multi-feature-fusion moving-target shadow detection and elimination method of the invention overcomes the limitations of single-feature detection, solves the moving-shadow problem of moving targets in images caused by factors such as uncertain illumination, and improves the accuracy and real-time performance of moving-target shadow detection and elimination.
Claims (6)
1. A multi-feature-fusion method for detecting and eliminating moving-target shadows, characterized by comprising the following steps:

Step 1, establish a background model: read the video image sequence and establish the background model of the image with the Gaussian-mixture-model method;

Step 2, obtain the moving-target region: separate the foreground and background of the image with the three-frame difference method to obtain the moving-target foreground region, filter the noise in the moving-target foreground region, and perform dilation and erosion operations, obtaining the moving-target foreground region S_f;

Step 3, extract the color feature: extract the Color Name feature of the moving-target foreground region S_f, obtaining eleven one-dimensional color features (black, blue, brown, grey, green, orange, purple, red, pink, white and yellow), then adaptively reduce the eleven-dimensional color feature to three dimensions by principal component analysis, obtaining the shadow candidate region S_CN based on the Color Name feature;

Step 4, extract the edge feature: extract the edge feature of the moving-target foreground region S_f with the Canny edge algorithm, obtaining the shadow candidate region S_E based on the edge feature;

Step 5, parallel fusion: fuse the Color Name shadow candidate region S_CN obtained in step 3 and the edge-feature shadow candidate region S_E obtained in step 4 in a parallel manner, obtaining the final shadow region S, i.e. S = S_CN ∪ S_E;

Step 6, establish the shadow-assessment submodel: combining the illumination intensity E_a, the shadow intensity E_b and the shadow factor Z, construct a shadow-assessment submodel and assess the final shadow region S in the image;

Step 7, eliminate the shadow: according to the shadow-region assessment result, decide whether to perform the shadow-elimination operation; if needed, fill the pixels of the current frame's moving-target foreground region S_f that lie in the final shadow region S with background pixels, realizing the shadow elimination of the current frame, and update the shadow-elimination result; otherwise, retain the shadow-elimination result of the previous frame;

Step 8, read the next frame and repeat steps 3 to 7 until the image sequence has been read completely.
2. the moving target shadow Detection and removing method of multiple features fusion according to claim 1, which is characterized in that step
Background model is established described in rapid 1, specific as follows:
Step 1.1, modeling pretreatment: K Gaussian function is defined to indicate the pixel value of each pixel, sets each sampled point
Gaussian mixtures are all obeyed, then single sampled point xiThe Gaussian mixtures probability density function of obedience:
YJ, i=σJ, i 2I
Wherein, p (xi) it is single sampled point xiThe probability of the Gaussian mixtures of obedience, η (xi, μJ, i, YJ, i) it is the i-th moment jth
The probability density function of a Gaussian Profile, wJ, iFor the weight of j-th of Gaussian Profile of the i-th moment, μJ, iIt is j-th high for the i-th moment
The mean value of this distribution, YJ, iFor the covariance of j-th of Gaussian Profile of the i-th moment, xiIt is the pixel value at the i-th moment of sampled point, σJ, i 2
For the variance of j-th of Gaussian Profile of the i-th moment, I is three-dimensional unit matrix;
Step 1.2, matching the Gaussian distribution models: let each new pixel value be A_i, and match A_i against the K Gaussian models until
a model matching the new pixel value is found, i.e. until:

|A_i − μ_{j,i−1}| ≤ 2.5 σ_{j,i−1}

where μ_{j,i−1} is the mean of the j-th Gaussian distribution at time i−1 and σ_{j,i−1} is the standard deviation of the j-th Gaussian
distribution at time i−1;
if the matched mode satisfies the background requirement, the pixel belongs to the background; otherwise it belongs to the foreground;
Step 1.3, updating the weights: the weight of each mode is updated according to the following formula:

w_{k,i} = (1 − θ) · w_{k,i−1} + θ · Q_{k,i}

where w_{k,i} is the weight of the k-th Gaussian distribution at time i, w_{k,i−1} is the weight of the k-th Gaussian distribution at time i−1, θ
is the learning rate, and Q_{k,i} indicates whether the mode of the k-th Gaussian distribution matched at time i, its value depending on the match result:
if the mode matches, Q_{k,i} = 1; otherwise Q_{k,i} = 0; the weights of all modes are then renormalized;
Step 1.4, generating the Gaussian models: for unmatched modes, the mean and standard deviation of the sampled point remain unchanged; the parameters of the matched
mode are updated according to the following formulas:

ρ = θ · η(A_i | μ_k, σ_k)
μ_i = (1 − ρ) · μ_{i−1} + ρ · A_i
σ_i² = (1 − ρ) · σ_{i−1}² + ρ · (A_i − μ_i)ᵀ(A_i − μ_i)

where ρ is the update rate, A_i is the new pixel value at time i, μ_k and σ_k are respectively the mean and standard deviation
of the k-th Gaussian distribution, η(A_i | μ_k, σ_k) is the probability density of the k-th Gaussian distribution at time i, μ_{i−1} and μ_i are respectively the means at times i−1 and i,
and σ_{i−1}² and σ_i² are respectively the variances at times i−1 and i.
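The per-pixel mixture-of-Gaussians update of claim 2 can be sketched as follows, simplified to a one-dimensional (grey-level) pixel; the constants and function name are illustrative, not from the patent:

```python
import numpy as np

K = 3           # number of Gaussians per pixel
THETA = 0.05    # learning rate θ

def update_pixel(x, w, mu, sigma2):
    """One mixture-of-Gaussians update for a scalar pixel value x.

    w, mu, sigma2: length-K arrays of weights, means and variances.
    Returns updated parameters and whether x matched a mode (background).
    """
    match = np.abs(x - mu) <= 2.5 * np.sqrt(sigma2)   # |A_i - mu| <= 2.5 sigma
    q = match.astype(float)                           # Q_{k,i}
    w = (1 - THETA) * w + THETA * q                   # weight update
    w /= w.sum()                                      # renormalize
    if match.any():
        k = np.argmax(match)                          # first matching mode
        # rho = theta * eta(A_i | mu_k, sigma_k), a Gaussian density
        eta = np.exp(-0.5 * (x - mu[k]) ** 2 / sigma2[k]) / np.sqrt(2 * np.pi * sigma2[k])
        rho = THETA * eta
        mu[k] = (1 - rho) * mu[k] + rho * x           # mean update
        sigma2[k] = (1 - rho) * sigma2[k] + rho * (x - mu[k]) ** 2  # variance update
    return w, mu, sigma2, match.any()

w = np.array([0.5, 0.3, 0.2])
mu = np.array([100.0, 150.0, 30.0])
sigma2 = np.array([25.0, 25.0, 25.0])
w, mu, sigma2, is_bg = update_pixel(102.0, w, mu, sigma2)
```

In practice OpenCV's `cv2.createBackgroundSubtractorMOG2` provides a tuned implementation of this family of background models.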
3. The multi-feature-fusion moving-target shadow detection and removal method according to claim 1, characterized in that the acquiring of the moving-target region described in step 2 is specifically as follows:
Step 2.1, for three consecutive frames F_{i−1}(x, y), F_i(x, y), F_{i+1}(x, y), first perform a difference operation on each pair of adjacent
frames, then binarize with a threshold set by the Otsu algorithm, obtaining the two difference images N_i(x, y) and N_{i+1}(x, y);
Step 2.2, apply an AND operation to the two frame-difference images obtained in step 2.1 to get the three-frame difference image
G_i(x, y) of the moving target, i.e.:

G_i(x, y) = N_i(x, y) ∩ N_{i+1}(x, y)
Step 2.3, apply morphological operations to the three-frame difference image to remove image noise; then, from the differences among the three frames,
detect the relative motion region Φ_s of the image objects, the background region Φ_b that was covered in the previous frame, and the background region Φ_bc that is
covered in the current frame;
Step 2.4, match the pixels inside the motion region Φ_s against the first N Gaussian distributions using the pattern-matching formula, and further
determine the foreground and background of the image: if a pixel matches a Gaussian model, it is a background point;
if a pixel matches none of the Gaussian models, it is a foreground point; repeat this operation, and take the set of all foreground points
as the moving-target foreground region S_f.
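The three-frame difference of steps 2.1 and 2.2 reduces to two absolute differences, a binarization, and an AND. The sketch below uses a fixed threshold where the claim selects one per frame with the Otsu algorithm (e.g. `cv2.threshold(..., cv2.THRESH_OTSU)` in OpenCV); names are illustrative:

```python
import numpy as np

def three_frame_diff(f_prev, f_cur, f_next, thresh=30):
    """Three-frame difference; the patent picks `thresh` adaptively via Otsu.

    Inputs are HxW uint8 grey images; returns a boolean motion mask G_i.
    """
    n1 = np.abs(f_cur.astype(int) - f_prev.astype(int)) > thresh   # N_i
    n2 = np.abs(f_next.astype(int) - f_cur.astype(int)) > thresh   # N_{i+1}
    return n1 & n2                                                 # G_i = N_i AND N_{i+1}

# toy sequence: a bright one-pixel "object" moving across a dark background
a = np.zeros((3, 3), np.uint8); a[1, 0] = 255
b = np.zeros((3, 3), np.uint8); b[1, 1] = 255
c = np.zeros((3, 3), np.uint8); c[1, 2] = 255
mask = three_frame_diff(a, b, c)
```

Only the object's current position survives the AND, which is exactly what makes three-frame differencing less prone to ghosting than a single frame difference.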
4. The multi-feature-fusion moving-target shadow detection and removal method according to claim 1, characterized in that the extracting of the color feature described in step 3 is specifically as follows:
Step 3.1, use a color-name mapping to convert the RGB image into an eleven-dimensional color-probability representation over black, blue, brown, grey, green, orange, purple, red, pink, white and yellow, then apply a normalization operation to obtain the eleven-dimensional color representation of the image;
Step 3.2, perform adaptive dimensionality reduction on this color space by principal component analysis: first compute the minimum cost function
ε_i^cost, then use this function to find the U_1×U_2 projection matrix L_i of the current frame with orthogonal column vectors; L_i should satisfy L_iᵀL_i = I, and the minimum
cost function ε_i^cost is computed as follows:

ε_i^cost = (1/(C·V)) Σ_{(c,v)} ‖x̂_i^{(c,v)} − L_i L_iᵀ x̂_i^{(c,v)}‖² + Σ_{0<j<i} β_j Σ_k λ_j^{(k)} ‖l_i^{(k)} − l_j^{(k)}‖²

where β_j is the weighting function, C×V is the neighborhood matrix of the foreground motion region, x̂_i^{(c,v)} is the eleven-dimensional color feature
representation of the image, (c, v) ∈ {0, …, C−1}×{0, …, V−1}, 0 < j < i, l_i^{(k)} is each column vector of L_i, and the weights are determined by the
dimensionality-reduction coefficients λ_j^{(k)} of the j-th frame;
Step 3.3, perform linear projection processing on the foreground motion region according to the formula x̂′_i = L_iᵀ x̂_i, reducing the U_1-dimensional
Color Name feature to a U_2-dimensional color feature, where U_1 = 11, U_2 = 3, and x̂′_i is the three-dimensional color
feature representation of the image.
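The projection of step 3.3 is an ordinary orthonormal linear map. The sketch below substitutes plain PCA (via SVD) for the patent's adaptive cost function, so it only illustrates the 11-to-3 reduction, not the temporal smoothing term; all names are illustrative:

```python
import numpy as np

def project_colors(X, U2=3):
    """Reduce U1=11-dimensional Color Name vectors to U2 dimensions.

    X: (n_pixels, 11) matrix of normalized color-name probabilities.
    Returns the projected features and the orthonormal projection L_i.
    """
    Xc = X - X.mean(axis=0)
    # right singular vectors give an orthonormal basis of the color space
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    L = Vt[:U2].T                        # 11 x 3 matrix with L^T L = I
    return X @ L, L                      # x_hat' = L^T x_hat, applied row-wise

rng = np.random.default_rng(0)
X = rng.random((100, 11))
X /= X.sum(axis=1, keepdims=True)        # normalized color probabilities
Xp, L = project_colors(X)
```

The orthonormality check `L.T @ L = I` mirrors the constraint on L_i in step 3.2.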
5. The multi-feature-fusion moving-target shadow detection and removal method according to claim 1, characterized in that the extracting of the edge feature described in step 4 is specifically as follows:
Step 4.1, select a 3×3 structuring element A, perform an erosion operation on the moving-target foreground region S_f, subtract the eroded region
from S_f, and then use the Canny edge algorithm to detect the edge contour of the foreground region, denoted E_1, i.e.:

E_1 = Canny(S_f − (S_f ⊖ A))

where S_f is the moving-target foreground region, ⊖ is the erosion operator, and E_1 is the edge contour obtained by the Canny edge detection
algorithm;
Step 4.2, extract the edge contours enclosed inside E_1, i.e. the contour of the moving target present in the region, denoted E_2;
Step 4.3, fill the interior of the E_2 region with pixels along the horizontal and vertical directions respectively, and denote the result E_3, i.e.:

E_3(x, y) = VK_i(x, y) ∩ HK_i(x, y)

where VK_i(x, y) = 1 indicates the E_2 region filled horizontally first and then vertically, and HK_i(x, y) = 1 indicates the E_2 region filled
vertically first and then horizontally;
Step 4.4, remove the pixel-filled moving-target region E_3 from the moving-target foreground region S_f, obtaining the shadow candidate region S_E based on the edge feature.
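A rough sketch of claim 5's idea, with assumptions stated in the comments: the erosion boundary S_f − (S_f ⊖ A) stands in for the Canny contours E_1/E_2, and span filling along rows and columns approximates the horizontal/vertical filling of E_3; all function names are illustrative:

```python
import numpy as np

def erode3x3(mask):
    """Binary erosion with a 3x3 structuring element (zero-padded borders)."""
    p = np.pad(mask, 1)
    out = np.ones_like(mask, bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + mask.shape[0], 1 + dx:1 + dx + mask.shape[1]]
    return out

def edge_shadow_candidate(sf):
    """Approximate S_E = S_f - E_3 without Canny: the erosion boundary plays
    the role of the edge contour, and filling between the outermost boundary
    pixels of each row/column approximates the filled target region E_3."""
    boundary = sf & ~erode3x3(sf)                 # S_f minus its erosion

    def fill_rows(m):
        out = np.zeros_like(m)
        for r in range(m.shape[0]):
            idx = np.flatnonzero(m[r])
            if idx.size >= 2:
                out[r, idx[0]:idx[-1] + 1] = True  # fill between outermost edges
        return out

    e3 = fill_rows(boundary) & fill_rows(boundary.T).T  # horizontal AND vertical fill
    return sf & ~e3                                     # shadow candidate S_E

# a 5x5 solid "target" blob with a thin 1-pixel "shadow" strip attached
sf = np.zeros((7, 10), bool)
sf[1:6, 1:6] = True      # target body survives both fills
sf[5, 6:9] = True        # thin strip fails the vertical fill -> shadow candidate
s_e = edge_shadow_candidate(sf)
```

The solid body is recovered as E_3 while the thin attached strip, which has no vertical extent, ends up in the shadow candidate region, which is the intuition behind the edge-feature test.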
6. The multi-feature-fusion moving-target shadow detection and removal method according to claim 1, characterized in that the establishing of the shadow assessment sub-model described in step 6 is specifically as follows:
Step 6.1, define the illumination intensity E_a, the shadow intensity E_b and the shadow factor Z:

E_j = (1/n_j) Σ_{i∈P_j} e_i,  j ∈ {a, b};    Z = E_a / E_b

where j ∈ {a, b} indicates that j takes the value a or b (a denotes the lit region, b the shadow region), P_a and P_b are respectively the lit region and the
shadow region, n_a is the number of pixels of the lit region affected by the illumination intensity E_a, n_b is the number of pixels of the shadow region, and e_i is
the energy intensity of a pixel;
Step 6.2, set the illumination-intensity threshold z_1 and the shadow-factor threshold z_2, and assess the shadow condition in the current frame image;
Step 6.3, judge whether the illumination intensity in the image reaches the threshold: if the light intensity is less than z_1, keep the shadow condition
of the image; otherwise, further compute the shadow factor of the image;
Step 6.4, if the shadow factor is lower than z_2, do not eliminate the shadows in the current frame image and keep the shadow condition; otherwise,
perform shadow removal.
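Claim 6's decision rule can be sketched directly. The threshold values and the definition of Z as the lit-to-shadow energy ratio are assumptions, chosen so that a pronounced shadow under strong illumination triggers removal while weak light leaves shadows alone:

```python
import numpy as np

Z1 = 120.0   # illumination threshold z_1 (illustrative value)
Z2 = 1.5     # shadow-factor threshold z_2 (illustrative value)

def should_remove_shadow(energy, lit_mask, shadow_mask):
    """Decide whether to run shadow removal (steps 6.2-6.4 of claim 6)."""
    e_a = energy[lit_mask].mean()        # illumination intensity E_a
    e_b = energy[shadow_mask].mean()     # shadow intensity E_b
    z = e_a / e_b                        # shadow factor Z (assumed definition)
    if e_a < Z1:                         # weak illumination: keep the shadows
        return False
    return z >= Z2                       # strong light, dark shadow: remove it

energy = np.array([[200.0, 200.0], [50.0, 50.0]])   # bright lit area, dark shadow
lit = np.array([[True, True], [False, False]])
decision = should_remove_shadow(energy, lit, ~lit)
```

Under dimmer lighting the same function declines to remove the shadow, matching step 6.3's early exit.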
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910435299.6A CN110288538A (en) | 2019-05-23 | 2019-05-23 | A kind of the moving target shadow Detection and removing method of multiple features fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110288538A true CN110288538A (en) | 2019-09-27 |
Family
ID=68002280
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910435299.6A Pending CN110288538A (en) | 2019-05-23 | 2019-05-23 | A kind of the moving target shadow Detection and removing method of multiple features fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110288538A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104299210A (en) * | 2014-09-23 | 2015-01-21 | 同济大学 | Vehicle shadow eliminating method based on multi-feature fusion |
CN107220949A (en) * | 2017-05-27 | 2017-09-29 | 安徽大学 | The self adaptive elimination method of moving vehicle shade in highway monitoring video |
CN108985375A (en) * | 2018-07-14 | 2018-12-11 | 李军 | Consider the multiple features fusion tracking of particle weight spatial distribution |
Non-Patent Citations (1)
Title |
---|
MARTIN DANELLJAN等: "《Adaptive Color Attributes for Real-Time Visual Tracking》", 《CVF2014》 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111556278A (en) * | 2020-05-21 | 2020-08-18 | 腾讯科技(深圳)有限公司 | Video processing method, video display device and storage medium |
CN111950523A (en) * | 2020-08-28 | 2020-11-17 | 珠海大横琴科技发展有限公司 | Ship detection optimization method and device based on aerial photography, electronic equipment and medium |
CN112597806A (en) * | 2020-11-30 | 2021-04-02 | 北京影谱科技股份有限公司 | Vehicle counting method and device based on sample background subtraction and shadow detection |
CN113378775A (en) * | 2021-06-29 | 2021-09-10 | 武汉大学 | Video shadow detection and elimination method based on deep learning |
CN113628202A (en) * | 2021-08-20 | 2021-11-09 | 美智纵横科技有限责任公司 | Determination method, cleaning robot and computer storage medium |
CN113643323A (en) * | 2021-08-20 | 2021-11-12 | 中国矿业大学 | Target detection system under dust and fog environment of urban underground comprehensive pipe gallery |
CN113643323B (en) * | 2021-08-20 | 2023-10-03 | 中国矿业大学 | Target detection system under urban underground comprehensive pipe rack dust fog environment |
CN113628202B (en) * | 2021-08-20 | 2024-03-19 | 美智纵横科技有限责任公司 | Determination method, cleaning robot and computer storage medium |
CN115546073A (en) * | 2022-11-29 | 2022-12-30 | 昆明理工大学 | Method and device for removing shadow of floor tile image, computer equipment and storage medium |
CN117953016A (en) * | 2024-03-27 | 2024-04-30 | 华能澜沧江水电股份有限公司 | Flood discharge building exit area slope dangerous rock monitoring method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190927 |