CN112560740A - PCA-Kmeans-based visible light remote sensing image change detection method - Google Patents

PCA-Kmeans-based visible light remote sensing image change detection method

Info

Publication number
CN112560740A
CN112560740A
Authority
CN
China
Prior art keywords
remote sensing
sensing image
change detection
pca
kmeans
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011537557.0A
Other languages
Chinese (zh)
Inventor
吕国敏
刘昌军
马强
孙涛
桑国庆
于恒帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Institute of Water Resources and Hydropower Research
Original Assignee
China Institute of Water Resources and Hydropower Research
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Institute of Water Resources and Hydropower Research
Priority to CN202011537557.0A
Publication of CN112560740A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/13 - Satellite images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G06F18/232 - Non-hierarchical techniques
    • G06F18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/30 - Noise filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a PCA-Kmeans-based visible light remote sensing image change detection method. The method comprises the steps of inputting two remote sensing images of the same region at different time phases, obtaining gray-scale images of the two time-phase remote sensing images, obtaining a difference map of the remote sensing images by a differencing method, performing dimension reduction on the difference map with a PCA algorithm to obtain feature space vectors, classifying the feature vector space with a K-Means algorithm to obtain a change detection result, performing contour detection on the change detection result with an edge detection algorithm to obtain bounding boxes, and finally filtering and merging the detected change-region bounding boxes with an NMS algorithm. The invention can obtain change detection images for different types of ground objects, has a wide application range and high detection precision, and can be applied to dynamic monitoring fields such as change detection between remote sensing images of different time phases and detection of ground object changes before and after earthquakes or floods.

Description

PCA-Kmeans-based visible light remote sensing image change detection method
Technical Field
The invention relates to the technical field of image processing, and in particular to a PCA-Kmeans-based visible light remote sensing image change detection method. The invention can obtain change detection images for different types of ground objects, has a wide application range and high detection precision, and can be applied to dynamic monitoring fields such as change detection between remote sensing images of different time phases and detection of ground object changes before and after earthquakes or floods.
Background
Change detection in remote sensing images means detecting how the ground features of an area change over time from multiple remote sensing images of the same area acquired at different moments. It has become one of the most important application directions in the field of remote sensing image processing and has important application value in many fields such as land and resource planning and management, water and soil conservation, and natural disaster monitoring. As the resolution of remote sensing image data increases, the information that can be extracted from remote sensing images becomes richer and richer, so change detection can be applied to more and more scenes.
Various change detection methods have been proposed for single-band multi-temporal remote sensing images, including arithmetic-operation, classification and transformation based approaches. Existing change detection methods can be divided into post-classification comparison methods and direct comparison methods. Post-Classification Comparison (PCC) is an intuitive change detection method: the remote sensing images of different time phases are classified separately, and the classification results are then compared and analysed pixel by pixel to detect change information of the ground objects, so the method can also detect the type of ground object change. Because the remote sensing images of different time phases are classified and labelled independently, the influence of factors such as the atmosphere, the sensor, the season and the ground surface on the images of different time phases can be eliminated. However, the key to this method is classification: the accuracy of change detection depends on the accumulated accuracy of the classification results of the different time phases, and since the classification itself suffers from segmentation problems, it is often difficult to obtain high-accuracy classification results, which may lead to low accuracy and uncertainty in the change detection results.
Disclosure of Invention
In view of the above technical defects, the invention aims to provide a PCA-Kmeans-based visible light remote sensing image change detection method for detecting changes between remote sensing images of the same area acquired at different times, so as to improve detection applicability and detection precision.
The general scheme for achieving the aim of the invention is as follows: first, two remote sensing images of different time phases are input; gray-scale images are constructed for the two remote sensing images and a difference map is constructed from them; a change detection feature map is extracted using principal component analysis (PCA); the feature map is clustered using the K-means algorithm; the contours of the change areas are detected using an edge detection algorithm; the rectangular boxes surrounding the change areas are filtered using an NMS algorithm and adjacent rectangular boxes in the change detection map are merged; finally, the change areas of the remote sensing images of different time phases are obtained.
The steps of the invention comprise:
a visible light remote sensing image change detection method based on PCA-Kmeans comprises the following steps:
(1) inputting remote sensing images before and after change: inputting two acquired remote sensing images of the same area and different time phases;
(2) judging whether the input remote sensing image is a color remote sensing image, if so, executing the step (3), otherwise, executing the step (4);
(3) constructing a single-channel gray remote sensing image;
(4) judging whether the resolutions of the input two time-phase remote sensing images are consistent, if so, executing the step (6), otherwise, executing the step (5);
(5) aligning the resolution ratios of the two time-phase remote sensing images;
(6) obtaining a difference image of two remote sensing gray level images in different time phases by using a difference method;
(7) dimension reduction processing: based on a PCA algorithm, performing dimensionality reduction on a remote sensing image difference map matrix to obtain a characteristic space vector which is used as input data of next-step cluster analysis;
(8) clustering analysis: constructing a machine learning algorithm based on K-Means, inputting the feature space vector optimized by PCA for classification, and obtaining a remote sensing image change detection result;
(9) detecting the result of the step (8) based on an edge detection algorithm to obtain a change area boundary frame;
(10) filtering redundant rectangular frames in the change detection result containing bounding boxes by using a non-maximum suppression algorithm (NMS);
(11) inputting the result filtered in the step (10), and combining adjacent boundary frames in the detection result of the change area.
Further, the step (3) comprises the following steps:
(3a) acquiring a certain pixel point of a color remote sensing image, selecting a color channel with the minimum brightness value from three color channels of red R, green G and blue B of the pixel point, and taking the brightness of the color channel as the gray value of the pixel point;
(3b) repeating the step (3a) until all pixel points in the color remote sensing image are processed, obtaining the gray values of all the pixel points, and forming a gray image from the gray values of all the pixel points.
Further, the step (5) comprises the following steps:
(5a) obtaining the resolution (w1, h1) of the front time-phase remote sensing image and the resolution (w2, h2) of the rear time-phase remote sensing image, where w1 and h1 are respectively the width and height of the resolution of the front time-phase remote sensing image, and w2 and h2 are respectively the width and height of the resolution of the rear time-phase remote sensing image;
(5b) comparing w1×h1 with w2×h2: if w1×h1 > w2×h2, adjusting the resolution of the front time-phase remote sensing image to (w2, h2); otherwise, adjusting the resolution of the rear time-phase remote sensing image to (w1, h1).
Further, in the step (7), the data subjected to PCA dimensionality reduction are projected into a low-dimensional space by finding a new vector basis, on the premise of preserving the maximum variance in each dimension of the data, so that low-variance noise is removed and the principal components carrying the most information are retained; dimensions with large eigenvalues after the transformation correspond to dimensions with large variance in the original data, and the transformed feature space vectors that contribute most to the variance of the original image are taken as the input data of the next-step cluster analysis.
Further, in the step (8), the K-Means algorithm randomly initializes the clustering centers according to a preset clustering number, classifies all samples according to the distance from the samples to each center, calculates the error sum from each type of internal sample to the center, uses the average value of the samples in the class as a new clustering center, and continuously iterates until the error sum in the class (namely E in the following formula) is not reduced any more, thereby completing the clustering analysis; wherein the error criterion function is as follows:
the K-means core algorithm formula is as follows:
E = Σ(i=1..k) Σ(x∈Ci) ‖x − x̄i‖²

where E is the sum of squared Euclidean distances from the samples of each cluster to the cluster center obtained in the current iteration (the smaller the value, the better the current classification effect), k represents the preset number of clusters, i represents the index of the cluster, Ci represents the sample set of the i-th class, x̄i denotes the mean of the i-th class samples, and x denotes a sample.
Further, the step (10) comprises the following steps:
(10a) acquiring all boundary frames in the detection result of the contour of the change area, sequencing according to the area, and selecting the frame with the largest area;
(10b) calculating the overlapping area of the rest rectangular frames and the current rectangular frame, namely IOU, and deleting the frame with small area if the IOU is larger than a certain threshold;
(10c) continuing to select the box with the largest area from the unprocessed boxes, and repeating the steps (10a) and (10b) until all the boxes are traversed.
Further, the step (11) comprises the following steps:
(11a) selecting a certain boundary frame, calculating the rest frames, judging whether the other frames are adjacent to the current frame, if so, combining the frames into a large surrounding frame, judging whether the area of the boundary frame is larger than a set threshold value, and otherwise, skipping;
(11b) continuing to select the next one from the un-compared boxes, and repeating (11a) until all bounding boxes are traversed;
(11c) visualizing the rectangular frames merged in the step (11b) on the original color remote sensing image of the rear time phase.
PCA is a technique for analyzing data, and the most important application is to simplify the original data. The method can effectively find out the most 'main' elements and structures in the data, remove noise and redundancy, reduce the dimension of the original complex data and reveal a simple structure hidden behind the complex data. Its advantages are simple process, no limitation to parameters, and convenient application to different occasions.
The K-means clustering algorithm (K-means) is an iterative clustering analysis algorithm. The data are to be divided into K groups: K objects are randomly selected as initial cluster centers, the distance between each object and each seed cluster center is calculated, and each object is assigned to the nearest cluster center. A cluster center together with the objects assigned to it represents a cluster. Each time a sample is assigned, the cluster center is recalculated from the objects currently in the cluster. This process is repeated until some termination condition is met, for example that no (or only a minimum number of) objects are reassigned to different clusters, that no (or only a minimum number of) cluster centers change again, or that the sum of squared errors is locally minimal.
The invention relates to a PCA-Kmeans-based visible light remote sensing image change detection method, which has the following beneficial effects:
firstly, the data characteristics can be well optimized based on the PCA algorithm, the remote sensing image data dimensionality is effectively compressed while the data information quantity and stability are ensured, the operation requirement is reduced, the noise and the redundancy can be reduced, and the subsequent clustering algorithm processing is facilitated;
secondly, the K-Means clustering algorithm adopted by the invention belongs to unsupervised learning, and can automatically complete the change detection of the remote sensing image under the condition of saving manpower, material resources and financial resources;
thirdly, the method enables the performance of the change detection model to be more excellent based on the PCA-Kmeans algorithm, and effectively improves the detection precision of the change area.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Wherein:
FIG. 1 is a schematic flow chart of a PCA-Kmeans-based visible light remote sensing image change detection method in the present invention;
FIG. 2 shows part of the remote sensing images used by the present invention; a. 2016 image, b. 2017 image;
FIG. 3 is a change detection image of the present invention;
FIG. 4 is the comparison result of the front and rear time-phase remote sensing images; a. 2016 image, b. 2017 image; the boxes in (b) mark the locations where changes are detected.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
A visible light remote sensing image change detection method based on PCA-Kmeans.
The steps of the invention are as follows with reference to the attached figure 1.
(1) Inputting remote sensing images before and after change:
inputting two acquired remote sensing images of the same area and different time phases;
(2) judging whether the input remote sensing image is a color remote sensing image, if so, executing the step (3), otherwise, executing the step (4) on the input remote sensing image;
(3) constructing a single-channel gray remote sensing image:
(3a) acquiring a certain pixel point of a color remote sensing image, selecting a color channel with the minimum brightness value from three color channels of red R, green G and blue B of the pixel point, and taking the brightness of the color channel as the gray value of the pixel point;
(3b) repeating the step (3a) until all pixel points in the color remote sensing image are processed, obtaining the gray values of all the pixel points, and forming a gray image by the gray values of all the pixel points;
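By way of illustration only, a minimal NumPy sketch of this minimum-channel graying step (the function name and the assumption of an H×W×3 RGB array are not part of the patent):

```python
import numpy as np

def to_min_channel_gray(color_image: np.ndarray) -> np.ndarray:
    """Steps (3a)-(3b): for every pixel, take the smallest of its
    R, G and B channel values as the gray value."""
    assert color_image.ndim == 3 and color_image.shape[2] == 3
    return color_image.min(axis=2).astype(np.uint8)
```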
(4) judging whether the resolutions of the input two time-phase remote sensing images are consistent, if so, executing the step (6), otherwise, executing the step (5);
(5) aligning the resolution ratios of the two time-phase remote sensing images:
(5a) obtaining the resolution (w1, h1) of the front time-phase remote sensing image and the resolution (w2, h2) of the rear time-phase remote sensing image;
(5b) comparing w1×h1 with w2×h2: if w1×h1 > w2×h2, adjusting the resolution of the front time-phase remote sensing image to (w2, h2); otherwise, adjusting the resolution of the rear time-phase remote sensing image to (w1, h1);
In this embodiment, the resolution alignment is performed by region interpolation (INTER_AREA), which comprises the following steps:
step 1, acquiring a certain pixel point in a gray remote sensing image with resolution to be adjusted as a central pixel point, and selecting a rectangular window with the size of m × n pixels, wherein the values of m and n are as follows:
m=max(w2,w1)/min(w2,w1)
n=max(h2,h1)/min(h2,h1)
where (w1, h1) and (w2, h2) are respectively the width and height of the resolutions of the two remote sensing images;
step 2, arranging the gray values of all pixel points in the rectangular window in a descending order to form a gray sequence, selecting a median value in the gray sequence as a filtering value, and replacing the gray value of the rectangular window in the step 1 with the filtering value;
Step 3: repeating Step 1 and Step 2 until all pixel points in the gray image are processed, obtaining a new remote sensing gray image.
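For orientation only, a loose Python sketch of this window-median downscaling is given below (assuming the resolution ratios are integers and that the window moves block by block; in practice cv2.resize with the INTER_AREA flag could be used instead):

```python
import numpy as np

def align_resolution(gray_large: np.ndarray, target_hw: tuple) -> np.ndarray:
    """Steps 1-3: replace each m x n window of the larger gray image
    with the median of its gray values to reach the target resolution."""
    h_t, w_t = target_hw                       # resolution of the smaller image
    h_s, w_s = gray_large.shape
    m = max(w_s, w_t) // min(w_s, w_t)         # window width  = max(w)/min(w)
    n = max(h_s, h_t) // min(h_s, h_t)         # window height = max(h)/min(h)
    out = np.empty((h_t, w_t), dtype=gray_large.dtype)
    for r in range(h_t):
        for c in range(w_t):
            window = gray_large[r * n:(r + 1) * n, c * m:(c + 1) * m]
            out[r, c] = np.median(window)
    return out
```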
(6) Obtaining a difference image of two different time phase remote sensing gray level images by using a difference method:
the difference making method comprises the following specific steps:
step 1: subtracting two remote sensing gray level image matrixes before and after change acquired at different moments, and then taking an absolute value to obtain a difference image of the ground object:
difference_image(i,j)=|image1(i,j)-image2(i,j)|
where i and j index the pixel positions in the two images, and image(i, j) represents the gray value of the corresponding pixel;
Step 2: repeating Step 1 until all corresponding pixel points of the two remote sensing images are processed, obtaining the difference map of the two time-phase remote sensing images.
(7) Based on the PCA algorithm, performing dimensionality reduction on the remote sensing image difference map matrix: on the premise of preserving the maximum variance in each dimension of the data, the PCA dimensionality reduction projects the original high-dimensional data into a low-dimensional space by finding a new vector basis, eliminating low-variance noise and keeping the principal components carrying the most information; dimensions with large eigenvalues after the transformation correspond to dimensions with large variance in the original data, and the transformed data that best reflect the variance characteristics of the original image are taken as the input data of the next-step cluster analysis;
the principal component analysis in this example has the following steps:
step 1: inputting the difference image obtained in the step (6), acquiring one pixel point, setting a filter with a key _ size of 5x5 by taking the current pixel point as the top left vertex of a rectangular filtering window, acquiring all pixel points in the filter and flattening the pixel points into row vectors, and collecting the row vectors into a new vector set, wherein if the resolution of the remote sensing image is mxn, the number of rows and columns in the vector set is (mxn)/(5 × 5);
step 2: repeating the step 1 until all pixel points in the difference image are processed, and obtaining a (m x n/25,25) -dimensional vector diagram which is recorded as vs;
and step 3: the PCA dimension reduction processing of the vs original feature vector by adopting a principal component analysis method comprises the following steps:
step 3.1: calculating the mean value of each dimension of the vs sample
Figure BDA0002853591210000071
And the difference di
Figure BDA0002853591210000072
Figure BDA0002853591210000073
In the formula: xiThe samples in the ith column are shown, and n is shown as n columns in total.
Step 3.2: constructing a covariance matrix:
Figure BDA0002853591210000074
wherein A ═ d1,d2,...,dn]
Step 3.3: singular Value Decomposition (SVD) to obtain AATAnd arranging λ in monotonically decreasing order1≥λ2≥...≥λpThe corresponding feature vectors are respectively: mu.s12,...,μp(p≤n);
Lambda and mu are respectively eigenvalue and eigenvector
Step 3.4: selecting the first p eigenvectors according to the dimensionality of the dimensionality reduction target to form a linear transformation matrix:
W=[μ12,...,μp]
step 3.5: projecting the original difference features to a P-dimensional subspace:
PCp=WTdi(i=1,2,...,n)
in the formula, PCp is the dimension reduction characteristic of the obtained p-dimensional principal component.
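For orientation, a compact NumPy sketch of the patch construction and PCA projection of Steps 1-3.5 above (non-overlapping 5×5 tiles are assumed in order to match the stated (m×n)/25 row count, an eigendecomposition of the covariance stands in for the SVD, and p = 3 is an arbitrary choice, none of which are fixed by the patent):

```python
import numpy as np

def pca_features(diff_img: np.ndarray, block: int = 5, p: int = 3) -> np.ndarray:
    """Flatten block x block patches of the difference image into row vectors
    (vs), then project them onto the first p principal components."""
    h, w = diff_img.shape
    h, w = h - h % block, w - w % block                  # crop so the image tiles exactly
    patches = (diff_img[:h, :w]
               .reshape(h // block, block, w // block, block)
               .swapaxes(1, 2)
               .reshape(-1, block * block)
               .astype(np.float64))                      # vs: (h*w/25, 25)
    mean = patches.mean(axis=0)                          # Step 3.1: per-dimension mean
    A = patches - mean                                   # deviations d_i
    cov = A.T @ A / A.shape[0]                           # Step 3.2: 25 x 25 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)               # Step 3.3 (in place of SVD)
    order = np.argsort(eigvals)[::-1][:p]                # p largest eigenvalues
    W = eigvecs[:, order]                                # Step 3.4: 25 x p transform
    return A @ W                                         # Step 3.5: projected features
```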
(8) Constructing an unsupervised learning algorithm based on K-Means, inputting feature data subjected to PCA optimization, and finally obtaining a remote sensing image change detection result;
in the step (8), the K-Means algorithm randomly initializes a clustering center according to a preset clustering number, classifies all samples according to the distance from the samples to each center, calculates the error sum from each type of internal samples to the center, takes the average value of the samples in the class as a new clustering center, and continuously iterates until the error sum (E) in the class reaches the minimum value range, thereby completing the clustering analysis; wherein the error criterion function is as follows:
the K-means core algorithm formula is as follows:
E = Σ(i=1..k) Σ(x∈Ci) ‖x − x̄i‖²

where E is the sum of squared Euclidean distances from the samples of each cluster to the cluster center obtained in the current iteration (the smaller the value, the better the current classification effect), k represents the preset number of clusters, i represents the index of the cluster, Ci represents the sample set of the i-th class, x̄i denotes the mean of the i-th class samples, and x denotes a sample.
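A minimal hand-rolled K-Means sketch using the error criterion E above (k = 2, separating changed from unchanged samples, is an assumed setting; a library implementation such as sklearn.cluster.KMeans would serve equally well):

```python
import numpy as np

def kmeans(features: np.ndarray, k: int = 2, iters: int = 100, seed: int = 0):
    """Random initial centres, nearest-centre assignment, centre update,
    repeated until the within-class error sum E stops decreasing."""
    rng = np.random.default_rng(seed)
    centres = features[rng.choice(len(features), size=k, replace=False)]
    prev_e = np.inf
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                    # assign samples to nearest centre
        e = sum(((features[labels == i] - centres[i]) ** 2).sum() for i in range(k))
        if e >= prev_e:                                  # E no longer decreases: stop
            break
        prev_e = e
        centres = np.array([features[labels == i].mean(axis=0)
                            if np.any(labels == i) else centres[i]
                            for i in range(k)])
    return labels, centres
```

The per-patch labels returned by such a sketch can then be reshaped back onto the patch grid to form the change detection map used in the following steps.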
(9) Detecting the result of the step (8) based on an edge detection algorithm to obtain a change area boundary frame;
the edge detection algorithm in this embodiment is as follows:
step 1: filtering the change detection result obtained in the step (8) by using a Gaussian filtering algorithm to remove the noise of the image and play a smoothing role, wherein the size of a Gaussian kernel is set to be 5x5, and the standard deviation is set to be 0;
the gaussian function is formulated as follows:
Figure BDA0002853591210000082
wherein σ is 0.8;
the Gaussian template is:
Figure BDA0002853591210000091
step 2: using a sobel operator to calculate the gradient size and the gradient direction of each pixel point, wherein the formula is as follows:
Figure BDA0002853591210000092
θ=arctan(Gx/Gy)
wherein Gx is the gradient of the x axis, Gy is the gradient of the y axis, M is the gradient of the current pixel point, and theta is the gradient direction of the current pixel point;
and step 3: the method uses non-maximum suppression to eliminate the spurious effect, and comprises the following specific steps:
after the gradient size and direction are obtained in the step 2, the image is comprehensively scanned, non-boundary points are removed, the gradient of each pixel point is judged whether to be the maximum of surrounding points with the same gradient direction, if yes, the gradient is reserved, and if not, the gradient is removed;
and 4, step 4: true and potential edges are obtained using a dual threshold:
gradient value > maxVal: the processing is a boundary and,
minVal < gradient value < maxVal: the boundary is retained, otherwise, the boundary is discarded,
gradient value < minVal: discarding;
and 5: the bounding box of the detected edge is obtained based on the findContours function in opencv.
(10) Filtering redundant rectangular boxes in the change detection result containing the bounding box by using a non-maximum suppression algorithm (NMS), and comprising the following steps:
(10a) acquiring all boundary frames in the detection result of the contour of the change area, sequencing according to the area, and selecting the frame with the largest area;
(10b) calculating the overlapping area of the rest rectangular frames and the current rectangular frame, namely IOU, and deleting the frame with small area if the IOU is larger than a certain threshold;
(10c) continuously selecting a frame with the largest area from the unprocessed frames, and repeating the steps (10a) and (10b) until all the frames are traversed;
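An illustrative sketch of this area-ordered filtering over (x, y, w, h) boxes (the IoU threshold of 0.3 is an assumed value, not taken from the patent):

```python
def nms_by_area(boxes, iou_thresh: float = 0.3):
    """Steps (10a)-(10c): keep the largest-area box, drop smaller boxes whose
    overlap (IoU) with it exceeds the threshold, repeat on the rest."""
    def iou(a, b):
        ax2, ay2 = a[0] + a[2], a[1] + a[3]
        bx2, by2 = b[0] + b[2], b[1] + b[3]
        iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
        ih = max(0, min(ay2, by2) - max(a[1], b[1]))
        inter = iw * ih
        union = a[2] * a[3] + b[2] * b[3] - inter
        return inter / union if union else 0.0

    remaining = sorted(boxes, key=lambda b: b[2] * b[3], reverse=True)
    kept = []
    while remaining:
        current = remaining.pop(0)                       # largest unprocessed box
        kept.append(current)
        remaining = [b for b in remaining if iou(current, b) <= iou_thresh]
    return kept
```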
(11) inputting the result filtered in the step (10), and combining adjacent boundary frames in the detection result of the change area:
(11a) selecting a certain boundary frame, calculating the rest frames, judging whether the other frames are adjacent to the current frame, if so, combining the frames into a large surrounding frame, judging whether the area of the boundary frame is larger than a set threshold value, and otherwise, skipping;
(11b) continuing to select the next one from the un-compared boxes, and repeating (11a) until all bounding boxes are traversed;
(11c) visualizing the rectangular frames merged in step (11b) on the original color remote sensing image of the rear time phase.
In this embodiment: the method for judging the adjacent bounding boxes in the step (11) comprises the following steps:
step 1: obtaining box1Position coordinates and size (cx)1,cy1,w1,h1) Obtaining box2Position coordinates and size (cx)2,cy2,w2,h2);
Specifically, cx and cy are central coordinates of the rectangular frame, and w and h are width and height of the rectangular frame;
step 2: if box1And box2And if the following conditions are met, judging the boundary frames to be adjacent, executing the step 3, otherwise, searching the next boundary frame:
||cy2-cy1|-(h1+h2)/2|≤d1
||cx2-cx1|-(w1+w2)/2|≤d2
wherein d1 and d2 are thresholds for judging whether two bounding boxes are adjacent;
and step 3: calculating the position coordinates and size of the new bounding box:
nx1=min((cx1-w1/2),(cx2-w2/2))
ny1=min((cy1-h1/2),(cy2-h2/2))
nx2=max((cx1+w1/2),(cx2+w2/2))
ny2=max((cy1+h1/2),(cy2+h2/2))
nw=nx2-nx1
nh=ny2-ny1
where (nx1, ny1) are the horizontal and vertical coordinates of the top-left vertex of the new bounding box, (nx2, ny2) are the horizontal and vertical coordinates of the bottom-right vertex of the new bounding box, and (nw, nh) are the width and height of the new bounding box.

The above examples are only intended to illustrate the technical solution of the present invention, not to limit it; although the present invention has been described in detail with reference to the foregoing examples, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.
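For completeness, and purely as an illustration rather than part of the original disclosure, a sketch of the adjacency test and box merge described in Steps 1-3 of the embodiment above, operating on (cx, cy, w, h) boxes (the thresholds d1 and d2 and the single-pass merge order are illustrative choices, and the area-threshold check of step (11a) is omitted):

```python
def merge_adjacent_boxes(boxes, d1: float = 10.0, d2: float = 10.0):
    """Two boxes are adjacent when the gap between their centres along y and x
    is within d1/d2 of the boxes touching; adjacent boxes are replaced by the
    enclosing box computed from (nx1, ny1) and (nx2, ny2)."""
    boxes = list(boxes)
    merged = []
    while boxes:
        cx1, cy1, w1, h1 = boxes.pop(0)
        rest = []
        for cx2, cy2, w2, h2 in boxes:
            adjacent = (abs(abs(cy2 - cy1) - (h1 + h2) / 2) <= d1 and
                        abs(abs(cx2 - cx1) - (w1 + w2) / 2) <= d2)
            if adjacent:
                nx1 = min(cx1 - w1 / 2, cx2 - w2 / 2)    # enclosing box corners
                ny1 = min(cy1 - h1 / 2, cy2 - h2 / 2)
                nx2 = max(cx1 + w1 / 2, cx2 + w2 / 2)
                ny2 = max(cy1 + h1 / 2, cy2 + h2 / 2)
                w1, h1 = nx2 - nx1, ny2 - ny1            # grow the current box
                cx1, cy1 = nx1 + w1 / 2, ny1 + h1 / 2
            else:
                rest.append((cx2, cy2, w2, h2))
        merged.append((cx1, cy1, w1, h1))
        boxes = rest
    return merged
```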

Claims (7)

1. A PCA-Kmeans-based visible light remote sensing image change detection method is characterized by comprising the following steps:
(1) inputting remote sensing images before and after change: inputting two acquired remote sensing images of the same area and different time phases;
(2) judging whether the input remote sensing image is a color remote sensing image, if so, executing the step (3), otherwise, executing the step (4);
(3) constructing a single-channel gray remote sensing image;
(4) judging whether the resolutions of the input two time-phase remote sensing images are consistent, if so, executing the step (6), otherwise, executing the step (5);
(5) aligning the resolution ratios of the two time-phase remote sensing images;
(6) obtaining a difference image of two remote sensing gray level images in different time phases by using a difference method;
(7) dimension reduction processing: based on a PCA algorithm, performing dimensionality reduction on a remote sensing image difference map matrix to obtain a characteristic space vector which is used as input data of next-step cluster analysis;
(8) clustering analysis: constructing a machine learning algorithm based on K-Means, inputting the feature space vector optimized by PCA for classification, and obtaining a remote sensing image change detection result;
(9) detecting the result of the step (8) based on an edge detection algorithm to obtain a change area boundary frame;
(10) filtering redundant rectangular frames in the change detection result containing the bounding boxes by using a non-maximum suppression algorithm;
(11) inputting the result filtered in the step (10), and combining adjacent boundary frames in the detection result of the change area.
2. The PCA-Kmeans-based visible light remote sensing image change detection method according to claim 1, characterized in that: the step (3) comprises the following steps:
(3a) acquiring a certain pixel point of a color remote sensing image, selecting a color channel with the minimum brightness value from three color channels of red R, green G and blue B of the pixel point, and taking the brightness of the color channel as the gray value of the pixel point;
(3b) repeating the step (3a) until all pixel points in the color remote sensing image are processed, obtaining the gray values of all the pixel points, and forming a gray image from the gray values of all the pixel points.
3. The PCA-Kmeans-based visible light remote sensing image change detection method according to claim 1, characterized in that: the step (5) comprises the following steps:
(5a) obtaining the resolution (w1, h1) of the front time-phase remote sensing image and the resolution (w2, h2) of the rear time-phase remote sensing image, where w1 and h1 are respectively the width and height of the resolution of the front time-phase remote sensing image, and w2 and h2 are respectively the width and height of the resolution of the rear time-phase remote sensing image;
(5b) comparing w1×h1 with w2×h2: if w1×h1 > w2×h2, adjusting the resolution of the front time-phase remote sensing image to (w2, h2); otherwise, adjusting the resolution of the rear time-phase remote sensing image to (w1, h1).
4. The PCA-Kmeans-based visible light remote sensing image change detection method according to claim 1, characterized in that: in the step (7), the data subjected to PCA dimensionality reduction are projected into a low-dimensional space by finding a new vector basis, on the premise of preserving the maximum variance in each dimension of the data, so that low-variance noise is removed and the principal components carrying the most information are retained; dimensions with large eigenvalues after the transformation correspond to dimensions with large variance in the original data, and the transformed feature space vectors that contribute most to the variance of the original image are taken as the input data of the next-step cluster analysis.
5. The PCA-Kmeans-based visible light remote sensing image change detection method according to claim 1, characterized in that: in the step (8), the K-Means algorithm randomly initializes the clustering centers according to the preset clustering numbers, classifies all samples according to the distances from the samples to the centers, calculates the error sum from each type of internal samples to the centers, takes the average value of the samples in the class as a new clustering center, and continuously iterates until the error sum in the class is not reduced any more, thereby completing the clustering analysis.
6. The PCA-Kmeans-based visible light remote sensing image change detection method according to claim 1, characterized in that: the step (10) includes the steps of:
(10a) acquiring all boundary frames in the detection result of the contour of the change area, sequencing according to the area, and selecting the frame with the largest area;
(10b) calculating the overlapping area of the rest rectangular frames and the current rectangular frame, namely IOU, and deleting the frame with small area if the IOU is larger than a certain threshold;
(10c) continuing to select the box with the largest area from the unprocessed boxes, and repeating the steps (10a) and (10b) until all the boxes are traversed.
7. The PCA-Kmeans-based visible light remote sensing image change detection method according to claim 1, characterized in that: the step (11) includes the steps of:
(11a) selecting a certain boundary frame, calculating the rest frames, judging whether the other frames are adjacent to the current frame, if so, combining the frames into a large surrounding frame, judging whether the area of the boundary frame is larger than a set threshold value, and otherwise, skipping;
(11b) continuing to select the next one from the un-compared boxes, and repeating (11a) until all bounding boxes are traversed;
(11c) visualizing the rectangular frames merged in the step (11b) on the original color remote sensing image of the rear time phase.
CN202011537557.0A 2020-12-23 2020-12-23 PCA-Kmeans-based visible light remote sensing image change detection method Pending CN112560740A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011537557.0A CN112560740A (en) 2020-12-23 2020-12-23 PCA-Kmeans-based visible light remote sensing image change detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011537557.0A CN112560740A (en) 2020-12-23 2020-12-23 PCA-Kmeans-based visible light remote sensing image change detection method

Publications (1)

Publication Number Publication Date
CN112560740A true CN112560740A (en) 2021-03-26

Family

ID=75031682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011537557.0A Pending CN112560740A (en) 2020-12-23 2020-12-23 PCA-Kmeans-based visible light remote sensing image change detection method

Country Status (1)

Country Link
CN (1) CN112560740A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103456018A (en) * 2013-09-08 2013-12-18 西安电子科技大学 Remote sensing image change detection method based on fusion and PCA kernel fuzzy clustering
CN104361589A (en) * 2014-11-12 2015-02-18 河海大学 High-resolution remote sensing image segmentation method based on inter-scale mapping
CN105894513A (en) * 2016-04-01 2016-08-24 武汉大学 Remote sensing image change detection method and remote sensing image change detection system taking into consideration spatial and temporal variations of image objects
CN110070525A (en) * 2019-04-16 2019-07-30 湖北省水利水电科学研究院 Remote sensing image variation detection method based on the semi-supervised CV model of object level
CN111179230A (en) * 2019-12-18 2020-05-19 星际空间(天津)科技发展有限公司 Remote sensing image contrast change detection method and device, storage medium and electronic equipment
CN111582043A (en) * 2020-04-15 2020-08-25 电子科技大学 High-resolution remote sensing image ground object change detection method based on multitask learning
CN111539296A (en) * 2020-04-17 2020-08-14 河海大学常州校区 Method and system for identifying illegal building based on remote sensing image change detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
赵琦琳等著: "《人工神经网络在环境科学与工程中的设计应用》" (Design and Application of Artificial Neural Networks in Environmental Science and Engineering), 31 March 2019, 中国环境出版集团 (China Environment Publishing Group) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116539167A (en) * 2023-07-04 2023-08-04 陕西威思曼高压电源股份有限公司 High-voltage power supply working temperature distribution data analysis method
CN116539167B (en) * 2023-07-04 2023-09-08 陕西威思曼高压电源股份有限公司 High-voltage power supply working temperature distribution data analysis method

Similar Documents

Publication Publication Date Title
CN108537239B (en) Method for detecting image saliency target
CN110309781B (en) House damage remote sensing identification method based on multi-scale spectrum texture self-adaptive fusion
CN107909081B (en) Method for quickly acquiring and quickly calibrating image data set in deep learning
CN107330875B (en) Water body surrounding environment change detection method based on forward and reverse heterogeneity of remote sensing image
CN109871884B (en) Multi-feature-fused object-oriented remote sensing image classification method of support vector machine
EP3073443B1 (en) 3d saliency map
CN112529910B (en) SAR image rapid superpixel merging and image segmentation method
CN111340824A (en) Image feature segmentation method based on data mining
CN106127735B (en) A kind of facilities vegetable edge clear class blade face scab dividing method and device
CN107977660A (en) Region of interest area detecting method based on background priori and foreground node
CN111274964B (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
CN111199245A (en) Rape pest identification method
CN115690086A (en) Object-based high-resolution remote sensing image change detection method and system
CN112861654A (en) Famous tea picking point position information acquisition method based on machine vision
CN107392211B (en) Salient target detection method based on visual sparse cognition
CN109741358A (en) Superpixel segmentation method based on the study of adaptive hypergraph
CN110516666B (en) License plate positioning method based on combination of MSER and ISODATA
CN115841633A (en) Power tower and power line associated correction power tower and power line detection method
CN116052152A (en) License plate recognition system based on contour detection and deep neural network
CN111768455A (en) Image-based wood region and dominant color extraction method
CN115018785A (en) Hoisting steel wire rope tension detection method based on visual vibration frequency identification
CN113095332B (en) Saliency region detection method based on feature learning
CN113723314A (en) Sugarcane stem node identification method based on YOLOv3 algorithm
CN107977608B (en) Method for extracting road area of highway video image
CN112560740A (en) PCA-Kmeans-based visible light remote sensing image change detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210326