CN111060887B - Gm-APD laser radar low signal-to-noise ratio echo data signal extraction method based on concave-convex search - Google Patents


Info

Publication number
CN111060887B
CN111060887B (granted from application CN201911071241.4A)
Authority
CN
China
Prior art keywords: value, threshold, variance, order search, target
Prior art date
Legal status (an assumption, not a legal conclusion): Active
Application number: CN201911071241.4A
Other languages: Chinese (zh)
Other versions: CN111060887A (application publication)
Inventors: 孙剑峰 (Sun Jianfeng), 马乐 (Ma Le), 刘迪 (Liu Di), 周鑫 (Zhou Xin), 陆威 (Lu Wei), 李思宁 (Li Sining), 王海虹 (Wang Haihong)
Current Assignee: Harbin Institute of Technology
Original Assignee: Harbin Institute of Technology
Application filed by Harbin Institute of Technology; priority application CN201911071241.4A
Published as application CN111060887A; granted and published as CN111060887B
Legal status: Active

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, based on concave-convex search, for extracting signals from low signal-to-noise ratio echo data of a Gm-APD laser radar. Step 1: performing preprocessing by convolving the trigger histogram with a Gaussian function, so as to remove abnormal peaks and obtain a smooth distribution histogram. Step 2: extracting the target's features on the smooth distribution histogram by solving its first and second derivatives and determining the distribution of maximum points under the current variance. Step 3: combining the distance value results of the first-order and second-order searches, and judging and retaining the correct target distance value by referring to the distance values of the cross-neighborhood pixels. The invention is used for signal extraction from echo data with a low peak signal-to-noise ratio and can extract detection signals from long-range targets.

Description

Gm-APD laser radar low signal-to-noise ratio echo data signal extraction method based on concave-convex search
Technical Field
The invention belongs to the technical field of laser radar signal processing, and in particular relates to a method for extracting signals from low signal-to-noise ratio echo data of a Gm-APD laser radar based on concave-convex search.
Background
At present, Gm-APD laser imaging radar signal extraction methods fall into two main classes: nonparametric estimation and parametric estimation. Nonparametric estimation extracts echo features directly. The peak method assumes that the trigger frequency of the target echo is higher than that of the noise and takes the peak of the trigger-frequency histogram as the target position for reconstruction; it extracts the target echo position well under low-noise, high signal-to-noise ratio, many-frame conditions, but reconstructs high-noise, few-frame echo data poorly. The time-correlation method reconstructs signals within one pulse width according to their correlation: triggers are sorted, and a position where the interval between two successive triggers is smaller than the pulse width is taken as a target echo position. In general, existing nonparametric methods reconstruct high-noise data poorly. Parametric estimation reconstructs using the Poisson-distributed trigger model of the Gm-APD and needs more frames. Maximum likelihood estimation searches the parameter space using the trigger model and a likelihood function and reconstructs at the distance maximizing the likelihood; when the actual trigger histogram differs from the trigger model, large errors result. Bayesian estimation uses reversible-jump Markov chain Monte Carlo combined with the trigger model to obtain target positions in multimodal echo data, and likewise depends on the trigger model. There is also a back-calculation method that corrects the trigger-frequency histogram by reducing the trigger frequency of the background noise at the gate-opening end, highlighting the target peak.
In addition, spatial-correlation processing can be combined with other methods to raise the effective statistical frame count; it improves reconstruction with few frames or an extremely low signal-to-noise ratio, but causes target-edge blurring when the signal-to-noise ratio is somewhat higher.
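For reference, the baseline peak method described above amounts to taking the argmax of the per-pixel trigger-frequency histogram. A minimal sketch (numpy; the bin-width value is an illustrative assumption, not a parameter from the patent):

```python
import numpy as np

def peak_method(hist, bin_width_m=0.15):
    """Baseline peak method: the histogram bin with the highest trigger
    count is taken as the target position (reliable only at high SNR)."""
    peak_bin = int(np.argmax(hist))
    return peak_bin * bin_width_m  # range estimate in metres

# toy histogram: flat noise floor plus an echo around bin 40
hist = np.full(100, 2.0)
hist[39:42] += [4.0, 9.0, 4.0]
print(peak_method(hist))  # -> 6.0
```

When the noise floor rises toward the echo amplitude, the argmax jumps to a noise bin, which is exactly the failure mode the concave-convex search is designed to avoid.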
Disclosure of Invention
The invention provides a method for extracting low signal-to-noise ratio echo data signals of a Gm-APD laser radar based on concave-convex search, which exploits the convex character of the echo signal to combine first-order and second-order search results; during combination, neighborhood information is used for repeated judgments, so that target pixels are retained and noise pixels are removed.
The invention is realized by the following technical scheme:
a method for extracting a Gm-APD laser radar low signal-to-noise ratio echo data signal based on concave-convex search comprises the following steps:
Step 1: performing preprocessing by convolving the trigger histogram with a Gaussian function, so as to remove abnormal peaks and obtain a smooth distribution histogram;
Step 2: extracting the target's features on the smooth distribution histogram by solving its first and second derivatives and determining the distribution of maximum points under the current variance;
Step 3: combining the distance value results of the first-order and second-order searches, and judging and retaining the correct target distance value by referring to the distance values of the cross-neighborhood pixels.
Further, the preprocessing expressions of step 1 are as follows:
w(i) = (u * v_h)(i) = Σ_j u(j) · v_h(i − j)
v_h(x) = (1 / √(2πh)) · exp(−x² / (2h))
wherein v is the kernel density function, i.e. a Gaussian smoothing function, h is its variance, u is the trigger histogram, and w is the smoothed histogram.
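The preprocessing can be sketched as a discrete convolution of the trigger histogram u with a Gaussian kernel of variance h. A minimal numpy sketch — truncating the kernel at 3σ and normalizing it to unit sum are assumptions for illustration, not details taken from the patent:

```python
import numpy as np

def gaussian_kernel(h):
    """Kernel density (Gaussian) function with variance h, truncated at 3 sigma."""
    half = int(np.ceil(3 * np.sqrt(h)))
    x = np.arange(-half, half + 1)
    v = np.exp(-x**2 / (2.0 * h))
    return v / v.sum()  # normalize so the total trigger count is preserved

def smooth_histogram(u, h):
    """Step 1: w = u * v_h, the smoothed trigger-frequency histogram."""
    return np.convolve(u, gaussian_kernel(h), mode="same")
```

Smoothing a single-bin spike with h = 4 spreads it over roughly ±6 bins while preserving its area, which suppresses the abnormal single-bin peaks mentioned above.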
Further, the first derivative expression of step 2 is:
w1(i) = (u * v1′)(i) = Σ_j u(j) · v1′(i − j)
the second derivative expression is:
w2(i) = (u * v2″)(i) = Σ_j u(j) · v2″(i − j)
wherein v1 is the kernel density function with variance h1, v2 is the kernel density function with variance h2, and w1 and w2 are the corresponding first and second derivatives of the preprocessed data;
the first derivative target location is characterized by w1In the two adjacent values, the left value is larger than zero, the right value is smaller than zero, the position closest to the middle zero point in the two values is a set of target possible values, and finally the position with the largest difference between the left nearest maximum value and the right nearest minimum value in the possible values is extracted; second derivative position characterized by w2Position corresponding to the minimum value。
Further, the expressions for signal extraction from the first and second derivatives are as follows:
θ = (i, i+1) | w1(i) > 0, w1(i+1) < 0
pos_1stpro = argmin_θ |w1(θ)|
θ1 = sort(maximum(w1), pos_1stpro)
θ2 = sort(minimum(w1), pos_1stpro)
pos_1st = argmax_{p ∈ pos_1stpro} [ w1(maxL(p)) − w1(minR(p)) ]
pos_2nd = argmin_i w2(i)
where θ is the set of possible values satisfying the first characteristic of the first derivative, pos_1stpro the possible values satisfying the first and second characteristics of the first derivative, maxL(p) and minR(p) the nearest maximum of w1 to the left of p and the nearest minimum of w1 to the right of p, pos_1st the first-order search result, and pos_2nd the second-order search result.
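Under the assumption that the expressions above are interpreted as stated (a +/− sign change of w1 yields candidates, the largest left-maximum/right-minimum drop selects among them, and the minimum of w2 gives the second-order result), the two searches can be sketched as:

```python
import numpy as np

def first_order_search(w1):
    """Candidates: adjacent pairs where w1 goes + -> -; of each pair keep the
    index whose |w1| is closer to zero; return the candidate with the largest
    difference between its nearest maximum of w1 on the left and its nearest
    minimum of w1 on the right."""
    idx = np.where((w1[:-1] > 0) & (w1[1:] < 0))[0]
    cands = [i if abs(w1[i]) < abs(w1[i + 1]) else i + 1 for i in idx]
    maxima = np.where((w1[1:-1] > w1[:-2]) & (w1[1:-1] > w1[2:]))[0] + 1
    minima = np.where((w1[1:-1] < w1[:-2]) & (w1[1:-1] < w1[2:]))[0] + 1
    best, best_drop = None, -np.inf
    for p in cands:
        left = maxima[maxima <= p]
        right = minima[minima >= p]
        if left.size and right.size and w1[left[-1]] - w1[right[0]] > best_drop:
            best, best_drop = int(p), w1[left[-1]] - w1[right[0]]
    return best

def second_order_search(w2):
    """Position of the minimum of the second derivative (strongest concavity)."""
    return int(np.argmin(w2))
```

For a smoothed single-peak histogram both searches land on (or next to) the peak bin; on noisy multi-peak data they can disagree, which is what step 3 arbitrates.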
Further, the specific process in step 3 is as follows:
step 3.1: keeping and judging the value of the target pixel point;
step 3.2: and removing the noise pixel points and supplementing the target pixel points.
Further, the step 3.1 specifically comprises:
Step 3.1.1: calculating the differences between the central pixel and its cross-neighborhood pixels for the first-order and second-order searches respectively, and sorting each set of differences in ascending order to obtain the sequences D_1st and D_2nd:
D_1st = sort(Dvalue_1st) = d1, d2, d3, d4
D_2nd = sort(Dvalue_2nd) = d5, d6, d7, d8
Step 3.1.2: respectively judging whether the number of valued (non-NaN) points in D_1st and D_2nd is at least 2:
(isnan(D_1st || D_2nd) ≤ 2)
If D_1st has at least two valued points, S1 is the average of the first two values of the sequence D_1st; otherwise S1 is the first value of D_1st. S2 is obtained from D_2nd in the same way;
Step 3.1.3: judging the relation of |P_1st(i, j) − P_2nd(i, j)| to threshold A, and simultaneously the relations of S1 and S2 to threshold A.
When |P_1st(i, j) − P_2nd(i, j)| is less than threshold A, or S1 and S2 are both less than threshold A, the obtained distance is taken as a target value and retained; the judgment expression is:
[|P_1st(i, j) − P_2nd(i, j)| < th || (S1 < th && S2 < th)]
When S1 is less than threshold A while S2 is greater than or equal to threshold A, the first-order search result P_1st(i, j) is considered to have found the target and the second-order search result P_2nd(i, j) an erroneous distance value; the first-order search result of the pixel is assigned to the second-order search result, and the judgment expression is:
(S1 < th && S2 ≥ th)
When S2 is less than threshold A while S1 is greater than or equal to threshold A, the second-order search result P_2nd(i, j) is considered to have found the target and the first-order search result P_1st(i, j) an erroneous distance value; the second-order search result of the pixel is assigned to the first-order search result, and the judgment expression is:
(S1 ≥ th && S2 < th)
When none of the above relations of S1 and S2 to threshold A holds, i.e. both exceed the threshold, both search results are considered non-target values and are discarded and set to NaN (a non-number);
step 3.1.4: the routine is ended.
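A per-pixel sketch of the step 3.1 decision logic, assuming S1 and S2 are computed from the sorted absolute cross-neighborhood differences as described in steps 3.1.1–3.1.2, and using NaN to mark a non-number:

```python
import numpy as np

def neighbor_stat(center, cross):
    """S of steps 3.1.1-3.1.2: sort |center - neighbor| ascending over the
    valued cross neighbors; mean of the two smallest if at least two
    neighbors are valued, else the single smallest (inf if none)."""
    d = np.abs(np.asarray(cross, dtype=float) - center)
    d = np.sort(d[~np.isnan(d)])
    if d.size == 0:
        return np.inf
    return d[:2].mean() if d.size >= 2 else d[0]

def combine_pixel(p1, p2, s1, s2, th):
    """Step 3.1.3 for one pixel: p1/p2 are the first-/second-order range
    results, th is threshold A. Returns the (possibly corrected) pair."""
    if abs(p1 - p2) < th or (s1 < th and s2 < th):
        return p1, p2            # consistent: keep the target value
    if s1 < th <= s2:
        return p1, p1            # trust the first-order result
    if s2 < th <= s1:
        return p2, p2            # trust the second-order result
    return np.nan, np.nan        # neither consistent: discard as noise
```

The pair returned by combine_pixel feeds the step 3.2 pass, where remaining NaNs are either filled from the neighborhood or confirmed as noise.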
Further, the step 3.2 specifically includes:
Step 3.2.1: calculating the number N1 of NaN (non-number) points in the current pixel array; the calculation expression is:
N1 = Σ isnan(P(i, j))
Step 3.2.2: using D_1st and D_2nd obtained in step 3.1.1 and S1, S2 obtained in step 3.1.2, setting to NaN every pixel whose cross-neighborhood points are all NaN or whose S value exceeds the threshold;
Step 3.2.3: judging whether the cross neighborhood contains three or more valued pixels; the judgment expression is:
(isnan(D_1st || D_2nd) ≤ 1)
When it does, calculating the variance ss1 of D_1st and the variance ss2 of D_2nd; the calculation formulas are respectively:
ss1 = var(D_1st(1:3))
ss2 = var(D_2nd(1:3))
The variance ss1 (and likewise ss2) is the variance of the three values that remain after the value with the largest deviation is removed from the four cross-neighborhood values;
Step 3.2.4: judging the relation between the variance ss and threshold B; the judgment expression is:
(ss < th)
When the variance is less than the threshold, the central pixel is set to the mean of the remaining valued pixels after the largest-deviation value is removed; when the variance is greater than or equal to the threshold, the central pixel is set to NaN;
step 3.2.5: repeating steps 3.1.1 to 3.1.4;
Step 3.2.6: calculating the number N2 of NaN points in the pixel array at this stage; the calculation formula is:
N2 = Σ isnan(P1(i, j))
Step 3.2.7: judging whether N1 equals N2; the judgment formula is:
(N1 = N2)
If N1 is not equal to N2, the value of N2 is assigned to N1 and the procedure returns to step 3.2.2 to continue the loop; if N1 equals N2, the routine ends.
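The step 3.2 loop can be sketched for a single range image as follows. This is a simplified sketch under stated assumptions: NaN marks non-numbers, only interior pixels are visited, the fill/discard rules are applied to the cross neighborhood, and the threshold and iteration cap are illustrative:

```python
import numpy as np

def fill_and_clean(P, th_b, max_iter=20):
    """Step 3.2 sketch on one range image P (NaN = non-number).
    A NaN centre with >= 3 valued cross neighbours is filled when, after
    removing the most deviant neighbour, the variance of the remaining
    values is below threshold B; a pixel whose cross neighbours are all
    NaN is discarded. Repeat until the NaN count stops changing."""
    P = np.asarray(P, dtype=float).copy()
    n_prev = int(np.isnan(P).sum())
    for _ in range(max_iter):
        Q = P.copy()
        for i in range(1, P.shape[0] - 1):
            for j in range(1, P.shape[1] - 1):
                nb = np.array([P[i-1, j], P[i+1, j], P[i, j-1], P[i, j+1]])
                valid = nb[~np.isnan(nb)]
                if valid.size == 0:
                    Q[i, j] = np.nan                  # isolated: remove as noise
                elif np.isnan(P[i, j]) and valid.size >= 3:
                    # drop the neighbour farthest from the neighbourhood mean
                    keep = np.delete(valid, np.argmax(np.abs(valid - valid.mean())))
                    if keep.var() < th_b:
                        Q[i, j] = keep.mean()         # supplement target pixel
        P = Q
        n_now = int(np.isnan(P).sum())
        if n_now == n_prev:
            break
        n_prev = n_now
    return P
```

The fixed point (N1 = N2) guarantees termination: each pass either changes the NaN count or the loop stops.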
The invention has the beneficial effects that:
the target echo peak value lower than the noise trigger frequency can also be extracted, the method does not depend on a trigger model, and the image signal to noise ratio of the three-dimensional range profile can be improved by judging for many times.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 shows high-noise echo data and the preprocessing result of the present invention: FIG. 2-(a) raw high-noise echo data; FIG. 2-(b) preprocessed high-noise echo data.
FIG. 3 shows the first derivative w1 and second derivative w2 of the present invention: FIG. 3-(a) first derivative w1; FIG. 3-(b) second derivative w2.
FIG. 4 is a flow chart of target selection according to the present invention: FIG. 4-(a) flow chart of the process combining the first-order and second-order searches; FIG. 4-(b) flow chart of target pixel judgment.
FIG. 5 shows the experimental scene and the multi-frame-repaired range map: FIG. 5-(a) target scene; FIG. 5-(b) range map.
FIG. 6 shows range images reconstructed by target signal extraction from 500 frames according to the present invention: FIG. 6-(a) peak method; FIG. 6-(b) maximum likelihood estimation; FIG. 6-(c) concave-convex search combination method.
FIG. 7 shows the variation of objective evaluation indices of the reconstructed three-dimensional image with frame count: FIG. 7-(a) target restoration degree; FIG. 7-(b) average ranging error; FIG. 7-(c) reciprocal image signal-to-noise ratio.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A method for extracting low signal-to-noise ratio echo data signals of a Gm-APD laser radar based on concave-convex search comprises the following steps, as shown in FIG. 1:
Step 1: performing preprocessing by convolving the trigger histogram with a Gaussian function, so as to remove abnormal peaks and obtain a smooth distribution histogram, as shown in FIG. 2;
Step 2: extracting the target's features on the smooth distribution histogram by solving its first and second derivatives and determining the distribution of maximum points under the current variance;
Step 3: combining the distance value results of the first-order and second-order searches, and judging and retaining the correct target distance value by referring to the distance values of the cross-neighborhood pixels.
Further, the preprocessing expressions of step 1 are as follows:
w(i) = (u * v_h)(i) = Σ_j u(j) · v_h(i − j)
v_h(x) = (1 / √(2πh)) · exp(−x² / (2h))
wherein v is the kernel density function, i.e. a Gaussian smoothing function, h is its variance, u is the trigger histogram, and w is the smoothed histogram.
Further, the first derivative expression of step 2 is:
w1(i) = (u * v1′)(i) = Σ_j u(j) · v1′(i − j)
the second derivative expression is:
w2(i) = (u * v2″)(i) = Σ_j u(j) · v2″(i − j)
wherein v1 is the kernel density function with variance h1, v2 is the kernel density function with variance h2, and w1 and w2 are the corresponding first and second derivatives of the preprocessed data.
The first-derivative target location is characterized as follows: among two adjacent values of w1, the left value is greater than zero and the right value is less than zero; of the two, the position closer to the zero crossing between them joins the set of possible target values; finally, the possible value with the largest difference between its nearest maximum on the left and its nearest minimum on the right is extracted;
the second-derivative position is characterized as the position corresponding to the minimum value of w2.
Further, the expressions for signal extraction from the first and second derivatives are as follows:
θ = (i, i+1) | w1(i) > 0, w1(i+1) < 0
i.e., of two adjacent points of w1, the left value is greater than zero and the right value is less than zero;
pos_1stpro = argmin_θ |w1(θ)|
i.e., of the two points, the one whose first derivative is closer to 0 is a possible point;
θ1 = sort(maximum(w1), pos_1stpro)
i.e., all maximum points of the first derivative and the possible points are sorted in ascending order;
θ2 = sort(minimum(w1), pos_1stpro)
i.e., all minimum points of the first derivative and the possible points are sorted in ascending order;
pos_1st = argmax_{p ∈ pos_1stpro} [ w1(maxL(p)) − w1(minR(p)) ]
i.e., the possible point with the largest difference between its nearest maximum on the left, maxL(p), and its nearest minimum on the right, minR(p), is the final first-derivative search result;
pos_2nd = argmin_i w2(i)
i.e., the point at the minimum of the second derivative is the second-derivative search result;
where θ is the set of possible values satisfying the first characteristic of the first derivative, pos_1stpro the possible values satisfying the first and second characteristics of the first derivative, pos_1st the first-order search result, and pos_2nd the second-order search result.
Further, the process of combined processing and target pixel judgment in step 3 comprises the following:
step 3.1: keeping and judging the value of the target pixel point;
in FIG. 4a), P (i, j) is the jth pixel (pixel to be determined) in the ith row, i.e., the first-order search result is P1st(i, j), the second order search result corresponds to P2nd(i, j), the process judges the difference value of the two search results and the relation between the difference value and the respective cross neighborhood, and the value which is considered as the target pixel point is reserved and judged.
Step 3.1.1: calculating the differences between the central pixel and its cross-neighborhood pixels for the first-order and second-order searches respectively, and sorting each set of differences in ascending order to obtain the sequences D_1st and D_2nd:
D_1st = sort(Dvalue_1st) = d1, d2, d3, d4
D_2nd = sort(Dvalue_2nd) = d5, d6, d7, d8
Step 3.1.2: respectively judging whether the number of valued (non-NaN) points in D_1st and D_2nd is at least 2:
(isnan(D_1st || D_2nd) ≤ 2)
If D_1st has at least two valued points, S1 is the average of the first two values of the sequence D_1st; otherwise S1 is the first value of D_1st. S2 is obtained from D_2nd in the same way;
Step 3.1.3: judging the relation of |P_1st(i, j) − P_2nd(i, j)| to threshold A, and simultaneously the relations of S1 and S2 to threshold A.
When |P_1st(i, j) − P_2nd(i, j)| is less than threshold A, or S1 and S2 are both less than threshold A, the obtained distance is taken as a target value and retained; the judgment expression is:
[|P_1st(i, j) − P_2nd(i, j)| < th || (S1 < th && S2 < th)]
When S1 is less than threshold A while S2 is greater than or equal to threshold A, the first-order search result P_1st(i, j) is considered to have found the target and the second-order search result P_2nd(i, j) an erroneous distance value; the first-order search result of the pixel is assigned to the second-order search result, and the judgment expression is:
(S1 < th && S2 ≥ th)
When S2 is less than threshold A while S1 is greater than or equal to threshold A, the second-order search result P_2nd(i, j) is considered to have found the target and the first-order search result P_1st(i, j) an erroneous distance value; the second-order search result of the pixel is assigned to the first-order search result, and the judgment expression is:
(S1 ≥ th && S2 < th)
When none of the above relations of S1 and S2 to threshold A holds, i.e. both exceed the threshold, both search results are considered non-target values and are discarded and set to NaN (a non-number);
step 3.1.4: the routine is ended.
Step 3.2: removing noise pixel points and supplementing target pixel points;
in FIG. 4b), P (i, j) is the jth pixel (pixel to be determined) in the ith row, i.e., the first-order search result is P1st(i, j), the second order search result corresponds to P2nd(i, j), the process judges the relation of the respective cross neighborhoods, removes noise pixel points and supplements the targetsMarking pixel points;
Step 3.2.1: calculating the number N1 of NaN (non-number) points in the current pixel array; the calculation expression is:
N1 = Σ isnan(P(i, j))
Step 3.2.2: using D_1st and D_2nd obtained in step 3.1.1 and S1, S2 obtained in step 3.1.2, setting to NaN every pixel whose cross-neighborhood points are all NaN or whose S value exceeds the threshold;
Step 3.2.3: judging whether the cross neighborhood contains three or more valued pixels; the judgment expression is:
(isnan(D_1st || D_2nd) ≤ 1)
When it does, calculating the variance ss1 of D_1st and the variance ss2 of D_2nd; the calculation formulas are respectively:
ss1 = var(D_1st(1:3))
ss2 = var(D_2nd(1:3))
The variance ss1 (and likewise ss2) is the variance of the three values that remain after the value with the largest deviation is removed from the four cross-neighborhood values;
Step 3.2.4: judging the relation between the variance ss and threshold B; the judgment expression is:
(ss < th)
When the variance is less than the threshold, the central pixel is set to the mean of the remaining valued pixels after the largest-deviation value is removed; when the variance is greater than or equal to the threshold, the central pixel is set to NaN;
step 3.2.5: repeating steps 3.1.1 to 3.1.4;
Step 3.2.6: calculating the number N2 of NaN points in the pixel array at this stage; the calculation formula is:
N2 = Σ isnan(P1(i, j))
Step 3.2.7: judging whether N1 equals N2; the judgment formula is:
(N1 = N2)
If N1 is not equal to N2, the value of N2 is assigned to N1 and the procedure returns to step 3.2.2 to continue the loop; if N1 equals N2, the routine ends.
Echo data from a real scene are used for comparative analysis of the methods' results; the experimental parameters are shown in Table 1. The experimental scene and the reference range image, obtained by combining the multi-frame reconstruction result with photographic information and manually repairing some pixels, are shown in FIG. 5.
TABLE 1 Experimental parameters
Using 500 frames of echo data, the range images reconstructed by signal extraction with the three methods are shown in FIG. 6.
As shown in FIG. 6, the peak method and maximum likelihood estimation largely fail to identify the target contour and leave many noise points, while the concave-convex search combination method yields a prominent target, more complete target pixels, and fewer noise points. To compare the three methods more clearly, objective evaluation indices are used. The first is the target restoration degree:
m = #{ i : |d_i − d_s,i| ≤ d_b }
K1 = m / n
wherein d is the reconstructed distance value, d_s the standard distance value, d_b the allowable distance error, n the total number of target pixels, and m the number of pixels within the allowable distance error. The K1 value represents the proportion of the target pixels correctly reconstructed by the method. The second index is the average ranging error:
K2 = (1/m) Σ_i |d_i − d_s,i|
wherein d_i is the reconstructed distance value of a pixel judged to be target, and d_s,i the standard distance value of that pixel. The K2 value represents the average ranging error over all reconstructed target pixel distance values. Finally, the pixel signal-to-noise ratio index is used:
K3 = p / m
wherein p is the number of background pixels whose noise was not filtered by the threshold plus the pixels within the target judged as non-target. The K3 value is the ratio of unfiltered noise pixels to reconstructed target pixels over the whole array. The closer the target restoration degree K1 is to 1 the better; the smaller the average ranging error K2 the better; and a higher image signal-to-noise ratio is better, i.e. the smaller K3 the better. For the objective evaluation, d_b is set to 10 time bins; the averaged results over several groups of 500 frames are shown in Table 2.
TABLE 2 average objective evaluation results for multiple groups of 500 frames
The objective evaluation at a fixed 500 frames shows that the concave-convex search method outperforms the peak method and maximum likelihood estimation in signal-extraction target reconstruction: at 500 frames in this scene, the target restoration degree improves by 15.10% and the image signal-to-noise ratio by a factor of 21 over the peak method, and by 3.83% and a factor of 15 over maximum likelihood estimation. For a more comprehensive comparison, the objective evaluation indices are analyzed at different frame counts and the reconstruction results of the methods compared; the result is shown in FIG. 7.
FIG. 7 shows that, as the frame count increases, the target restoration degree first rises then stabilizes, and the average ranging error first falls then stabilizes. The reciprocal image signal-to-noise ratio of the peak method and maximum likelihood estimation falls then stabilizes, while that of the concave-convex search combination method rises slightly; the reason is that during combined processing, as the frame count grows, the distance values of noise points become close to one another and some are judged as targets and retained. Comparing the three methods longitudinally, the concave-convex search combination method gives the best reconstruction result and improves the signal-extraction performance of the Gm-APD.
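The three objective indices can be sketched directly from their prose definitions. This is a hedged sketch: the exact absolute-versus-relative form of K2 in the patent, and the counting convention for p, are assumptions made for illustration:

```python
import numpy as np

def evaluate(d, d_s, d_b, noise_count):
    """Sketch of the three objective indices per the text:
    d           reconstructed range image (NaN = no value)
    d_s         standard (reference) range image over the target
    d_b         allowable ranging error
    noise_count unfiltered noise pixels kept outside/inside the target (p)."""
    n = d_s.size                                   # total target pixels
    err = np.abs(d - d_s).ravel()
    hit = ~np.isnan(err) & (err <= d_b)
    m = int(hit.sum())                             # pixels within tolerance
    K1 = m / n                                     # target restoration degree
    K2 = float(err[hit].mean()) if m else np.inf   # average ranging error
    K3 = noise_count / m if m else np.inf          # reciprocal image SNR
    return K1, K2, K3
```

With these conventions, K1 near 1, small K2, and small K3 reproduce the "better" directions stated above.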

Claims (5)

1. A Gm-APD laser radar low signal-to-noise ratio echo data signal extraction method based on concave-convex search is characterized by comprising the following steps:
Step 1: performing preprocessing by convolving the trigger histogram with a Gaussian function, so as to remove abnormal peaks and obtain a smooth distribution histogram;
Step 2: extracting the target's features on the smooth distribution histogram by solving its first and second derivatives and determining the distribution of maximum points under the current variance;
Step 3: combining the distance value results of the first-order and second-order searches, and judging and retaining the correct target distance value by referring to the distance values of the cross-neighborhood pixels;
the first derivative expression of step 2 is:
w1(i) = (u * v1′)(i) = Σ_j u(j) · v1′(i − j)
the second derivative expression is:
w2(i) = (u * v2″)(i) = Σ_j u(j) · v2″(i − j)
wherein v1 is the kernel density function with variance h1, v2 is the kernel density function with variance h2, and w1 and w2 are the corresponding first and second derivatives of the preprocessed data;
the first-derivative target location is characterized as follows: among two adjacent values of w1, the left value is greater than zero and the right value is less than zero; of the two, the position closer to the zero crossing between them joins the set of possible target values; finally, the possible value with the largest difference between its nearest maximum on the left and its nearest minimum on the right is extracted; the second-derivative position is characterized as the position corresponding to the minimum value of w2;
the expressions for signal extraction from the first and second derivatives are as follows:
θ = (i, i+1) | w1(i) > 0, w1(i+1) < 0
pos_1stpro = argmin_θ |w1(θ)|
θ1 = sort(maximum(w1), pos_1stpro)
θ2 = sort(minimum(w1), pos_1stpro)
pos_1st = argmax_{p ∈ pos_1stpro} [ w1(maxL(p)) − w1(minR(p)) ]
pos_2nd = argmin_i w2(i)
where θ is the set of possible values satisfying the first characteristic of the first derivative, pos_1stpro the possible values satisfying the first and second characteristics of the first derivative, maxL(p) and minR(p) the nearest maximum of w1 to the left of p and the nearest minimum of w1 to the right of p, pos_1st the first-order search result, and pos_2nd the second-order search result.
2. The method of claim 1, wherein the step 1 preprocessing expression is:
v(x) = (1/(√(2π) h)) exp(−x²/(2h²))
w(i) = Σ_j u(j) v(i−j)
wherein v is the kernel density function, i.e., the Gaussian smoothing function, h is the variance value, u is the trigger histogram, and w is the smoothed histogram.
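As an illustration of claim 2, the smoothing w = u ∗ v can be sketched as a discrete convolution with a normalized Gaussian kernel (a minimal sketch; the kernel truncation radius and the normalization are assumptions not stated in the claim):

```python
import math

def gaussian_kernel(h, radius):
    """Discrete Gaussian kernel v with bandwidth h (the 'variance value'
    of claim 2), truncated at +/- radius bins."""
    k = [math.exp(-x * x / (2.0 * h * h)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [x / s for x in k]  # normalize so total trigger counts are preserved

def smooth_histogram(u, h, radius=None):
    """w = u * v : Gaussian-smoothed trigger histogram (claim 2 sketch)."""
    if radius is None:
        radius = max(1, int(3 * h))  # assumed 3-sigma truncation
    v = gaussian_kernel(h, radius)
    n = len(u)
    w = [0.0] * n
    for i in range(n):
        acc = 0.0
        for j, vj in enumerate(v):
            idx = i + j - radius
            if 0 <= idx < n:
                acc += u[idx] * vj
        w[i] = acc
    return w
```

Convolving with the Gaussian suppresses isolated noise triggers while the echo peak, spread over several adjacent bins, survives, which is what allows the derivative searches of claim 1 to work at a low signal-to-noise ratio.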
3. The method according to claim 1, wherein the step 3 comprises the following specific steps:
step 3.1: keeping and judging the value of the target pixel point;
step 3.2: and removing the noise pixel points and supplementing the target pixel points.
4. The method according to claim 3, characterized in that said step 3.1 is in particular:
step 3.1.1: calculating the differences between the central pixel point and its cross-neighborhood points and sorting the differences from small to large, obtaining D1st and D2nd through the first-order search and the second-order search respectively; the corresponding cross-neighborhood difference sorting formulas for the two sequences are:
D1st=sort(Dvalue_1st)=d1,d2,d3,d4
D2nd=sort(Dvalue_2nd)=d5,d6,d7,d8
step 3.1.2: respectively judging whether the number of valued points in the differences D1st and D2nd is greater than or equal to 2; the judgment expression is:
(isnan(D1st||D2nd)≤2)
if D1st has two or more valued points, S1 is the average of the first two values of the first-order difference sequence D1st; if not, S1 is the first value of D1st; S2 is obtained in the same way from D2nd;
step 3.1.3: judgment of | P1st(i,j)-P2nd(i, j) | and threshold A relation judge S at the same time1And S2In relation to a threshold A;
when | P1st(i,j)-P2nd(i, j) | is less than threshold A or S1And S2Meanwhile, when the distance is smaller than the threshold A, the obtained distance is considered as a target value, the distance value is reserved, and the judgment expression is as follows:
[P1st(i,j)-P2nd(i,j)<th||(S1<th&&S2<th)]
when S1 is smaller than threshold A while S2 is greater than or equal to threshold A, the first-order search result P1st(i,j) is considered to have obtained the target and the second-order search result P2nd(i,j) to have obtained a wrong distance value, and the first-order search result of the pixel point is assigned to the second-order search result; the judgment expression is:
(S1<th&&S2≥th)
when S2 is smaller than threshold A while S1 is greater than or equal to threshold A, the second-order search result P2nd(i,j) is considered to have obtained the target and the first-order search result P1st(i,j) to have obtained a wrong distance value, and the second-order search result of the pixel point is assigned to the first-order search result; the judgment expression is:
(S1≥th&&S2<th)
when S1 and S2 satisfy none of the above relations with threshold A, i.e. both are greater than or equal to the threshold, both search results are considered non-target values, discarded, and set to not-a-number;
step 3.1.4: the routine is ended.
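The four-way judgment of step 3.1 can be sketched as follows (hypothetical helper names; S is computed per claim 4 as the mean of the two smallest cross-neighborhood differences, falling back to the single smallest value when fewer than two neighbors carry a value):

```python
import math

NAN = float("nan")

def neighbor_stat(center, neighbors):
    """S statistic sketch (steps 3.1.1-3.1.2): sort the absolute differences
    between the center and its valued cross neighbors, then average the
    first two (or take the first when only one neighbor is valued)."""
    diffs = sorted(abs(center - v) for v in neighbors if not math.isnan(v))
    if not diffs:
        return NAN
    head = diffs[:2]
    return sum(head) / len(head)

def judge_pixel(p1, p2, s1, s2, th):
    """Step 3.1.3 sketch for one pixel: keep, cross-assign, or discard the
    first-order (p1) and second-order (p2) range values against threshold A."""
    if abs(p1 - p2) < th or (s1 < th and s2 < th):
        return p1, p2          # both trusted: keep both distance values
    if s1 < th <= s2:
        return p1, p1          # first-order trusted: overwrite second-order
    if s2 < th <= s1:
        return p2, p2          # second-order trusted: overwrite first-order
    return NAN, NAN            # neither trusted: discard as not-a-number
```

`judge_pixel` returns the possibly corrected pair (P1st, P2nd) for one pixel; a full implementation would apply it to every pixel of both range images.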
5. The method according to claim 3, characterized in that said step 3.2 is in particular:
step 3.2.1: calculating the number N1 of not-a-number points in the current pixel array; the calculation expression is:
N1=isnan(P(i,j))
step 3.2.2: d obtained according to step 3.1.11st,D2stS from step 3.1.21,S2Simultaneously, setting the pixel points of which the cross neighborhood points are all nonnumbers or S is greater than a threshold value as nonnumbers;
step 3.2.3: judging whether three or more valued pixel points exist in the cross neighborhood, wherein the judgment expression is as follows:
(isnan(D1st||D2nd)≤1)
when they exist, calculating the variance ss1 of D1st and the variance ss2 of D2nd; the calculation formulas are respectively:
ss1=var(D1st(1:3))
ss2=var(D2nd(1:3))
the variance ss1 is the variance of the three values remaining after the maximum-deviation value is removed from the four first-order cross-neighborhood values; the variance ss2 is likewise the variance of the three values remaining after the maximum-deviation value is removed from the four second-order cross-neighborhood values;
step 3.2.4: judging the relation between the variance ss and the threshold B, wherein the judgment expression is as follows:
(ss<th)
when the variance is smaller than threshold B, the central pixel point is set to the mean of the valued pixel points with the maximum-deviation value removed; when the variance is greater than or equal to threshold B, it is set to not-a-number;
step 3.2.5: repeating steps 3.1.1 to 3.1.4;
step 3.2.6: calculating the number N2 of not-a-number points in the pixel array at this time; the calculation formula is:
N2=isnan(P1(i,j))
step 3.2.7: judging whether N1 equals N2; the judgment formula is:
(N1=N2)
when N1 is not equal to N2, the value of N2 is assigned to N1 and the procedure returns to step 3.2.2 to loop; when N1 equals N2, the routine ends.
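Steps 3.2.3 and 3.2.4 — filling a missing center from its cross neighborhood when that neighborhood is consistent — can be sketched as below (an interpretation of claim 5: the maximum-deviation neighbor is dropped and the center is filled only if the variance of the remaining three values is below threshold B):

```python
import math

def fill_center(neighbors, th_b):
    """Sketch of steps 3.2.3-3.2.4 for one pixel: given the four cross
    neighbors, require at least three valued points, remove the
    maximum-deviation value, and fill the center with the mean of the
    remaining three only when their variance is below threshold B."""
    vals = [v for v in neighbors if not math.isnan(v)]
    if len(vals) < 3:
        return float("nan")            # fewer than three valued neighbors
    m = sum(vals) / len(vals)
    vals.sort(key=lambda v: abs(v - m))  # smallest deviation first
    kept = vals[:3]                      # drop the maximum-deviation value
    mu = sum(kept) / 3.0
    var = sum((v - mu) ** 2 for v in kept) / 3.0
    return mu if var < th_b else float("nan")
```

The enclosing loop of steps 3.2.5 to 3.2.7 would repeat this fill together with the step 3.1 judgment until the count of not-a-number pixels no longer changes.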
CN201911071241.4A 2019-11-05 2019-11-05 Gm-APD laser radar low signal-to-noise ratio echo data signal extraction method based on concave-convex search Active CN111060887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911071241.4A CN111060887B (en) 2019-11-05 2019-11-05 Gm-APD laser radar low signal-to-noise ratio echo data signal extraction method based on concave-convex search

Publications (2)

Publication Number Publication Date
CN111060887A CN111060887A (en) 2020-04-24
CN111060887B true CN111060887B (en) 2022-02-22

Family

ID=70298409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911071241.4A Active CN111060887B (en) 2019-11-05 2019-11-05 Gm-APD laser radar low signal-to-noise ratio echo data signal extraction method based on concave-convex search

Country Status (1)

Country Link
CN (1) CN111060887B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113406665B (en) * 2021-06-15 2022-11-08 哈尔滨工业大学 Laser radar three-dimensional range image high-resolution reconstruction method and device based on multi-echo extraction

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013042365A (en) * 2011-08-16 2013-02-28 Nec Corp Avalanche photodiode adjustment system and avalanche photodiode adjustment method
CN107290755A (en) * 2017-06-23 2017-10-24 哈尔滨工业大学 The target range and the acquisition methods of target strength realized based on 4D image-forming photon counting laser radars system
CN107705314A (en) * 2017-11-01 2018-02-16 齐鲁工业大学 A kind of more subject image dividing methods based on intensity profile
CN108445471A (en) * 2018-03-26 2018-08-24 武汉大学 A kind of range accuracy appraisal procedure under the conditions of single-photon laser radar multi-detector

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Neighborhood KDE reconstruction of GM-APD lidar range images; Liu Di et al.; Infrared and Laser Engineering; 2019-06-30; abstract, p. 366 left column last paragraph, p. 367 left column last two paragraphs and right column paragraphs 1-4, p. 368 left column last paragraph, p. 369 left column paragraphs 1-3, and Figs. 2-4 *
Study of the detection range of photon-counting lidar based on Geiger-mode APD; Luo Hanjun et al.; Opto-Electronic Engineering; 2013-12-15 (No. 12); pp. 80-88 *

Also Published As

Publication number Publication date
CN111060887A (en) 2020-04-24

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant