CN113553966B - Method for extracting effective starry sky area of single star map - Google Patents

Method for extracting effective starry sky area of single star map

Info

Publication number
CN113553966B
CN113553966B (Application CN202110855532.3A)
Authority
CN
China
Prior art keywords
star
saliency
map
star map
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110855532.3A
Other languages
Chinese (zh)
Other versions
CN113553966A (en)
Inventor
师晨光
张锐
余勇
孙兴哲
林晓冬
谢祥华
黄志伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Engineering Center for Microsatellites
Innovation Academy for Microsatellites of CAS
Original Assignee
Shanghai Engineering Center for Microsatellites
Innovation Academy for Microsatellites of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Engineering Center for Microsatellites, Innovation Academy for Microsatellites of CAS filed Critical Shanghai Engineering Center for Microsatellites
Priority to CN202110855532.3A priority Critical patent/CN113553966B/en
Priority to CN202410312473.9A priority patent/CN118212635A/en
Publication of CN113553966A publication Critical patent/CN113553966A/en
Application granted granted Critical
Publication of CN113553966B publication Critical patent/CN113553966B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for extracting an effective starry sky area from a single star map. The star map is first preprocessed to obtain a saliency value for each pixel, generating a star map saliency map; the saliency map is then initially segmented using the saliency values as features to obtain pre-segmented blocks, and features are extracted from these blocks; finally, the features are clustered and merged to identify and mark the interference areas.

Description

Method for extracting effective starry sky area of single star map
Technical Field
The invention relates to the technical field of aerospace, in particular to a method for extracting an effective starry sky region of a single star map.
Background
With the surge in satellite constellations, the sharp increase in the number of artificial satellites has a significant impact on space exploration missions and ground observation experiments. The star sensor, with its advantages of high precision, high autonomy and low power consumption, plays an extremely important role in aerospace missions. As space exploration missions and ground observation experiments become more demanding, higher requirements are placed on star sensors, especially on star sensors with high precision and high dynamic performance.
Star images are the only data source for star sensors, yet they are often corrupted by various disturbances that directly affect subsequent star point extraction. Interference immunity is therefore one of the keys to star sensor performance.
In addition to background noise, large-area disturbances in the field of view of the star sensor can also affect its attitude output. High-energy protons released by the inner Van Allen radiation belt, cosmic rays, solar proton events and the like produce scratch-like interference on the image plane because of their relative velocity; satellite components and secondary diffuse reflections entering the field of view of the star sensor produce regularly shaped interference on the image plane; and in ground satellite observation and calibration experiments, irregularly shaped interference such as thin clouds inevitably appears in the field of view. Moreover, low-cost, high-performance star sensors may in the future make star observation from cities possible, where the biggest problem is the many irregular, large-area interferents in the field of view. Removing such interference from the field of view is therefore a key step in ensuring proper operation of the star sensor.
In conventional astronomical image processing, a great deal of research has been devoted to the interference problem in astronomical images. Interference can be removed by suppression algorithms based on stacking multiple images; methods based on the point-spread characteristics remove cosmic rays and satellite trails by testing whether their distribution conforms to the point spread function; and multi-feature matching methods remove interference by extracting multiple image features and performing geometric matching. If these methods were used on a star sensor, they would consume a large amount of memory and require long registration times. To remove scratch-like interference from a single astronomical image, Hough-transform-based and outlier-detection-based methods are often used, but they work well only for interference with a linear distribution. For other large-area disturbances, Laplacian edge detection has been used to identify scratch-like interference of arbitrary shape and size from the sharpness of its edges. However, there is still no satisfactory algorithm or method for large-area interferents.
Disclosure of Invention
In view of some or all of the problems in the prior art, and in order to remove large-area significant interference from a star map and thereby extract effective stars, the invention provides a method for extracting the effective starry sky region of a single star map, which comprises the following steps:
preprocessing a star map to obtain the saliency value of each pixel in the star map and generating a star map saliency map;
initially segmenting the star map saliency map using the saliency values as features;
extracting features from the pre-segmented blocks; and
clustering and merging the features to identify and mark the interference areas.
Further, the preprocessing includes: computing the saliency value of each pixel from the star map histogram using a restricted LC algorithm.
Further, the preprocessing further includes: the calculation of the saliency value is subject to the following constraint:
where g_min is the pixel value corresponding to the minimum saliency value.
Further, the initial segmentation includes: clustering pixels with similar saliency values using simple linear iterative clustering (SLIC).
Further, the metric used by the simple linear iterative clustering method is calculated from the saliency distance and the spatial distance.
Further, the extraction method further comprises normalizing the saliency values before the initial segmentation.
Further, the features include the saliency mean of each superpixel and the pixel variance of each superpixel in the star map.
Further, the clustering and merging is implemented using density-based spatial clustering of applications with noise (DBSCAN).
Further, the distance metric in the DBSCAN algorithm employs a weighted Minkowski distance.
The invention provides a method for extracting the effective starry sky area of a single star map, which identifies and marks interference areas in the star map by combining saliency detection, SLIC superpixel segmentation and DBSCAN clustering, so that the effective starry sky area can be extracted for star point extraction and star map identification. Specifically, the extraction method first preprocesses the star map with a restricted LC algorithm to increase the contrast between large-area interferents and the background; it then pre-segments the saliency map following the idea of SLIC superpixel segmentation; finally, it extracts features from the superpixels and merges superpixels with similar features using DBSCAN clustering to obtain the large-area interferent region and the effective starry sky region. Test results on real on-orbit star maps show that, when handling large-area interference in the field of view, the extraction method can effectively separate large-area interferents from the starry sky area and can successfully extract star points under strong interference, thereby improving the availability of the star sensor. The extraction method can also be extended to image recognition for star cameras and to ground star observation experiments.
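For orientation, the following is a compact, runnable sketch of the pipeline just described, assembled from off-the-shelf components: a histogram-based LC-style saliency, scikit-image's standard SLIC as a stand-in for the saliency-driven SLIC variant detailed later, and scikit-learn's DBSCAN on per-superpixel features. The parameter values (number of superpixels, eps, min_samples) and the helper names are illustrative assumptions, not values taken from the patent.

```python
# A compact sketch of the full pipeline, assembled from off-the-shelf parts.
# Requires numpy, scikit-image >= 0.19 and scikit-learn.
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import DBSCAN

def lc_saliency(gray):
    """Histogram-based LC saliency of a uint8 image: the saliency of a gray
    level is its summed contrast against all other gray levels (the restricted
    variant with the dark-level constraint is sketched later in the text)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    freq = hist / gray.size
    levels = np.arange(256, dtype=float)
    sal = np.abs(levels[:, None] - levels[None, :]) @ freq  # Sal(g) = sum_s f_s * |g - s|
    return sal[gray]                                        # per-pixel lookup

def extract_valid_sky(gray, n_superpixels=400, eps=0.4, min_samples=4):
    """Saliency map -> SLIC pre-segmentation -> per-superpixel features -> DBSCAN."""
    sal = lc_saliency(gray)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-9)   # normalize to [0, 1]
    labels = slic(sal, n_segments=n_superpixels, compactness=0.1,
                  channel_axis=None, start_label=0)            # grayscale SLIC
    labs = np.unique(labels)
    feats = np.array([[sal[labels == l].mean(),                # saliency mean of superpixel
                       np.log(gray[labels == l].var() + 1e-6)] # log-compressed variance
                      for l in labs])
    clusters = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
    # clustered superpixels are kept as starry sky; DBSCAN noise (-1) is interference
    return np.isin(labels, labs[clusters >= 0])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    star_map = rng.normal(30, 5, (256, 256)).clip(0, 255).astype(np.uint8)
    print("valid-sky fraction:", extract_valid_sky(star_map).mean())
```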
Drawings
To further clarify the above and other advantages and features of embodiments of the present invention, a more particular description of embodiments of the invention will be rendered by reference to the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. In the drawings, for clarity, the same or corresponding parts will be designated by the same or similar reference numerals.
FIG. 1a shows a schematic diagram of an ideal star point energy distribution generated in a simulation;
FIG. 1b shows a star point energy distribution schematic of a simulated dynamic uniform motion;
FIG. 1c shows a star point energy distribution schematic of a simulated variable angular velocity motion;
FIGS. 1d-1f show a schematic representation of the energy distribution of star points in an in-orbit real star map;
FIGS. 2a-2d show enlarged schematic views of large area interferents in various disturbed star maps;
FIG. 3 is a flow chart of a method for extracting an effective star field of a single star map according to an embodiment of the present invention;
FIGS. 4a and 4b show the star map original and its saliency map after saliency calculation using the LC algorithm in one embodiment of the present invention, respectively;
FIGS. 5a-5b respectively illustrate a star map original and a histogram distribution of the saliency map after its computation using the LC algorithm in one embodiment of the present invention;
FIGS. 6a-6b illustrate gray scale and saliency correspondence, respectively, after saliency calculation using an LC algorithm and a restricted LC algorithm in one embodiment of the present invention;
FIG. 7 is a diagram illustrating the effective star occupancy at different numbers of superpixels in a superpixel partition in accordance with one embodiment of the present invention;
FIG. 8 is a schematic diagram of boundary recall corresponding to different values of m in a superpixel partition according to an embodiment of the present invention;
FIGS. 9a-9d show a superpixel segmentation map, a detail of a region of significant interference, a detail of a region of suspected star points, and the distribution of the saliency mean and original-image variance of each superpixel, respectively;
FIGS. 10a-10e show, respectively, a star map in which Starlink satellites sweeping across the starry sky form long striped interference, the result of extracting star points using a thresholding method, the binary mask generated after clustering, the star point extraction result when the mask is applied, and the DBSCAN clustering result;
FIGS. 11a-11f show a ground-captured star map, and the result of extracting star points using a thresholding method, SLIC superpixel segmentation results, DBSCAN clustering results, binary masks generated after clustering, and star point extraction results when masks are present, respectively;
FIGS. 12a-12f show, respectively, a star map with linear scratch interference, and a result of extracting star points using a thresholding method, a SLIC superpixel segmentation result, a DBSCAN clustering result, a binary mask generated after clustering, and a star point extraction result when the mask is present;
FIGS. 13a-13f show, respectively, a star map with regular halo interference, and the results of extracting star points using thresholding, SLIC superpixel segmentation, DBSCAN clustering, binary masks generated after clustering, and star point extraction results when masks are present; and
fig. 14a-14f show a star map, which is subject to interference by moon and its reflected light, and a result of extracting star points using a thresholding method, a SLIC super-pixel segmentation result, a DBSCAN clustering result, a binary mask generated after clustering, and a star point extraction result when the mask is present, respectively.
Detailed Description
In the following description, the present invention is described with reference to various embodiments. One skilled in the relevant art will recognize, however, that the embodiments may be practiced without one or more of the specific details, or with other alternative and/or additional methods, materials, or components. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention. Similarly, for purposes of explanation, specific numbers, materials and configurations are set forth in order to provide a thorough understanding of the embodiments of the invention. However, the invention is not limited to these specific details. Furthermore, it should be understood that the embodiments shown in the drawings are illustrative representations and are not necessarily drawn to scale.
Reference throughout this specification to "one embodiment" or "the embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
It should be noted that the embodiments of the present invention describe the process steps in a specific order, however, this is merely to illustrate the specific embodiment and not to limit the order of the steps. In contrast, in various embodiments of the present invention, the order of the steps may be adjusted according to process adjustments.
As more satellites are launched into space, interference factors in space gradually increase and inevitably have a strong influence on star sensors, so future star sensors will face even more challenges.
Star images are the only source of data for star sensors, and they are often subject to various disturbances. A typical star map consists of tens of bright star points and a dark background; its signal-to-noise ratio (SNR) is typically between 20 dB and 50 dB, and its star points are described by a point spread function (PSF). In the static case, the energy distribution of a star point on the imaging plane is approximately a Gaussian diffusion function, and the distribution of a static star point is expressed as:

I(x, y) = (E_sum / (2π·σ_PSF²)) · exp(−((x − x_0)² + (y − y_0)²) / (2σ_PSF²))

where (x, y) and (x_0, y_0) denote, respectively, a pixel position on the image plane and the true centroid coordinates of the star point, σ_PSF is the Gaussian radius, representing how concentrated the energy is, and E_sum is the energy gray-scale coefficient, which depends on the star magnitude, quantum efficiency, integration time, lens aperture and optical transmittance.
In the dynamic case, star imaging on the image plane can be represented by a centroid motion model of the star point, whose star point energy distribution is as follows:
where (x_0(t), y_0(t)) denotes the centroid coordinates of the star point at time t, with t = t_0 + Δt (Δt < T), and T is the exposure time of the star sensor, in milliseconds;
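As a concrete illustration of the static model above, the short sketch below renders a single star point as a 2-D Gaussian PSF; the grid size and the σ_PSF and E_sum values are arbitrary examples and not parameters taken from the patent.

```python
import numpy as np

def static_star(shape, x0, y0, sigma_psf=0.8, e_sum=2000.0):
    """Static star-point energy distribution: a 2-D Gaussian PSF centred at the
    true centroid (x0, y0), with Gaussian radius sigma_psf and total energy e_sum."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    g = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma_psf ** 2))
    return e_sum * g / (2.0 * np.pi * sigma_psf ** 2)

patch = static_star((9, 9), x0=4.3, y0=4.7)
print(patch.round(1))  # energy concentrated in roughly a 3x3 to 5x5 window
```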
FIGS. 1a-1f show a schematic diagram of static and dynamic star point energy distribution in simulated and real states, respectively, wherein:
FIG. 1a shows a schematic diagram of an ideal star point energy distribution generated in a simulation;
FIG. 1b shows a star point energy distribution schematic of a simulated dynamic uniform motion;
FIG. 1c shows a star point energy distribution schematic of a simulated variable angular velocity motion;
FIGS. 1d-1f show a schematic of the star point energy distribution in an in-orbit real star map, wherein the star point energy distribution shown in FIG. 1f has slight smearing of dynamic imaging.
Common large area disturbances in star maps include linear scratch-like disturbances, regular and irregular shapes, etc. Fig. 2a-2d show enlarged schematic views of large area interferents in various disturbed star maps:
Fig. 2a shows linear scratch-like interference, which occurs when high-energy particles, in particular protons, streak across the image sensor, or when a satellite constellation passes through the star sensor's field of view and forms streaks on the image plane. Such interference usually has no fixed location and can occur anywhere in the star map. Likewise, the length of the streak is not fixed, because the relative velocities differ, and the length formed within a limited exposure time clearly differs from a typical star energy distribution. With the rapid increase in the number of satellites and in orbital altitudes in the future, space interference will become increasingly complex and this situation increasingly common;
FIG. 2b shows interference due to a mask design defect, and FIG. 2c shows interference caused by satellite components or their reflected light entering the field of view of the star sensor; both are regularly shaped disturbances that raise the gray values in a local area of the image. The location of such interference is relatively fixed and it usually appears around the edges of the image; and
fig. 2d shows interference caused by reflected light from a celestial body entering the field of view of the star sensor. It is similar to the interference shown in FIGS. 2b and 2c, except that it often appears at the center of the image and is irregularly shaped, and it has the greatest effect on the star sensor.
As shown in the figures, large-area interference differs markedly from a typical star point in shape and gray value: it is a continuous block of pixels whose gray values are close to, or higher than, those of star points. For an ordinary star map, the gray contrast between star points and the dark background is obvious, so a global thresholding method such as the Otsu algorithm can effectively separate star points from the background. When large-area interferents are present, however, the contrast between the significant interferents, the star points and the background is small; if the Otsu algorithm were still used, different thresholds would have to be set for different star sensors, which is very restrictive and performs poorly. Extracting star points directly from a disturbed star map produces a large number of false star points near the interferent and causes the star sensor to fail.
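For comparison, global thresholding of an ordinary star map is essentially the one-liner sketched below (using scikit-image's Otsu implementation); it is this step that breaks down when a low-contrast, large-area interferent occupies part of the frame. The synthetic image is purely illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu

def extract_star_pixels(gray):
    """Global Otsu segmentation: adequate when bright star points sit on a dark,
    uniform background, but it fails near low-contrast, large-area interferents."""
    return gray > threshold_otsu(gray)

rng = np.random.default_rng(1)
star_map = rng.normal(30, 5, (64, 64)).clip(0, 255)
star_map[30:33, 40:43] += 120               # one synthetic star point
print(extract_star_pixels(star_map).sum(), "pixels above the Otsu threshold")
```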
Based on the above, to solve the problem that the star sensor cannot work normally under various kinds of interference, the contrast between the significant interferents, the star points and the background in the star map must first be improved, and image segmentation performed afterwards. The invention provides a method for extracting the effective starry sky area of a single star map, and the technical scheme of the invention is further described below with reference to the drawings of the embodiments.
Fig. 3 is a flow chart of a method for extracting an effective star field of a single star map according to an embodiment of the present invention. As shown in fig. 3, a method for extracting an effective star field of a single star map includes:
first, in step 301, a saliency value is calculated. The star map is preprocessed to obtain the saliency value of each pixel in the star map, and the star map saliency map is generated, so that the contrast between gray values is increased, and the increase of the pixel contrast is beneficial to better segmentation.
In an embodiment of the invention, the saliency calculation uses the LC algorithm, according to which the saliency value of a star map gray value is defined as its contrast with the other pixel values in the image:

Sal(I_k) = Σ_{s=0}^{N} f_s · ‖g_k − g_s‖

where N = 255 is the total number of gray levels in the image, g_k is the gray value of pixel I_k, f_s is the probability with which gray level g_s occurs in image I, and ‖·‖ denotes the gray-level distance metric; from this, the saliency value corresponding to each gray level in [0, 255] can be calculated. FIGS. 4a and 4b respectively show, in one embodiment of the invention, the original star map and its saliency map after saliency calculation with the LC algorithm, and FIGS. 5a-5b respectively show the histogram distributions of the original star map and of the saliency map computed with the LC algorithm. As shown in the figures, pixels with large gray values obtain larger saliency values, but pixels with small gray values also obtain larger saliency values; the saliency calculation therefore gives the dark pixels of the star map more weight in the saliency map, which affects the segmentation result. To avoid the effect of dark pixels on the segmentation result, in one embodiment of the invention a constraint is added to the calculation of the saliency value, which may be referred to as a restricted LC algorithm, as follows:
where g_min is the pixel value corresponding to the minimum saliency value.
FIGS. 6a-6b show the gray-level/saliency correspondence after saliency calculation with the LC algorithm and with the restricted LC algorithm, respectively, in one embodiment of the invention. Since the background pixels occupy most of the star map and their gray values are distributed between [0, 130], they form the most concentrated part of the histogram. After the restricted saliency calculation, pixels with larger gray values, such as star points and significant interference, obtain large saliency values, while pixels with smaller gray values, such as the star map background, obtain small saliency values; the constraint therefore increases the contrast between the significant interference, the star points and the background in the star map;
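A minimal sketch of the restricted LC computation described above. Since the exact form of the constraint is not reproduced in this text, the sketch assumes one plausible reading: gray levels below the level at which the LC saliency is minimal (denoted g_min above) are clamped to that minimum saliency, so dark background pixels gain no extra weight in the saliency map.

```python
import numpy as np

def restricted_lc_saliency(gray):
    """LC saliency per gray level, with dark levels clamped (assumed constraint)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    freq = hist / gray.size
    levels = np.arange(256, dtype=float)
    sal = np.abs(levels[:, None] - levels[None, :]) @ freq  # Sal(g) = sum_s f_s * |g - s|
    g_min = int(np.argmin(sal))      # gray level with the minimum saliency
    sal[:g_min] = sal[g_min]         # assumed constraint: darker levels gain no weight
    return sal[gray]                 # per-pixel saliency map

rng = np.random.default_rng(2)
star_map = rng.normal(60, 15, (128, 128)).clip(0, 255).astype(np.uint8)
star_map[60:63, 60:63] = 240                         # a bright star point
sal_map = restricted_lc_saliency(star_map)
print("background saliency:", sal_map[0, 0].round(1),
      "star saliency:", sal_map[61, 61].round(1))
```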
next, at step 302, the image is initially segmented. And taking the saliency value as a characteristic, and carrying out initial segmentation on the star map saliency map to obtain a pre-segmentation block. The distinction between star points and significant interference is: the star points occupy fewer pixels, typically between 3x3 and 7x7, and are approximately circular in shape, while the significant interference occupies more pixels and is a continuous block of pixel areas without a fixed shape. Thus, in one embodiment of the invention, the initial segmentation includes: the pixels with similar saliency values are clustered by adopting a simple linear iterative clustering method (Simple Linear Iterative Clustering, SLIC) to obtain pre-segmented blocks taking super pixels as clustering centers, the generated segmented blocks have good boundary fit characteristics, and each super pixel contains star points or significant interference:
the SLIC may also be referred to as a super-pixel segmentation algorithm, which is specifically described as follows:
initialize K seed points evenly across the image;
to avoid an initialized seed point falling on a boundary with a large gradient, move each seed point to the point of minimum gradient within its 3x3 neighborhood; and
for each pixel, compute the distance D to the seed points within the surrounding 2Sx2S search region and take the seed point with the smallest distance as the pixel's cluster center, where the distance D is calculated as follows:
where d_c and d_s are the color distance and the spatial distance, respectively, calculated as follows:
where i is the cluster center label, j is the one-dimensional index of a pixel coordinate within the 2Sx2S region corresponding to cluster center i, and (x, y) are the pixel coordinates;
iterate and optimize until the residual error is smaller than a set threshold, at which point the segmentation is complete.
In an embodiment of the invention, SLIC classifies pixels on the basis of saliency values, and its seed points are the geometric centers of the superpixels, so the color distance can no longer be used. The distance calculation is therefore modified: the pixel saliency value Sal(I_k) and the pixel's image coordinates (x, y) are combined into a three-dimensional feature vector g = [sal, x, y]^T, and the distance between such feature vectors is used as the seed-point distance, which fuses a saliency distance measure d_sal with the spatial distance d_s:
where:
S is the superpixel side length of the initial segmentation, and K is the number of seeds, i.e. the number of superpixels. K takes different values in different images depending on the classification task and is closely related to the image size. For a star map, the starry sky occupancy A_sky(K), i.e. the proportion of the image occupied by the starry sky, can be defined as:
where A_i denotes the i-th superpixel and C_sky is the set of superpixels clustered as starry sky. When superpixel segmentation is applied to a star map, if K is too large the generated superpixels may contain only star points, which would then be removed as interferents in subsequent processing; if K is too small the superpixel result is inaccurate and has larger errors. FIG. 7 shows the effective starry sky occupancy for different numbers of superpixels in one embodiment of the invention; as K increases, the A_sky(K) values become more and more scattered, indicating that the superpixels are over-segmented. In one embodiment of the invention, K is preferably a value between 1 and 2 times the star map side length;
and
In a further embodiment of the invention, to prevent d_sal from being too large, the pixel saliency values Sal(I_k), which lie in [0, 255], are normalized before the saliency distance measure is calculated; and
m is a positive constant that controls the influence of the saliency distance measure and the spatial distance on the distance metric: the larger its value, the more important spatial similarity becomes and the more compact the superpixels; the smaller its value, the more closely the superpixels adhere to image boundaries, but the more irregular their shapes. To make the superpixel segmentation adhere closely to boundaries while preserving a relatively complete starry sky area, the inventors performed superpixel segmentation with different values of m, analysed the segmentation results statistically, and used the boundary recall as the index for evaluating how well the superpixel boundaries fit the manually annotated boundaries. The boundary recall is the percentage of manually annotated image-boundary pixels that lie within 2 pixels of a superpixel boundary obtained by the segmentation; the larger its value, the better the boundary detection of the superpixels. FIG. 8 shows the boundary recall for different values of m in the superpixel segmentation in one embodiment of the invention; as shown in FIG. 8, the segmentation result fits the boundary best when m is 21;
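The modified seed-point distance can be sketched as follows. The exact combination of d_sal and d_s is not reproduced in this text, so the sketch assumes the usual SLIC-style normalisation D = sqrt((d_sal / m)² + (d_s / S)²), with S = sqrt(N / K) the nominal superpixel side length; it illustrates the roles of m and S rather than reproducing the patent's exact formula.

```python
import numpy as np

def slic_distance(sal_i, xy_i, sal_j, xy_j, S, m=21.0):
    """Combined saliency/spatial distance between seed i and pixel j, assuming
    the usual SLIC-style normalisation of the two terms (an illustrative form)."""
    d_sal = abs(sal_j - sal_i)                              # saliency distance
    d_s = np.hypot(xy_j[0] - xy_i[0], xy_j[1] - xy_i[1])    # spatial distance
    return np.sqrt((d_sal / m) ** 2 + (d_s / S) ** 2)

H = W = 1024
K = 1536                          # between 1x and 2x the image side length
S = np.sqrt(H * W / K)            # nominal superpixel side length
# a pixel 5 px from the seed with a modest saliency difference (values in [0, 255])
print(slic_distance(200.0, (100, 100), 170.0, (103, 104), S).round(3))
```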
next, at step 303, features are extracted. Features are extracted from the pre-partitioned blocks. Because the human visual system is most sensitive to information such as scene boundaries, variance distribution of spatial pixels, spatial gray value differences, and boundary fitting, based on this and the characteristics of common interference of star maps, in one embodiment of the invention, two features are extracted from super-pixels for super-pixel combination:
significance mean M in each superpixel i The method comprises the steps of carrying out a first treatment on the surface of the And
super-pixel variance V in original star map i The method comprises the steps of carrying out a first treatment on the surface of the And
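Given a label map from the pre-segmentation, the two features can be gathered as in the sketch below; the label map and pixel values are toy inputs for illustration.

```python
import numpy as np

def superpixel_features(labels, saliency, gray):
    """Per-superpixel feature vectors [M_i, V_i]: saliency mean within the
    superpixel and pixel variance of the superpixel in the original star map."""
    feats = []
    for lab in np.unique(labels):
        mask = labels == lab
        feats.append([saliency[mask].mean(), gray[mask].var()])
    return np.asarray(feats)

labels = np.repeat(np.arange(4), 16).reshape(8, 8)   # toy 8x8 label map, 4 superpixels
rng = np.random.default_rng(3)
gray = rng.normal(30, 5, (8, 8))
sal = rng.uniform(0, 255, (8, 8))
print(superpixel_features(labels, sal, gray))        # shape (4, 2)
```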
Finally, in step 304, the features are clustered and merged to identify and mark the interference area. Because the saliency mean and original-image variance of the starry sky area vary little, the differences between its superpixels are small, whereas the fluctuating pixel values of a large-area interferent area make its saliency mean and original-image variance, and hence the differences between its superpixels, large. Based on this, in order to merge superpixels of different classes so as to segment the starry sky region and the large-area interferent region, one embodiment of the invention uses density-based spatial clustering of applications with noise (DBSCAN) for the merging:
the DBSCAN algorithm is a density clustering algorithm that assumes that the sample class can be determined by how tightly the sample is distributed. Fig. 9a-9d show the superpixel segmentation map and its details representing the regions of significant interference, the details representing the regions of suspected star points, and the distribution diagrams of the mean and original image variance of each superpixel saliency map, respectively, as shown in the figures, for the star map superpixel, the features of the star field are relatively similar, the feature distribution is concentrated, so that it can be clustered into clusters in DBSCAN clusters, the feature difference of large-area interference regions is large, and the feature distribution is not concentrated, so that it can be treated as noise in DBSCAN. The core of the DBSCAN algorithm is to calculate the distance between samples to find all core objects, where the distance can use the minkowski distance:
wherein S is p Representing feature vectors extracted from super-pixels, S i =[M i ,V i ]Q represents the latitude of the feature vector.
Since the mean of the saliency and the variance of the original star map differ in the weight of the distance metric, the variance fluctuates greatly in the salient interference superpixel, accounting for the larger weight when calculating the distance, while the variance of the superpixel in the starry sky area does not change much but its value is much larger than the mean, in one embodiment of the invention, ln (V i ) Instead of V i The weight of the weakened variance in the distance measure, and thus the minkowski distance can be modified as:
finally, a cluster set c= { C can be obtained 0 ,C 1 ,…,C k And the set of the feature clusters meeting the star field is the effective star field with large-area interference removed in the star map.
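A sketch of the merging step using scikit-learn's DBSCAN with a callable metric, so that the variance enters the distance as ln(V_i). The eps value of 0.4 is the one quoted in the embodiment discussed below; the weights, min_samples and the synthetic features are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def weighted_minkowski(u, v, w=(1.0, 1.0), p=2):
    """Weighted Minkowski distance between two feature vectors; the weights
    are an assumption here, the text does not fix their values."""
    return np.sum(np.asarray(w) * np.abs(u - v) ** p) ** (1.0 / p)

def merge_superpixels(feats, eps=0.4, min_samples=4):
    """Cluster [M_i, V_i] feature vectors; the variance is log-compressed to
    weaken its weight, as described above. Noise points (-1) are treated as
    large-area interference; clustered superpixels form the starry sky."""
    f = np.column_stack([feats[:, 0], np.log(feats[:, 1] + 1e-6)])
    db = DBSCAN(eps=eps, min_samples=min_samples, metric=weighted_minkowski)
    return db.fit_predict(f)

rng = np.random.default_rng(4)
sky = np.column_stack([rng.normal(0.2, 0.02, 40), rng.normal(20, 2, 40)])      # compact features
noise = np.column_stack([rng.uniform(0.5, 1.0, 6), rng.uniform(200, 4000, 6)]) # scattered features
labels = merge_superpixels(np.vstack([sky, noise]))
print("starry-sky superpixels:", int((labels >= 0).sum()),
      "interference superpixels:", int((labels < 0).sum()))
```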
To verify the effectiveness of the extraction method of the embodiments of the invention, the inventors applied it to extract the starry sky area from real on-orbit star maps and from ground-captured star maps.
FIGS. 10a-10e show, respectively, the long striped interference formed by Starlink satellites sweeping across the starry sky, the result of extracting star points with a thresholding method, the binary mask generated after clustering, the star point extraction result when the mask is applied, and the DBSCAN clustering result. FIG. 10a is an astronomical observation star map from an observatory in Italy in which a passing Starlink constellation forms linear scratch interference, and FIG. 10b is the result of extracting star points with a thresholding method; many false star points are extracted at the interference. With the extraction method of the embodiment of the invention, the significant interference area is identified and covered by a mask before the star points are extracted. With the DBSCAN radius parameter eps set to 0.4, clustering yields the result shown in FIG. 10e; according to this result, the superpixels of the non-starry-sky areas can be merged into a binary mask, as shown in FIG. 10c, giving the star point extraction result shown in FIG. 10d. Compared with the original star map, the linear scratch interference is essentially eliminated and the number of false star points is greatly reduced.
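The masking step just described can be sketched as follows: superpixels labelled as noise by DBSCAN are merged into a binary interference mask, and star-point thresholding is then restricted to the unmasked area. The threshold value and the toy label map are illustrative.

```python
import numpy as np

def interference_mask(labels, cluster_ids):
    """Binary mask of large-area interference: superpixels whose DBSCAN label
    is -1 (noise) are treated as interference and masked out."""
    noise_superpixels = np.where(cluster_ids < 0)[0]
    return np.isin(labels, noise_superpixels)

def extract_stars(gray, mask, threshold=200):
    """Star-point candidates: bright pixels outside the interference mask."""
    return (gray > threshold) & ~mask

# toy example: 2 superpixels, the second flagged as noise by the clustering step
labels = np.zeros((4, 4), dtype=int)
labels[:, 2:] = 1
cluster_ids = np.array([0, -1])
gray = np.full((4, 4), 30)
gray[1, 1] = 240                    # a star point in the valid area
gray[1, 3] = 240                    # a bright pixel inside the interference area
stars = extract_stars(gray, interference_mask(labels, cluster_ids))
print(stars.astype(int))            # only the pixel at (1, 1) survives
```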
FIGS. 11a-11f show, respectively, a ground-captured star map, the result of extracting star points with a thresholding method, the SLIC superpixel segmentation result, the DBSCAN clustering result, the binary mask generated after clustering, and the star point extraction result when the mask is applied. Strictly speaking, the situation shown in FIG. 11a does not occur on a satellite, but the future demand for low-cost, high-performance star sensors will make ground star observation experiments in cities possible, so the inventors applied the extraction method of the embodiment of the invention to this case to test the applicability of the algorithm. As shown in the figures, many false star points appear at the interference when star points are extracted with a thresholding method; after clustering with the extraction method of the embodiment of the invention and forming a binary mask, the resulting star point extraction, compared with the original star map, essentially eliminates the building interference and greatly reduces the false star points.
FIGS. 12a-12f show, respectively, a star map with linear scratch interference, and a result of extracting star points using a thresholding method, a SLIC superpixel segmentation result, a DBSCAN clustering result, a binary mask generated after clustering, and a star point extraction result when the mask is present;
FIGS. 13a-13f show, respectively, a star map with regular halo interference, and the results of extracting star points using thresholding, SLIC superpixel segmentation, DBSCAN clustering, binary masks generated after clustering, and star point extraction results when masks are present; and
fig. 14a-14f show a star map, which is subject to interference by moon and its reflected light, and a result of extracting star points using a thresholding method, a SLIC super-pixel segmentation result, a DBSCAN clustering result, a binary mask generated after clustering, and a star point extraction result when the mask is present, respectively.
As can be seen from the figures, many false star points appear at the interference when star points are extracted with a thresholding method; after clustering with the extraction method of the embodiment of the invention and forming a binary mask, the resulting star point extraction, compared with the original star map, essentially eliminates the large-area interference and greatly reduces the false star points.
In summary, when handling large-area interference in the field of view, the extraction method of the embodiments of the invention can effectively separate the large-area interferents from the starry sky area of the star map and can successfully extract star points under strong interference, thereby improving the availability of the star sensor.
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to those skilled in the relevant art that various combinations, modifications, and variations can be made therein without departing from the spirit and scope of the invention. Thus, the breadth and scope of the present invention as disclosed herein should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (4)

1. A method for extracting the effective starry sky region of a single star map, characterized by comprising the following steps:
the star map histogram is subjected to de-equalization through an LC algorithm to obtain saliency values of all pixels in the star map, and a star map saliency map is generated, wherein the calculation of the saliency values comprises the following constraint conditions:
wherein,a pixel value corresponding to the minimum value of the saliency value;
clustering pixels with similar saliency values by a simple linear iterative clustering method, using the saliency values as features, and initially segmenting the star map saliency map to obtain pre-segmented blocks, wherein the metric D adopted by the simple linear iterative clustering method is calculated from the saliency distance measure d_sal and the spatial distance d_s:
where:
S is the superpixel side length of the initial segmentation and K is the number of superpixels; and
m is a positive constant controlling the influence of the saliency distance measure and the spatial distance on the metric D;
extracting features from the pre-segmented blocks; and
clustering and merging the features using a density-based clustering method with noise to identify and mark the interference region, wherein the distance metric in the density-based clustering method with noise is a weighted Minkowski distance:
where
M_i is the saliency mean within superpixel i;
V_i is the pixel variance of superpixel i in the original star map; and
i is the cluster center label and j is the one-dimensional index of a pixel coordinate within the 2Sx2S region corresponding to cluster center i.
2. The extraction method of claim 1, wherein the saliency distance measure d_sal is calculated as follows:
where i is the cluster center label and j is the one-dimensional index of a pixel coordinate within the 2Sx2S region corresponding to cluster center i.
3. The extraction method of claim 1, further comprising normalizing the saliency values prior to performing an initial segmentation.
4. The extraction method of claim 1, wherein the features include the saliency mean of each superpixel and the pixel variance of each superpixel in the star map.
CN202110855532.3A 2021-07-28 2021-07-28 Method for extracting effective starry sky area of single star map Active CN113553966B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110855532.3A CN113553966B (en) 2021-07-28 2021-07-28 Method for extracting effective starry sky area of single star map
CN202410312473.9A CN118212635A (en) 2021-07-28 2021-07-28 Star sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110855532.3A CN113553966B (en) 2021-07-28 2021-07-28 Method for extracting effective starry sky area of single star map

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202410312473.9A Division CN118212635A (en) 2021-07-28 2021-07-28 Star sensor

Publications (2)

Publication Number Publication Date
CN113553966A CN113553966A (en) 2021-10-26
CN113553966B (en) 2024-03-26

Family

ID=78104716

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202410312473.9A Pending CN118212635A (en) 2021-07-28 2021-07-28 Star sensor
CN202110855532.3A Active CN113553966B (en) 2021-07-28 2021-07-28 Method for extracting effective starry sky area of single star map

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202410312473.9A Pending CN118212635A (en) 2021-07-28 2021-07-28 Star sensor

Country Status (1)

Country Link
CN (2) CN118212635A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114777764B (en) * 2022-04-20 2023-06-30 中国科学院光电技术研究所 High-dynamic star sensor star point extraction method based on event camera

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10032287B2 (en) * 2013-10-30 2018-07-24 Worcester Polytechnic Institute System and method for assessing wound

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899892A (en) * 2015-06-30 2015-09-09 西安电子科技大学 Method for quickly extracting star points from star images
CN107392968A (en) * 2017-07-17 2017-11-24 杭州电子科技大学 The image significance detection method of Fusion of Color comparison diagram and Color-spatial distribution figure
WO2019062092A1 (en) * 2017-09-30 2019-04-04 深圳市颐通科技有限公司 Superpixel- and multivariate color space-based body outline extraction method
CN109583455A (en) * 2018-11-20 2019-04-05 黄山学院 A kind of image significance detection method merging progressive figure sequence
CN110188763A (en) * 2019-05-28 2019-08-30 江南大学 A kind of image significance detection method based on improvement graph model
CN110796667A (en) * 2019-10-22 2020-02-14 辽宁工程技术大学 Color image segmentation method based on improved wavelet clustering
CN110991547A (en) * 2019-12-12 2020-04-10 电子科技大学 Image significance detection method based on multi-feature optimal fusion
CN111401307A (en) * 2020-04-08 2020-07-10 中国人民解放军海军航空大学 Satellite remote sensing image target association method and device based on depth measurement learning
CN111583290A (en) * 2020-06-06 2020-08-25 大连民族大学 Cultural relic salient region extraction method based on visual saliency
CN111881915A (en) * 2020-07-15 2020-11-03 武汉大学 Satellite video target intelligent detection method based on multiple prior information constraints
CN113052859A (en) * 2021-04-20 2021-06-29 哈尔滨理工大学 Super-pixel segmentation method based on self-adaptive seed point density clustering

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Sea-land segmentation algorithm for SAR images based on SLIC superpixel segmentation; 李智; 曲长文; 周强; 刘晨; Radar Science and Technology; 2017-08-15 (No. 04); full text *
Application of an improved SLIC algorithm in color image segmentation; 郭艳婕; 杨明; 侯宇超; Journal of Chongqing University of Technology (Natural Science); 2020-02-15 (No. 02); full text *
Saliency characteristic analysis and star detection for near-infrared star maps; 王哲; 郭少军; Optics and Precision Engineering; 2017-06-15 (No. 06); full text *

Also Published As

Publication number Publication date
CN118212635A (en) 2024-06-18
CN113553966A (en) 2021-10-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant