CN113688845B - Feature extraction method and device suitable for hyperspectral remote sensing image and storage medium - Google Patents


Info

Publication number
CN113688845B
Authority
CN
China
Prior art keywords
image
spectral
substance
spectrum
intersection region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110955014.9A
Other languages
Chinese (zh)
Other versions
CN113688845A (en)
Inventor
蔡惠明
卢露
张�成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Nuoyuan Medical Devices Co Ltd
Original Assignee
Nanjing Nuoyuan Medical Devices Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Nuoyuan Medical Devices Co Ltd filed Critical Nanjing Nuoyuan Medical Devices Co Ltd
Priority to CN202110955014.9A
Publication of CN113688845A
Application granted
Publication of CN113688845B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

The invention provides a feature extraction method and device suitable for a hyperspectral remote sensing image, and a storage medium, the method comprising the following steps: acquiring image information and spectral information in a hyperspectral remote sensing image; presetting a spectral threshold, and determining multiple substance types and the spectral area corresponding to each substance type in the hyperspectral remote sensing image based on the spectral threshold and the spectral information; acquiring the image area of each substance in the image information; acquiring the spectral area and the image area corresponding to any two adjacent substances, and respectively determining a spectral intersection region and an image intersection region based on the spectral area and the image area of the two adjacent substances; determining a horizontal or vertical demarcation point between the spectral intersection region and the image intersection region based on a preset decision to form a horizontal or vertical demarcation line; and dividing the various types of substances in the hyperspectral remote sensing image based on the horizontal or vertical demarcation line, and extracting the characteristic information of each substance.

Description

Feature extraction method and device suitable for hyperspectral remote sensing image and storage medium
Technical Field
The invention relates to the technical field of hyperspectral remote sensing, in particular to a method and a device for extracting features of a hyperspectral remote sensing image and a storage medium.
Background
The development of hyperspectral remote sensing has benefited from the development and maturation of imaging spectroscopy. Imaging spectroscopy is a comprehensive technology that integrates detector technology, precision opto-mechanics, weak-signal detection, computer technology and information processing. Its main characteristic is the combination of imaging with spectral detection: while the spatial characteristics of a target are imaged, each spatial pixel is dispersed into tens to hundreds of narrow wave bands, giving continuous spectral coverage.
The data thus formed can be described intuitively as a "three-dimensional data block", as shown in fig. 1, where x and y are the coordinate axes of the two-dimensional plane pixel information and the third dimension (the λ axis) is the wavelength coordinate axis. A hyperspectral image integrates the image information and the spectral information of the sample, as shown in fig. 2. The image information reflects external quality characteristics of the sample such as size, shape and defects; because different components absorb differently across the spectrum, the image at a specific wavelength can clearly reveal a particular defect; and the spectral information fully reflects differences in the internal physical structure and chemical composition of the sample.
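As a purely illustrative aid (not part of the original disclosure), such a data block can be held as a three-dimensional array indexed by the two spatial axes and the wavelength axis; the array sizes below are assumptions.

import numpy as np

rows, cols, bands = 512, 512, 120          # hypothetical spatial size and band count
cube = np.random.rand(rows, cols, bands)   # stand-in for a measured hyperspectral cube

band_image = cube[:, :, 60]        # 2-D image at one narrow wavelength band
pixel_spectrum = cube[200, 300, :] # continuous spectrum of one spatial pixel
print(band_image.shape, pixel_spectrum.shape)  # (512, 512) (120,)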
However, the prior art cannot effectively combine the image information and the spectral information to perform the corresponding feature extraction.
Disclosure of Invention
The embodiment of the invention provides a feature extraction method, a device and a storage medium suitable for a hyperspectral remote sensing image, which can be used for relatively accurately extracting features in the hyperspectral remote sensing image by combining image information and spectral information, so that the feature extraction is more accurate.
In a first aspect of the embodiments of the present invention, a method for extracting features applicable to a hyperspectral remote sensing image is provided, including:
acquiring image information and spectrum information in a hyperspectral remote sensing image;
presetting a spectrum threshold, and determining multiple substance types and a spectrum area corresponding to each substance type in the hyperspectral remote sensing image based on the spectrum threshold and spectrum information;
acquiring the image area of each substance in the image information;
acquiring the spectral area and the image area corresponding to any two adjacent substances, and respectively determining a spectral intersection region and an image intersection region based on the spectral area and the image area of the two adjacent substances;
determining a horizontal or vertical demarcation point between the spectral intersection region and the image intersection region based on a preset decision to form a horizontal or vertical demarcation line;
and dividing various types of substances in the hyperspectral remote sensing image based on a horizontal or vertical boundary, and extracting the characteristic information of each substance.
Optionally, in one possible implementation manner of the first aspect, determining a horizontal or vertical demarcation point between the spectral intersection region and the image intersection region based on a preset decision to form a horizontal or vertical demarcation line comprises:
the horizontal or vertical division point of a row or a column is determined by the following formula,
Figure BDA0003220151160000021
wherein Z means that the Z-th point in the horizontal or vertical direction from one substance toward the other is the horizontal or vertical demarcation point; S1 is the preset spectral RGB value of one of the substances; M1 is the preset spectral RGB value of the other substance; S2 is the preset image RGB value of one of the substances; M2 is the preset image RGB value of the other substance; p_i is the i-th pixel point of a row or column in the spectral intersection region; m_i is the i-th pixel point of the row or column in the image intersection region; N is the number of pixel points of the row or column in the spectral intersection region and the image intersection region; L1 is the spectral weight value; and L2 is the image weight value;
the horizontal or vertical dividing lines are formed by connecting horizontal or vertical dividing points of a plurality of adjacent rows or columns.
Optionally, in a possible implementation manner of the first aspect, active modification information of the user is obtained, where the active modification information is an adjusted demarcation point Z_adj obtained by the user adjusting the horizontal or vertical demarcation point Z;
the spectral weight value L1 and the image weight value L2 are adjusted by the following formula to obtain an adjusted spectral weight value L1_adj and an adjusted image weight value L2_adj,
Figure BDA0003220151160000031
Wherein C is a spectrum adjustment coefficient, and D is an image adjustment coefficient.
Optionally, in a possible implementation manner of the first aspect, presetting a spectral threshold and determining, based on the spectral threshold and the spectral information, the multiple substance types and the spectral area corresponding to each substance type in the hyperspectral remote sensing image comprises:
the spectral information comprises a spectral RGB value of each pixel point in the hyperspectral remote sensing image;
acquiring a corresponding spectral threshold of each pixel point, wherein each spectral threshold corresponds to a substance type;
and determining a plurality of pixel points corresponding to each substance type to obtain the spectral area corresponding to the substance.
Optionally, in a possible implementation manner of the first aspect, the acquiring image information and spectral information in the hyperspectral remote sensing image includes:
the hyperspectral remote sensing image comprises a common image part and a spectrum image part;
image information is acquired based on the ordinary image portion, and spectral information is acquired based on the spectral image portion.
Optionally, in a possible implementation manner of the first aspect, acquiring a spectral area and an image area corresponding to any two adjacent substances, and determining the spectral intersection region and the image intersection region based on the spectral area and the image area of the two adjacent substances respectively includes:
selecting a pixel point of one substance in the spectral area, wherein when the number of adjacent pixel points of the other substance around the selected pixel point is greater than or equal to 2, the selected pixel point is a spectral intersection pixel point;
and acquiring all spectrum intersection pixel points of the 2 substances to determine a spectrum intersection region.
Optionally, in a possible implementation manner of the first aspect, the acquiring a spectral area and an image area corresponding to any two adjacent substances, and the determining a spectral intersection region and an image intersection region based on the spectral area and the image area of the two adjacent substances respectively includes:
selecting a pixel point of one substance in the image area, wherein when the number of adjacent pixel points of the other substance around the selected pixel point is greater than or equal to 2, the selected pixel point is an image intersection pixel point;
and acquiring all image intersection pixel points of the 2 substances to determine an image intersection area.
Optionally, in a possible implementation manner of the first aspect, the method further includes:
if the spectrum intersection region has pixel points of a third substance and/or the image intersection region has pixel points of the third substance;
counting the number of pixel points of the third substance;
if the number of the pixel points of the third substance is larger than a preset value, the pixel points of the third substance are removed when a spectrum intersection region and/or an image intersection region are generated;
if the number of the pixel points of the third substance is smaller than the preset value, the pixel points of the third substance are included when the spectrum intersection region and/or the image intersection region are generated, and the pixel points of the third substance are modified into the spectrum RGB value and/or the image RGB value corresponding to the substance type with the largest number of pixel points in the spectrum intersection region and/or the image intersection region.
In a second aspect of the embodiments of the present invention, there is provided a feature extraction device suitable for a hyperspectral remote sensing image, including:
the information acquisition module is used for acquiring image information and spectrum information in the hyperspectral remote sensing image;
the spectrum area determining module is used for presetting a spectrum threshold value, and determining multiple substance types and the spectrum area corresponding to each substance type in the hyperspectral remote sensing image based on the spectrum threshold value and the spectrum information;
the image area acquisition module is used for acquiring the image area of each substance in the image information;
the intersection region acquisition module is used for acquiring the spectral area and the image area corresponding to any two adjacent substances and respectively determining the spectral intersection region and the image intersection region based on the spectral area and the image area of the two adjacent substances;
a boundary determining module for determining a horizontal or vertical boundary point between the spectrum intersection region and the image intersection region based on a preset decision to form a horizontal or vertical boundary;
and the division and extraction module is used for dividing various types of substances in the hyperspectral remote sensing image based on a horizontal or vertical boundary and extracting the characteristic information of each substance.
In a third aspect of the embodiments of the present invention, a readable storage medium is provided, in which a computer program is stored, which, when being executed by a processor, is adapted to carry out the method according to the first aspect of the present invention and various possible designs of the first aspect of the present invention.
The invention provides a feature extraction method, a feature extraction device and a storage medium suitable for a hyperspectral remote sensing image, which can be used for obtaining image information and spectrum information in the hyperspectral remote sensing image, respectively processing the image information and the spectrum information to obtain a spectrum intersection area and an image intersection area of any two adjacent substances, determining a horizontal or vertical boundary in the spectrum intersection area and the image intersection area, distinguishing any two adjacent substances in the hyperspectral remote sensing image through the horizontal or vertical boundary, and accurately extracting the features of the substances.
The invention can respectively calculate in two dimensions by combining the image information and the spectrum information, and calculate the RGB values of all the pixel points in the spectrum intersection region and the image intersection region in the two dimensions. And the horizontal or vertical demarcation point of a row or a column is obtained according to the RGB values, so that the calculation of the horizontal or vertical demarcation point and the horizontal or vertical demarcation line is more accurate. When the horizontal or vertical demarcation point is calculated, the spectral weight and the image weight are fully considered, and different scenes and different substances may have different weight values, so that the method can distinguish according to different substances and scenes when the horizontal or vertical demarcation point and the horizontal or vertical demarcation line are calculated.
The invention can display the extracted characteristic information of each substance, a user can adjust the horizontal or vertical boundary according to the actual situation, and the spectral weight value and the image weight value are adjusted according to the adjustment of the horizontal or vertical boundary of the user, so that the adjusted spectral weight value and the adjusted image weight value are more suitable for the preset decision in the application, and the preset decision can be actively learned and dynamically adjusted.
Drawings
FIG. 1 is a diagram of a three-dimensional data block in the background art;
FIG. 2 is a schematic diagram of a hyperspectral image integrated with image information and spectral information in the background art;
FIG. 3 is a flow chart of a feature extraction method suitable for hyperspectral remote sensing images;
FIG. 4 is a schematic diagram of image information in a hyperspectral remote sensing image;
FIG. 5 is a schematic diagram of spectral information in a hyperspectral remote sensing image;
fig. 6 is a structural diagram of a feature extraction device suitable for hyperspectral remote sensing images.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all the embodiments.
All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the internal logic of the processes, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
It should be understood that in the present application, "comprising" and "having" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that, in the present invention, "a plurality" means two or more. "And/or" merely describes an association between objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "Comprising A, B and C" and "comprising A, B, C" mean that all three of A, B and C are comprised; "comprising A, B or C" means comprising one of A, B and C; and "comprising A, B and/or C" means comprising any one, any two or all three of A, B and C.
It should be understood that in the present invention, "B corresponding to a", "a corresponds to B", or "B corresponds to a" means that B is associated with a, and B can be determined from a. Determining B from a does not mean determining B from a alone, but may be determined from a and/or other information. And the matching of A and B means that the similarity of A and B is greater than or equal to a preset threshold value.
As used herein, the term "if" may be interpreted as "when", "in response to a determination" or "in response to a detection", depending on the context.
The technical means of the present invention will be described in detail with reference to specific examples. These several specific embodiments may be combined with each other below, and details of the same or similar concepts or processes may not be repeated in some embodiments.
The invention provides a feature extraction method suitable for a hyperspectral remote sensing image, the flow chart of which is shown in fig. 3, comprising the following steps:
and S110, acquiring image information and spectrum information in the hyperspectral remote sensing image. The image information and the spectrum information in the present invention may be in the form of images, as shown in fig. 4 and 5. Fig. 4 is image information in the hyperspectral remote sensing image, and fig. 5 is spectrum information in the hyperspectral remote sensing image.
In the process of collecting the hyperspectral remote sensing image, a certain area can be regarded as respectively collecting image information and spectral information and then fusing the image information and the spectral information to obtain the hyperspectral remote sensing image. Therefore, in a possible implementation manner, when extracting the features in the hyperspectral remote sensing image, the invention firstly carries out reverse processing on the hyperspectral remote sensing image to obtain the image information and the spectrum information in the hyperspectral remote sensing image.
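As a non-authoritative sketch of this splitting step, assuming the hyperspectral data is stored as a (rows, cols, bands) cube: the mean-over-bands rendering used for the image information below is only an assumption, since the invention does not fix a concrete decomposition.

import numpy as np

def split_hyperspectral(cube):
    # Speculative sketch of step S110: separate image information and spectral information.
    image_info = cube.mean(axis=2)   # ordinary-image-like 2-D rendering (assumed convention)
    spectral_info = cube             # full per-pixel spectra retained
    return image_info, spectral_info

image_info, spectral_info = split_hyperspectral(np.random.rand(64, 64, 30))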
And S120, presetting a spectrum threshold, and determining multiple substance types and spectrum areas corresponding to the substance types in the hyperspectral remote sensing image based on the spectrum threshold and the spectrum information. In step S120, the method includes:
the spectrum information comprises a spectrum RGB value of each pixel point in the hyperspectral remote sensing image.
And acquiring the corresponding spectral threshold of each pixel point, wherein each spectral threshold corresponds to one substance type. Each pixel point has a corresponding spectral RGB value, and when the spectral RGB value falls within a spectral threshold, the pixel point is determined to correspond to the corresponding substance type.
And determining a plurality of pixel points corresponding to each substance type to obtain the spectral area corresponding to the substance.
The spectral threshold may be a range, for example 225 to 228: in the spectral information of a hyperspectral remote sensing image, all pixel points whose spectral values fall within the threshold range 225 to 228 may be regarded as one substance, and the total area they cover is the spectral area of that substance. The total area may be the area within one closed region or the total area formed by a plurality of closed regions.
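A minimal sketch of this thresholding idea follows, assuming each pixel's spectral information has been reduced to a single scalar value and each substance type is defined by a threshold range such as 225 to 228; the substance names and ranges here are hypothetical.

import numpy as np

spectral_thresholds = {"river": (200.0, 210.0), "land": (225.0, 228.0)}  # assumed ranges

def spectral_areas(spectral_value):
    # Label pixels per substance type and count the pixels as the spectral area.
    result = {}
    for substance, (lo, hi) in spectral_thresholds.items():
        mask = (spectral_value >= lo) & (spectral_value <= hi)
        result[substance] = {"mask": mask, "area": int(mask.sum())}
    return result

areas = spectral_areas(np.random.uniform(190, 230, size=(64, 64)))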
And step S130, acquiring the image area of each substance in the image information. After the image information is obtained, the image area of each substance in the image information can be extracted according to computer vision techniques. For example, a hyperspectral remote sensing image has land and rivers, and the image area of the land and the rivers in the image information can be determined through a computer vision technology. In the process of determining the image area, the determination can be performed according to the number of the pixel points.
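Once the image information has been segmented into per-substance masks by whatever computer-vision method is used, the image area reduces to a pixel count per mask, as the brief sketch below assumes (the segmentation step itself is not shown).

import numpy as np

def image_areas(masks):
    # Image area of each substance = number of pixels in its mask (assumed convention).
    return {substance: int(mask.sum()) for substance, mask in masks.items()}

demo_masks = {"river": np.zeros((64, 64), dtype=bool), "land": np.ones((64, 64), dtype=bool)}
print(image_areas(demo_masks))  # {'river': 0, 'land': 4096}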
Step S140, acquiring the spectral area and the image area corresponding to any two adjacent substances, and respectively determining the spectral intersection region and the image intersection region based on the spectral area and the image area of the two adjacent substances.
When the spectral area and the image area are acquired, two adjacent substances are interleaved in many cases; for example, a river may contain a few shoals, and conversely, some depressions in the land may temporarily hold river water. Therefore, the spectral area and the image area of every two adjacent substances have intersecting and overlapping parts, and the invention accordingly determines the spectral intersection region and the image intersection region from the spectral area and the image area of the two adjacent substances, respectively.
And S150, determining a horizontal or vertical demarcation point between the spectrum intersection region and the image intersection region based on a preset decision to form a horizontal or vertical demarcation line. The invention can process according to the distribution condition of the RGB values of each pixel point in the spectrum intersection area and the image intersection area to obtain the horizontal or vertical boundary point, thereby forming the corresponding horizontal or vertical boundary.
Wherein, step S150 includes:
the horizontal or vertical division point of a row or a column is determined by the following formula,
Figure BDA0003220151160000081
wherein Z means that the Z-th point in the horizontal or vertical direction from one substance toward the other is the horizontal or vertical demarcation point; S1 is the preset spectral RGB value of one of the substances; M1 is the preset spectral RGB value of the other substance; S2 is the preset image RGB value of one of the substances; M2 is the preset image RGB value of the other substance; p_i is the i-th pixel point of a row or column in the spectral intersection region; m_i is the i-th pixel point of the row or column in the image intersection region; N is the number of pixel points of the row or column in the spectral intersection region and the image intersection region; L1 is the spectral weight value; and L2 is the image weight value.
Through the two terms shown in Figure BDA0003220151160000091 and Figure BDA0003220151160000092, the vector distance relationship of each substance to the preset spectral RGB value and the preset image RGB value is reflected in the two different dimensions; that is, it is judged toward which substance the spectral intersection region and the image intersection region tend, so that the demarcation points and demarcation lines formed in the spectral intersection region and the image intersection region are more accurate and adjacent substances are accurately distinguished.
The invention can respectively calculate in two dimensions by combining the image information and the spectrum information, and calculate the RGB values of all the pixel points in the spectrum intersection region and the image intersection region in the two dimensions. And the horizontal or vertical dividing point of one line or one column is obtained according to the RGB values, so that the horizontal or vertical dividing point and the horizontal or vertical dividing line can be calculated more accurately. When the horizontal or vertical demarcation point is calculated, the spectral weight and the image weight are fully considered, and different scenes and different substances may have different weight values, so that the invention can distinguish according to different substances and scenes when the horizontal or vertical demarcation point and the horizontal or vertical boundary are calculated.
Horizontal or vertical demarcation lines are formed by connecting the horizontal or vertical demarcation points of a plurality of adjacent rows or columns. After the plurality of horizontal or vertical demarcation points have been obtained, they are connected to form the horizontal or vertical demarcation line.
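The exact demarcation formula appears only in the figure above and is not reproduced here; the sketch below merely illustrates the described idea of comparing, for one row (or column) of the intersection regions, each pixel's weighted distance to the two substances' preset spectral and image RGB values (weights L1 and L2) and choosing the split index Z that best separates the row into the two substances. It is an assumption-laden illustration, not the patented formula.

import numpy as np

def demarcation_point(p, m, S1, M1, S2, M2, L1=0.5, L2=0.5):
    # p: spectral values of one row of the spectral intersection region, length N
    # m: image values of the same row of the image intersection region, length N
    d_one = L1 * np.abs(p - S1) + L2 * np.abs(m - S2)    # weighted distance to one substance
    d_other = L1 * np.abs(p - M1) + L2 * np.abs(m - M2)  # weighted distance to the other
    prefers_one = d_one < d_other
    N = len(p)
    # pick Z so that pixels before Z lean toward one substance and the rest toward the other
    scores = [prefers_one[:z].sum() + (~prefers_one[z:]).sum() for z in range(N + 1)]
    return int(np.argmax(scores))

Connecting the Z values found for adjacent rows or columns would then trace the demarcation line described above.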
In one possible embodiment, active modification information of the user is obtained, where the active modification information is an adjusted demarcation point Z_adj obtained by the user adjusting the horizontal or vertical demarcation point Z. After feature extraction, the invention displays the features of the different substances, and the user can adjust the horizontal or vertical demarcation line according to the display to obtain Z_adj. The horizontal or vertical demarcation line referred to in the present invention is only relatively horizontal or vertical, and may also take the form of a curve.
The spectral weight value L1 and the image weight value L2 are adjusted by the following formula to obtain the adjusted spectral weight value L1_adj and the adjusted image weight value L2_adj,
Figure BDA0003220151160000101
Wherein C is a spectrum adjustment coefficient, and D is an image adjustment coefficient.
In this way, the adjusted spectral weight value and the adjusted image weight value better fit the preset decision of the present application, so that the preset decision can actively learn and be dynamically adjusted.
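The actual weight-update formula is given only in the figure; as a loosely hedged illustration of the feedback idea, the sketch below nudges L1 and L2 in proportion to the user's correction Z_adj − Z, scaled by the spectral adjustment coefficient C and the image adjustment coefficient D. The update rule itself is an assumption, not the patented formula.

def adjust_weights(L1, L2, Z, Z_adj, C=0.01, D=0.01):
    delta = Z_adj - Z          # signed size of the user's correction
    L1_adj = L1 + C * delta    # assumed update direction, for illustration only
    L2_adj = L2 - D * delta
    return L1_adj, L2_adj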
And S160, dividing various types of substances in the hyperspectral remote sensing image based on a horizontal or vertical boundary, and extracting characteristic information of each substance. According to the method, various types of substances in the hyperspectral remote sensing image are distinguished according to the generated horizontal or vertical demarcation lines, so that the characteristic information of each substance is obtained, and the characteristic information can be the form of the substance, such as the form of a river, the form of land and the like.
In one possible embodiment, step S140 includes:
and selecting a pixel point of one substance in the spectral area, and when the number of the selected pixel point and the adjacent pixel point of the other substance is more than or equal to 2, determining that the selected pixel point belongs to a spectrum intersection pixel point. When judging whether the two substances are in the spectrum intersection region, the invention firstly determines whether the pixel point belongs to the spectrum intersection pixel point, if the number of the selected pixel point and the adjacent pixel point of the other substance is more than or equal to 2, the pixel point is proved to have contact with the other substance in at least two directions, and therefore, the pixel point is determined as the spectrum intersection pixel point.
And acquiring all spectrum intersection pixel points of the 2 substances to determine a spectrum intersection region. And obtaining a spectrum intersection region after obtaining all spectrum intersection pixel points.
In one possible embodiment, step S140 includes:
and selecting a pixel point of one substance in the image area, wherein when the number of the selected pixel point and the adjacent pixel point of the other substance is more than or equal to 2, the selected pixel point belongs to an image intersecting pixel point. When judging whether the image intersection region of two substances is present, the invention firstly determines whether the pixel point belongs to the image intersection pixel point, if the number of the selected pixel point and the adjacent pixel point of the other substance is more than or equal to 2, the pixel point is proved to have contact with the other substance in at least two directions, and therefore, the pixel point is determined as the image intersection pixel point.
And acquiring all image intersection pixel points of the 2 substances to determine the image intersection region. The image intersection region is obtained after all image intersection pixel points have been obtained.
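The neighbour-counting rule used for both the spectral and the image intersection regions can be sketched as follows for two boolean substance masks; 8-connectivity is an assumption (the text only says "adjacent"), and np.roll wraps at the image borders, which a real implementation would handle by padding instead.

import numpy as np

def intersection_pixels(mask_a, mask_b):
    # Pixels of substance A that touch at least 2 pixels of substance B.
    b = mask_b.astype(int)
    neighbours = np.zeros_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbours += np.roll(np.roll(b, dy, axis=0), dx, axis=1)
    return mask_a & (neighbours >= 2)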
In one possible embodiment, the method further comprises:
if the spectrum intersection region has pixel points of a third substance and/or the image intersection region has pixel points of the third substance. In an actual hyperspectral remote sensing image, due to the fact that a scene is complex and changeable, pixel points of a third substance are likely to exist in a spectrum intersection region and/or an image intersection region, and therefore when the pixel points of the third substance appear, horizontal or vertical boundary lines can be affected.
The number of pixel points of the third substance is counted. The invention can count the number of the pixel points of the third substance and judge the magnitude of the third substance.
And if the number of the pixel points of the third substance is larger than the preset value, the pixel points of the third substance are removed when the spectral intersection region and/or the image intersection region is generated. The preset value may be, for example, 2, 3 or 4. When the number of pixel points of the third substance is larger than the preset value, the third substance may be present in a significant amount, so it needs to be distinguished on its own: its pixel points are removed from the intersection region and kept separately to form the region of the third substance.
If the number of the pixel points of the third substance is smaller than the preset value, the pixel points of the third substance are included when the spectral intersection region and/or the image intersection region is generated, and they are modified to the spectral RGB value and/or the image RGB value corresponding to the substance type with the largest number of pixel points in the spectral intersection region and/or the image intersection region. When the number of pixel points of the third substance is smaller than the preset value, the amount of the third substance is small enough to be regarded as negligible, so it can be normalized directly and folded into the spectral RGB value and/or image RGB value of the substance type with the most pixel points in the spectral intersection region and/or the image intersection region, which preserves the accuracy of the horizontal or vertical demarcation line and removes the corresponding error.
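A short sketch of this third-substance rule, assuming integer per-pixel labels inside an intersection region (0 = none, 1 and 2 = the two adjacent substances, 3 = the third substance); the preset value of 4 is an illustrative assumption.

import numpy as np

def handle_third_substance(labels, third=3, preset=4):
    labels = labels.copy()
    third_mask = labels == third
    count = int(third_mask.sum())
    if count > preset:
        labels[third_mask] = 0                  # strip the third substance from the region
    elif count > 0:
        kept = labels[(~third_mask) & (labels > 0)]
        if kept.size:
            majority = int(np.bincount(kept).argmax())  # substance with the most pixels
            labels[third_mask] = majority               # fold the few pixels into it
    return labels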
The present invention also provides a feature extraction device suitable for hyperspectral remote sensing images, as shown in fig. 6, including:
the information acquisition module is used for acquiring image information and spectrum information in the hyperspectral remote sensing image;
the spectral area determining module is used for presetting a spectral threshold value, and determining various substance types and spectral areas corresponding to the substance types in the hyperspectral remote sensing images on the basis of the spectral threshold value and spectral information;
the image area acquisition module is used for acquiring the image area of each substance in the image information;
the intersection region acquisition module is used for acquiring the spectral area and the image area corresponding to any two adjacent substances and respectively determining the spectral intersection region and the image intersection region based on the spectral area and the image area of the two adjacent substances;
a boundary determining module for determining a horizontal or vertical boundary point between the spectral intersection region and the image intersection region based on a preset decision to form a horizontal or vertical boundary;
and the division and extraction module is used for dividing various types of substances in the hyperspectral remote sensing image based on a horizontal or vertical boundary and extracting the characteristic information of each substance.
The readable storage medium may be a computer storage medium or a communication medium. Communication media includes any medium that facilitates transfer of a computer program from one place to another. Computer storage media may be any available media that can be accessed by a general purpose or special purpose computer. For example, a readable storage medium is coupled to a processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an Application Specific Integrated Circuits (ASIC). Additionally, the ASIC may reside in user equipment. Of course, the processor and the readable storage medium may also reside as discrete components in a communication device. The readable storage medium may be a read-only memory (ROM), a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The present invention also provides a program product comprising execution instructions stored in a readable storage medium. The at least one processor of the device may read the execution instructions from the readable storage medium, and the execution of the execution instructions by the at least one processor causes the device to implement the methods provided by the various embodiments described above.
In the embodiment of the terminal or the server, it should be understood that the Processor may be a Central Processing Unit (CPU), other general-purpose processors, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of hardware and software modules.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and these modifications or substitutions do not depart from the spirit of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A feature extraction method suitable for a hyperspectral remote sensing image is characterized by comprising the following steps of:
acquiring image information and spectrum information in a hyperspectral remote sensing image;
presetting a spectrum threshold, and determining multiple substance types and the spectrum area corresponding to each substance type in the hyperspectral remote sensing image based on the spectrum threshold and the spectrum information;
acquiring the image area of each substance in the image information;
acquiring the spectral area and the image area corresponding to any two adjacent substances, and respectively determining a spectral intersection region and an image intersection region based on the spectral area and the image area of the two adjacent substances;
determining a horizontal or vertical demarcation point between the spectral intersection region and the image intersection region based on a preset decision to form a horizontal or vertical demarcation line;
and dividing various types of substances in the hyperspectral remote sensing images based on a horizontal or vertical boundary, and extracting the characteristic information of each substance.
2. The method for extracting features of a hyperspectral remote sensing image according to claim 1, characterized in that
determining a horizontal or vertical demarcation point between the spectral intersection region and the image intersection region based on a preset decision to form a horizontal or vertical demarcation line comprises:
the horizontal or vertical division point of a row or a column is determined by the following formula,
Figure FDA0003220151150000011
wherein Z means that the Z-th point in the horizontal or vertical direction from one substance toward the other is the horizontal or vertical demarcation point; S1 is the preset spectral RGB value of one of the substances; M1 is the preset spectral RGB value of the other substance; S2 is the preset image RGB value of one of the substances; M2 is the preset image RGB value of the other substance; p_i is the i-th pixel point of a row or column in the spectral intersection region; m_i is the i-th pixel point of the row or column in the image intersection region; N is the number of pixel points of the row or column in the spectral intersection region and the image intersection region; L1 is the spectral weight value; and L2 is the image weight value;
the horizontal or vertical dividing lines are formed by connecting horizontal or vertical dividing points of a plurality of adjacent rows or columns.
3. The method for extracting features of a hyperspectral remote sensing image according to claim 2, characterized in that
active modification information of the user is acquired, where the active modification information is an adjusted demarcation point Z_adj obtained by adjusting the horizontal or vertical demarcation point Z;
the spectral weight value L1 and the image weight value L2 are adjusted by the following formula to obtain an adjusted spectral weight value L1_adj and an adjusted image weight value L2_adj,
Figure FDA0003220151150000021
Wherein C is a spectrum adjustment coefficient, and D is an image adjustment coefficient.
4. The method for extracting features of a hyperspectral remote sensing image according to claim 1, characterized in that
presetting a spectrum threshold, and determining multiple substance types and a spectrum area corresponding to each substance type in the hyperspectral remote sensing image based on the spectrum threshold and the spectrum information comprises the following steps:
the spectrum information comprises a spectrum RGB value of each pixel point in the hyperspectral remote sensing image;
acquiring a corresponding spectral threshold of each pixel point, wherein each spectral threshold corresponds to a substance type;
and determining a plurality of pixel points corresponding to each substance type to obtain the spectral area corresponding to the substance.
5. The method for extracting features of a hyperspectral remote sensing image according to claim 1, characterized in that
the method for acquiring the image information and the spectrum information in the hyperspectral remote sensing image comprises the following steps:
the hyperspectral remote sensing image comprises a common image part and a spectral image part;
image information is acquired based on the ordinary image portion, and spectral information is acquired based on the spectral image portion.
6. The method for extracting features of a hyperspectral remote sensing image according to claim 1, characterized in that
acquiring the spectral area and the image area corresponding to any two adjacent substances, and respectively determining the spectral intersection region and the image intersection region based on the spectral area and the image area of the two adjacent substances comprises the following steps:
selecting a pixel point of one substance in the spectral area, wherein when the number of adjacent pixel points of the other substance around the selected pixel point is greater than or equal to 2, the selected pixel point is a spectral intersection pixel point;
and acquiring all spectrum intersection pixel points of the 2 substances to determine a spectrum intersection region.
7. The method for extracting features of a hyperspectral remote sensing image according to claim 6, characterized in that
acquiring the spectral area and the image area corresponding to any two adjacent substances, and respectively determining the spectral intersection region and the image intersection region based on the spectral area and the image area of the two adjacent substances comprises the following steps:
selecting a pixel point of one substance in the image area, wherein when the number of adjacent pixel points of the other substance around the selected pixel point is greater than or equal to 2, the selected pixel point is an image intersection pixel point; and acquiring all image intersection pixel points of the 2 substances to determine the image intersection region.
8. The method for extracting features of the hyperspectral remote sensing image according to claim 1, further comprising:
if the spectrum intersection region has pixel points of a third substance and/or the image intersection region has pixel points of the third substance;
counting the number of the pixel points of the third substance;
if the number of the pixel points of the third substance is larger than a preset value, the pixel points of the third substance are removed when a spectrum intersection region and/or an image intersection region are generated;
if the number of the pixel points of the third substance is smaller than the preset value, the pixel points of the third substance are included when the spectrum intersection region and/or the image intersection region are generated, and the pixel points of the third substance are modified into the spectrum RGB value and/or the image RGB value corresponding to the substance type with the largest number of pixel points in the spectrum intersection region and/or the image intersection region.
9. A feature extraction device suitable for hyperspectral remote sensing images, comprising:
the information acquisition module is used for acquiring image information and spectrum information in the hyperspectral remote sensing image;
the spectral area determining module is used for presetting a spectral threshold value, and determining various substance types and spectral areas corresponding to the substance types in the hyperspectral remote sensing images on the basis of the spectral threshold value and spectral information;
the image area acquisition module is used for acquiring the image area of each substance in the image information;
the intersection region acquisition module is used for acquiring the spectral area and the image area corresponding to any two adjacent substances and respectively determining the spectral intersection region and the image intersection region based on the spectral area and the image area of the two adjacent substances;
a boundary determining module for determining a horizontal or vertical boundary point between the spectral intersection region and the image intersection region based on a preset decision to form a horizontal or vertical boundary;
and the division extraction module is used for dividing various types of substances in the hyperspectral remote sensing images based on a horizontal or vertical boundary and then extracting the characteristic information of each substance.
10. A readable storage medium, in which a computer program is stored which, when being executed by a processor, is adapted to carry out the method of any one of claims 1 to 8.
CN202110955014.9A 2021-08-19 2021-08-19 Feature extraction method and device suitable for hyperspectral remote sensing image and storage medium Active CN113688845B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110955014.9A CN113688845B (en) 2021-08-19 2021-08-19 Feature extraction method and device suitable for hyperspectral remote sensing image and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110955014.9A CN113688845B (en) 2021-08-19 2021-08-19 Feature extraction method and device suitable for hyperspectral remote sensing image and storage medium

Publications (2)

Publication Number Publication Date
CN113688845A (en) 2021-11-23
CN113688845B (en) 2022-12-30

Family

ID=78580716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110955014.9A Active CN113688845B (en) 2021-08-19 2021-08-19 Feature extraction method and device suitable for hyperspectral remote sensing image and storage medium

Country Status (1)

Country Link
CN (1) CN113688845B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106960221A (en) * 2017-03-14 2017-07-18 哈尔滨工业大学深圳研究生院 A kind of hyperspectral image classification method merged based on spectral signature and space characteristics and system
US10902577B2 (en) * 2017-06-19 2021-01-26 Apeel Technology, Inc. System and method for hyperspectral image processing to identify object
CN107833223B (en) * 2017-09-27 2023-09-22 沈阳农业大学 Fruit hyperspectral image segmentation method based on spectral information
CN108256425B (en) * 2017-12-12 2019-06-25 交通运输部规划研究院 A method of harbour container is extracted using Remote Spectra efficient information rate
CN110363186A (en) * 2019-08-20 2019-10-22 四川九洲电器集团有限责任公司 A kind of method for detecting abnormality, device and computer storage medium, electronic equipment
CN112163523A (en) * 2020-09-29 2021-01-01 北京环境特性研究所 Abnormal target detection method and device and computer readable medium

Also Published As

Publication number Publication date
CN113688845A (en) 2021-11-23

Similar Documents

Publication Publication Date Title
CN110287932B (en) Road blocking information extraction method based on deep learning image semantic segmentation
CN112818988B (en) Automatic identification reading method and system for pointer instrument
US8548257B2 (en) Distinguishing between faces and non-faces
CN110210448B (en) Intelligent face skin aging degree identification and evaluation method
CN111083365B (en) Method and device for rapidly detecting optimal focal plane position
CN106412573A (en) Method and device for detecting lens stain
CN111369605B (en) Infrared and visible light image registration method and system based on edge features
CN110520768B (en) Hyperspectral light field imaging method and system
CN110634137A (en) Bridge deformation monitoring method, device and equipment based on visual perception
CN112344913A (en) Regional risk coefficient evaluation method by utilizing oblique photography image of unmanned aerial vehicle
CN110197185A (en) A kind of method and system based on Scale invariant features transform algorithm monitoring space under bridge
CN115661115A (en) Component detection method, device, electronic equipment and storage medium
Kurmi et al. Pose error reduction for focus enhancement in thermal synthetic aperture visualization
CN115546187A (en) Agricultural pest and disease detection method and device based on YOLO v5
CN117455917B (en) Establishment of false alarm library of etched lead frame and false alarm on-line judging and screening method
CN113688845B (en) Feature extraction method and device suitable for hyperspectral remote sensing image and storage medium
CN107369163B (en) Rapid SAR image target detection method based on optimal entropy dual-threshold segmentation
CN113269752A (en) Image detection method, device terminal equipment and storage medium
CN115423804B (en) Image calibration method and device and image processing method
CN113344905B (en) Strip deviation amount detection method and system
CN103903258B (en) Method for detecting change of remote sensing image based on order statistic spectral clustering
CN116385567A (en) Method, device and medium for obtaining color card ROI coordinate information
CN115376007A (en) Object detection method, device, equipment, medium and computer program product
CN116152220A (en) Seed counting and size measuring method based on machine vision
CN114742849A (en) Leveling instrument distance measuring method based on image enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant