CN118097474A - Ground object information acquisition and recognition system based on image analysis - Google Patents

Ground object information acquisition and recognition system based on image analysis

Info

Publication number
CN118097474A
CN118097474A (application CN202410480880.0A)
Authority
CN
China
Prior art keywords
image
model
data
ground object
variation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410480880.0A
Other languages
Chinese (zh)
Other versions
CN118097474B (en)
Inventor
姚刚
江华
周靖
费伟林
范雷刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiaxing Minghua Information Technology Co ltd
Original Assignee
Jiaxing Minghua Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiaxing Minghua Information Technology Co ltd filed Critical Jiaxing Minghua Information Technology Co ltd
Priority to CN202410480880.0A priority Critical patent/CN118097474B/en
Publication of CN118097474A publication Critical patent/CN118097474A/en
Application granted granted Critical
Publication of CN118097474B publication Critical patent/CN118097474B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image information processing, and in particular to a ground object information acquisition and identification system based on image analysis, comprising: an information acquisition module for separately acquiring unmanned aerial vehicle aerial image spot data and channel orthophoto data; an information processing module comprising a vector generation unit for screening the unmanned aerial vehicle aerial image spot data to output vector contour data; a ground object identification module connected with the information processing module for identifying objects and geological conditions in the ground object image; and a control module which, when it judges from the qualified aerial image spot number ratio that the accuracy of ground object identification does not meet the requirements, adjusts the rotation angle of the unmanned aerial vehicle or judges, according to the variation of the feature quantity of the ground objects identified by the model, whether the accuracy of data processing meets the requirements. The invention improves the identification accuracy of ground object information.

Description

Ground object information acquisition and recognition system based on image analysis
Technical Field
The invention relates to the technical field of image information processing, in particular to a ground object information acquisition and identification system based on image analysis.
Background
In the prior art, the ground object information acquisition and recognition system is a system for automatically recognizing and acquiring ground object information by utilizing an image processing and analyzing technology. The system extracts the characteristics and the information in the input image data by processing the input image data, so that the classification, the identification and the positioning of the ground objects are realized, and the system can be used for selecting the site of the power transmission engineering project according to the geological survey of the acquisition environment.
Chinese patent publication No. CN112560544A discloses a remote sensing image ground object recognition method, system and computer readable storage medium. The method comprises: collecting original sample remote sensing images for training; carrying out data enhancement processing on the collected original sample remote sensing images to obtain enhanced sample remote sensing images; constructing a multi-scale dense convolution network; training the multi-scale dense convolution network with both the original and the enhanced sample remote sensing images; and, after training is completed, identifying the ground features in the remote sensing image to be identified through the multi-scale dense convolution network and marking the identified ground features. However, the prior art still has the problem that the accuracy of ground object information identification is reduced because the ground object recognition model is not updated comprehensively enough and the edge processing during image cutting is not smooth.
Disclosure of Invention
Therefore, the invention provides a ground object information acquisition and identification system based on image analysis, which is used to solve the problem in the prior art that the accuracy of ground object information identification is reduced because the ground object recognition model is not updated comprehensively enough and the edge processing during image cutting is not smooth.
In order to achieve the above object, the present invention provides a ground object information acquisition and identification system based on image analysis, comprising: the information acquisition module is used for respectively acquiring unmanned aerial vehicle aerial image spot data and channel orthophoto data;
The information processing module is connected with the information acquisition module and comprises a vector generation unit, a preprocessing unit and an image cutting unit. The vector generation unit is used for screening the unmanned aerial vehicle aerial image spot data to output vector contour data, the preprocessing unit is used for preprocessing the channel orthophoto data to output an image to be cut, and the image cutting unit, respectively connected with the vector generation unit and the preprocessing unit, is used for cutting the image to be cut with the vector contour data to output a model data set; the cutting range of the image to be cut is determined through the vector contour data. The ground object recognition module is connected with the information processing module and is used for recognizing objects and geological conditions in the ground object image; it comprises a model training unit connected with the image cutting unit and used for training the training data set in the model data set to generate a ground object recognition model. The control module is respectively connected with the information acquisition module, the information processing module and the ground object recognition module; when it judges from the qualified aerial image spot number ratio that the accuracy of ground object recognition does not meet the requirements, it adjusts the rotation angle of the unmanned aerial vehicle or judges, according to the variation of the feature quantity of the ground objects identified by the model, whether the accuracy of data processing meets the requirements, and when it judges that the accuracy still does not meet the requirements it adjusts the model update training mode of the ground object recognition model or adjusts the layer mask radius according to the average difference of the image edge pixels.
Further, the information processing module also includes a scalar generation component to filter the unmanned aerial vehicle spot data to output scalar picture data.
Further, the information acquisition module comprises an unmanned aerial vehicle for changing the acquisition position of the aerial image and an image acquisition unit for acquiring the aerial image spots of the unmanned aerial vehicle;
The control module is connected with the image acquisition unit and is used for acquiring the number of qualified aerial image spots and the total number of aerial image spots to calculate the proportion of the number of the qualified aerial image spots, judging that the identification accuracy of the ground object information is not in accordance with the requirement when the proportion of the number of the qualified aerial image spots meets the first proportion condition or the second proportion condition, and preliminarily judging that the accuracy of the image processing is not in accordance with the requirement when the proportion of the number of the qualified aerial image spots only meets the first proportion condition, and carrying out secondary judgment on the accuracy of the image processing according to the variation of the feature quantity of the ground object identified by the model;
the control module is connected with the unmanned aerial vehicle and is used for increasing the rotation angle of the unmanned aerial vehicle when the qualified aerial image spot number ratio only meets the second duty ratio condition;
The first duty ratio condition is that the qualified aerial image spot number ratio is greater than a preset first duty ratio and less than or equal to a preset second duty ratio; the second duty ratio condition is that the qualified aerial image spot number ratio is greater than the preset second duty ratio;
The increased rotation angle of the unmanned aerial vehicle is determined by the difference between the qualified aerial image spot number ratio and the preset second duty ratio; a first ray is formed with the center point of the unmanned aerial vehicle as its endpoint and passing through the camera of the unmanned aerial vehicle, and a second ray is formed with the center point as its endpoint and pointing along the advancing direction of the unmanned aerial vehicle; when the included angle between the first ray and the second ray is an acute angle, the unmanned aerial vehicle rotates in the direction away from the first ray, and the maximum rotation angle is the included angle formed by the second ray and the reverse extension line of the first ray.
Further, the calculation formula of the qualified aerial image spot number ratio is: U = Ut / Ug,
wherein U is the qualified aerial image spot number ratio, Ut is the number of qualified aerial image spots, and Ug is the total number of aerial image spots. When the number of ground object features identified by the ground object recognition model in an aerial image spot is the same as the number of ground object features in the actual scene corresponding to that spot in the historical data and exceeds a standard number threshold, the aerial image spot identified by the ground object recognition model is judged to be qualified.
Further, the ground object recognition module further comprises a model updating unit which is connected with the model training unit and used for updating the ground object recognition model;
The control module is connected with the model training unit and is used for secondarily judging that the accuracy of image processing is not in accordance with the requirement when the variation of the feature quantity of the ground object identified by the ground object identification model meets the first variation condition or the second variation condition, primarily judging that the screening accuracy of the image spot data is not in accordance with the requirement when the variation of the feature quantity of the ground object identified by the model only meets the second variation condition, and secondarily judging the screening accuracy of the image spot data according to the average difference quantity of the image edge pixels;
The control module is connected with the model updating unit and is used for controlling the model updating unit to update the model according to the model updating training mode when the variation of the feature quantity of the ground features identified by the model only meets the first variation condition;
the first variation condition is that the variation of the feature quantity of the ground object identified by the model is larger than a preset first variation and smaller than or equal to a preset second variation; the second variation condition is that the variation of the feature quantity of the ground object identified by the model is larger than a preset second variation.
Further, the calculation formula of the variation of the feature quantity of the ground objects identified by the model is: S = |Ka − Kb|,
wherein S is the variation of the feature quantity of the ground objects identified by the model, Ka is the number of ground object features identified by the ground object recognition model in the first channel orthophoto, and Kb is the number of ground object features identified by the ground object recognition model in the second channel orthophoto; the first channel orthophoto and the second channel orthophoto have the same size and the same type of image acquisition site.
Further, the model updating training mode is that the model updating unit updates the model according to the data corresponding calling quantity of the new data type, and the data corresponding calling quantity of the new data type is determined through the difference value between the variation quantity of the feature quantity identified by the model and the preset first variation quantity.
Further, the control module is connected with the image cutting unit, and is configured to calculate an average difference of the image edge pixels according to the pixels of the image edge output by the image cutting unit, and secondarily determine that the screening accuracy of the image spot data does not meet the requirement when the average difference of the image edge pixels is greater than a preset difference, and increase the mask radius of the image layer of the image cutting unit.
Further, the increased layer mask radius is determined by the difference between the average difference of the image edge pixels and the preset difference.
Further, the preprocessing unit is provided with a midpoint pinch-out mode for performing distortion correction processing on the channel orthographic image data, wherein the midpoint pinch-out mode is to select midpoints at two sides of a convex edge of the channel orthographic image, and a new edge point is created at the midpoints.
Compared with the prior art, the invention has the beneficial effects that, by providing the information acquisition module, the information processing module, the ground object identification module and the control module, the corresponding working mode of the information acquisition module is determined according to the qualified aerial image spot number ratio, which reduces the loss of ground object information accuracy caused by degraded image acquisition when, with the camera facing the advancing direction, inertia or the interaction between wind and the unmanned aerial vehicle blades disturbs the lens as the unmanned aerial vehicle stops; the data call quantity of the new data type is adjusted according to the variation of the feature quantity of the ground objects identified by the model, which reduces the loss of accuracy caused by insufficiently comprehensive model training during updating; and whether the layer mask radius needs to be enlarged is determined according to the average difference of the image edge pixels, which reduces the loss of accuracy caused by inaccurate training data generated when edge transitions are not smooth during image cutting.
Furthermore, the system adjusts the rotation angle of the unmanned aerial vehicle by setting the preset duty ratio difference, which reduces the loss of ground object identification accuracy caused by degraded image acquisition when, with the camera facing the advancing direction, inertia or the interaction between wind and the unmanned aerial vehicle blades disturbs the lens as the unmanned aerial vehicle stops, and thereby further improves the identification accuracy of ground object information.
Further, the system judges the accuracy of image processing and the accuracy of filtering the image spot data by setting the preset first variation and the preset second variation, and updates the model according to the corresponding model updating training mode, so that the influence of degradation of the accuracy of identifying the ground object information caused by inaccurate secondary judgment of the accuracy of the image processing is reduced, and the improvement of the accuracy of identifying the ground object information is further realized.
Furthermore, the system adjusts the data call quantity of the new data type by setting the preset variation difference, which reduces the impact of incomplete model updating caused by an insufficient call quantity of the cloud platform data corresponding to the new data type, and thereby further improves the identification accuracy of ground object information.
Furthermore, the system increases the radius of the mask layer by setting the preset difference, and enlarges the transition area by increasing the radius of the mask layer, so that a wider transition area can be generated, the transition of the edge is smoother, and the accuracy of image cutting is improved.
Further, the system processes the feature data to be trained by setting a midpoint pinch-out mode, so that the phenomenon of blurring or passivation of the generated image edge caused by the fact that the processing of the convex edge of the image is not in place is reduced, and the improvement of the recognition accuracy of ground object information is further realized.
Drawings
FIG. 1 is a block diagram of the overall structure of a ground object information acquisition and recognition system based on image analysis according to an embodiment of the invention;
FIG. 2 is a specific block diagram of an information processing module of a ground feature information acquisition and recognition system based on image analysis according to an embodiment of the present invention;
FIG. 3 is a block diagram of a connection structure of an information processing module and an information acquisition module of an image analysis-based ground feature information acquisition and recognition system according to an embodiment of the present invention;
Fig. 4 is a block diagram of a connection structure of an information processing module and a control module of an image analysis-based ground feature information acquisition and recognition system according to an embodiment of the present invention.
Detailed Description
In order that the objects and advantages of the invention will become more apparent, the invention will be further described with reference to the following examples; it should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
Furthermore, it should be noted that, in the description of the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly; for example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; it may be direct, indirect through an intermediate medium, or internal communication between two elements. The specific meaning of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
It should be noted that the data in this embodiment are obtained by the ground object information acquisition and identification system based on image analysis through comprehensive analysis of historical data, corresponding data statistics, test experiments and experimental results prior to the current detection. Specifically, the values of the preset parameter standards of the system are determined by comprehensively integrating the 1523 qualified aerial image spots counted, detected and calculated over 92 days before the current detection, together with the variation of the feature quantity of the ground objects identified by the model and the average difference of the image edge pixels. It can be understood by those skilled in the art that the parameters mentioned above may be determined by selecting, from the data distribution, the value with the highest proportion as the preset standard parameter, as long as the system can clearly distinguish the different specific situations in each individual judgment.
Referring to fig. 1, fig. 2, fig. 3 and fig. 4, which respectively show the overall structure block diagram of the ground object information acquisition and identification system based on image analysis, the specific structure block diagram of the information processing module, the connection structure block diagram of the information processing module with the information acquisition module, and the connection structure block diagram of the information processing module with the control module according to the embodiment of the invention. The invention discloses a ground object information acquisition and identification system based on image analysis, which comprises:
the information acquisition module is used for respectively acquiring unmanned aerial vehicle aerial image spot data and channel orthophoto data;
the information processing module is connected with the information acquisition module and comprises a vector generation unit, a preprocessing unit and an image cutting unit, wherein the vector generation unit is used for screening the aerial image spot data of the unmanned aerial vehicle to output vector contour data, the preprocessing unit is used for preprocessing the channel orthographic image data to output an image to be cut, and the image cutting unit is respectively connected with the vector generation unit and the preprocessing unit and is used for cutting the image to be cut by using the vector contour data to output a model data set;
The cutting range of the image to be cut is determined through the vector outline data;
The ground object recognition module is connected with the information processing module and used for recognizing objects and geological conditions in the ground object image, and comprises a model training unit which is connected with the image cutting unit and used for training a training data set in the model data set to generate a ground object recognition model;
The control module is respectively connected with the information acquisition module, the information processing module and the ground object recognition module; when it judges from the qualified aerial image spot number ratio that the accuracy of ground object recognition does not meet the requirements, it adjusts the rotation angle of the unmanned aerial vehicle or judges, according to the variation of the feature quantity of the ground objects identified by the model, whether the accuracy of data processing meets the requirements, and when it judges that the accuracy still does not meet the requirements it adjusts the model update training mode of the ground object recognition model or adjusts the layer mask radius according to the average difference of the image edge pixels.
Specifically, the preprocessing unit performs distortion correction, denoising, cloud and fog removal and dodging on the channel orthographic image data.
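For illustration only, the following is a minimal Python sketch of how such preprocessing might be implemented with OpenCV; it is an assumed implementation rather than the patented method, the function name and parameter values are assumptions, the dodging step is approximated by a blur-and-divide operation, and cloud and fog removal is omitted.

```python
import cv2

def preprocess_orthophoto(img, camera_matrix, dist_coeffs):
    # Distortion correction using known camera intrinsics.
    undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
    # Denoising with non-local means for colour images.
    denoised = cv2.fastNlMeansDenoisingColored(undistorted, None, 10, 10, 7, 21)
    # Simple dodging: divide by a heavily blurred copy to even out illumination.
    background = cv2.GaussianBlur(denoised, (0, 0), sigmaX=51)
    return cv2.divide(denoised, background, scale=128)
```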
Specifically, the training data set is trained to generate a feature recognition model.
Specifically, the unmanned aerial vehicle aerial image spot data includes vector contour data and scalar image data.
In particular, vector contour data is a type of data commonly used in geographic information systems, computer-aided design, or image design, consisting of a series of connected vertices representing boundaries of shapes, objects, or areas, which are connected in a corresponding order to form line segments or closed paths, and representing geographic features on a map, such as rivers, roads; each contour line represents an individual geographic or object element, such as a lake or park boundary, and vector data can accurately describe the shape, location and size of these elements.
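As an illustrative sketch of how vector contour data can drive the image cutting unit described above, the assumed Python code below rasterises one closed contour to a mask and cuts the image to be cut within the contour's bounding range; the function name and the data layout are assumptions, not the claimed implementation.

```python
import cv2
import numpy as np

def cut_with_contour(image, contour_vertices):
    """contour_vertices: list of (x, y) points forming one closed vector contour."""
    pts = np.array(contour_vertices, dtype=np.int32)
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [pts], 255)                     # rasterise the contour
    masked = cv2.bitwise_and(image, image, mask=mask)  # keep pixels inside the contour
    x, y, w, h = cv2.boundingRect(pts)                 # cutting range from the contour
    return masked[y:y + h, x:x + w]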
In particular, the model dataset comprises a training dataset and a validation dataset.
In implementation, by providing the information acquisition module, the information processing module, the ground object identification module and the control module, the corresponding working mode of the information acquisition module is determined according to the qualified aerial image spot number ratio, which reduces the loss of ground object information accuracy caused by degraded image acquisition when, with the camera facing the advancing direction, inertia or the interaction between wind and the unmanned aerial vehicle blades disturbs the lens as the unmanned aerial vehicle stops; the data call quantity of the new data type is adjusted according to the variation of the feature quantity of the ground objects identified by the model, which reduces the loss of accuracy caused by an insufficiently comprehensive model in the updating process; and the layer mask radius is adjusted according to the average difference of the image edge pixels, which reduces the loss of accuracy caused by inaccurate training data generated when edge transitions are not smooth during image cutting.
Specifically, the information processing module further includes a scalar generation component to filter the unmanned aerial vehicle spot data to output scalar picture data.
Optionally, preferred embodiments of scalar picture data include building picture data, road picture data, water system picture data.
Specifically, the information acquisition module comprises an unmanned aerial vehicle for changing the acquisition position of the aerial image and an image acquisition unit for acquiring the aerial image spots of the unmanned aerial vehicle;
The control module is connected with the image acquisition unit and is used for acquiring the number of qualified aerial image spots and the total number of aerial image spots to calculate the proportion of the number of the qualified aerial image spots, judging that the identification accuracy of the ground object information is not in accordance with the requirement when the proportion of the number of the qualified aerial image spots meets the first proportion condition or the second proportion condition, and preliminarily judging that the accuracy of the image processing is not in accordance with the requirement when the proportion of the number of the qualified aerial image spots only meets the first proportion condition, and carrying out secondary judgment on the accuracy of the image processing according to the variation of the feature quantity of the ground object identified by the model;
the control module is connected with the unmanned aerial vehicle and is used for increasing the rotation angle of the unmanned aerial vehicle when the qualified aerial image spot number ratio only meets the second duty ratio condition;
The first duty ratio condition is that the qualified aerial image spot number ratio is greater than a preset first duty ratio and less than or equal to a preset second duty ratio; the second duty ratio condition is that the qualified aerial image spot number ratio is greater than the preset second duty ratio;
The increased rotation angle of the unmanned aerial vehicle is determined by the difference between the qualified aerial image spot number ratio and the preset second duty ratio; a first ray is formed with the center point of the unmanned aerial vehicle as its endpoint and passing through the camera of the unmanned aerial vehicle, and a second ray is formed with the center point as its endpoint and pointing along the advancing direction of the unmanned aerial vehicle; when the included angle between the first ray and the second ray is an acute angle, the unmanned aerial vehicle rotates in the direction away from the first ray, and the maximum rotation angle is the included angle formed by the second ray and the reverse extension line of the first ray.
Optionally, a preferred embodiment of the preset first duty ratio Q1 is Q1 = 0.6, and a preferred embodiment of the preset second duty ratio Q2 is Q2 = 0.7.
Specifically, the qualified aerial image spot number ratio is recorded as Q, the difference between the qualified aerial image spot number ratio and the preset second duty ratio is recorded as ΔQ, and ΔQ = Q − Q2.
In implementation, the system judges the identification accuracy of the ground object information by setting the preset first duty ratio and the preset second duty ratio, reduces the influence of the acquisition stability reduction of the ground object information caused by inaccurate judgment of the identification accuracy of the ground object information, and further improves the identification accuracy of the ground object information.
Specifically, the calculation formula of the qualified aerial image spot number ratio is: U = Ut / Ug,
wherein U is the qualified aerial image spot number ratio, Ut is the number of qualified aerial image spots, and Ug is the total number of aerial image spots. When the number of ground object features identified by the ground object recognition model in an aerial image spot is the same as the number of ground object features in the actual scene corresponding to that spot in the historical data and exceeds a standard number threshold, the aerial image spot identified by the ground object recognition model is judged to be qualified.
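The ratio and the qualification rule above can be expressed directly in code. The sketch below is illustrative only and assumes that each spot is represented by its identified feature count together with the feature count of the corresponding actual scene in the historical data; the function names are assumptions.

```python
def spot_is_qualified(identified_count, historical_count, standard_threshold):
    # Qualified when the identified feature count matches the historical actual
    # scene count and exceeds the standard number threshold.
    return identified_count == historical_count and identified_count > standard_threshold

def qualified_spot_ratio(spots, standard_threshold):
    """spots: list of (identified_count, historical_count) tuples, one per aerial image spot."""
    ut = sum(1 for ic, hc in spots if spot_is_qualified(ic, hc, standard_threshold))
    ug = len(spots)
    return ut / ug if ug else 0.0   # U = Ut / Ug
```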
Specifically, the increased rotation angle of the unmanned aerial vehicle is determined through the difference value between the qualified aerial image spot number ratio and the preset second ratio.
Specifically, if ΔQ ≤ ΔQ0, the control module adjusts the rotation angle of the unmanned aerial vehicle using the preset first rotation angle adjustment coefficient;
if ΔQ > ΔQ0, the control module adjusts the rotation angle of the unmanned aerial vehicle using the preset second rotation angle adjustment coefficient.
Optionally, a preferred embodiment of the preset duty ratio difference ΔQ0 is ΔQ0 = 0.1.
Specifically, the preset first rotation angle adjustment coefficient is denoted α1, with α1 = 1.2, and the preset second rotation angle adjustment coefficient is denoted α2, with α2 = 1.4, where 1 < α1 < α2. The rotation angle of the unmanned aerial vehicle is denoted V, and the increased rotation angle is denoted V', with V' = V × (1 + αi) / 2, where αi is the preset i-th rotation angle adjustment coefficient and i = 1, 2.
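For clarity, a short sketch of this adjustment rule using the preferred values above is given below; the function name is an assumption, and the code simply restates ΔQ = Q − Q2 and V' = V × (1 + αi) / 2.

```python
Q2, DQ0 = 0.7, 0.1           # preset second duty ratio and preset duty ratio difference
ALPHA1, ALPHA2 = 1.2, 1.4    # preset first / second rotation angle adjustment coefficients

def increased_rotation_angle(q_ratio, v_current):
    """q_ratio: qualified aerial image spot number ratio Q; v_current: rotation angle V."""
    dq = q_ratio - Q2
    alpha = ALPHA1 if dq <= DQ0 else ALPHA2
    return v_current * (1 + alpha) / 2   # V' = V * (1 + alpha_i) / 2

# Example 1 below: Q = 0.9 and V = 4 deg give dq = 0.2 > 0.1, so alpha2 applies and V' = 4.8 deg.
```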
In implementation, the system adjusts the rotation angle of the unmanned aerial vehicle by setting the preset duty ratio difference, which reduces the loss of ground object identification accuracy caused by degraded image acquisition when, with the camera facing the advancing direction, inertia or the interaction between wind and the unmanned aerial vehicle blades disturbs the lens as the unmanned aerial vehicle stops, and thereby further improves the identification accuracy of ground object information.
Specifically, the ground object recognition module further comprises a model updating unit which is connected with the model training unit and used for updating the ground object recognition model;
The control module is connected with the model training unit and is used for secondarily judging that the accuracy of image processing is not in accordance with the requirement when the variation of the feature quantity of the ground object identified by the ground object identification model meets the first variation condition or the second variation condition, primarily judging that the screening accuracy of the image spot data is not in accordance with the requirement when the variation of the feature quantity of the ground object identified by the model only meets the second variation condition, and secondarily judging the screening accuracy of the image spot data according to the average difference quantity of the image edge pixels;
The control module is connected with the model updating unit and is used for controlling the model updating unit to update the model according to the model updating training mode when the variation of the feature quantity of the ground features identified by the model only meets the first variation condition;
the first variation condition is that the variation of the feature quantity of the ground object identified by the model is larger than a preset first variation and smaller than or equal to a preset second variation; the second variation condition is that the variation of the feature quantity of the ground object identified by the model is larger than a preset second variation.
Optionally, a preferred embodiment of the preset first variation P1 is P1 = 5, and a preferred embodiment of the preset second variation P2 is P2 = 8.
Specifically, the variation of the feature quantity of the ground objects identified by the model is recorded as P, the difference between this variation and the preset first variation is recorded as ΔP, and ΔP = P − P1.
In implementation, the system judges the accuracy of image processing and the accuracy of filtering the image spot data by setting the preset first variation and the preset second variation, and updates the model according to the corresponding model updating training mode, so that the influence of degradation of the accuracy of identifying the ground object information caused by inaccurate secondary judgment of the accuracy of the image processing is reduced, and the improvement of the accuracy of identifying the ground object information is further realized.
Specifically, the calculation formula of the variation of the feature quantity of the ground objects identified by the model is: S = |Ka − Kb|,
wherein S is the variation of the feature quantity of the ground objects identified by the model, Ka is the number of ground object features identified by the ground object recognition model in the first channel orthophoto, and Kb is the number of ground object features identified by the ground object recognition model in the second channel orthophoto; the first channel orthophoto and the second channel orthophoto have the same size and the same type of image acquisition site.
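An illustrative sketch of the variation and of the two variation conditions follows, using the preferred values P1 = 5 and P2 = 8 from this embodiment; the function names are assumptions.

```python
P1, P2 = 5, 8   # preset first / second variation

def feature_variation(ka, kb):
    # ka, kb: feature counts identified in the first and second channel orthophotos.
    return abs(ka - kb)   # S = |Ka - Kb|

def variation_condition(s):
    if P1 < s <= P2:
        return "first"    # model is updated via the model update training mode
    if s > P2:
        return "second"   # screening accuracy is checked via the edge pixel difference
    return "none"
```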
Specifically, the model update training mode is that the model update unit updates the model according to the data corresponding call quantity of the new data type, and the data corresponding call quantity of the new data type is determined by the difference value between the change quantity of the feature quantity identified by the model and the preset first change quantity.
Specifically, the specific process of determining the corresponding call quantity of the data of the new data type through the difference value between the change quantity of the feature quantity of the ground object identified by the model and the preset first change quantity is as follows:
If ΔP ≤ ΔP0, the control module uses the preset first call quantity adjustment coefficient to adjust the data call quantity of the new data type to the first call quantity;
if ΔP > ΔP0, the control module uses the preset second call quantity adjustment coefficient to adjust the data call quantity of the new data type to the second call quantity;
the preset first call quantity adjustment coefficient is smaller than the preset second call quantity adjustment coefficient, and the data call quantity corresponding to the new data type comprises the first call quantity and the second call quantity.
Optionally, a preferred embodiment of the preset variation difference ΔP0 is ΔP0 = 2.
Specifically, the preset first call quantity adjustment coefficient is denoted β1, with β1 = 1.1, and the preset second call quantity adjustment coefficient is denoted β2, with β2 = 1.3, where 1 < β1 < β2. The data call quantity of the new data type is denoted H, and the adjusted data call quantity corresponding to the new data type is denoted H', with H' = H × (1 + 2βj) / 3, where βj is the preset j-th call quantity adjustment coefficient and j = 1, 2.
Specifically, the data call quantity of the new data type refers to the quantity of data called from the cloud platform for the data type corresponding to the new data, that is, data different from the original training data that must be included in training when the ground object recognition model is updated, for example new data input by a user while using the model. It will be appreciated that, when the model is updated, the original training data and the new data are first combined into a global data set, which is then used for training to update the model.
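A minimal sketch of this call quantity adjustment, using the preferred values ΔP0 = 2, β1 = 1.1 and β2 = 1.3 given above; the function name and the passing of P1 as a parameter are assumptions.

```python
DP0 = 2                    # preset variation difference
BETA1, BETA2 = 1.1, 1.3    # preset first / second call quantity adjustment coefficients

def adjusted_call_quantity(variation, h_current, p1=5):
    """variation: feature quantity variation; h_current: current call quantity H."""
    dp = variation - p1
    beta = BETA1 if dp <= DP0 else BETA2
    return h_current * (1 + 2 * beta) / 3   # H' = H * (1 + 2 * beta_j) / 3
```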
In implementation, the system adjusts the data call quantity of the new data type by setting the preset variation difference, which reduces the impact of incomplete model updating caused by an insufficient call quantity of the cloud platform data corresponding to the new data type, and thereby further improves the identification accuracy of ground object information.
Specifically, the control module is connected with the image cutting unit, and is configured to calculate an average difference of the image edge pixels according to the pixels of the image edge output by the image cutting unit, and secondarily determine that screening accuracy of the image spot data does not meet the requirement when the average difference of the image edge pixels is greater than a preset difference, and increase a mask radius of the image layer of the image cutting unit.
Optionally, a preferred embodiment of the preset difference Y0 is Y0 = 100 px.
Specifically, the average difference amount of the image edge pixels is denoted as Y.
Specifically, the calculation formula of the average difference of the image edge pixels is: Y = ( Σ |Xm − Xm−1| ) / n,
wherein Y is the average difference of the image edge pixels, |Xm − Xm−1| is the absolute value of the difference between the edge pixel value detected in the m-th image and that detected in the (m−1)-th image, and n is the number of detected channel orthophotos, n being a natural number greater than or equal to 1.
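As an illustrative sketch, assuming that one representative edge pixel value is recorded per detected channel orthophoto, the average difference above can be computed as follows; the function name is an assumption and the division by n follows the formula as stated.

```python
def average_edge_pixel_difference(edge_pixels):
    """edge_pixels: list [X1, X2, ..., Xn] of detected edge pixel values, one per orthophoto."""
    n = len(edge_pixels)
    if n < 2:
        return 0.0
    total = sum(abs(edge_pixels[m] - edge_pixels[m - 1]) for m in range(1, n))
    return total / n   # Y = sum(|Xm - Xm-1|) / n
```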
Specifically, the increased layer mask radius is determined by the difference between the average difference of the image edge pixels and the preset difference, which is recorded as ΔY, with ΔY = Y − Y0.
Specifically, if ΔY ≤ ΔY0, the control module adjusts the layer mask radius using the preset first radius adjustment coefficient;
if ΔY > ΔY0, the control module adjusts the layer mask radius using the preset second radius adjustment coefficient.
Optionally, a preferred embodiment of the preset difference ΔY0 is ΔY0 = 40 px.
Specifically, the preset first radius adjustment coefficient is denoted γ1, with γ1 = 1.15, and the preset second radius adjustment coefficient is denoted γ2, with γ2 = 1.25, where 1 < γ1 < γ2. The layer mask radius is denoted H, and the increased layer mask radius is denoted H', with H' = H × γh, where γh is the preset h-th radius adjustment coefficient and h = 1, 2.
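A short sketch of the radius adjustment, with the preferred values Y0 = 100 px, ΔY0 = 40 px, γ1 = 1.15 and γ2 = 1.25 from this embodiment; the function name is an assumption.

```python
Y0, DY0 = 100, 40            # preset difference and preset difference threshold (px)
GAMMA1, GAMMA2 = 1.15, 1.25  # preset first / second radius adjustment coefficients

def increased_mask_radius(avg_edge_diff, radius_current):
    """avg_edge_diff: average difference Y of image edge pixels; radius_current: mask radius H."""
    dy = avg_edge_diff - Y0
    gamma = GAMMA1 if dy <= DY0 else GAMMA2
    return radius_current * gamma   # H' = H * gamma_h
```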
Specifically, the image cutting unit adopts non-destructive cutting when cutting the image to be cut, and the layer mask radius used in the cutting process is adjusted to control the softness of the transition area.
In implementation, the system increases the radius of the mask layer by setting the preset difference, and enlarges the transition area by increasing the radius of the mask layer, so that a wider transition area is generated, the transition of the edge is smoother, and the accuracy of image cutting is improved.
Specifically, the preprocessing unit is provided with a midpoint pinch-out mode for performing distortion correction processing on the channel orthographic image data, wherein the midpoint pinch-out mode is to select midpoints at two sides of a convex edge of the channel orthographic image, and a new edge point is created at the midpoints.
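For illustration, a minimal sketch of the midpoint pinch-out step follows, under the assumption that the contour is a list of (x, y) vertices and that the convex edge point has already been located; the function name and index handling are assumptions.

```python
def midpoint_pinch_out(points, convex_index):
    """points: list of (x, y) contour vertices; convex_index: index of the convex edge point."""
    prev_pt = points[convex_index - 1]
    curr_pt = points[convex_index]
    next_pt = points[(convex_index + 1) % len(points)]
    # Midpoints of the two sides adjacent to the convex point become new edge points.
    mid_before = ((prev_pt[0] + curr_pt[0]) / 2, (prev_pt[1] + curr_pt[1]) / 2)
    mid_after = ((curr_pt[0] + next_pt[0]) / 2, (curr_pt[1] + next_pt[1]) / 2)
    return points[:convex_index] + [mid_before, curr_pt, mid_after] + points[convex_index + 1:]
```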
In implementation, the system processes the feature data to be trained by setting a midpoint pinch-out mode, so that the phenomenon of blurring or passivation of the generated image edge caused by the fact that the processing of the convex edge of the image is not in place is reduced, and the improvement of the recognition accuracy of ground object information is further realized.
Example 1
In this embodiment 1, the ground object information acquisition and identification system based on image analysis is used to acquire and identify ground object information, and the control module adjusts the rotation angle of the unmanned aerial vehicle according to the difference between the qualified aerial image spot number ratio and the preset second duty ratio. The preset duty ratio difference is denoted ΔQ0, the preset first rotation angle adjustment coefficient is denoted α1, the preset second rotation angle adjustment coefficient is denoted α2, and the rotation angle of the unmanned aerial vehicle is denoted V, where 1 < α1 < α2, α1 = 1.2, α2 = 1.4, ΔQ0 = 0.1, V = 4°, Q = 0.9, Q2 = 0.7, and ΔQ = Q − Q2.
In this embodiment 1, ΔQ = 0.2 is obtained; since ΔQ > ΔQ0, the control module adjusts the rotation angle of the unmanned aerial vehicle using the preset second rotation angle adjustment coefficient, calculating V' = 4 × (1 + 1.4) / 2 = 4.8°.
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will be within the scope of the present invention.

Claims (10)

1. The utility model provides a ground object information acquisition identification system based on image analysis which characterized in that includes:
the information acquisition module is used for respectively acquiring unmanned aerial vehicle aerial image spot data and channel orthophoto data;
the information processing module is connected with the information acquisition module and comprises a vector generation unit, a preprocessing unit and an image cutting unit, wherein the vector generation unit is used for screening the aerial image spot data of the unmanned aerial vehicle to output vector contour data, the preprocessing unit is used for preprocessing the channel orthographic image data to output an image to be cut, and the image cutting unit is respectively connected with the vector generation unit and the preprocessing unit and is used for cutting the image to be cut by using the vector contour data to output a model data set;
The cutting range of the image to be cut is determined through the vector outline data;
The ground object recognition module is connected with the information processing module and used for recognizing objects and geological conditions in the ground object image, and comprises a model training unit which is connected with the image cutting unit and used for training a training data set in the model data set to generate a ground object recognition model;
The control module is respectively connected with the information acquisition module, the information processing module and the ground object recognition module, and is used for adjusting the rotation angle of the unmanned aerial vehicle or judging whether the accuracy of data processing meets the requirements according to the change quantity of the ground object characteristic quantity recognized by the model when judging that the accuracy of ground object recognition does not meet the requirements according to the qualified aerial image spot quantity ratio, and adjusting the model updating training mode of the ground object recognition model or adjusting the mask radius of the image layer according to the average difference quantity of the image edge pixels when judging that the accuracy of ground object recognition does not meet the requirements.
2. The image analysis-based terrain information acquisition and recognition system of claim 1, wherein the information processing module further comprises a scalar generation component to filter the unmanned aerial vehicle spot data to output scalar picture data.
3. The ground feature information acquisition and recognition system based on image analysis according to claim 2, wherein the information acquisition module comprises an unmanned aerial vehicle for changing the acquisition position of the aerial image and an image acquisition unit for acquiring the aerial image spots of the unmanned aerial vehicle;
The control module is connected with the image acquisition unit and is used for acquiring the number of qualified aerial image spots and the total number of aerial image spots to calculate the proportion of the number of the qualified aerial image spots, judging that the identification accuracy of the ground object information is not in accordance with the requirement when the proportion of the number of the qualified aerial image spots meets the first proportion condition or the second proportion condition, and preliminarily judging that the accuracy of the image processing is not in accordance with the requirement when the proportion of the number of the qualified aerial image spots only meets the first proportion condition, and carrying out secondary judgment on the accuracy of the image processing according to the variation of the feature quantity of the ground object identified by the model;
the control module is connected with the unmanned aerial vehicle and is used for increasing the rotation angle of the unmanned aerial vehicle when the qualified aerial image spot number ratio only meets the second duty ratio condition;
The first duty ratio condition is that the qualified aerial image spot number ratio is greater than a preset first duty ratio and less than or equal to a preset second duty ratio; the second duty ratio condition is that the qualified aerial image spot number ratio is greater than the preset second duty ratio;
The increased rotation angle of the unmanned aerial vehicle is determined by the difference between the qualified aerial image spot number ratio and the preset second duty ratio; a first ray is formed with the center point of the unmanned aerial vehicle as its endpoint and passing through the camera of the unmanned aerial vehicle, and a second ray is formed with the center point as its endpoint and pointing along the advancing direction of the unmanned aerial vehicle; when the included angle between the first ray and the second ray is an acute angle, the unmanned aerial vehicle rotates in the direction away from the first ray, and the maximum rotation angle is the included angle formed by the second ray and the reverse extension line of the first ray.
4. The system for collecting and identifying ground object information based on image analysis according to claim 3, wherein the calculation formula of the qualified aerial image spot number ratio is: U = Ut / Ug,
wherein U is the qualified aerial image spot number ratio, Ut is the number of qualified aerial image spots, and Ug is the total number of aerial image spots; and when the number of ground object features identified by the ground object recognition model in an aerial image spot is the same as the number of ground object features in the actual scene corresponding to that spot in the historical data and exceeds a standard number threshold, the aerial image spot identified by the ground object recognition model is judged to be qualified.
5. The system for collecting and identifying ground object information based on image analysis according to claim 4, wherein the ground object identification module further comprises a model updating unit connected with the model training unit for updating the ground object identification model;
The control module is connected with the model training unit and is used for secondarily judging that the accuracy of image processing is not in accordance with the requirement when the variation of the feature quantity of the ground object identified by the ground object identification model meets the first variation condition or the second variation condition, primarily judging that the screening accuracy of the image spot data is not in accordance with the requirement when the variation of the feature quantity of the ground object identified by the model only meets the second variation condition, and secondarily judging the screening accuracy of the image spot data according to the average difference quantity of the image edge pixels;
The control module is connected with the model updating unit and is used for controlling the model updating unit to update the model according to the model updating training mode when the variation of the feature quantity of the ground features identified by the model only meets the first variation condition;
the first variation condition is that the variation of the feature quantity of the ground object identified by the model is larger than a preset first variation and smaller than or equal to a preset second variation; the second variation condition is that the variation of the feature quantity of the ground object identified by the model is larger than a preset second variation.
6. The system for collecting and identifying feature information based on image analysis according to claim 5, wherein the calculation formula of the variation of the feature quantity identified by the model is: S = |Ka − Kb|,
wherein S is the variation of the feature quantity of the ground objects identified by the model, Ka is the number of ground object features identified by the ground object recognition model in the first channel orthophoto, and Kb is the number of ground object features identified by the ground object recognition model in the second channel orthophoto; the first channel orthophoto and the second channel orthophoto have the same size and the same type of image acquisition site.
7. The system for collecting and identifying feature information based on image analysis according to claim 6, wherein the model update training mode is that the model update unit updates the model by the number of calls corresponding to the data of the new data type, and the number of calls corresponding to the data of the new data type is determined by the difference between the variation of the feature number identified by the model and the preset first variation.
8. The system of claim 7, wherein the control module is connected to the image cropping unit, and is configured to calculate an average difference of pixels of the image edge according to the pixels of the image edge output by the image cropping unit, and determine that screening accuracy of the image patch data is not satisfactory when the average difference of the pixels of the image edge is greater than a preset difference, and increase a radius of a mask of the image layer of the image cropping unit.
9. The system of claim 8, wherein the increased layer mask radius is determined by a difference between an average difference of the image edge pixels and the predetermined difference.
10. The system according to claim 1, wherein the preprocessing unit is provided with a midpoint pinch-out mode for performing distortion correction processing on the channel orthographic image data, wherein the midpoint pinch-out mode is to select midpoints on two sides of a convex edge of the channel orthographic image, and create a new edge point at the midpoints.
CN202410480880.0A 2024-04-22 2024-04-22 Ground object information acquisition and recognition system based on image analysis Active CN118097474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410480880.0A CN118097474B (en) 2024-04-22 2024-04-22 Ground object information acquisition and recognition system based on image analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410480880.0A CN118097474B (en) 2024-04-22 2024-04-22 Ground object information acquisition and recognition system based on image analysis

Publications (2)

Publication Number Publication Date
CN118097474A (en) 2024-05-28
CN118097474B (en) 2024-06-21

Family

ID=91142357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410480880.0A Active CN118097474B (en) 2024-04-22 2024-04-22 Ground object information acquisition and recognition system based on image analysis

Country Status (1)

Country Link
CN (1) CN118097474B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018107939A1 (en) * 2016-12-14 2018-06-21 国家***第二海洋研究所 Edge completeness-based optimal identification method for image segmentation
CN114202695A (en) * 2021-12-15 2022-03-18 梁吟君 Remote sensing image automatic identification system based on artificial intelligence technology
CN114399692A (en) * 2022-01-13 2022-04-26 武汉微集思科技有限公司 Illegal construction identification monitoring detection method and system based on deep learning
WO2022141145A1 (en) * 2020-12-30 2022-07-07 深圳技术大学 Object-oriented high-resolution remote sensing image multi-scale segmentation method and system
CN117576394A (en) * 2023-11-17 2024-02-20 重庆市地理信息和遥感应用中心(重庆市测绘产品质量检验测试中心) Method for improving semantic segmentation of place class by using global information

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018107939A1 (en) * 2016-12-14 2018-06-21 国家***第二海洋研究所 Edge completeness-based optimal identification method for image segmentation
WO2022141145A1 (en) * 2020-12-30 2022-07-07 深圳技术大学 Object-oriented high-resolution remote sensing image multi-scale segmentation method and system
CN114202695A (en) * 2021-12-15 2022-03-18 梁吟君 Remote sensing image automatic identification system based on artificial intelligence technology
CN114399692A (en) * 2022-01-13 2022-04-26 武汉微集思科技有限公司 Illegal construction identification monitoring detection method and system based on deep learning
CN117576394A (en) * 2023-11-17 2024-02-20 重庆市地理信息和遥感应用中心(重庆市测绘产品质量检验测试中心) Method for improving semantic segmentation of place class by using global information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘洁; 赖格英; 宋月君; 廖凯涛: "水土保持措施图斑遥感识别提取研究进展" [Research progress on remote sensing identification and extraction of soil and water conservation measure image spots], 水土保持应用技术, no. 05, 20 October 2020 (2020-10-20) *
林志玮; 丁启禄; 涂伟豪; 林金石; 刘金福; 黄炎和: "基于多元HoG及无人机航拍图像的植被类型识别" [Vegetation type recognition based on multivariate HoG and UAV aerial images], 森林与环境学报, no. 04, 17 October 2018 (2018-10-17) *

Also Published As

Publication number Publication date
CN118097474B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
US7809191B2 (en) Image processing system and image processing method for aerial photograph
CN115439424B (en) Intelligent detection method for aerial video images of unmanned aerial vehicle
Ugelmann Automatic breakline detection from airborne laser range data
CN110097536A (en) Hexagon bolt looseness detection method based on deep learning and Hough transformation
CN105718872B (en) Auxiliary method and system for rapidly positioning lanes on two sides and detecting vehicle deflection angle
CN110473221B (en) Automatic target object scanning system and method
CN114973028B (en) Aerial video image real-time change detection method and system
CN115619623A (en) Parallel fisheye camera image splicing method based on moving least square transformation
CN115797775A (en) Intelligent illegal building identification method and system based on near-earth video image
CN118097474B (en) Ground object information acquisition and recognition system based on image analysis
CN113963314A (en) Rainfall monitoring method and device, computer equipment and storage medium
CN110826364A (en) Stock position identification method and device
CN116863357A (en) Unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method
CN109166081B (en) Method for adjusting target brightness in video visibility detection process
CN108335321B (en) Automatic ground surface gravel size information extraction method based on multi-angle photos
CN115797310A (en) Method for determining inclination angle of photovoltaic power station group string and electronic equipment
CN116188348A (en) Crack detection method, device and equipment
CN117612038B (en) Mining area vegetation carbon sink fine calculation method based on unmanned aerial vehicle image
CN117689481B (en) Natural disaster insurance processing method and system based on unmanned aerial vehicle video data
FAN et al. Intelligent antenna attitude parameters measurement based on deep learning ssd model
CN114972358B (en) Artificial intelligence-based urban surveying and mapping laser point cloud offset detection method
CN117516487B (en) Medium-small river video flow test method
CN116503767B (en) River course floater recognition system based on semantic image processing
Ma et al. Research on the Algorithm of Building Object Boundary Extraction Based on Oblique Photographic Model
CN115908384A (en) Accumulated water depth detection method based on visual scale

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant