CN116563699A - Forest fire positioning method combining sky map and mobile phone image - Google Patents

Forest fire positioning method combining sky map and mobile phone image

Info

Publication number
CN116563699A
Authority
CN
China
Prior art keywords
image
forest fire
point
shooting
points
Prior art date
Legal status
Pending
Application number
CN202310312355.3A
Other languages
Chinese (zh)
Inventor
朱军
廉慧洁
郭煜坤
游继钢
谢亚坤
陈佩菁
Current Assignee
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN202310312355.3A priority Critical patent/CN116563699A/en
Publication of CN116563699A publication Critical patent/CN116563699A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/188 Vegetation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation
    • G06T2207/30188 Vegetation; Agriculture
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
    • Y02A40/28 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture specially adapted for farming

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a forest fire positioning method combining a sky map and a mobile phone image, which solves the prior-art problem that forest fires cannot be accurately positioned when the acquired forest photos are of poor clarity. A plurality of sky map marker point coordinates are acquired with a smart phone having a sky map positioning function and sensors, forest fire point images are shot at the sky map marker point coordinates, and the spatial multi-pose information of the moving-shot forest fire point images is accurately acquired at the same time; based on the accurately acquired spatial multi-pose information, the forest fire points in the plurality of shot forest fire point images are progressively and precisely located, giving the positioning coordinates of the forest fire point. The invention is used for positioning forest fire points.

Description

Forest fire positioning method combining sky map and mobile phone image
Technical Field
A forest fire point positioning method combining a sky map and a mobile phone image is used for positioning forest fire points, and belongs to the field of forest fire emergency response.
Background
Forests are an important component of the terrestrial ecosystem and are of great significance for maintaining ecological balance, improving the ecological environment and preserving biodiversity. In recent years, however, forest fires have occurred from time to time under the influence of factors such as global warming, farming within forest areas and frequent forest tourism. Because forest fires are usually sudden, hard to predict, highly dangerous and difficult to prevent and control, a fire that is not discovered and contained in time can spread rapidly and cause immeasurable losses. Therefore, locating the fire point effectively, accurately and rapidly as soon as a forest fire breaks out, and providing precise geographic coordinates of the fire point, can help firefighting command departments make timely and correct firefighting decisions and reduce the casualties and economic losses caused by forest fires; this is one of the important problems to be solved in current grassroots forest fire prevention and monitoring.
At present, forest fire monitoring in China can be divided into four levels by spatial position, namely satellite monitoring, aerial monitoring, near-ground observation and ground patrol, which together form a basically three-dimensional forest fire monitoring system. Satellite monitoring offers wide coverage, high frequency and 24-hour all-weather operation, but satellite remote sensing images have low temporal and spatial resolution: the deviation between the located fire point and the actual fire point reaches the kilometer level, the whole computation takes about half an hour, and this poor real-time performance easily delays the best opportunity for forest fire suppression. Aerial monitoring includes aircraft patrol and unmanned aerial vehicle patrol and offers a wide survey range, a wide field of view and flexible maneuvering, but aircraft patrol is expensive, some forest stands lack flight conditions, and unmanned aerial vehicles have weak carrying capacity and are easily affected by the environment or the fire scene during flight. Near-ground observation mainly comprises manual observation and forest video monitoring, but watchtower positions are fixed, the theoretical monitoring range is limited (10-15 km), construction is expensive, and terrain-induced blind angles and gaps make full coverage of the forest difficult. Ground patrollers have a large patrol range, strong mobility and flexible maneuvering and can penetrate deep into a forest area to extend the patrol range, but when a forest fire is large or the fire site is unreachable, it is difficult for forest rangers to give an accurate fire position from a long distance.
With the rapid development of computer communication, multimedia technology and positioning technology, smart phones incorporating GPS and various sensors have spread rapidly worldwide, and a great deal of research and application in the surveying and mapping geographic information field is moving toward mass use by means of smart phones. In the prior art, however, the sensor precision of a smart phone is not high, and the precision and quality of the measured parameters and digital images are lower than those of professional surveying equipment. Although a smart phone collects images conveniently and portably, the photo information it collects is of poor clarity, so accurate positioning of forest fires cannot be achieved; for this reason the prior art does not use smart phones for forest fire positioning.
Therefore, the prior art has the following technical problems:
1. accurate positioning of forest fires cannot be realized when the collected forest photos are of poor clarity;
2. the measuring equipment has poor mobility and portability.
Disclosure of Invention
The invention aims to provide a forest fire positioning method combining a sky map and a mobile phone image, solving the prior-art problem that forest fires cannot be accurately positioned when the acquired forest photos are of poor clarity.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
a forest fire positioning method combining a sky map and a mobile phone image comprises the following steps:
S1, acquiring a plurality of sky map marker point coordinates with a smart phone having a sky map positioning function and sensors, shooting forest fire point images at the sky map marker point coordinates, and accurately acquiring the spatial multi-pose information of the moving-shot forest fire point images;
and S2, progressively and precisely locating the forest fire points in the plurality of shot forest fire point images based on the accurately acquired spatial multi-pose information, to obtain the forest fire point positioning coordinates.
Further, the specific steps of the step S1 are as follows:
S1.1, screening a plurality of shooting station positions based on spatial semantic constraints by combining the sky map and the real environment, obtaining the sky map marker point coordinates, and shooting forest fire point images in sequence at the sky map marker point coordinates based on a multi-pose shooting model, wherein the overlap between the shot forest fire point images is 60-80%, and one point of the forest fire region in the first forest fire point image shot is manually selected as an important feature point;
S1.2, acquiring mobile shooting parameters of the smart phone when shooting a forest fire point image;
S1.3, correcting the positioning result of the smart phone based on the mobile shooting parameters and the sky map marker point coordinates, and optimizing the attitude information of the smart phone through multi-pose shooting, so as to accurately acquire the pose information of the moving-shot forest fire point images.
Further, the specific steps of the step S1.1 are as follows:
S1.11, firstly, positioning the real-time position of the forest patroller on the sky map;
S1.12, selecting a plurality of marked ground features as shooting points based on the real-time position of the forest patroller on the sky map, the spatial semantic constraints and the real environment, and screening a plurality of shooting station positions to obtain a plurality of marker points, wherein the spatial semantic constraints refer to the heterogeneity between the marked ground features and the other ground features in the sky map image, covering regionality, visibility and markedness;
S1.13, acquiring the coordinates of the screened marker points on the sky map to obtain the sky map marker point coordinates, and shooting forest fire point images in sequence at the sky map marker point coordinates based on the multi-pose shooting model, wherein the multi-pose shooting model is a mode combining vertical shooting and horizontal shooting of the same fire area.
Further, the specific steps of the step S1.2 are as follows:
the mobile shooting parameters of the smart phone comprise: interior orientation elements and exterior orientation elements;
interior orientation elements:
the interior orientation elements are determined by the smart phone and are the parameters describing the relative position between the shooting center and the forest fire point image; they comprise three parameters: the perpendicular distance $f$ from the shooting center $S$ to the image, i.e. the principal distance, and the coordinates $(x_0, y_0)$ of the principal point $o$ in the frame coordinate system;
exterior orientation elements:
the exterior orientation elements of an image determine the spatial position and attitude parameters of the forest fire point image at the moment of shooting; each forest fire point image has six exterior orientation elements, namely 3 line elements, the coordinates $X_S$, $Y_S$, $Z_S$ of the shooting center $S$ in the object-space rectangular coordinate system, and 3 angle elements describing the attitude of the image at the moment of shooting, namely the course angle $\varphi$, the pitch angle $\omega$ and the roll angle $\kappa$; the exterior orientation elements of an image are provided by the smart phone: the line elements are acquired with the location service, and the angle elements of the photo are computed jointly from the return values of the acceleration sensor, the magnetic field sensor and the orientation sensor;
in an Android phone, the sensor outputs are all given in the local coordinate system of the smart phone; the smart phone coordinate system is a relative coordinate system defined by the phone screen; the origin of the inertial coordinate system coincides with that of the smart phone coordinate system, and its axes are parallel to those of the world coordinate system, so it can be regarded as an intermediate state between the phone coordinate system and the world coordinate system; the conversion from the smart phone coordinate system to the world coordinate system therefore goes through the inertial coordinate system, and the conversion formulas are as follows:
rotation around the z-axis by the angle $\varphi$ gives the rotation matrix

$$R_z(\varphi)=\begin{pmatrix} \cos\varphi & \sin\varphi & 0 \\ -\sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

rotation around the x-axis by the angle $\omega$ gives the rotation matrix

$$R_x(\omega)=\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\omega & \sin\omega \\ 0 & -\sin\omega & \cos\omega \end{pmatrix}$$

rotation around the y-axis by the angle $\kappa$ gives the rotation matrix

$$R_y(\kappa)=\begin{pmatrix} \cos\kappa & 0 & -\sin\kappa \\ 0 & 1 & 0 \\ \sin\kappa & 0 & \cos\kappa \end{pmatrix}$$

combining the 3 basic rotations in different orders gives the rotation matrix between the two coordinate systems, the order being any one of z-x-y, z-y-x, x-z-y, x-y-z, y-z-x and y-x-z; taking the z-x-y order, the rotation matrix is

$$R = R_y(\kappa)\,R_x(\omega)\,R_z(\varphi)$$

therefore, the rotation relationship from the smartphone coordinate system to the world coordinate system is

$$(x',\, y',\, z')^{T} = R^{T}\,(x,\, y,\, z)^{T}$$

wherein $(x', y', z')$ are the three-dimensional coordinates of the point in the world coordinate system, $(x, y, z)$ are the three-dimensional coordinates of the point in the smart phone coordinate system, and $T$ denotes the transpose.
Further, the specific steps of the step S1.3 are as follows:
step S1.31. Positioning optimization
Firstly, let the plane coordinates of the several shooting stations acquired by the smart phone be $S_1(lon_1, lat_1)$, $S_2(lon_2, lat_2)$, $S_3(lon_3, lat_3)$, ..., and let the plane coordinates of the shooting stations selected manually on the sky map, i.e. the sky map marker point coordinates, be $S'_1(Lon_1, Lat_1)$, $S'_2(Lon_2, Lat_2)$, $S'_3(Lon_3, Lat_3)$, ...;
Then, calculating the difference value between the positioning of the smart phone and the coordinates of the mark points of the sky map in the coordinate data of each shooting position as a correction number;
finally, the arithmetic means $\Delta lon$ and $\Delta lat$ of the longitude and latitude corrections are calculated respectively as the final corrections of the phone positioning:

$$\Delta lon = \frac{1}{N}\sum_{i=1}^{N}\left(Lon_i - lon_i\right), \qquad \Delta lat = \frac{1}{N}\sum_{i=1}^{N}\left(Lat_i - lat_i\right)$$

wherein $N \in \mathbb{N}^{*}$, the non-zero natural integers, is the number of shooting stations;
adding the final corrections to the shooting station coordinate data of the smart phone gives the corrected geodetic coordinates $(B, L, A)$ of the corresponding shooting station, i.e. the optimized line elements of the exterior orientation, and the geodetic coordinates are converted into space rectangular coordinates by the following formulas:

$$X = (N + A)\cos B\cos L, \qquad Y = (N + A)\cos B\sin L, \qquad Z = \left(N(1 - e_1^{2}) + A\right)\sin B$$

wherein $e_1$ is the first eccentricity and $N$ is the radius of curvature in the prime vertical;
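By way of illustration, this conversion can be sketched in a few lines of Python; the sketch is not part of the disclosure, and the WGS-84 ellipsoid constants and the function name are assumptions for the example only:

```python
import math

# Illustrative sketch only: convert corrected geodetic coordinates (B, L, A)
# to space rectangular coordinates, assuming the WGS-84 ellipsoid constants.
WGS84_SEMI_MAJOR = 6378137.0          # semi-major axis a (m)
WGS84_E1_SQUARED = 6.69437999014e-3   # first eccentricity squared, e1^2

def geodetic_to_rectangular(B_deg: float, L_deg: float, A_m: float):
    """Return (X, Y, Z) from latitude B, longitude L (degrees) and altitude A (m)."""
    B = math.radians(B_deg)
    L = math.radians(L_deg)
    # N: radius of curvature in the prime vertical
    N = WGS84_SEMI_MAJOR / math.sqrt(1.0 - WGS84_E1_SQUARED * math.sin(B) ** 2)
    X = (N + A_m) * math.cos(B) * math.cos(L)
    Y = (N + A_m) * math.cos(B) * math.sin(L)
    Z = (N * (1.0 - WGS84_E1_SQUARED) + A_m) * math.sin(B)
    return X, Y, Z
```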
step S1.32. gesture optimization
Acquiring forest fire point images shot vertically and transversely in the same shooting position, removing theoretical angle differences of the two shots, and then averaging to obtain an average value as a final attitude angle to realize the optimization of attitude parameters, namely when the vertical shooting is changed into the transverse shooting, the rolling angle and the pitch angle are different by 90 degrees, so that the average value is respectively obtained for the two images after the 90-degree difference is removed, and the optimized attitude parameters are obtained;
and S1.33, accurately acquiring pose information of the forest fire point image which is shot in a moving way based on positioning optimization and pose optimization.
Further, the specific steps of the step S2 are as follows:
S2.1, identifying the same forest fire point in the forest fire point images shot at the plurality of shooting stations based on the SIFT algorithm and the RANSAC algorithm, to obtain the forest fire point coordinates;
S2.2, acquiring the object point coordinates corresponding to the homonymous image points of the forest fire point in two forest fire point images, based on the forest fire point coordinates and a two-image forest fire positioning algorithm;
and S2.3, progressively and precisely locating the object point coordinates corresponding to the several groups of homonymous forest fire image points based on the combination of multiple forest fire point images, to obtain the forest fire point positioning coordinates.
Further, the specific steps of the step S2.1 are as follows:
S2.11, preprocessing each forest fire point image: the smart phone introduces distortion when shooting a forest fire point image, so the distortion produced by the wide-angle lens is corrected;
S2.12, identifying feature points in each preprocessed forest fire point image with the SIFT algorithm and coarsely matching each pair of forest fire point images shot at adjacent times, wherein of the two images the earlier one serves as the reference image and the later one as the image to be matched;
S2.13, denoising the coarsely matched feature points by neighborhood voting to obtain an initial inlier set;
step S2.14, screening the initial inlier set obtained by coarse matching with an improved RANSAC algorithm to obtain more than 6 pairs of accurate matching points, i.e. the correspondence between the feature points of the first forest fire point image, serving as the reference image, and the second forest fire point image, serving as the image to be matched; transferring the feature point coordinates of the first forest fire point image to the second forest fire point image, then taking the second forest fire point image as the reference image for the third forest fire point image, and so on; if the more than 6 pairs of matching points contain the important feature point, go to step S2.15; otherwise, establish the feature point correspondence of each forest fire point image based on the important feature point and then go to step S2.15;
and S2.15, calculating the forest fire point coordinates on the subsequently shot forest fire point images from the forest fire point manually selected when the first forest fire point image was shot, based on the feature point correspondence of each forest fire point image.
Further, the specific steps of the step S2.12 are as follows:
firstly, noise points in each forest fire point image are removed by Gaussian blur and multi-scale images are created, and the multi-scale space is created from the scale images and the image features enhanced by the Gaussian difference; that is, a Gaussian pyramid of each forest fire point image is formed from the multi-scale images, and subtracting the image pixels of two adjacent layers in the same octave of the Gaussian pyramid gives the difference-of-Gaussians pyramid, i.e. the multi-scale space:

$$L(x, y, \sigma) = G(x, y, \sigma) * I(x, y) \tag{8}$$

$$G(x, y, \sigma) = \frac{1}{2\pi\sigma^{2}}\exp\left(-\frac{x^{2} + y^{2}}{2\sigma^{2}}\right) \tag{9}$$

$$D(x, y, \sigma) = \left(G(x, y, k\sigma) - G(x, y, \sigma)\right) * I(x, y) \tag{10}$$

wherein $I(x, y)$ is the two-dimensional image of each forest fire point image to be detected; $L(x, y, \sigma)$ is the Gaussian scale space of a forest fire point image, i.e. the Gaussian pyramid or Gaussian image; $G(x, y, \sigma)$ is the Gaussian function; $\sigma$ is the scale space factor; $k$ is the multiple between adjacent scale spaces; and $D(x, y, \sigma)$ is the difference-of-Gaussians pyramid, i.e. the multi-scale space;
secondly, each pixel value of a Gaussian image in the multi-scale space is compared with the 26 surrounding pixel values, and if the pixel is the highest or lowest among its neighbors it is considered a candidate feature point of the forest fire point image in that scale space;
a second-order Taylor expansion of the scale space is calculated for each candidate feature point, and if the result is smaller than a threshold the point is considered a low-contrast feature point and eliminated, the points remaining after elimination being the feature points;
then, a direction parameter is assigned to each feature point using the gradient direction distribution of the neighborhood pixels of the retained feature points: the gradient directions and gradient magnitudes of all pixels are counted within a circle centered on the feature point whose radius is 1.5 times the scale of the Gaussian image in the multi-scale space where the feature point lies, and a gradient histogram is created, the peak of the histogram representing the main direction of the neighborhood gradient at the key point, i.e. the direction of the feature point, and the directions reaching 80% of the maximum being taken as auxiliary directions;
finally, a unique fingerprint, the feature point descriptor, is generated for each feature point from the main gradient direction, the auxiliary gradient directions and the gradient magnitudes of the neighboring pixels; the distance between each feature point descriptor in the image to be matched and each feature point descriptor in the reference image is calculated, all results obtained for each feature point descriptor are sorted, and the closest distance is taken as a matching point, giving the coarse matching result.
The specific steps of the step S2.13 are as follows:
after coarse matching, the distance $d$ and the main-direction angle difference $\Delta\theta$ of any two feature points in each forest fire point image are calculated respectively and each is normalized as a row vector; after normalization, the distance inner product $d_{ot1}$ and the main-direction angle inner product $d_{ot2}$ of each pair of matching points of the reference image and the image to be matched are calculated; finally, with the set thresholds, whether the distance inner product and the main-direction angle inner product of a pair of matching points are smaller than the thresholds is compared, and if so the matching pair is put into the inlier set, giving the initial inlier set, and otherwise into the outlier set:

$$d = \sqrt{(x_i - x_j)^{2} + (y_i - y_j)^{2}} \tag{11}$$

$$\Delta\theta = \theta_i - \theta_j \tag{12}$$

$$d_{ot1} = d_{ot}\left(im1(x_U, y_U),\ im2(x_u, y_u)\right), \qquad d_{ot2} = d_{ot}\left(im1(\theta_U),\ im2(\theta_u)\right)$$

wherein $i$ and $j$ are any pair of initial matching points on the same forest fire point image; $(x_U, y_U, \theta_U)$ and $(x_u, y_u, \theta_u)$ are the pixel coordinates and main directions of the corresponding matching points $U$ and $u$ of the reference image $im1$ and the image to be matched $im2$; $d_{ot}$ denotes the inner product between the reference image $im1$ and the image to be matched $im2$; and $\theta_i$, $\theta_j$ are the main directions of any two feature points $i$ and $j$ in each forest fire point image.
The specific steps of the step S2.14 are as follows:
firstly, 4 non-collinear sample data are randomly extracted from the feature point set obtained by the SIFT algorithm, and the $3\times 3$ transformation matrix $H$ is calculated and recorded as model $M$; then the projection errors of all data with respect to model $M$ are calculated, and a point is added to the initial inlier set if its projection error is smaller than the threshold; when the number of elements in the initial inlier set $Q$ is larger than that of the optimal inlier set $Q\_best$, $Q\_best = Q$ is updated; it is then judged whether the number of iterations exceeds $K$, and if so the loop exits, otherwise the iteration count is increased by 1 and the above operations are repeated until the iteration ends, eliminating the abnormal data and giving the accurate matching points, i.e. the correspondence between each reference image and the feature points of its image to be matched;

$$s\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = H\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}, \qquad H = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1 \end{pmatrix} \tag{13}$$

wherein $(x, y)$ is the position of a feature point of the image to be matched, $(x', y')$ is the position of the corresponding reference image feature point, $s$ is a scale parameter and $H$ is the transformation matrix;
obtaining the correspondence between each reference image and the feature points of its image to be matched identifies the position of the same forest fire point in each forest fire point image, giving the forest fire point coordinates in each forest fire point image.
Further, the specific steps of the step S2.2 are as follows:
the same forest area is shot from the points $S_1$ and $S_2$, giving a left and a right image with an overlap of more than 60%; $S_1 S_2$ is the shooting baseline, $S_1 o_1$ is the principal optical axis of the left station and $S_2 o_2$ that of the right station, and the images of the forest fire point $P$ on the left and right photographs are $p_1$ and $p_2$ respectively; a self-developed three-dimensional electronic compass measures in real time the course angle $\varphi$, pitch angle $\omega$ and roll angle $\kappa$ of the two smart phones at the moment of shooting, while the coordinates $(X_S, Y_S, Z_S)$ of the two shooting centers $S_1$ and $S_2$ are measured accurately; here the course angle $\varphi$ is the angle between the principal optical axis and true north, the course angles for the images $p_1$, $p_2$ being $\varphi_1$, $\varphi_2$; the pitch angle $\omega$ is the angle between the axis and the vertical plane, the pitch angles for $p_1$, $p_2$ being $\omega_1$, $\omega_2$; and the roll angle $\kappa$ is the angle between the axis and the horizontal plane, the roll angles for $p_1$, $p_2$ being $\kappa_1$, $\kappa_2$;
the shooting center, the image point and the object point, i.e. the forest fire point, satisfy the collinearity equation and form a bundle of rays; the collinearity equation is obtained from the rotation relationship between the smart phone coordinate system and the world coordinate system; two-image analysis is carried out by the bundle method with the bundle of rays as the adjustment unit, and the three-dimensional attitude angles acquired by the three-dimensional electronic compass at the moment of shooting are used as initial values to optimize the spatial position and attitude parameters, i.e. the pose information of the moving-shot forest fire point images is accurately acquired; after optimization, all homonymous image points of the left and right forest fire point images and the corresponding object points, i.e. the forest fire point coordinates, are solved from the collinearity equations:

$$x_L - x_{L0} = -f_L\,\frac{a_{L1}(X - X_{S_1}) + b_{L1}(Y - Y_{S_1}) + c_{L1}(Z - Z_{S_1})}{a_{L3}(X - X_{S_1}) + b_{L3}(Y - Y_{S_1}) + c_{L3}(Z - Z_{S_1})}, \qquad y_L - y_{L0} = -f_L\,\frac{a_{L2}(X - X_{S_1}) + b_{L2}(Y - Y_{S_1}) + c_{L2}(Z - Z_{S_1})}{a_{L3}(X - X_{S_1}) + b_{L3}(Y - Y_{S_1}) + c_{L3}(Z - Z_{S_1})}$$

$$x_R - x_{R0} = -f_R\,\frac{a_{R1}(X - X_{S_2}) + b_{R1}(Y - Y_{S_2}) + c_{R1}(Z - Z_{S_2})}{a_{R3}(X - X_{S_2}) + b_{R3}(Y - Y_{S_2}) + c_{R3}(Z - Z_{S_2})}, \qquad y_R - y_{R0} = -f_R\,\frac{a_{R2}(X - X_{S_2}) + b_{R2}(Y - Y_{S_2}) + c_{R2}(Z - Z_{S_2})}{a_{R3}(X - X_{S_2}) + b_{R3}(Y - Y_{S_2}) + c_{R3}(Z - Z_{S_2})}$$

wherein $(x_L, y_L)$ and $(x_R, y_R)$ are the image coordinates of the image points $p_1$, $p_2$ of the forest fire point $P$ on the left and right images, obtained directly from the matched left and right forest fire point images; $(x_{L0}, y_{L0}, f_L)$ and $(x_{R0}, y_{R0}, f_R)$ are the interior orientation elements of the left and right cameras; $(X_{S_1}, Y_{S_1}, Z_{S_1})$ and $(X_{S_2}, Y_{S_2}, Z_{S_2})$ are the coordinates of the left and right shooting centers $S_1$, $S_2$ in the photogrammetric space rectangular coordinate system, obtained through GPS positioning of the mobile terminal; $(a_{L1}, b_{L1}, c_{L1})$, $(a_{L2}, b_{L2}, c_{L2})$ and $(a_{L3}, b_{L3}, c_{L3})$ are the parameters of the first, second and third rows of the rotation matrix $R1$ of the left image after optimization of the spatial position and attitude parameters, and $(a_{R1}, b_{R1}, c_{R1})$, $(a_{R2}, b_{R2}, c_{R2})$ and $(a_{R3}, b_{R3}, c_{R3})$ those of the rotation matrix $R2$ of the right image, the rotation matrices $R1$ and $R2$ being obtained from the rotation matrix $R$; and $(X, Y, Z)$ are the coordinates in the photogrammetric coordinate system of the object point corresponding to a pair of left and right homonymous image points; with 6 or more pairs of homonymous image points measured, the three-dimensional attitude angles of the left and right forest fire point images and the coordinates of the object points corresponding to the homonymous image points are solved by the least squares adjustment principle, giving the object point coordinates $(X, Y, Z)$ corresponding to the homonymous image points of the forest fire point.
Further, the specific steps of the step S2.3 are as follows:
each pair of reference image and image to be matched undergoes two-image analysis by the bundle method to give one group of object point coordinates $(X, Y, Z)$ corresponding to the homonymous forest fire points; imaging the forest fire point $n$ times, with $n \geq 3$, thus gives $m$ groups of fire point positioning coordinates:
the centroid of the $m$ groups of points is solved as the estimate of the coordinates of the forest fire point $P$; assuming that the coordinates of the $i$-th point are $(X_i, Y_i, Z_i)$, the coordinates of the centroid are

$$X_P = \frac{1}{m}\sum_{i=1}^{m} X_i, \qquad Y_P = \frac{1}{m}\sum_{i=1}^{m} Y_i, \qquad Z_P = \frac{1}{m}\sum_{i=1}^{m} Z_i$$

the finally solved $(X_P, Y_P, Z_P)$ are the positioning coordinates of the forest fire point.
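As a simple illustration (not part of the disclosure; the function name is hypothetical), the centroid estimate can be computed as follows:

```python
import numpy as np

# Illustrative sketch: take the centroid of the m groups of solved coordinates
# as the final estimate (X_P, Y_P, Z_P) of the forest fire point P.
def fire_point_estimate(points: np.ndarray) -> np.ndarray:
    """points: array of shape (m, 3) holding the m groups (X_i, Y_i, Z_i)."""
    return np.asarray(points, dtype=float).mean(axis=0)
```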
Compared with the prior art, the invention has the advantages that:
1. Compared with the prior art, in which professional measuring equipment is used for photographic measurement and parameter acquisition, the invention is developed on the Android system, and forest rangers can use it directly on a mobile smart phone.
2. The invention makes full use of the sky map platform built into the terminal carried by the forest ranger, providing a spatial position reference and a data base map for forest fire positioning and improving the single-point positioning accuracy of the mobile phone; at the same time, the interactive correction method can further improve the forest fire positioning accuracy.
3. The invention provides a progressive refinement positioning method for forest fire points combining multiple images, improving the positioning accuracy of forest fire points.
4. The invention makes full use of the phone positioning service and built-in sensors to provide the shooting pose information, and constructs a mobile forest fire point positioning method combining the sky map spatial position reference and data base map, providing relatively accurate information support for emergency rescue; that is, with the national geographic information public service platform sky map built into the phone carried by the ground patroller, the various parameters of the phone positioning can be corrected, while combining several photos compensates for differences in shooting quality.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and should not be considered limiting the scope, and that other related drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a frame of the present invention;
FIG. 2 is a schematic diagram of forest fire positioning based on mobile shooting in the present invention;
fig. 3 is a schematic diagram of the framework for accurately acquiring the spatial attitude information of moving-shot forest fire point images based on the sky map, wherein, in the spatial semantic constraint part, the points near the curved route in the sky map marker point positioning are, from left to right, shooting point 1, shooting point 2 and shooting point 3, and in the three diagrams of marker point positioning based on the smart phone the shooting points are likewise, from left to right, shooting point 1, shooting point 2 and shooting point 3;
FIG. 4 is a schematic diagram of screening shooting locations in a combined sky map and real environment according to the present invention;
FIG. 5 is a schematic view of the multi-pose shooting model of FIG. 3;
fig. 6 is a schematic diagram of the progressive refinement positioning of moving-shot forest fires based on bundle-method two-image analysis, wherein the points in the picture in the interactive correction of the forest fire mark are the forest fire positions;
FIG. 7 is a schematic diagram of the automatic matching of characteristic points of a forest fire point image in the present invention;
FIG. 8 is a schematic diagram of the bundle-method two-image analysis of the forest fire coordinates in FIG. 1;
fig. 9 is a schematic diagram of the capture of the attitude angle information at the moment of shooting in fig. 3;
fig. 10 is a schematic diagram of the coordinate resolution in fig. 6.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Firstly, the sky map image and the real scene are combined to screen feature points of marked ground features suitable for positioning correction; then the phone positioning result is corrected with the existing positioning information of the sky map, and the phone attitude information is optimized through multi-pose shooting, realizing accurate acquisition of the moving-shot pose information; finally, the fire point positions on different images are corrected across the several moving-shot forest fire images (forest fire point images), and forest fire point positioning is carried out by combining the multiple moving-shot forest fire images based on bundle-method two-image analysis, so as to improve the moving-shot positioning accuracy for forest fire points.
Accurate acquisition of the spatial attitude information of moving-shot images based on the sky map, i.e. acquiring a plurality of sky map marker point coordinates with a smart phone having a sky map positioning function and sensors, shooting forest fire point images at the sky map marker point coordinates, and at the same time accurately acquiring the spatial multi-pose information of the moving-shot forest fire point images:
The phone navigation module has low positioning accuracy: in general it can only navigate roads and query related information, and cannot achieve accurate positioning, so methods that position a moving-shot target directly from the phone positioning service and sensor parameters yield spatial attitude information of low accuracy. The invention therefore screens the marker points considering spatial semantic constraints such as regionality, visibility and markedness, corrects the phone positioning result with the existing positioning information of the sky map, and optimizes the phone attitude information through multi-pose shooting, so as to accurately acquire the attitude information of the moving-shot images. This comprises three parts: first, screening several shooting stations by combining the sky map and the real environment based on the spatial semantic constraints, obtaining the sky map marker point coordinates, and shooting forest fire point images in sequence at the sky map marker point coordinates based on the multi-pose shooting model, with an overlap of 60-80% between the shot forest fire point images; secondly, acquiring the basic parameters of photos shot by the phone, i.e. the mobile shooting parameters of the smart phone when shooting forest fire point images; thirdly, optimizing the photo pose parameters based on the sky map, i.e. correcting the positioning result of the smart phone based on the mobile shooting parameters and the sky map marker point coordinates, and optimizing the phone attitude information through multi-pose shooting, realizing accurate acquisition of the pose information of the moving-shot forest fire point images.
Station location screening combining a sky map and a real environment:
Before screening the shooting station positions, the heterogeneity between the marked ground features and the other ground features in the sky map image must be analyzed in depth, and the station positions are screened by combining the sky map and the real environment under spatial semantic constraints such as regionality, visibility and markedness. Regionality means that the station should be located near the position of the forest patroller, with the distance between adjacent marker points kept within about 30-200 m; visibility means that the selected station position must allow the fire scene to be photographed; markedness means that the ground feature has strongly distinguishing characteristics both on the sky map and in reality, such as a road corner.
Firstly, the real-time position of the forest patroller is acquired through the phone positioning service and located on the sky map; then more than 3 marked ground features are manually selected as shooting points by combining the ground feature characteristics of the real environment with the image characteristics of the corresponding region in the sky map (i.e. the spatial semantic constraint characteristics), such as the road inflection point shown in fig. 4.
After 3 or more marker points are screened, the coordinates of the marker points on the sky map are acquired through the MarkTool class provided by the sky map API, and forest fire point images are shot in sequence at the marker points based on the multi-pose shooting model. The shooting process requires a photo overlap of 60-80%, and after the first forest fire image is shot, one point of the forest fire region in it is manually selected as the important feature point; the multi-pose shooting model is a mode combining vertical shooting and horizontal shooting of the same fire area.
Obtaining basic parameters of shooting pictures of a mobile phone:
The mobile shooting parameters of the smart phone mainly comprise two parts: interior orientation elements and exterior orientation elements.
1) Interior orientation elements
The interior orientation elements are determined by the smart phone and are the parameters describing the relative position between the shooting center and the forest fire point image; they comprise three parameters: the perpendicular distance $f$ from the shooting center $S$ to the image, i.e. the principal distance, and the coordinates $(x_0, y_0)$ of the principal point $o$ in the frame coordinate system.
The images shot by the phone are distorted owing to the manufacturing precision and assembly process of the phone camera lens. The camera must therefore be calibrated before image acquisition, using a set of points of known spatial position and their corresponding points on the image, and the intrinsic matrix of the camera is obtained by coordinate system transformation. First, more than 10 chessboard photos shot at different angles are collected and preprocessed; then, the chessboard information is found with OpenCV's findChessboardCorners function and the sub-pixel corner information is further extracted with the cornerSubPix function; finally, the camera intrinsic parameter matrix and distortion coefficients are computed with the calibrateCamera function and the images are calibrated. Calibration reduces the distortion and determines the relationship between the camera's natural units and real-world units, so the size of objects in the image is known after calibration.
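The calibration step described above could, for instance, be sketched with OpenCV in Python as follows; the board size, square size and file pattern are assumptions for the example, not values from the disclosure:

```python
import glob
import cv2
import numpy as np

# Sketch of the calibration step, assuming a 9x6 inner-corner chessboard with
# 25 mm squares; the file pattern "chessboard_*.jpg" is illustrative only.
board, square = (9, 6), 25.0
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for path in glob.glob("chessboard_*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    size = gray.shape[::-1]
    ok, corners = cv2.findChessboardCorners(gray, board)
    if ok:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the intrinsic matrix (f, x0, y0); dist holds the distortion
# coefficients later used with cv2.undistort to correct the wide-angle images.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
```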
2) Exterior orientation elements
The exterior orientation elements of an image determine the spatial position and attitude parameters of the forest fire point image at the moment of shooting; each forest fire point image has six exterior orientation elements, namely 3 line elements, the coordinates $X_S$, $Y_S$, $Z_S$ of the shooting center $S$ in the object-space rectangular coordinate system, and 3 angle elements describing the attitude of the image at the moment of shooting, namely the course angle $\varphi$, the pitch angle $\omega$ and the roll angle $\kappa$; the exterior orientation elements of an image are provided by the smart phone: the line elements are acquired with the location service, and the angle elements of the photo are computed jointly from the return values of the acceleration sensor, the magnetic field sensor and the orientation sensor;
in an Android phone, the sensor outputs are all given in the local coordinate system of the smart phone; the smart phone coordinate system is a relative coordinate system defined by the phone screen; the origin of the inertial coordinate system coincides with that of the smart phone coordinate system, and its axes are parallel to those of the world coordinate system, so it can be regarded as an intermediate state between the phone coordinate system and the world coordinate system; the conversion from the smart phone coordinate system to the world coordinate system therefore goes through the inertial coordinate system, and the conversion formulas are as follows:
rotation around the z-axis by the angle $\varphi$ gives the rotation matrix

$$R_z(\varphi)=\begin{pmatrix} \cos\varphi & \sin\varphi & 0 \\ -\sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

rotation around the x-axis by the angle $\omega$ gives the rotation matrix

$$R_x(\omega)=\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\omega & \sin\omega \\ 0 & -\sin\omega & \cos\omega \end{pmatrix}$$

rotation around the y-axis by the angle $\kappa$ gives the rotation matrix

$$R_y(\kappa)=\begin{pmatrix} \cos\kappa & 0 & -\sin\kappa \\ 0 & 1 & 0 \\ \sin\kappa & 0 & \cos\kappa \end{pmatrix}$$

combining the 3 basic rotations in different orders gives the rotation matrix between the two coordinate systems, the order being any one of z-x-y, z-y-x, x-z-y, x-y-z, y-z-x and y-x-z; taking the z-x-y order, the rotation matrix is

$$R = R_y(\kappa)\,R_x(\omega)\,R_z(\varphi)$$

therefore, the rotation relationship from the smartphone coordinate system to the world coordinate system is

$$(x',\, y',\, z')^{T} = R^{T}\,(x,\, y,\, z)^{T}$$

wherein $(x', y', z')$ are the three-dimensional coordinates of the point in the world coordinate system, $(x, y, z)$ are the three-dimensional coordinates of the point in the smart phone coordinate system, and $T$ denotes the transpose.
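A minimal Python sketch of this rotation chain, assuming the z-x-y order and the sign conventions of the matrices above (the function names are illustrative, not from the disclosure):

```python
import numpy as np

# Minimal sketch of the z-x-y rotation composition; psi, omega, kappa are the
# course, pitch and roll angles (radians) read from the phone sensors.
def rot_z(psi):
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(omega):
    c, s = np.cos(omega), np.sin(omega)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def rot_y(kappa):
    c, s = np.cos(kappa), np.sin(kappa)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def phone_to_world(p_phone, psi, omega, kappa):
    """Rotate a point from the smartphone frame to the world frame,
    following (x', y', z')^T = R^T (x, y, z)^T with R = Ry Rx Rz."""
    R = rot_y(kappa) @ rot_x(omega) @ rot_z(psi)
    return R.T @ np.asarray(p_phone, dtype=float)
```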
Photo pose parameter optimization based on sky map:
1) Positioning optimization
Firstly, let the plane coordinates of the several shooting stations acquired by the smart phone be $S_1(lon_1, lat_1)$, $S_2(lon_2, lat_2)$, $S_3(lon_3, lat_3)$, ..., and let the plane coordinates of the shooting stations selected manually on the sky map, i.e. the sky map marker point coordinates, be $S'_1(Lon_1, Lat_1)$, $S'_2(Lon_2, Lat_2)$, $S'_3(Lon_3, Lat_3)$, ...;
Then, calculating the difference value between the positioning of the smart phone and the coordinates of the mark points of the sky map in the coordinate data of each shooting position as a correction number;
finally, the arithmetic means $\Delta lon$ and $\Delta lat$ of the longitude and latitude corrections are calculated respectively as the final corrections of the phone positioning:

$$\Delta lon = \frac{1}{N}\sum_{i=1}^{N}\left(Lon_i - lon_i\right), \qquad \Delta lat = \frac{1}{N}\sum_{i=1}^{N}\left(Lat_i - lat_i\right)$$

wherein $N \in \mathbb{N}^{*}$, the non-zero natural integers, is the number of shooting stations;
adding the final corrections to the shooting station coordinate data of the smart phone gives the corrected geodetic coordinates $(B, L, A)$ of the corresponding shooting station, i.e. the optimized line elements of the exterior orientation, and the geodetic coordinates are converted into space rectangular coordinates by the following formulas:

$$X = (N + A)\cos B\cos L, \qquad Y = (N + A)\cos B\sin L, \qquad Z = \left(N(1 - e_1^{2}) + A\right)\sin B$$

wherein $e_1$ is the first eccentricity and $N$ is the radius of curvature in the prime vertical;
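The positioning correction itself can be illustrated with the following sketch; the function and variable names are assumptions, and the arrays hold the $N$ station fixes:

```python
import numpy as np

# Sketch of the positioning correction: average the per-station differences
# between the sky map marker coordinates and the phone fixes (Δlon, Δlat),
# then add these final corrections to every phone fix.
def corrected_positions(phone_fixes, map_marks):
    """phone_fixes, map_marks: arrays of shape (N, 2) holding (lon, lat)."""
    phone_fixes = np.asarray(phone_fixes, dtype=float)
    map_marks = np.asarray(map_marks, dtype=float)
    corrections = (map_marks - phone_fixes).mean(axis=0)  # [Δlon, Δlat]
    return phone_fixes + corrections
```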
2) Attitude optimization
Forest fire point images shot vertically and horizontally (as shown in fig. 5) at the same shooting station in the same fire area are acquired, the theoretical angle difference between the two shots is removed, and the values are then averaged to give the final attitude angles, realizing the optimization of the attitude parameters; that is, when changing from vertical to horizontal shooting the roll angle and the pitch angle differ by 90 degrees, so after removing the 90-degree difference the values of the two images are averaged respectively to obtain the optimized attitude parameters.
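As an illustration of this step, and assuming the horizontal shot reads 90 degrees higher in pitch and roll than the vertical shot (the sign of the offset is an assumption and depends on how the device reports its orientation):

```python
# Sketch of the attitude optimization: remove the theoretical 90-degree offset
# between the vertical and horizontal shots, then average the two readings.
def fuse_attitude(vertical, horizontal):
    """vertical, horizontal: dicts with 'course', 'pitch', 'roll' in degrees."""
    return {
        "course": 0.5 * (vertical["course"] + horizontal["course"]),
        # pitch and roll differ by 90 degrees between the two shooting modes,
        # so the assumed offset is removed from the horizontal reading first
        "pitch": 0.5 * (vertical["pitch"] + (horizontal["pitch"] - 90.0)),
        "roll": 0.5 * (vertical["roll"] + (horizontal["roll"] - 90.0)),
    }
```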
The pose information of the moving-shot forest fire point images is thus accurately acquired through the positioning optimization and the attitude optimization.
Progressive refinement positioning of moving-shot forest fires:
The features of the same forest fire point target differ between images, and forest fire point positioning based on single-point or two-point moving shooting has low accuracy; to further improve the spatial positioning accuracy of forest fire points, the invention therefore proposes progressive refinement positioning of moving-shot forest fires based on bundle-method two-image analysis. The method comprises three parts: first, identifying the same fire point in the images shot at the several stations based on the SIFT algorithm and the RANSAC algorithm, giving the forest fire point coordinates; secondly, a forest fire positioning algorithm based on bundle-method two-image analysis, i.e. a forest fire positioning algorithm based on the forest fire point coordinates and two images, which obtains the object point coordinates corresponding to the homonymous image points of the forest fire point in two forest fire images; thirdly, progressive refinement of the moving positioning of forest fire points based on multi-photo combination, i.e. the object point coordinates corresponding to the several groups of homonymous forest fire image points are progressively and precisely located based on the multi-photo combination, giving the forest fire point coordinate positioning.
Identifying the image position of the same fire point across multiple images:
The SIFT algorithm has scale invariance, strong anti-interference capability and good robustness. The core of the RANSAC algorithm is to estimate from a set of observed data in an iterative manner and finally screen out incorrect data. However, the classical RANSAC algorithm needs to iterate continuously, which consumes much time, so fire monitoring cannot be real-time. The invention combines the SIFT algorithm with an improved RANSAC algorithm: the picture is preprocessed first, then the SIFT algorithm identifies feature points for coarse matching, and the improved RANSAC algorithm then screens the feature points to obtain accurate matching points.
The SIFT algorithm is realized in four steps: creating the scale space, detecting and precisely locating candidate feature points, assigning directions, and constructing the feature point descriptors. Firstly, noise in the image is removed by Gaussian blur, multi-scale images are created, and the multi-scale space is created from the scale images and the image features enhanced by the Gaussian difference (formulas 8, 9 and 10). Secondly, each pixel value of a Gaussian image in the multi-scale space is compared with the 26 surrounding pixel values; if the pixel is the highest or lowest among its neighbors, it is considered a candidate feature point of the image at that scale; the second-order Taylor expansion of the scale space is calculated for each candidate feature point, and if the result is smaller than a threshold the point is considered a low-contrast feature point and eliminated, the feature points being obtained after elimination. Then, a direction parameter is assigned to each key point using the gradient direction distribution of the neighborhood pixels of the feature point: the gradient directions and gradient magnitudes of all pixels are counted within a circle centered on the feature point whose radius is 1.5 times the scale of the Gaussian image where the feature point lies, and a gradient histogram is created, the peak of the histogram representing the main direction of the neighborhood gradient at the feature point, i.e. the direction of the feature point, and the other directions reaching 80% of the maximum being taken as auxiliary directions. Finally, a unique fingerprint, the 'feature point descriptor', is generated for the key point from the gradient directions (including the main and auxiliary directions) and magnitudes of the neighboring pixels; the distance between each feature point descriptor in the image to be matched and each feature point descriptor in the reference image is calculated, all results obtained for each feature point descriptor are sorted, and the closest distance is taken as a matching point, giving the coarse matching result.
$$L(x, y, \sigma) = G(x, y, \sigma) * I(x, y) \tag{8}$$

$$G(x, y, \sigma) = \frac{1}{2\pi\sigma^{2}}\exp\left(-\frac{x^{2} + y^{2}}{2\sigma^{2}}\right) \tag{9}$$

$$D(x, y, \sigma) = \left(G(x, y, k\sigma) - G(x, y, \sigma)\right) * I(x, y) \tag{10}$$

wherein $I(x, y)$ is the two-dimensional image of each forest fire point image to be detected; $L(x, y, \sigma)$ is the Gaussian scale space of a forest fire point image, i.e. the Gaussian pyramid or Gaussian image; $G(x, y, \sigma)$ is the Gaussian function; $\sigma$ is the scale space factor; $k$ is the multiple between adjacent scale spaces; $D(x, y, \sigma)$ is the difference-of-Gaussians pyramid, i.e. the multi-scale space; and $\exp$ denotes the exponential function with base $e$;
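For illustration, the coarse-matching stage maps naturally onto OpenCV's SIFT implementation; the sketch below is an assumption of one possible realization, not the disclosed implementation:

```python
import cv2

# Illustrative coarse matching with OpenCV's SIFT: detect feature points and
# descriptors in the reference image and the image to be matched, then keep
# the nearest descriptor for each point as the coarse match.
def coarse_match(ref_path: str, query_path: str):
    img1 = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)    # reference image
    img2 = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)  # image to be matched
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches
```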
The feature points obtained after the initial SIFT matching are subjected to neighborhood voting to screen out some mismatched points and obtain the final sample point set, so as to reduce the number of iterations of the RANSAC algorithm.
The improved RANSAC algorithm first calculates the distance $d$ (formula 11) and the main-direction angle difference $\Delta\theta$ (formula 12) of any two feature points in each forest fire image respectively and normalizes each as a row vector; after normalization, the distance inner product $d_{ot1}$ and the main-direction angle inner product $d_{ot2}$ between the image to be matched and the reference image are calculated; finally, with the set thresholds (a large number of experiments show that setting the distance threshold $t_d$ to 0.4 and the direction threshold $t_\theta$ to 0.5 yields a relatively large inlier set), whether the distance inner product and the main-direction angle inner product of a pair of matching points are smaller than the thresholds is compared, and if so the matching pair is put into the inlier set, and otherwise into the outlier set.

$$d = \sqrt{(x_i - x_j)^{2} + (y_i - y_j)^{2}} \tag{11}$$

$$\Delta\theta = \theta_i - \theta_j \tag{12}$$

$$d_{ot1} = d_{ot}\left(im1(x_U, y_U),\ im2(x_u, y_u)\right), \qquad d_{ot2} = d_{ot}\left(im1(\theta_U),\ im2(\theta_u)\right)$$

wherein $i$ and $j$ are any pair of initial matching points on the same forest fire point image; $(x_U, y_U, \theta_U)$ and $(x_u, y_u, \theta_u)$ are the pixel coordinates and main directions of the corresponding matching points $U$ and $u$ of the reference image $im1$ and the image to be matched $im2$; $d_{ot}$ denotes the inner product between the reference image $im1$ and the image to be matched $im2$; and $\theta_i$, $\theta_j$ are the main directions of any two feature points $i$ and $j$ in each forest fire point image;
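A simplified sketch of this neighborhood-voting filter under the stated thresholds; the vectorized formulation and the function name are assumptions, and the below-threshold criterion follows the description above:

```python
import numpy as np

# Simplified sketch of the neighborhood voting filter. For each matched pair,
# the row vector of distances (and of main-direction differences) to all other
# matches is normalized in each image, and the inner products d_ot1, d_ot2 of
# the corresponding rows are compared with the thresholds td=0.4, t_theta=0.5.
def vote_initial_inliers(xy1, th1, xy2, th2, td=0.4, t_theta=0.5):
    xy1, xy2 = np.asarray(xy1, float), np.asarray(xy2, float)
    th1, th2 = np.asarray(th1, float), np.asarray(th2, float)

    def unit_rows(m):
        n = np.linalg.norm(m, axis=1, keepdims=True)
        return m / np.where(n == 0.0, 1.0, n)

    d1 = unit_rows(np.linalg.norm(xy1[:, None] - xy1[None, :], axis=2))  # (11)
    d2 = unit_rows(np.linalg.norm(xy2[:, None] - xy2[None, :], axis=2))
    a1 = unit_rows(th1[:, None] - th1[None, :])                          # (12)
    a2 = unit_rows(th2[:, None] - th2[None, :])
    d_ot1 = (d1 * d2).sum(axis=1)  # distance inner product per matching pair
    d_ot2 = (a1 * a2).sum(axis=1)  # main-direction inner product per pair
    # Following the description, pairs whose inner products fall below the
    # thresholds go into the initial inlier set, the rest into the outlier set.
    return (d_ot1 < td) & (d_ot2 < t_theta)
```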
Some mismatched points are screened out by neighborhood voting to obtain the final sample point set and reduce the number of iterations of the RANSAC algorithm. The implementation of the RANSAC algorithm is divided into three steps: calculating the transformation matrix, calculating the projection errors, and the iterative process. Firstly, 4 non-collinear sample data are randomly extracted from the known data set (the feature point set obtained by the SIFT algorithm), and the $3\times 3$ transformation matrix $H$ (formula 13) is calculated and recorded as model $M$; then the projection errors of all data with respect to model $M$ are calculated, and a point is added to the inlier set if its projection error is smaller than the threshold; when the number of elements in the point set $Q$ is larger than that of the optimal inlier set $Q\_best$, $Q\_best = Q$ is updated and the iteration count $K$ is updated at the same time; finally, if the number of iterations exceeds $K$ the loop exits, otherwise the iteration count is increased by 1 and the above operations are repeated until the iteration ends, eliminating the abnormal data and giving the accurate matching points, i.e. the correspondence between each reference image and the feature points of its image to be matched.

$$s\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = H\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}, \qquad H = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1 \end{pmatrix} \tag{13}$$

wherein $(x, y)$ is the position of a feature point of the image to be matched, $(x', y')$ is the position of the corresponding reference image feature point, $s$ is a scale parameter and $H$ is the transformation matrix;
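For illustration, OpenCV's built-in RANSAC homography estimator performs the same sample-fit-score loop; the sketch below substitutes it for the hand-written iteration and is an assumption, not the disclosed code:

```python
import cv2
import numpy as np

# Illustrative substitute for the hand-written loop: cv2.findHomography with
# cv2.RANSAC draws minimal samples, fits the 3x3 matrix H (model M), scores
# the consensus set by projection error and keeps the best model.
def refine_matches(kp1, kp2, matches, reproj_thresh=3.0):
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    if mask is None:
        return None, []
    inliers = [m for m, keep in zip(matches, mask.ravel()) if keep]
    return H, inliers  # downstream steps require at least 6 reliable pairs
```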
screening an initial internal point set obtained by rough matching based on an improved RANSAC algorithm to obtain more than 6 pairs of accurate matching points, namely obtaining a corresponding relation between a first forest fire point image serving as a reference image and a second forest fire point image serving as characteristic points of an image to be matched, transmitting coordinates of the characteristic points of the first forest fire point image to the second forest fire point image, then transmitting the second forest fire point image serving as the reference image to a third forest fire point image, and the like, if the more than 6 pairs of matching points contain important characteristic points, transferring to the next step, otherwise, establishing the corresponding relation of the characteristic points of each forest fire point image based on the important characteristic points (calculating a basic matrix between the reference image and the image to be matched according to the existing matching points, combining the known camera gestures, obtaining the specific position of the important characteristic points on the image to be matched, performing block matching along the polar line by utilizing a polar geometry method, searching the specific position of the forest points on the image to be matched), and transferring to the next step;
After the correspondence between the reference image and the feature points of the image to be matched is obtained, the coordinates of the forest fire point on the other images to be matched can be calculated from the fire point manually selected when the first image was shot.
Dual-image forest fire positioning algorithm:
As shown in fig. 8, the same forest area is photographed from the points S_1 and S_2 to obtain two (left and right) images with an overlap of more than 60%. S_1S_2 is the photographic baseline, S_1o_1 is the main optical axis of the left station, S_2o_2 is the main optical axis of the right station, and the images of the forest fire point P in the left and right photographs are p_1 and p_2 respectively. The heading angle φ, pitch angle ω and roll angle κ of the 2 smartphones at the moment of shooting are measured in real time by an independently developed three-dimensional electronic compass, and the coordinates (X_S, Y_S, Z_S) of the two shooting centers S_1 and S_2 are accurately measured at the same time. Wherein the heading angle φ refers to the angle between the main optical axis and true north, the angles for the images p_1 and p_2 being φ_1 and φ_2; the pitch angle ω refers to the angle between the axis and the vertical plane, the angles for p_1 and p_2 being ω_1 and ω_2; and the roll angle κ refers to the angle between the axis and the horizontal plane, the angles for p_1 and p_2 being κ_1 and κ_2.
The shooting center, the image point and the object point (the forest fire point) satisfy the collinearity equation and form a bundle of rays; the collinearity equation is obtained based on the rotation relation from the smartphone coordinate system to the world coordinate system. Dual-image analysis is carried out by the bundle method, each bundle of rays being taken as one adjustment unit, and the three-dimensional attitude angles acquired by the three-dimensional electronic compass at the moment of shooting serve as initial values for optimizing the spatial position and attitude parameters, i.e. for accurately acquiring the pose information of the moving-shot forest fire point images. After optimization, error equations are listed according to the collinearity condition equation for all same-name image points of the left and right forest fire point images and the corresponding object point, i.e. the forest fire point coordinates:

x_L − x_L0 = −f_L · [a_L1(X − X_S1) + b_L1(Y − Y_S1) + c_L1(Z − Z_S1)] / [a_L3(X − X_S1) + b_L3(Y − Y_S1) + c_L3(Z − Z_S1)]
y_L − y_L0 = −f_L · [a_L2(X − X_S1) + b_L2(Y − Y_S1) + c_L2(Z − Z_S1)] / [a_L3(X − X_S1) + b_L3(Y − Y_S1) + c_L3(Z − Z_S1)]
x_R − x_R0 = −f_R · [a_R1(X − X_S2) + b_R1(Y − Y_S2) + c_R1(Z − Z_S2)] / [a_R3(X − X_S2) + b_R3(Y − Y_S2) + c_R3(Z − Z_S2)]
y_R − y_R0 = −f_R · [a_R2(X − X_S2) + b_R2(Y − Y_S2) + c_R2(Z − Z_S2)] / [a_R3(X − X_S2) + b_R3(Y − Y_S2) + c_R3(Z − Z_S2)]
Wherein (x_L, y_L) and (x_R, y_R) are the image coordinates of the image points p_1 and p_2 of the forest fire point P on the left and right images, obtained directly from the left and right forest fire point images after matching; (x_L0, y_L0, f_L) and (x_R0, y_R0, f_R) are respectively the internal orientation elements of the left and right cameras; (X_S1, Y_S1, Z_S1) and (X_S2, Y_S2, Z_S2) are respectively the coordinates of the left and right shooting centers S_1 and S_2 in the photogrammetric space rectangular coordinate system, obtained through GPS positioning of the mobile terminal; (a_L1, b_L1, c_L1), (a_L2, b_L2, c_L2), (a_L3, b_L3, c_L3) respectively represent the parameters of the first, second and third rows of the rotation matrix R1 of the left image after optimization of the spatial position and attitude parameters, and (a_R1, b_R1, c_R1), (a_R2, b_R2, c_R2), (a_R3, b_R3, c_R3) respectively represent the parameters of the first, second and third rows of the rotation matrix R2 of the right image after optimization, the rotation matrices R1 and R2 being obtained based on the rotation matrix R; (X, Y, Z) are the coordinates of the object point corresponding to the left and right same-name image points in the photogrammetric coordinate system. With 6 pairs of same-name image points measured, the three-dimensional attitude angles of the left and right forest fire point images and the coordinates of the object point corresponding to the same-name image points are solved according to the least-squares adjustment principle, giving the object point coordinates (X, Y, Z) corresponding to the same-name image points of the forest fire point.
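As a simplified stand-in for the least-squares bundle adjustment, the sketch below triangulates the fire point by intersecting the two rays implied by the collinearity relation. The rotation order and the sign conventions of the camera ray (x − x0, y − y0, −f) are assumptions in a common photogrammetric convention:

```python
import numpy as np

def rotation_zxy(phi, omega, kappa):
    """World<-phone rotation composed in the z-x-y order used earlier
    in the document (angle conventions are an assumption)."""
    cz, sz = np.cos(phi), np.sin(phi)
    cx, sx = np.cos(omega), np.sin(omega)
    cy, sy = np.cos(kappa), np.sin(kappa)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    return Ry @ Rx @ Rz

def intersect_rays(S1, r1, S2, r2):
    """Least-squares intersection (midpoint of the shortest segment)
    of two rays X = S + t*r."""
    A = np.stack([r1, -r2], axis=1)            # 3x2 system for (t1, t2)
    t, *_ = np.linalg.lstsq(A, S2 - S1, rcond=None)
    return 0.5 * ((S1 + t[0] * r1) + (S2 + t[1] * r2))

def fire_point(S1, ang1, p1, f1, S2, ang2, p2, f2):
    """Triangulate the fire point from two stations: p = (x - x0, y - y0)
    are principal-point-reduced image coordinates, f the principal
    distance, R maps camera rays into the world frame."""
    r1 = rotation_zxy(*ang1) @ np.array([p1[0], p1[1], -f1])
    r2 = rotation_zxy(*ang2) @ np.array([p2[0], p2[1], -f2])
    return intersect_rays(np.asarray(S1, float), r1 / np.linalg.norm(r1),
                          np.asarray(S2, float), r2 / np.linalg.norm(r2))
```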
Progressively refined forest fire point positioning by multi-photo combination:
Each group of reference image and image to be matched undergoes dual-image analysis based on the bundle method to obtain one group of object point coordinates (X, Y, Z) corresponding to the same-name image points of the forest fire point; n images (n ≥ 3) then yield m groups of fire positioning coordinates:
P_1(X_1, Y_1, Z_1), P_2(X_2, Y_2, Z_2), …, P_m(X_m, Y_m, Z_m)
For the true fire point P, the calculated m groups of fire positioning coordinates P_1, P_2, …, P_m all lie close to P, so the centroid of these m points can be solved and taken as the estimate of the coordinates of the point P.
The centroid is the point whose X, Y and Z coordinates are respectively the means of the X, Y and Z coordinates of the m points. That is, if the coordinates of the i-th point are (X_i, Y_i, Z_i), the coordinates of the centroid are:

X_P = (1/m) Σ_{i=1}^{m} X_i, Y_P = (1/m) Σ_{i=1}^{m} Y_i, Z_P = (1/m) Σ_{i=1}^{m} Z_i
The solution (X_P, Y_P, Z_P) is then the positioning coordinates of the forest fire point.
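This refinement step reduces to a coordinate-wise mean; a minimal sketch with illustrative dummy coordinates:

```python
import numpy as np

def refine_fire_point(coords):
    """Centroid of the m dual-image solutions as the final estimate
    (X_P, Y_P, Z_P); coords is an (m, 3) array-like."""
    return np.asarray(coords, dtype=float).mean(axis=0)

# e.g. three dual-image solutions scattered around the true fire point
print(refine_fire_point([[652.1, 418.7, 92.3],
                         [651.6, 419.2, 91.8],
                         [652.4, 418.9, 92.6]]))
```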

Claims (10)

1. A forest fire positioning method combining a sky map and a mobile phone image is characterized by comprising the following steps:
S1, acquiring a plurality of sky map marker point coordinates based on a smartphone with a sky map positioning function and sensors, shooting forest fire point images based on the sky map marker point coordinates, and accurately acquiring the spatial multi-pose information of the moving-shot forest fire point images;
and S2, progressively and precisely positioning the forest fire point in the plurality of shot forest fire point images based on the accurately acquired spatial multi-pose information, obtaining the forest fire positioning coordinates.
2. The method for positioning forest fires by combining a sky map and mobile phone images according to claim 1, wherein the specific steps of the step S1 are as follows:
S1.1, screening a plurality of shooting positions based on spatial semantic constraints in combination with the sky map and the real environment to obtain sky map marker point coordinates, and shooting forest fire point images at the sky map marker point coordinates in sequence based on a multi-pose shooting model, the overlap of the shot forest fire point images being 60-80%; based on the first shot forest fire point image, one point of the forest fire region in the forest fire point image is manually selected as the important feature point;
S1.2, acquiring the mobile shooting parameters of the smartphone when shooting a forest fire point image;
S1.3, correcting the positioning result of the smartphone based on the mobile shooting parameters and the sky map marker point coordinates, i.e. optimizing the pose information of the smartphone through multi-pose shooting, so as to accurately acquire the pose information of the moving-shot forest fire point images.
3. The method for positioning forest fires by combining a sky map and mobile phone images according to claim 2, wherein the specific steps of the step S1.1 are as follows:
S1.11, firstly, locating the real-time position of the forest ranger on the sky map;
S1.12, selecting a plurality of landmark ground objects as shooting points based on the real-time position of the forest ranger located on the sky map, the spatial semantic constraints and the real environment, and screening a plurality of station positions to obtain a plurality of marker points, wherein the spatial semantic constraints refer to the heterogeneity characteristics between the landmark ground objects and the other ground objects in the sky map image, the heterogeneity characteristics comprising regional, visible and landmark relations;
S1.13, acquiring the coordinates of the screened marker points on the sky map to obtain the sky map marker point coordinates, and shooting forest fire point images at the sky map marker point coordinates in sequence based on the multi-pose shooting model, the multi-pose shooting model being a mode combining vertical and horizontal shooting of the same fire area.
4. A forest fire positioning method combining a sky map and a mobile phone image according to claim 3, wherein the specific steps of step S1.2 are as follows:
the mobile shooting parameters of the smartphone to be obtained comprise: internal orientation elements and external orientation elements;

Internal orientation elements:

The internal orientation elements are determined by the smartphone and are the parameters describing the relative position between the shooting center and the forest fire point image. They comprise three parameters: the vertical distance f from the shooting center S to the image, and the coordinates (x_0, y_0) of the principal point o in the frame coordinate system, wherein the vertical distance refers to the principal distance;
External orientation elements:

The external orientation elements of an image determine the spatial position and attitude of the forest fire point image at the moment of shooting. Each forest fire point image has six external orientation elements, namely 3 line elements, the coordinates X_S, Y_S, Z_S of the shooting center S in the object space rectangular coordinate system, and 3 angle elements describing the attitude of the image at the moment of shooting, namely the heading angle φ, the pitch angle ω and the roll angle κ. The external orientation elements of an image are provided by the smartphone: the line elements are acquired by the location service, and the angle elements of the photo are calculated jointly from the return values of the acceleration sensor, the magnetic field sensor and the direction sensor;
In an Android phone, the output results of the sensors are all given in the local coordinate system of the smartphone, which is a relative coordinate system defined by the phone screen. The origin of the inertial coordinate system coincides with that of the smartphone coordinate system, while its axes are parallel to those of the world coordinate system; it can thus be regarded as an intermediate state between the phone coordinate system and the world coordinate system. The smartphone coordinate system therefore needs to be converted to the world coordinate system via the inertial coordinate system, and the conversion formula from the smartphone coordinate system to the world coordinate system is specifically as follows:
Rotation around the z-axis by the angle φ gives the rotation matrix:

R_z(φ) = [[cos φ, −sin φ, 0], [sin φ, cos φ, 0], [0, 0, 1]]

Rotation around the x-axis by the angle ω gives the rotation matrix:

R_x(ω) = [[1, 0, 0], [0, cos ω, −sin ω], [0, sin ω, cos ω]]

Rotation around the y-axis by the angle κ gives the rotation matrix:

R_y(κ) = [[cos κ, 0, sin κ], [0, 1, 0], [−sin κ, 0, cos κ]]

Combining these 3 basic rotations in different orders yields the rotation matrix between the two coordinate systems, the rotation order being any one of z-x-y, z-y-x, x-z-y, x-y-z, y-z-x and y-x-z; rotating in the z-x-y order gives:

R = R_y(κ) · R_x(ω) · R_z(φ)

Therefore, the rotation relationship from the smartphone coordinate system to the world coordinate system is:

(x′, y′, z′)^T = R · (x, y, z)^T
wherein (x′, y′, z′) are the three-dimensional coordinates of a point in the world coordinate system, (x, y, z) are the three-dimensional coordinates of the same point in the smartphone coordinate system, and T denotes the transpose.
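As an illustration, a small Python sketch of this order-dependent composition; the sign conventions of the basic rotations are an assumption (the matrices above are reproduced in a common convention):

```python
import numpy as np

def basic_rotation(axis, angle):
    """One of the three basic rotation matrices R_x, R_y, R_z."""
    c, s = np.cos(angle), np.sin(angle)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # "z"

def compose(order, phi, omega, kappa):
    """Rotation matrix for a given order, e.g. "zxy" = rotate about z
    (phi), then x (omega), then y (kappa); each later rotation is
    left-multiplied onto the accumulated matrix."""
    angle = {"z": phi, "x": omega, "y": kappa}
    R = np.eye(3)
    for axis in order:
        R = basic_rotation(axis, angle[axis]) @ R
    return R

# phone -> world with the z-x-y order: p_world = R @ p_phone
R = compose("zxy", np.radians(30.0), np.radians(10.0), np.radians(5.0))
```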
5. The method for positioning forest fires by combining a sky map and mobile phone images according to claim 4, wherein the specific steps of the step S1.3 are as follows:
step S1.31. Positioning optimization
Firstly, the plane coordinates of the plurality of shooting positions acquired by the smartphone are defined as S_1(lon1, lat1), S_2(lon2, lat2), S_3(lon3, lat3), …, and the plane coordinates of the shooting positions obtained by manual selection on the sky map, i.e. the sky map marker point coordinates, as S′_1(Lon1, Lat1), S′_2(Lon2, Lat2), S′_3(Lon3, Lat3), …;
Then, the difference between the smartphone positioning and the sky map marker point coordinates is calculated for the coordinate data of each shooting position as a correction;
Finally, the arithmetic means Δlon and Δlat of the longitude and latitude corrections are respectively calculated as the final corrections of the phone positioning:

Δlon = (1/N*) Σ_{i=1}^{N*} (Lon_i − lon_i), Δlat = (1/N*) Σ_{i=1}^{N*} (Lat_i − lat_i)
wherein N* denotes a non-zero natural number (the number of shooting positions);
The shooting position coordinate data of the smartphone plus the final corrections give the corrected geodetic coordinates (B, L, A) of the corresponding shooting position, i.e. the optimized spatial parameters of the external orientation elements, and the geodetic coordinates are converted into space rectangular coordinates according to:

X = (N + A) cos B cos L
Y = (N + A) cos B sin L
Z = (N(1 − e_1²) + A) sin B
wherein e_1 is the first eccentricity and N is the radius of curvature in the prime vertical;
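A compact sketch of step S1.31 under stated assumptions: the correction is the mean of the sky-map-minus-GPS differences, and the geodetic-to-rectangular conversion uses illustrative WGS-84 constants (a CGCS2000-like datum would differ only in the far decimal places):

```python
import numpy as np

def mean_corrections(phone_lonlat, map_lonlat):
    """Arithmetic mean corrections (delta_lon, delta_lat) over the N*
    stations: sky map marker coordinates minus phone GPS fixes.
    Both inputs are (N, 2) array-likes of (lon, lat)."""
    p = np.asarray(phone_lonlat, dtype=float)
    m = np.asarray(map_lonlat, dtype=float)
    d = (m - p).mean(axis=0)
    return d[0], d[1]

# illustrative ellipsoid constants (WGS-84 values)
A_SEMI = 6378137.0
E1_SQ = 6.69437999014e-3       # first eccentricity squared

def geodetic_to_rectangular(B, L, A):
    """Corrected geodetic coordinates (B=lat, L=lon in radians, A=alt
    in metres) to space rectangular coordinates; N is the radius of
    curvature in the prime vertical."""
    N = A_SEMI / np.sqrt(1.0 - E1_SQ * np.sin(B) ** 2)
    X = (N + A) * np.cos(B) * np.cos(L)
    Y = (N + A) * np.cos(B) * np.sin(L)
    Z = (N * (1.0 - E1_SQ) + A) * np.sin(B)
    return X, Y, Z
```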
Step S1.32. Attitude optimization
The forest fire point images shot vertically and horizontally at the same shooting position are acquired, the theoretical angle difference between the two shots is removed, and the mean value is then taken as the final attitude angle, realizing the optimization of the attitude parameters. That is, when changing from vertical to horizontal shooting the roll angle and the pitch angle differ by 90°, so after removing the 90° difference the mean of the two images' angles is respectively taken, giving the optimized attitude parameters;
and S1.33, accurately acquiring the pose information of the moving-shot forest fire point images based on the positioning optimization and the attitude optimization.
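A minimal sketch of the attitude fusion in step S1.32; which of the two shots carries the 90° offset, and its sign, are assumptions not fixed by the text:

```python
def fuse_attitude(vertical, landscape):
    """Average the attitude angles of the vertical and landscape shots
    after removing the theoretical 90-degree offset that rotating the
    phone adds to roll and pitch.  Angles in degrees as
    (phi, omega, kappa) = (heading, pitch, roll)."""
    phi_v, om_v, ka_v = vertical
    phi_h, om_h, ka_h = landscape
    phi = (phi_v + phi_h) / 2.0                 # heading is unaffected
    omega = (om_v + (om_h - 90.0)) / 2.0        # remove 90-deg pitch offset
    kappa = (ka_v + (ka_h - 90.0)) / 2.0        # remove 90-deg roll offset
    return phi, omega, kappa
```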
6. The method for positioning forest fires by combining a sky map and a mobile phone image according to claim 5, wherein the specific steps of step S2 are as follows:
S2.1, identifying the same forest fire point in the forest fire point images shot at a plurality of shooting positions based on a SIFT algorithm and a RANSAC algorithm to obtain forest fire point coordinates;
S2.2, acquiring the object point coordinates corresponding to the same-name image points of the forest fire point in two forest fire point images based on the forest fire point coordinates and the dual-image forest fire positioning algorithm;
and S2.3, progressively refining the object point coordinates corresponding to the plurality of groups of same-name image points of the forest fire point by multi-forest-fire-point-image combination, obtaining the forest fire point positioning coordinates.
7. The method for positioning forest fires by combining a sky map and a mobile phone image according to claim 6, wherein the specific steps of step S2.1 are as follows:
S2.11, preprocessing each forest fire point image: the smartphone introduces distortion when shooting the forest fire point image, so the distortion produced by the wide-angle lens is corrected;
S2.12, identifying the feature points of each preprocessed forest fire point image with the SIFT algorithm and roughly matching each two forest fire point images shot at adjacent times, the earlier forest fire point image of the two serving as the reference image and the later one as the image to be matched;
S2.13, performing neighborhood-voting denoising on the roughly matched feature points to obtain the initial inner point set;
Step S2.14, screening the initial inner point set obtained by rough matching based on the improved RANSAC algorithm to obtain more than 6 pairs of accurate matching points, i.e. the correspondence between the feature points of the first forest fire point image, serving as reference image, and the second forest fire point image, serving as image to be matched; transmitting the feature point coordinates of the first forest fire point image to the second forest fire point image, the second forest fire point image then serving as the reference image for the third forest fire point image, and so on; if the more than 6 pairs of matching points contain the important feature point, going to step S2.15; otherwise establishing the feature point correspondence of each forest fire point image based on the important feature point and then going to step S2.15;
and S2.15, calculating the forest fire point coordinates on the subsequently shot forest fire point images from the fire point manually selected when the first forest fire point image was shot, based on the feature point correspondence of each forest fire point image.
8. The method for positioning forest fires by combining a sky map and an image of a mobile phone according to claim 7, wherein the specific steps of step S2.12 are as follows:
Firstly, noise in each forest fire point image is removed by Gaussian blur and multi-scale images are created; the multi-scale space is created from the scale images and the difference-of-Gaussian enhanced images: a Gaussian pyramid of each forest fire point image is formed from the multi-scale images, and subtracting the image pixels of two adjacent layers in the same octave of the Gaussian pyramid yields the difference-of-Gaussian pyramid, i.e. the multi-scale space:
L(x, y, σ) = G(x, y, σ) ∗ I(x, y) (8)
G(x, y, σ) = (1/(2πσ²)) · e^(−(x² + y²)/(2σ²)) (9)
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) ∗ I(x, y) = L(x, y, kσ) − L(x, y, σ) (10)
wherein I(x, y) is the two-dimensional image of each forest fire point image to be detected; L(x, y, σ) is the Gaussian scale space of a forest fire point image, i.e. the Gaussian pyramid or Gaussian image; G(x, y, σ) is the Gaussian function; σ is the scale space factor; k is the ratio between adjacent scale spaces; and D(x, y, σ) is the difference-of-Gaussian pyramid, i.e. the multi-scale space;
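For illustration, a short OpenCV/NumPy sketch of building L and D; the octave and scale counts and σ₀ = 1.6 are conventional SIFT defaults, not values taken from the patent:

```python
import numpy as np
import cv2

def dog_pyramid(img, n_octaves=4, n_scales=5, sigma0=1.6):
    """Gaussian pyramid L (formula 8) and difference-of-Gaussian
    pyramid D (formula 10) of one forest fire point image; k is the
    ratio between adjacent scales."""
    k = 2.0 ** (1.0 / (n_scales - 2))
    base = img.astype(np.float32)
    gauss, dog = [], []
    for _ in range(n_octaves):
        octave = [cv2.GaussianBlur(base, (0, 0), sigma0 * k ** s)
                  for s in range(n_scales)]
        gauss.append(octave)
        # subtracting adjacent layers of the same octave gives D
        dog.append([octave[s + 1] - octave[s] for s in range(n_scales - 1)])
        base = cv2.resize(base, (max(1, base.shape[1] // 2),
                                 max(1, base.shape[0] // 2)))
    return gauss, dog
```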
Secondly, each pixel value of the Gaussian images in the multi-scale space is compared with the 26 surrounding pixel values; if the pixel is the maximum or minimum among its neighbors, it is taken as a candidate feature point of the forest fire point image in the scale space;
A second-order Taylor expansion of the scale space is then calculated at each candidate feature point; if the result is smaller than the threshold, the point is considered a low-contrast feature point and is eliminated, the remaining points being the feature points;
Then a direction parameter is assigned to each retained feature point using the gradient direction distribution of its neighborhood pixels: the gradient directions and gradient magnitudes of all pixels within a circle centered on the feature point, with radius 1.5 times the scale of the Gaussian image of the multi-scale space in which the feature point lies, are counted and a gradient histogram is created. The peak of the histogram represents the main direction of the neighborhood gradients at the key point, i.e. the direction of the feature point, and directions reaching 80% of the maximum value are kept as auxiliary directions;
Finally, a unique fingerprint, the feature point descriptor, is generated for each feature point from the main gradient direction, the auxiliary gradient directions and the gradient magnitudes of the neighboring pixels; the distance between each feature point descriptor of the image to be matched and each feature point descriptor of the reference image is calculated, all results obtained for each descriptor are sorted, and the nearest one is taken as the matching point, giving the rough matching result.
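A compact OpenCV sketch of this rough-matching step (SIFT detection, descriptor distance computation, nearest-neighbour selection); the ratio test is a common safeguard added here, not something the patent specifies:

```python
import cv2

def rough_match(ref_img, tgt_img, max_pairs=200):
    """Rough matching of step S2.12: detect SIFT feature points on the
    reference image and the image to be matched (grayscale arrays),
    then keep nearest-neighbour descriptor matches."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref_img, None)
    kp_tgt, des_tgt = sift.detectAndCompute(tgt_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_tgt, des_ref, k=2)   # to-match -> reference
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]
    good.sort(key=lambda m: m.distance)             # nearest first
    return kp_ref, kp_tgt, good[:max_pairs]
```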
The specific steps of the step S2.13 are as follows: after rough matching, the distance d between any two feature points in each forest fire point image and the difference Δθ of their principal-direction angles are respectively calculated and normalized as row vectors; after normalization, the distance inner product value d_ot1 and the principal-direction-angle inner product value d_ot2 of each pair of matching points of the reference image and the image to be matched are calculated; finally, through the set thresholds, whether the distance inner product value and the principal-direction-angle inner product value of a pair of matching points are smaller than the thresholds is compared; if so, the pair is placed into the inner point set, giving the initial inner point set, and otherwise into the outer point set;
d = √((x_i − x_j)² + (y_i − y_j)²) (11)
Δθ = θ_i − θ_j (12)
d_ot1 = d_ot(im1(x_U, y_U), im2(x_u, y_u))
d_ot2 = d_ot(im1(θ_U), im2(θ_u))
wherein (x_U, y_U, θ_U) and (x_u, y_u, θ_u) are the pixel coordinates and principal directions of a pair of initial matching points, the corresponding points U and u of the reference image im1 and the image to be matched im2; d_ot denotes the inner product between the reference image im1 and the image to be matched im2; θ_i and θ_j respectively denote the principal directions of any two feature points i and j in each forest fire point image. The specific steps of the step S2.14 are as follows:
Firstly, 4 non-collinear sample data are randomly extracted from the feature point set obtained by the SIFT algorithm, a 3×3 transformation matrix H is calculated and recorded as the model M; then the projection errors of all the data against the model M are calculated, and data whose projection error is smaller than the threshold are added to the initial inner point set; when the number of elements in the initial inner point set Q is larger than that of the optimal inner point set Q_best, Q_best = Q is updated; it is then judged whether the number of iterations is larger than K: if so the loop exits, otherwise the iteration count is increased by 1 and the above operations are repeated until the iteration ends, eliminating the abnormal data, obtaining accurate matching points, and obtaining the correspondence between each reference image and the feature points of the corresponding image to be matched;
s · (x′, y′, 1)^T = H · (x, y, 1)^T, H = [[h11, h12, h13], [h21, h22, h23], [h31, h32, h33]] (13)

wherein (x, y) refers to the position of a feature point of the image to be matched; (x′, y′) refers to the position of the corresponding feature point of the reference image; s is a scale parameter; and H is the 3×3 transformation matrix;
and obtaining the correspondence between each reference image and the feature points of the corresponding image to be matched identifies the position of the same forest fire point in each forest fire point image, giving the forest fire point coordinates in each forest fire point image.
9. The method for positioning forest fires by combining a sky map and a mobile phone image according to claim 8, wherein the specific steps of step S2.2 are as follows:
Based on the points S_1 and S_2, the same forest area is photographed to obtain two (left and right) images with an overlap of more than 60%; S_1S_2 is the photographic baseline, S_1o_1 is the main optical axis of the left station, S_2o_2 is the main optical axis of the right station, and the images of the forest fire point P in the left and right photographs are p_1 and p_2 respectively. The heading angle φ, pitch angle ω and roll angle κ of the 2 smartphones at the moment of shooting are measured in real time by the independently developed three-dimensional electronic compass, and the coordinates (X_S, Y_S, Z_S) of the two shooting centers S_1 and S_2 are accurately measured at the same time; wherein the heading angle φ refers to the angle between the main optical axis and true north, the angles for the images p_1 and p_2 being φ_1 and φ_2; the pitch angle ω refers to the angle between the axis and the vertical plane, the angles for p_1 and p_2 being ω_1 and ω_2; and the roll angle κ refers to the angle between the axis and the horizontal plane, the angles for p_1 and p_2 being κ_1 and κ_2;
The shooting center, the image point and the object point (the forest fire point) satisfy the collinearity equation and form a bundle of rays; the collinearity equation is obtained based on the rotation relation from the smartphone coordinate system to the world coordinate system. Dual-image analysis is carried out by the bundle method, each bundle of rays being taken as one adjustment unit, and the three-dimensional attitude angles acquired by the three-dimensional electronic compass at the moment of shooting serve as initial values for optimizing the spatial position and attitude parameters, i.e. for accurately acquiring the pose information of the moving-shot forest fire point images. After optimization, all the same-name image points of the left and right forest fire point images and the corresponding object point, i.e. the forest fire point coordinates, are calculated according to the collinearity condition equation:

x_L − x_L0 = −f_L · [a_L1(X − X_S1) + b_L1(Y − Y_S1) + c_L1(Z − Z_S1)] / [a_L3(X − X_S1) + b_L3(Y − Y_S1) + c_L3(Z − Z_S1)]
y_L − y_L0 = −f_L · [a_L2(X − X_S1) + b_L2(Y − Y_S1) + c_L2(Z − Z_S1)] / [a_L3(X − X_S1) + b_L3(Y − Y_S1) + c_L3(Z − Z_S1)]
x_R − x_R0 = −f_R · [a_R1(X − X_S2) + b_R1(Y − Y_S2) + c_R1(Z − Z_S2)] / [a_R3(X − X_S2) + b_R3(Y − Y_S2) + c_R3(Z − Z_S2)]
y_R − y_R0 = −f_R · [a_R2(X − X_S2) + b_R2(Y − Y_S2) + c_R2(Z − Z_S2)] / [a_R3(X − X_S2) + b_R3(Y − Y_S2) + c_R3(Z − Z_S2)]
wherein (x_L, y_L) and (x_R, y_R) are the image coordinates of the image points p_1 and p_2 of the forest fire point P on the left and right images, obtained directly from the left and right forest fire point images after matching; (x_L0, y_L0, f_L) and (x_R0, y_R0, f_R) are respectively the internal orientation elements of the left and right cameras; (X_S1, Y_S1, Z_S1) and (X_S2, Y_S2, Z_S2) are respectively the coordinates of the left and right shooting centers S_1 and S_2 in the photogrammetric space rectangular coordinate system, obtained through GPS positioning of the mobile terminal; (a_L1, b_L1, c_L1), (a_L2, b_L2, c_L2), (a_L3, b_L3, c_L3) respectively represent the parameters of the first, second and third rows of the rotation matrix R1 of the left image after optimization of the spatial position and attitude parameters, and (a_R1, b_R1, c_R1), (a_R2, b_R2, c_R2), (a_R3, b_R3, c_R3) respectively represent the parameters of the first, second and third rows of the rotation matrix R2 of the right image after optimization, the rotation matrices R1 and R2 being obtained based on the rotation matrix R; (X, Y, Z) are the coordinates of the object point corresponding to the left and right same-name image points in the photogrammetric coordinate system. With 6 pairs of same-name image points measured, the three-dimensional attitude angles of the left and right forest fire point images and the coordinates of the object point corresponding to the same-name image points are solved according to the least-squares adjustment principle, giving the object point coordinates (X, Y, Z) corresponding to the same-name image points of the forest fire point.
10. The method for positioning forest fires by combining a sky map and a mobile phone image according to claim 9, wherein the specific steps of step S2.3 are as follows:
each group of reference image and image to be matched undergoes dual-image analysis based on the bundle method to obtain one group of object point coordinates (X, Y, Z) corresponding to the same-name image points of the forest fire point, and n forest fire point images then yield m groups of fire positioning coordinates, wherein n ≥ 3: P_1(X_1, Y_1, Z_1), P_2(X_2, Y_2, Z_2), …, P_m(X_m, Y_m, Z_m);
the centroid of the m groups of points is solved as the estimate of the coordinates of the forest fire point P: if the coordinates of the i-th point are (X_i, Y_i, Z_i), the coordinates of the centroid are:

X_P = (1/m) Σ_{i=1}^{m} X_i, Y_P = (1/m) Σ_{i=1}^{m} Y_i, Z_P = (1/m) Σ_{i=1}^{m} Z_i
The solution (X_P, Y_P, Z_P) is then the positioning coordinates of the forest fire point.
CN202310312355.3A 2023-03-28 2023-03-28 Forest fire positioning method combining sky map and mobile phone image Pending CN116563699A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310312355.3A CN116563699A (en) 2023-03-28 2023-03-28 Forest fire positioning method combining sky map and mobile phone image

Publications (1)

Publication Number Publication Date
CN116563699A true CN116563699A (en) 2023-08-08

Family

ID=87492276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310312355.3A Pending CN116563699A (en) 2023-03-28 2023-03-28 Forest fire positioning method combining sky map and mobile phone image

Country Status (1)

Country Link
CN (1) CN116563699A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117876874A (en) * 2024-01-15 2024-04-12 西南交通大学 Forest fire detection and positioning method and system based on high-point monitoring video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination