CN115797668A - Image matching method, illegal construction detection method, terminal device, and storage medium - Google Patents


Info

Publication number: CN115797668A (application CN202211379620.1A)
Authority: CN (China)
Prior art keywords: image, unmanned aerial vehicle, feature points
Other languages: Chinese (zh)
Inventors: 周宏宾, 任宇鹏, 李乾坤, 殷俊
Current Assignee: Zhejiang Dahua Technology Co Ltd
Original Assignee: Zhejiang Dahua Technology Co Ltd
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202211379620.1A
Publication of CN115797668A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The application discloses an image matching method, an illegal construction detection method, a terminal device, and a computer storage medium. The matching method includes the following steps: acquiring an unmanned aerial vehicle image, and acquiring a corresponding orthographic map slice based on the unmanned aerial vehicle image; extracting a plurality of first feature points from the unmanned aerial vehicle image and a plurality of second feature points from the orthographic map slice; performing feature point matching on the plurality of first feature points and the plurality of second feature points to obtain a plurality of feature point groups; grouping all pixel points of the orthographic map slice based on the plurality of feature point groups to obtain a plurality of pixel groups; and forming a plurality of masks from the plurality of pixel groups, then performing re-projection between the orthographic map slices processed by the plurality of masks and the unmanned aerial vehicle image to obtain a re-projected matched image. The image matching method propagates the grouping information of the feature points to all pixel points, so that the edges of the pixel groups are optimized, and re-projection between the orthographic map and the non-orthographic unmanned aerial vehicle image is achieved.

Description

Image matching method, illegal construction detection method, terminal device, and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image matching method, an illegal construction detection method, a terminal device, and a computer storage medium.
Background
Image comparison, also called image change detection, requires that the two images be registered. The general procedure for registering two images is: first, extract feature points from each of the two images; then, match the two sets of feature points; next, compute a homography matrix between the two images from the matched feature point pairs; finally, re-project one image into the coordinate system of the other image according to the homography matrix. For a homography matrix to describe the relationship between two images, certain preconditions must be satisfied: the two images must be images of the same plane, or the camera poses of the two images must differ only by a rotation. However, in matching an orthographic base map with an unmanned aerial vehicle image, neither condition holds: the roofs and the ground in the images are often not on the same plane, and the camera poses of the two images differ by a translation.
Disclosure of Invention
The application provides an image matching method, an illegal construction detection method, a terminal device and a computer storage medium.
One technical solution adopted by the present application is to provide an image matching method, including:
acquiring an unmanned aerial vehicle image, and acquiring a corresponding orthographic map slice based on the unmanned aerial vehicle image;
extracting a plurality of first feature points of the unmanned aerial vehicle image and a plurality of second feature points of the orthographic map slice;
performing feature point matching on the plurality of first feature points and the plurality of second feature points to obtain a plurality of feature point groups;
grouping all pixel points of the orthographic map slice based on the characteristic point groups to obtain a plurality of pixel groups;
and forming a plurality of masks from the plurality of pixel groups, and performing re-projection between the orthographic map slices processed by the plurality of masks and the unmanned aerial vehicle image to obtain a re-projected matched image.
Wherein, the grouping all the pixel points of the ortho map slice based on the plurality of feature point groups to obtain a plurality of pixel groups comprises:
acquiring the characteristic position of each characteristic point group in the orthographic map slice;
acquiring the distance between all pixel points in the orthographic map slice and the characteristic position of each characteristic point group;
distributing each pixel point in the orthographic map slice to a feature point group with the nearest distance;
and forming a plurality of pixel groups according to the characteristic point groups and the distributed pixel points.
Wherein the feature position of each feature point group in the orthographic map slice is composed of the pixel positions, on the orthographic map slice, of the feature points in that group.
Wherein the performing of re-projection between the orthographic map slices processed by the plurality of masks and the unmanned aerial vehicle image to obtain a re-projected matched image includes:
acquiring a first homography matrix of each pixel group;
re-projecting, by using each first homography matrix, the corresponding mask-processed image of the orthographic map slice against the unmanned aerial vehicle image;
and superposing the multiple groups of re-projection results to form the re-projected matched image.
Wherein the performing of re-projection between the orthographic map slices processed by the plurality of masks and the unmanned aerial vehicle image to obtain a re-projected matched image includes:
re-projecting the orthographic map slices processed by the plurality of masks into the coordinate system of the unmanned aerial vehicle image, and superposing the projected orthographic map slices with the unmanned aerial vehicle image to form the re-projected matched image.
Wherein the performing of re-projection between the orthographic map slices processed by the plurality of masks and the unmanned aerial vehicle image to obtain a re-projected matched image includes:
re-projecting the unmanned aerial vehicle image into the coordinate system of the orthographic map slices processed by the plurality of masks, and superposing the projected unmanned aerial vehicle image with the orthographic map slices to form the re-projected matched image.
Wherein the performing of feature point matching on the plurality of first feature points and the plurality of second feature points to obtain a plurality of feature point groups includes:
performing feature point matching on the plurality of first feature points and the plurality of second feature points, and forming a first feature point group from the successfully matched first feature points and second feature points;
and performing feature point matching on the remaining first feature points and the remaining second feature points, and forming a second feature point group from the successfully matched first feature points and second feature points, until feature point grouping is complete.
Wherein the image matching method further comprises:
judging whether the number of remaining ungrouped feature points among the first feature points and the second feature points is smaller than a first preset threshold, or whether the number of feature points in the most recently formed feature point group is smaller than a second preset threshold;
if yes, determining that feature point grouping is complete;
if not, determining that feature point grouping is not complete, and acquiring a plurality of interior points from the remaining ungrouped feature points to form the latest feature point group.
Wherein the acquiring of a plurality of interior points from the remaining ungrouped feature points to form the latest feature point group includes:
calculating a second homography matrix from the remaining ungrouped feature points;
projecting the remaining ungrouped feature points according to the second homography matrix, and determining the feature points whose post-projection position error is smaller than a third preset threshold as interior points;
and forming the latest feature point group from the feature points determined as interior points.
Wherein, the obtaining of the corresponding orthographic map slice based on the unmanned aerial vehicle image comprises:
reading positioning information of the unmanned aerial vehicle image;
and cutting, from the orthographic map according to the positioning information, an orthographic map slice with the same image range as the unmanned aerial vehicle image.
Another technical solution adopted by the present application is to provide an illegal construction detection method, including:
acquiring a real-time unmanned aerial vehicle image, and acquiring a corresponding orthographic map slice based on the unmanned aerial vehicle image;
acquiring a matched image of the unmanned aerial vehicle image and the orthographic map slice, wherein the matched image is acquired by the image matching method described above;
obtaining difference information of buildings in the unmanned aerial vehicle image based on the matching image;
and judging whether the building is illegally built according to the difference information.
Another technical solution adopted by the present application is to provide a terminal device, where the terminal device includes a memory and a processor coupled to the memory;
wherein the memory is configured to store program data, and the processor is configured to execute the program data to implement the image matching method and/or the illegal construction detection method described above.
Another technical solution adopted by the present application is to provide a computer storage medium for storing program data which, when executed by a computer, implements the image matching method and/or the illegal construction detection method described above.
The beneficial effects of the present application are as follows: the terminal device acquires an unmanned aerial vehicle image and acquires a corresponding orthographic map slice based on the unmanned aerial vehicle image; extracts a plurality of first feature points of the unmanned aerial vehicle image and a plurality of second feature points of the orthographic map slice; performs feature point matching on the first feature points and the second feature points to obtain a plurality of feature point groups; groups all pixel points of the orthographic map slice based on the feature point groups to obtain a plurality of pixel groups; and forms a plurality of masks from the pixel groups, then performs re-projection between the mask-processed orthographic map slices and the unmanned aerial vehicle image to obtain a re-projected matched image. The image matching method propagates the grouping information of the feature points to all pixel points, so that the edges of the pixel groups are optimized and re-projection between the orthographic map and the non-orthographic unmanned aerial vehicle image is achieved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of an image matching method provided herein;
fig. 2 is a schematic diagram of a general flow of an image matching method provided in the present application;
fig. 3 is a schematic diagram of an embodiment of an unmanned aerial vehicle image provided by the present application;
FIG. 4 is a schematic view of an embodiment of an orthographic map slice provided herein;
FIG. 5 is a flowchart illustrating specific sub-steps of step S14 of the image matching method shown in FIG. 1;
FIG. 6 is a schematic diagram illustrating an embodiment of an orthographic map reprojection result provided in the present application;
FIG. 7 is a flowchart illustrating specific sub-steps of step S15 of the image matching method shown in FIG. 1;
FIG. 8 is a schematic flowchart illustrating an embodiment of the illegal construction detection method provided in the present application;
fig. 9 is a schematic structural diagram of an embodiment of a terminal device provided in the present application;
FIG. 10 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Supervision of illegal construction has always been one of the important tasks of urban management. Traditional manual patrol for illegal buildings is time-consuming and labor-intensive, so that supervision is neither timely nor effective. Meanwhile, owing to limited patrol angles, illegal extension and addition on building roofs cannot be discovered in time, and a large amount of time and effort must subsequently be invested in demolishing and rectifying the illegal constructions. How to control and remediate illegal construction in the long term, in a timely manner, and at low cost, turning after-the-fact treatment into in-process handling or even advance prevention, effectively deterring illegal extension and addition, and improving the efficiency of supervision departments, is a problem that urgently needs to be solved.
An unmanned aerial vehicle based illegal construction inspection platform is expected to help municipal administration departments solve this problem. Building the inspection platform comprises two stages: base map construction and routine inspection. In the base map construction stage, an unmanned aerial vehicle shoots a large number of images of the target area in an operation mode with a high overlap rate; the images are stitched into an orthographic map and a high-precision Digital Surface Model (DSM) using photogrammetry, and the orthographic map and the DSM serve as the base maps of the target area to be managed. In the routine inspection stage, the unmanned aerial vehicle shoots images of the target area in an operation mode with a low overlap rate, and an image comparison technique compares the unmanned aerial vehicle images with the orthographic map to detect illegal construction.
However, the roofs and the ground in the images are often not on the same plane, and the camera poses of the two images differ by a translation, which degrades registration when matching the orthographic base map with the unmanned aerial vehicle image. The present application therefore provides an image matching method that can achieve registration between an orthographic map and a non-orthographic unmanned aerial vehicle image. The method divides the orthographic map slice into a plurality of image blocks, each of which is an image of a single plane, so that the blocks can be registered with the non-orthographic unmanned aerial vehicle image using a plurality of homography matrices.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic flowchart of an embodiment of an image matching method provided by the present application, and fig. 2 is a schematic flowchart of a general flow of the image matching method provided by the present application.
The image matching method of the present application is applied to an image matching apparatus, which may be a server or a system in which a server and a terminal device cooperate. Accordingly, the parts included in the image matching apparatus, such as units, sub-units, modules, and sub-modules, may all be disposed in the server, or may be distributed between the server and the terminal device.
Further, the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, software or software modules for providing distributed servers, or as a single software or software module, and is not limited herein. In some possible implementations, the image matching method of the embodiments of the present application may be implemented by a processor calling computer readable instructions stored in a memory.
Specifically, as shown in fig. 1, the image matching method in the embodiment of the present application specifically includes the following steps:
step S11: acquiring unmanned aerial vehicle images, and acquiring corresponding orthographic map slices based on the unmanned aerial vehicle images.
In this embodiment of the application, the image matching apparatus acquires the unmanned aerial vehicle image, whose main source is real-time shooting by the unmanned aerial vehicle above the target area. The image matching apparatus then acquires a corresponding orthographic map slice according to the unmanned aerial vehicle image.
As shown in fig. 2, the image matching apparatus prestores an orthographic map of a large area, or acquires it from a cloud server. Then, the image matching apparatus reads GPS information, such as the longitude and latitude of the target area, from the EXIF (Exchangeable Image File Format) data of the unmanned aerial vehicle image, and cuts out a map slice corresponding to the range of the unmanned aerial vehicle image from the orthographic map according to the GPS information.
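The range computation behind this cropping step can be sketched as follows. This is a minimal illustration under assumed conditions, not the patent's implementation: it assumes a north-up orthographic map with a simple linear degrees-per-pixel geo-referencing, and all function names and values are hypothetical. The margin parameter mirrors the extension of the preset area described below.

```python
# Hypothetical sketch: compute the pixel rectangle of an orthographic map
# slice covering the same geographic range as a drone image (assumed
# linear geo-referencing; names and numbers are illustrative only).

def geo_to_pixel(lon, lat, origin_lon, origin_lat, deg_per_px):
    """Map longitude/latitude to pixel coordinates of a north-up map
    whose top-left corner sits at (origin_lon, origin_lat)."""
    col = (lon - origin_lon) / deg_per_px
    row = (origin_lat - lat) / deg_per_px  # image rows grow southward
    return int(round(col)), int(round(row))

def slice_bounds(center_lon, center_lat, half_w_deg, half_h_deg,
                 origin_lon, origin_lat, deg_per_px, margin_px=0):
    """Pixel rectangle (left, top, right, bottom) of the map slice,
    optionally extended by a margin to tolerate distortion and GPS error."""
    left, top = geo_to_pixel(center_lon - half_w_deg,
                             center_lat + half_h_deg,
                             origin_lon, origin_lat, deg_per_px)
    right, bottom = geo_to_pixel(center_lon + half_w_deg,
                                 center_lat - half_h_deg,
                                 origin_lon, origin_lat, deg_per_px)
    return (left - margin_px, top - margin_px,
            right + margin_px, bottom + margin_px)

# Toy map with top-left corner at (120.0 E, 30.5 N), 1e-4 deg per pixel;
# the drone image is centered at (120.01 E, 30.49 N), spanning +-0.005 deg.
bounds = slice_bounds(120.01, 30.49, 0.005, 0.005, 120.0, 30.5, 1e-4,
                      margin_px=10)
print(bounds)  # (40, 40, 160, 160)
```

The returned rectangle would then be used to crop the slice from the prestored orthographic map.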
Specifically, fig. 3 is a schematic diagram of an embodiment of an unmanned aerial vehicle image provided by the present application, and fig. 4 is a schematic diagram of an embodiment of an orthographic map slice provided by the present application. Comparing fig. 3 and fig. 4, it can be seen that the content of the orthographic map slice is substantially the same as that of the unmanned aerial vehicle image, and the GPS information of the two images is identical.
In other embodiments, the image matching device may extend the preset area based on the map area determined by the GPS information of the drone image, thereby ensuring that the orthographic map slice can provide all pixel information for drone image matching under the condition of considering camera distortion, calculation errors, and the like.
Step S12: extracting a plurality of first feature points of the unmanned aerial vehicle image, and extracting a plurality of second feature points of the orthographic map slice.
Step S13: and matching the characteristic points of the plurality of first characteristic points and the plurality of second characteristic points to obtain a plurality of characteristic point groups.
In this embodiment of the application, the image matching apparatus extracts a plurality of first feature points from the unmanned aerial vehicle image and a plurality of second feature points from the orthographic map slice, and then performs feature point matching on the first feature points and the second feature points. The matching criterion may be pixel similarity, shape similarity, and/or the like, between a first feature point and a second feature point.
Specifically, the image matching apparatus may perform a first round of feature point matching on all the first feature points and all the second feature points, and form a first feature point group from the successfully matched first and second feature points. The apparatus then performs a second round of matching on the first and second feature points left over from the first round, and forms a second feature point group from the newly matched pairs. After several rounds of matching, a plurality of feature point groups are obtained; the process stops when no first or second feature points remain, or when the current grouping state satisfies the grouping-complete condition.
For example, suppose 1000 feature points are extracted from each of the unmanned aerial vehicle image and the orthographic map slice, and 300 pairs are successfully matched in the first round; these 300 matched pairs form the first feature point group. The image matching apparatus then continues matching the remaining 700 feature points until the grouping-complete condition is satisfied.
Further, after each round of feature point matching, the image matching apparatus may decide whether grouping is complete according to whether the number of remaining ungrouped feature points is smaller than a first preset threshold, or whether the number of feature points in the most recently formed group is smaller than a second preset threshold. If the number of remaining ungrouped feature points is smaller than the first preset threshold, or the size of the newest group is smaller than the second preset threshold, grouping is judged complete. If the number of remaining ungrouped feature points is greater than or equal to the first preset threshold and the size of the newest group is greater than or equal to the second preset threshold, grouping is judged not complete.
If grouping is not complete, the image matching apparatus can use the RANSAC algorithm to solve a homography matrix over the currently ungrouped feature points and take all the interior points as a new feature point group. An interior point (inlier) is a feature point whose position error after projection according to the homography matrix is smaller than a third preset threshold. The apparatus then re-checks whether grouping is complete and repeats these actions until it is.
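The interior-point selection and the termination check described above can be sketched as follows. This is a toy illustration, not the patent's implementation: the RANSAC estimation of the homography itself is omitted, and the homography, thresholds, and point pairs are assumed values.

```python
# Sketch: keep matched pairs whose reprojection error under a candidate
# homography H is below a threshold (interior points / inliers), and
# check the two preset-threshold stopping conditions for grouping.

def project(H, x, y):
    """Apply a 3x3 homography to a point (homogeneous normalization)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v

def select_inliers(H, pairs, err_thresh):
    """pairs: list of ((x, y) in map slice, (u, v) in drone image)."""
    inliers = []
    for (x, y), (u, v) in pairs:
        pu, pv = project(H, x, y)
        if ((pu - u) ** 2 + (pv - v) ** 2) ** 0.5 < err_thresh:
            inliers.append(((x, y), (u, v)))
    return inliers

def grouping_done(n_ungrouped, n_newest_group, min_remaining, min_group):
    """Stop when too few points remain or the newest group is too small."""
    return n_ungrouped < min_remaining or n_newest_group < min_group

# Identity homography: coinciding points are inliers.
H = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
pairs = [((10.0, 10.0), (10.0, 10.0)),   # error 0 -> inlier
         ((20.0, 20.0), (28.0, 20.0))]   # error 8 -> outlier at thresh 3
group = select_inliers(H, pairs, err_thresh=3.0)
print(len(group))                                          # 1
print(grouping_done(5, 10, min_remaining=8, min_group=4))  # True
```

Each call to `select_inliers` would yield one new feature point group, and `grouping_done` decides whether the loop continues.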
If the feature point grouping is completed, the process proceeds to step S14.
Step S14: and grouping all pixel points of the map section based on the characteristic point groups to obtain a plurality of pixel groups.
In this embodiment of the application, the image matching apparatus propagates the grouping information of the feature point groups to all pixel points of the orthographic map slice or of the unmanned aerial vehicle image according to the nearest-neighbor principle. Then, the image matching apparatus performs superpixel segmentation on the orthographic map slice or the unmanned aerial vehicle image, and revises the grouping information within each superpixel according to a minority-obeys-majority principle, so that the edges of the pixel groups lie closer to the actual texture edges.
The most intuitive description of a superpixel is that it "aggregates" pixels with similar characteristics into a more representative large "element", which then serves as the basic unit for subsequent image processing algorithms. This greatly reduces dimensionality, and it can also eliminate some abnormal pixel points.
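The minority-obeys-majority revision described above can be sketched as follows. This is a toy illustration under assumed inputs: the superpixel segmentation itself (e.g. SLIC) is assumed to have already been run, and labels here are hypothetical data.

```python
# Sketch: within each superpixel, replace every pixel's group label by
# the most common group label found in that superpixel.

from collections import Counter

def revise_by_superpixel(group_labels, superpixel_ids):
    """Both inputs are flat lists over the same pixels; pixels sharing a
    superpixel id adopt the majority group label of that superpixel."""
    by_sp = {}
    for label, sp in zip(group_labels, superpixel_ids):
        by_sp.setdefault(sp, []).append(label)
    majority = {sp: Counter(lbls).most_common(1)[0][0]
                for sp, lbls in by_sp.items()}
    return [majority[sp] for sp in superpixel_ids]

# Superpixel 0 is mostly group "a", so its stray "b" pixel is revised.
labels = ["a", "a", "b", "b", "b"]
sps    = [0,   0,   0,   1,   1]
print(revise_by_superpixel(labels, sps))  # ['a', 'a', 'a', 'b', 'b']
```

Because superpixel boundaries follow image texture, the revised group labels inherit those texture-aligned edges.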
Next, taking super-pixel segmentation of all pixels of the map slice as an example, please refer to fig. 5 specifically for a specific process of super-pixel segmentation, and fig. 5 is a flowchart illustrating a specific sub-step of step S14 of the image matching method shown in fig. 1.
It should be noted that, in other embodiments, the superpixel segmentation may be performed on all the pixel points of the unmanned aerial vehicle image.
Specifically, as shown in fig. 5, the image matching method in the embodiment of the present application specifically includes the following steps:
step S141: and acquiring the characteristic position of each characteristic point group in the orthographic map slice.
In the embodiment of the present application, the image matching apparatus acquires the feature position of each feature point grouping in the ortho-map slice. The feature position of the feature point group may be composed of the pixel position of each feature point in the feature point group on the ortho map slice, or may be determined by the average value of the pixel coordinates of all the feature points in the feature point group on the ortho map slice, or may be determined by the cluster center of the pixel coordinates of all the feature points in the feature point group on the ortho map slice, or the like.
Step S142: and acquiring the distance between all pixel points in the orthographic map slice and the characteristic position of each characteristic point group.
Step S143: and allocating each pixel point in the orthographic map slice to the characteristic point group with the nearest distance.
In this embodiment of the application, the image matching apparatus calculates the distance between the pixel position of each pixel point in the orthographic map slice and the feature positions of all the feature point groups, and then assigns each pixel point to the feature point group at the smallest distance, thereby propagating the grouping information of the feature point groups.
Step S144: and forming a plurality of pixel groups according to the characteristic point groups and the distributed pixel points.
In this embodiment of the application, after the image matching apparatus finishes assigning all the pixel points of the orthographic map slice, each feature point group together with its assigned pixel points forms a new pixel group. The present application performs superpixel segmentation on the orthographic map slice guided by the feature point groups to form the pixel groups, so that the edges of the pixel groups lie closer to the actual texture edges.
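Steps S141 to S144 can be sketched as follows. This is a toy illustration, not the patent's implementation: it takes the centroid of each feature point group as its feature position (one of the options mentioned in step S141), and the coordinates are hypothetical data.

```python
# Sketch: compute a feature position per group (centroid), assign every
# pixel to the nearest group, and collect the resulting pixel groups.

def centroid(points):
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def assign_pixels(pixels, group_points):
    """Return {group index: [pixels]} by nearest feature position."""
    positions = [centroid(pts) for pts in group_points]
    pixel_groups = {i: [] for i in range(len(positions))}
    for px, py in pixels:
        best = min(range(len(positions)),
                   key=lambda i: (positions[i][0] - px) ** 2
                               + (positions[i][1] - py) ** 2)
        pixel_groups[best].append((px, py))
    return pixel_groups

groups = [[(0, 0), (2, 0)],      # feature position (1.0, 0.0)
          [(10, 10), (12, 10)]]  # feature position (11.0, 10.0)
pixels = [(1, 1), (11, 9), (3, 0)]
result = assign_pixels(pixels, groups)
print(result[0])  # [(1, 1), (3, 0)]
print(result[1])  # [(11, 9)]
```

In the method above, the full pixel grid of the slice would be assigned this way before the superpixel-based revision.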
Step S15: and grouping a plurality of pixels to form a plurality of masks, and carrying out re-projection on the orthographic map slices processed by the plurality of masks and the unmanned aerial vehicle image to obtain a re-projected matched image.
In this embodiment of the application, the image matching apparatus forms a mask from each pixel group in turn, and uses the homography matrix corresponding to the pixel group either to re-project the masked orthographic map slice into the coordinate system of the unmanned aerial vehicle image, or to re-project the masked unmanned aerial vehicle image into the coordinate system of the orthographic map slice. The image matching apparatus superposes the multiple groups of re-projected images to obtain the final re-projection result, that is, the final re-projected matched image. Referring to fig. 6, fig. 6 is a schematic diagram of an embodiment of an orthographic map re-projection result provided in the present application.
Specifically, the image matching device re-projects the orthographic map slices processed by the masks into a coordinate system of the unmanned aerial vehicle image, and forms the re-projected matching image by superposing the projected orthographic map slices with the unmanned aerial vehicle image.
Or the image matching device re-projects the unmanned aerial vehicle image into a coordinate system of the orthographic map slices processed by the masks, and the re-projected matched image is formed by superposing the projected unmanned aerial vehicle image and the orthographic map slices.
Further, the image shown in fig. 3 is a non-orthographic image obtained by unmanned aerial vehicle inspection. Because the flying height of the unmanned aerial vehicle is low, the perspective effect is obvious and the side elevations of the buildings are visible; the roofs of the buildings have different heights and do not lie on the same plane, so matching with an orthographic map cannot be achieved with a single homography matrix. Fig. 6 shows the result of matching an orthographic map slice to the unmanned aerial vehicle image shown in fig. 3 according to the image matching method described in the present application.
In addition, since the orthographic map contains no information about building side elevations, pixels are missing in the projected regions that correspond to those elevations. After re-projection, the building roofs on the map are aligned with the roofs in the unmanned aerial vehicle image, which satisfies the requirement of subsequent comparison-based illegal construction detection. The missing pixels at the projected elevation positions can be filled with the pixels at the corresponding positions of the unmanned aerial vehicle image, and a deep-learning-based image comparison algorithm can effectively filter the noise of these regions.
Further, since the orthographic map slice region obtained after each mask can be approximated as a single plane, the image matching apparatus can calculate a homography matrix between each masked orthographic map slice region and the unmanned aerial vehicle image, and thus use multiple homography matrices to perform the re-projection from the orthographic map slice to the non-orthographic unmanned aerial vehicle image block by block. The re-projection process is shown in fig. 7, which is a flowchart illustrating the specific sub-steps of step S15 of the image matching method shown in fig. 1.
It should be noted that, in other embodiments, the image matching apparatus may instead calculate a homography matrix between each masked region of the unmanned aerial vehicle image and the orthographic map slice, so that the non-orthographic unmanned aerial vehicle image is re-projected onto the orthographic map slice block by block using multiple homography matrices.
Specifically, as shown in fig. 7, step S15 of the image matching method in the embodiment of the present application includes the following sub-steps:
Step S151: acquiring a first homography matrix for each pixel group.
In the embodiment of the present application, the image matching apparatus obtains, for each pixel group, a homography matrix between the pixel points from the unmanned aerial vehicle image and the pixel points from the orthographic map slice.
Here, the homography matrix is the matrix used in a perspective (projective) transformation, which describes the mapping between two planes. It is called a homography because the correspondence between the two planes is uniquely determined: the transformation can be represented by a single matrix.
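As a non-authoritative illustration of how such a first homography matrix can be estimated from the matched point pairs of one pixel group, the sketch below uses a plain NumPy direct linear transform; the point coordinates are made-up values, and a production implementation would typically use a robust estimator such as OpenCV's `findHomography` with RANSAC instead:

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Direct linear transform: estimate the 3x3 homography H with
    dst ~ H @ src from at least four point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The flattened H is the null vector of this system: take the right
    # singular vector belonging to the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so that H[2, 2] == 1

# Made-up matched pixel positions for one pixel group: points in the
# ortho-map slice (src) and the corresponding drone-image points (dst).
src = [(0, 0), (100, 0), (100, 100), (0, 100)]
dst = [(10, 5), (115, 8), (110, 120), (3, 112)]
H = estimate_homography(src, dst)

# Re-project an ortho-map point into the drone-image frame.
p = H @ np.array([50.0, 50.0, 1.0])
print(p[:2] / p[2])
```

With exactly four correspondences the homography is determined exactly; with more, the SVD gives a least-squares estimate.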
Step S152: re-projecting the corresponding regions of the plurality of masked orthographic map slices and the unmanned aerial vehicle image using each first homography matrix.
Step S153: superimposing the multiple groups of re-projection results to form the re-projected matched image.
In this embodiment of the application, the image matching apparatus re-projects the masked orthographic map slices, or the original orthographic map slice, into the coordinate system of the unmanned aerial vehicle image according to the homography matrices calculated in step S151. Since step S151 computes a homography matrix for each pixel group, i.e., between each group's image region and the unmanned aerial vehicle image, the image matching apparatus can use these multiple homography matrices to re-project the mask-processed orthographic map slices into the coordinate system of the unmanned aerial vehicle image block by block.
After the image matching apparatus has completed mask processing, re-projection, and superposition for all orthographic map slice regions, it obtains the result of matching the original orthographic map slice to the unmanned aerial vehicle image, namely the re-projected matched image.
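The block-wise re-projection and superposition described above can be sketched as follows, assuming one binary mask and one homography (ortho frame to drone frame) per pixel group; the function name, the nearest-neighbour inverse warping, and the toy data are illustrative assumptions rather than the patent's implementation:

```python
import numpy as np

def reproject_blocks(ortho, masks, homographies, out_shape):
    """Warp each masked region of the ortho slice into the drone-image
    frame with its own homography, then superimpose the warped pieces."""
    out = np.zeros(out_shape, dtype=ortho.dtype)
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = ys.ravel(), xs.ravel()
    # Homogeneous coordinates of every output (drone-frame) pixel.
    dst = np.stack([dx, dy, np.ones(dx.size)])
    for mask, H in zip(masks, homographies):
        # Inverse warp: map drone-frame pixels back into the ortho frame.
        src = np.linalg.inv(H) @ dst
        sx = np.rint(src[0] / src[2]).astype(int)
        sy = np.rint(src[1] / src[2]).astype(int)
        inside = (sx >= 0) & (sx < ortho.shape[1]) & (sy >= 0) & (sy < ortho.shape[0])
        sxc = np.clip(sx, 0, ortho.shape[1] - 1)
        syc = np.clip(sy, 0, ortho.shape[0] - 1)
        keep = inside & mask[syc, sxc]  # restrict to this group's pixels
        out[dy[keep], dx[keep]] = ortho[syc[keep], sxc[keep]]
    return out

# Toy example: a 4x4 ortho slice split into two pixel groups,
# each with an identity homography.
ortho = np.arange(16.0).reshape(4, 4)
m1 = np.zeros((4, 4), dtype=bool)
m1[:, :2] = True
matched = reproject_blocks(ortho, [m1, ~m1], [np.eye(3), np.eye(3)], (4, 4))
```

Inverse warping (iterating over output pixels) is chosen here because it leaves no holes inside each warped block; unwritten pixels in `out` are exactly the missing regions the description proposes to fill from the drone image.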
In the embodiment of the application, an image matching apparatus acquires an unmanned aerial vehicle image and acquires a corresponding orthographic map slice based on the unmanned aerial vehicle image; extracts a plurality of first feature points of the unmanned aerial vehicle image and a plurality of second feature points of the orthographic map slice; performs feature point matching on the plurality of first feature points and the plurality of second feature points to obtain a plurality of feature point groups; groups all pixel points of the orthographic map slice based on the plurality of feature point groups to obtain a plurality of pixel groups; and forms a plurality of masks from the plurality of pixel groups, then re-projects the orthographic map slices processed by the plurality of masks and the unmanned aerial vehicle image to obtain a re-projected matched image. The image matching method propagates the grouping information of the feature points to all pixel points, so that the edges of the pixel groups are optimized, and re-projection between the orthographic map and the non-orthographic unmanned aerial vehicle image is realized.
Based on the image matching method of the above embodiments, the present application further provides an illegal construction detection method; refer to fig. 8, which is a schematic flowchart of an embodiment of the illegal construction detection method provided by the present application.
The illegal construction detection method is applied to an illegal construction detection apparatus, which may be a server or a system in which a server and a terminal device cooperate. Accordingly, each part included in the illegal construction detection apparatus, such as each unit, sub-unit, module, and sub-module, may be disposed entirely in the server, or distributed between the server and the terminal device.
Further, the server may be hardware or software. When the server is hardware, it may be implemented as a distributed cluster of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules, for example for providing distributed services, or as a single piece of software or software module, which is not limited herein. In some possible implementations, the illegal construction detection method of the embodiments of the present application may be implemented by a processor calling computer-readable instructions stored in a memory.
Specifically, as shown in fig. 8, the illegal construction detection method of the embodiment of the present application includes the following steps:
step S21: acquiring a real-time unmanned aerial vehicle image, and acquiring a corresponding orthographic map slice based on the unmanned aerial vehicle image.
In the embodiment of the present application, the content of step S21 has already been described in detail in step S11 in the above embodiment, and is not described herein again.
Step S22: and acquiring a matching image of the unmanned aerial vehicle image and the orthographic map slice.
Step S23: and acquiring difference information of buildings in the unmanned aerial vehicle image based on the matching image.
In this embodiment of the application, the illegal construction detection apparatus may analyze the difference information of buildings in the unmanned aerial vehicle image using the orthographic map re-projection result shown in fig. 6.
Step S24: and judging whether the building is illegally built according to the difference information.
In the embodiment of the application, the illegal construction detection apparatus relies on the building differences and depth information from multiple image re-projection results to estimate the positions of abnormal differences; combined with on-site surveys by personnel, this can greatly improve the efficiency of illegal construction inspection and reduce labor costs.
The image matching method and the illegal construction detection method described above can directly register and compare a non-orthographic unmanned aerial vehicle image with an orthographic map, without stitching a large orthographic map for each inspection and without any requirement on the overlap rate of inspection images; they therefore run more efficiently and can meet the inspection needs of large scenes. In addition, these methods do not require the unmanned aerial vehicle to capture orthographic images, so an ordinary low-altitude unmanned aerial vehicle can also perform the task, which lowers the technical threshold of illegal construction detection and improves its universality.
The above embodiments are only examples of the present application and do not limit its technical scope; any minor modifications, equivalent changes, or refinements made to the above content in accordance with the essence of the present application still fall within the technical scope of the present application.
Continuing to refer to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of a terminal device provided in the present application. The terminal device 500 of the embodiment of the present application includes a processor 51, a memory 52, an input-output device 53, and a bus 54.
The processor 51, the memory 52, and the input/output device 53 are respectively connected to the bus 54, the memory 52 stores program data, and the processor 51 is configured to execute the program data to implement the image matching method and/or the violation detection method according to the above embodiments.
In the embodiment of the present application, the processor 51 may also be referred to as a CPU (Central Processing Unit). The processor 51 may be an integrated circuit chip having signal processing capabilities. The processor 51 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor 51 may be any conventional processor or the like.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application, the computer storage medium 600 stores program data 61, and the program data 61 is used to implement the image matching method and/or the violation detection method according to the above embodiments when executed by a processor.
Embodiments of the present application may be implemented in software functional units and may be stored in a computer readable storage medium when sold or used as a stand-alone product. Based on such understanding, the technical solutions of the present application, which are essential or contributing to the prior art, or all or part of the technical solutions may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the methods according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description covers only embodiments of the present application and is not intended to limit its patent scope; any equivalent structures or equivalent process transformations made using the contents of the specification and drawings, or applied directly or indirectly in other related technical fields, are likewise included within the patent protection scope of the present application.

Claims (13)

1. An image matching method, characterized in that the image matching method comprises:
acquiring an unmanned aerial vehicle image, and acquiring a corresponding orthographic map slice based on the unmanned aerial vehicle image;
extracting a plurality of first feature points of the unmanned aerial vehicle image and a plurality of second feature points of the ortho map slice;
performing feature point matching on the first feature points and the second feature points to obtain a plurality of feature point groups;
grouping all pixel points of the ortho-map slice based on a plurality of feature point groups to obtain a plurality of pixel groups;
and forming a plurality of masks from the plurality of pixel groups, and re-projecting the orthographic map slices processed by the plurality of masks and the unmanned aerial vehicle image to obtain a re-projected matched image.
2. The image matching method according to claim 1,
the grouping of all pixel points of the ortho map slice based on the plurality of feature point groups to obtain a plurality of pixel groups comprises:
acquiring the characteristic position of each characteristic point group in the orthographic map slice;
acquiring the distance between all pixel points in the orthographic map slice and the characteristic position of each characteristic point group;
distributing each pixel point in the orthographic map slice to a feature point group with the nearest distance;
and forming a plurality of pixel groups according to the characteristic point groups and the distributed pixel points.
3. The image matching method according to claim 2,
the feature position of the feature point group in the ortho map slice is composed of the pixel position, on the ortho map slice, of each feature point in the feature point group.
4. The image matching method according to claim 1,
the reprojection of the orthographic map slices processed by using the plurality of masks and the unmanned aerial vehicle image is performed to obtain a reprojected matching image, and the method comprises the following steps:
acquiring a first homography matrix of each pixel group;
carrying out re-projection on the corresponding images of the plurality of masked orthographic map slices and the unmanned aerial vehicle image by utilizing each first homography matrix;
and superposing the multiple groups of re-projection results to form the re-projected matched image.
5. The image matching method according to claim 1 or 4,
the reprojection of the orthographic map slices processed by using a plurality of masks and the unmanned aerial vehicle image to obtain a reprojected matching image comprises the following steps:
re-projecting the orthographic map slices processed by the plurality of masks into the coordinate system of the unmanned aerial vehicle image, and superimposing the projected orthographic map slices on the unmanned aerial vehicle image to form the re-projected matched image.
6. The image matching method according to claim 1 or 4,
the reprojection of the orthographic map slices processed by using a plurality of masks and the unmanned aerial vehicle image to obtain a reprojected matching image comprises the following steps:
re-projecting the unmanned aerial vehicle image into the coordinate system of the orthographic map slices processed by the plurality of masks, and superimposing the projected unmanned aerial vehicle image on the orthographic map slices to form the re-projected matched image.
7. The image matching method according to claim 1,
the matching of the feature points of the plurality of first feature points and the plurality of second feature points to obtain a plurality of feature point groups comprises:
performing feature point matching on the plurality of first feature points and the plurality of second feature points, and forming first feature point groups by the successfully matched first feature points and second feature points;
and performing feature point matching on the remaining first feature points and the remaining second feature points, and forming a second feature point group from the successfully matched first feature points and second feature points, until feature point grouping is completed.
8. The image matching method according to claim 7,
the image matching method further includes:
judging whether the number of remaining non-grouped feature points among the plurality of first feature points and the plurality of second feature points is smaller than a first preset threshold value, or whether the number of feature points in the most recently formed feature point group is smaller than a second preset threshold value;
if yes, determining that the feature point grouping is completed;
if not, determining that the feature point grouping is not completed, and acquiring a plurality of interior points from the remaining non-grouped feature points to form a latest feature point group.
9. The image matching method according to claim 8,
the step of obtaining a plurality of interior points from the remaining non-grouped feature points to form the latest feature point group comprises the following steps:
calculating a second homography matrix according to the residual non-grouped feature points;
projecting the remaining non-grouped feature points according to the second homography matrix, and determining the feature points with the position error smaller than a third preset threshold value after projection as interior points;
and forming a latest feature point group by using the feature points determined as the interior points.
10. The image matching method according to claim 1,
the obtaining of the corresponding orthographic map slice based on the unmanned aerial vehicle image comprises:
reading positioning information of the unmanned aerial vehicle image;
and cutting out an orthographic map slice with the same range as the unmanned aerial vehicle image from the orthographic map according to the positioning information.
11. An illegal construction detection method, characterized in that the method comprises:
acquiring a real-time unmanned aerial vehicle image, and acquiring a corresponding orthographic map slice based on the unmanned aerial vehicle image;
acquiring a matching image of the unmanned aerial vehicle image and the ortho map slice, wherein the matching image is acquired according to the image matching method of any one of claims 1 to 10;
acquiring difference information of buildings in the unmanned aerial vehicle image based on the matching image;
and judging whether the building is illegally built according to the difference information.
12. A terminal device, comprising a memory and a processor coupled to the memory;
wherein the memory is adapted to store program data, and the processor is adapted to execute the program data to implement the image matching method of any one of claims 1 to 10, and/or the illegal construction detection method of claim 11.
13. A computer storage medium for storing program data which, when executed by a computer, is adapted to implement the image matching method of any one of claims 1 to 10 and/or the illegal construction detection method of claim 11.
CN202211379620.1A 2022-11-04 2022-11-04 Image matching method, illicit detection method, terminal device, and storage medium Pending CN115797668A (en)

Publication: CN115797668A (CN), application CN202211379620.1A, filed 2022-11-04, published 2023-03-14; family ID 85435703.

