CN116309838A - Point cloud map positioning capability evaluation system, method, electronic device and storage medium - Google Patents


Info

Publication number
CN116309838A
Authority
CN
China
Prior art keywords
positioning
module
point cloud
information
abnormal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310266437.9A
Other languages
Chinese (zh)
Inventor
张双力
丛林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yixian Advanced Technology Co ltd
Original Assignee
Hangzhou Yixian Advanced Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Yixian Advanced Technology Co ltd filed Critical Hangzhou Yixian Advanced Technology Co ltd
Priority to CN202310266437.9A priority Critical patent/CN116309838A/en
Publication of CN116309838A publication Critical patent/CN116309838A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 25/00 - Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 - Determining position or orientation of objects or cameras using feature-based methods involving models
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761 - Proximity, similarity or dissimilarity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Manufacturing & Machinery (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a point cloud map positioning capability evaluation system applied to augmented reality scenes, which detects positioning failure and positioning ambiguity in a visual positioning map. A data recording module records the request data set generated by past positioning. A positioning abnormality detection module searches the point cloud map for local sub-regions with abnormal positioning according to the request data set and outputs positioning abnormality information for each local sub-region; an evaluation result output module outputs a positioning evaluation result for the point cloud map according to the positioning abnormality information. The system automatically inspects local areas of the point cloud map, narrows abnormal positioning down to the smallest possible region, and identifies the specific cause of the abnormality, so that operators can update the positioning map at minimum cost and rectify abnormal areas efficiently, reducing interference with the running visual positioning service.

Description

Point cloud map positioning capability evaluation system, method, electronic device and storage medium
Technical Field
The present application relates to the field of augmented reality, and in particular, to a point cloud map positioning capability evaluation system, a point cloud map positioning capability evaluation method, a computer device, and a computer readable storage medium.
Background
An AR device inevitably accumulates tracking error over time; to keep the AR experience stable, the current device pose is recalibrated through visual map positioning. However, the accuracy of visual positioning depends on the quality of the visual map and the precision of the positioning algorithm: if the map quality is poor, positioning may be inaccurate or fail altogether.
In the related art, the positioning capability of a visual map is checked and evaluated only before the map goes online. During long-term service, however, scene changes (equipment replacement, vegetation growth, and traffic changes) still degrade the map's positioning capability, causing positioning failure, inaccuracy, and positioning errors in some areas.
Disclosure of Invention
The embodiments of the application provide a point cloud map positioning capability evaluation system, a point cloud map positioning capability evaluation method, computer equipment, and a computer readable storage medium, intended to at least solve the problem in the related art that the positioning capability of a point cloud map is difficult to detect in a fine-grained manner.
In a first aspect, an embodiment of the present application provides a point cloud positioning capability assessment system, where the system includes: the system comprises a data recording module, a positioning abnormality detection module and an evaluation result output module, wherein:
The data recording module is used for recording a request data set generated by past positioning, wherein the request data set comprises: request image features extracted from a request image sent by a terminal, and associated information of the request image;
the positioning abnormality detection module is used for searching local subareas with abnormal positioning in the point cloud map according to the request data set and outputting positioning abnormality information corresponding to each local subarea;
and the evaluation result output module is used for outputting the positioning evaluation result of the point cloud map according to the positioning abnormal information.
In some of these embodiments, the association information of the requested image includes: device ID, image ID, 3D feature point ID, and localization pose, wherein,
the device ID is the serial number of the terminal device that sent the request image;
the image ID is the sequence number of the similar image frame in the point cloud map that matches the request image features;
the 3D feature point ID is the sequence number of a 3D feature point of that similar image frame.
In some embodiments, the positioning anomaly detection module includes a positioning failure detection module, a scene change detection module, and a positioning ambiguity detection module, wherein:
The positioning failure detection module is used for detecting a failure subarea according to pose distribution information of the requested image features in the point cloud map;
the scene change detection module is used for detecting scene change subareas according to the 3D characteristic points corresponding to the request image characteristics in the point cloud map and the 3D characteristic points of the request image characteristics;
the positioning ambiguity detection module is used for detecting positioning ambiguity sub-regions according to the real scene positions that correspond to the same positioning result at different times.
In some embodiments, the positioning failure detection module includes a first preprocessing module and a first judgment module, where
The first preprocessing module is used for acquiring positioning poses corresponding to the features of the requested image from the request data set and generating pose distribution information in the point cloud map according to all the positioning poses;
the first judging module is used for searching the point cloud map for failure sub-regions according to the pose distribution information.
In some of these embodiments, the first judging module includes a heat map generation module and a failure sub-region delineation module, wherein,
the heat map generation module is used for generating a pose heat map according to the pose distribution information, wherein the heat value of any region in the pose heat map is positively correlated with the positioning pose density of that region;
the failure sub-region delineation module is used for acquiring a first target area whose heat value in the pose heat map is smaller than a preset heat threshold,
judging whether the first target area is a closed shape and whether its actual area is larger than a first area threshold, and if so, marking the first target area as a failure sub-region.
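The delineation logic above can be illustrated with a short sketch (not part of the original disclosure; all function names, parameters, and thresholds are illustrative). It accumulates localization poses into a discrete density grid, the discrete analogue of the pose heat map, and flags 4-connected groups of low-density cells whose cell count reaches an area threshold:

```python
from collections import deque

def failure_regions(poses, grid_w, grid_h, cell_size,
                    heat_threshold=1, min_cells=4):
    """Flag connected groups of low-density cells as failure sub-regions."""
    # Accumulate a pose-density "heat" grid over the map's passable area.
    heat = [[0] * grid_w for _ in range(grid_h)]
    for x, y in poses:
        i, j = int(y // cell_size), int(x // cell_size)
        if 0 <= i < grid_h and 0 <= j < grid_w:
            heat[i][j] += 1

    seen = [[False] * grid_w for _ in range(grid_h)]
    regions = []
    for i in range(grid_h):
        for j in range(grid_w):
            if seen[i][j] or heat[i][j] >= heat_threshold:
                continue
            # BFS over the 4-connected component of low-heat cells.
            comp, q = [], deque([(i, j)])
            seen[i][j] = True
            while q:
                ci, cj = q.popleft()
                comp.append((ci, cj))
                for ni, nj in ((ci+1, cj), (ci-1, cj), (ci, cj+1), (ci, cj-1)):
                    if (0 <= ni < grid_h and 0 <= nj < grid_w
                            and not seen[ni][nj]
                            and heat[ni][nj] < heat_threshold):
                        seen[ni][nj] = True
                        q.append((ni, nj))
            if len(comp) >= min_cells:  # actual area exceeds the threshold
                regions.append(comp)
    return regions
```

A closed-shape test could be added by checking that a component does not touch the grid boundary; that refinement is omitted here.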
In some of these embodiments, the scene change detection module includes a second preprocessing module and a second judging module, wherein:
the second preprocessing module is configured to obtain, from the request data set, first 3D feature points corresponding to first request image features and second 3D feature points corresponding to second request image features, where the first request image and the second request image were captured over the same field of view;
a first spatial grid structure and a second spatial grid structure are generated from the first and second 3D feature points respectively, and each grid cell is classified as occupied or empty according to the number of 3D feature points it contains;
the second judging module is configured to mark a grid cell as an abnormal cell when the cell with the same index is occupied in the first spatial grid structure but empty in the second spatial grid structure, and
to determine a second target area from adjacent abnormal cells and mark the second target area as a scene change area.
In some embodiments, the second preprocessing module forms a first 3D space and a second 3D space from the first and second 3D feature points, and divides each space into N grid cells to generate the first and second spatial grid structures.
In some of these embodiments, the second preprocessing module generates the first and second spatial grid structures as voxel representations by storing the coordinates of the first and second 3D feature points, respectively.
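The occupied/empty grid comparison can be sketched as follows (a minimal illustration under the assumption that both requests observed the same camera field; voxel size and names are illustrative). The 3D feature points of the earlier and the later request are voxelized, and voxels that lost all their points are reported as abnormal cells:

```python
from collections import Counter

def voxelize(points, voxel=0.5, min_pts=1):
    """Map 3D feature points to the set of occupied voxel indices."""
    counts = Counter((int(x // voxel), int(y // voxel), int(z // voxel))
                     for x, y, z in points)
    return {idx for idx, n in counts.items() if n >= min_pts}

def changed_voxels(old_points, new_points, voxel=0.5):
    """Voxels occupied in the earlier request but empty in the later one."""
    return voxelize(old_points, voxel) - voxelize(new_points, voxel)
```

Adjacent voxels in the returned set would then be merged into the second target area, e.g. by the connected-component pass used for failure sub-regions.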
In some of these embodiments, the positioning ambiguity detection module includes: the device comprises a third preprocessing module and a third judging module, wherein:
the third preprocessing module is used for acquiring, from the request data set, all positioning poses corresponding to the same device ID;
the third judging module is used for judging whether the same positioning pose successively corresponds to different real scene positions within adjacent preset time periods; if so, the accumulated number of times the same positioning pose successively corresponds to different real scene positions is recorded,
and when the accumulated number exceeds a preset threshold, the third target area of the point cloud map corresponding to that positioning pose is acquired and marked as a positioning ambiguity sub-region.
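The counting logic of the third judging module can be sketched as follows (illustrative only; positioning poses are assumed to be quantized so that equality comparison is meaningful, and records are assumed to arrive in time order per device):

```python
from collections import defaultdict

def ambiguous_poses(records, count_threshold=3):
    """records: (device_id, quantized_pose, real_scene_position), time-ordered.

    Counts, per pose, how often consecutive records from the same device
    map the same localization pose to different real-scene positions, and
    returns the poses whose count reaches the threshold."""
    last = {}                 # device_id -> (pose, real_position) last seen
    hits = defaultdict(int)
    for dev, pose, real_pos in records:
        prev = last.get(dev)
        if prev and prev[0] == pose and prev[1] != real_pos:
            hits[pose] += 1   # same pose, different real place: ambiguity hit
        last[dev] = (pose, real_pos)
    return {pose for pose, n in hits.items() if n >= count_threshold}
```

Each returned pose would then be mapped back to its third target area in the point cloud map.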
In some of these embodiments, the system further comprises a localization anomaly analysis module, wherein:
the positioning abnormality analysis module is used for assigning different abnormality grade information to the local sub-regions with abnormal positioning according to preset abnormality thresholds;
and for generating different kinds of coping advice information according to the area of the abnormal local sub-region and its abnormality grade, wherein the coping advice information includes: on-site inspection by personnel, local map update, and global map reconstruction.
In some of these embodiments, the system further comprises a locate anomaly verification module, wherein:
the positioning abnormality checking module is used for cross-checking an abnormal positioning sub-region against the positioning abnormality detection information accumulated over different time periods, and
for checking the positioning abnormality results output by the positioning abnormality detection module against manual verification information, where the manual verification information is generated from the real scene information corresponding to the local sub-region.
In some of these embodiments, a local sub-region is generated by dividing the passable area of the point cloud map according to preset rules, and/or is generated by the positioning abnormality detection module during detection, wherein
the preset rules include one or a combination of: manually customized rules, specified-area rules, and semantic information rules.
In a second aspect, an embodiment of the present application provides a method for evaluating a positioning capability of a point cloud, where the method includes:
recording, by a data recording module, a request data set generated by past positioning, wherein the request data set comprises: request image features extracted from a request image sent by a terminal, and associated information of the request image;
searching local subareas with abnormal positioning in the point cloud map according to the request data set by a positioning abnormality detection module, and outputting positioning abnormality information corresponding to each local subarea;
And outputting the positioning evaluation result of the point cloud map according to the positioning abnormal information through an evaluation result output module.
In a third aspect, embodiments of the present application provide a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to the second aspect described above when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which when executed by a processor implements a method as described in the second aspect above.
Compared with the related art, the point cloud map positioning capability evaluation system provided by the embodiments of the application records, through a data recording module, a request data set generated by past positioning, wherein the request data set comprises: request image features extracted from a request image sent by a terminal, and associated information of the request image; searches the point cloud map, through a positioning abnormality detection module, for local sub-regions with abnormal positioning according to the request data set, and outputs positioning abnormality information for each local sub-region; and outputs, through an evaluation result output module, a positioning evaluation result of the point cloud map according to the positioning abnormality information. The system automatically inspects local areas of the point cloud map, narrows abnormal positioning down to the smallest possible region, and identifies the specific cause of the abnormality, so that operators can update the positioning map at minimum cost and rectify abnormal areas efficiently, reducing interference with the running visual positioning service.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a schematic view of an application environment of a point cloud positioning capability assessment system according to an embodiment of the present application;
FIG. 2 is a block diagram of a point cloud positioning capability assessment system according to an embodiment of the present application;
FIG. 3 is a schematic view of a pose distribution according to an embodiment of the present application;
FIG. 4 is another pose distribution schematic diagram according to an embodiment of the present application;
FIG. 5 is a schematic diagram of failure zone detection based on pose orientation according to embodiments of the present application;
FIG. 6 is a schematic diagram of a scene point cloud map and spatial grid structure according to an embodiment of the present application;
FIG. 7 is a flow chart of a method of visual map location capability assessment according to an embodiment of the present application;
fig. 8 is a schematic diagram of an internal structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described and illustrated below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden on the person of ordinary skill in the art based on the embodiments provided herein, are intended to be within the scope of the present application.
It is apparent that the drawings in the following description are only some examples or embodiments of the present application, and those of ordinary skill in the art may apply the present application to other similar situations according to these drawings without inventive effort. Moreover, it should be appreciated that although such a development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the embodiments described herein can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar terms herein do not denote a limitation of quantity, but rather denote the singular or plural. The terms "comprising," "including," "having," and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein refers to two or more. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., "a and/or B" may mean: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship. The terms "first," "second," "third," and the like, as used herein, are merely distinguishing between similar objects and not representing a particular ordering of objects.
The point cloud positioning capability detection system provided by the application may be applied in the environment shown in fig. 1. Fig. 1 is a schematic diagram of the application environment of a point cloud positioning capability evaluation system according to an embodiment of the application. As shown in fig. 1, a user operates a terminal device 10 for an AR experience in an online scene. The terminal device 10 obtains its current positioning pose by sending a positioning request to a visual positioning server 11, then retrieves and displays a specific AR special effect according to that pose. After the visual positioning service 11 is deployed, a positioning abnormality detection module 12 can comprehensively check its positioning capability, automatically find the finely divided map areas where positioning fails, is erroneous, or is ambiguous, and output corresponding prompt information instructing operators to rectify accordingly. Note that the terminal device 10 may be, but is not limited to, a smartphone, a tablet computer, or AR glasses, and the visual positioning server 11 may be deployed in a public network machine room or on a cloud computing platform.
Fig. 2 is a block diagram of a point cloud positioning capability assessment system according to an embodiment of the present application, and as shown in fig. 2, the system includes a data recording module 20, a positioning anomaly detection module 21, and an assessment result output module 22, wherein,
A data recording module 20 for recording a request data set generated by a previous positioning, wherein the request data set comprises: request image characteristics extracted from a request image sent by a terminal and associated information of the request image;
The visual positioning process is roughly as follows: obtain the image in the user's positioning request and search the positioning map for image frames similar to the request image features; then establish 2D-3D observations between the 2D feature points of the request image features and the 3D feature points of the similar frames; finally, obtain the real-time pose of the terminal by solving these 2D-3D observations;
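The retrieval step of this exemplary flow can be sketched as follows (a simplified illustration, not the patent's algorithm: global image descriptors are reduced to plain vectors and similarity to a cosine score; all names are illustrative). The subsequent step of solving the 2D-3D observations is typically delegated to a perspective-n-point solver and is omitted here:

```python
import math

def most_similar_frame(query_desc, map_frames):
    """Retrieve the map image frame whose global descriptor is closest
    (by cosine similarity) to the request image's descriptor."""
    def cos(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = (math.sqrt(sum(x * x for x in a))
               * math.sqrt(sum(y * y for y in b)))
        return num / den if den else 0.0
    return max(map_frames, key=lambda f: cos(query_desc, f["descriptor"]))
```

The matched frame's 3D feature points then supply the 2D-3D observations for the pose solve.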
The above description of the visual positioning procedure is given only to facilitate understanding of the request image's associated information and the subsequent positioning abnormality detection procedure. The core inventive point of the application is comprehensive and refined positioning abnormality detection on a visual map; which visual positioning algorithm is adopted, and how its specific logic works (whether or not it strictly corresponds to the exemplary flow above), are not specifically limited in the embodiments of the application;
preferably, the data recording module is deployed together with the visual positioning module, so that the flow consumption and data loss in the data transmission process can be reduced; of course, if the bandwidth and the operation resources of the visual positioning service side are limited, the data recording module and the visual positioning service can be deployed independently, and the data recording module and the visual positioning service can transmit data through a remote network channel.
In addition, considering that recording complete positioning request data would be voluminous, this embodiment records only requests whose positioning succeeded. The recorded data specifically includes: the image features extracted from the request image sent by the user terminal, and the associated information of that request image, which may be, but is not limited to: the device ID of the terminal that sent the request image, the sequence number (image ID) of the similar image frame in the positioning map database that matches the request image features, the IDs of the 3D feature points of that similar frame, and the localization pose. All of this is referred to in this embodiment as the request data set. Note that "positioning success" merely means that the server returned a positioning result for the request; it does not mean the positioning result is accurate.
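One possible shape for an entry of the request data set, matching the fields listed above (all field names and types are illustrative, not prescribed by the application):

```python
from dataclasses import dataclass

@dataclass
class LocalizationRecord:
    """One entry of the request data set (field names are illustrative)."""
    device_id: str          # terminal that sent the request image
    image_id: int           # matched similar frame in the point cloud map
    feature_point_ids: list # IDs of the 3D feature points of that frame
    pose: tuple             # solved localization pose, e.g. (x, y, z, yaw)
    features: bytes = b""   # extracted request-image features
```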
The positioning abnormality detection module 21 is configured to search the point cloud map for local sub-regions with abnormal positioning according to the request data set, and to output positioning abnormality information corresponding to each local sub-region.
The positioning abnormality detection module is deployed independently of the visual positioning service as a standalone processing unit, so that detection does not interfere with the visual positioning service, and it periodically receives the request data set sent by the visual positioning server;
to ensure accurate detection results, the positioning abnormality detection module runs a positioning map (database) identical to the one used by the visual positioning service, and whenever the positioning map in the visual positioning service is updated, the positioning abnormality detection module's copy is updated accordingly.
Furthermore, through the configured detection rules, the positioning abnormality detection module can find sub-regions of the map where positioning fails, where the scene has changed, or where positioning ambiguity may arise. In this embodiment, a local sub-region may be delimited in advance according to preset rules, or generated on the fly by the positioning abnormality detection module during detection without prior delimitation. The way a region is computed in the map from point coordinates may include, but is not limited to: computing a bounding box, circumscribed circle, or the like over the N sample points.
Furthermore, different classification results can be assigned to different abnormal positioning sub-regions; for example, by setting multi-level thresholds, any sub-region can be classified as positioning normally, positioning with difficulty, unable to position, positioning ambiguous, and so on. Meanwhile, map sub-regions of different grades can be shown in different colors on the map of a service monitoring interface; for example, sub-regions that position normally, position with difficulty, and cannot position may be colored green, yellow, and red, respectively. These technical means are only a specific example of the scheme of the present application; practical applications are not limited thereto and can be designed flexibly by those skilled in the art.
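The multi-level threshold classification and color scheme described above could look like this (threshold values, grade labels, and the use of a success rate as the graded quantity are all illustrative assumptions):

```python
def classify_subregion(success_rate, thresholds=(0.9, 0.5)):
    """Map a sub-region's positioning success rate to an abnormality
    grade and a monitoring-interface color (values are illustrative)."""
    hi, mid = thresholds
    if success_rate >= hi:
        return "normal", "green"
    if success_rate >= mid:
        return "difficult to position", "yellow"
    return "unable to position", "red"
```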
And the evaluation result output module 22 is used for outputting the positioning evaluation result of the point cloud map according to the positioning abnormality information.
In the prior art, for lack of a positioning abnormality detection and analysis scheme for visual positioning maps in AR scenes, when a user notices a positioning abnormality during an AR experience and reports it to the service provider, operators cannot immediately determine the specific extent, type, and cause of the abnormality, so the entire map has to be inspected and updated. The system of the present application can efficiently and automatically check a deployed visual positioning service for abnormality types and causes and, through comprehensive and fine-grained detection and analysis, locate abnormal positioning areas of the smallest possible extent, filling a technical gap in the field. Operators can then optimize the positioning map at minimum cost, improve the on-site scene efficiently, and reduce interference with the operation of the visual positioning service.
In some embodiments, the positioning abnormality detection module can comprehensively and finely detect the positioning capability of the positioning map in multiple dimensions. Specifically, the positioning abnormality detection module includes a failure detection module, a scene change detection module, and a positioning ambiguity detection module, wherein:
the failure detection module is used for detecting failure sub-regions according to the pose distribution information of the request images in the point cloud map, and includes a first preprocessing module and a first judging module. Specifically:
acquiring positioning poses corresponding to the request images from the request data set through a first preprocessing module, and generating pose distribution information in the point cloud map according to all the positioning poses;
further, a local sub-area with invalid positioning is obtained through a first judging module according to pose distribution; specifically, there are various methods for acquiring a local sub-area with failure based on pose distribution, preferably, a thermodynamic diagram may be generated on a positioning map according to pose distribution, and fig. 3 is a schematic view of pose distribution according to an embodiment of the present application, and as shown in fig. 3, the denser the pose distribution at a certain position, the higher the thermodynamic value in the corresponding pose distribution thermodynamic diagram;
optionally, in the pose thermodynamic diagram, a first target area with a thermodynamic value smaller than a preset threshold is obtained, whether the first target area is in a closed shape or not is judged, whether the actual area of the first target area is larger than a preset first area threshold or not is judged, and if the actual area of the first target area is larger than the preset first area threshold, the first target area is marked as an undetectable failure subarea.
A pose distribution diagram as shown in Fig. 4 may also be generated from the pose distribution information. Fig. 4 is another pose distribution diagram according to an embodiment of the present application; as shown in Fig. 4, the darker the color at a position, the denser the pose distribution there. Optionally, it may be judged whether the pose density of any position (region) is smaller than a preset threshold to decide whether that position is a failure sub-region in which positioning is not possible.
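The heat-map check above can be sketched in a few lines. This is a minimal illustrative sketch, not the patent's implementation: the function names, cell size, heat threshold, and the use of 2D (x, y) poses are all assumptions, and the closed-shape/area test on connected low-heat cells is only noted in a comment.

```python
# Illustrative sketch of the first judging module (all names, cell sizes
# and thresholds are assumptions, not taken from the patent).
from collections import Counter

def pose_heat_map(poses, cell_size=1.0):
    """Rasterize 2D positioning poses into a grid; the count per cell is
    the 'heat' value -- denser poses give higher heat."""
    heat = Counter()
    for x, y in poses:
        heat[(int(x // cell_size), int(y // cell_size))] += 1
    return heat

def failure_cells(heat, mapped_cells, heat_threshold=1):
    """Cells of the mapped area whose heat falls below the preset threshold
    are candidate failure cells; in the patent, a connected group of such
    cells forming a closed shape of sufficient area would then be marked
    as a failure sub-region."""
    return {c for c in mapped_cells if heat.get(c, 0) < heat_threshold}

# Poses cluster in the left half of a 4 x 2 mapped area; the right half
# receives no positioning requests.
poses = [(0.2, 0.3), (0.5, 0.1), (1.1, 0.4), (1.6, 0.9), (0.8, 1.2)]
mapped_cells = {(cx, cy) for cx in range(4) for cy in range(2)}
heat = pose_heat_map(poses)
failed = failure_cells(heat, mapped_cells)  # right-half cells come out failed
```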
Further, in some scenes only a partial area changes; for example, a small area of the mapped block changes in the offline scene because of on-site construction. For this situation, the present embodiment provides a solution that splits the pose into position and orientation and detects the failure area according to the pose orientation, comprising the following steps:
step 1, define a number of square grids over the scene space, and divide the orientation range of each grid into 4/6/8/16 equal parts as required (other numbers of divisions may be chosen depending on the range size); after the division, each sub-region corresponds to one orientation and one triangular mesh;
step 2, assign every positioning pose to the corresponding square grid according to its position coordinates, then to the corresponding triangular mesh according to its orientation, and increase the count of that triangular mesh;
step 3, if a certain triangular mesh has no or very few positioning poses, it may be marked as a suspected positioning-failure triangular mesh;
and step 4, count the spatial distribution of the suspected positioning-failure triangular meshes, or compute where the orientation rays (or orientation sectors) of these meshes intersect densely, so that the positioning failure area can be detected. Specific ways to compute the dense intersection include, but are not limited to: computing the distance from any point in space to each ray and deciding an intersection by the number of rays whose distance is smaller than a given threshold, or determining the intersection by a clustering algorithm.
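Steps 1 to 3 can be sketched as follows. This is a hypothetical 2D illustration (the patent notes the extension to 3D); the cell size, sector count, and minimum-count threshold are assumed values, and step 4's ray-intersection stage is only described in the docstring.

```python
import math

def sector_counts(poses, cell_size=1.0, n_sectors=4):
    """Steps 1-2: assign each pose (x, y, heading in radians) to a square
    grid cell by position and to one of n_sectors equal orientation
    sectors, counting poses per (cell, sector)."""
    counts = {}
    for x, y, theta in poses:
        cell = (int(x // cell_size), int(y // cell_size))
        sector = int((theta % (2 * math.pi)) / (2 * math.pi / n_sectors))
        counts[(cell, sector)] = counts.get((cell, sector), 0) + 1
    return counts

def suspect_sectors(counts, cells, n_sectors=4, min_count=1):
    """Step 3: (cell, sector) pairs with no or very few poses are suspected
    positioning-failure sectors; step 4 would then intersect their
    orientation rays to localize the failure area."""
    return {(c, s) for c in cells for s in range(n_sectors)
            if counts.get((c, s), 0) < min_count}

# All requests in cell (0, 0) face roughly the same direction (sector 0),
# so the other three orientation sectors of that cell come out suspect.
counts = sector_counts([(0.5, 0.5, 0.1), (0.4, 0.6, 0.2)])
suspect = suspect_sectors(counts, {(0, 0)})
```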
Fig. 5 is a schematic diagram of failure area detection according to pose orientation according to an embodiment of the present application, where, as shown in fig. 5, gray rays are spatially oriented rays, and elliptical areas are suspected localization failure areas.
It should be noted that, in this embodiment, the square grid and the two-dimensional plane ray are only specific examples; in practical applications they may be extended to a three-dimensional grid and three-dimensional space rays.
Consider also that a user may move an object in a scene, causing positioning to fail for some viewing angles near the object: positioning fails only for angles in which the lens faces the moved object, while other angles are positioned normally.
For this situation, detection may be performed by the scene change detection module. Specifically, the scene change detection module includes a second preprocessing module and a second judging module, wherein:
the second preprocessing module acquires, in the request data set, first 3D feature points corresponding to first request image features and second 3D feature points corresponding to second request image features, where the first and second request image features correspond to the same image shooting field of view but possibly different shooting times; for example, the first request images are shot at positions A and B when the map is built, and the second request images are shot at positions A and B half a year after the map goes online;
a first spatial grid structure and a second spatial grid structure are generated based on the first and second 3D feature points respectively, where each grid of the two spatial grid structures is classified as a containing (occupied) grid or a non-containing (empty) grid according to the number of 3D feature points in it;
in this embodiment, the spatial grid structure may be generated in two ways:
1. form a first 3D space and a second 3D space from the first and second 3D feature point coordinates respectively, and divide each into N grids to obtain the first and second spatial grid structures; the grids may take various forms, e.g. two-dimensional square grids or three-dimensional cube grids;
2. store the first 3D feature point coordinates and the second 3D feature point coordinates via voxel representations (e.g. hashed voxels), and generate the first and second spatial grid structures from them.
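Generation approach 2 can be sketched with a plain dictionary standing in for a hashed voxel store. This is an illustrative sketch only; the voxel size and the one-point occupancy threshold are assumed values.

```python
def voxelize(points_3d, voxel_size=1.0, min_points=1):
    """Hash each 3D feature point to an integer voxel key; voxels holding
    at least min_points points are 'containing' (occupied) grids, all
    other voxels are 'non-containing' (empty) grids."""
    counts = {}
    for x, y, z in points_3d:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        counts[key] = counts.get(key, 0) + 1
    return {k for k, n in counts.items() if n >= min_points}

# Two nearby feature points fall into one voxel, a third into another.
occupied = voxelize([(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (3.0, 0.0, 0.0)])
```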
Fig. 6 is a schematic diagram of a scene point cloud map and spatial grid structure according to an embodiment of the present application.
Further, after the grid structures corresponding to the two sets of 3D points are obtained, the second judging module judges whether grids at the same position differ significantly between the two spatial grid structures. One optional criterion: for any grid of the same sequence, if it is an occupied grid in the first spatial grid structure but an empty grid in the second, the area where it lies differs significantly, and the area corresponding to this grid and its adjacent abnormal grids is marked as a scene change area.
A specific example is as follows: it is assumed that the first requested image was taken one year ago and the second requested image was taken at the current time.
Although the position and field of view at the time of shooting are identical when the two images are compared, the actual objects in the field of view may differ; for example, the first request image contains three buildings A, B, and C, while at the current time, one year later, only buildings A and C remain in the second request image because building B has been demolished.
In this situation, by the above method, two grid structures are built from the 3D feature points corresponding to the two sets of image features, and the abnormal positioning area is found and marked by comparing grids of the same sequence in the two structures.
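The A/B/C example can be reproduced with a set difference over occupied voxels -- a sketch under the assumptions that each building contributes at least one 3D feature point and that "large difference" means simply occupied-then-empty:

```python
def scene_change_voxels(first_points, second_points, voxel_size=1.0):
    """Voxels occupied in the first structure (map-building time) but empty
    in the second (current requests) are marked as scene change cells."""
    def vox(points):
        return {(int(x // voxel_size), int(y // voxel_size),
                 int(z // voxel_size)) for x, y, z in points}
    return vox(first_points) - vox(second_points)

# Buildings A, B and C a year ago; building B has since been demolished.
a, b, c = [(0.1, 0.1, 0.1)], [(2.1, 0.1, 0.1)], [(4.1, 0.1, 0.1)]
changed = scene_change_voxels(a + b + c, a + c)  # only B's voxel is flagged
```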
In this embodiment, the information stored in a spatial grid includes, but is not limited to: the number of 3D points it contains, the color information of those 3D points, and the image feature descriptors of those 3D points.
Considering that the point cloud of a visual positioning map is sparse and non-uniform, grid differences may also be judged by: the difference in the number of points in the grid, the difference of the color histograms in the grid, the difference of the 3D image feature descriptors in the grid, and so on; specifically, grid feature descriptor differences may be obtained by clustering, a deep learning network, or similar methods.
To further improve the detection accuracy of abnormal regions, optionally, all 3D points of the visual positioning map in the database form a third spatial grid structure; the differences between the third and second spatial grid structures are compared, and a confidence is defined for each spatial grid according to the difference: grids with small differences have high confidence, and grids with large differences have low confidence.
When comparing the differences between the first and second spatial grid structures, the grid confidence can be taken into account at the same time: grids with high confidence and large differences are marked as scene change areas, prompting operators to verify and correct them with priority, while grids with low confidence and large differences may be marked as uncertain regions.
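The confidence rule can be sketched as follows; treating "small difference" as plain occupancy agreement between the database structure and the second structure is a simplification of the difference measures listed above, and all names are illustrative.

```python
def classify_grids(first_occ, second_occ, db_occ, cells):
    """Label each cell using the first-vs-second difference plus a
    confidence derived from db-vs-second agreement: confident differences
    are scene changes, unconfident differences are uncertain regions."""
    labels = {}
    for cell in cells:
        differs = (cell in first_occ) != (cell in second_occ)
        confident = (cell in db_occ) == (cell in second_occ)
        if differs and confident:
            labels[cell] = "scene_change"
        elif differs:
            labels[cell] = "uncertain"
        else:
            labels[cell] = "normal"
    return labels

# Cell 1: differs, database agrees with current data  -> scene change.
# Cell 3: differs, database disagrees with current data -> uncertain.
labels = classify_grids({1, 2}, {2, 3}, {2}, {1, 2, 3})
```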
In addition, first spatial grid structures formed from the request image features of different date segments may be stored long-term as a first-spatial-grid-structure database. The first spatial grid structure of the latest date segment is compared with that of an earlier date segment in the history (for example, the 2023 spatial structure with the 2022 and 2021 structures); if a certain grid differs greatly, the area corresponding to that grid and its adjacent abnormal grids may be marked as a scene change area.
Considering that in some cases the same scene element may be moved to another location, e.g. a shop relocating or equipment being re-placed: if the image features of the scene element remain unchanged and its association with the surroundings is not considered, the corresponding map yields the same positioning pose before and after the move, although that pose corresponds to different real scene locations. In this case, the positioning ambiguity detection module provided by this embodiment performs targeted detection, specifically:
the positioning ambiguity detection module comprises a third preprocessing module and a third judging module, wherein:
the third preprocessing module is used for acquiring all positioning poses corresponding to the same equipment ID in the request data set;
the third judging module is used to judge whether the same positioning pose corresponds in sequence to different real scene locations within adjacent preset time periods; if so, it records the accumulated number of times this happens,
and when the accumulated number exceeds a preset threshold, the third target area corresponding to this positioning pose in the point cloud map is acquired and marked as a positioning ambiguity sub-region.
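A minimal sketch of the third judging module, assuming per-device records of (timestamp, positioning pose, real scene location) and a count threshold chosen purely for illustration:

```python
def ambiguous_poses(records, count_threshold=2):
    """records: (timestamp, pose_id, real_location) tuples for one device
    ID. Each time a pose reappears with a different real location than on
    its previous occurrence, its counter is increased; poses whose counter
    exceeds the threshold are positioning-ambiguity candidates."""
    last_location, counts = {}, {}
    for _, pose, location in sorted(records):
        if pose in last_location and last_location[pose] != location:
            counts[pose] = counts.get(pose, 0) + 1
        last_location[pose] = location
    return {p for p, n in counts.items() if n > count_threshold}

# Pose "P" keeps flipping between real locations A and B (e.g. a relocated
# shop front); pose "Q" stays put and is never flagged.
records = [(1, "P", "A"), (2, "Q", "X"), (3, "P", "B"),
           (4, "Q", "X"), (5, "P", "A"), (6, "P", "B")]
flagged = ambiguous_poses(records)
```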
In some of these embodiments, the assessment system further comprises a positioning anomaly analysis module, wherein:
the positioning anomaly analysis module is used to set different anomaly level information for the local sub-regions with abnormal positioning according to preset anomaly thresholds;
and to generate different kinds of response suggestion information to guide operator actions according to the area of each abnormal local sub-region and its anomaly level, where the response suggestion information includes: on-site personnel inspection, local map update, global map reconstruction, and the like.
Specifically, each region can be assigned a different failure level according to different thresholds, and positioning ambiguity sub-regions can be highlighted;
when the abnormal positioning area is small, the response suggestion information can instruct the operator to inspect on site and restore moved objects to their original placement;
when the area of the failure region is moderate, the positioning map can be locally updated for that region and used overlaid on the original positioning map;
when the area of the failure region is large, the map is rebuilt and then replaces the original positioning map;
for ambiguous positioning areas, personnel can be instructed to check on site, confirm whether the map needs to be rebuilt, and remove from the original map the portion corresponding to the original placement position;
and for areas that are difficult to position, reminder information is given to the operator.
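The grading rules above amount to a simple area-to-response mapping; the area thresholds here are invented for illustration and would be preset by the operator in practice.

```python
def response_advice(area_m2, small=50.0, large=500.0):
    """Map the area of an abnormal sub-region to a suggested response.
    The thresholds are illustrative, not taken from the patent."""
    if area_m2 <= small:
        return "on-site inspection"      # restore moved objects in place
    if area_m2 <= large:
        return "local map update"        # patch and overlay the original map
    return "global map reconstruction"   # rebuild and replace the map
```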
In some embodiments, the system further includes a positioning anomaly checking module, which is configured to perform anomaly checking on the abnormal positioning sub-regions according to positioning anomaly detection information accumulated over different time periods.
By comparing failure-area reports over different time spans (e.g. one week, one month, one year, three years) and analyzing them together, one can observe whether the same area shows a reasonable or an abnormal trend of change. Specifically:
if the same area cannot be positioned throughout the reports of a time span, or gradually changes over time from positionable to unpositionable, it is automatically judged to be an abnormal positioning area. If, however, the same area is reported as unpositionable throughout the annual analysis report but positions normally in the analysis of the most recent month, the determination result is likely problematic.
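This trend check can be sketched on a per-region series of locatable/unlocatable flags; the three outcomes mirror the cases described above (always-failed or monotonic decay means abnormal, an inconsistent series means the determination itself should be rechecked). The function name and labels are illustrative.

```python
def check_trend(reports):
    """reports: per-period booleans for one region, oldest first
    (True = region was positionable in that period)."""
    if not any(reports):
        return "abnormal"   # never positionable across the whole window
    if reports[0] and not reports[-1] and reports == sorted(reports, reverse=True):
        return "abnormal"   # gradual decay from positionable to unpositionable
    if not all(reports):
        return "recheck"    # mixed signal, e.g. the yearly report says failed
                            # but the latest month positions normally
    return "normal"
```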
In addition, before actual improvement (such as overall map reconstruction), the determination results obtained in the above steps can be further checked manually to confirm whether the areas have actually failed, in the following ways:
1. personnel go to the areas, judge the changes between the original and current scene on site, and adjust the scope of the failure area;
2. photograph with the monitoring cameras of the corresponding areas, compare based on the image information, and adjust the scope of the failure area;
3. send a signal to the online positioning service, capture several positioning images, and compare them manually.
The embodiment of the application also provides a visual map positioning capability assessment method, and fig. 7 is a flowchart of a visual map positioning capability assessment method according to an embodiment of the application, as shown in fig. 7, where the flowchart includes the following steps:
S701, recording a request data set generated by positioning through a data recording module, wherein the request data set comprises: request image characteristics extracted from a request image sent by a terminal and associated information of the request image;
S702, searching, by a positioning anomaly detection module, local sub-regions with abnormal positioning in the point cloud map according to the request data set, and outputting positioning anomaly information corresponding to each local sub-region;
S703, outputting, by an evaluation result output module, the positioning evaluation result of the point cloud map according to the positioning anomaly information.
Through the above steps S701 to S703, in contrast to the prior art, which lacks a scheme for detecting and analyzing positioning anomalies of visual positioning maps in AR scenes (so that when a user finds a positioning anomaly during an AR experience and feeds it back to the service provider, the operator cannot immediately know the range, type, and specific cause of the anomaly and has to inspect and update the entire map), anomaly types and causes in a deployed visual positioning service can be checked and localized efficiently and automatically, and comprehensive, fine-grained detection and analysis finds the smallest possible abnormal positioning area, so that operators can optimize the positioning map at the lowest cost, improve the scene efficiently, and reduce interference with the operation of the visual positioning service.
In one embodiment, an electronic device is provided, which may be a server; Fig. 8 is a schematic diagram of its internal structure according to an embodiment of the present application. The electronic device includes a processor, a network interface, an internal memory, and a non-volatile memory connected by an internal bus, where the non-volatile memory stores an operating system, a computer program, and a database. The processor provides computing and control capability; the network interface communicates with external terminals over a network; the internal memory provides an environment for running the operating system and the computer program; the computer program, when executed by the processor, implements a positioning capability assessment method for a point cloud map; and the database stores data.
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the electronic device to which the present application is applied, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include Read Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing embodiments represent only a few implementations of the present application; their description is relatively specific and detailed, but is not to be construed as limiting the scope of the invention. It should be noted that various modifications and improvements could be made by those skilled in the art without departing from the spirit of the present application, and such modifications fall within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (15)

1. A point cloud map positioning capability assessment system, the system comprising: a data recording module, a positioning anomaly detection module, and an evaluation result output module, wherein:
the data recording module is used for recording a request data set generated by positioning in the past, wherein the request data set comprises: request image characteristics extracted from a request image sent by a terminal and associated information of the request image;
The positioning abnormality detection module is used for searching local subareas with abnormal positioning in the point cloud map according to the request data set and outputting positioning abnormality information corresponding to each local subarea;
and the evaluation result output module is used for outputting the positioning evaluation result of the point cloud map according to the positioning abnormal information.
2. The system of claim 1, wherein the association information of the requested image comprises: device ID, image ID, 3D feature point ID, and localization pose, wherein,
the device ID is a serial number of the terminal device that transmitted the requested image,
the image ID is a sequence number of a similar image frame matched with the requested image characteristic in the point cloud map;
the 3D feature point ID is a sequence number corresponding to a 3D feature point of the similar image frame.
3. The system of claim 2, wherein the positioning anomaly detection module comprises a positioning failure detection module, a scene change detection module, and a positioning ambiguity detection module, wherein:
the positioning failure detection module is used for detecting a failure subarea according to pose distribution information of the requested image features in the point cloud map;
The scene change detection module is used for detecting scene change subareas according to the 3D characteristic points corresponding to the request image characteristics in the point cloud map and the 3D characteristic points of the request image characteristics;
the positioning ambiguity detection module is used for detecting the positioning ambiguity sub-region according to the real scene positions corresponding to the same positioning result in different time sequences.
4. The system of claim 3, wherein the location failure detection module comprises a first preprocessing module and a first determination module, wherein
The first preprocessing module is used for acquiring positioning poses corresponding to the features of the requested image from the request data set and generating pose distribution information in the point cloud map according to all the positioning poses;
the first judging module is used for searching the invalid subarea in the point cloud map according to the pose distribution information.
5. The system of claim 4, wherein the first judging module comprises a heat map generating module and a failure sub-region delineation module, wherein,
the heat map generating module is configured to generate a pose heat map according to the pose distribution information, wherein the heat value of any region in the pose heat map is positively correlated with the positioning pose density of that region;
the failure sub-region delineation module is configured to acquire, in the pose heat map, a first target area whose heat value is smaller than a preset heat threshold,
judge whether the first target area has a closed shape and whether its actual area is larger than a first area threshold, and if so, mark the first target area as a failure sub-region.
6. The system of claim 3, wherein the scene change detection module comprises a second preprocessing module and a second judging module, wherein:
the second preprocessing module is configured to obtain, in the request data set, a first 3D feature point corresponding to a first request image feature and a second 3D feature point corresponding to a second request image feature, where the image capturing fields corresponding to the first request image feature and the second request image feature are the same,
generating a first space grid structure and a second space grid structure based on the first 3D characteristic points and the second 3D characteristic points respectively, wherein each grid is divided into a containing grid or a non-containing grid according to the number of the 3D characteristic points in the grid;
the second judging module is configured to mark a grid as an abnormal grid when the grid of the same sequence is an occupied (containing) grid in the first spatial grid structure but an empty (non-containing) grid in the second spatial grid structure, and
And determining a second target area according to the adjacent abnormal grids, and marking the second target area as a scene change area.
7. The system according to claim 6, wherein:
the second preprocessing module forms a first 3D space and a second 3D space from the first 3D feature points and the second 3D feature points respectively, and divides each into N grids to generate the first spatial grid structure and the second spatial grid structure.
8. The system according to claim 6, wherein:
the second preprocessing module stores the first 3D feature point coordinates and the second 3D feature point coordinates respectively via voxel representation, and generates the first spatial grid structure and the second spatial grid structure respectively.
9. The system of claim 3, wherein the positioning ambiguity detection module comprises a third preprocessing module and a third judging module, wherein:
the third preprocessing module is used for acquiring all positioning poses corresponding to the same equipment ID in the request data set;
the third judging module is used for judging whether the same positioning pose corresponds to different real scene positions in sequence in the adjacent preset time period, if so, recording the accumulated times of the same positioning pose corresponding to different real scene positions in sequence,
and under the condition that the accumulated number of times is larger than a preset threshold, acquiring a third target area corresponding to the positioning pose in the point cloud map, and marking the third target area as a positioning ambiguity sub-region.
10. The system of claim 1, further comprising a localization anomaly analysis module, wherein:
the positioning abnormality analysis module is used for setting different abnormality grade information for the local subareas with the positioning abnormality according to a preset abnormality threshold;
and generating different kinds of response suggestion information according to the area of the local sub-region with abnormal positioning and the anomaly level information, wherein the response suggestion information comprises: on-site personnel inspection, local map update, and global map reconstruction.
11. The system of claim 1, further comprising a locate anomaly verification module, wherein:
the positioning abnormality checking module is used for performing abnormality checking on the abnormal positioning subarea according to the positioning abnormality detection information accumulated in different time periods and,
and checking, according to manual verification information, the positioning anomaly results detected and output by the positioning anomaly detection module, wherein the manual verification information is generated based on the real scene information corresponding to the local sub-region.
12. The system according to claim 1, wherein the local sub-area is generated by dividing passable areas of the point cloud map by preset rules and/or is generated during detection by the positioning anomaly detection module, wherein,
the preset rule comprises the following steps: one or more combinations of manual customization rules, specified area rules, and semantic information rules.
13. A method for assessing the positioning capability of a point cloud map, the method comprising:
recording, by a data recording module, a request data set generated by positioning in the past, wherein the request data set comprises: request image characteristics extracted from a request image sent by a terminal and associated information of the request image;
searching local subareas with abnormal positioning in the point cloud map according to the request data set by a positioning abnormality detection module, and outputting positioning abnormality information corresponding to each local subarea;
and outputting the positioning evaluation result of the point cloud map according to the positioning abnormal information through an evaluation result output module.
14. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method as claimed in claim 13 when executing the computer program.
15. A computer readable storage medium, on which a computer program is stored, which program, when executed by a processor, implements the method of claim 13.
CN202310266437.9A 2023-03-13 2023-03-13 Point cloud map positioning capability evaluation system, method, electronic device and storage medium Pending CN116309838A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310266437.9A CN116309838A (en) 2023-03-13 2023-03-13 Point cloud map positioning capability evaluation system, method, electronic device and storage medium


Publications (1)

Publication Number Publication Date
CN116309838A true CN116309838A (en) 2023-06-23

Family

ID=86833904

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310266437.9A Pending CN116309838A (en) 2023-03-13 2023-03-13 Point cloud map positioning capability evaluation system, method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN116309838A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination