CN113052839A - Map detection method and device - Google Patents

Map detection method and device

Info

Publication number
CN113052839A
Authority
CN
China
Prior art keywords
map
equipment
image
area
location
Prior art date
Legal status
Pending
Application number
CN202110466751.2A
Other languages
Chinese (zh)
Inventor
闫丹凤
谢非
张淼
王子贤
雷思悦
赵岳
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Individual
Priority date
Application filed by Individual
Priority to CN202110466751.2A
Publication of CN113052839A

Classifications

    • G06T7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06T7/70: Image analysis; determining position or orientation of objects or cameras
    • G06T7/90: Image analysis; determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention provide a map detection method and a map detection apparatus, relating to the technical field of data processing. The method comprises: acquiring a first image captured by an unmanned device while it travels; identifying, in the first image, a first area in which equipment is located; obtaining pixel points that characterize the first area as first feature points; determining, from the obtained pixel position of each first feature point, its corresponding actual position in the real environment in which the unmanned device travels; determining a first position of the equipment from the actual positions corresponding to the first feature points; and comparing the first position with a second position of the equipment recorded in an equipment map to obtain a map detection result. Applying the scheme provided by the embodiments of the invention can improve map detection efficiency.

Description

Map detection method and device
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a map detection method and apparatus.
Background
In the prior art, a map can be used to record the positions of devices, so that functions such as device positioning and navigation can be realized from the map. For example, a map may record the positions of devices in a room, and an unmanned device may travel indoors based on the device positions recorded in the map. However, if a device moves, or a new device appears in the map area represented by the map, the device positions recorded in the map may no longer be accurate. To determine whether the device positions recorded in a map are accurate, the map needs to be detected.
In the prior art, the current position of each device in the map area must be confirmed manually and compared with the device position recorded in the map in order to detect the map. When there are many devices in the map area, detecting the map manually is inefficient.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a map detection method and apparatus, so as to improve the efficiency of map detection. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a map detection method, where the method includes:
acquiring a first image acquired by unmanned equipment in a driving process;
identifying a first area where equipment is located in the first image;
obtaining pixel points representing the first region characteristics as first characteristic points;
determining the corresponding actual position of each first characteristic point in the actual environment of the unmanned equipment in driving according to the obtained pixel point position of each first characteristic point;
determining a first position of the equipment according to the actual position corresponding to each first feature point;
and comparing the first position with a second position of the equipment recorded in the equipment map to obtain a map detection result.
In an embodiment of the present invention, the comparing the first location with a second location of the device recorded in a device map to obtain a map detection result includes:
calculating a distance between the first location and a second location of the device recorded in a device map;
and generating a map detection result indicating that the position of the device is wrong when the minimum distance in the calculated distances is greater than or equal to a first preset distance.
In an embodiment of the present invention, the obtaining a pixel point representing the first area characteristic as a first characteristic point includes:
and obtaining pixel points representing the first area characteristics and having depth values belonging to a preset depth interval as first characteristic points.
In an embodiment of the present invention, the obtaining a pixel point representing the first area characteristic as a first characteristic point includes:
and obtaining pixel points representing the first region characteristics, of which the corner response values are greater than a preset response value, as first characteristic points.
In one embodiment of the invention, the second location of the device recorded in the device map is determined by:
acquiring a second image acquired by the unmanned equipment in the driving process;
identifying a second area where the equipment is located in the second image;
obtaining pixel points representing the second region characteristics as second characteristic points;
determining the corresponding actual position of each second characteristic point in the actual driving environment of the unmanned equipment according to the obtained pixel point position of each second characteristic point;
and determining the second position of the equipment according to the actual positions corresponding to the second characteristic points.
In an embodiment of the present invention, the determining the second position of the device according to the actual position corresponding to each second feature point includes:
determining a third position of the equipment according to the actual position corresponding to each second feature point;
calculating a distance between the third location and a second location of the device currently recorded in the device map;
adding the third position as a second position of a new device to the device map when the minimum distance of the calculated distances is greater than or equal to a second preset distance;
under the condition that the minimum distance is smaller than a second preset distance, updating a second position of a target device in the device map according to the third position, wherein the target device is as follows: a device for which the distance between the second location and the third location before updating is the minimum distance.
In one embodiment of the invention, the method further comprises:
obtaining a target image acquired by the unmanned equipment, wherein the target image is: an image acquired by the unmanned equipment upon determining that the distance between its own position and the second position of the equipment recorded in the equipment map is smaller than a third preset distance;
identifying a third area where a display panel of the instrument is located in the target image;
identifying a fourth area for displaying information in the third area;
and performing character recognition on the fourth area to obtain instrument information.
In a second aspect, an embodiment of the present invention provides a map detecting apparatus, including:
the image acquisition module is used for acquiring a first image acquired by the unmanned equipment in the driving process;
the area identification module is used for identifying a first area where equipment is located in the first image;
a feature point obtaining module, configured to obtain a pixel point representing the first regional feature as a first feature point;
the actual position determining module is used for determining the corresponding actual position of each first characteristic point in the actual environment of the unmanned equipment in running according to the obtained pixel point position of each first characteristic point;
the first position determining module is used for determining a first position of the equipment according to the actual position corresponding to each first characteristic point;
and the result obtaining module is used for comparing the first position with a second position of the equipment recorded in the equipment map to obtain a map detection result.
In an embodiment of the present invention, the result obtaining module is specifically configured to:
calculating a distance between the first location and a second location of the device recorded in a device map;
and generating a map detection result indicating that the position of the device is wrong when the minimum distance in the calculated distances is greater than or equal to a first preset distance.
In an embodiment of the present invention, the feature point obtaining module is specifically configured to:
and obtaining pixel points representing the first area characteristics and having depth values belonging to a preset depth interval as first characteristic points.
In an embodiment of the present invention, the feature point obtaining module is specifically configured to:
and obtaining pixel points representing the first region characteristics, of which the corner response values are greater than a preset response value, as first characteristic points.
In one embodiment of the invention, the apparatus further comprises a second location determination module for determining a second location of the device recorded in the device map, the second location determination module comprising:
the image acquisition submodule is used for acquiring a second image acquired by the unmanned equipment in the driving process;
the area identification submodule is used for identifying a second area where the equipment is located in the second image;
the characteristic point obtaining submodule is used for obtaining pixel points representing the characteristics of the second area as second characteristic points;
the actual position determining submodule is used for determining the corresponding actual position of each second characteristic point in the actual environment of the unmanned equipment in driving according to the obtained pixel point position of each second characteristic point;
and the second position determining submodule is used for determining a second position of the equipment according to the actual position corresponding to each second characteristic point.
In an embodiment of the present invention, the second position determining sub-module is specifically configured to:
determining a third position of the equipment according to the actual position corresponding to each second feature point;
calculating a distance between the third location and a second location of the device currently recorded in the device map;
adding the third position as a second position of a new device to the device map when the minimum distance of the calculated distances is greater than or equal to a second preset distance;
under the condition that the minimum distance is smaller than a second preset distance, updating a second position of a target device in the device map according to the third position, wherein the target device is as follows: a device for which the distance between the second location and the third location before updating is the minimum distance.
In one embodiment of the present invention, the apparatus further comprises:
the target image obtaining module is used for obtaining a target image acquired by the unmanned equipment, wherein the target image is: an image acquired by the unmanned equipment upon determining that the distance between its own position and the second position of the equipment recorded in the equipment map is smaller than a third preset distance;
a third area obtaining module, configured to identify, in the target image, a third area where a display panel of the meter is located;
a fourth region identification module, configured to identify a fourth region for displaying information in the third region;
and the information identification module is used for carrying out character identification on the fourth area to obtain the instrument information.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor and the communication interface complete communication between the memory and the processor through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of the first aspect when executing a program stored in the memory.
In a fourth aspect, the present invention provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the method steps of any one of the first aspect.
In a fifth aspect, embodiments of the present invention also provide a computer program product comprising instructions, which when run on a computer, cause the computer to perform the method steps of any of the first aspects described above.
The embodiment of the invention has the following beneficial effects:
in the map detection process according to an embodiment of the invention, a first image captured by the unmanned device while it travels can be acquired. A first area in which the equipment is located is identified in the first image. Pixel points that characterize the first area are obtained as first feature points. From the obtained pixel position of each first feature point, its corresponding actual position in the real environment in which the unmanned device travels is determined. A first position of the equipment is determined from the actual positions corresponding to the first feature points. The first position is compared with a second position of the equipment recorded in the equipment map to obtain a map detection result.
As can be seen from the above, since the first feature points characterize the first area in which the device is located, they can be taken to represent that area. The first position of the device can be determined from the actual positions corresponding to the first feature points and can be regarded as the device's current position. Comparing the first position with the second position shows whether the second position recorded in the device map is accurate, which yields the map detection result. Moreover, the device's current first position is obtained from the first image captured by the unmanned device while it travels, so the map detection process requires no manual work and map detection efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a map detection method provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a first area provided in an embodiment of the present invention;
fig. 3 is a schematic diagram of a first feature point in a first image according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating a first second position determination method according to an embodiment of the present invention;
fig. 5 is a flowchart illustrating a second position determining method according to an embodiment of the present invention;
fig. 6 is a schematic flow chart of a meter information collection method provided in an embodiment of the present invention;
FIG. 7A is a schematic view of a third area provided in the embodiments of the present invention;
fig. 7B is a schematic diagram of an edge detection result according to an embodiment of the present invention;
FIG. 7C is a schematic diagram of a fourth area provided in an embodiment of the present invention;
FIG. 7D is a diagram illustrating a character region according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a map detection apparatus provided in an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived from the embodiments given herein by one of ordinary skill in the art, are within the scope of the invention.
In order to solve the problem that map detection efficiency is low in the prior art, embodiments of the present invention provide a map detection method and apparatus.
In an embodiment of the present invention, a map detection method is provided, where the method includes:
the method comprises the steps of acquiring a first image acquired by the unmanned equipment in the driving process.
And identifying a first area where the equipment is located in the first image.
And obtaining pixel points representing the first region characteristics as first characteristic points.
And determining the corresponding actual position of each first characteristic point in the actual environment in which the unmanned equipment runs according to the obtained pixel point position of each first characteristic point.
And determining the first position of the equipment according to the actual position corresponding to each first characteristic point.
And comparing the first position with a second position of the equipment recorded in the equipment map to obtain a map detection result.
As can be seen from the above, since the first feature points characterize the first area in which the device is located, they can be taken to represent that area. The first position of the device can be determined from the actual positions corresponding to the first feature points and can be regarded as the device's current position. Comparing the first position with the second position shows whether the second position recorded in the device map is accurate, which yields the map detection result. Moreover, the device's current first position is obtained from the first image captured by the unmanned device while it travels, so the map detection process requires no manual work and map detection efficiency is improved.
Referring to fig. 1, a schematic flowchart of a map detection method according to an embodiment of the present invention is provided, where the method includes the following steps S101 to S106.
Specifically, the execution subject of the embodiment of the present invention may be a processor installed in an unmanned device; for example, the unmanned device may be a robot, an unmanned vehicle, or the like. The processor of the unmanned device may run a ROS (Robot Operating System) or the like. However, since a large amount of data must be processed and the workload is large, the execution subject in the embodiment of the present invention may also be a server communicatively connected to the unmanned device.
S101: the method comprises the steps of acquiring a first image acquired by the unmanned equipment in the driving process.
The first image may be an image captured by an image capturing device mounted on the unmanned device.
Specifically, while moving, the unmanned device may use an image capturing device installed on it, such as a camera, a video camera, or an RGBD camera, to capture environment images continuously, so as to perform self-positioning from the captured environment images using SLAM (Simultaneous Localization and Mapping) technology. In addition to the RGB image, an RGBD camera collects a depth map in which the pixel value of each pixel point is a depth value; pixel points at the same position in an RGB image and a depth map collected at the same time correspond to the same position in the actual environment. Such a camera can continuously acquire a stream of RGB images and depth maps at 30 FPS.
In addition, in the process of self-positioning of the unmanned equipment, the self pose can be determined according to devices such as a code disc and an inertial sensor which are arranged on the unmanned equipment, and the self pose can be used for self-positioning.
Moreover, the unmanned equipment can be provided with a laser radar, and laser SLAM can be carried out based on laser point cloud collected by the laser radar, so that the unmanned equipment is positioned.
The SLAM technology is only one way to position the unmanned device; other prior-art methods may also be used, and the embodiments of the present invention do not limit this.
In the case that the execution subject of the embodiment of the present invention is the server, the processor of the unmanned device may compress the first image and then send it to the server, reducing the amount of data transmitted and improving transmission efficiency. For example, the first image may be compressed into a jpg image of 960 × 540 pixels, or an image of another size.
In addition, when the first image is captured by an RGBD camera, the processor may mix the first image and the depth map into one stream and transmit it to the server. After receiving the mixed stream, the server can pair each first image with the depth map acquired at the same time according to the timestamps they carry.
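For illustration, the following sketch shows how the on-board processor might compress a frame and how the server might pair RGB images with depth maps by timestamp. The function names, the use of OpenCV, and the 20 ms tolerance are our assumptions, not part of the patent.

```python
import cv2

def compress_for_upload(frame_bgr):
    """Downscale a frame to 960x540 and JPEG-encode it before sending it
    to the server, reducing the amount of data to transmit."""
    small = cv2.resize(frame_bgr, (960, 540))
    ok, buf = cv2.imencode(".jpg", small)
    return buf.tobytes() if ok else None

def pair_by_timestamp(rgb_frames, depth_frames, tol=0.02):
    """Match each (timestamp, image) RGB frame with the depth map whose
    timestamp is closest, as the server does after receiving the mixed
    stream; the tolerance value is assumed."""
    pairs = []
    for ts, rgb in rgb_frames:
        ts_d, depth = min(depth_frames, key=lambda d: abs(d[0] - ts))
        if abs(ts_d - ts) <= tol:
            pairs.append((rgb, depth))
    return pairs
```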
In an embodiment of the present invention, the images acquired by the unmanned device while it travels may be obtained, the feature points contained in each acquired image determined, and an image containing more than a preset number of feature points taken as the first image.
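A minimal sketch of this frame-selection rule, assuming ORB feature points (the detector the description names later) and an assumed threshold of 100 keypoints standing in for the preset number:

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)

def has_enough_features(image_bgr, min_keypoints=100):
    """Keep only frames containing more than a preset number of feature
    points; frames failing the check are not used as the first image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    keypoints = orb.detect(gray, None)
    return len(keypoints) > min_keypoints
```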
S102: and identifying a first area where the equipment is located in the first image.
In an embodiment of the present invention, a neural network model may be trained using a sample image including the device, and a first region in which the device is located in the first image may be identified using the trained neural network model.
The first region where the device is located in the first image may also be identified using prior-art algorithms such as YOLO (You Only Look Once: Unified, Real-Time Object Detection) or R-CNN (Region-based Convolutional Neural Networks), which are not described in detail herein.
Referring to fig. 2, a schematic diagram of a first area according to an embodiment of the present invention is provided.
The area enclosed by the black line frame in the figure is the first area.
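For illustration, a hedged sketch of this region-identification step using a YOLO-family detector, one of the prior-art options named above. The ultralytics package and the weight file `equipment_detector.pt` (a model assumed to have been trained on sample images of the equipment) are our assumptions:

```python
from ultralytics import YOLO

model = YOLO("equipment_detector.pt")  # hypothetical trained weights

def detect_first_regions(image_bgr):
    """Return bounding boxes (x1, y1, x2, y2) of equipment found in the
    first image; each box corresponds to a first area."""
    results = model(image_bgr)
    return results[0].boxes.xyxy.cpu().numpy()
```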
S103: and obtaining pixel points representing the first region characteristics as first characteristic points.
In an embodiment of the present invention, each pixel point in the first image that characterizes the image may be identified, and a pixel point located in the first region among the identified pixel points may be determined as the first feature point.
In another embodiment of the present invention, a pixel point characterizing the first region may also be directly identified in the first region as the first feature point.
Since the first region is an image region where the device is located, the obtained first feature point can be considered as a feature point representing a feature of the device.
Specifically, the first feature points may be acquired using the ORB (Oriented FAST and Rotated BRIEF) algorithm provided by the OpenCV platform; feature points obtained this way may be called ORB feature points and are represented by binary ORB descriptors. The first feature points may also be acquired by other prior-art algorithms, which the embodiments of the present invention do not limit.
Fig. 3 is a schematic diagram of a first feature point in a first image according to an embodiment of the present invention.
The black dots in the figure are the first characteristic points in the first image.
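A minimal sketch of step S103 with OpenCV's ORB implementation, which the text names. Restricting detection to the detected box and shifting keypoints back to full-image coordinates are our implementation choices:

```python
import cv2

orb = cv2.ORB_create()

def first_feature_points(image_bgr, box):
    """Extract ORB feature points inside the first region (box is
    (x1, y1, x2, y2) from the detector) and return them in full-image
    pixel coordinates together with their binary ORB descriptors."""
    x1, y1, x2, y2 = map(int, box)
    roi = cv2.cvtColor(image_bgr[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = orb.detectAndCompute(roi, None)
    points = [(kp.pt[0] + x1, kp.pt[1] + y1) for kp in keypoints]
    return points, descriptors
```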
In addition, the first feature point may be obtained by the following step a.
Step A: and obtaining pixel points representing the first area characteristics and having depth values belonging to a preset depth interval as first characteristic points.
Specifically, if a pixel point in the first image has a very small depth value, the actual object corresponding to it was very close to the image capturing device when the first image was captured and may be an obstruction covering the lens. Feature points falling in the image region of such an obstruction would impair the accuracy of the subsequently determined first position of the device.
Likewise, if a pixel point has a very large depth value, the corresponding actual object was very far from the image capturing device when the first image was captured; limited by the device's imaging capability, the collected information for such a pixel point may be inaccurate. Feature points falling in image regions of distant objects also impair the accuracy of the subsequently determined first position of the device.
Therefore, a preset depth interval can be set, and only pixel points that characterize the first area and whose depth values fall inside this interval are used as first feature points, removing pixel points whose depth values are too large or too small.
In an embodiment of the present invention, when the first image is acquired by an RGBD camera, the RGBD camera acquires a depth map corresponding to the first image at the same time as the first image, and the depth map corresponds to the same position in the actual environment as a pixel point at the same position in the first image. Therefore, the depth value of the pixel point can be obtained according to the depth map acquired at the same time as the first image.
In another embodiment of the present invention, the depth value of the pixel point may also be calculated based on the pose of the image capturing device and the position of the pixel point in the first image when the first image is captured. Specifically, the depth value may be calculated based on a depth value calculation method in the prior art, which is not limited in the embodiment of the present invention.
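A sketch of step A, assuming a depth map registered with the first image is available. The interval bounds (0.3 m to 8.0 m) are assumed values standing in for the preset depth interval:

```python
def filter_by_depth(points, depth_map, d_min=0.3, d_max=8.0):
    """Keep only feature points whose depth value falls inside the preset
    depth interval, discarding points that are too close (possible lens
    obstructions) or too far (unreliable measurements)."""
    kept = []
    for (u, v) in points:
        z = float(depth_map[int(v), int(u)])  # depth map indexed row, col
        if d_min <= z <= d_max:
            kept.append((u, v, z))
    return kept
```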
The first feature point may be obtained in the following step B.
Step B: obtain, as first feature points, pixel points that characterize the first region and whose corner response values are greater than a preset response value.
In most cases, pixel points on the edges of an image reflect the characteristics of the image region. The corner response value of a pixel point reflects the likelihood that it lies on an edge, and the larger the corner response value, the higher that likelihood. Therefore, pixel points whose corner response values are greater than a preset response value can be selected as first feature points.
Specifically, the corner response value may be a Harris corner response value.
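A sketch of step B using OpenCV's Harris response, which the text names. The block size, aperture, k value, and the threshold rule (a fraction of the maximum response) are assumed parameters standing in for the preset response value:

```python
import cv2
import numpy as np

def filter_by_corner_response(points, gray, threshold_ratio=0.01):
    """Keep only feature points whose Harris corner response exceeds a
    preset response value, here taken as a fraction of the maximum
    response over the image."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    threshold = threshold_ratio * response.max()
    return [(u, v) for (u, v) in points if response[int(v), int(u)] > threshold]
```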
S104: and determining the corresponding actual position of each first characteristic point in the actual environment in which the unmanned equipment runs according to the obtained pixel point position of each first characteristic point.
In an embodiment of the present invention, the three-dimensional coordinates of the actual position corresponding to the first feature point may be calculated by the following formula.
$$ Z \, P_{uv} \;=\; Z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \;=\; K \left( R \, P_w + t \right) $$

wherein Z is the depth value of the first feature point; $P_{uv}$ is the homogeneous pixel coordinate of the first feature point in the first image, with u its abscissa value and v its ordinate value; K is the camera internal reference (intrinsic) matrix of the image capturing device that captured the first image; R is the rotation and t the translation vector of the camera pose, and $T = [R \mid t]$ is the transformation matrix representing the camera pose; $P_w$ is the three-dimensional coordinate of the actual position corresponding to the first feature point. Solving for the actual position gives $P_w = R^{-1}\left( Z \, K^{-1} P_{uv} - t \right)$.
In another embodiment of the present invention, the actual position may also be calculated by other methods in the prior art, which is not limited in the embodiment of the present invention.
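A direct transcription of the formula above into code, assuming K, R, and t are supplied as NumPy arrays:

```python
import numpy as np

def pixel_to_world(u, v, z, K, R, t):
    """Back-project a first feature point to its actual position using
    P_w = R^-1 (Z * K^-1 [u, v, 1]^T - t); K is the 3x3 intrinsic matrix
    and (R, t) the camera pose of the image capturing device."""
    p_uv = np.array([u, v, 1.0])
    p_cam = z * np.linalg.inv(K) @ p_uv       # point in camera coordinates
    p_world = np.linalg.inv(R) @ (p_cam - t)  # point in world coordinates
    return p_world
```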
S105: and determining the first position of the equipment according to the actual position corresponding to each first characteristic point.
Since the first feature point is a feature point located in the first region, and the first region is a region in which the device is located in the first image, it can be considered that the first feature point may represent the device.
In one embodiment of the present invention, an average value, a weighted average value, or the like of the coordinate values of the actual position corresponding to each first feature point may be calculated as the coordinate values of the first position of the above-described device.
Specifically, when calculating the weighted average, the weights may be set manually. If the upper half of the device occupies most of the first image, most of the obtained first feature points correspond to the upper half, so feature points closer to the lower half of the device may be given larger weights. Symmetrically, if the lower half occupies most of the image, feature points closer to the upper half may be weighted more; if the left half dominates, feature points closer to the right half may be weighted more; and if the right half dominates, feature points closer to the left half may be weighted more.
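A minimal sketch of step S105: passing `weights=None` gives the plain average, and a caller-supplied weight vector gives the weighted average described above:

```python
import numpy as np

def device_position(world_points, weights=None):
    """Determine the first position of the equipment as the (weighted)
    average of the actual positions of its first feature points."""
    pts = np.asarray(world_points, dtype=float)
    return np.average(pts, axis=0, weights=weights)
```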
S106: and comparing the first position with a second position of the equipment recorded in the equipment map to obtain a map detection result.
In an embodiment of the present invention, if the calculated first position is close to the second position of the device recorded in the device map, the device has not moved significantly, and the map detection result may be that the map data is accurate.
In addition, the first feature point may be added to a feature point library corresponding to the device, and the second position of the device may be recalculated based on an actual position corresponding to the feature point stored in the feature point library. The feature point library stores each feature point corresponding to the device.
The second position of the device recorded in the device map may be updated by calculating an average value, a weighted average value, or the like between the coordinates of the first position and the second position.
In another embodiment of the present invention, if the calculated first position differs greatly from the second position recorded in the device map, either the calculation of the first position is wrong, the device is a new device, or the device's position has changed. The map detection result may therefore be that the map data is inaccurate, and alarm information may be sent to notify the staff.
In one embodiment of the present invention, the step S106 can be realized by the following steps C to D.
Step C: calculate the distance between the first position and the second position of the device recorded in the device map.
In one embodiment of the present invention, the distance between the first position and the second position may be calculated based on the coordinate values of the first position and the second position.
Since the number of devices recorded in the device map may be greater than 1, distances between the first location and the second locations of the respective devices recorded in the device map may be calculated, respectively.
Step D: and generating a map detection result indicating that the position of the device is wrong when the minimum distance in the calculated distances is greater than or equal to a first preset distance.
Specifically, since the installation position of a device usually does not change much, the device whose second position is at the minimum distance from the first position is most likely the same device as the one corresponding to the first position. Hence, if the calculated minimum distance is greater than or equal to the first preset distance, the obtained first position is far from the second positions of all devices recorded in the map and matches none of them. The device position recorded in the map can therefore be considered wrong, and a map detection result indicating a wrong device position is generated.
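Steps C and D in code form; `mapped_positions` stands for the second positions of all devices recorded in the device map:

```python
import numpy as np

def detect_map_error(first_position, mapped_positions, first_preset_distance):
    """Return True when the minimum distance between the first position
    and the recorded second positions reaches the first preset distance,
    i.e. the map records a wrong device position."""
    dists = [np.linalg.norm(np.asarray(first_position) - np.asarray(p))
             for p in mapped_positions]
    return min(dists) >= first_preset_distance
```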
As can be seen from the above, since the first feature points characterize the first area in which the device is located, they can be taken to represent that area. The first position of the device can be determined from the actual positions corresponding to the first feature points and can be regarded as the device's current position. Comparing the first position with the second position shows whether the second position recorded in the device map is accurate, which yields the map detection result. Moreover, the device's current first position is obtained from the first image captured by the unmanned device while it travels, so the map detection process requires no manual work and map detection efficiency is improved.
Referring to fig. 4, a flowchart of a first second position determining method according to an embodiment of the present invention is shown.
Specifically, the second position of the device recorded in the device map may be determined by the following steps S401 to S405.
S401: and acquiring a second image acquired by the unmanned equipment in the driving process.
S402: and identifying a second area where the equipment is located in the second image.
S403: and obtaining pixel points representing the characteristics of the second area as second characteristic points.
S404: and determining the corresponding actual position of each second characteristic point in the actual driving environment of the unmanned equipment according to the obtained pixel point position of each second characteristic point.
S405: and determining the second position of the equipment according to the actual position corresponding to each second feature point.
In an embodiment of the present invention, the steps S401 to S405 are similar to the steps S101 to S105, and only the difference is that the first image in the steps S101 to S105 is replaced with the second image, and the obtained first position is replaced with the second position, which is not described again in the embodiment of the present invention.
As can be seen from the above, since the second feature points characterize the second area in which the device is located, they can be taken to represent that area. The second position of the device can be determined from the actual positions corresponding to the second feature points and can be regarded as the device's current position. The second position is determined from the second image captured by the unmanned device while it travels, without manual work, so the efficiency of determining the second position is improved.
Referring to fig. 5, a flowchart of a second position determining method according to an embodiment of the present invention is shown, and compared with the embodiment shown in fig. 4, the above step S405 can be implemented by the following steps S405A-S405D.
S405A: and determining the third position of the equipment according to the actual position corresponding to each second feature point.
Specifically, step S405A is similar to step S105 and is not described again here.
S405B: and calculating the distance between the third position and the second position of the equipment currently recorded in the equipment map.
Specifically, step S405B is similar to step C and is not described again here.
S405C: and adding the third position as the second position of the new device to the device map when the minimum distance in the calculated distances is greater than or equal to a second preset distance.
If the minimum distance is greater than or equal to the second preset distance, the device corresponding to the third position is far from the device corresponding to the minimum distance recorded in the device map, and they are not the same device. The device corresponding to the third position may be a device newly appearing in the actual environment, so the third position may be added to the device map as the position of a new device.
S405D: and updating the second position of the target device in the device map according to the third position when the minimum distance is smaller than a second preset distance.
Wherein, the target device is: and a device for setting the distance between the second position and the third position before updating to be the minimum distance.
If the minimum distance is smaller than the second preset distance, the device corresponding to the third position is close to the device corresponding to the minimum distance in the device map and is the same device. The second position of that target device in the device map may be updated based on the third position.
Specifically, an average value or a weighted average value between the third position and the second position of the target device originally recorded in the device map may be calculated, and the calculation result may be used as the new second position of the target device recorded in the device map.
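A sketch of steps S405C and S405D, using the plain average as the update rule (one of the options the text mentions); representing the device map as a list of 3-D positions is our simplification:

```python
import numpy as np

def update_device_map(device_map, third_position, second_preset_distance):
    """Add the third position as a new device when it is far from every
    recorded device (S405C); otherwise average it with the nearest
    (target) device's recorded second position (S405D)."""
    third = np.asarray(third_position, dtype=float)
    if not device_map:
        device_map.append(third)
        return
    dists = [np.linalg.norm(third - np.asarray(p)) for p in device_map]
    i = int(np.argmin(dists))
    if dists[i] >= second_preset_distance:
        device_map.append(third)  # new device (S405C)
    else:
        device_map[i] = (np.asarray(device_map[i]) + third) / 2.0  # update target (S405D)
```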
In addition, the second feature point may be added to a feature point library corresponding to the target device, and the second position of the target device may be recalculated based on an actual position corresponding to the feature point stored in the feature point library.
As can be seen from the above, when the device corresponding to the third position is far from the target device, they are considered different devices, so the device corresponding to the third position can be treated as a newly found device and added to the device map, making the map more accurate. Otherwise, they are considered the same device, and the second position of the target device recorded in the device map can be updated according to the third position, making that recorded position, and thus the device map, more accurate.
Referring to fig. 6, which is a schematic flow chart of a meter information collecting method according to an embodiment of the present invention, the method includes the following steps S601 to S604.
S601: and obtaining a target image acquired by the unmanned equipment.
The target image is an image acquired when the distance between the unmanned device's own position and the second position of the equipment recorded in the equipment map is smaller than a third preset distance.
In the case where the execution subject of the embodiment of the present invention is the processor of the unmanned device, the processor may calculate the distance between the unmanned device's own position and the second position and, when the distance is smaller than the third preset distance, control the on-board image capturing device to capture the target image, which the processor then receives from that device. The image capturing device used to capture the target image may be the same as or different from the one used to capture the first image; for example, the target image may be captured by a monocular camera and the first image by an RGBD camera. The two cameras may be mounted at different positions on the unmanned device, with different orientations.
In the case that the execution subject of the embodiment of the present invention is a server, the server may calculate the distance between the unmanned device's position and the second position and, when the distance is smaller than the third preset distance, send an image capture instruction to the processor of the unmanned device, so that the processor controls the image capturing device to capture an image after receiving the instruction. The server may then receive the target image sent by the processor.
S602: and identifying a third area where a display panel of the instrument is located in the target image.
The display panel may include other parts such as buttons and indicator lamps in addition to an area for displaying information.
Specifically, in an embodiment of the present invention, an area where the device included in the target image is located may be identified first, and then a third area is identified from the area where the device is located, or the third area may be directly identified from the target image.
In an embodiment of the present invention, since the position and the size of the display panel on the device are relatively fixed, after the area where the device is located is identified, the third area may be determined according to the position and the size of a preset meter on the device.
Fig. 7A is a schematic view of a third area provided in the embodiment of the present invention.
As can be seen from the figure, the display panel of the device includes a region for displaying information, a key region, and the like, and the information displayed on the display panel is "40.0".
Specifically, step S602 is similar to step S102, differing only in that step S102 identifies the first region while step S602 identifies the third region; it is not described again here.
S603: and identifying a fourth area for displaying information in the third area.
Specifically, the display information may be represented in the form of numbers, characters, or the like.
In an embodiment of the present invention, since the area of the device displaying the information is often fixed in a fixed position on the display panel, and the size of the area displaying the information is often fixed, the fourth area may be determined from the third area according to a preset position of the area displaying the information on the display panel and the size of the area displaying the information.
In another embodiment of the present invention, the fourth area for displaying information may be identified from the above-described third area through the following steps E to F.
Step E: and detecting the edge of the third area.
In an embodiment of the present invention, edge detection may be performed on the third region based on the Roberts operator, the Sobel operator, the Laplace operator, or other prior-art image edge detection algorithms, which are not described in detail herein.
Fig. 7B is a schematic diagram of an edge detection result according to an embodiment of the present invention.
Fig. 7B is an edge detection result obtained by performing edge detection on the third area shown in fig. 7A.
Step F: and determining a fourth area for displaying information in the third area according to the detected edge.
Specifically, the edge-enclosed region with the largest area may be taken as the fourth region.
In addition, since the shape of the information-displaying region on a device is often fixed, a region enclosed by an edge whose shape matches that of the information-displaying region may be selected as the fourth region, as shown in the sketch below.
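A sketch of steps E and F, using the largest-area criterion described above; Canny stands in here for the named edge operators, and its thresholds are assumed values:

```python
import cv2

def find_fourth_region(third_region_bgr):
    """Run edge detection on the third region (step E) and take the
    edge-enclosed area with the largest area as the fourth region
    (step F)."""
    gray = cv2.cvtColor(third_region_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return third_region_bgr[y:y + h, x:x + w]
```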
Fig. 7C is a schematic view of a fourth area according to an embodiment of the present invention.
Fig. 7C is a fourth region included in the third region shown in fig. 7A.
S604: and performing character recognition on the fourth area to obtain instrument information.
In an embodiment of the present invention, the meter information may be obtained by using a word recognition algorithm in the prior art, which is not described in detail herein.
In another embodiment of the present invention, the meter information may be obtained through the following steps G to I.
Step G: and determining the color value of each pixel point contained in the fourth area.
Specifically, the color value may be a color value in an RGB color space, or may also be a color value in an HSV color space.
Step H: and counting the number of pixel points corresponding to the color value aiming at each determined color value.
Specifically, the color values of the pixels in the fourth region can be traversed, so that the number of the pixels corresponding to each color value is determined.
Step I: and identifying a character area contained in the fourth area according to the target color value, and performing character identification on the character area to obtain meter information.
Wherein, the target color value is: and determining the color value with the maximum number of corresponding pixel points in the determined color values.
Specifically, when a meter displays information, the displayed information usually occupies most of the display region, so the colour represented by the target colour value, i.e. the colour value with the most corresponding pixel points, may be considered the colour of the displayed information. The minimal rectangular region containing all pixel points whose colour value equals the target colour value can therefore be used as the character region.
After the character area is determined, the character area can be subjected to image segmentation to respectively obtain the image area where each character is located, and character recognition is performed on the image area where each character is located to obtain meter information.
Specifically, the above process may be implemented by using an image segmentation algorithm and a character recognition algorithm in the prior art, which is not described in detail herein.
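A sketch of steps G to I; pytesseract stands in for "a character recognition algorithm in the prior art", and the exact-colour mask and page-segmentation mode are our assumptions:

```python
import cv2
import numpy as np
import pytesseract

def read_meter(fourth_region_bgr):
    """Count pixel points per colour value (steps G, H), take the colour
    with the most pixels as the target colour value, crop the minimal
    rectangle around pixels of that colour as the character region, and
    run character recognition on it (step I)."""
    pixels = fourth_region_bgr.reshape(-1, 3)
    colors, counts = np.unique(pixels, axis=0, return_counts=True)
    target = tuple(int(c) for c in colors[counts.argmax()])
    mask = cv2.inRange(fourth_region_bgr, target, target)  # binarized view
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return ""
    char_region = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return pytesseract.image_to_string(char_region, config="--psm 7").strip()
```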
In addition, after the fourth region is identified, binarization may be applied to it: pixel points whose colour value equals the target colour value are set to a first pixel value and all other pixel points to a second pixel value, so that the information displayed in the fourth region is highlighted. The fourth region may also be denoised using prior-art noise removal methods, which are not described in detail here.
In addition, because the instrument displays information, the color of the displayed information is often fixed, and thus the target color value can also be a preset color value.
Fig. 7D is a schematic diagram of a character area according to an embodiment of the present invention.
The region shown in fig. 7D is a character region included in the fourth region shown in fig. 7C.
As can be seen from the above, while the unmanned device moves, a target image of the equipment can be captured whenever the distance between the unmanned device and the equipment is smaller than the preset distance. Because the unmanned device is close to the equipment, the captured target image shows the equipment's meter clearly, and by identifying the information region contained in the target image and determining the content it indicates, the meter information can be obtained. Meter information can thus be collected while the unmanned device moves, without manual work, improving the efficiency of meter information collection.
In an embodiment of the present invention, while the unmanned device moves in the actual environment, it must continuously determine its own position from the feature points contained in the environment images acquired by the image capturing device, so as to achieve self-positioning. However, if the acquired images contain few or no feature points, the unmanned device may be unable to determine its own position and may find it difficult to continue determining its driving route.
In this case, based on the tracking thread of the open-source ORB-SLAM2 framework, the historical environment image matching the current environment image may be found through bag-of-words matching, projection matching, pose optimization, and the like, and the unmanned device relocalized according to the matched historical image. This tracking-thread-based relocalization belongs to the prior art and is not described in detail here. Other relocalization methods may of course also be adopted, which the embodiments of the present invention do not limit.
In another embodiment of the present invention, after an environment image is obtained, some of its pixel points may be selected, the three-dimensional coordinates of the actual positions corresponding to the selected pixel points calculated from their positions in the environment image and the camera pose of the image capturing device at capture time, and a three-dimensional point cloud block of the actual positions represented by each environment image obtained from these coordinates. The point cloud blocks corresponding to different environment images are then stitched together into a three-dimensional point cloud of the actual environment, which serves as the point cloud map of the actual environment.
Specifically, the calculation of the three-dimensional coordinates, the obtaining of the three-dimensional point cloud blocks, and the splicing of the three-dimensional point cloud blocks can be realized by a common method in the prior art, which is not limited in the embodiment of the present invention.
Corresponding to the map detection method, the embodiment of the invention also provides a map detection device.
Referring to fig. 8, a schematic structural diagram of a map detecting apparatus provided in an embodiment of the present invention is shown, where the apparatus includes:
an image acquisition module 801, configured to obtain a first image acquired by an unmanned device during driving;
an area identification module 802, configured to identify, in the first image, a first area where a device is located;
a feature point obtaining module 803, configured to obtain pixel points characterizing features of the first area as first feature points;
an actual position determining module 804, configured to determine, according to the obtained pixel point position of each first feature point, the actual position corresponding to each first feature point in the actual environment in which the unmanned device drives;
a first position determining module 805, configured to determine a first position of the device according to the actual positions corresponding to the first feature points;
a result obtaining module 806, configured to compare the first position with a second position of the device recorded in a device map to obtain a map detection result.
As can be seen from the above, since the first feature points characterize the features of the first area where the device is located, they can be taken to represent that first area. The first position of the device can be determined from the actual positions corresponding to the first feature points and regarded as the device's current position. Comparing the first position with the second position shows whether the second position recorded in the device map is accurate, yielding the map detection result. Moreover, because the first position is obtained from the first image acquired by the unmanned device while driving, the map detection process requires no manual work, which improves map detection efficiency.
In an embodiment of the present invention, the result obtaining module 806 is specifically configured to:
calculate distances between the first position and the second positions of the devices recorded in the device map; and
generate a map detection result indicating that the position of the device is wrong when the minimum of the calculated distances is greater than or equal to a first preset distance.
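A minimal sketch of this comparison, assuming the device map stores one recorded second position per device and using an illustrative 5.0 m value for the first preset distance:

```python
import numpy as np

def detect_map_error(first_pos, device_map, first_preset_distance=5.0):
    """first_pos: (x, y, z); device_map: non-empty list of records with a 'position'."""
    recorded = np.asarray([d["position"] for d in device_map])
    dists = np.linalg.norm(recorded - np.asarray(first_pos), axis=1)
    if dists.min() >= first_preset_distance:
        # No recorded second position lies near the detected first position.
        return {"status": "position_error", "min_distance": float(dists.min())}
    return {"status": "ok", "matched_device": int(dists.argmin())}
```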
In an embodiment of the present invention, the feature point obtaining module 803 is specifically configured to:
obtain, as the first feature points, pixel points that characterize features of the first area and whose depth values fall within a preset depth interval.
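A minimal sketch of the depth-interval filter, assuming a depth map aligned with the first image; the interval bounds are illustrative:

```python
import numpy as np

def filter_by_depth(candidates, depth_map, d_min=0.5, d_max=8.0):
    """candidates: Nx2 integer array of (row, col) pixel positions."""
    depths = depth_map[candidates[:, 0], candidates[:, 1]]
    # Keep only pixels whose depth falls inside the preset interval, so that
    # distant background points do not stand in for the device area.
    mask = (depths >= d_min) & (depths <= d_max)
    return candidates[mask]
```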
In an embodiment of the present invention, the feature point obtaining module 803 is specifically configured to:
obtain, as the first feature points, pixel points that characterize features of the first area and whose corner response values are greater than a preset response value.
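A minimal sketch of the corner-response filter, using the Harris response as one possible corner measure; the threshold factor is illustrative:

```python
import cv2
import numpy as np

def corner_feature_points(region_gray, response_factor=0.01):
    # Harris corner response over the identified first area.
    response = cv2.cornerHarris(np.float32(region_gray), blockSize=2,
                                ksize=3, k=0.04)
    # Keep pixels whose response exceeds a preset fraction of the maximum.
    rows, cols = np.where(response > response_factor * response.max())
    return np.column_stack([rows, cols])  # pixel positions of feature points
```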
In one embodiment of the invention, the apparatus further comprises a second position determining module for determining the second position of the device recorded in the device map; the second position determining module comprises:
an image acquisition submodule, configured to obtain a second image acquired by the unmanned device during driving;
an area identification submodule, configured to identify, in the second image, a second area where the device is located;
a feature point obtaining submodule, configured to obtain pixel points characterizing features of the second area as second feature points;
an actual position determining submodule, configured to determine, according to the obtained pixel point position of each second feature point, the actual position corresponding to each second feature point in the actual environment in which the unmanned device drives;
a second position determining submodule, configured to determine the second position of the device according to the actual positions corresponding to the second feature points.
As can be seen from the above, since the second feature points characterize the features of the second area where the device is located, they can be taken to represent that second area, and the second position of the device can be determined from the actual positions corresponding to them. Because the second position is obtained from the second image acquired by the unmanned device while driving, determining it requires no manual work, which improves the efficiency of determining the second position.
In an embodiment of the present invention, the second position determining submodule is specifically configured to:
determine a third position of the device according to the actual positions corresponding to the second feature points;
calculate distances between the third position and the second positions of the devices currently recorded in the device map;
add the third position to the device map as the second position of a new device when the minimum of the calculated distances is greater than or equal to a second preset distance; and
update the second position of a target device in the device map according to the third position when the minimum distance is smaller than the second preset distance, where the target device is the device whose pre-update second position is at the minimum distance from the third position.
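A minimal sketch of this add-or-update rule, assuming the device map is a list of records holding a recorded position and using an illustrative 2.0 m value for the second preset distance:

```python
import numpy as np

def update_device_map(third_pos, device_map, second_preset_distance=2.0):
    third_pos = np.asarray(third_pos, dtype=float)
    if not device_map:
        device_map.append({"position": third_pos})
        return
    dists = [np.linalg.norm(np.asarray(d["position"]) - third_pos)
             for d in device_map]
    idx = int(np.argmin(dists))
    if dists[idx] >= second_preset_distance:
        # No recorded device is close enough: register a new device.
        device_map.append({"position": third_pos})
    else:
        # Update the target device whose recorded second position is nearest.
        device_map[idx]["position"] = third_pos
```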
In one embodiment of the present invention, the apparatus further comprises:
a target image obtaining module, configured to obtain a target image acquired by the unmanned device, where the target image is an image acquired when the unmanned device determines that the distance between its own position and a second position of a device recorded in the device map is smaller than a third preset distance;
a third area identification module, configured to identify, in the target image, a third area where a display panel of a meter is located;
a fourth area identification module, configured to identify, in the third area, a fourth area for displaying information;
an information identification module, configured to perform character recognition on the fourth area to obtain meter information.
As can be seen from the above, while the unmanned device is moving, a target image of the device can be acquired once the distance between the unmanned device and the device falls below the preset distance. Because the unmanned device is then close to the device, the acquired target image shows the device's meter clearly, and by identifying the information area contained in the target image and determining the content it indicates, the meter information can be obtained. Meter information can therefore be collected while the unmanned device moves, without manual work, which improves the efficiency of meter information collection.
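A minimal sketch of the meter-reading steps, assuming the third area (the display panel) has already been located by an upstream detector and using pytesseract as one possible character recognizer; the brightest-region heuristic for the fourth area and all thresholds are illustrative:

```python
import cv2
import pytesseract  # one possible OCR backend; any character recognizer works

def read_meter(target_image, panel_box):
    x, y, w, h = panel_box                      # third area from the detector
    panel = target_image[y:y + h, x:x + w]
    gray = cv2.cvtColor(panel, cv2.COLOR_BGR2GRAY)

    # Assume the information area is the brightest region of the panel.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    ix, iy, iw, ih = cv2.boundingRect(max(contours, key=cv2.contourArea))
    info_area = gray[iy:iy + ih, ix:ix + iw]    # fourth area

    # Character recognition on the fourth area yields the meter information.
    return pytesseract.image_to_string(info_area, config="--psm 7").strip()
```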
An embodiment of the present invention further provides an electronic device, as shown in fig. 9, comprising a processor 901, a communication interface 902, a memory 903, and a communication bus 904, where the processor 901, the communication interface 902, and the memory 903 communicate with one another through the communication bus 904;
the memory 903 is configured to store a computer program; and
the processor 901 is configured to implement the steps of any of the above map detection methods when executing the program stored in the memory 903.
When the electronic device provided by the embodiment of the invention is applied to map detection, the first feature points characterize the features of the first area where the device is located and can therefore be taken to represent that first area. The first position of the device can be determined from the actual positions corresponding to the first feature points and regarded as the device's current position, and comparing it with the second position shows whether the second position recorded in the device map is accurate, yielding the map detection result. Moreover, because the first position is obtained from the first image acquired by the unmanned device while driving, map detection requires no manual work, which improves map detection efficiency.
The communication bus of the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include Random Access Memory (RAM) or Non-Volatile Memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In a further embodiment of the present invention, there is also provided a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the method steps of any of the above-described map detection methods.
When the computer program stored in the computer-readable storage medium provided by the embodiment of the present invention is executed to detect a map, the first feature points characterize the features of the first area where the device is located and can therefore be taken to represent that first area. The first position of the device can be determined from the actual positions corresponding to the first feature points and regarded as the device's current position, and comparing it with the second position shows whether the second position recorded in the device map is accurate, yielding the map detection result. Moreover, because the first position is obtained from the first image acquired by the unmanned device while driving, map detection requires no manual work, which improves map detection efficiency.
In a further embodiment provided by the present invention, there is also provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method steps of any of the above described map detection methods.
When the computer program product provided by the embodiment of the present invention is executed to detect a map, the first feature points characterize the features of the first area where the device is located and can therefore be taken to represent that first area. The first position of the device can be determined from the actual positions corresponding to the first feature points and regarded as the device's current position, and comparing it with the second position shows whether the second position recorded in the device map is accurate, yielding the map detection result. Moreover, because the first position is obtained from the first image acquired by the unmanned device while driving, map detection requires no manual work, which improves map detection efficiency.
In the above embodiments, the implementation may be realized wholly or partially in software, hardware, firmware, or any combination thereof. When implemented in software, it may take the form of a computer program product, in whole or in part. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in this specification are described in a related manner; for the same or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus, electronic device, computer-readable storage medium, and computer program product embodiments are substantially similar to the method embodiments, so their descriptions are relatively brief; for relevant points, refer to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A map detection method, characterized in that the method comprises:
acquiring a first image acquired by an unmanned device during driving;
identifying, in the first image, a first area where a device is located;
obtaining pixel points characterizing features of the first area as first feature points;
determining, according to the obtained pixel point position of each first feature point, the actual position corresponding to each first feature point in the actual environment in which the unmanned device drives;
determining a first position of the device according to the actual positions corresponding to the first feature points; and
comparing the first position with a second position of the device recorded in a device map to obtain a map detection result.
2. The method according to claim 1, wherein comparing the first position with the second position of the device recorded in the device map to obtain the map detection result comprises:
calculating distances between the first position and the second positions of the devices recorded in the device map; and
generating a map detection result indicating that the position of the device is wrong when the minimum of the calculated distances is greater than or equal to a first preset distance.
3. The method according to claim 1, wherein obtaining the pixel points characterizing features of the first area as the first feature points comprises:
obtaining, as the first feature points, pixel points that characterize features of the first area and whose depth values fall within a preset depth interval.
4. The method according to claim 1, wherein obtaining the pixel points characterizing features of the first area as the first feature points comprises:
obtaining, as the first feature points, pixel points that characterize features of the first area and whose corner response values are greater than a preset response value.
5. The method according to any one of claims 1-4, characterized in that the second position of the device recorded in the device map is determined by:
acquiring a second image acquired by the unmanned device during driving;
identifying, in the second image, a second area where the device is located;
obtaining pixel points characterizing features of the second area as second feature points;
determining, according to the obtained pixel point position of each second feature point, the actual position corresponding to each second feature point in the actual environment in which the unmanned device drives; and
determining the second position of the device according to the actual positions corresponding to the second feature points.
6. The method according to claim 5, wherein determining the second position of the device according to the actual positions corresponding to the second feature points comprises:
determining a third position of the device according to the actual positions corresponding to the second feature points;
calculating distances between the third position and the second positions of the devices currently recorded in the device map;
adding the third position to the device map as the second position of a new device when the minimum of the calculated distances is greater than or equal to a second preset distance; and
updating the second position of a target device in the device map according to the third position when the minimum distance is smaller than the second preset distance, wherein the target device is the device whose pre-update second position is at the minimum distance from the third position.
7. The method according to any one of claims 1-4, further comprising:
obtaining a target image acquired by the unmanned device, wherein the target image is an image acquired when the unmanned device determines that the distance between its own position and a second position of a device recorded in the device map is smaller than a third preset distance;
identifying, in the target image, a third area where a display panel of a meter is located;
identifying, in the third area, a fourth area for displaying information; and
performing character recognition on the fourth area to obtain meter information.
8. A map detection apparatus, characterized in that the apparatus comprises:
an image acquisition module, configured to obtain a first image acquired by an unmanned device during driving;
an area identification module, configured to identify, in the first image, a first area where a device is located;
a feature point obtaining module, configured to obtain pixel points characterizing features of the first area as first feature points;
an actual position determining module, configured to determine, according to the obtained pixel point position of each first feature point, the actual position corresponding to each first feature point in the actual environment in which the unmanned device drives;
a first position determining module, configured to determine a first position of the device according to the actual positions corresponding to the first feature points; and
a result obtaining module, configured to compare the first position with a second position of the device recorded in a device map to obtain a map detection result.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program; and
the processor is configured to implement the method steps of any one of claims 1 to 7 when executing the program stored in the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the method steps of any one of claims 1 to 7.
CN202110466751.2A 2021-04-28 2021-04-28 Map detection method and device Pending CN113052839A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110466751.2A CN113052839A (en) 2021-04-28 2021-04-28 Map detection method and device

Publications (1)

Publication Number Publication Date
CN113052839A true CN113052839A (en) 2021-06-29

Family

ID=76517825

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110466751.2A Pending CN113052839A (en) 2021-04-28 2021-04-28 Map detection method and device

Country Status (1)

Country Link
CN (1) CN113052839A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103640018A (en) * 2013-12-13 2014-03-19 江苏久祥汽车电器集团有限公司 SURF (speeded up robust feature) algorithm based localization method and robot
CN107121126A (en) * 2017-05-16 2017-09-01 华中科技大学 A kind of roadbed settlement monitoring method and system based on image procossing
CN107292863A (en) * 2016-04-12 2017-10-24 上海慧流云计算科技有限公司 A kind of self-charging method and device
CN107515006A (en) * 2016-06-15 2017-12-26 华为终端(东莞)有限公司 A kind of map updating method and car-mounted terminal
CN107564020A (en) * 2017-08-31 2018-01-09 北京奇艺世纪科技有限公司 A kind of image-region determines method and device
CN107728633A (en) * 2017-10-23 2018-02-23 广州极飞科技有限公司 Obtain object positional information method and device, mobile device and its control method
CN109141442A (en) * 2018-09-07 2019-01-04 高子庆 Navigation method based on UWB positioning and image feature matching and mobile terminal
CN111178250A (en) * 2019-12-27 2020-05-19 深圳市越疆科技有限公司 Object identification positioning method and device and terminal equipment
CN111623794A (en) * 2020-05-15 2020-09-04 广州小鹏车联网科技有限公司 Display control method for vehicle navigation, vehicle and readable storage medium
CN111652222A (en) * 2020-07-13 2020-09-11 深圳市智搜信息技术有限公司 License plate positioning method and device, computer equipment and storage medium
WO2020248614A1 (en) * 2019-06-10 2020-12-17 商汤集团有限公司 Map generation method, drive control method and apparatus, electronic equipment and system
CN112163578A (en) * 2020-09-25 2021-01-01 深兰人工智能芯片研究院(江苏)有限公司 Method and system for improving OCR recognition rate
CN112446918A (en) * 2019-09-04 2021-03-05 三赢科技(深圳)有限公司 Method and device for positioning target object in image, computer device and storage medium
CN112712558A (en) * 2020-12-25 2021-04-27 北京三快在线科技有限公司 Positioning method and device of unmanned equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20231027

Address after: 100876 Beijing city Haidian District Xitucheng Road No. 10

Applicant after: Beijing University of Posts and Telecommunications

Address before: 100876 Beijing city Haidian District Xitucheng Road No. 10

Applicant before: Yan Danfeng

Applicant before: Xie Fei

Applicant before: Zhang Miao

Applicant before: Wang Zixian

Applicant before: Lei Siyue

Applicant before: Zhao Yue