CN116465422A - Map information generation method and device, electronic equipment and storage medium

Map information generation method and device, electronic equipment and storage medium

Info

Publication number
CN116465422A
Authority
CN
China
Prior art keywords
image
determining
area
acquired
region
Prior art date
Legal status
Pending
Application number
CN202310424538.4A
Other languages
Chinese (zh)
Inventor
Geng Lu (耿露)
Zhang Zhijun (张志军)
Liu Min (刘敏)
Zhang Huisong (张辉松)
Zhu Dengming (朱登明)
Current Assignee
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date
Filing date: 2023-04-19
Publication date: 2023-07-21
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202310424538.4A
Publication of CN116465422A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005: Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/26: Navigation specially adapted for navigation in a road network
    • G01C21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching
    • G01C21/32: Structuring or formatting of map data
    • G01C21/34: Route searching; Route guidance
    • G01C21/3407: Route searching; Route guidance specially adapted for specific applications
    • G01C21/343: Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G01C21/38: Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804: Creation or updating of map data
    • G01C21/3807: Creation or updating of map data characterised by the type of data
    • G01C21/3811: Point data, e.g. Point of Interest [POI]
    • G01C21/3815: Road data
    • G01C21/3833: Creation or updating of map data characterised by the source of data
    • G01C21/3841: Data obtained from two or more sources, e.g. probe vehicles
    • G01C21/3848: Data obtained from both position sensors and additional sensors
    • G01C21/3885: Transmission of map data to client devices; Reception of map data by client devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a map information generation method and apparatus, an electronic device, and a storage medium. The generation method acquires a first image and a second image, wherein the first image represents an image of a first region acquired by aviation, the second image represents an image of a second region acquired at a vehicle end, and an overlapping region exists between the first region and the second region; extracts a first object from the first image and a second object from the second image; combines the first image and the second image to determine the position of the first object relative to a preset reference object, obtaining a first position, and the position of the second object relative to the same preset reference object, obtaining a second position; and generates target map information based on the first position and the second position. By combining the aerially acquired first image with the vehicle-end second image to generate the target map information, the method overcomes the viewing-angle limitation of existing schemes that generate ground-truth data from vehicle-end images alone.

Description

Map information generation method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and apparatus for generating map information, an electronic device, and a storage medium.
Background
Map resources have become an essential basic resource in many fields, especially in high-precision control, for example the high-definition (HD) map in autonomous driving. As a basic supporting condition for autonomous driving, an HD map is characterized by high accuracy, rich elements, and a high data-update frequency. Producing an HD map requires generating a truth field, where the truth field represents the real scene information the HD map describes, such as the actual state of roads, road markings, and signboards. The existing truth-field generation process relies mainly on image information collected by vehicle-end sensors, but on-board collection has limitations: the sensors are constrained by the vehicle's driving, and their viewing angle prevents them from collecting top-down (overhead) data.
Disclosure of Invention
To solve the above technical problems, the present application provides a map information generation method and apparatus, an electronic device, and a storage medium.
In a first aspect, the present application provides a method for generating map information, where the method includes:
acquiring a first image and a second image, wherein the first image represents an image of a first region acquired by aviation, the second image represents an image of a second region acquired at a vehicle end, and an overlapping region exists between the first region and the second region;
extracting a first object in the first image and a second object in the second image;
combining the first image and the second image, determining the position of the first object relative to a preset reference object to obtain a first position, and determining the position of the second object relative to the preset reference object to obtain a second position;
and generating target map information based on the first position and the second position.
Optionally, in a case where the first object and the second object are both located within the overlapping region, the first object and the second object represent the same object, and the first position is the same as the second position.
Optionally, the preset reference object is the overlapping area; and
the combining the first image and the second image, determining a position of the first object relative to a preset reference object to obtain a first position, and determining a position of the second object relative to the preset reference object to obtain a second position, includes:
when neither the first object nor the second object is located within the overlapping region, determining the position of the first object relative to the overlapping region by combining the first image with the overlapping region, to obtain the first position; and,
determining the position of the second object relative to the overlapping region by combining the second image with the overlapping region, to obtain the second position.
Optionally, the first image is acquired in the following manner:
acquiring an aerial survey image, wherein the aerial survey image represents an image of the first region acquired by aviation;
determining reference coordinates of a preset reference object in the aerial survey image;
and calibrating the coordinates of any object in the aerial survey image based on the reference coordinates to obtain a first image.
Optionally, the second image is acquired in the following manner:
acquiring a vehicle-end captured image, wherein the vehicle-end captured image represents an image of the second region acquired at the vehicle end;
determining the overlapping region between the first region and the second region;
determining the overlap coordinate information of the overlapping region in the first image;
and calibrating the coordinates of any object in the vehicle-end captured image based on the overlap coordinate information to obtain the second image.
Optionally, the extracting the first object in the first image includes:
extracting any object in the first image;
determining an object type of the arbitrary object;
and determining any object with the object type being a preset type as the first object.
Optionally, the extracting the second object in the second image includes:
extracting any object in the second image;
determining an object type of the arbitrary object;
and determining any object with the object type being a preset type as the second object.
In a second aspect, the present application provides a map information generating apparatus, the apparatus including:
the acquisition module is used for acquiring a first image and a second image, wherein the first image represents an image of a first area acquired by aviation, the second image represents an image of a second area acquired by a vehicle end, and an overlapping area exists between the first area and the second area;
the extraction module is used for extracting a first object in the first image and a second object in the second image;
the combination module is used for combining the first image and the second image, determining the position of the first object relative to a preset reference object to obtain a first position, and determining the position of the second object relative to the preset reference object to obtain a second position;
and the generation module is used for generating target map information based on the first position and the second position.
In a third aspect, an electronic device is provided, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor, configured to implement the steps of the method according to any one of the embodiments of the first aspect when executing a program stored on a memory.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of the embodiments of the first aspect.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
according to the method, the first image and the second image are obtained, wherein the first image represents an image of a first area acquired by aviation, the second image represents an image of a second area acquired by a vehicle end, and an overlapping area exists between the first area and the second area; extracting a first object in the first image and a second object in the second image; combining the first image and the second image, determining the position of the first object relative to a preset reference object to obtain a first position, and determining the position of the second object relative to the preset reference object to obtain a second position; target map information is generated based on the first location and the second location. The first image acquired by combining aviation is combined with the second image acquired by the vehicle end to generate the target map information, so that the defect that the visual angle limitation exists in the place where the true value is generated only by the vehicle end acquired image in the existing scheme is overcome.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
To more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below; it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flow chart of a map information generating method according to an embodiment of the present application;
fig. 2 is an application scenario schematic diagram of a map information generating method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a map information generating apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Further advantages and effects of the present invention will become readily apparent to those skilled in the art from the disclosure herein, by referring to the accompanying drawings and the preferred embodiments. The invention may also be practiced or applied through other, different embodiments, and the details in this specification may be modified or changed based on different viewpoints and applications without departing from the spirit and scope of the present invention. It should be understood that the preferred embodiments are presented by way of illustration only and not by way of limitation.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the present invention by way of illustration, and only the components related to the present invention are shown in the drawings and are not drawn according to the number, shape and size of the components in actual implementation, and the form, number and proportion of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complicated.
Generating map information requires multiple data sources, such as data collected by different vehicle-end acquisition devices at different times. Different data sources may differ in accuracy, coordinate information, and image definition, so generating map information from multiple sources faces a calibration problem. Moreover, in the map information generation process, and especially in HD map generation, image information is mainly collected by vehicle-end acquisition devices mounted on a vehicle. Because such a device collects images while the vehicle is driving, the collected image information is affected by the vehicle's motion: when the vehicle drives too fast, the images may be discontinuous, and when it drives too slowly, they may overlap. Further, since the vehicle-end acquisition device is mounted on the vehicle, it cannot collect overhead information, so map information generated this way lacks road data for overhead surfaces.
Fig. 1 is a flow chart of a map information generation method according to an embodiment of the present application. The method can be applied to one or more electronic devices, such as smart phones, notebook computers, desktop computers, portable computers, and servers. The execution body of the method may be hardware or software. When the execution body is hardware, it may be one or more of the above electronic devices; for example, a single electronic device may perform the method, or a plurality of electronic devices may cooperate to perform it. When the execution body is software, the method may be implemented as a plurality of pieces of software or software modules, or as a single piece of software or software module, which is not specifically limited herein.
As shown in fig. 1, the method specifically includes:
s110: acquiring a first image and a second image, wherein the first image represents an image of a first region acquired by aviation, the second image represents an image of a second region acquired by a vehicle end, and a superposition region exists between the first region and the second region.
In this embodiment, the first image is an image of the first region acquired by aviation; the aerial acquisition may use an unmanned-aerial-vehicle near-ground remote sensing technique, a helicopter remote sensing technique, or the like. The second image is an image of the second region collected at the vehicle end; the vehicle-end collection may use a camera mounted on the vehicle, a sensor device mounted on the vehicle, or the like. Generating the target map information from both the first image and the second image thus improves the accuracy of the target map information.
In one embodiment, the first image is acquired as follows:
acquiring an aerial survey image, wherein the aerial survey image represents an image of the first region acquired by aviation;
determining reference coordinates of a preset reference object in the aerial survey image;
and calibrating the coordinates of any object in the aerial survey image based on the reference coordinates to obtain a first image.
The aerial survey subject in this embodiment may be a camera device on an unmanned aerial vehicle, a helicopter, or the like, which acquires an image of the first region looking down from the air. A specific acquisition process may be: determine aerial survey route information according to the acquisition requirements, and have the aerial survey subject fly along the route represented by that information, thereby generating the aerial survey image. After the aerial survey image is acquired, the reference coordinates of a preset reference object in the image are determined. The preset reference object may be a control point, that is, a marker calibrating the horizontal position and elevation of a ground point. Control points are divided into several levels, such as national control points, image control points, engineering control points, and local control points, where national control points may include triangulation points, benchmark (leveling) points, GPS stations, satellite positioning service reference stations, and the like. The coordinate positions of control points are highly accurate, so they can serve as datum reference points for all kinds of maps. A control point may also be custom-defined, meaning that the user starts from a published control point and, using a specific bearing and distance from it, computes a new control point. That is, when the preset reference object is a control point, the reference coordinates are the control point's coordinates. Because the reference coordinates are highly accurate, calibrating the coordinates of every object in the aerial survey image against them adjusts the coordinate accuracy of all objects in the aerial survey image.
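As an illustration of this control-point calibration, the following is a minimal sketch in Python, assuming a single control point and a pure translation correction; the names, types, and one-point model are assumptions for illustration rather than details taken from this application, and a real pipeline would fit a similarity or affine transform from several control points.

    from dataclasses import dataclass

    @dataclass
    class DetectedObject:
        label: str   # object type, e.g. "guideboard"
        x: float     # easting as measured in the aerial survey image, metres
        y: float     # northing as measured in the aerial survey image, metres

    def calibrate_aerial(objects, measured_cp, reference_cp):
        # Shift every object so that the control point, as measured in the
        # aerial survey image, lands on its published reference coordinates.
        dx = reference_cp[0] - measured_cp[0]
        dy = reference_cp[1] - measured_cp[1]
        return [DetectedObject(o.label, o.x + dx, o.y + dy) for o in objects]

    # Usage: the control point was measured 1.5 m east and 0.8 m south of its
    # published position, so every object shifts by (-1.5, +0.8).
    raw = [DetectedObject("guideboard", 500123.0, 3300456.0)]
    first_image_objects = calibrate_aerial(
        raw,
        measured_cp=(500001.5, 3299999.2),
        reference_cp=(500000.0, 3300000.0),
    )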
In one embodiment, the second image is acquired as follows:
acquiring a vehicle-end captured image, wherein the vehicle-end captured image represents an image of the second region acquired at the vehicle end;
determining the overlapping region between the first region and the second region;
determining the overlap coordinate information of the overlapping region in the first image;
and calibrating the coordinates of any object in the vehicle-end captured image based on the overlap coordinate information to obtain the second image.
In this embodiment, a vehicle-end captured image is acquired, and the overlapping region between the aerially acquired first region and the vehicle-end acquired second region is determined. The overlap coordinate information of the overlapping region is then determined in the first image; this information has already been calibrated, and because the overlapping region appears in both the first image and the vehicle-end captured image, it is also the coordinate information of the overlapping region within the vehicle-end captured image. The coordinates of every object in the vehicle-end captured image are then calibrated against the overlap coordinate information, yielding the second image. This calibrates the coordinates of all objects in the vehicle-end captured image and improves both the coordinate accuracy and the matching degree between the first image and the second image.
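A minimal sketch of this second calibration step follows, reusing DetectedObject from the previous sketch. Matching landmarks in the overlapping region and averaging a single translation offset is a simplifying assumption; a real system would match many features and solve a least-squares transform.

    def calibrate_vehicle(objects, overlap_in_first, overlap_in_vehicle):
        # overlap_in_first / overlap_in_vehicle map landmark ids inside the
        # overlapping region to (x, y) coordinates: calibrated coordinates
        # from the first image, and raw coordinates from the vehicle-end
        # captured image, respectively.
        shared = overlap_in_first.keys() & overlap_in_vehicle.keys()
        dx = sum(overlap_in_first[k][0] - overlap_in_vehicle[k][0]
                 for k in shared) / len(shared)
        dy = sum(overlap_in_first[k][1] - overlap_in_vehicle[k][1]
                 for k in shared) / len(shared)
        # Apply the average offset to every object detected at the vehicle end.
        return [DetectedObject(o.label, o.x + dx, o.y + dy) for o in objects]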
S120: a first object in the first image and a second object in the second image are extracted.
In this embodiment, the first image and the second image are the image resources used to generate map information, so each may contain one or more objects, where an object is an item used for generating map information, such as a ground marking, guideboard information, or traffic light information. It should be noted that extracting the first object from the first image and the second object from the second image may include extracting the object type and object coordinates of each, where the object type is the category the object belongs to, such as ground marking or guideboard information, and the object coordinates are the coordinate information of the position where the object is located.
In an embodiment, extracting the first object in the first image may include:
extracting any object in the first image;
determining the object type of any object;
and determining any object with the object type being the preset type as a first object.
In this embodiment, because the first image is acquired by aerial photography, it contains many objects unrelated to the generation of map information, such as trees and lakes. When extracting objects from the first image, only the objects related to map information need to be extracted; this embodiment distinguishes whether an object is related to map information generation by its object type. A preset type is the type of an object related to map information generation, such as a ground marking or guideboard information; a non-preset type is the type of an object unrelated to it, such as a tree or lake. In a specific implementation, an arbitrary object in the first image (hereinafter object 1) is first extracted, the object type of object 1 is then determined, and if that type is a preset type (such as guideboard information), object 1 is determined to be the first object. The first objects related to map information generation are thereby extracted; see the marked positions in the first image of Fig. 2.
In an embodiment, extracting the second object in the second image comprises:
extracting any object in the second image;
determining the object type of any object;
and determining any object with the object type being the preset type as a second object.
In this embodiment, because the second image is collected at the vehicle end, it contains many objects unrelated to the generation of map information, such as houses, other vehicles, and pedestrians, so only map-related objects need to be extracted. As above, objects are distinguished by object type: preset types are the types related to map information generation, such as ground markings and guideboard information, and non-preset types are unrelated, such as houses, other vehicles, and pedestrians. In a specific implementation, an arbitrary object in the second image is first extracted, its object type is determined, and any object whose type is a preset type is determined to be the second object, thereby extracting the second objects related to map information generation. A minimal sketch of this type filter, applicable to both images, is shown below.
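The following sketch illustrates the type filter shared by the two extraction steps of S120. The preset type list and the detect callable are assumptions for illustration; any detector returning (type, object) pairs would do.

    # Preset types: object types relevant to map information generation.
    MAP_RELEVANT_TYPES = {"ground_marking", "guideboard", "traffic_light"}

    def extract_objects(image, detect):
        # detect(image) yields (object_type, detected_object) pairs; keep
        # only the objects whose type is one of the preset types.
        return [obj for obj_type, obj in detect(image)
                if obj_type in MAP_RELEVANT_TYPES]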
S130: combining the first image and the second image, determining the position of the first object relative to the preset reference object to obtain a first position, and determining the position of the second object relative to the preset reference object to obtain a second position.
In this embodiment, the first object and the second object are objects used for generating map information, such as road signs, including road-surface markings and guideboard information, and/or road facilities, including guardrails and traffic lights. After the first object and the second object are extracted, their positions must be calibrated and unified, because they were extracted from different images; this embodiment determines the position of the first object and the position of the second object by combining the first image and the second image. Specifically, the position of the first object relative to a preset reference object is determined to obtain the first position, and the position of the second object relative to the same preset reference object is determined to obtain the second position. The preset reference object may be the control point described in the foregoing embodiment; the first position may represent the coordinates of the first object, and the second position the coordinates of the second object. It should be noted that the first position and the second position are positions in the same coordinate system. The specific position of each object is thus determined; see the position information in Fig. 2.
In an embodiment, in case the first object and the second object are both located within the overlapping area, the first object and the second object represent the same object and the first position is the same as the second position.
In this embodiment, the first image and the second image represent the first region and the second region respectively, but the two regions share an overlapping region, so the position of an object within the overlap should be unique. That is, when the first object is located in the overlapping part of the first region, the same object also exists in the overlapping part of the second region. Therefore, when the first object and the second object are both located within the overlapping region, they represent the same object, and the first position and the second position are determined accordingly. The calibration process may take the first position as authoritative and set the second position equal to it, or may take the midpoint of the first and second positions as both, or the like. The positions of the objects in the overlapping region are thus determined; see the position information in Fig. 2.
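A minimal sketch of this reconciliation follows; the two modes mirror the options named above, and the tuple representation of a position is an assumption.

    def reconcile(first_pos, second_pos, mode="midpoint"):
        # An object inside the overlapping region appears in both images, so
        # its position must be unique: either adopt the first position, or
        # take the midpoint of the two estimates.
        if mode == "first":
            return first_pos
        return ((first_pos[0] + second_pos[0]) / 2,
                (first_pos[1] + second_pos[1]) / 2)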
In an embodiment, S130 (combining the first image and the second image, determining the position of the first object relative to the preset reference object to obtain the first position, and determining the position of the second object relative to the preset reference object to obtain the second position) may include:
when neither the first object nor the second object is located within the overlapping region, determining the position of the first object relative to the overlapping region by combining the first image with the overlapping region, to obtain the first position; and,
determining the position of the second object relative to the overlapping region by combining the second image with the overlapping region, to obtain the second position.
In this embodiment, the preset reference object is the overlapping region, and the calibration may proceed as follows. The first image is obtained by acquiring the coordinate information of a control point in the image and calibrating the coordinates of every object in it against that information; the overlap coordinate information of the overlapping region is then determined in the first image, and the coordinates of every object in the second image are calibrated against the overlap coordinate information. Per the foregoing embodiment, when the first object and the second object are both located within the overlapping region they represent the same object, and the calibration makes the first position and the second position identical. After these calibration steps, the positions of the objects inside the overlapping region agree across the two images, and what remains is to determine the positions of the objects outside it. The overlapping region is now used as the preset reference object: the position of a first object outside the overlapping region of the first image is determined relative to it, obtaining the first position, and the position of a second object outside the overlapping region of the second image is determined relative to it, obtaining the second position. When deciding whether a first or second object is outside the overlapping region, the criterion is whether the object as a whole lies outside; if so, the object is determined to be outside the overlapping region. Because every object inside the overlapping region has already been calibrated, calibrating the outside objects against the overlapping region unifies the objects inside and outside the overlap in the same coordinate system. Every object in the first image and in the second image is thus calibrated in one coordinate system, which improves object accuracy while completing the determination of the map information.
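As a sketch of using the overlapping region as the preset reference object, the following expresses an object's position as an offset from an anchor point of the overlap; anchoring at the vertex average of the overlap polygon is an assumption, and any agreed point of the overlapping region would serve.

    def position_relative_to_overlap(obj_xy, overlap_polygon):
        # Anchor the shared frame at the vertex average of the overlap
        # polygon, then express the object's position relative to it.
        cx = sum(p[0] for p in overlap_polygon) / len(overlap_polygon)
        cy = sum(p[1] for p in overlap_polygon) / len(overlap_polygon)
        return (obj_xy[0] - cx, obj_xy[1] - cy)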
S140: target map information is generated based on the first location and the second location.
In this embodiment, there may be one or more first objects and second objects; determining their positions from the first position and the second position makes it possible to generate target map information that contains only the first and second objects and their positions. The target map information may be a high-definition map.
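Tying the steps together, here is a minimal sketch of S140; the record layout is an assumption, and objects duplicated inside the overlapping region are assumed to have been reconciled already.

    def generate_target_map(first_objects, second_objects):
        # Emit map records containing only the map-relevant objects and
        # their calibrated positions, e.g. as input to an HD-map builder.
        return [{"type": o.label, "position": (o.x, o.y)}
                for o in first_objects + second_objects]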
As shown in fig. 3, an embodiment of the present application provides a map information generating apparatus, where the apparatus includes:
the acquiring module 310 is configured to acquire a first image and a second image, wherein the first image represents an image of the first region acquired by aviation, the second image represents an image of the second region acquired at the vehicle end, and an overlapping region exists between the first region and the second region;
an extracting module 320, configured to extract a first object in the first image and a second object in the second image;
a combination module 330, configured to combine the first image and the second image, determine a position of the first object relative to a preset reference object to obtain a first position, and determine a position of the second object relative to the preset reference object to obtain a second position;
the generating module 340 is configured to generate the target map information based on the first location and the second location.
In an embodiment, the combining module 330 is further configured so that, when the first object and the second object are both located within the overlapping region, the first object and the second object represent the same object and the first position is the same as the second position.
In an embodiment, the preset reference object is the overlapping region; and
the combining module 330 may include:
the first determining unit is configured to, when neither the first object nor the second object is located within the overlapping region, determine the position of the first object relative to the overlapping region by combining the first image with the overlapping region, to obtain the first position; and,
and the second determining unit is used for combining the second image and the overlapping area, determining the position of the second object relative to the overlapping area and obtaining a second position.
In one embodiment, the acquisition module 310 may include:
the first acquisition unit is used for acquiring aerial survey images, wherein the aerial survey images represent images of a first area acquired by aviation;
the third determining unit is used for determining the reference coordinates of a preset reference object in the aerial survey image;
and the first calibration unit is used for calibrating the coordinates of any object in the aerial survey image based on the reference coordinates to obtain a first image.
In one embodiment, the acquisition module 310 may include:
the second acquisition unit is configured to acquire a vehicle-end captured image, wherein the vehicle-end captured image represents an image of the second region acquired at the vehicle end;
a fourth determining unit, configured to determine the overlapping region between the first region and the second region;
a fifth determining unit, configured to determine, in the first image, the overlap coordinate information of the overlapping region;
and the second calibration unit, configured to calibrate the coordinates of any object in the vehicle-end captured image based on the overlap coordinate information, to obtain the second image.
In one embodiment, the extraction module 320 may include:
a first extraction unit configured to extract an arbitrary object in a first image;
a sixth determining unit configured to determine an object type of an arbitrary object;
a seventh determining unit, configured to determine, as the first object, an arbitrary object whose object type is a preset type.
In one embodiment, the extraction module 320 may include:
a second extraction unit configured to extract an arbitrary object in the second image;
an eighth determination unit configured to determine an object type of an arbitrary object;
and a ninth determining unit, configured to determine, as the second object, an arbitrary object whose object type is a preset type.
The implementation process of the functions and roles of each module in the above device is specifically shown in the implementation process of the corresponding steps in the above method, and will not be described herein again.
As shown in fig. 4, the present embodiment provides an electronic device comprising a processor 410, a communication interface 420, a memory 430, and a communication bus 440, wherein the processor 410, the communication interface 420, and the memory 430 communicate with each other through the communication bus 440,
a memory 430 for storing a computer program;
in one embodiment of the present application, the processor 410 is configured to implement the method provided in any one of the foregoing method embodiments when executing the program stored in the memory 430.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the map information generation method provided by any one of the method embodiments described above.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing description has been directed to specific embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for generating map information, the method comprising:
acquiring a first image and a second image, wherein the first image represents an image of a first region acquired by aviation, the second image represents an image of a second region acquired at a vehicle end, and an overlapping region exists between the first region and the second region;
extracting a first object in the first image and a second object in the second image;
combining the first image and the second image, determining the position of the first object relative to a preset reference object to obtain a first position, and determining the position of the second object relative to the preset reference object to obtain a second position;
and generating target map information based on the first position and the second position.
2. The method of claim 1, wherein the first object and the second object represent the same object and the first location is the same as the second location if both the first object and the second object are within the overlap region.
3. The method according to claim 1, wherein the preset reference object is the overlapping region; and
the combining the first image and the second image, determining a position of the first object relative to a preset reference object to obtain a first position, and determining a position of the second object relative to the preset reference object to obtain a second position, includes:
when neither the first object nor the second object is located within the overlapping region, determining the position of the first object relative to the overlapping region by combining the first image with the overlapping region, to obtain the first position; and,
determining the position of the second object relative to the overlapping region by combining the second image with the overlapping region, to obtain the second position.
4. The method of claim 1, wherein the first image is acquired by:
acquiring an aerial survey image, wherein the aerial survey image represents an image of the first region acquired by aviation;
determining reference coordinates of a preset reference object in the aerial survey image;
and calibrating the coordinates of any object in the aerial survey image based on the reference coordinates to obtain a first image.
5. The method of claim 4, wherein the second image is acquired by:
acquiring a vehicle-end captured image, wherein the vehicle-end captured image represents an image of the second region acquired at the vehicle end;
determining the overlapping region between the first region and the second region;
determining the overlap coordinate information of the overlapping region in the first image;
and calibrating the coordinates of any object in the vehicle-end captured image based on the overlap coordinate information to obtain the second image.
6. The method of any of claims 1-5, wherein the extracting the first object in the first image comprises:
extracting any object in the first image;
determining an object type of the arbitrary object;
and determining any object with the object type being a preset type as the first object.
7. The method of any of claims 1-5, wherein the extracting the second object in the second image comprises:
extracting any object in the second image;
determining an object type of the arbitrary object;
and determining any object with the object type being a preset type as the second object.
8. A map information generation apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a first image and a second image, wherein the first image represents an image of a first region acquired by aviation, the second image represents an image of a second region acquired at a vehicle end, and an overlapping region exists between the first region and the second region;
the extraction module is used for extracting a first object in the first image and a second object in the second image;
the combination module is used for combining the first image and the second image, determining the position of the first object relative to a preset reference object to obtain a first position, and determining the position of the second object relative to the preset reference object to obtain a second position;
and the generation module is used for generating target map information based on the first position and the second position.
9. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1-7 when executing a program stored on a memory.
10. A storage medium having stored thereon a computer program, which when executed by a processor performs the method of any of claims 1-7.
CN202310424538.4A 2023-04-19 2023-04-19 Map information generation method and device, electronic equipment and storage medium Pending CN116465422A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310424538.4A CN116465422A (en) 2023-04-19 2023-04-19 Map information generation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310424538.4A CN116465422A (en) 2023-04-19 2023-04-19 Map information generation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116465422A (en) 2023-07-21

Family

ID=87176654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310424538.4A Pending CN116465422A (en) 2023-04-19 2023-04-19 Map information generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116465422A (en)

Similar Documents

Publication Publication Date Title
EP3505869B1 (en) Method, apparatus, and computer readable storage medium for updating electronic map
CN108694882B (en) Method, device and equipment for labeling map
JP4232167B1 (en) Object identification device, object identification method, and object identification program
CN105512646B (en) A kind of data processing method, device and terminal
CN104280036B (en) A kind of detection of transport information and localization method, device and electronic equipment
JP4978615B2 (en) Target identification device
Brenner Extraction of features from mobile laser scanning data for future driver assistance systems
JP5388082B2 (en) Stationary object map generator
KR100884100B1 (en) System and method for detecting vegetation canopy using airborne laser surveying
JP5404861B2 (en) Stationary object map generator
US20130010074A1 (en) Measurement apparatus, measurement method, and feature identification apparatus
CN109446973B (en) Vehicle positioning method based on deep neural network image recognition
CN113034566B (en) High-precision map construction method and device, electronic equipment and storage medium
CN110906954A (en) High-precision map test evaluation method and device based on automatic driving platform
CN109815300A (en) A kind of vehicle positioning method
US10996337B2 (en) Systems and methods for constructing a high-definition map based on landmarks
CN112308913B (en) Vehicle positioning method and device based on vision and vehicle-mounted terminal
US20230334850A1 (en) Map data co-registration and localization system and method
CN112446915B (en) Picture construction method and device based on image group
CN112749584B (en) Vehicle positioning method based on image detection and vehicle-mounted terminal
CN114488094A (en) Vehicle-mounted multi-line laser radar and IMU external parameter automatic calibration method and device
CN115546551A (en) Deep learning-based geographic information extraction method and system
KR100981588B1 (en) A system for generating geographical information of city facilities based on vector transformation which uses magnitude and direction information of feature point
CN116465422A (en) Map information generation method and device, electronic equipment and storage medium
KR20220096162A (en) Apparatus for detecting road based aerial images and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination