CN112862917A - Map acquisition method and device - Google Patents

Map acquisition method and device

Info

Publication number
CN112862917A
CN112862917A (application CN201911187323.5A)
Authority
CN
China
Prior art keywords
color
original image
pixel point
difference value
color difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911187323.5A
Other languages
Chinese (zh)
Inventor
仵志强
徐颖
刘芬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Navinfo Information Technology Co ltd
Original Assignee
Xi'an Navinfo Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Navinfo Information Technology Co ltd filed Critical Xi'an Navinfo Information Technology Co ltd
Priority to CN201911187323.5A priority Critical patent/CN112862917A/en
Publication of CN112862917A publication Critical patent/CN112862917A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/20 Drawing from basic elements, e.g. lines or circles
    • G06T 11/206 Drawing of charts or graphs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 9/00 Image coding
    • G06T 9/008 Vector quantisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a map acquisition method and device. An original image of a target area is acquired; a vectorized electronic map of the target area is obtained from the original image, the vectorized electronic map comprising a linear contour and/or a planar contour of the original image; and attribute information of the vectorized electronic map is set according to the linear contour and/or the planar contour. Because the vectorized electronic map is derived from the original image of the target area, and attribute information is then set for its linear and/or planar contours, map acquisition of the target area is realized and the map acquisition efficiency of the target area is improved.

Description

Map acquisition method and device
Technical Field
The present application relates to the field of map acquisition technologies, and in particular, to a map acquisition method and apparatus.
Background
With the rapid development of information technology and surveying-and-mapping technology, two-dimensional vector map data has become an essential supporting resource for geographic information systems, intelligent transportation systems, digital cities, digital national defense construction, and the like. The geographic information described by two-dimensional vector map data is detailed and accurate, and has great economic value and strategic significance. Current map systems mainly collect information for large areas, such as national road networks and Points of Interest (POI); for small areas, however, such as the interiors of parks, shopping malls, campuses, and office buildings, collection vehicles cannot enter, and detailed collection cannot be achieved.
In the prior art, map data collection for a small area usually relies on on-site manual drawing, on paper or electronically, with drawing software such as MapInfo or AutoCAD, followed by software post-processing to obtain the map information of the small area.
However, the prior-art map data collection method is costly and inefficient.
Disclosure of Invention
The application provides a map acquisition method and device, which are used for realizing map acquisition in a target area and improving the map acquisition efficiency of the target area.
In a first aspect, an embodiment of the present application provides a map acquisition method, including:
acquiring an original image of a target area;
obtaining a vectorization electronic map of a target area according to an original image, wherein the vectorization electronic map comprises a linear outline and/or a planar outline of the original image;
and setting attribute information of the vectorized electronic map according to the linear outline and/or the planar outline.
In the embodiment of the application, the vectorized electronic map of the target area is obtained according to the original image of the target area, the vectorized electronic map comprises the linear contour and/or the planar contour of the original image, and then the attribute information of the vectorized electronic map is set for the linear contour and/or the planar contour of the original image, so that the map acquisition of the target area is realized, and the map acquisition efficiency of the target area is improved.
In one implementation, obtaining the vectorized electronic map of the target area according to the original image includes:
extracting a linear contour in the original image according to the pixel points of the original image;
and/or,
and extracting the planar contour of the original image according to the pixel points of the original image.
In the embodiment of the application, the determination of the vectorized electronic map of the target area is realized by extracting the linear contour and/or the planar contour of the original image respectively, and the accuracy of the linear contour and/or the planar contour in the original image can be improved by extracting the linear contour and/or the planar contour of the original image through the pixel points of the original image.
In one possible implementation, the extracting of the linear contour in the original image according to the pixel points of the original image includes:
S1: determining original color points of multiple colors among the pixel points of the original image, where the first color difference value between the original color points of each color and that color's standard sample color value is within a first preset range, and taking an original color point of one color as a first target pixel point;
S2: determining a plurality of first pixel points that are separated from the first target pixel point by M pixel points, where M is an integer greater than or equal to zero;
S3: determining a second color difference value corresponding to each first pixel point, the second color difference value being the color difference value between that first pixel point and the first target pixel point, and taking the first target pixel point as a second pixel point;
S4: determining, among the plurality of first pixel points, at least one first target pixel point whose second color difference value is within a second preset range, and repeating steps S2-S4 until there are at least three first target pixel points and the at least three first target pixel points are not on the same straight line;
S5: connecting the plurality of second pixel points to generate a linear sub-contour of the original image.
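As an illustration only, steps S1-S5 can be sketched in Python as follows. The patent does not fix a colour-difference metric, a search order, or any parameter values, so the Euclidean RGB metric, the dict-of-pixels image representation, and the function and parameter names (`trace_linear_subcontour`, `seed_tol`, `step_tol`, `m`) below are all assumptions:

```python
import math

def color_diff(p, q):
    """Euclidean distance between two RGB triples (an assumed metric)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def trace_linear_subcontour(image, sample_color, seed_tol=30.0, step_tol=20.0, m=1):
    """image: dict mapping (x, y) -> (r, g, b).
    S1: pick seed pixels whose difference to the sample colour is within seed_tol.
    S2-S4: repeatedly step to pixels separated by m pixels, keeping a candidate
    whose difference to the current target pixel is within step_tol.
    S5: return the ordered points of the traced sub-contour."""
    seeds = [p for p, c in image.items() if color_diff(c, sample_color) <= seed_tol]
    if not seeds:
        return []
    target = seeds[0]
    contour = [target]
    visited = {target}
    while True:
        x, y = target
        # S2: candidates separated from the target by m pixels (distance m + 1)
        # in the 8 principal directions
        candidates = [(x + dx * (m + 1), y + dy * (m + 1))
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)]
        # S3/S4: take the first unvisited candidate close enough in colour
        nxt = None
        for c in candidates:
            if c in image and c not in visited and \
               color_diff(image[c], image[target]) <= step_tol:
                nxt = c
                break
        if nxt is None:
            break
        contour.append(nxt)
        visited.add(nxt)
        target = nxt
    return contour
```

With `m = 1`, each step lands on a pixel separated by one intermediate pixel, so a solid line of matching colour is traced as every second pixel along it.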
In one possible embodiment, the extracting of the planar contour of the original image according to the pixel points of the original image includes:
R1: determining original color points of multiple colors in the original image, where the first color difference value between the original color points of each color and that color's standard sample color value is within a first preset range, and taking any one of the original color points of each color as a second target pixel point;
R2: taking the second target pixel point as a center, determining a plurality of third pixel points that are separated from the second target pixel point by N pixel points, where N is an integer greater than or equal to zero;
R3: determining a third color difference value corresponding to each third pixel point, the third color difference value being the color difference value between that third pixel point and the second target pixel point;
R4: if the third color difference value corresponding to every third pixel point is within a third preset range, letting N = N + 1 and repeating steps R2-R4; if some of the third color difference values are within the third preset range and others are not, taking the second target pixel point as a fourth pixel point and executing step R5; and if none of the third color difference values is within the third preset range, executing step R6;
R5: determining those of the plurality of third pixel points whose third color difference values are within the third preset range as second target pixel points, and repeating steps R2-R5;
R6: connecting the third pixel points, among the plurality of third pixel points, whose third color difference values are within the third preset range, to generate a planar sub-contour of the original image.
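Steps R1-R6 amount to growing a region under a colour tolerance and taking its rim as the planar sub-contour. The sketch below is an assumption-laden simplification: it uses a Euclidean RGB metric, grows the region with a breadth-first search over 4-neighbours rather than the ring-by-ring N expansion, and all names are illustrative:

```python
import math
from collections import deque

def color_diff(p, q):
    """Euclidean distance between two RGB triples (an assumed metric)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def grow_planar_subcontour(image, seed, tol=20.0):
    """image: dict mapping (x, y) -> (r, g, b); seed: starting pixel (R1).
    R2-R5: expand outward from the seed, absorbing neighbours whose colour
    difference to the current pixel is within tol.
    R6: the boundary pixels of the grown region form the planar sub-contour."""
    region = {seed}
    queue = deque([seed])
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            p = (x + dx, y + dy)
            if p in image and p not in region and \
               color_diff(image[p], image[(x, y)]) <= tol:
                region.add(p)
                queue.append(p)
    # boundary = region pixels with at least one 4-neighbour outside the region
    boundary = {p for p in region
                if any((p[0] + dx, p[1] + dy) not in region
                       for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))}
    return region, boundary
```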
Optionally, the map collecting method provided in the embodiment of the present application further includes:
and setting the pixel points in each planar sub-outline to be the same color to obtain a plurality of color areas.
Optionally, the map collecting method provided in the embodiment of the present application further includes:
the intersection line between the different color areas is determined as a linear sub-contour.
Optionally, obtaining the vectorized electronic map of the target region according to the original image includes:
performing semantic segmentation on the layer of the original image to obtain a semantic segmentation image of the original image;
and deleting the color of each segmentation area in the semantic segmentation image to obtain the linear outline of the original image.
In the embodiment of the application, the linear contour of the original image can be determined by performing semantic segmentation in the layer of the original image, so that the efficiency of determining the linear contour of the original image is improved.
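As a sketch of this idea only (the patent does not specify how the colours are "deleted"), the following assumes the semantic segmentation image is given as a grid of segment ids and keeps only segment borders, which is one way of discarding the region colours while retaining the linear outline:

```python
def segmentation_to_linear_outline(seg, width, height):
    """seg: dict mapping (x, y) -> segment id over a width x height grid.
    'Deleting the colour of each segmentation area' is modelled as replacing
    every interior pixel with background (0) and keeping only pixels on a
    segment border (1), leaving the linear outline of the image."""
    outline = {}
    for y in range(height):
        for x in range(width):
            lab = seg[(x, y)]
            on_border = any(
                seg.get((x + dx, y + dy), lab) != lab
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
            outline[(x, y)] = 1 if on_border else 0
    return outline
```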
Optionally, obtaining the vectorized electronic map of the target region according to the original image includes:
and inputting the original image into the neural network model to obtain the vectorized electronic map of the target area.
In the embodiment of the application, the original image is input into the neural network model, and the vectorized electronic map of the target area is obtained through the processing of the neural network model, so that the efficiency of determining the vectorized electronic map is improved.
Optionally, the attribute information of the vectorized electronic map includes at least one of the following attribute information of a linear contour and/or a planar contour:
identification, name, type, hierarchy, longitude and latitude, and acquisition time.
The apparatus, the electronic device, the computer-readable storage medium, and the computer program product provided in the embodiments of the present application are described below; for their contents and effects, reference may be made to the map acquisition method provided in the embodiments of the present application, which are not described again.
In a second aspect, an embodiment of the present application provides a map collecting device, including:
the acquisition module is used for acquiring an original image of a target area;
the processing module is used for obtaining a vectorization electronic map of the target area according to the original image, wherein the vectorization electronic map comprises a linear outline and/or a planar outline of the original image;
and the setting module is used for setting the attribute information of the vectorization electronic map according to the linear contour and/or the planar contour.
In one possible implementation, a processing module includes:
the first processing submodule is used for extracting a linear outline in the original image according to pixel points of the original image;
and/or,
and the second processing submodule is used for extracting the planar contour of the original image according to the pixel points of the original image.
In a possible embodiment, the linear profile comprises a plurality of linear sub-profiles, the first processing sub-module being in particular configured to:
S1: determining original color points of multiple colors among the pixel points of the original image, where the first color difference value between the original color points of each color and that color's standard sample color value is within a first preset range, and taking an original color point of one color as a first target pixel point;
S2: determining a plurality of first pixel points that are separated from the first target pixel point by M pixel points, where M is an integer greater than or equal to zero;
S3: determining a second color difference value corresponding to each first pixel point, the second color difference value being the color difference value between that first pixel point and the first target pixel point, and taking the first target pixel point as a second pixel point;
S4: determining, among the plurality of first pixel points, at least one first target pixel point whose second color difference value is within a second preset range, and repeating steps S2-S4 until there are at least three first target pixel points and the at least three first target pixel points are not on the same straight line;
S5: connecting the plurality of second pixel points to generate a linear sub-contour of the original image.
In a possible embodiment, the planar profile comprises a plurality of planar sub-profiles, the second processing sub-module being in particular configured to:
R1: determining original color points of multiple colors in the original image, where the first color difference value between the original color points of each color and that color's standard sample color value is within a first preset range, and taking any one of the original color points of each color as a second target pixel point;
R2: taking the second target pixel point as a center, determining a plurality of third pixel points that are separated from the second target pixel point by N pixel points, where N is an integer greater than or equal to zero;
R3: determining a third color difference value corresponding to each third pixel point, the third color difference value being the color difference value between that third pixel point and the second target pixel point;
R4: if the third color difference value corresponding to every third pixel point is within a third preset range, letting N = N + 1 and repeating steps R2-R4; if some of the third color difference values are within the third preset range and others are not, taking the second target pixel point as a fourth pixel point and executing step R5; and if none of the third color difference values is within the third preset range, executing step R6;
R5: determining those of the plurality of third pixel points whose third color difference values are within the third preset range as second target pixel points, and repeating steps R2-R5;
R6: connecting the third pixel points, among the plurality of third pixel points, whose third color difference values are within the third preset range, to generate a planar sub-contour of the original image.
In one possible embodiment, the setting module is further configured to:
and setting the pixel points in each planar sub-outline to be the same color to obtain a plurality of color areas.
Optionally, the map collecting device provided in the embodiment of the present application further includes:
and the determining module is used for determining that the intersection line between the different color areas is a linear sub-outline.
Optionally, the processing module includes:
the segmentation submodule is used for performing semantic segmentation on the layer of the original image to obtain a semantic segmentation image of the original image;
and the deleting submodule is used for deleting the color of each segmentation area in the semantic segmentation image to obtain the linear contour of the original image.
Optionally, the processing module is specifically configured to:
and inputting the original image into the neural network model to obtain the vectorized electronic map of the target area.
Optionally, the attribute information of the vectorized electronic map includes at least one of the following attribute information of a linear contour and/or a planar contour:
identification, name, type, hierarchy, longitude and latitude, and acquisition time.
In a third aspect, an embodiment of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method provided in the first aspect or any implementation of the first aspect.
In a fourth aspect, embodiments of the present application provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method as provided in the first aspect or the first aspect implementable manner.
In a fifth aspect, an embodiment of the present application provides a computer program product, including: executable instructions for implementing the method provided in the first aspect or any implementation of the first aspect.
According to the map acquisition method and device, an original image of the target area is obtained; a vectorized electronic map of the target area is obtained according to the original image, the vectorized electronic map comprising a linear contour and/or a planar contour of the original image; and attribute information of the vectorized electronic map is set according to the linear contour and/or the planar contour. Because the vectorized electronic map is derived from the original image of the target area, and attribute information is then set for its linear and/or planar contours, map acquisition of the target area is realized and the map acquisition efficiency of the target area is improved.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a diagram of an exemplary application scenario provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of a map acquisition method provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of a map acquisition method provided by another embodiment of the present application;
FIG. 4 is a schematic diagram of an original image provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of a map acquisition method provided by yet another embodiment of the present application;
FIG. 6 is a schematic diagram of an original image provided by another embodiment of the present application;
FIG. 7 is a schematic diagram of an original image provided by an embodiment of the present application;
FIG. 8 is a vectorized electronic map provided by an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a map acquisition apparatus provided by an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a map acquisition apparatus provided by another embodiment of the present application;
FIG. 11 is a schematic structural diagram of a map acquisition apparatus provided by yet another embodiment of the present application;
FIG. 12 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
With the rapid development of information technology and surveying-and-mapping technology, two-dimensional vector map data has become an essential supporting resource for geographic information systems, intelligent transportation systems, digital cities, digital national defense construction, and the like. In the prior art, map data for a small area is usually collected by on-site manual drawing, on paper or electronically, with drawing software such as MapInfo or AutoCAD, followed by software post-processing to obtain the map information of the small area; however, the map collection efficiency is low. In order to solve this technical problem, the embodiments of the present application provide a map acquisition method and apparatus.
An exemplary application scenario of the embodiments of the present application is described below.
The map acquisition method provided by the embodiments of the present application may be executed by the map acquisition apparatus provided by the embodiments of the present application, and the apparatus may be part or all of a terminal device. FIG. 1 is an exemplary application scenario diagram provided by an embodiment of the present application. As shown in FIG. 1, the map acquisition method may be applied to a terminal device 11, for example implemented by an application program or a web page in the terminal device, and the terminal device 11 is in data communication with the server 12; the embodiment of the present application is not limited in this respect. The specific type of the terminal device is also not limited; for example, the terminal device may be a smart phone, a personal computer, a tablet computer, a wearable device, a vehicle-mounted terminal, and the like.
FIG. 2 is a schematic flowchart of a map acquisition method provided by an embodiment of the present application. The method may be executed by a map acquisition apparatus, and the apparatus may be implemented by software and/or hardware; for example, the apparatus may be a client or a terminal device, and the terminal device may be a personal computer, a smart phone, a user terminal, a tablet computer, a wearable device, or the like. The map acquisition method is described below with the terminal device as the execution subject. As shown in FIG. 2, the map acquisition method provided by the embodiment of the present application may include:
step S101: an original image of the target area is acquired.
The target area may be a small area. For a small area that a map collection vehicle cannot enter, or cannot enter conveniently, the map acquisition method provided by the embodiments of the present application can be used to collect its map. The embodiment of the present application does not limit the target area; for example, the target area may be a small area such as the interior of a park, a shopping mall, a campus, or an office building, a residential community, a scenic spot, a business district, or a transportation hub station, and the embodiment of the present application is not limited thereto.
The original image of the target area may be a navigation map of the target area, for example, a scenic spot navigation map, a campus navigation map, a park navigation map, and the like, or may also be a design construction model map, a publicity picture, and the like of the target area. For example, the original image may be one or more images each including an entire structure of the target region, or may be a plurality of images each including a partial structure of the target region, which is not limited in this embodiment of the application.
For the acquisition of the original image of the target area, the original image of the target area may be acquired through a website, a design and display drawing published or filed by the government, an image shot, an official website of the target area, a published propaganda drawing, a newspaper, a periodical and other paper images.
Step S102: and obtaining a vectorization electronic map of the target area according to the original image, wherein the vectorization electronic map comprises a linear outline and/or a planar outline of the original image.
When the original images are processed, they may be imported into the system singly or in batches, and the system numbers each original image. The numbering scheme is not limited by the embodiment of the present application; for example, it may be: NI + date + time + 3-digit sequence number, e.g. NI20190306135943001. The format of the original image may include mainstream picture formats such as jpeg, jpg, and png. Original images acquired through a network, a government agency, and the like can be placed together in a folder of the terminal device and imported through an import path selected in the system; pictures taken directly with the terminal device can be used as they are.
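A minimal sketch of the "NI + date + time + 3-digit sequence number" numbering scheme mentioned above (the function name is illustrative, not from the patent):

```python
from datetime import datetime

def make_image_number(seq, now=None):
    """Build an image number of the form NI + YYYYMMDD + HHMMSS + 3-digit
    sequence number, e.g. NI20190306135943001."""
    now = now or datetime.now()
    return "NI{}{:03d}".format(now.strftime("%Y%m%d%H%M%S"), seq)
```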
The embodiment of the present application does not limit a specific implementation manner of obtaining the vectorized electronic map of the target region according to the original image, where the vectorized electronic map includes a linear contour and/or a planar contour of the original image.
In a possible implementation manner, obtaining a vectorized electronic map of a target region according to an original image includes: extracting a linear contour in the original image according to the pixel points of the original image; and/or extracting the planar contour of the original image according to the pixel points of the original image.
The pixel values of the object, the building, the path and the like in the target area in the original image may be different, and the pixel values of the same type of area may be the same, so that the linear contour in the original image or the planar contour of the original image may be extracted according to the pixel values of the pixels of the original image.
In the embodiment of the application, the determination of the vectorized electronic map of the target area is realized by extracting the linear contour and/or the planar contour of the original image respectively, and the accuracy of the linear contour and/or the planar contour in the original image can be improved by extracting the linear contour and/or the planar contour of the original image through the pixel points of the original image.
In another possible implementation, obtaining a vectorized electronic map of a target region according to an original image includes:
performing semantic segmentation in the layer of the original image to obtain a semantic segmentation image of the original image; and deleting the color of each segmentation area in the semantic segmentation image to obtain the linear outline of the original image.
By performing semantic segmentation on the original image, different regions in the original image can be distinguished by different colors respectively to obtain a semantic segmentation map of the original image, and optionally, the semantic segmentation map of the original image can be used as a planar contour of the original image; then, by deleting the colors in the semantic segmentation map, the linear contour of the original image can be obtained.
In the embodiment of the application, the linear contour of the original image can be determined by performing semantic segmentation in the layer of the original image, so that the efficiency of determining the linear contour of the original image is improved.
In another possible implementation, obtaining a vectorized electronic map of a target region according to an original image includes:
and inputting the original image into the neural network model to obtain the vectorized electronic map of the target area.
And establishing a neural network model, wherein the neural network model is used for processing the original image and outputting a vectorization electronic map of the original image, and the vectorization electronic map comprises a linear outline and/or a planar outline of the original image. The embodiment of the application does not limit the specific structure and training mode of the neural network model.
In the embodiment of the application, the original image is input into the neural network model, and the vectorized electronic map of the target area is obtained through the processing of the neural network model, so that the efficiency of determining the vectorized electronic map is improved.
Step S103: and setting attribute information of the vectorized electronic map according to the linear outline and/or the planar outline.
After the linear contour and/or the planar contour of the original image are determined, the sub-areas in the target area are distinguished. For example, if the target area is a park, the linear contour and/or the planar contour of the original image can distinguish constructions in the park such as gardens, routes, lakes, rivers, and squares. To facilitate distinguishing the sub-areas in the target area, attribute information needs to be set for the various constructions in the target area. The attribute information of the vectorized electronic map can be generated automatically by the system or entered manually; after it is set, it can be stored in a background database and/or presented in the vectorized electronic map, which is not limited in the embodiment of the present application. The content of the attribute information is likewise not limited in the embodiment of the present application; in a possible implementation manner, the attribute information of the vectorized electronic map includes at least one of the following attribute information of a linear profile and/or a planar profile: identification, name, type, hierarchy, longitude and latitude, and acquisition time.
The identification is a unique code for the recorded data and can be generated automatically by the system. The name records the actual name of the whole outline and can be set by manual input. The type records the acquisition type of the target area, such as a cell (residential compound), a school, or a market, and can be set manually. The hierarchy indicates whether the target area belongs to a single layer or multiple layers; for example, if the target area includes an underground parking lot, or the target area is a multi-story mall, the target area belongs to multiple layers, and the hierarchy can be set by manual input. When the target area is multi-layered, an association identifier between the layers may also be established, such as the first floor of mall A, the second floor of mall A, and so on, which is not limited in the embodiment of the present application. The longitude and latitude represent the geographical position of the target area and can be generated automatically through a Global Positioning System (GPS). The acquisition time represents the time at which the vectorized electronic map of the target area is generated and can be generated by the system or set by manual input. In addition, the attribute information of the vectorized electronic map may further include information such as the collector and collection notes, which is not limited in the embodiment of the present application. Setting the attribute information of the vectorized electronic map better meets the requirements of existing electronic maps and facilitates retrieval, use, and subsequent updating and maintenance of the vectorized electronic map.
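The attribute fields listed above can be sketched as a simple record type (a hypothetical structure for illustration; the field names, defaults, and the auto-generated identifier are assumptions, since the patent does not prescribe a storage format):

```python
from dataclasses import dataclass, field
from datetime import datetime
import uuid

@dataclass
class ContourAttributes:
    # Illustrative fields mirroring the attribute kinds listed in the text.
    identification: str = field(default_factory=lambda: uuid.uuid4().hex)  # unique code, auto-generated
    name: str = ""            # actual name of the contour, typically entered manually
    type: str = ""            # acquisition type, e.g. "cell", "school", "market"
    hierarchy: int = 1        # 1 for a single layer; >1 for multi-layer targets
    longitude: float = 0.0    # geographic position, e.g. from GPS
    latitude: float = 0.0
    acquisition_time: datetime = field(default_factory=datetime.now)

attrs = ContourAttributes(name="Mall A, floor 1", type="market", hierarchy=2,
                          longitude=108.94, latitude=34.26)
print(attrs.name, attrs.hierarchy)
```

A record like this could be stored in a background database (the patent mentions exporting to Oracle) or attached to the contour in the map layer.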
In a possible implementation manner, after the attribute information of the vectorized electronic map is set, the attribute information can be exported to an Oracle database.
In the embodiment of the application, the vectorized electronic map of the target area is obtained according to the original image of the target area, the vectorized electronic map comprises the linear contour and/or the planar contour of the original image, and then the attribute information of the vectorized electronic map is set for the linear contour and/or the planar contour of the original image, so that the map acquisition of the target area is realized, and the map acquisition efficiency of the target area is improved.
In an implementation manner, the linear contour includes a plurality of linear sub-contours, fig. 3 is a flowchart of a map acquisition method provided by another embodiment of the present application, which may be executed by a map acquisition apparatus, which may be implemented by software and/or hardware, for example: the apparatus may be a client or a terminal device, where the terminal device may be a personal computer, a smart phone, a user terminal, a tablet computer, a wearable device, and the like, and the following description describes a map acquisition method with the terminal device as an execution subject, as shown in fig. 3, in the map acquisition method provided in this embodiment of the present application, a vectorized electronic map of a target area is obtained according to an original image, and the vectorized electronic map may include:
S1: determining original color points of multiple colors in pixel points of an original image, wherein a first color difference value between the original color point of each color and a standard sample color value of the color is within a first preset range, and taking the original color point of the same color as a first target pixel point.
The original color points are used as initial points for linear contour identification, and the number of the original color points is not limited in the embodiment of the application. The selection of the original color point can be determined according to the color difference value between the pixel point in the original image and the color value of the standard sample. In one possible implementation manner, each of the standard sample color values includes a luminance value, a red-green value, a yellow-blue value, and the like of the color.
The determination of the original color point may be implemented by determining whether a first color difference value between the pixel point and the color value of the standard sample is within a first preset range, and in a possible implementation, the first color difference value may be calculated by the following formula:
ΔE = √(ΔL² + Δa² + Δb²)
wherein ΔE represents the first color difference value; L represents the luminance value, where + indicates lighter and - indicates darker; a represents the red-green value, where + indicates reddish and - indicates greenish; b represents the yellow-blue value, where + indicates yellowish and - indicates bluish; ΔL is the difference between the luminance value of the standard sample color value and the measured luminance value of the pixel point of the original image; Δa is the difference between the red-green value of the standard sample color value and the measured red-green value of the pixel point of the original image; and Δb is the difference between the yellow-blue value of the standard sample color value and the measured yellow-blue value of the pixel point of the original image. The first color difference value of each pixel point of the original image is obtained by this calculation.
After the first color difference value of a pixel point of the original image is determined, a pixel point whose first color difference value is within the first preset range is determined as an original color point. Since the standard sample colors include a plurality of colors, that is, the standard sample color values include a plurality of color values, for the determination of the original color point, the minimum first color difference value among the first color difference values calculated between the pixel point and each standard sample color value is compared with the first preset range. If the minimum first color difference value is within the first preset range, the pixel point is an original color point, and the color corresponding to the original color point is the standard sample color corresponding to the minimum first color difference value.
In addition, the size of the first preset range of the first color difference value Δ E is not limited in the embodiments of the present application, and in a possible implementation manner, the first preset range is 0 to 0.5. In addition, the color type corresponding to the original color point is not limited in the embodiment of the application. In the embodiment of the present application, the original color points with the same color are used as the first target pixel points to determine different linear sub-profiles, and the determination method of the linear sub-profile is described below by taking only the original color point with one color as an example.
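Under the formula above, the selection of original color points can be sketched as follows (a hedged illustration: the sample colors, the 0-to-0.5 threshold, and the function names are assumptions; only the ΔE formula and the minimum-difference comparison come from the text):

```python
import math

def delta_e(pixel_lab, sample_lab):
    """CIE-style colour difference between a pixel and a standard-sample
    colour, both given as (L, a, b) triples, per the ΔE formula above."""
    dL = pixel_lab[0] - sample_lab[0]
    da = pixel_lab[1] - sample_lab[1]
    db = pixel_lab[2] - sample_lab[2]
    return math.sqrt(dL * dL + da * da + db * db)

def classify_original_color_point(pixel_lab, samples, threshold=0.5):
    """Return the name of the standard sample colour with the smallest
    first colour difference if that difference falls within the first
    preset range (0 to `threshold`); otherwise return None."""
    best_name, best_de = min(
        ((name, delta_e(pixel_lab, lab)) for name, lab in samples.items()),
        key=lambda item: item[1])
    return best_name if best_de <= threshold else None

# Hypothetical standard sample colour values:
samples = {"road_white": (95.0, 0.0, 0.0), "park_green": (60.0, -40.0, 30.0)}
print(classify_original_color_point((95.2, 0.1, -0.3), samples))  # matches "road_white"
print(classify_original_color_point((50.0, 20.0, 20.0), samples))  # matches nothing: None
```

Pixels classified to the same sample colour would then be grouped as first target pixel points of that colour.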
S2: and determining a plurality of first pixel points which are separated from the first target pixel point by M pixel points, wherein M is an integer which is larger than or equal to zero.
Fig. 4 is a schematic diagram of an original image according to an embodiment of the present application. As shown in fig. 4, each square represented by a solid line represents a pixel point, where the pixel point 40 represents a first target pixel point. Taking M = 0 as an example, the plurality of first pixel points include the 8 pixel points adjacent to the pixel point 40, which are respectively the pixel point 41, the pixel point 42, the pixel point 43, the pixel point 44, the pixel point 45, the pixel point 46, the pixel point 47, and the pixel point 48. For original images with different resolutions, the value of M may be different; for example, the higher the resolution of the original image, the larger the value of M selected, and the embodiment of the present application is not limited thereto.
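The neighbourhood described in step S2 can be sketched as a square ring around the first target pixel point (a hypothetical helper, interpreting "separated by M pixel points" as Chebyshev distance M + 1, which reproduces the 8 adjacent pixels of fig. 4 for M = 0):

```python
def ring_neighbors(x, y, m, width, height):
    """Pixel points separated from (x, y) by exactly m pixel points, i.e.
    the square ring at Chebyshev distance m + 1; m = 0 gives the 8
    adjacent pixel points, m = 1 a ring of 16, and so on."""
    d = m + 1
    ring = []
    for dx in range(-d, d + 1):
        for dy in range(-d, d + 1):
            if max(abs(dx), abs(dy)) == d:      # on the ring, not inside it
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:  # clip to the image
                    ring.append((nx, ny))
    return ring

print(len(ring_neighbors(5, 5, 0, 100, 100)))  # 8 adjacent pixel points
print(len(ring_neighbors(5, 5, 1, 100, 100)))  # 16 pixel points, one apart
```

At the image border the ring is clipped, so fewer candidates are returned there.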
S3: and determining a second color difference value corresponding to each first pixel point, wherein the second color difference value is the color difference value between the first pixel point and the first target pixel point, and the first target pixel point is used as a second pixel point.
Determining the color difference value between each first pixel, i.e., the pixel 41, the pixel 42, the pixel 43, the pixel 44, the pixel 45, the pixel 46, the pixel 47, and the pixel 48, and the pixel 40, to obtain the second color difference value corresponding to each first pixel, and the calculation method of the second color difference value may refer to the calculation formula of the first color difference value, which is not repeated.
For convenience of the following description, the first target pixel 40 is taken as the second pixel.
S4: and determining at least one first target pixel point in the plurality of first pixel points, wherein the second color difference value of the first target pixel point is within a second preset range.
In the multiple first pixel points, a first pixel point of a second color difference value within a second preset range is determined as a first target pixel point, where the second preset range is not limited in the embodiment of the present application, and exemplarily, the second preset range is 0 to 0.5, and the embodiment of the present application is not limited thereto.
After the first target pixel points are determined, the number and the positions of the first target pixel points need to be judged, if the number of the first target pixel points is at least three and the at least three first target pixel points are not on the same straight line, step S5 is executed, otherwise, steps S2-S4 are repeatedly executed.
Exemplarily, if only the second color difference value of the pixel 41 among the plurality of first pixels is within the second preset range, the pixel 41 is taken as the first target pixel, and the steps S2 to S4 are repeatedly performed; if the second color difference values of the pixel point 41 and the pixel point 43 in the plurality of first pixel points are within the second preset range, taking the pixel point 41 and the pixel point 43 as first target pixel points, and repeatedly executing the steps S2-S4; if the second color difference values of the existing pixel 41, the existing pixel 43, and the existing pixel 46 in the plurality of first pixels are all within the second preset range, step S5 is executed. The above is merely an exemplary illustration, and the embodiments of the present application are not limited thereto.
S5: and connecting the plurality of second pixel points to generate a linear sub-outline of the original image.
The embodiment of the present application does not limit the specific implementation manner of connecting the plurality of second pixel points and generating the linear sub-outline of the original image, for example, the coordinate values of each pixel point may be determined, and the coordinate values of the plurality of second pixel points may be obtained.
Specifically, a coordinate is set for each pixel point; the coordinates can be generated with the pixel size of the original image as the reference and a spacing of one pixel point as one coordinate unit, i.e., the width of each pixel point represents a distance of one coordinate. The numbering of the pixel points is carried out automatically by the system, numbering from the left side of the bottom row. For example, the coordinates of the plurality of second pixel points are respectively recorded as O1(X1, Y1), O2(X2, Y2), O3(X3, Y3), and O4(X4, Y4); a new layer is then established according to the correspondence between the coordinates of the new layer and the coordinates of the original image, forming a linear sub-contour of the original image, and the linear contour of the original image is formed by obtaining the plurality of linear sub-contours in the original image.
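The coordinate assignment described above can be sketched as follows (illustrative only: the pixel size, the (column, row) convention, and the example points standing in for O1-O4 are hypothetical):

```python
def pixels_to_polyline(second_pixels, pixel_size=1.0):
    """Map pixel indices to layer coordinates (one pixel width = one
    coordinate unit, numbered from the bottom-left) and keep them in
    order, forming a linear sub-contour as a list of (x, y) vertices."""
    return [(col * pixel_size, row * pixel_size) for col, row in second_pixels]

# Hypothetical second pixel points traced in order from the image:
polyline = pixels_to_polyline([(0, 0), (1, 1), (2, 1), (3, 2)])
print(polyline)  # [(0.0, 0.0), (1.0, 1.0), (2.0, 1.0), (3.0, 2.0)]
```

The resulting vertex list is what would be drawn on the new layer as one linear sub-contour.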
In an implementation manner, the planar contour includes a plurality of planar sub-contours, fig. 5 is a flowchart of a map acquisition method provided by another embodiment of the present application, which may be executed by a map acquisition apparatus, which may be implemented by software and/or hardware, for example: the apparatus may be a client or a terminal device, where the terminal device may be a personal computer, a smart phone, a user terminal, a tablet computer, a wearable device, and the like, and the following describes the map acquisition method with the terminal device as an execution subject, as shown in fig. 5, step S102 in the map acquisition method provided in the embodiment of the present application, that is, obtaining the vectorized electronic map of the target area according to the original image, may include:
R1: determining original color points of a plurality of colors in the original image, wherein a first color difference value between the original color point of each color and the standard sample color value of the color is within a first preset range, and taking any one of the original color points of each color as a second target pixel point.
The content of this step is similar to that in step S1, and reference may be made to the description of step S1 for details, which are not repeated herein.
R2: and determining a plurality of third pixel points which are separated from the second target pixel point by N pixel points by taking the second target pixel point as a center, wherein N is an integer which is larger than or equal to zero.
Fig. 6 is a schematic diagram of an original image according to another embodiment of the present application. As shown in fig. 6, each square represented by a solid line represents a pixel point, where the pixel point 50 represents a second target pixel point. Taking N = 0 as an example, the plurality of third pixel points include the 8 pixel points adjacent to the pixel point 50, namely the pixel point 51, the pixel point 52, the pixel point 53, the pixel point 54, the pixel point 55, the pixel point 56, the pixel point 57, and the pixel point 58. For original images with different resolutions, the value of N may be different; for example, the higher the resolution of the original image, the larger the value of N selected, and the embodiment of the present application is not limited thereto.
R3: and determining a third color difference value corresponding to each third pixel point, wherein the third color difference value is the color difference value between the third pixel point and the second target pixel point.
The color difference value between each third pixel point, i.e., the pixel point 51, the pixel point 52, the pixel point 53, the pixel point 54, the pixel point 55, the pixel point 56, the pixel point 57, and the pixel point 58, and the pixel point 50 is determined to obtain the third color difference value corresponding to each third pixel point; the calculation of the third color difference value may refer to the calculation formula of the first color difference value, which is not repeated here.
R4: if the third color difference value corresponding to each third pixel point is within the third preset range, making N equal to N +1, and repeatedly executing steps R2-R4.
And judging the respective third color difference value corresponding to each third pixel point, if the respective third color difference value corresponding to each third pixel point is within a third preset range, making N equal to N +1, and repeatedly executing the steps R2-R4.
Illustratively, if the third color difference values of the pixel point 51 through the pixel point 58 are all within the third preset range, let N = N + 1; taking N = 0 in R2 as an example, N = 0 + 1 = 1, and step R2 is repeated with N = 1 to determine a new plurality of third pixel points. As shown in fig. 6, for N = 1 the plurality of third pixel points are the pixel points 601-616, which are not enumerated here one by one.
If some of the third color difference values are within the third preset range and others are not, step R5 is executed. If none of the third color difference values is within the third preset range, step R6 is executed.
R5: and taking the second target pixel point as a fourth pixel point, and determining a pixel point of the plurality of third pixel points, wherein the pixel point of the third color difference value in a third preset range is taken as the second target pixel point.
The second target pixel point is taken as a fourth pixel point, a pixel point of the plurality of third pixel points whose third color difference value is within the third preset range is determined as the new second target pixel point, and steps R2-R5 are repeatedly executed. That is, the second target pixel point 50 is taken as a fourth pixel point, a pixel point among the third pixel points 51-58 whose third color difference value is within the third preset range is determined as the new second target pixel point, and steps R2-R5 are repeatedly executed.
Illustratively, if the third color difference value of the third pixel point 51 is within the third preset range and the third color difference value of the third pixel point 58 is not within the third preset range, it is determined that the third pixel point 51 is the second target pixel point, and the steps R2 to R5 are repeatedly performed. The embodiment of the present application is not described in detail.
R6: and connecting third pixel points of which the third color difference values are within a third preset range in the plurality of third pixel points to generate a planar sub-outline of the original image.
If none of the third color difference values is within the third preset range, the third pixel points, among the pluralities of third pixel points determined so far, whose third color difference values are within the third preset range are connected to generate the planar sub-outline of the original image. By performing this process for the original color points of the various colors, the planar sub-outline corresponding to each second target pixel point is obtained, and the planar contour is generated.
The embodiment of the present application does not limit the manner of generating the planar sub-profile of the original image, for example, the coordinate values of each pixel point are determined, and coordinate values of a plurality of second pixel points are obtained, for example, the coordinates of the plurality of second pixel points are respectively recorded as P1(X1, Y1), P2(X2, Y2), P3(X3, Y3), and P4(X4, Y4), then a new layer is established according to the corresponding relationship between the coordinates of the new layer and the coordinates of the original image, the planar sub-profile of the original image is formed, and the planar profile of the original image is formed by obtaining a plurality of planar sub-profiles in the original image.
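The region growing of steps R2-R6 can be sketched as a flood fill driven by the colour difference threshold (a simplified sketch under stated assumptions: 8-connectivity, a √(ΔL² + Δa² + Δb²) colour difference, and a third preset range of 0 to 0.5; the patent's iteration over expanding rings is collapsed into a queue):

```python
import math
from collections import deque

def delta_e(p, q):
    # Colour difference between two (L, a, b) triples, per the ΔE formula above.
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def planar_sub_contour(image_lab, seed, threshold=0.5):
    """Grow a planar sub-contour from a seed pixel (the second target pixel
    point): an 8-neighbour joins the region while its colour difference
    from the pixel that reached it stays within the third preset range."""
    height, width = len(image_lab), len(image_lab[0])
    region = {seed}
    queue = deque([seed])
    while queue:
        x, y = queue.popleft()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if ((dx, dy) != (0, 0) and 0 <= nx < width and 0 <= ny < height
                        and (nx, ny) not in region
                        and delta_e(image_lab[ny][nx], image_lab[y][x]) <= threshold):
                    region.add((nx, ny))
                    queue.append((nx, ny))
    return region

# Toy 3x4 image: the left two columns are near one colour, the right two near another.
img = [[(60.0, -40.0, 30.0), (60.1, -40.0, 30.0), (95.0, 0.0, 0.0), (95.0, 0.0, 0.0)],
       [(60.0, -40.1, 30.0), (60.0, -40.0, 30.1), (95.0, 0.0, 0.0), (95.0, 0.0, 0.0)],
       [(60.2, -40.0, 30.0), (60.0, -40.0, 30.0), (95.1, 0.0, 0.0), (95.0, 0.0, 0.0)]]
print(sorted(planar_sub_contour(img, (0, 0))))  # the six left-block pixels
```

The returned pixel set is one colour region; its boundary pixels form the planar sub-contour that is then drawn on the new layer.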
In order to identify the planar sub-contours more clearly, the map collecting method provided by the embodiment of the present application further includes:
and setting the pixel points in each planar sub-outline to be the same color to obtain a plurality of color areas.
For example, the pixel points in each planar sub-outline are set to be gray or white, etc., which is not limited in this embodiment of the present application.
After determining each planar sub-contour and obtaining a plurality of color regions, optionally, the map collecting method provided in the embodiment of the present application further includes:
the intersection line between the different color areas is determined as a linear sub-contour.
The embodiment of the application realizes that the linear sub-outline is obtained through the planar sub-outline, and improves the efficiency of determining the linear sub-outline.
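Determining the intersection line between colour areas can be sketched as follows (a hypothetical helper: colour areas are represented as sets of pixel coordinates, and the intersection line is taken as the pixels of one area that are 8-adjacent to the other):

```python
def intersection_line(region_a, region_b):
    """Pixels of region_a that are 8-adjacent to a pixel of region_b:
    the intersection line between two colour areas, usable as a linear
    sub-contour."""
    offsets = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               if (dx, dy) != (0, 0)]
    return {(x, y) for (x, y) in region_a
            if any((x + dx, y + dy) in region_b for dx, dy in offsets)}

area1 = {(0, 0), (1, 0), (0, 1), (1, 1)}   # left colour area
area2 = {(2, 0), (3, 0), (2, 1), (3, 1)}   # right colour area
print(sorted(intersection_line(area1, area2)))  # [(1, 0), (1, 1)]
```

Taken from the other side (`intersection_line(area2, area1)`), the boundary column of area2 is returned instead; either set of pixels traces the same shared edge.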
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 7 is a schematic diagram of an original image provided in an embodiment of the present application, and fig. 8 is a vectorized electronic map provided in an embodiment of the present application. As shown in fig. 7, the original image includes a navigation map of a target area, and the navigation map includes areas A, B, C, D, E, and F in the target area; after processing, as shown in fig. 8, the vectorized electronic map includes the outlines and positions of areas A, B, C, D, E, and F in the target area.
Fig. 9 is a schematic structural diagram of a map collecting device according to an embodiment of the present application, where the map collecting device may be implemented by software and/or hardware, for example: the device can be a client or a terminal device, and the terminal device can be a personal computer, a smart phone, a user terminal, a tablet computer, a wearable device, and the like, as shown in fig. 9, the map collecting device provided by the embodiment of the present application can include:
an acquiring module 71, configured to acquire an original image of the target area.
And the processing module 72 is configured to obtain a vectorized electronic map of the target area according to the original image, where the vectorized electronic map includes a linear contour and/or a planar contour of the original image.
Optionally, the processing module 72 is specifically configured to: and inputting the original image into the neural network model to obtain the vectorized electronic map of the target area.
And the setting module 73 is used for setting the attribute information of the vectorization electronic map according to the linear contour and/or the planar contour.
Optionally, the attribute information of the vectorized electronic map includes at least one of the following attribute information of a linear contour and/or a planar contour:
identification, name, type, hierarchy, longitude and latitude, and acquisition time.
In a possible implementation manner, fig. 10 is a schematic structural diagram of a map collecting apparatus provided in another embodiment of the present application, and the apparatus may be implemented by software and/or hardware, for example: the device can be a client or a terminal device, and the terminal device can be a personal computer, a smart phone, a user terminal, a tablet computer, a wearable device, and the like, as shown in fig. 10, the map collecting device provided in the embodiment of the present application, and the processing module 72 includes:
the first processing submodule 721 is configured to extract a linear contour in the original image according to a pixel point of the original image;
and/or,
the second processing submodule 722 is configured to extract a planar contour of the original image according to the pixel points of the original image.
In a possible implementation, the linear profile comprises a plurality of linear sub-profiles, the first processing sub-module 721 being specifically configured to:
S1: determining original color points of multiple colors in pixel points of an original image, wherein a first color difference value between the original color point of each color and a standard sample color value of the color is within a first preset range, and taking the original color point of the same color as a first target pixel point;
S2: determining a plurality of first pixel points which are separated from a first target pixel point by M pixel points, wherein M is an integer which is larger than or equal to zero;
S3: determining a second color difference value corresponding to each first pixel point, wherein the second color difference value is the color difference value between the first pixel point and the first target pixel point, and the first target pixel point is used as a second pixel point;
S4: determining at least one first target pixel point from the plurality of first pixel points, wherein the second color difference value of the first target pixel point is within a second preset range, and repeatedly executing the steps S2-S4 until the number of the first target pixel points is at least three, and the at least three first target pixel points are not on the same straight line;
S5: and connecting the plurality of second pixel points to generate a linear sub-outline of the original image.
In a possible embodiment, the planar profile comprises a plurality of planar sub-profiles, the second processing sub-module 722 being particularly configured to:
R1: determining original color points of multiple colors in an original image, wherein a first color difference value between the original color point of each color and the color value of a standard sample of the color is within a first preset range, and taking any one of the original color points of each color as a second target pixel point;
R2: determining a plurality of third pixel points which are separated from the second target pixel point by N pixel points by taking the second target pixel point as a center, wherein N is an integer which is larger than or equal to zero;
R3: determining a third color difference value corresponding to each third pixel point, wherein the third color difference value is a color difference value between the third pixel point and the second target pixel point;
R4: if the third color difference value corresponding to each third pixel point is within a third preset range, making N equal to N + 1 and repeatedly executing steps R2-R4; if some of the third color difference values are within the third preset range and others are not, taking the second target pixel point as a fourth pixel point and executing step R5; and if none of the third color difference values is within the third preset range, executing step R6;
R5: determining a pixel point of the plurality of third pixel points, of which the third color difference value is within the third preset range, as the second target pixel point, and repeatedly executing the steps R2-R5;
R6: and connecting third pixel points of which the third color difference values are within the third preset range in the plurality of third pixel points to generate a planar sub-outline of the original image.
In a possible embodiment, the setting module 73 is further configured to:
and setting the pixel points in each planar sub-outline to be the same color to obtain a plurality of color areas.
Optionally, as shown in fig. 10, the map collecting device provided in the embodiment of the present application further includes:
a determining module 74 for determining the intersection line between the different color areas as a linear sub-contour.
Fig. 11 is a schematic structural diagram of a map collecting device according to another embodiment of the present application, where the map collecting device may be implemented by software and/or hardware, for example: the device can be a client or a terminal device, and the terminal device can be a personal computer, a smart phone, a user terminal, a tablet computer, a wearable device, and the like, as shown in fig. 11, the map collecting device provided by the embodiment of the present application, and the processing module 72 includes:
the segmentation submodule 81 is configured to perform semantic segmentation on the layer of the original image to obtain a semantic segmentation map of the original image;
the deleting submodule 82 is further configured to delete the color of each segmented area in the semantic segmentation map, so as to obtain a linear contour of the original image.
The device embodiments provided in the present application are merely schematic, and the module division in fig. 9-11 is only one logical function division, and there may be other division ways in actual implementation. For example, multiple modules may be combined or may be integrated into another system. The coupling of the various modules to each other may be through interfaces that are typically electrical communication interfaces, but mechanical or other forms of interfaces are not excluded. Thus, modules described as separate components may or may not be physically separate, may be located in one place, or may be distributed in different locations on the same or different devices.
Fig. 12 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, and as shown in fig. 12, the electronic device includes:
a processor 91, a memory 92, a transceiver 93, and a computer program; wherein the transceiver 93 enables data transmission with other devices, a computer program is stored in the memory 92 and configured to be executed by the processor 91, the computer program comprising instructions for performing the above-mentioned map acquisition method, the contents and effects of which refer to the method embodiments.
In addition, embodiments of the present application further provide a computer-readable storage medium, in which computer-executable instructions are stored, and when at least one processor of the user equipment executes the computer-executable instructions, the user equipment performs the above-mentioned various possible methods.
Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in user equipment. Of course, the processor and the storage medium may reside as discrete components in a communication device.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware instructed by a program. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments above. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (18)

1. A map acquisition method, comprising:
acquiring an original image of a target area;
obtaining a vectorized electronic map of the target area according to the original image, wherein the vectorized electronic map comprises a linear contour and/or a planar contour of the original image;
and setting attribute information of the vectorized electronic map according to the linear contour and/or the planar contour.
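The three steps of claim 1 form a simple acquire-vectorize-attribute pipeline. The sketch below is a non-authoritative illustration of that flow; all type and function names (`VectorMap`, `acquire_map`, and the callback parameters) are hypothetical and do not appear in the patent:

```python
from dataclasses import dataclass, field

@dataclass
class VectorMap:
    """Hypothetical container for a vectorized electronic map."""
    line_contours: list = field(default_factory=list)   # each: list of (x, y) points
    area_contours: list = field(default_factory=list)   # each: closed list of (x, y) points
    attributes: dict = field(default_factory=dict)      # e.g. name, type, level, lat/lon

def acquire_map(original_image, vectorize, set_attributes):
    """Claim 1 as a pipeline: image -> vectorized map -> attributed map.
    `vectorize` and `set_attributes` are caller-supplied strategies,
    e.g. the pixel-based extraction of claims 2-6 or the neural network
    of claim 8."""
    vmap = vectorize(original_image)        # extract linear/planar contours
    vmap.attributes = set_attributes(vmap)  # set attribute info from the contours
    return vmap
```

Keeping the two stages as injected callables mirrors the claim structure, where claims 2-8 offer several interchangeable ways to obtain the vectorized map.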
2. The method according to claim 1, wherein the obtaining the vectorized electronic map of the target area according to the original image comprises:
extracting the linear contour in the original image according to the pixel points of the original image;
and/or,
and extracting the planar contour of the original image according to the pixel points of the original image.
3. The method of claim 2, wherein the linear contour comprises a plurality of linear sub-contours, and the extracting the linear contour in the original image according to the pixel points of the original image comprises:
s1: determining original color points of multiple colors in pixel points of the original image, wherein a first color difference value between the original color point of each color and a standard sample color value of the color is within a first preset range, and taking the original color point of the same color as a first target pixel point;
s2: determining a plurality of first pixel points which are separated from the first target pixel point by M pixel points, wherein M is an integer greater than or equal to zero;
s3: determining a second color difference value corresponding to each first pixel point, wherein the second color difference value is the color difference value between the first pixel point and a first target pixel point, and the first target pixel point is used as a second pixel point;
s4: determining at least one first target pixel point from the plurality of first pixel points, wherein the second color difference value of the first target pixel point is within a second preset range, and repeatedly executing the steps S2-S4 until the number of the first target pixel points is at least three, and the at least three first target pixel points are not on the same straight line;
s5: and connecting the plurality of second pixel points to generate a linear sub-contour of the original image.
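Steps S1-S5 can be read as seed detection followed by line tracing. The sketch below is one possible interpretation only: it assumes Euclidean RGB distance as the "color difference value" (the claim does not fix a metric), and the names `find_seed_points` and `trace_line` are hypothetical:

```python
import numpy as np

def find_seed_points(img, sample_color, tol):
    """S1: pixels whose color difference from the standard sample color
    is within the first preset range (here: Euclidean RGB distance <= tol)."""
    diff = np.linalg.norm(img.astype(float) - np.asarray(sample_color, float), axis=-1)
    return np.argwhere(diff <= tol)

def trace_line(img, seed, tol, m=0, max_steps=10000):
    """S2-S5: from a first target pixel point, inspect pixels separated by
    M pixels, keep one whose color difference from the current point is
    within the second preset range, and chain the kept points into a
    linear sub-contour (an ordered list of (row, col))."""
    h, w, _ = img.shape
    cur = tuple(seed)
    path = [cur]
    visited = {cur}
    step = m + 1                        # "separated by M pixels", M >= 0
    for _ in range(max_steps):
        candidates = []
        for dr in (-step, 0, step):
            for dc in (-step, 0, step):
                if dr == dc == 0:
                    continue
                r, c = cur[0] + dr, cur[1] + dc
                if 0 <= r < h and 0 <= c < w and (r, c) not in visited:
                    d = np.linalg.norm(img[r, c].astype(float) - img[cur].astype(float))
                    if d <= tol:
                        candidates.append((r, c))
        if not candidates:
            break                       # no continuation: the sub-contour ends
        cur = candidates[0]
        visited.add(cur)
        path.append(cur)
    return path
```

With M = 0 the candidates are the eight immediate neighbours; a larger M would let the trace skip small gaps, e.g. in a dashed road line.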
4. The method according to claim 2 or 3, wherein the planar contour comprises a plurality of planar sub-contours, and the extracting the planar contour of the original image from the pixel points of the original image comprises:
r1: determining original color points of a plurality of colors in the original image, wherein a first color difference value between the original color point of each color and the color value of the standard sample of the color is within a first preset range, and any one of the original color points of each color is taken as a second target pixel point;
r2: determining, with the second target pixel point as a center, a plurality of third pixel points which are separated from the second target pixel point by N pixel points, wherein N is an integer greater than or equal to zero;
r3: determining a third color difference value corresponding to each third pixel point, wherein the third color difference value is a color difference value between the third pixel point and the second target pixel point;
r4: if the third color difference value corresponding to each third pixel point is within a third preset range, setting N to N +1 and repeatedly executing steps R2-R4; if the third color difference values of some of the third pixel points are within the third preset range and those of the remaining third pixel points are not, taking the second target pixel point as a fourth pixel point and executing step R5; and if none of the third color difference values is within the third preset range, executing step R6;
r5: determining a pixel point with a third color difference value within a third preset range in the plurality of third pixel points as a second target pixel point, and repeatedly executing the steps R2-R5;
r6: and connecting third pixel points of which the third color difference values are within the third preset range among the plurality of third pixel points to generate a planar sub-contour of the original image.
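Steps R1-R6 describe growing a region of similar colour outward from a seed and then keeping its border. The following is a simplified flood-fill sketch of that idea, not the claimed procedure itself: Euclidean RGB distance is again assumed as the colour difference, and `grow_region`/`region_boundary` are hypothetical names:

```python
import numpy as np
from collections import deque

def grow_region(img, seed, tol):
    """R1-R5, simplified to a flood fill: starting from the second target
    pixel point, absorb 4-connected neighbours whose colour difference
    from the current pixel is within the third preset range."""
    h, w, _ = img.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    queue = deque([tuple(seed)])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                d = np.linalg.norm(img[nr, nc].astype(float) - img[r, c].astype(float))
                if d <= tol:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask

def region_boundary(mask):
    """R6: region pixels that touch a pixel outside the region form the
    planar sub-contour."""
    h, w = mask.shape
    padded = np.zeros((h + 2, w + 2), bool)
    padded[1:-1, 1:-1] = mask
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior             # boundary = region minus interior
```

The claimed ring-by-ring expansion (N, N+1, ...) reaches the same region; a queue-based fill is simply the more idiomatic way to write it.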
5. The method of claim 4, further comprising:
and setting the pixel points in each planar sub-contour to the same color to obtain a plurality of color areas.
6. The method of claim 5, further comprising:
and determining an intersection line between different color areas as a linear sub-contour.
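Claims 5 and 6 together flatten each planar sub-contour to a uniform colour and then recover linear sub-contours as the lines where colour areas meet. A hedged sketch over a label map (one integer label per colour area; the function names are hypothetical):

```python
import numpy as np

def flatten_colors(img, labels):
    """Claim 5: set every pixel inside a planar sub-contour to the same
    colour (here: the mean colour of its colour area)."""
    out = np.empty_like(img)
    for lab in np.unique(labels):
        region = labels == lab
        out[region] = img[region].mean(axis=0).astype(img.dtype)
    return out

def intersection_lines(labels):
    """Claim 6: a pixel lies on a linear sub-contour where its colour
    area differs from that of its right or lower neighbour."""
    edge = np.zeros(labels.shape, bool)
    edge[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    edge[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return edge
```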
7. The method according to claim 1, wherein the obtaining the vectorized electronic map of the target area according to the original image comprises:
performing semantic segmentation on the layer of the original image to obtain a semantic segmentation map of the original image;
and deleting the color of each segmented region in the semantic segmentation map to obtain the linear contour of the original image.
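One reading of claim 7: after semantic segmentation, the colour of every segmented region is discarded and only the borders between regions remain, which is exactly the linear contour. A sketch under that reading (the segmentation itself is taken as given; the function name is hypothetical):

```python
import numpy as np

def contours_from_segmentation(seg_labels):
    """Claim 7, one interpretation: discard the colour of every segmented
    region and keep only the borders between regions, giving a binary
    image of the linear contour (border pixels black, the rest white)."""
    h, w = seg_labels.shape
    out = np.full((h, w), 255, np.uint8)            # colours deleted -> white
    diff_right = seg_labels[:, :-1] != seg_labels[:, 1:]
    diff_down = seg_labels[:-1, :] != seg_labels[1:, :]
    out[:, :-1][diff_right] = 0                     # region borders -> black
    out[:-1, :][diff_down] = 0
    return out
```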
8. The method according to claim 1, wherein the obtaining the vectorized electronic map of the target area according to the original image comprises:
and inputting the original image into a neural network model to obtain the vectorized electronic map of the target area.
9. The method according to any one of claims 1 to 3, wherein the attribute information of the vectorized electronic map includes at least one of the following attribute information of the linear contour and/or the planar contour:
identification, name, type, hierarchy, longitude and latitude, and acquisition time.
10. A map acquisition apparatus, comprising:
the acquisition module is used for acquiring an original image of a target area;
the processing module is used for obtaining a vectorized electronic map of the target area according to the original image, wherein the vectorized electronic map comprises a linear contour and/or a planar contour of the original image;
and the setting module is used for setting the attribute information of the vectorized electronic map according to the linear contour and/or the planar contour.
11. The apparatus of claim 10, wherein the processing module comprises:
the first processing submodule is used for extracting the linear contour in the original image according to the pixel points of the original image;
and/or,
and the second processing submodule is used for extracting the planar contour of the original image according to the pixel points of the original image.
12. The apparatus according to claim 11, wherein the linear contour comprises a plurality of linear sub-contours, and the first processing submodule is specifically configured to:
s1: determining original color points of multiple colors in pixel points of the original image, wherein a first color difference value between the original color point of each color and a standard sample color value of the color is within a first preset range, and taking the original color point of the same color as a first target pixel point;
s2: determining a plurality of first pixel points which are separated from the first target pixel point by M pixel points, wherein M is an integer greater than or equal to zero;
s3: determining a second color difference value corresponding to each first pixel point, wherein the second color difference value is the color difference value between the first pixel point and a first target pixel point, and the first target pixel point is used as a second pixel point;
s4: determining at least one first target pixel point from the plurality of first pixel points, wherein the second color difference value of the first target pixel point is within a second preset range, and repeatedly executing the steps S2-S4 until the number of the first target pixel points is at least three, and the at least three first target pixel points are not on the same straight line;
s5: and connecting the plurality of second pixel points to generate a linear sub-contour of the original image.
13. The apparatus according to claim 11 or 12, wherein the planar contour comprises a plurality of planar sub-contours, and the second processing submodule is specifically configured to:
r1: determining original color points of a plurality of colors in the original image, wherein a first color difference value between the original color point of each color and the color value of the standard sample of the color is within a first preset range, and any one of the original color points of each color is taken as a second target pixel point;
r2: determining, with the second target pixel point as a center, a plurality of third pixel points which are separated from the second target pixel point by N pixel points, wherein N is an integer greater than or equal to zero;
r3: determining a third color difference value corresponding to each third pixel point, wherein the third color difference value is a color difference value between the third pixel point and the second target pixel point;
r4: if the third color difference value corresponding to each third pixel point is within a third preset range, setting N to N +1 and repeatedly executing steps R2-R4; if the third color difference values of some of the third pixel points are within the third preset range and those of the remaining third pixel points are not, taking the second target pixel point as a fourth pixel point and executing step R5; and if none of the third color difference values is within the third preset range, executing step R6;
r5: determining a pixel point with a third color difference value within a third preset range in the plurality of third pixel points as a second target pixel point, and repeatedly executing the steps R2-R5;
r6: and connecting third pixel points of which the third color difference values are within the third preset range among the plurality of third pixel points to generate a planar sub-contour of the original image.
14. The apparatus of claim 13, wherein the setting module is further configured to:
set the pixel points in each planar sub-contour to the same color to obtain a plurality of color areas.
15. The apparatus of claim 14, further comprising:
and the determining module is used for determining an intersection line between different color areas as a linear sub-contour.
16. The apparatus of claim 10, wherein the processing module comprises:
the segmentation submodule is used for performing semantic segmentation on the layer of the original image to obtain a semantic segmentation map of the original image;
and the deleting submodule is used for deleting the color of each segmented region in the semantic segmentation map to obtain the linear contour of the original image.
17. The apparatus of claim 10, wherein the processing module is specifically configured to:
and inputting the original image into a neural network model to obtain the vectorized electronic map of the target area.
18. The apparatus according to any one of claims 10-12, wherein the attribute information of the vectorized electronic map includes at least one of the following attribute information of the linear contour and/or the planar contour:
identification, name, type, hierarchy, longitude and latitude, and acquisition time.
CN201911187323.5A 2019-11-28 2019-11-28 Map acquisition method and device Pending CN112862917A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911187323.5A CN112862917A (en) 2019-11-28 2019-11-28 Map acquisition method and device


Publications (1)

Publication Number Publication Date
CN112862917A 2021-05-28

Family

ID=75985889


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130136351A1 (en) * 2011-11-30 2013-05-30 Canon Kabushiki Kaisha Information processing apparatus having wireless communication function and method of controlling the apparatus
CN106251338A (en) * 2016-07-20 2016-12-21 Beijing Megvii Technology Co., Ltd. Target integrity detection method and device
CN107330979A (en) * 2017-06-30 2017-11-07 Zhongshan Institute, University of Electronic Science and Technology of China Vector diagram generation method and device for building house type and terminal
WO2018082332A1 (en) * 2016-11-07 2018-05-11 Shenzhen Kuang-Chi Hezhong Technology Co., Ltd. Image processing method and device, and robot
CN109461211A (en) * 2018-11-12 2019-03-12 Nanjing Institute of Advanced Artificial Intelligence Co., Ltd. Visual point cloud based semantic vector map construction method and device, and electronic device
KR20190053355A (en) * 2017-11-10 2019-05-20 Industry-Academic Cooperation Foundation, Yonsei University Method and Apparatus for Recognizing Road Symbols and Lanes


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HU Wei; TAO Weidong; YUAN Zhenyu; WANG Jiechen: "A scanned-map vectorization method based on the Voronoi diagram", Geomatics and Information Science of Wuhan University, no. 04 *
ZHAO Junjuan; YIN Jingyuan; SHAN Xinjian: "Building contour vectorization technology based on high-resolution satellite images", Journal of Disaster Prevention and Mitigation Engineering, no. 02 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination