WO2018133727A1 - Method and apparatus for generating an orthophoto map - Google Patents

Method and apparatus for generating an orthophoto map Download PDF

Info

Publication number
WO2018133727A1
WO2018133727A1 PCT/CN2018/072318 CN2018072318W
Authority
WO
WIPO (PCT)
Prior art keywords
grid
road
image
picture
distance value
Prior art date
Application number
PCT/CN2018/072318
Other languages
English (en)
French (fr)
Inventor
薛晓亮
王涛
Original Assignee
高德软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 高德软件有限公司
Publication of WO2018133727A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3807 Creation or updating of map data characterised by the type of data
    • G01C21/3815 Road data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3863 Structures of map data
    • G01C21/3867 Geometry of map features, e.g. shape points, polygons or for simplified maps
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3863 Structures of map data
    • G01C21/387 Organisation of map data, e.g. version management or database structures
    • G01C21/3881 Tile-based structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation

Definitions

  • The present application relates to the field of map data processing technologies, and in particular to a method and apparatus for generating an orthophoto map.
  • An orthophoto map is a plan view with orthographic projection properties. Because it carries geometric position information and is visually intuitive, it is used to generate orthophoto-based maps, which are widely used in navigation products.
  • In the prior art, the road surface height is set to a single fixed height value during orthographic projection.
  • This fixed height value is used as an input parameter of the projection processing, and an orthophoto map of the road surface is finally obtained.
  • In reality, however, the road surface often undulates and its height is not constant. The prior art does not fully consider this variation when it sets the road surface height to a single fixed value, so the orthophoto map obtained from the fixed road surface height value is not very accurate.
  • The present application therefore provides a method and apparatus for generating an orthophoto map, so as to eliminate the inaccuracy of the generated orthophoto map caused by an inaccurate height value.
  • The present application provides a method for generating an orthophoto map, the method comprising:
  • selecting sampling track points from the track points generated while the collecting vehicle travels on the road;
  • rendering the pixel corresponding to each grid in the orthophoto map to be rendered in the color corresponding to that grid.
  • The application also provides an apparatus for generating an orthophoto map, the apparatus comprising:
  • a sampling track point selecting unit, configured to select sampling track points, according to a preset sampling rule, from the track points generated while the collecting vehicle travels on the road;
  • a range determining unit, configured to determine, according to the coordinates of a sampling track point, the driving direction angle of the collecting vehicle and the preset side lengths of the drawing area, the road area covered by the orthophoto map to be rendered,
  • the road area containing a number of grids, each grid corresponding to one pixel of the orthophoto map to be rendered;
  • a plane coordinate determining unit, configured to obtain the plane coordinates of each grid according to the coordinates of the sampling track point and the driving direction angle of the collecting vehicle;
  • a three-dimensional coordinate determining unit, configured to obtain, according to the plane coordinates of each grid, the height value of each grid from the point cloud data of the road generated while the collecting vehicle travels on the road, the height value and the plane coordinates of a grid constituting the three-dimensional coordinates of the grid;
  • a color determining unit, configured to obtain the color corresponding to each grid according to the three-dimensional coordinates of each grid, the picture data of the road and preset camera parameters, the camera parameters being the parameters of the camera mounted on the collecting vehicle that captures the road picture data;
  • a rendering unit, configured to render the pixel corresponding to each grid in the orthophoto map to be rendered in the color corresponding to that grid.
  • In the present application, sampling track points are selected from the track points generated while the collecting vehicle travels on the road. For one sampling track point, the road area covered by the orthophoto map to be rendered is determined according to the coordinates of that sampling track point, the driving direction angle of the collecting vehicle and the preset side lengths of the drawing area; the road area contains a number of grids, and the plane coordinates of each grid are calculated.
  • According to the plane coordinates of each grid, the height value of each grid is obtained from the point cloud data of the road generated while the collecting vehicle travels on the road, which yields the three-dimensional coordinates of the grid. The color corresponding to each grid is then obtained according to the three-dimensional coordinates of each grid, the picture data of the road and the preset camera parameters, and the pixel corresponding to each grid in the orthophoto map to be rendered is rendered in the color corresponding to that grid.
  • The point cloud data contains the true height values of the road surface sampling points, so the height value obtained for each grid from the point cloud data is the most realistic height value of the image area corresponding to that grid. Generating the orthophoto map from three-dimensional coordinates that include these true height values therefore fully takes the undulation of the road surface into account and eliminates the inaccuracy of the generated orthophoto map caused by an inaccurate height value parameter.
  • FIG. 1 is a flowchart of a method for generating an orthophoto map according to an embodiment of the present application;
  • FIG. 2 is a diagram illustrating how the road area covered by the orthophoto map to be rendered is determined according to the present application;
  • FIG. 3 is a flowchart of a method for determining a target road picture according to another embodiment of the present application;
  • FIG. 4 is a diagram illustrating the camera shooting distances according to the present application;
  • FIG. 5 is a block diagram of an apparatus for generating an orthophoto map according to an embodiment of the present application.
  • An embodiment of the present application provides a method for generating an orthophoto map; the specific flow is shown in FIG. 1.
  • Step S100: according to a preset sampling rule, select sampling track points from the track points generated while the collecting vehicle travels on the road.
  • The collecting vehicle collects road picture data while driving along the road and generates track points in the process; the position information corresponding to a track point can be obtained from the positioning devices carried by the collecting vehicle, such as a GPS positioning device or an inertial navigation device.
  • Optionally, the sampling track points may be selected from the track points generated while the collecting vehicle travels on the road according to the sampling rule that the distance between two adjacent sampling track points equals a preset distance threshold.
  • Specifically, the preset distance threshold may equal the length value among the side lengths of the preset drawing area, the side of the drawing area parallel to the driving direction being the long side. For example, if the length value is 20 meters, the distance between two adjacent selected sampling track points is 20 meters.
  • Step S110: for one sampling track point, determine, according to the coordinates of the sampling track point, the driving direction angle of the collecting vehicle and the preset side lengths of the drawing area, the road area covered by the orthophoto map to be rendered; the road area contains at least one grid, and each grid corresponds to one pixel of the orthophoto map to be rendered.
  • The coordinates of the sampling track point may be latitude and longitude coordinates, or plane coordinates converted from latitude and longitude.
  • The driving direction of the collecting vehicle is its driving direction when the sampling track point was generated, and the driving direction angle is the angle between the driving direction and due north; it can be obtained from the inertial navigation device of the collecting vehicle.
  • The length value among the preset side lengths of the drawing area can be determined from the difference between the camera's closest and farthest shooting distances, as shown in FIG. 4. For example, if the closest shooting distance is 2 meters in front of the camera and the farthest shooting distance is 22 meters in front of the camera, the difference is 20 meters, so the length value can be set to 20 meters.
  • The width value can be set according to the width of the road, the criterion being that the drawing area covers the full width of the road.
  • Taking preset drawing area side lengths of 20 m by 20 m as an example, and assuming the vehicle is driving in the middle lane, the road area covered by the orthophoto map to be rendered is, relative to the position of the current sampling track point, the road area from 2 meters to 22 meters ahead in the driving direction and 10 meters to the left and 10 meters to the right. As shown in FIG. 2, the distance between two adjacent sampling track points, sampling track point 1 and sampling track point 2, is 20 meters.
  • The roads covered by drawing area 1 and drawing area 2 corresponding to the two sampling track points are the road areas covered by the orthophoto maps to be rendered for sampling track point 1 and sampling track point 2, respectively.
  • The road area contains at least one grid. Specifically, the number of grids may be determined by a preset resolution; for example, if the preset resolution is 1000*1000, i.e. the generated orthophoto map is to have a resolution of 1000*1000, the drawing area contains 1000*1000 grids.
  • Step S120: obtain the plane coordinates of each grid according to the coordinates of the sampling track point and the driving direction angle of the collecting vehicle.
  • Specifically, the plane coordinates may be calculated with the following formulas:
  • x2 = x1*cos(yaw) + y1*sin(yaw) + X0;
  • y2 = -x1*sin(yaw) + y1*cos(yaw) + Y0;
  • where x2 and y2 are the plane coordinates of the grid, X0 and Y0 are the coordinates of the sampling track point, x1 and y1 are the grid coordinates of the grid in the grid coordinate system whose origin is the sampling track point, and yaw is the driving direction angle. The plane coordinates here are coordinates in a Cartesian coordinate system.
  • Step S130: according to the plane coordinates of each grid, obtain the height value of each grid from the point cloud data of the road generated while the collecting vehicle travels on the road; the height value and the plane coordinates of a grid constitute the three-dimensional coordinates of the grid.
  • The point cloud data is a large set of point data reflecting the three-dimensional coordinates of sampling points on the road surface along the driving track of the collecting vehicle.
  • The three-dimensional coordinates of the road surface sampling points can be collected with a three-dimensional laser radar carried by the collecting vehicle while it drives.
  • The process of obtaining the height value of each grid from the point cloud data of the road, according to the plane coordinates of each grid, includes:
  • for each grid, searching the point cloud data of the road generated while the collecting vehicle travels on the road for a target point according to the plane coordinates of that grid, the distance between the target point and the grid being smaller than the distance between any other point in the point cloud data and the grid; and
  • determining the height value of the target point as the height value of that grid.
  • In other words, the distance from each point in the point cloud data to the grid is calculated, the minimum distance is found among the calculated distances, and the height value in the coordinates of the point with the minimum distance is used as the height value of the grid.
  • By this shortest-distance search, the point in the point cloud closest to each grid position is found, which ensures the accuracy of the grid height values.
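  • The shortest-distance lookup does not have to be a brute-force scan; the following sketch uses a k-d tree, assuming scipy and numpy are available and that the grids and the point cloud are given as arrays (the text only requires that the nearest cloud point be found):

    import numpy as np
    from scipy.spatial import cKDTree

    def grid_heights(grid_xy, cloud_xyz):
        """Height of each grid = height of the horizontally nearest point-cloud point.

        grid_xy   : (N, 2) array of grid plane coordinates
        cloud_xyz : (M, 3) array of point-cloud points (x, y, z)
        """
        tree = cKDTree(cloud_xyz[:, :2])      # index the cloud by its (x, y) coordinates
        _, nearest = tree.query(grid_xy)      # index of the nearest cloud point per grid
        return cloud_xyz[nearest, 2]          # its z value becomes the grid height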
  • Step S140: obtain the color corresponding to each grid according to the three-dimensional coordinates of each grid, the picture data of the road and preset camera parameters; the camera parameters are the parameters of the camera mounted on the collecting vehicle that captures the picture data.
  • Specifically, for one grid, a target road picture is first determined from the picture data of the road; the target road picture is a picture that captures the image of the area corresponding to that grid.
  • The road picture data collected while the collecting vehicle is driving reflects the actual condition of the road surface along the driving track, and every picture is tagged with the position coordinates at which it was taken.
  • The picture data of the road can be taken directly by the camera of the collecting vehicle, for example at a rate of 10 shots per second. Alternatively, the actual road can be recorded on video, and picture data carrying position coordinates is obtained by processing the recorded video together with the trajectory of the collecting vehicle, such as the inertial navigation trajectory or the GPS trajectory. Since a video file consists of successive frames of images, picture data with shooting positions can be obtained by extracting frames from the video file and matching each frame to its corresponding INS or GPS position; a sketch of this matching is given below.
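  • A hedged sketch of that frame extraction and position matching follows; the use of OpenCV, the trajectory tuple layout and the nearest-timestamp matching are assumptions for illustration only:

    import bisect
    import cv2

    def frames_with_positions(video_path, trajectory, step=10):
        """Extract every `step`-th frame and tag it with the closest trajectory fix.

        trajectory : list of (timestamp_seconds, x, y) tuples sorted by time.
        """
        times = [t for t, _, _ in trajectory]
        capture = cv2.VideoCapture(video_path)
        tagged, index = [], 0
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            if index % step == 0:
                t = capture.get(cv2.CAP_PROP_POS_MSEC) / 1000.0
                i = bisect.bisect_left(times, t)
                # pick whichever neighbouring fix is closer in time
                if i == len(times) or (i > 0 and t - times[i - 1] <= times[i] - t):
                    i -= 1
                tagged.append((frame, trajectory[i][1:]))   # (image, (x, y))
            index += 1
        capture.release()
        return tagged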
  • Then, the pixel of the target road picture corresponding to the grid is determined from the three-dimensional coordinates of the grid and the preset camera parameters. This involves coordinate conversions: since the three-dimensional coordinates of the grid are given in the world coordinate system, the conversion consists of transforming the grid's world coordinates into three-dimensional coordinates in the camera coordinate system, and then transforming those camera coordinates into pixel coordinates in the image coordinate system, specifically:
  • Assume P denotes the three-dimensional coordinates of the grid in the world coordinate system and T denotes the translation vector, equal to the shot target origin minus the camera origin; the rotation matrix is R = Rx(yaw) * Ry(pitch) * Rz(roll), where the direction angle yaw, the pitch angle pitch and the roll angle roll are the rotation angles of the camera coordinate system about the x, y and z axes. The coordinates of P in the camera coordinate system are Pc(xc, yc, zc) = R(P - T), and the pixel coordinates in the image coordinate system are P0(x0, y0) with x0 = fx*(xc/zc) + cx and y0 = fy*(yc/zc) + cy, where cx and cy are the optical axis offsets and fx and fy are the physical focal lengths.
  • The optical axis offsets, the physical focal lengths and the rotation angles of the camera coordinate system about the x, y and z axes belong to the preset camera parameters.
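  • A hedged Python sketch of this world-to-camera-to-pixel conversion follows; the sign conventions of the elementary rotation matrices are not spelled out in the text and are an assumption here, as are the function names:

    import numpy as np

    def rotation_matrix(yaw, pitch, roll):
        """R = Rx(yaw) * Ry(pitch) * Rz(roll), angles in radians."""
        ca, sa = np.cos(yaw), np.sin(yaw)
        cb, sb = np.cos(pitch), np.sin(pitch)
        cg, sg = np.cos(roll), np.sin(roll)
        rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
        ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
        rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
        return rx @ ry @ rz

    def project_to_pixel(p_world, t, r, fx, fy, cx, cy):
        """Pc = R (P - T), then x0 = fx*xc/zc + cx and y0 = fy*yc/zc + cy."""
        pc = r @ (np.asarray(p_world, float) - np.asarray(t, float))
        xc, yc, zc = pc
        return fx * (xc / zc) + cx, fy * (yc / zc) + cy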
  • Finally, the color of that pixel is taken as the color corresponding to the grid.
  • Step S150: render the pixel corresponding to each grid in the orthophoto map to be rendered in the color corresponding to that grid.
  • Each grid point is colored with the color corresponding to its grid; once the covered stretch of road has been colored, the orthophoto map is obtained.
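  • As a small illustration, assembling the rendered orthophoto image for one drawing area could look like the following sketch; the dictionary-based bookkeeping of grid colors and the black background for uncolored pixels are assumptions made for brevity:

    import numpy as np

    def render_orthophoto(grid_colors, resolution=1000):
        """Write each grid's color into the pixel it corresponds to.

        grid_colors : dict mapping (row, col) grid indices to (R, G, B) tuples.
        """
        image = np.zeros((resolution, resolution, 3), dtype=np.uint8)
        for (row, col), rgb in grid_colors.items():
            image[row, col] = rgb
        return image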
  • In the technical solution of this embodiment, sampling track points are selected from the track points generated while the collecting vehicle travels on the road. For one sampling track point, the road area covered by the orthophoto map to be rendered is determined according to the coordinates of that sampling track point, the driving direction angle of the collecting vehicle and the preset side lengths of the drawing area; the road area contains a number of grids, and the plane coordinates of each grid are calculated. According to the plane coordinates of each grid, the height value of each grid is obtained from the point cloud data of the road generated while the collecting vehicle travels on the road, which yields the three-dimensional coordinates of the grid. The color corresponding to each grid is then obtained from the three-dimensional coordinates of each grid, the picture data of the road and the preset camera parameters, and the pixel corresponding to each grid in the orthophoto map to be rendered is rendered in the color corresponding to that grid.
  • In this generation process the point cloud data contains the true height values of the road surface sampling points, so the height value obtained for each grid from the point cloud data is the true height value of the image area corresponding to that grid; generating the orthophoto map from three-dimensional coordinates that include these true height values fully takes the undulation of the road surface into account and eliminates the inaccuracy of the generated orthophoto map caused by an inaccurate height value parameter.
  • In another embodiment of the present application, the process of determining, for one grid, a target road picture from the picture data of the road, the target road picture capturing the image of the area corresponding to that grid, is shown in FIG. 3 and specifically includes:
  • Step S300: for one grid, calculate the distance between the shooting position of each picture in the road picture data and the plane coordinates of the grid.
  • The camera can only capture the road surface lying between the closest shooting distance of 2 meters and the farthest shooting distance of 22 meters, so the target road picture is determined from the distance between the shooting position of a picture and the plane coordinates of the grid.
  • For example, if the shooting position coordinates are (x1, y1) and the plane coordinates of a grid are (X1, Y1), the distance between the shooting position and the grid plane coordinates is calculated as sqrt((x1 - X1)^2 + (y1 - Y1)^2).
  • Step S310: take a road picture whose distance lies between the camera's closest shooting distance and farthest shooting distance as the target road picture that captures the image of the area corresponding to the grid.
  • One or more of the calculated distances may lie between the closest shooting distance of 2 meters and the farthest shooting distance of 22 meters, i.e. there may be one road picture, or more than one, whose distance lies between the camera's closest and farthest shooting distances.
  • If there is exactly one such road picture, that road picture is taken as the target road picture capturing the image of the area corresponding to the grid.
  • If there is more than one such road picture, then for each road picture the absolute difference between the distance corresponding to that road picture and the camera's closest shooting distance is calculated, the distance corresponding to a road picture being the distance from the shooting position of the picture to the plane coordinates of the grid.
  • The road picture with the smallest absolute difference is selected as the target road picture capturing the image of the area corresponding to the grid.
  • In the embodiment above, when there are several road pictures whose distances lie between the camera's closest and farthest shooting distances, the road picture taken when its shooting position was closest to the grid position is used as the basis for coloring.
  • This is because the closer the camera is to an object, the sharper the captured picture of that object and the more realistic the color obtained.
  • Choosing the road picture shot closest to the grid position as the coloring basis therefore makes the color of the grid point closest to the actual color of the image point, so the generated orthophoto map is more accurate.
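  • The picture-selection rule of this embodiment could be sketched as follows; the tuple layout of the picture records and the function name are assumptions, and the 2 m and 22 m limits are simply the example values used in the text:

    import math

    def pick_target_picture(grid_xy, pictures, near=2.0, far=22.0):
        """Return the id of the road picture that images the grid, or None.

        pictures : iterable of (picture_id, shot_x, shot_y) records.
        """
        gx, gy = grid_xy
        candidates = []
        for pic_id, sx, sy in pictures:
            d = math.hypot(sx - gx, sy - gy)
            if near <= d <= far:                       # the picture can see the grid
                candidates.append((abs(d - near), pic_id))
        if not candidates:
            return None
        # among the candidates, prefer the one shot closest to the grid
        return min(candidates, key=lambda c: c[0])[1]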
  • An embodiment of the present application further provides an apparatus for generating an orthophoto map. As shown in FIG. 5, the apparatus includes:
  • a sampling track point selecting unit 500, configured to select sampling track points, according to a preset sampling rule, from the track points generated while the collecting vehicle travels on the road;
  • a range determining unit 510, configured to determine, for one sampling track point, according to the coordinates of the sampling track point, the driving direction angle of the collecting vehicle and the preset side lengths of the drawing area, the road area covered by the orthophoto map to be rendered, the road area containing at least one grid, each grid corresponding to one pixel of the orthophoto map to be rendered;
  • a plane coordinate determining unit 520, configured to obtain the plane coordinates of each grid according to the coordinates of the sampling track point and the driving direction angle of the collecting vehicle;
  • a three-dimensional coordinate determining unit 530, configured to obtain, according to the plane coordinates of each grid, the height value of each grid from the point cloud data of the road generated while the collecting vehicle travels on the road, the height value and the plane coordinates of a grid constituting the three-dimensional coordinates of the grid;
  • a color determining unit 540, configured to obtain the color corresponding to each grid according to the three-dimensional coordinates of each grid, the picture data of the road and preset camera parameters, the camera parameters being the parameters of the camera mounted on the collecting vehicle that captures the road picture data;
  • a rendering unit 550, configured to render the pixel corresponding to each grid in the orthophoto map to be rendered in the color corresponding to that grid.
  • The process by which the sampling track point selecting unit 500 selects sampling track points, according to a preset sampling rule, from the track points generated while the collecting vehicle travels on the road specifically includes: selecting the sampling track points from the track points generated while the collecting vehicle travels on the road according to the sampling rule that the distance between two adjacent sampling track points equals a preset distance threshold.
  • The process by which the three-dimensional coordinate determining unit 530 obtains, according to the plane coordinates of each grid, the height value of each grid from the point cloud data of the road generated while the collecting vehicle travels on the road specifically includes:
  • for each grid, searching the point cloud data of the road generated while the collecting vehicle travels on the road for a target point according to the plane coordinates of that grid, the distance between the target point and the grid being smaller than the distance between any other point in the point cloud data and the grid; and
  • determining the height value of the target point as the height value of that grid.
  • The process by which the color determining unit 540 obtains the color corresponding to each grid according to the three-dimensional coordinates of each grid, the picture data of the road and the preset camera parameters specifically includes: for one grid, determining a target road picture from the picture data of the road, the target road picture capturing the image of the area corresponding to that grid; determining, from the three-dimensional coordinates of the grid and the preset camera parameters, the pixel of the target road picture corresponding to the grid; and taking the color of that pixel as the color corresponding to the grid.
  • The process by which the color determining unit 540 determines, for one grid, a target road picture from the picture data of the road, the target road picture capturing the image of the area corresponding to that grid, includes: for one grid, calculating the distance between the shooting position of each picture in the road picture data and the plane coordinates of the grid; and taking a road picture whose distance lies between the camera's closest shooting distance and farthest shooting distance as the target road picture capturing the image of the area corresponding to the grid.
  • The process by which the color determining unit 540 takes a road picture whose distance lies between the camera's closest shooting distance and farthest shooting distance as the target road picture capturing the image of the area corresponding to the grid specifically includes: if there is exactly one road picture whose distance lies between the camera's closest and farthest shooting distances, taking that road picture as the target road picture capturing the image of the area corresponding to the grid; if there is more than one such road picture, calculating, for each road picture, the absolute difference between the distance corresponding to that road picture and the camera's closest shooting distance, the distance corresponding to a road picture being the distance from the shooting position of the road picture to the plane coordinates of the grid; and selecting the road picture with the smallest absolute difference as the target road picture capturing the image of the area corresponding to the grid.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Instructional Devices (AREA)

Abstract

The present application discloses a method for generating an orthophoto map: according to a preset sampling rule, sampling track points are selected from the track points generated while a collecting vehicle travels on a road; for one sampling track point, the road area covered by the orthophoto map to be rendered is determined according to the coordinates of that sampling track point, the driving direction angle of the collecting vehicle and the side lengths of the drawing area, the road area containing at least one grid, each grid corresponding to one pixel of the orthophoto map to be rendered; according to the plane coordinates of each grid, the height value of each grid is obtained from the point cloud data of the road generated while the collecting vehicle drives, giving the three-dimensional coordinates of the grid; the color corresponding to each grid is obtained from the three-dimensional coordinates of each grid, the picture data of the road and the camera parameters; and the pixel corresponding to each grid in the orthophoto map to be rendered is rendered in the color corresponding to that grid. The solution fully takes the height variation of the road surface into account and generates a more accurate orthophoto map.

Description

Method and apparatus for generating an orthophoto map
This application claims priority to Chinese patent application No. 201710048480.2, filed on January 20, 2017 and entitled "Method and apparatus for generating an orthophoto map", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of map data processing technologies, and more particularly to a method and apparatus for generating an orthophoto map.
Background
An orthophoto map is a plan view with orthographic projection properties. Because it carries geometric position information and is visually intuitive, it is used to generate orthophoto-based maps, which are widely used in navigation products.
Generating an orthophoto-based map requires generating an orthophoto map of the road surface. When the prior art generates the orthophoto map of the road surface, it sets the road surface height to a single fixed height value, uses this fixed height value as an input parameter of the orthographic projection processing, and finally obtains the orthophoto map of the road surface.
In reality the road surface often undulates and its height is not constant. The prior art therefore does not fully consider the variation of the road surface height when it sets the road surface height to a single fixed value, which makes the orthophoto map obtained from the fixed road surface height value insufficiently accurate.
Summary
In view of this, the present application provides a method and apparatus for generating an orthophoto map, so as to eliminate the problem that the generated orthophoto map is inaccurate because the height value is inaccurate.
To achieve the above object, the present application provides a method for generating an orthophoto map, the method comprising:
selecting, according to a preset sampling rule, sampling track points from the track points generated while a collecting vehicle travels on a road;
for one sampling track point, determining, according to the coordinates of the sampling track point, the driving direction angle of the collecting vehicle and preset side lengths of a drawing area, the road area covered by the orthophoto map to be rendered, the road area containing at least one grid, each grid corresponding to one pixel of the orthophoto map to be rendered;
obtaining the plane coordinates of each grid according to the coordinates of the sampling track point and the driving direction angle of the collecting vehicle;
obtaining, according to the plane coordinates of each grid, the height value of each grid from the point cloud data of the road generated while the collecting vehicle travels on the road, the height value and the plane coordinates of a grid constituting the three-dimensional coordinates of the grid;
obtaining the color corresponding to each grid according to the three-dimensional coordinates of each grid, picture data of the road and preset camera parameters, the camera parameters being the parameters of the camera mounted on the collecting vehicle that captures the road picture data; and
rendering the pixel corresponding to each grid in the orthophoto map to be rendered in the color corresponding to that grid.
The present application further provides an apparatus for generating an orthophoto map, the apparatus comprising:
a sampling track point selecting unit, configured to select sampling track points, according to a preset sampling rule, from the track points generated while the collecting vehicle travels on the road;
a range determining unit, configured to determine, for one sampling track point, according to the coordinates of the sampling track point, the driving direction angle of the collecting vehicle and the preset side lengths of the drawing area, the road area covered by the orthophoto map to be rendered, the road area containing a number of grids, each grid corresponding to one pixel of the orthophoto map to be rendered;
a plane coordinate determining unit, configured to obtain the plane coordinates of each grid according to the coordinates of the sampling track point and the driving direction angle of the collecting vehicle;
a three-dimensional coordinate determining unit, configured to obtain, according to the plane coordinates of each grid, the height value of each grid from the point cloud data of the road generated while the collecting vehicle travels on the road, the height value and the plane coordinates of a grid constituting the three-dimensional coordinates of the grid;
a color determining unit, configured to obtain the color corresponding to each grid according to the three-dimensional coordinates of each grid, the picture data of the road and preset camera parameters, the camera parameters being the parameters of the camera mounted on the collecting vehicle that captures the road picture data;
a rendering unit, configured to render the pixel corresponding to each grid in the orthophoto map to be rendered in the color corresponding to that grid.
As can be seen from the above technical solution, the present application selects sampling track points from the track points generated while the collecting vehicle travels on the road. For one sampling track point, the road area covered by the orthophoto map to be rendered is determined according to the coordinates of that sampling track point, the driving direction angle of the collecting vehicle and the preset side lengths of the drawing area; the road area contains a number of grids, and the plane coordinates of each grid are calculated. According to the plane coordinates of each grid, the height value of each grid is obtained from the point cloud data of the road generated while the collecting vehicle travels on the road, which yields the three-dimensional coordinates of the grid. The color corresponding to each grid is then obtained according to the three-dimensional coordinates of each grid, the picture data of the road and the preset camera parameters, and the pixel corresponding to each grid in the orthophoto map to be rendered is rendered in the color corresponding to that grid. In this generation process the point cloud data contains the true height values of the road surface sampling points, so the height value obtained for each grid from the point cloud data is the most realistic height value of the image area corresponding to that grid; generating the orthophoto map from three-dimensional coordinates that include these true height values therefore fully takes the undulation of the road surface into account and eliminates the problem that the generated orthophoto map is inaccurate because the height value parameter is inaccurate.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for generating an orthophoto map disclosed in an embodiment of the present application;
FIG. 2 is a diagram illustrating how the road area covered by the orthophoto map to be rendered is determined according to the present application;
FIG. 3 is a flowchart of a method for determining a target road picture disclosed in another embodiment of the present application;
FIG. 4 is a diagram illustrating the camera shooting distances according to the present application;
FIG. 5 is a block diagram of an apparatus for generating an orthophoto map disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
An embodiment of the present application provides a method for generating an orthophoto map; the specific flow is shown in FIG. 1.
Step S100: according to a preset sampling rule, select sampling track points from the track points generated while the collecting vehicle travels on the road.
The collecting vehicle collects road picture data while driving along the road and generates track points in the process. The position information corresponding to a track point can be obtained from the positioning devices carried by the collecting vehicle, such as a GPS positioning device or an inertial navigation positioning device.
Preferably, the sampling track points may be selected from the track points generated while the collecting vehicle travels on the road according to the sampling rule that the distance between two adjacent sampling track points equals a preset distance threshold.
Specifically, the preset distance threshold may equal the length value among the side lengths of the preset drawing area, the side of the drawing area parallel to the driving direction being the long side. For example, if the length value is 20 meters, the distance between two adjacent selected sampling track points is 20 meters.
Step S110: for one sampling track point, determine, according to the coordinates of the sampling track point, the driving direction angle of the collecting vehicle and the preset side lengths of the drawing area, the road area covered by the orthophoto map to be rendered; the road area contains at least one grid, and each grid corresponds to one pixel of the orthophoto map to be rendered.
The coordinates of the sampling track point may be latitude and longitude coordinates or plane coordinates converted from latitude and longitude. The driving direction of the collecting vehicle is its driving direction when the sampling track point was generated, and the driving direction angle is the angle between the driving direction and due north; it can be obtained from the inertial navigation device of the vehicle. The length value among the preset side lengths of the drawing area can be determined from the difference between the camera's closest and farthest shooting distances, as shown in FIG. 4; for example, if the closest shooting distance is 2 meters in front of the camera and the farthest shooting distance is 22 meters in front of the camera, the difference is 20 meters, so the length value can be set to 20 meters. The width value can be set according to the width of the road, the criterion being that the drawing area covers the full width of the road.
Taking preset drawing area side lengths of 20 m by 20 m as an example, and assuming the vehicle is currently driving in the middle lane, the road area covered by the orthophoto map to be rendered is, relative to the position of the current sampling track point, the road area from 2 meters to 22 meters ahead in the driving direction and 10 meters to the left and 10 meters to the right. As shown in FIG. 2, the distance between two adjacent sampling track points, sampling track point 1 and sampling track point 2, is 20 meters, and the roads covered by drawing area 1 and drawing area 2 corresponding to the two sampling track points are the road areas covered by the orthophoto maps to be rendered for sampling track point 1 and sampling track point 2, respectively.
The road area contains at least one grid. Specifically, the number of grids may be determined by a preset resolution; for example, if the preset resolution is 1000*1000, i.e. the generated orthophoto map is to have a resolution of 1000*1000, the drawing area is made to contain 1000*1000 grids.
Step S120: obtain the plane coordinates of each grid according to the coordinates of the sampling track point and the driving direction angle of the collecting vehicle.
Specifically, the calculation may be performed with the following grid plane coordinate formulas:
x2 = x1*cos(yaw) + y1*sin(yaw) + X0;
y2 = -x1*sin(yaw) + y1*cos(yaw) + Y0;
where x2 and y2 are the plane coordinates of the grid, X0 and Y0 are the coordinates of the sampling track point, x1 and y1 are the grid coordinates of the grid in the grid coordinate system whose origin is the sampling track point, and yaw is the driving direction angle. The plane coordinates here are coordinates in a Cartesian coordinate system.
Step S130: according to the plane coordinates of each grid, obtain the height value of each grid from the point cloud data of the road generated while the collecting vehicle travels on the road; the height value and the plane coordinates of a grid constitute the three-dimensional coordinates of the grid.
The point cloud data is a large set of point data reflecting the three-dimensional coordinates of sampling points on the road surface along the driving track of the collecting vehicle. The three-dimensional coordinates of the road surface sampling points can be collected with a three-dimensional laser radar carried by the collecting vehicle while it drives.
Preferably, the process of obtaining the height value of each grid, according to its plane coordinates, from the point cloud data of the road generated while the collecting vehicle travels on the road specifically includes:
for each grid, searching the point cloud data of the road generated while the collecting vehicle travels on the road for a target point according to the plane coordinates of that grid, the distance between the target point and the grid being smaller than the distance between any other point in the point cloud data and the grid; and
determining the height value of the target point as the height value of that grid.
In the above process, the distance from each point in the point cloud data to the grid is calculated, the minimum distance is found among the calculated distances, and the height value in the coordinates of the point with the minimum distance is used as the height value of the grid. By this shortest-distance search, the point in the point cloud closest to each grid position is found, which ensures the accuracy of the grid height values.
Step S140: obtain the color corresponding to each grid according to the three-dimensional coordinates of each grid, the picture data of the road and preset camera parameters; the camera parameters are the parameters of the camera mounted on the collecting vehicle that captures the picture data.
Specifically, for one grid, a target road picture is first determined from the picture data of the road; the target road picture is a picture that captures the image of the area corresponding to that grid.
The road picture data collected while the collecting vehicle is driving reflects the actual condition of the road surface along the driving track, and every picture is tagged with the position coordinates at which it was taken.
The picture data of the road can be taken directly by the camera of the collecting vehicle, for example at a rate of 10 shots per second. Alternatively, the actual road can be recorded on video, and picture data with position coordinates is obtained by processing the recorded video together with the trajectory of the collecting vehicle, such as the inertial navigation trajectory or the GPS trajectory. Since a video file consists of successive frames of images, picture data with shooting positions can be obtained by extracting frames from the video file and matching each frame to its corresponding INS or GPS position.
Then, the pixel of the target road picture corresponding to the grid is determined from the three-dimensional coordinates of the grid and the preset camera parameters.
Specifically, determining the pixel of the target road picture corresponding to the three-dimensional coordinates of the grid involves coordinate conversions. Since the three-dimensional coordinates of the grid are given in the world coordinate system, the conversion consists of transforming the grid's three-dimensional coordinates in the world coordinate system into three-dimensional coordinates in the camera coordinate system, and transforming the three-dimensional coordinates in the camera coordinate system into pixel coordinates in the image coordinate system, as follows.
Assume P denotes the three-dimensional coordinates of the grid in the world coordinate system, T denotes the translation vector, equal to the shot target origin minus the camera origin, and the rotation matrix R is Rx(yaw)*Ry(pitch)*Rz(roll), where the direction angle yaw, the pitch angle pitch and the roll angle roll are the rotation angles of the camera coordinate system about the x, y and z axes.
The coordinates of P converted into the camera coordinate system are: Pc(xc, yc, zc) = R(P - T).
The coordinates of Pc converted into the image coordinate system are: P0(x0, y0), where x0 = fx*(xc/zc) + cx and y0 = fy*(yc/zc) + cy, cx and cy being the optical axis offsets and fx and fy the physical focal lengths. The optical axis offsets, the physical focal lengths and the rotation angles of the camera coordinate system about the x, y and z axes belong to the preset camera parameters.
Finally, the color of that pixel is taken as the color corresponding to the grid.
Step S150: render the pixel corresponding to each grid in the orthophoto map to be rendered in the color corresponding to that grid.
Each grid point is colored with the color corresponding to its grid; once the covered stretch of road has been colored, the orthophoto map is obtained.
In the technical solution of this embodiment, sampling track points are selected from the track points generated while the collecting vehicle travels on the road. For one sampling track point, the road area covered by the orthophoto map to be rendered is determined according to the coordinates of that sampling track point, the driving direction angle of the collecting vehicle and the preset side lengths of the drawing area; the road area contains a number of grids, and the plane coordinates of each grid are calculated. According to the plane coordinates of each grid, the height value of each grid is obtained from the point cloud data of the road generated while the collecting vehicle travels on the road, which yields the three-dimensional coordinates of the grid. The color corresponding to each grid is then obtained according to the three-dimensional coordinates of each grid, the picture data of the road and the preset camera parameters, and the pixel corresponding to each grid in the orthophoto map to be rendered is rendered in the color corresponding to that grid. In this generation process the point cloud data contains the true height values of the road surface sampling points, so the height value obtained for each grid from the point cloud data is the true height value of the image area corresponding to that grid; generating the orthophoto map from three-dimensional coordinates that include these true height values fully takes the undulation of the road surface into account and eliminates the problem that the generated orthophoto map is inaccurate because the height value parameter is inaccurate.
In an embodiment of the present application, the process of determining, for one grid, a target road picture from the picture data of the road, the target road picture capturing the image of the area corresponding to that grid, is shown in FIG. 3 and specifically includes:
Step S300: for one grid, calculate the distance between the shooting position of each picture in the road picture data and the plane coordinates of the grid.
Specifically, as shown in FIG. 4, the camera can only capture the road surface lying between the closest shooting distance of 2 meters and the farthest shooting distance of 22 meters, so the target road picture is determined from the distance between the shooting position of a picture and the plane coordinates of the grid.
Taking shooting position coordinates that have been converted to plane coordinates as an example: if the shooting position coordinates are (x1, y1) and the plane coordinates of a grid are (X1, Y1), the distance between the shooting position coordinates and the grid plane coordinates is calculated as sqrt((x1 - X1)^2 + (y1 - Y1)^2).
Step S310: take a road picture whose distance lies between the camera's closest shooting distance and farthest shooting distance as the target road picture that captures the image of the area corresponding to the grid.
Specifically, one or more of the calculated distances may lie between the closest shooting distance of 2 meters and the farthest shooting distance of 22 meters, i.e. there may be one road picture, or more than one, whose distance lies between the camera's closest and farthest shooting distances. In that case:
if there is exactly one road picture whose distance lies between the camera's closest and farthest shooting distances, that road picture is taken as the target road picture capturing the image of the area corresponding to the grid;
if there is more than one such road picture, then for each road picture the absolute difference between the distance corresponding to that road picture and the camera's closest shooting distance is calculated, the distance corresponding to a road picture being the distance from the shooting position of the picture to the plane coordinates of the grid; and
the road picture with the smallest absolute difference is selected as the target road picture capturing the image of the area corresponding to the grid.
In the above embodiment, when there are several road pictures whose distances lie between the camera's closest and farthest shooting distances, the road picture taken when its shooting position was closest to the grid position is chosen as the basis for coloring. This is because the closer the camera is to an object, the sharper the picture of that object and the more realistic the color obtained; choosing the road picture shot closest to the grid position as the coloring basis therefore makes the coloring of the grid point closest to the actual color of the image point and makes the generated orthophoto map more accurate.
An embodiment of the present application further provides an apparatus for generating an orthophoto map. As shown in FIG. 5, the apparatus includes:
a sampling track point selecting unit 500, configured to select sampling track points, according to a preset sampling rule, from the track points generated while the collecting vehicle travels on the road;
a range determining unit 510, configured to determine, for one sampling track point, according to the coordinates of the sampling track point, the driving direction angle of the collecting vehicle and the preset side lengths of the drawing area, the road area covered by the orthophoto map to be rendered, the road area containing at least one grid, each grid corresponding to one pixel of the orthophoto map to be rendered;
a plane coordinate determining unit 520, configured to obtain the plane coordinates of each grid according to the coordinates of the sampling track point and the driving direction angle of the collecting vehicle;
a three-dimensional coordinate determining unit 530, configured to obtain, according to the plane coordinates of each grid, the height value of each grid from the point cloud data of the road generated while the collecting vehicle travels on the road, the height value and the plane coordinates of a grid constituting the three-dimensional coordinates of the grid;
a color determining unit 540, configured to obtain the color corresponding to each grid according to the three-dimensional coordinates of each grid, the picture data of the road and preset camera parameters, the camera parameters being the parameters of the camera mounted on the collecting vehicle that captures the road picture data;
a rendering unit 550, configured to render the pixel corresponding to each grid in the orthophoto map to be rendered in the color corresponding to that grid.
Preferably, the process by which the sampling track point selecting unit 500 selects sampling track points, according to a preset sampling rule, from the track points generated while the collecting vehicle travels on the road specifically includes:
selecting the sampling track points from the track points generated while the collecting vehicle travels on the road according to the sampling rule that the distance between two adjacent sampling track points equals a preset distance threshold.
Preferably, the process by which the three-dimensional coordinate determining unit 530 obtains, according to the plane coordinates of each grid, the height value of each grid from the point cloud data of the road generated while the collecting vehicle travels on the road specifically includes:
for each grid, searching the point cloud data of the road generated while the collecting vehicle travels on the road for a target point according to the plane coordinates of that grid, the distance between the target point and the grid being smaller than the distance between any other point in the point cloud data and the grid; and
determining the height value of the target point as the height value of that grid.
Preferably, the process by which the color determining unit 540 obtains the color corresponding to each grid according to the three-dimensional coordinates of each grid, the picture data of the road and the preset camera parameters specifically includes:
for one grid, determining a target road picture from the picture data of the road, the target road picture capturing the image of the area corresponding to that grid; and
determining, according to the three-dimensional coordinates of the grid and the preset camera parameters, the pixel of the target road picture corresponding to the grid, and taking the color of that pixel as the color corresponding to the grid.
Preferably, the process by which the color determining unit 540 determines, for one grid, a target road picture from the picture data of the road, the target road picture capturing the image of the area corresponding to that grid, specifically includes:
for one grid, calculating the distance between the shooting position of each picture in the road picture data and the plane coordinates of the grid; and
taking a road picture whose distance lies between the camera's closest shooting distance and farthest shooting distance as the target road picture capturing the image of the area corresponding to the grid.
Preferably, the process by which the color determining unit 540 takes a road picture whose distance lies between the camera's closest shooting distance and farthest shooting distance as the target road picture capturing the image of the area corresponding to the grid specifically includes:
if there is exactly one road picture whose distance lies between the camera's closest and farthest shooting distances, taking that road picture as the target road picture capturing the image of the area corresponding to the grid;
if there is more than one such road picture, calculating, for each road picture, the absolute difference between the distance corresponding to that road picture and the camera's closest shooting distance, the distance corresponding to a road picture being the distance from the shooting position of the road picture to the plane coordinates of the grid; and
selecting the road picture with the smallest absolute difference as the target road picture capturing the image of the area corresponding to the grid.
Finally, it should also be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article or device that includes the element.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts the embodiments have in common, reference may be made to one another.
The above description of the disclosed embodiments enables a person skilled in the art to implement or use the present application. Various modifications to these embodiments will be apparent to a person skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present application. Therefore, the present application is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

  1. A method for generating an orthophoto map, wherein the method comprises:
    selecting, according to a preset sampling rule, sampling track points from the track points generated while a collecting vehicle travels on a road;
    for one sampling track point, determining, according to the coordinates of the sampling track point, the driving direction angle of the collecting vehicle and preset side lengths of a drawing area, the road area covered by the orthophoto map to be rendered, the road area containing at least one grid, each grid corresponding to one pixel of the orthophoto map to be rendered;
    obtaining the plane coordinates of each grid according to the coordinates of the sampling track point and the driving direction angle of the collecting vehicle;
    obtaining, according to the plane coordinates of each grid, the height value of each grid from the point cloud data of the road generated while the collecting vehicle travels on the road, the height value and the plane coordinates of a grid constituting the three-dimensional coordinates of the grid;
    obtaining the color corresponding to each grid according to the three-dimensional coordinates of each grid, picture data of the road and preset camera parameters, the camera parameters being the parameters of the camera mounted on the collecting vehicle that captures the road picture data; and
    rendering the pixel corresponding to each grid in the orthophoto map to be rendered in the color corresponding to that grid.
  2. The method according to claim 1, wherein selecting, according to the preset sampling rule, the sampling track points from the track points generated while the collecting vehicle travels on the road specifically comprises:
    selecting the sampling track points from the track points generated while the collecting vehicle travels on the road according to the sampling rule that the distance between two adjacent sampling track points equals a preset distance threshold.
  3. The method according to claim 1, wherein obtaining, according to the plane coordinates of each grid, the height value of each grid from the point cloud data of the road generated while the collecting vehicle travels on the road comprises:
    for each grid, searching the point cloud data of the road generated while the collecting vehicle travels on the road for a target point according to the plane coordinates of that grid, the distance between the target point and the grid being smaller than the distance between any other point in the point cloud data and the grid; and
    determining the height value of the target point as the height value of that grid.
  4. The method according to claim 1, wherein obtaining the color corresponding to each grid according to the three-dimensional coordinates of each grid, the picture data of the road and the preset camera parameters comprises:
    for one grid, determining a target road picture from the picture data of the road, the target road picture capturing the image of the area corresponding to that grid;
    determining, according to the three-dimensional coordinates of the grid and the preset camera parameters, the pixel of the target road picture corresponding to the grid; and
    taking the color of that pixel as the color corresponding to the grid.
  5. The method according to claim 4, wherein determining, for one grid, the target road picture from the picture data of the road, the target road picture capturing the image of the area corresponding to that grid, comprises:
    for one grid, calculating the distance between the shooting position of each road picture in the road picture data and the plane coordinates of the grid; and
    taking a road picture whose distance lies between the camera's closest shooting distance and farthest shooting distance as the target road picture capturing the image of the area corresponding to the grid.
  6. The method according to claim 5, wherein taking the road picture whose distance lies between the camera's closest shooting distance and farthest shooting distance as the target road picture capturing the image of the area corresponding to the grid comprises:
    if there is exactly one road picture whose distance lies between the camera's closest and farthest shooting distances, taking that road picture as the target road picture capturing the image of the area corresponding to the grid;
    if there is more than one such road picture, calculating, for each road picture, the absolute difference between the distance corresponding to that road picture and the camera's closest shooting distance, the distance corresponding to a road picture being the distance from the shooting position of the road picture to the plane coordinates of the grid; and
    selecting the road picture with the smallest absolute difference as the target road picture capturing the image of the area corresponding to the grid.
  7. An apparatus for generating an orthophoto map, wherein the apparatus comprises:
    a sampling track point selecting unit, configured to select sampling track points, according to a preset sampling rule, from the track points generated while a collecting vehicle travels on a road;
    a range determining unit, configured to determine, for one sampling track point, according to the coordinates of the sampling track point, the driving direction angle of the collecting vehicle and preset side lengths of a drawing area, the road area covered by the orthophoto map to be rendered, the road area containing at least one grid, each grid corresponding to one pixel of the orthophoto map to be rendered;
    a plane coordinate determining unit, configured to obtain the plane coordinates of each grid according to the coordinates of the sampling track point and the driving direction angle of the collecting vehicle;
    a three-dimensional coordinate determining unit, configured to obtain, according to the plane coordinates of each grid, the height value of each grid from the point cloud data of the road generated while the collecting vehicle travels on the road, the height value and the plane coordinates of a grid constituting the three-dimensional coordinates of the grid;
    a color determining unit, configured to obtain the color corresponding to each grid according to the three-dimensional coordinates of each grid, the picture data of the road and preset camera parameters, the camera parameters being the parameters of the camera mounted on the collecting vehicle that captures the road picture data; and
    a rendering unit, configured to render the pixel corresponding to each grid in the orthophoto map to be rendered in the color corresponding to that grid.
  8. The apparatus according to claim 7, wherein the process by which the sampling track point selecting unit selects, according to the preset sampling rule, the sampling track points from the track points generated while the collecting vehicle travels on the road specifically comprises:
    selecting the sampling track points from the track points generated while the collecting vehicle travels on the road according to the sampling rule that the distance between two adjacent sampling track points equals a preset distance threshold.
  9. The apparatus according to claim 7, wherein the process by which the three-dimensional coordinate determining unit obtains, according to the plane coordinates of each grid, the height value of each grid from the point cloud data of the road generated while the collecting vehicle travels on the road specifically comprises:
    for each grid, searching the point cloud data of the road generated while the collecting vehicle travels on the road for a target point according to the plane coordinates of that grid, the distance between the target point and the grid being smaller than the distance between any other point in the point cloud data and the grid; and
    determining the height value of the target point as the height value of that grid.
  10. The apparatus according to claim 7, wherein the process by which the color determining unit obtains the color corresponding to each grid according to the three-dimensional coordinates of each grid, the picture data of the road and the preset camera parameters specifically comprises:
    for one grid, determining a target road picture from the picture data of the road, the target road picture capturing the image of the area corresponding to that grid;
    determining, according to the three-dimensional coordinates of the grid and the preset camera parameters, the pixel of the target road picture corresponding to the grid; and
    taking the color of that pixel as the color corresponding to the grid.
  11. The apparatus according to claim 10, wherein the process by which the color determining unit determines, for one grid, the target road picture from the picture data of the road, the target road picture capturing the image of the area corresponding to that grid, comprises:
    for one grid, calculating the distance between the shooting position of each picture in the road picture data and the plane coordinates of the grid; and
    taking a road picture whose distance lies between the camera's closest shooting distance and farthest shooting distance as the target road picture capturing the image of the area corresponding to the grid.
  12. The apparatus according to claim 11, wherein the process by which the color determining unit takes the road picture whose distance lies between the camera's closest shooting distance and farthest shooting distance as the target road picture capturing the image of the area corresponding to the grid comprises:
    if there is exactly one road picture whose distance lies between the camera's closest and farthest shooting distances, taking that road picture as the target road picture capturing the image of the area corresponding to the grid;
    if there is more than one such road picture, calculating, for each road picture, the absolute difference between the distance corresponding to that road picture and the camera's closest shooting distance, the distance corresponding to a road picture being the distance from the shooting position of the road picture to the plane coordinates of the grid; and
    selecting the road picture with the smallest absolute difference as the target road picture capturing the image of the area corresponding to the grid.
PCT/CN2018/072318 2017-01-20 2018-01-12 Method and apparatus for generating an orthophoto map WO2018133727A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710048480.2 2017-01-20
CN201710048480.2A CN108335337B (zh) 2017-01-20 Method and apparatus for generating an orthophoto map

Publications (1)

Publication Number Publication Date
WO2018133727A1 true WO2018133727A1 (zh) 2018-07-26

Family

ID=62908269

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/072318 WO2018133727A1 (zh) 2018-01-12 2017-01-20 Method and apparatus for generating an orthophoto map

Country Status (2)

Country Link
CN (1) CN108335337B (zh)
WO (1) WO2018133727A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969679A (zh) * 2019-11-27 2020-04-07 淮南矿业(集团)有限责任公司 一种Arcgis与cass软件相结合的影像纠正方法
CN111475590A (zh) * 2019-01-23 2020-07-31 阿里巴巴集团控股有限公司 道路数据审核方法、***、终端设备及存储介质
CN115394178A (zh) * 2021-09-02 2022-11-25 中国地质大学(北京) 一种特教用汉盲双语标注触觉地图拼图教具的制作方法

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109712197B (zh) * 2018-12-20 2023-06-30 珠海瑞天安科技发展有限公司 一种机场跑道网格化标定方法及***
CN110111414B (zh) * 2019-04-10 2023-01-06 北京建筑大学 一种基于三维激光点云的正射影像生成方法
CN110415330B (zh) * 2019-04-29 2020-05-29 当家移动绿色互联网技术集团有限公司 道路生成方法、装置、存储介质及电子设备
CN111667545B (zh) * 2020-05-07 2024-02-27 东软睿驰汽车技术(沈阳)有限公司 高精度地图生成方法、装置、电子设备及存储介质
CN113945153A (zh) * 2020-07-16 2022-01-18 远景网格有限公司 用于自动驾驶的利用图像跟踪的距离和位置测定方法及装置
CN112598767B (zh) * 2020-12-29 2024-05-10 厦门市美亚柏科信息股份有限公司 基于时空大数据的轨迹行为分析方法、终端设备及存储介质
CN117292349B (zh) * 2023-11-22 2024-04-12 魔视智能科技(武汉)有限公司 确定路面高度的方法、装置、计算机设备及存储介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104677363B (zh) * 2013-12-03 2017-04-12 高德软件有限公司 一种道路生成方法和装置
CN106225791B (zh) * 2016-08-03 2019-09-20 福建工程学院 一种基于网格划分的gps定位与道路匹配方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007249103A (ja) * 2006-03-20 2007-09-27 Zenrin Co Ltd 道路画像作成システム及び道路画像作成方法,及び道路画像合成装置
JP2009223817A (ja) * 2008-03-18 2009-10-01 Zenrin Co Ltd 路面標示地図生成方法
CN102138163A (zh) * 2008-08-29 2011-07-27 三菱电机株式会社 俯瞰图像生成装置、俯瞰图像生成方法以及俯瞰图像生成程序
CN101893443A (zh) * 2010-07-08 2010-11-24 上海交通大学 道路数字正射影像地图的制作***
US20150341552A1 (en) * 2014-05-21 2015-11-26 Here Global B.V. Developing a Panoramic Image
CN104573733A (zh) * 2014-12-26 2015-04-29 上海交通大学 一种基于高清正射影像图的高精细地图生成***及方法
CN106097444A (zh) * 2016-05-30 2016-11-09 百度在线网络技术(北京)有限公司 高精地图生成方法和装置

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111475590A (zh) * 2019-01-23 2020-07-31 阿里巴巴集团控股有限公司 道路数据审核方法、***、终端设备及存储介质
CN110969679A (zh) * 2019-11-27 2020-04-07 淮南矿业(集团)有限责任公司 一种Arcgis与cass软件相结合的影像纠正方法
CN115394178A (zh) * 2021-09-02 2022-11-25 中国地质大学(北京) 一种特教用汉盲双语标注触觉地图拼图教具的制作方法
CN115394178B (zh) * 2021-09-02 2023-11-03 中国地质大学(北京) 一种特教用汉盲双语标注触觉地图拼图教具的制作方法

Also Published As

Publication number Publication date
CN108335337B (zh) 2019-12-17
CN108335337A (zh) 2018-07-27

Similar Documents

Publication Publication Date Title
WO2018133727A1 (zh) 一种正射影像图的生成方法及装置
WO2020102944A1 (zh) 点云处理方法、设备及存储介质
CN110617821B (zh) 定位方法、装置及存储介质
JP5832341B2 (ja) 動画処理装置、動画処理方法および動画処理用のプログラム
US11430228B2 (en) Dynamic driving metric output generation using computer vision methods
JP4619962B2 (ja) 路面標示計測システム、白線モデル計測システムおよび白線モデル計測装置
WO2021051344A1 (zh) 高精度地图中车道线的确定方法和装置
JP2010530997A (ja) 道路情報を生成する方法及び装置
CN113409459B (zh) 高精地图的生产方法、装置、设备和计算机存储介质
TWI444593B (zh) 地面目標定位系統與方法
CN112154303B (zh) 高精度地图定位方法、***、平台及计算机可读存储介质
JPWO2020039937A1 (ja) 位置座標推定装置、位置座標推定方法およびプログラム
US11935249B2 (en) System and method for egomotion estimation
CN112036359B (zh) 一种车道线的拓扑信息获得方法、电子设备及存储介质
JP5396585B2 (ja) 地物特定方法
CN112632415B (zh) 一种Web地图实时生成方法及图像处理服务器
WO2022133986A1 (en) Accuracy estimation method and system
CN115222815A (zh) 障碍物距离检测方法、装置、计算机设备和存储介质
TW202321651A (zh) 路徑規劃方法及其系統
WO2021004813A1 (en) Method and mobile entity for detecting feature points in an image
CN113433566A (zh) 地图建构***以及地图建构方法
Zhang et al. UAV-borne Mapping Algorithms for Canopy-Level and High-Speed Drone Applications
US12026960B2 (en) Dynamic driving metric output generation using computer vision methods
CN116007637B (zh) 定位装置、方法、车载设备、车辆、及计算机程序产品
Ye et al. Photogrammetric Accuracy and Modeling of Rolling Shutter Cameras

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18741907

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18741907

Country of ref document: EP

Kind code of ref document: A1