CN109493407A - Method, apparatus and computer device for laser point cloud densification - Google Patents

Method, apparatus and computer device for laser point cloud densification

Info

Publication number
CN109493407A
Authority
CN
China
Prior art keywords
point cloud
view
target scene
obtains
original point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811374889.4A
Other languages
Chinese (zh)
Other versions
CN109493407B (en)
Inventor
陈仁
孙银健
黄天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201811374889.4A priority Critical patent/CN109493407B/en
Publication of CN109493407A publication Critical patent/CN109493407A/en
Application granted granted Critical
Publication of CN109493407B publication Critical patent/CN109493407B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method, an apparatus and a computer device for laser point cloud densification. The method for laser point cloud densification includes: obtaining an original point cloud of a target scene; projecting the original point cloud onto a cylinder according to a forward field-of-view angle to generate a first front view, the forward field-of-view angle being related to the azimuth at which a laser radar acquires the original point cloud; based on a mapping relationship between front views of different resolutions constructed by a deep learning model, mapping the first front view to obtain a second front view, the resolution of the second front view being higher than the resolution of the first front view; and projecting the second front view back to the coordinate system of the original point cloud to obtain a dense point cloud of the target scene. The method, the apparatus and the computer device for laser point cloud densification provided by the present invention solve the problem in the prior art that the densification effect of a laser point cloud is relatively poor.

Description

Method, apparatus and computer device for laser point cloud densification
Technical field
The present invention relates to the field of computer technology, and more particularly to a method, an apparatus and a computer device for laser point cloud densification.
Background
A high-precision map is a map used for assisted driving, semi-automatic driving or unmanned driving, and is composed of a series of map elements, for example road curbs, guardrails and the like. In the production of a high-precision map, map elements are first extracted from a laser point cloud, and the extracted map elements are then edited manually to generate the high-precision map.
It can be seen from the above that the extraction of map elements depends on the laser point cloud; if the laser point cloud is too sparse, the accuracy of the map elements will be low, which ultimately affects the production efficiency of the high-precision map. For this reason, existing laser point cloud densification schemes generally adopt interpolation to up-sample the laser point cloud and thereby achieve a densification effect.
However, such schemes are limited by the rules on which interpolation relies, for example nearest-neighbour interpolation, bilinear interpolation and the like, so that the densification effect of the laser point cloud is relatively poor.
Summary of the invention
In order to solve the problem in the related art that the densification effect of a laser point cloud is relatively poor, embodiments of the present invention provide a method, an apparatus and a computer device for laser point cloud densification.
The technical solution adopted by the present invention is as follows:
According to a first aspect of the present disclosure, a method for laser point cloud densification comprises: obtaining an original point cloud of a target scene; projecting the original point cloud onto a cylinder according to a forward field-of-view angle to generate a first front view, the forward field-of-view angle being related to the azimuth at which a laser radar acquires the original point cloud; based on a mapping relationship between front views of different resolutions constructed by a deep learning model, mapping the first front view to obtain a second front view, the resolution of the second front view being higher than the resolution of the first front view; and projecting the second front view back to the coordinate system of the original point cloud to obtain a dense point cloud of the target scene.
According to a second aspect of the present disclosure, an apparatus for laser point cloud densification comprises: an original point cloud obtaining module, configured to obtain an original point cloud of a target scene; a front view obtaining module, configured to project the original point cloud onto a cylinder according to a forward field-of-view angle to generate a first front view, the forward field-of-view angle being related to the azimuth at which a laser radar acquires the original point cloud; a front view mapping module, configured to map the first front view to a second front view based on a mapping relationship between front views of different resolutions constructed by a deep learning model, the resolution of the second front view being higher than the resolution of the first front view; and a dense point cloud obtaining module, configured to project the second front view back to the coordinate system of the original point cloud to obtain a dense point cloud of the target scene.
According to a third aspect of the present disclosure, a computer device comprises a processor and a memory, the memory storing computer-readable instructions which, when executed by the processor, implement the method for laser point cloud densification as described above.
According to a fourth aspect of the present disclosure, a computer-readable storage medium has a computer program stored thereon which, when executed by a processor, implements the method for laser point cloud densification as described above.
In the above technical solution, the original point cloud of the target scene is cylindrically projected according to the forward field-of-view angle to generate a first front view; based on the mapping relationship between front views of different resolutions constructed by the deep learning model, the first front view is mapped to a second front view; and the second front view is then projected back to the coordinate system of the original point cloud to obtain the dense point cloud of the target scene. In other words, a mapping relationship between front views of different resolutions is constructed on the basis of a deep learning model and applied to the first front view of an actual scene, so that a second front view whose resolution is higher than that of the first front view is obtained and the dense point cloud of the target scene can be formed. Densification of the original point cloud is thereby realized without being limited by the rules on which interpolation relies, which solves the problem in the prior art that the densification effect of a laser point cloud is relatively poor.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present invention.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the specification, serve to explain the principles of the present invention.
Fig. 1 is a schematic diagram of an implementation environment according to the present invention.
Fig. 2 is a hardware block diagram of a server according to an exemplary embodiment.
Fig. 3 is a flow chart of a method for laser point cloud densification according to an exemplary embodiment.
Fig. 4 is a schematic diagram of a laser point cloud acquired by a laser radar in the embodiment corresponding to Fig. 3.
Fig. 5 is a simplified version of Fig. 4.
Fig. 6 is a flow chart of an embodiment of step 330 in the embodiment corresponding to Fig. 3.
Fig. 7 is a schematic diagram of a specific implementation of the front view generation process in the embodiment corresponding to Fig. 6.
Fig. 8 is a schematic diagram of the distance view, the height view and the intensity view used to synthesize the view to be projected in the embodiment corresponding to Fig. 6.
Fig. 9 is a schematic diagram of the lower-resolution first front view in the embodiment corresponding to Fig. 6.
Figure 10 is a flow chart of an embodiment of step 350 in the embodiment corresponding to Fig. 3.
Figure 11 is a schematic diagram of the structure of the convolutional neural network model in the embodiment corresponding to Figure 10.
Figure 12 is a schematic diagram of the higher-resolution second front view in the embodiment corresponding to Figure 10.
Figure 13 is a flow chart of another method for laser point cloud densification according to an exemplary embodiment.
Figure 14 is a flow chart of an embodiment of step 410 in the embodiment corresponding to Figure 13.
Figure 15 is a flow chart of an embodiment of step 415 in the embodiment corresponding to Figure 14.
Figure 16 is a comparison diagram of the registration process based on a reference target according to an exemplary embodiment.
Figure 17 is a flow chart of an embodiment of step 4153 in the embodiment corresponding to Figure 15.
Figure 18 is a flow chart of another method for laser point cloud densification according to an exemplary embodiment.
Figure 19 is a block diagram of an apparatus for laser point cloud densification according to an exemplary embodiment.
Figure 20 is a block diagram of a computer device according to an exemplary embodiment.
Specific embodiments of the present invention have been shown by the above drawings and will be described in more detail hereinafter. These drawings and the accompanying text are not intended to limit the scope of the inventive concept in any way, but rather to illustrate the concept of the invention to those skilled in the art by reference to specific embodiments.
Detailed description
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatuses and methods consistent with some aspects of the invention as detailed in the appended claims.
As mentioned above, existing laser point cloud densification schemes generally adopt interpolation to up-sample the laser point cloud and thereby achieve a densification effect. However, such schemes are limited by the rules on which interpolation relies, for example nearest-neighbour interpolation, bilinear interpolation and the like, so that the densification effect of the laser point cloud is relatively poor.
In order to overcome the defects of the above interpolation methods, edge-preserving and region-based interpolation methods have been further introduced:
The edge-based interpolation method provides a certain enhancement of the edges of the laser point cloud, which makes the visual effect of the laser point cloud better and thereby improves the densification effect of the laser point cloud.
The region-based interpolation method first divides the original low-resolution laser point cloud into different regions, then maps interpolation points onto the original low-resolution laser point cloud and determines the region to which each interpolation point belongs, designs different interpolation formulas according to the pixels in the neighbourhood of each interpolation point, and finally calculates the values of the interpolation points of the different regions according to the corresponding formulas, thereby improving the densification effect of the laser point cloud.
It can be seen from the above that existing laser point cloud densification schemes are still limited by the rules on which interpolation relies, and therefore inevitably suffer from a relatively poor densification effect, which in turn easily leads to low accuracy of the map elements and ultimately affects the production efficiency of the high-precision map.
For this reason, the present invention proposes a method for laser point cloud densification that avoids being limited by the rules on which interpolation relies and substantially improves the densification effect of the laser point cloud. Correspondingly, an apparatus for laser point cloud densification adapted to this method can be deployed in a computer device with a von Neumann architecture, for example a server, so as to implement the method for laser point cloud densification.
Fig. 1 is a schematic diagram of an implementation environment involved in a method for laser point cloud densification. The implementation environment includes a user terminal 110 and a server end 130.
The user terminal 110 is deployed in a vehicle, an aircraft or a robot, and may be a desktop computer, a laptop, a tablet computer, a smart phone, a palmtop computer, a personal digital assistant, a navigator, a smart computer or the like, which is not limited herein.
The user terminal 110 establishes a network connection with the server end 130 in advance by wireless or wired means, and data transmission between the user terminal 110 and the server end 130 is realized through this network connection. For example, the transmitted data include the high-precision map of a target scene and the like.
It should be noted that the server end 130 may be a single server, a server cluster composed of multiple servers, or, as shown in Fig. 1, a cloud computing centre composed of multiple servers. A server is an electronic device that provides background services for users; the background services include, for example, a laser point cloud densification service, a map element extraction service, a high-precision map generation service and the like.
For a target scene, after obtaining the original point cloud, the server end 130 can densify the original point cloud to obtain a dense point cloud of the target scene, and perform map element extraction based on the dense point cloud of the target scene.
After the map elements are extracted, the server end 130 can display the extracted map elements through a configured display screen, and generate the high-precision map of the target scene under the control of editing staff.
Of course, according to actual operational needs, laser point cloud densification, map element extraction and high-precision map generation may all be deployed in the same server, or may be deployed in different servers; for example, laser point cloud densification is deployed in servers 131 and 132, map element extraction is deployed in server 133, and high-precision map generation is deployed in server 134.
The high-precision map of the target scene can then be further stored, for example in the server end 130 or in another cache space, which is not limited herein.
Then, for a user terminal 110 that uses the high-precision map, for example when an autonomous vehicle is about to pass through the target scene, the user terminal 110 it carries will correspondingly obtain the high-precision map of the target scene, so as to assist the unmanned vehicle in passing through the target scene safely.
It should be noted that the laser point cloud acquired by the server end 130 may be acquired in advance by another acquisition device and stored in the server end 130, or may be acquired in real time by the user terminal 110 and uploaded to the server end 130 when the vehicle, aircraft or robot carrying the user terminal 110 passes through the target scene, which is not limited herein.
Fig. 2 is a hardware block diagram of a server according to an exemplary embodiment. Such a server is applicable to the server end 130 in the implementation environment shown in Fig. 1.
It should be noted that this server is merely an example adapted to the present invention and must not be considered as providing any limitation on the scope of use of the present invention. Nor can this server be interpreted as needing to rely on, or having to include, one or more components of the exemplary server 200 shown in Fig. 2.
The hardware configuration of the server 200 may vary greatly depending on configuration or performance. As shown in Fig. 2, the server 200 includes a power supply 210, an interface 230, at least one memory 250, at least one central processing unit (CPU) 270, a display screen 280 and an input module 290.
Specifically, the power supply 210 is used to provide an operating voltage for each component of the server 200.
The interface 230 includes at least one wired or wireless network interface 231, at least one serial-parallel conversion interface 233, at least one input/output interface 235, at least one USB interface 237 and the like, and is used for communication with external devices, for example interaction with the user terminal 110 in the implementation environment shown in Fig. 1.
The memory 250, as a carrier for resource storage, may be a read-only memory, a random access memory, a magnetic disk, an optical disk or the like. The resources stored on it include an operating system 251, application programs 253 and data 255, and the storage manner may be transient storage or permanent storage.
The operating system 251 is used to manage and control the components of the server 200 and the application programs 253, so as to realize the computation and processing of the massive data 255 by the central processing unit 270, and may be Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™ or the like.
An application program 253 is a computer program that performs at least one specific task on the basis of the operating system 251, and may include at least one module (not shown in Fig. 2), each of which may contain a series of computer-readable instructions for the server 200. For example, the apparatus for laser point cloud densification may be regarded as an application program 253 deployed in the server 200 in order to implement the method for laser point cloud densification.
The data 255 may be photos, pictures or laser point clouds, and are stored in the memory 250.
The central processing unit 270 may include one or more processors, is configured to communicate with the memory 250 through a communication bus, and reads the computer-readable instructions stored in the memory 250, thereby realizing the computation and processing of the massive data 255 in the memory 250. For example, the up-sampling of the laser point cloud is completed by the central processing unit 270 reading a series of computer-readable instructions stored in the memory 250.
The display screen 280 may be a liquid crystal display screen, an electronic ink display screen or the like. The display screen 280 provides an output interface between the electronic device 200 and a user, so that output content formed by any form or combination of text, pictures or video is displayed to the user through the output interface, for example map elements to be edited displayed in the map of the target scene.
The input module 290 may be a touch layer covering the display screen 280, keys, a trackball or a trackpad provided on the housing of the electronic device 200, or an external keyboard, mouse, trackpad or the like, and is used to receive various control instructions input by the user, so that the high-precision map of the target scene can be generated under the control of editing staff, for example an editing instruction for a map element in the map of the target scene.
In addition, the present invention can likewise be implemented by a hardware circuit or by a hardware circuit in combination with software; therefore, implementation of the present invention is not limited to any specific hardware circuit, software, or combination of the two.
Referring to Fig. 3, in an exemplary embodiment, a method for laser point cloud densification is applicable to the server end of the implementation environment shown in Fig. 1, and the structure of the server end may be as shown in Fig. 2.
The method for laser point cloud densification may be executed by the server end, which can also be understood as being executed by the apparatus for laser point cloud densification deployed in the server end. In the following method embodiments, for ease of description, each step is described as being executed by the apparatus for laser point cloud densification, but this does not constitute a limitation.
The method for laser point cloud densification may include the following steps:
Step 310: obtain an original point cloud of a target scene.
First of all, a laser point cloud is generated by laser scanning of the entities in a target scene and is essentially a dot-matrix image, that is, it is composed of a number of sampling points corresponding to the entities in the target scene.
The target scene may be a road and its surroundings for vehicle driving, the interior of a building for robot travel, or an air route and its surroundings for low-altitude flight of an unmanned aerial vehicle, which is not limited in this embodiment.
It should be noted that the high-precision map of the target scene provided by this embodiment is applicable to different application scenarios depending on the target scene; for example, a road and its surroundings are applicable to an assisted vehicle driving scenario, the interior of a building is applicable to an assisted robot travel scenario, and an air route and its surroundings are applicable to an assisted low-altitude UAV flight scenario.
It should be appreciated that, for a laser point cloud represented as a dot-matrix image, one frame of laser point cloud contains up to 120,000 pixels. If multiple frames of laser point clouds were directly acquired to generate a dense point cloud and map elements were extracted from them, the real-time requirement would be difficult to meet. On this basis, in the embodiments of the present invention, the extraction of map elements depends on the densification of an original point cloud, where the original point cloud refers to a sparse laser point cloud, for example a single frame of laser point cloud.
Regarding the acquisition of the original point cloud, the original point cloud may come from a pre-stored laser point cloud or from a laser point cloud acquired in real time, and is then obtained by local reading or network download.
In other words, the apparatus for laser point cloud densification may obtain a laser point cloud acquired in real time, so as to densify the laser point cloud in real time, or may obtain a laser point cloud acquired within a historical time period, so as to densify the laser point cloud when there are fewer processing tasks, which is not specifically limited in this embodiment.
It should be noted that a laser point cloud is generated and acquired by scanning with the laser light emitted by a laser radar. During acquisition, the laser radar can be deployed in advance in an acquisition device so that the acquisition device acquires laser point clouds of the target scene. For example, the acquisition device is a vehicle, and the laser radar is deployed in the vehicle in advance as a vehicle component; when the vehicle drives through the target scene, the laser point cloud of the target scene is correspondingly acquired.
Step 330: project the original point cloud onto a cylinder according to a forward field-of-view angle to generate a first front view.
As mentioned above, one frame of laser point cloud contains up to 120,000 pixels. In order to improve the efficiency of laser point cloud densification, in this embodiment the densification of the laser point cloud relies on the first front view.
First, it should be understood that, based on the hardware characteristics of the device, a laser radar can rotate through 360° when acquiring a laser point cloud and emits laser light once per rotation, so that one frame of laser point cloud is formed by scanning the target scene over a specified angle; that is, the azimuth range over which the laser radar acquires a laser point cloud is 0° to 360°. The specified angle can be set flexibly according to the actual requirements of the application scenario; for example, setting the specified angle to 360° indicates that the laser radar needs to sweep the target scene through a full 360°.
On this basis, the forward field-of-view angle is related to the azimuth at which the laser radar acquires the laser point cloud. It can also be understood that, when the laser radar acquires the laser point cloud at a given azimuth, the entities observed forward in the target scene fall within a certain angular range, and this angular range is regarded as the forward field-of-view angle of the laser radar. Therefore, the forward field-of-view angle is essentially used to indicate the angular range within which the laser radar can observe entities in the target scene in the forward direction.
Taking a road for vehicle driving and its surroundings as the target scene for illustration, the laser radar sweeps the target scene through 360° to form one frame of laser point cloud. As shown in Fig. 4, the entities of the target scene presented in this laser point cloud include a person 401, a cyclist 402, a vehicle 403, a vehicle 404, a vehicle 405 and so on.
Fig. 4 is simplified into Fig. 5 in order to illustrate the forward field-of-view angle more clearly. As shown in Fig. 5, assuming that when the azimuth at which the laser radar 400 acquires the laser point cloud is 0° the cyclist 402 and the vehicle 403 in the target scene can be observed at the same time, while the person 401, the vehicle 404 and the vehicle 405 in the target scene cannot be observed, then for the azimuth of 0° it can be determined that the forward field-of-view angle at this moment is 407.
Accordingly, with the laser radar as the centre, as the azimuth at which the laser point cloud is acquired gradually changes, different forward field-of-view angles can be determined.
It should be noted that, since the laser radar emits laser light once per rotation and then scans the entities in the target scene that can be observed within the forward field-of-view angle, it will be understood that if the rotation angle is too small there is a probability of repeated scanning, while if the rotation angle is too large there will be scanning gaps. Therefore, it is preferable that the rotation angle be set to the angular range indicated by the forward field-of-view angle, which not only helps to improve the acquisition efficiency of the laser point cloud but also helps to extend the service life of the laser radar.
As an example, assuming that the angular range indicated by the forward field-of-view angle 407 at an azimuth of 0° is 60°, the rotation angle of the laser radar can be set to 60°; then, when the laser radar sweeps the target scene through 360°, it only needs to rotate 6 times, and correspondingly the remaining azimuths at which the laser radar acquires the laser point cloud are 60°, 120°, 180°, 240° and 300° respectively. In other words, the azimuth essentially indicates the orientation of the laser radar when acquiring the laser point cloud.
Then, for the obtained original point cloud, after the forward field-of-view angle is determined from the azimuth, the original point cloud can be projected onto a cylinder based on the determined forward field-of-view angle to generate the first front view. The cylinder may be a circular cylinder or an elliptic cylinder.
Since the original point cloud is a three-dimensional image and the first front view is a two-dimensional image, the projection in this embodiment essentially flattens the three-dimensional spatial structure of the entities in the target scene described by the original point cloud, so that the entities in the target scene are expressed in the form of a two-dimensional image, i.e. the first front view. This reduces the complexity of image processing and helps to improve the efficiency of laser point cloud densification.
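By way of illustration only, the following sketch shows one possible way to perform such a flattening, treating each column of the front view as an azimuth bin and each row as an elevation bin of the unrolled cylinder. The angular resolutions, the vertical field of view and the helper name build_front_view are assumptions made for the sketch and are not taken from this disclosure; the channel assignment (R = height, G = distance, B = intensity) follows the image channel coding described in the embodiments below.

```python
import numpy as np

def build_front_view(points, intensity, h_res_deg=0.2, v_res_deg=0.4,
                     v_fov=(-25.0, 2.0)):
    """Project an (N, 3) point cloud onto an unrolled cylinder.

    Returns an H x W x 3 image whose channels hold height, distance and
    intensity (assumed R/G/B assignment). Resolutions and vertical field
    of view are illustrative values only.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    dist = np.sqrt(x ** 2 + y ** 2 + z ** 2)                   # slant range to the sensor
    azimuth = np.degrees(np.arctan2(y, x))                     # horizontal angle, -180..180
    elevation = np.degrees(np.arcsin(z / np.maximum(dist, 1e-6)))

    # Discretise the angles into pixel coordinates of the unrolled cylinder.
    col = ((azimuth + 180.0) / h_res_deg).astype(int)
    row = ((v_fov[1] - elevation) / v_res_deg).astype(int)

    width = int(360.0 / h_res_deg)
    height = int((v_fov[1] - v_fov[0]) / v_res_deg) + 1
    front_view = np.zeros((height, width, 3), dtype=np.float32)

    valid = (row >= 0) & (row < height) & (col >= 0) & (col < width)
    front_view[row[valid], col[valid], 0] = z[valid]            # R: height view
    front_view[row[valid], col[valid], 1] = dist[valid]         # G: distance view
    front_view[row[valid], col[valid], 2] = intensity[valid]    # B: intensity view
    return front_view
```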
Step 350: based on the mapping relationship between front views of different resolutions constructed by a deep learning model, map the first front view to obtain a second front view.
The mapping relationship is constructed by a deep learning model that has completed model training on a large number of training samples. A training sample includes a front view of an original point cloud to be trained and a front view of a dense point cloud to be trained, where the resolution of the front view of the original point cloud to be trained is lower than the resolution of the front view of the dense point cloud to be trained.
That is to say, model training essentially takes the front view of the original point cloud to be trained in a training sample as the training input, and the front view of the dense point cloud to be trained in the training sample as the training ground truth, i.e. the training output. Through a large number of training samples, a mapping relationship between front views of different resolutions can thus be constructed on the basis of the deep learning model.
Then, based on the mapping relationship between front views of different resolutions constructed by the deep learning model, the first front view can be mapped to obtain a second front view whose resolution is higher than the resolution of the first front view.
Step 370: project the second front view back to the coordinate system of the original point cloud to obtain a dense point cloud of the target scene.
Specifically, according to the image channel coding mode, the second front view is split into a distance view, a height view and an intensity view, where the image channels include a red channel R representing red, a green channel G representing green and a blue channel B representing blue.
The distance view, the height view and the intensity view are then respectively projected, according to the forward field-of-view angle, back to the coordinate system of the original point cloud to obtain the dense point cloud of the target scene. The coordinate system of the original point cloud refers to the geographic coordinate system of the real world observed from the azimuth set for the laser radar that acquired the original point cloud.
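As an illustration of this back-projection, a dense point cloud could be recovered from the second front view roughly as follows, assuming the same kind of azimuth/elevation binning as in the projection sketch above and the channel coding R = height, G = distance, B = intensity; the parameter values and the function name are assumptions rather than part of the disclosure.

```python
import numpy as np

def front_view_to_points(front_view, h_res_deg=0.2, v_res_deg=0.4, v_fov_top=2.0):
    """Recover 3-D points from a front view whose channels hold
    height, distance and intensity (illustrative binning parameters)."""
    rows, cols = np.nonzero(front_view[..., 1] > 0)         # pixels carrying a distance value
    dist = front_view[rows, cols, 1]
    intensity = front_view[rows, cols, 2]

    azimuth = np.radians(cols * h_res_deg - 180.0)           # undo the column binning
    elevation = np.radians(v_fov_top - rows * v_res_deg)     # undo the row binning

    # Horizontal range from slant range and elevation, then x/y from the azimuth.
    horiz = dist * np.cos(elevation)
    x = horiz * np.cos(azimuth)
    y = horiz * np.sin(azimuth)
    z = dist * np.sin(elevation)                             # the stored height channel could be used instead
    return np.stack([x, y, z], axis=1), intensity
```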
Through the above process, densification of the original point cloud is realized on the basis of the mapping relationship constructed by the deep learning model, which avoids being limited by the rules on which interpolation relies and effectively improves the densification effect of the laser point cloud.
In addition, map element extraction can rely on the above dense point cloud of the target scene, which helps to enlarge the scale of the data input for perceiving the entities in the target scene, i.e. enriches the environment perception information, thereby effectively reducing the difficulty of environment perception, helping to improve the accuracy of map element extraction, and thus fully guaranteeing the production efficiency of the high-precision map.
Referring to Fig. 6, in an exemplary embodiment, step 330 may include the following steps:
Step 331: traverse the azimuths at which the laser radar acquires the original point cloud, and determine the forward field-of-view angle from the traversed azimuth.
As mentioned above, with the laser radar as the centre, as the azimuth at which the original point cloud is acquired gradually changes, different forward field-of-view angles can be determined.
As shown in Fig. 5, when the traversed azimuth is 0°, the determined forward field-of-view angle is 407; when the traversed azimuth is 270°, the determined forward field-of-view angle is 408.
Step 333: obtain the view to be projected of the original point cloud within the forward field-of-view angle.
Specifically, first, a distance view, a height view and an intensity view of the original point cloud within the forward field-of-view angle are obtained.
Taking Fig. 4 for illustration, for the forward field-of-view angle 407, the distance view, the height view and the intensity view within the forward field-of-view angle 407 are obtained based on the partial original point cloud 409.
Similarly, for the forward field-of-view angle 408, the distance view, the height view and the intensity view within the forward field-of-view angle 408 are obtained based on the partial original point cloud 410.
Secondly, according to the image channel coding mode, the distance view, the height view and the intensity view are synthesized into the view to be projected.
The image channels include a red channel R representing red, a green channel G representing green and a blue channel B representing blue.
For example, the height view is input into the red channel R, the distance view is input into the green channel G, and the intensity view is input into the blue channel B, so as to synthesize the view to be projected.
Step 335: project the view to be projected onto the local region of the cylinder corresponding to the forward field-of-view angle.
With reference to Fig. 4 and Fig. 7, taking a circular cylinder as the cylinder and a traversed azimuth of 0° for illustration, the distance view, the height view and the intensity view within the corresponding forward field-of-view angle 407 are obtained based on the partial original point cloud 409, as shown in Figs. 8(a)-8(c), and the view to be projected 701 is synthesized from them. Assuming that the local region of the cylindrical surface corresponding to this forward field-of-view angle is 702, the view to be projected 701 is projected onto the local region 702, and the projection on the cylindrical surface is obtained, as shown at 703 in Fig. 7.
Step 337: after the traversal is completed, unroll the cylinder to obtain the first front view.
As mentioned above, in order to form the laser point cloud of the target scene, the laser radar needs to rotate through the specified angle to scan the target scene, and the azimuth changes gradually accordingly. Completing the traversal of the azimuths can therefore also be understood as the laser radar having scanned the target scene through the specified angle, for example 360°.
At this point, the cylinder has received the projections of the local regions corresponding to all the forward field-of-view angles. By unrolling the cylinder into a plane, with the height of the cylinder as the width of the plane and the circumference of the cylinder as the length of the plane, the first front view is obtained, as shown in Fig. 9. It can be clearly seen from Fig. 9 that the projection on the cylinder of the view to be projected 701 synthesized for the forward field-of-view angle 407 is only a part of the first front view; the first front view essentially further includes the projections on the cylinder of the views to be projected synthesized for the remaining forward field-of-view angles.
Through the above process, front view generation is realized, so that the entities in the target scene are expressed in the form of a two-dimensional image, which reduces the complexity of image processing and helps to improve the efficiency of laser point cloud densification.
Referring to Fig. 10, in an exemplary embodiment, the deep learning model is a convolutional neural network model.
In this embodiment, the model structure of the convolutional neural network model includes an input layer, hidden layers and an output layer. The hidden layers further include convolutional layers and a fully connected layer: the convolutional layers are used to extract image features, and the fully connected layer is used to fully connect the image features extracted by the convolutional layers.
Optionally, the hidden layers may also include an activation layer and a pooling layer, where the activation layer is used to increase the convergence speed of the convolutional neural network model and the pooling layer is used to reduce the complexity of fully connecting the image features.
Optionally, a convolutional layer is configured with multiple channels, and each channel takes as input the part of the same input image that carries different channel information.
For example, the input image is a colour image. Assuming that a convolutional layer is configured with three channels A1, A2 and A3, the colour image can be input into the three channels according to the image channel coding mode: the part of the colour image corresponding to the red channel R is input into channel A1, the part corresponding to the green channel G is input into channel A2, and the part corresponding to the blue channel B is input into channel A3.
Correspondingly, step 350 may include the following steps:
Step 351: input the first front view into the convolutional neural network model and extract multi-channel image features.
Step 353: fully connect the multi-channel image features to obtain global features.
Step 355: perform feature mapping on the global features based on the mapping relationship constructed by the convolutional neural network model to obtain the second front view.
In one embodiment, as shown in Fig. 11, the convolutional neural network model, i.e. a multi-channel convolutional neural network model, has a model structure that includes an input layer 601, convolutional layers 602, a fully connected layer 603 and an output layer 604, where each convolutional layer 602 is configured with multiple channels.
The learning process based on the mapping relationship is described below in conjunction with the above model structure of the convolutional neural network model.
First, the first front view is input through the input layer 601, and feature extraction is performed via the multiple channels configured for the convolutional layers 602 to obtain multi-channel image features, where each channel image feature corresponds to a convolution kernel provided by a convolutional layer.
Each channel image feature can also be regarded as a local feature, which describes the boundary, spatial position and relative direction relationship, in the target scene, of the entities corresponding to the original point cloud; for example, local features include spatial relationship features, shape features and the like.
Then, the multi-channel image features are output by the convolutional layers 602 to the fully connected layer 603 and fully connected to obtain global features, which describe the appearance, in the target scene, of the entities corresponding to the original point cloud, such as colour and texture; correspondingly, the global features may be colour features, texture features and the like.
After the global features are obtained, feature mapping is performed on the global features based on the mapping relationship constructed by the convolutional neural network model to obtain the second front view, which is output via the output layer 604.
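For concreteness, a minimal sketch of such a multi-channel convolutional network is given below. Only the overall input layer / convolutional layers / feature fusion / output layer structure follows the description above; the layer widths, kernel sizes, the 2x sub-pixel upsampling and the use of 1x1 convolutions as a spatially shared stand-in for the fully connected layer 603 are assumptions made for the sketch.

```python
import torch.nn as nn

class FrontViewMappingNet(nn.Module):
    """Illustrative network mapping a low-resolution front view to a
    higher-resolution one; all hyper-parameters are assumptions."""

    def __init__(self, upscale=2):
        super().__init__()
        # Convolutional layers 602: extract multi-channel (local) image features
        # from the 3-channel first front view (height / distance / intensity).
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Stand-in for the fully connected layer 603: 1x1 convolutions fuse the
        # multi-channel features into global features at every pixel.
        self.fuse = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1), nn.ReLU())
        # Output layer 604: map the global features to the higher-resolution
        # second front view (3 channels again) via sub-pixel upsampling.
        self.output = nn.Sequential(
            nn.Conv2d(64, 3 * upscale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(upscale),
        )

    def forward(self, first_front_view):
        x = self.features(first_front_view)
        x = self.fuse(x)
        return self.output(x)        # the second front view
```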
Referring back to Fig. 9, the first front view shows obvious missing data, as indicated by the "snowflake" white points in Fig. 9; referring then to Figure 12, the second front view shows no obvious missing data. That is to say, the second front view has a higher resolution than the first front view, which is conducive to the densification of the laser point cloud.
With the cooperation of the above embodiments, feature mapping based on the convolutional neural network model is realized, which avoids being limited by the rules on which interpolation relies and helps to improve the accuracy of the feature mapping, thereby fully guaranteeing the densification effect of the laser point cloud and providing rich data support for environment perception.
Referring to Figure 13, in an exemplary embodiment, the method described above may further include the following steps:
Step 410: for the same target scene, obtain an original point cloud to be trained and a dense point cloud to be trained.
The original point cloud to be trained and the dense point cloud to be trained may come from the same laser radar or from different laser radars, which is not limited herein.
Specifically, as shown in Figure 14, in one embodiment the obtaining process may include the following steps:
Step 411: obtain a single frame of laser point cloud of the target scene, take the single frame of laser point cloud as the original point cloud to be trained, and take its acquisition moment as the current moment of the original point cloud to be trained.
Step 413: determine adjacent moments according to the current moment, and obtain, for the target scene, several frames of laser point clouds acquired at the adjacent moments.
Step 415: superimpose the obtained several frames of laser point clouds to obtain the dense point cloud to be trained.
If the several frames of laser point clouds acquired at the adjacent moments come from the same laser radar, their superposition can be realized using information such as the movement speed and geographic position provided by the inertial navigation device configured on the acquisition device.
Conversely, if the several frames of laser point clouds acquired at the adjacent moments come from different laser radars, the superposition can be realized through registration between the several frames of laser point clouds acquired at the adjacent moments.
In the above process, super-resolution reconstruction is realized, that is, temporal resolution (several frames of laser point clouds of the same target scene at different moments) is traded for spatial resolution (the dense point cloud to be trained), which makes the laser point clouds available for model training richer and is conducive to the accurate construction of the mapping relationship.
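A minimal sketch of this superposition is shown below, assuming each frame comes with a 4x4 pose matrix (for example derived from the inertial navigation information mentioned above) that takes it into a common coordinate system; the function name and interfaces are illustrative only.

```python
import numpy as np

def accumulate_frames(frames, poses):
    """Stack several adjacent frames into one denser point cloud to be trained.

    frames: list of (N_i, 3) arrays, each a single-frame laser point cloud.
    poses:  list of 4x4 homogeneous transforms into a common coordinate system.
    """
    merged = []
    for pts, pose in zip(frames, poses):
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])   # to homogeneous coordinates
        merged.append((homo @ pose.T)[:, :3])                 # transform into the common frame
    return np.vstack(merged)                                  # the dense point cloud to be trained
```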
Step 430: generate corresponding front views by projecting the original point cloud to be trained and the dense point cloud to be trained, and take the front view of the original point cloud to be trained and the front view of the dense point cloud to be trained as the training sample.
The projection is consistent with the generation process of the aforementioned first front view and is not described repeatedly here.
Step 450: guide the deep learning model to perform model training according to the training samples, and construct the mapping relationship.
Model training essentially performs iterative optimisation on the parameters of the deep learning model based on a large number of training samples, so that the specified algorithm function constructed from these parameters satisfies a convergence condition.
The deep learning model includes a convolutional neural network model, a recurrent neural network model, a deep neural network model and the like.
The specified algorithm function includes, but is not limited to, an expectation-maximisation function, a loss function (for example a softmax activation function) and the like.
For example, the parameters of the deep learning model are randomly initialised, and the loss value of the loss function constructed from the randomly initialised parameters is calculated according to the current training sample.
If the loss value of the loss function does not reach a minimum, the parameters of the deep learning model are updated, and the loss value of the loss function constructed from the updated parameters is calculated according to the next training sample.
This iterative loop continues until the loss value of the loss function reaches a minimum, at which point the loss function is considered to have converged; the deep learning model has then also converged and meets the preset accuracy requirement, and the iteration stops.
Otherwise, the parameters of the deep learning model are updated iteratively, and the loss values of the loss function constructed from the updated parameters are iteratively calculated according to the remaining training samples, until the loss function converges.
It should be noted that if the number of iterations reaches an iteration threshold before the loss function converges, the iteration is also stopped, so as to guarantee the efficiency of the model training.
When the deep learning model converges and meets the preset accuracy requirement, the deep learning model has completed model training, and the mapping relationship can thus be constructed based on the deep learning model that has completed model training.
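A simplified training loop along the lines described above might look as follows; the L1 loss, the Adam optimiser, the learning rate and the stopping thresholds are assumptions for the sketch, and the loader is assumed to yield (low-resolution front view, high-resolution front view) pairs as training input and training ground truth.

```python
import torch
import torch.nn as nn

def train_mapping(model, loader, max_iters=100_000, tol=1e-4, lr=1e-3):
    """Iterate until the loss stops decreasing (treated here as convergence)
    or the iteration threshold is reached; hyper-parameters are illustrative."""
    criterion = nn.L1Loss()
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    prev_loss, iters = float("inf"), 0

    for low_res, high_res in loader:
        pred = model(low_res)                   # second front view predicted from the first
        loss = criterion(pred, high_res)        # compare with the training ground truth

        optimiser.zero_grad()
        loss.backward()                         # update the model parameters
        optimiser.step()

        iters += 1
        if abs(prev_loss - loss.item()) < tol or iters >= max_iters:
            break                               # converged, or iteration threshold reached
        prev_loss = loss.item()
    return model
```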
After the construction of the mapping relationship is completed, the apparatus for laser point cloud densification possesses the ability to map the first front view.
Then, by inputting the first front view into the deep learning model, the front view of the dense point cloud of the target scene can be obtained according to the mapping relationship, and densification of the original point cloud is thereby realized.
Referring to Figure 15, in an exemplary embodiment, step 415 may include the following steps:
Step 4151: for the same entity in the target scene, segment the obtained several frames of laser point clouds to obtain corresponding reference targets.
Step 4153: register the obtained several frames of laser point clouds according to the reference targets obtained by segmentation.
Step 4155: superimpose the registered several frames of laser point clouds to obtain the dense point cloud to be trained.
As mentioned above, the obtained several frames of laser point clouds are essentially several frames of laser point clouds of the same target scene at different moments, and may come from the same laser radar or from different laser radars. It should be appreciated that if they come from the same laser radar, the several frames of laser point clouds necessarily correspond to the same coordinate system, whereas if they come from different laser radars, the several frames of laser point clouds may not share a unified coordinate system, which would inevitably affect the superposition result and thus the accurate construction of the mapping relationship.
For example, in the target scene, not every entity is completely stationary; there are relative motions, for example a vehicle travelling on the road. For such a travelling vehicle, if several frames of laser point clouds acquired at different moments are directly superimposed, they will form the driving trajectory of the vehicle on the road, as shown at 501 in Figure 16(a), which causes the densification of the original point cloud to fail, i.e. the outline of the vehicle cannot be obtained clearly and specifically.
For this reason, in this embodiment, for several frames of laser point clouds of the same target scene that come from different laser radars, registration between the several frames of laser point clouds is performed first, before the several frames of laser point clouds are superimposed.
The purpose of the registration is to ensure that the several frames of laser point clouds of the same target scene coming from different laser radars keep a unified coordinate system. Optionally, registration includes processing methods such as geometric correction, projective transformation and a common scale, which is not limited in this embodiment.
A reference target refers to the intersection region of several frames of laser point clouds of the same target scene coming from different laser radars, and is in essence a set of pixels in the laser point clouds. It can also be understood that a reference target is an image which represents the same entity of the same target scene in the several frames of laser point clouds. For example, assuming that the same entity in the target scene is a person, a cyclist, a vehicle or the like, the reference target is the set of pixels constituted by the person, the cyclist or the vehicle in the laser point clouds, i.e. the image corresponding to the person, the cyclist, the vehicle and so on.
The segmentation of the reference target is essentially image segmentation. Optionally, image segmentation includes ordinary segmentation, semantic segmentation, instance segmentation and the like, where ordinary segmentation further includes threshold segmentation, region segmentation, edge segmentation, histogram segmentation and the like, which is not specifically limited in this embodiment.
On this basis, registration based on the reference target can also be understood as follows: the goal of the registration is to make the reference targets in the several frames of laser point clouds of the same target scene coming from different laser radars completely overlap, so that the different laser point clouds are unified into the same coordinate system.
Taking the vehicle travelling on the road in the target scene as an example again, as shown at 501 in Figure 16(b), registration is performed with the travelling vehicle as the reference target, so that the intersection regions completely overlap, i.e. the driving trajectory of the vehicle on the road is eliminated, and the outline of the vehicle is shown more clearly and specifically.
In the above process, registration between different laser point clouds is realized, which fully ensures that the different laser point clouds are in the same coordinate system, thereby ensuring the correctness of the superposition and in turn the accuracy of the construction of the mapping relationship.
Figure 17 is please referred to, in one exemplary embodiment, step 4153 may comprise steps of:
Step 4153a, several frame laser point clouds to get construct projective transformation function.
Step 4153c estimates the parameter of the projective transformation function according to the reference target that segmentation obtains.
Step 4153e, according to the projective transformation function of completion parameter Estimation between several frame laser point clouds got Carry out projective transformation.
In the present embodiment, registration is realized based on projective transformation mode, that is to say, reference point clouds and target point cloud it Between carry out projective transformation so that target point cloud by rotation and translation and the intersecting area in reference point clouds it is completely overlapped.
Now using the wherein two frame laser point clouds in several frame laser point clouds as reference point clouds and target point cloud, to matching Quasi- procedure declaration is as follows.
Illustrate first, coordinate system where reference point clouds refers to set by the laser radar from acquisition reference point clouds Azimuth is set out, the geographic coordinate system of observed real world, and coordinate system where target point cloud, is referred to from acquisition target point Azimuth set by the laser radar of cloud is set out, the geographic coordinate system of observed real world, so, where reference point clouds Coordinate system needs further to be registrated there may be difference where coordinate system and target point cloud.
Specifically, shown in the projective transformation function such as calculation formula (1) constructed for reference point clouds and target point cloud:
Wherein, fxIndicate in reference point clouds the physical size ratio of pixel in the direction of the x axis in pixel and target point cloud Value, fyIndicate in reference point clouds the physical size ratio of pixel in the y-axis direction, (u in pixel and target point cloud0,v0) table The origin of coordinate system where showing reference point clouds, where coordinate system where R indicates target point cloud and reference point clouds between coordinate system Rotation relationship, the translation relation where coordinate system where t indicates target point cloud and reference point clouds between coordinate system.
(u, v, Zc) indicate target point cloud in pixel three-dimensional coordinate, (Xw,Yw,Zw) indicate pixel in reference point clouds Three-dimensional coordinate.
It is registrated relationship between target point cloud and reference point clouds from the foregoing, it will be observed that determining, is substantially estimated projection transforming function transformation function Parameter, i.e. fx、fy、(u0,v0)、R、t。
For this reason, it may be necessary to obtain target point cloud 6 groups of characteristic points corresponding with reference point clouds.Characteristic point is obtained with segmentation Reference target it is related, that is to say, characteristic point is pixel of the reference target in reference point clouds or target point cloud.
Preferably, for entity in target scene in laser point cloud sharpness of border show, sharp-featured sampled point (example Such as angle point, vertex, endpoint, focus point, inflection point), corresponding extract is evenly distributed as much as possible in reference point clouds or target point cloud 6 pixels as characteristic point, notable feature of the reference target in reference point clouds or target point cloud is embodied with this, in turn Be conducive to improve the accuracy being registrated between reference point clouds and target point cloud.
Once the estimation of the parameters in the projective transformation function is completed, the registration relationship between the reference point cloud and the target point cloud is determined. Then, with (X_w, Y_w, Z_w) given by the reference point cloud, a projective transformation can be carried out on the target point cloud to convert its coordinate system into that of the reference point cloud, i.e. to obtain (u, v, Z_c).
Through the cooperation of the above embodiments, registration based on projective transformation is realized. This not only greatly reduces the computation of the registration process, which helps improve the efficiency of laser point cloud densification and in turn the production efficiency of high-precision maps, but also, since the feature points reflect the salient features of the reference target in the laser point clouds, helps improve the precision of the registration process.
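For illustration only, the following sketch shows one way the parameters of formula (1) could be estimated from the six feature-point pairs with a direct linear transform and then applied to reference-frame coordinates; the function names and the use of NumPy are assumptions, not the embodiment's implementation.

import numpy as np

def estimate_projection(ref_points, target_pixels):
    """Estimate the 3x4 projection matrix of formula (1) -- encapsulating f_x, f_y,
    (u_0, v_0), R and t -- from at least six pairs of reference 3D points
    (X_w, Y_w, Z_w) and target pixels (u, v), via a direct linear transform."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(ref_points, target_pixels):
        rows.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u * Xw, -u * Yw, -u * Zw, -u])
        rows.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v * Xw, -v * Yw, -v * Zw, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)          # least-squares solution, defined up to scale

def apply_projection(P, ref_xyz):
    """Map reference-frame points (X_w, Y_w, Z_w) to (u, v, Z_c) in the target frame."""
    ref_xyz = np.asarray(ref_xyz, dtype=float)
    homogeneous = np.hstack([ref_xyz, np.ones((len(ref_xyz), 1))])
    uvw = homogeneous @ P.T              # rows are (u*Z_c, v*Z_c, Z_c)
    return np.column_stack([uvw[:, 0] / uvw[:, 2], uvw[:, 1] / uvw[:, 2], uvw[:, 2]])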
Please refer to Figure 18. In an exemplary embodiment, the method described above may further include the following steps.
Step 610: perform map element extraction based on the dense point cloud of the target scene to obtain map element data.
Here, the target scene contains at least one entity corresponding to a map element. In other words, an entity is an object that actually exists in the target scene, while a map element is the element that represents it in the target scene map.
Specifically, the map elements and their corresponding entities differ according to the application scenario. For example, in an assisted-driving scenario, map elements include lane lines, road surface markings, curbs, fences, traffic signs and the like, and the entities are correspondingly the lane lines, road surface markings, curbs, fences, traffic signs, etc. In a scenario of assisting low-altitude flight of unmanned aerial vehicles, map elements include street lamps, vegetation, buildings, traffic signs and the like, and the entities are correspondingly the street lamps, vegetation, buildings, traffic signs, etc.
Thus, for a high-precision map, the map element data includes at least the three-dimensional position of each map element in the target scene, i.e. the geographic position in the target scene of the entity corresponding to the map element. Optionally, the map element data further includes the color, category and so on of the map element in the target scene.
For example, if the map element is a lane line element, the map element data includes the three-dimensional position of the lane line in the target scene, the color of the lane line, the form of the lane line and so on, where the form of the lane line includes solid line, dashed line and double yellow line.
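As an illustration only (the class and field names are hypothetical and do not appear in the embodiment), such a lane line element record could be structured as follows:

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LaneLineElement:
    """Hypothetical record for a lane line map element."""
    positions: List[Tuple[float, float, float]] = field(default_factory=list)  # 3D positions in the target scene
    color: str = "white"              # color of the lane line
    form: str = "solid"               # "solid", "dashed" or "double_yellow"
    category: str = "lane_line"       # element category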
Step 630: display the map element in the target scene map according to the three-dimensional position, in the target scene, of the map element contained in the map element data.
The target scene map refers to the map that matches the target scene.
An editor of map elements may choose to edit the map elements of all categories at once, or to edit the map elements of a single category; this embodiment imposes no limitation in this regard.
If the editor chooses to edit lane line elements, the corresponding lane line element data is loaded into the target scene map, so that the lane line elements are displayed according to their three-dimensional positions in the target scene as recorded in the lane line element data.
It is worth noting that after extraction is completed, the map element data, such as the lane line element data, is stored in advance according to a specified storage format, which makes it convenient for the editor to read when editing the map elements; a sketch of such a stored record follows.
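The following is only a minimal sketch of persisting a record like the LaneLineElement above; the JSON layout, field names and file name are assumptions rather than the specified storage format of the embodiment.

import json

record = {
    "category": "lane_line",
    "color": "white",
    "form": "dashed",
    "positions": [[113.94, 22.53, 4.1], [113.95, 22.53, 4.1]],   # illustrative 3D positions
}

# written once after extraction, read back when the editor loads lane line elements
with open("lane_line_elements.json", "w") as f:
    json.dump([record], f, indent=2)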
Step 650: obtain and respond to an edit instruction for the map element in the target scene map, and generate the high-precision map of the target scene.
After the map elements are displayed in the target scene map, the editor can check them against the laser point cloud of the target scene.
If a map element is unsatisfactory, for example it does not meet the precision requirement, its position, shape or category deviates, or it is missing because a parked vehicle blocked the acquisition, the editor can further edit the map element. At this point the edit instruction for the map element in the target scene map is obtained, and by responding to the edit instruction the corresponding editing is performed on the map element in the target scene map, finally generating a high-precision map containing the edited map elements.
In concrete application scenarios, the high-precision map is an indispensable link in realizing autonomous driving. It can faithfully reproduce the target scene, thereby improving the positioning precision of unmanned equipment (such as autonomous vehicles, unmanned aerial vehicles and robots); it can solve the problem of environment perception equipment (such as sensors) on unmanned equipment failing under special circumstances, effectively compensating for their deficiencies; and it can support global path planning for unmanned equipment and the formulation of reasonable travel strategies based on advance judgment. The high-precision map therefore plays an irreplaceable role in autonomous driving. The embodiments of the present invention not only realize the densification of laser point clouds and fully guarantee the accuracy of map element extraction, but also help improve the precision of the high-precision map, effectively reduce its production cost, improve its production efficiency, and facilitate its large-scale mass production.
The following are apparatus embodiments of the present invention, which can be used to carry out the method for realizing laser point cloud densification according to the present invention. For details not disclosed in the apparatus embodiments, please refer to the embodiments of the method for realizing laser point cloud densification according to the present invention.
Please refer to Figure 19. In an exemplary embodiment, an apparatus 900 for realizing laser point cloud densification includes, but is not limited to: an original point cloud obtaining module 910, a front view obtaining module 930, a front view mapping module 950 and a dense point cloud obtaining module 970.
The original point cloud obtaining module 910 is configured to obtain the original point cloud of the target scene.
The front view obtaining module 930 is configured to project the original point cloud onto a cylinder according to a forward field-of-view angle to generate a first front view, the forward field-of-view angle being related to the azimuth at which the laser radar acquires the original point cloud.
The front view mapping module 950 is configured to map the first front view to a second front view based on the mapping relationship between front views of different resolutions constructed by a deep learning model, the resolution of the second front view being higher than that of the first front view.
The dense point cloud obtaining module 970 is configured to project the second front view to the coordinate system of the original point cloud to obtain the dense point cloud of the target scene.
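Purely as a structural sketch (the class and function names are hypothetical), the four modules could be composed along the following lines:

class PointCloudDensifier:
    """Hypothetical composition of the four modules of apparatus 900."""

    def __init__(self, get_cloud, to_front_view, map_front_view, to_dense_cloud):
        self.get_cloud = get_cloud              # original point cloud obtaining module
        self.to_front_view = to_front_view      # front view obtaining module
        self.map_front_view = map_front_view    # front view mapping module
        self.to_dense_cloud = to_dense_cloud    # dense point cloud obtaining module

    def densify(self, scene):
        original = self.get_cloud(scene)
        first_view = self.to_front_view(original)
        second_view = self.map_front_view(first_view)       # higher-resolution front view
        return self.to_dense_cloud(second_view, original)   # dense point cloud of the scene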
In an exemplary embodiment, the front view obtaining module 930 includes, but is not limited to: an azimuth traversal unit, a to-be-projected view obtaining unit, a to-be-projected view projecting unit and a cylinder unfolding unit.
The azimuth traversal unit is configured to traverse the azimuths at which the laser radar acquires the original point cloud and determine the forward field-of-view angle from the traversed azimuth.
The to-be-projected view obtaining unit is configured to obtain the to-be-projected view of the original point cloud at the forward field-of-view angle.
The to-be-projected view projecting unit is configured to project the to-be-projected view onto the local region of the cylinder corresponding to the forward field-of-view angle.
The cylinder unfolding unit is configured to, once the traversal is done, unfold the cylinder to obtain the first front view.
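A minimal sketch of this cylindrical projection and unrolling is given below, assuming hypothetical angular resolutions and a vertical field of view typical of a rotating LiDAR; it is not the embodiment's implementation.

import numpy as np

def cylindrical_front_view(points, h_res=0.2, v_res=0.4, v_fov=(-24.9, 2.0)):
    """Project an (N, 4) array of [x, y, z, intensity] points onto a cylinder around
    the sensor and unroll it into an image whose channels are distance, height, intensity."""
    x, y, z, intensity = points.T
    dist = np.sqrt(x ** 2 + y ** 2 + z ** 2)

    azimuth = np.degrees(np.arctan2(y, x))                        # horizontal angle on the cylinder
    elevation = np.degrees(np.arcsin(z / np.maximum(dist, 1e-6)))

    col = ((azimuth + 180.0) / h_res).astype(int)                 # unrolling: one column per azimuth step
    row = ((v_fov[1] - elevation) / v_res).astype(int)

    width = int(360.0 / h_res)
    height = int((v_fov[1] - v_fov[0]) / v_res) + 1
    view = np.zeros((height, width, 3), dtype=np.float32)

    valid = (row >= 0) & (row < height) & (col >= 0) & (col < width)
    view[row[valid], col[valid]] = np.stack([dist[valid], z[valid], intensity[valid]], axis=-1)
    return view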
In an exemplary embodiment, the to-be-projected view obtaining unit includes, but is not limited to: a view obtaining subunit and a view synthesis subunit.
The view obtaining subunit is configured to obtain the distance view, the height view and the intensity view of the original point cloud at the forward field-of-view angle.
The view synthesis subunit is configured to synthesize the distance view, the height view and the intensity view into the to-be-projected view according to an image channel coding mode.
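One plausible reading of this image channel coding, sketched below, is to pack the three single-channel views into the three channels of one image; the channel order is an assumption.

import numpy as np

def encode_channels(distance_view, height_view, intensity_view):
    """Pack the three single-channel views into one 3-channel to-be-projected view."""
    return np.stack([distance_view, height_view, intensity_view], axis=-1)

def decode_channels(view):
    """Inverse operation: split a 3-channel view back into distance, height and intensity."""
    return view[..., 0], view[..., 1], view[..., 2]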
In an exemplary embodiment, the front view mapping module 950 includes, but is not limited to: a feature extraction unit, a feature connection unit and a feature mapping unit.
The feature extraction unit is configured to input the first front view into the convolutional neural network model and extract multi-channel image features.
The feature connection unit is configured to fully connect the multi-channel image features to obtain global features.
The feature mapping unit is configured to perform feature mapping on the global features based on the mapping relationship constructed by the convolutional neural network model, to obtain the second front view.
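The sketch below loosely follows these three stages (feature extraction, full connection, feature mapping); the layer widths, the x2 upscaling factor and the use of PyTorch are assumptions, not the embodiment's network.

import torch.nn as nn

class FrontViewMapper(nn.Module):
    """Minimal sparse-to-dense front view mapping: extraction, connection, mapping."""

    def __init__(self, channels=3, scale=2):
        super().__init__()
        # feature extraction: convolutions producing multi-channel image features
        self.extract = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # "full connection" across the feature channels, modelled here as a 1x1 convolution
        self.connect = nn.Conv2d(64, 64, kernel_size=1)
        # feature mapping to the higher-resolution second front view
        self.map_out = nn.Sequential(
            nn.Conv2d(64, channels * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, first_view):                    # (B, 3, H, W) -> (B, 3, scale*H, scale*W)
        features = self.extract(first_view)
        global_features = self.connect(features)
        return self.map_out(global_features)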
In an exemplary embodiment, the dense point cloud obtaining module 970 includes, but is not limited to: a splitting unit and a back-projection unit.
The splitting unit is configured to split the second front view into a distance view, a height view and an intensity view according to the image channel coding mode.
The back-projection unit is configured to project the distance view, the height view and the intensity view respectively, according to the forward field-of-view angle, to the coordinate system of the original point cloud, to obtain the dense point cloud of the target scene.
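A minimal sketch of this back-projection, mirroring the cylindrical projection sketch above (the angular parameters are the same assumptions; 3D positions are recovered from the pixel angles and the distance channel):

import numpy as np

def back_project(view, h_res=0.2, v_res=0.4, v_fov=(-24.9, 2.0)):
    """Recover (x, y, z, intensity) points from a distance/height/intensity front view."""
    rows, cols = np.nonzero(view[..., 0])            # pixels that carry a valid distance
    dist = view[rows, cols, 0]
    intensity = view[rows, cols, 2]

    azimuth = np.radians(cols * h_res - 180.0)       # invert the unrolling of the cylinder
    elevation = np.radians(v_fov[1] - rows * v_res)

    x = dist * np.cos(elevation) * np.cos(azimuth)
    y = dist * np.cos(elevation) * np.sin(azimuth)
    z = dist * np.sin(elevation)
    return np.column_stack([x, y, z, intensity])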
In an exemplary embodiment, the apparatus 900 further includes, but is not limited to: a point cloud obtaining module, a projection module and a training module.
The point cloud obtaining module is configured to obtain, for the same target scene, an original point cloud to be trained and a dense point cloud to be trained.
The projection module is configured to project the original point cloud to be trained and the dense point cloud to be trained to generate the corresponding front views, and to use the front view of the original point cloud to be trained and the front view of the dense point cloud to be trained as the training samples.
The training module is configured to guide the deep learning model to carry out model training according to the training samples, so that the mapping relationship between front views of different resolutions is constructed by the deep learning model that has completed model training.
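As a sketch only, the training could proceed as below on pairs of sparse and dense front views produced by the projection module; the loss, optimizer and hyper-parameters are assumptions, not the embodiment's choices.

import torch
import torch.nn as nn

def train_mapper(model, loader, epochs=10, lr=1e-4, device="cpu"):
    """Fit a front-view mapping model (e.g. the FrontViewMapper sketch above) on
    (sparse_view, dense_view) tensor pairs shaped (B, 3, H, W) / (B, 3, 2H, 2W)."""
    model = model.to(device)
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        for sparse_view, dense_view in loader:
            sparse_view, dense_view = sparse_view.to(device), dense_view.to(device)
            loss = criterion(model(sparse_view), dense_view)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")
    return model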
In an exemplary embodiment, the point cloud obtaining module includes, but is not limited to: a single-frame point cloud obtaining unit, an adjacent-frame point cloud obtaining unit and an adjacent-frame point cloud superposition unit.
The single-frame point cloud obtaining unit is configured to obtain a single-frame laser point cloud of the target scene, use the single-frame laser point cloud as the original point cloud to be trained, and use its acquisition moment as the current moment of the original point cloud to be trained.
The adjacent-frame point cloud obtaining unit is configured to determine the adjacent moments according to the current moment, and to obtain, for the target scene, the several frames of laser point clouds acquired at the adjacent moments.
The adjacent-frame point cloud superposition unit is configured to superpose the acquired frames of laser point clouds to obtain the dense point cloud to be trained.
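Assuming the adjacent frames have already been registered into a common coordinate system (see the subunits described below), a minimal sketch of selecting and superposing them is:

import numpy as np

def dense_training_cloud(frames, timestamps, current_time, window=0.5):
    """Superpose the registered frames whose timestamps lie within +/- window seconds
    of the current moment; the window value is an assumption."""
    adjacent = [frame for frame, t in zip(frames, timestamps)
                if abs(t - current_time) <= window]
    return np.vstack(adjacent)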
In an exemplary embodiment, the adjacent-frame point cloud superposition unit includes, but is not limited to: a target segmentation subunit, a registration subunit and a superposition subunit.
The target segmentation subunit is configured to, for the same entity in the target scene, segment the corresponding reference target from the acquired frames of laser point clouds.
The registration subunit is configured to register the acquired frames of laser point clouds according to the reference target obtained by segmentation.
The superposition subunit is configured to superpose the frames of laser point clouds that have completed registration to obtain the dense point cloud to be trained.
In an exemplary embodiment, the registration subunit includes, but is not limited to: a function construction subunit, a parameter estimation subunit and a projective transformation subunit.
The function construction subunit is configured to construct a projective transformation function for the acquired frames of laser point clouds.
The parameter estimation subunit is configured to estimate the parameters of the projective transformation function according to the reference target obtained by segmentation.
The projective transformation subunit is configured to perform projective transformation between the acquired frames of laser point clouds according to the projective transformation function whose parameters have been estimated.
In an exemplary embodiment, the apparatus 900 further includes, but is not limited to: an element extraction module, an element display module and a map generation module.
The element extraction module is configured to perform map element extraction based on the dense point cloud of the target scene to obtain map element data.
The element display module is configured to display the map element in the target scene map according to the three-dimensional position, in the target scene, of the map element contained in the map element data.
The map generation module is configured to obtain and respond to an edit instruction for the map element in the target scene map, and generate the high-precision map of the target scene.
It should be noted that when the apparatus for realizing laser point cloud densification provided by the above embodiments performs laser point cloud densification, the division into the above functional modules is only an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus for realizing laser point cloud densification may be divided into different functional modules so as to complete all or part of the functions described above.
In addition, the apparatus for realizing laser point cloud densification provided by the above embodiments and the embodiments of the method for realizing laser point cloud densification belong to the same concept. The concrete manner in which each module performs its operations has been described in detail in the method embodiments and will not be repeated here.
Please refer to Figure 20. In an exemplary embodiment, a computer device 1000 includes at least one processor 1001, at least one memory 1002 and at least one communication bus 1003.
Computer-readable instructions are stored in the memory 1002, and the processor 1001 reads, through the communication bus 1003, the computer-readable instructions stored in the memory 1002.
When the computer-readable instructions are executed by the processor 1001, the method for realizing laser point cloud densification in the above embodiments is implemented.
In an exemplary embodiment, a computer-readable storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the method for realizing laser point cloud densification in the above embodiments is implemented.
The above is only a preferred exemplary embodiment of the present invention and is not intended to limit the embodiments of the present invention. Those of ordinary skill in the art can easily make corresponding adaptations or modifications according to the main idea and spirit of the present invention, so the protection scope of the present invention shall be subject to the scope claimed by the claims.

Claims (15)

1. A method for realizing laser point cloud densification, characterized by comprising:
obtaining an original point cloud of a target scene;
projecting the original point cloud onto a cylinder according to a forward field-of-view angle to generate a first front view, the forward field-of-view angle being related to the azimuth at which a laser radar acquires the original point cloud;
mapping the first front view to a second front view based on a mapping relationship between front views of different resolutions constructed by a deep learning model, the resolution of the second front view being higher than that of the first front view;
projecting the second front view to the coordinate system of the original point cloud to obtain a dense point cloud of the target scene.
2. The method according to claim 1, characterized in that the projecting the original point cloud onto a cylinder according to a forward field-of-view angle to generate a first front view comprises:
traversing the azimuths at which the laser radar acquires the original point cloud, and determining the forward field-of-view angle from the traversed azimuth;
obtaining a to-be-projected view of the original point cloud at the forward field-of-view angle;
projecting the to-be-projected view onto the local region of the cylinder corresponding to the forward field-of-view angle;
when the traversal is done, unfolding the cylinder to obtain the first front view.
3. The method according to claim 2, characterized in that the obtaining a to-be-projected view of the original point cloud at the forward field-of-view angle comprises:
obtaining a distance view, a height view and an intensity view of the original point cloud at the forward field-of-view angle;
synthesizing the distance view, the height view and the intensity view into the to-be-projected view according to an image channel coding mode.
4. The method according to claim 1, characterized in that the deep learning model is a convolutional neural network model;
the mapping the first front view to a second front view based on a mapping relationship between front views of different resolutions constructed by a deep learning model comprises:
inputting the first front view into the convolutional neural network model to extract multi-channel image features;
fully connecting the multi-channel image features to obtain global features;
performing feature mapping on the global features based on the mapping relationship constructed by the convolutional neural network model, to obtain the second front view.
5. The method according to claim 1, characterized in that the projecting the second front view to the coordinate system of the original point cloud to obtain a dense point cloud of the target scene comprises:
splitting the second front view into a distance view, a height view and an intensity view according to an image channel coding mode;
projecting the distance view, the height view and the intensity view respectively, according to the forward field-of-view angle, to the coordinate system of the original point cloud, to obtain the dense point cloud of the target scene.
6. The method according to claim 1, characterized in that the method further comprises:
obtaining, for a same target scene, an original point cloud to be trained and a dense point cloud to be trained;
projecting the original point cloud to be trained and the dense point cloud to be trained to generate corresponding front views, and using the front view of the original point cloud to be trained and the front view of the dense point cloud to be trained as training samples;
guiding the deep learning model to carry out model training according to the training samples, so that the mapping relationship between front views of different resolutions is constructed by the deep learning model that has completed model training.
7. The method according to claim 6, characterized in that the obtaining, for a same target scene, an original point cloud to be trained and a dense point cloud to be trained comprises:
obtaining a single-frame laser point cloud of the target scene, using the single-frame laser point cloud as the original point cloud to be trained, and using the acquisition moment of the original point cloud to be trained as a current moment;
determining adjacent moments according to the current moment, and obtaining, for the target scene, several frames of laser point clouds acquired at the adjacent moments;
superposing the acquired frames of laser point clouds to obtain the dense point cloud to be trained.
8. The method according to claim 7, characterized in that the superposing the acquired frames of laser point clouds to obtain the dense point cloud to be trained comprises:
for a same entity in the target scene, segmenting a corresponding reference target from the acquired frames of laser point clouds;
registering the acquired frames of laser point clouds according to the reference target obtained by segmentation;
superposing the frames of laser point clouds that have completed registration to obtain the dense point cloud to be trained.
9. The method according to claim 8, characterized in that the registering the acquired frames of laser point clouds according to the reference target obtained by segmentation comprises:
constructing a projective transformation function for the acquired frames of laser point clouds;
estimating parameters of the projective transformation function according to the reference target obtained by segmentation;
performing projective transformation between the acquired frames of laser point clouds according to the projective transformation function whose parameters have been estimated.
10. The method according to any one of claims 1 to 9, characterized in that the method further comprises:
performing map element extraction based on the dense point cloud of the target scene to obtain map element data;
displaying a map element in a target scene map according to the three-dimensional position, in the target scene, of the map element contained in the map element data;
obtaining and responding to an edit instruction for the map element in the target scene map, and generating a high-precision map of the target scene.
11. An apparatus for realizing laser point cloud densification, characterized by comprising:
an original point cloud obtaining module, configured to obtain an original point cloud of a target scene;
a front view obtaining module, configured to project the original point cloud onto a cylinder according to a forward field-of-view angle to generate a first front view, the forward field-of-view angle being related to the azimuth at which a laser radar acquires the original point cloud;
a front view mapping module, configured to map the first front view to a second front view based on a mapping relationship between front views of different resolutions constructed by a deep learning model, the resolution of the second front view being higher than that of the first front view;
a dense point cloud obtaining module, configured to project the second front view to the coordinate system of the original point cloud to obtain a dense point cloud of the target scene.
12. The apparatus according to claim 11, characterized in that the deep learning model is a convolutional neural network model;
the front view mapping module comprises:
a feature extraction unit, configured to input the first front view into the convolutional neural network model and extract multi-channel image features;
a feature connection unit, configured to fully connect the multi-channel image features to obtain global features;
a feature mapping unit, configured to perform feature mapping on the global features based on the mapping relationship constructed by the convolutional neural network model, to obtain the second front view.
13. The apparatus according to claim 11, characterized in that the apparatus further comprises:
a point cloud obtaining module, configured to obtain, for a same target scene, an original point cloud to be trained and a dense point cloud to be trained;
a projection module, configured to project the original point cloud to be trained and the dense point cloud to be trained to generate corresponding front views, and to use the front view of the original point cloud to be trained and the front view of the dense point cloud to be trained as training samples;
a training module, configured to guide the deep learning model to carry out model training according to the training samples, so that the mapping relationship between front views of different resolutions is constructed by the deep learning model that has completed model training.
14. A computer device, characterized by comprising:
a processor; and
a memory, on which computer-readable instructions are stored, the computer-readable instructions, when executed by the processor, implementing the method for realizing laser point cloud densification according to any one of claims 1 to 10.
15. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method for realizing laser point cloud densification according to any one of claims 1 to 10.
CN201811374889.4A 2018-11-19 2018-11-19 Method and device for realizing laser point cloud densification and computer equipment Active CN109493407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811374889.4A CN109493407B (en) 2018-11-19 2018-11-19 Method and device for realizing laser point cloud densification and computer equipment

Publications (2)

Publication Number Publication Date
CN109493407A true CN109493407A (en) 2019-03-19
CN109493407B CN109493407B (en) 2022-03-25

Family

ID=65696832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811374889.4A Active CN109493407B (en) 2018-11-19 2018-11-19 Method and device for realizing laser point cloud densification and computer equipment

Country Status (1)

Country Link
CN (1) CN109493407B (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097084A (en) * 2019-04-03 2019-08-06 浙江大学 Pass through the knowledge fusion method of projection feature training multitask student network
CN110276794A (en) * 2019-06-28 2019-09-24 Oppo广东移动通信有限公司 Information processing method, information processing unit, terminal device and server
CN110363771A (en) * 2019-07-15 2019-10-22 武汉中海庭数据技术有限公司 A kind of isolation guardrail form point extracting method and device based on three dimensional point cloud
CN110824496A (en) * 2019-09-18 2020-02-21 北京迈格威科技有限公司 Motion estimation method, motion estimation device, computer equipment and storage medium
CN110992485A (en) * 2019-12-04 2020-04-10 北京恒华伟业科技股份有限公司 GIS map three-dimensional model azimuth display method and device and GIS map
CN111009011A (en) * 2019-11-28 2020-04-14 深圳市镭神智能***有限公司 Method, device, system and storage medium for predicting vehicle direction angle
CN111192265A (en) * 2019-12-25 2020-05-22 中国科学院上海微***与信息技术研究所 Point cloud based semantic instance determination method and device, electronic equipment and storage medium
CN111221808A (en) * 2019-12-31 2020-06-02 武汉中海庭数据技术有限公司 Unattended high-precision map quality inspection method and device
CN111439594A (en) * 2020-03-09 2020-07-24 兰剑智能科技股份有限公司 Unstacking method and system based on 3D visual guidance
CN111476242A (en) * 2020-03-31 2020-07-31 北京经纬恒润科技有限公司 Laser point cloud semantic segmentation method and device
CN111667522A (en) * 2020-06-04 2020-09-15 上海眼控科技股份有限公司 Three-dimensional laser point cloud densification method and equipment
CN111724478A (en) * 2020-05-19 2020-09-29 华南理工大学 Point cloud up-sampling method based on deep learning
CN111854748A (en) * 2019-04-09 2020-10-30 北京航迹科技有限公司 Positioning system and method
CN112308889A (en) * 2020-10-23 2021-02-02 香港理工大学深圳研究院 Point cloud registration method and storage medium by utilizing rectangle and oblateness information
CN112446953A (en) * 2020-11-27 2021-03-05 广州景骐科技有限公司 Point cloud processing method, device, equipment and storage medium
CN112837410A (en) * 2021-02-19 2021-05-25 北京三快在线科技有限公司 Method and device for processing training model and point cloud
CN113219489A (en) * 2021-05-13 2021-08-06 深圳数马电子技术有限公司 Method and device for determining point pair of multi-line laser, computer equipment and storage medium
CN113359141A (en) * 2021-07-28 2021-09-07 东北林业大学 Forest fire positioning method and system based on unmanned aerial vehicle multi-sensor data fusion
CN114764746A (en) * 2021-09-22 2022-07-19 清华大学 Super-resolution method and device for laser radar, electronic device and storage medium
CN115690641A (en) * 2022-05-25 2023-02-03 中仪英斯泰克进出口有限公司 Screen control method and system based on image display
CN115965928A (en) * 2023-03-16 2023-04-14 安徽蔚来智驾科技有限公司 Point cloud feature enhancement method, target detection method, device, medium and vehicle
CN114266863B (en) * 2021-12-31 2024-02-09 西安交通大学 3D scene graph generation method, system, device and readable storage medium based on point cloud
CN117788476A (en) * 2024-02-27 2024-03-29 南京邮电大学 Workpiece defect detection method and device based on digital twin technology

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346608A (en) * 2013-07-26 2015-02-11 株式会社理光 Sparse depth map densing method and device
CN108198145A (en) * 2017-12-29 2018-06-22 百度在线网络技术(北京)有限公司 For the method and apparatus of point cloud data reparation
CN108154560A (en) * 2018-01-25 2018-06-12 北京小马慧行科技有限公司 Laser point cloud mask method, device and readable storage medium storing program for executing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHEN XIAOZHI 等: "Multi-View 3D Object Detection Network for Autonomous Driving", 《IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR)》 *
LI BO 等: "Vehicle Detection from 3D Lidar Using Fully Convolutional Network", 《COMPUTER VISION AND PATTERN RECOGNITION》 *
谭红春: "An efficient super-resolution fusion method for three-dimensional face point clouds", 《光学技术》 *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097084B (en) * 2019-04-03 2021-08-31 浙江大学 Knowledge fusion method for training multitask student network through projection characteristics
CN110097084A (en) * 2019-04-03 2019-08-06 浙江大学 Pass through the knowledge fusion method of projection feature training multitask student network
CN111854748A (en) * 2019-04-09 2020-10-30 北京航迹科技有限公司 Positioning system and method
CN111854748B (en) * 2019-04-09 2022-11-22 北京航迹科技有限公司 Positioning system and method
CN110276794A (en) * 2019-06-28 2019-09-24 Oppo广东移动通信有限公司 Information processing method, information processing unit, terminal device and server
CN110276794B (en) * 2019-06-28 2022-03-01 Oppo广东移动通信有限公司 Information processing method, information processing device, terminal device and server
CN110363771B (en) * 2019-07-15 2021-08-17 武汉中海庭数据技术有限公司 Isolation guardrail shape point extraction method and device based on three-dimensional point cloud data
CN110363771A (en) * 2019-07-15 2019-10-22 武汉中海庭数据技术有限公司 A kind of isolation guardrail form point extracting method and device based on three dimensional point cloud
CN110824496A (en) * 2019-09-18 2020-02-21 北京迈格威科技有限公司 Motion estimation method, motion estimation device, computer equipment and storage medium
CN111009011B (en) * 2019-11-28 2023-09-19 深圳市镭神智能***有限公司 Method, device, system and storage medium for predicting vehicle direction angle
CN111009011A (en) * 2019-11-28 2020-04-14 深圳市镭神智能***有限公司 Method, device, system and storage medium for predicting vehicle direction angle
CN110992485A (en) * 2019-12-04 2020-04-10 北京恒华伟业科技股份有限公司 GIS map three-dimensional model azimuth display method and device and GIS map
CN111192265A (en) * 2019-12-25 2020-05-22 中国科学院上海微***与信息技术研究所 Point cloud based semantic instance determination method and device, electronic equipment and storage medium
CN111221808A (en) * 2019-12-31 2020-06-02 武汉中海庭数据技术有限公司 Unattended high-precision map quality inspection method and device
CN111439594A (en) * 2020-03-09 2020-07-24 兰剑智能科技股份有限公司 Unstacking method and system based on 3D visual guidance
CN111476242A (en) * 2020-03-31 2020-07-31 北京经纬恒润科技有限公司 Laser point cloud semantic segmentation method and device
CN111476242B (en) * 2020-03-31 2023-10-20 北京经纬恒润科技股份有限公司 Laser point cloud semantic segmentation method and device
CN111724478A (en) * 2020-05-19 2020-09-29 华南理工大学 Point cloud up-sampling method based on deep learning
CN111724478B (en) * 2020-05-19 2021-05-18 华南理工大学 Point cloud up-sampling method based on deep learning
CN111667522A (en) * 2020-06-04 2020-09-15 上海眼控科技股份有限公司 Three-dimensional laser point cloud densification method and equipment
CN112308889B (en) * 2020-10-23 2021-08-31 香港理工大学深圳研究院 Point cloud registration method and storage medium by utilizing rectangle and oblateness information
CN112308889A (en) * 2020-10-23 2021-02-02 香港理工大学深圳研究院 Point cloud registration method and storage medium by utilizing rectangle and oblateness information
CN112446953A (en) * 2020-11-27 2021-03-05 广州景骐科技有限公司 Point cloud processing method, device, equipment and storage medium
CN112837410A (en) * 2021-02-19 2021-05-25 北京三快在线科技有限公司 Method and device for processing training model and point cloud
CN113219489A (en) * 2021-05-13 2021-08-06 深圳数马电子技术有限公司 Method and device for determining point pair of multi-line laser, computer equipment and storage medium
CN113219489B (en) * 2021-05-13 2024-04-16 深圳数马电子技术有限公司 Point-to-point determination method, device, computer equipment and storage medium for multi-line laser
CN113359141A (en) * 2021-07-28 2021-09-07 东北林业大学 Forest fire positioning method and system based on unmanned aerial vehicle multi-sensor data fusion
CN114764746A (en) * 2021-09-22 2022-07-19 清华大学 Super-resolution method and device for laser radar, electronic device and storage medium
CN114266863B (en) * 2021-12-31 2024-02-09 西安交通大学 3D scene graph generation method, system, device and readable storage medium based on point cloud
CN115690641B (en) * 2022-05-25 2023-08-01 中仪英斯泰克进出口有限公司 Screen control method and system based on image display
CN115690641A (en) * 2022-05-25 2023-02-03 中仪英斯泰克进出口有限公司 Screen control method and system based on image display
CN115965928A (en) * 2023-03-16 2023-04-14 安徽蔚来智驾科技有限公司 Point cloud feature enhancement method, target detection method, device, medium and vehicle
CN117788476A (en) * 2024-02-27 2024-03-29 南京邮电大学 Workpiece defect detection method and device based on digital twin technology
CN117788476B (en) * 2024-02-27 2024-05-10 南京邮电大学 Workpiece defect detection method and device based on digital twin technology

Also Published As

Publication number Publication date
CN109493407B (en) 2022-03-25

Similar Documents

Publication Publication Date Title
CN109493407A (en) Realize the method, apparatus and computer equipment of laser point cloud denseization
CN110160502B (en) Map element extraction method, device and server
US10297074B2 (en) Three-dimensional modeling from optical capture
US11461964B2 (en) Satellite SAR artifact suppression for enhanced three-dimensional feature extraction, change detection, and visualizations
US20190026400A1 (en) Three-dimensional modeling from point cloud data migration
JP6862409B2 (en) Map generation and moving subject positioning methods and devices
CN109003325B (en) Three-dimensional reconstruction method, medium, device and computing equipment
US10643378B2 (en) Method and device for modelling three-dimensional road model, and storage medium
CN105096386B (en) A wide range of complicated urban environment geometry map automatic generation method
WO2021223465A1 (en) High-precision map building method and system
CN104330074B (en) Intelligent surveying and mapping platform and realizing method thereof
US10477178B2 (en) High-speed and tunable scene reconstruction systems and methods using stereo imagery
JP7440005B2 (en) High-definition map creation method, apparatus, device and computer program
CN109598794A (en) The construction method of three-dimension GIS dynamic model
CN109801371B (en) Network three-dimensional electronic map construction method based on Cesium
CN109685893A (en) Space integration modeling method and device
Liu et al. Software-defined active LiDARs for autonomous driving: A parallel intelligence-based adaptive model
CN111726535A (en) Smart city CIM video big data image quality control method based on vehicle perception
CN116337068A (en) Robot synchronous semantic positioning and mapping method and system based on humanoid thought
CN116612235A (en) Multi-view geometric unmanned aerial vehicle image three-dimensional reconstruction method and storage medium
CN115578495A (en) Special effect image drawing method, device, equipment and medium
CN113642395B (en) Building scene structure extraction method for city augmented reality information labeling
Zhou et al. Object detection and spatial location method for monocular camera based on 3D virtual geographical scene
CN113362236B (en) Point cloud enhancement method, point cloud enhancement device, storage medium and electronic equipment
KR102616437B1 (en) Method for calibration of lidar and IMU, and computer program recorded on record-medium for executing method therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant