CN108734058A - Obstacle identity recognition methods, device, equipment and storage medium - Google Patents
- Publication number: CN108734058A (application number CN201710253581.3A)
- Authority: CN (China)
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
Abstract
The invention discloses an obstacle type recognition method, apparatus, device and storage medium. The method includes: obtaining three-dimensional point cloud data corresponding to an obstacle to be recognized; mapping the three-dimensional point cloud data to a two-dimensional image; and, based on the two-dimensional image, recognizing the type of the obstacle with a deep learning algorithm. With the scheme of the present invention, the accuracy of the recognition result can be improved.
Description
【Technical field】
The present invention relates to computer application technology, and in particular to an obstacle type recognition method, apparatus, device and storage medium.
【Background technology】
An autonomous vehicle, also referred to as a driverless vehicle, perceives its surroundings through various sensors and, based on the perceived road, vehicle position, obstacle information and the like, controls the steering and speed of the vehicle so that the vehicle can travel on the road safely and reliably.
LiDAR is an important sensor used by autonomous vehicles to perceive the three-dimensional environment. One LiDAR scan of the surrounding scene returns point cloud data of the scene's three-dimensional space, i.e., three-dimensional (3D) point cloud data.
Based on the scanned three-dimensional point cloud data, obstacle detection and obstacle type recognition can be carried out, so that the autonomous vehicle can perform avoidance operations and the like.
In the prior art, obstacle type recognition is usually carried out in three-dimensional space. This is not a mature approach, and the accuracy of the recognition result is relatively low.
【Summary of the invention】
In view of this, the present invention provides an obstacle type recognition method, apparatus, device and storage medium that can improve the accuracy of the recognition result.
The specific technical solution is as follows:
An obstacle type recognition method, including:
obtaining three-dimensional point cloud data corresponding to an obstacle to be recognized;
mapping the three-dimensional point cloud data to a two-dimensional image;
based on the two-dimensional image, recognizing the type of the obstacle with a deep learning algorithm.
According to a preferred embodiment of the present invention, the method further includes:
obtaining each obstacle detected from the three-dimensional point cloud data obtained by scanning;
taking each detected obstacle, respectively, as the obstacle to be recognized;
wherein the three-dimensional point cloud data is obtained by scanning the surroundings of an autonomous vehicle.
According to a preferred embodiment of the present invention, the two-dimensional image includes a two-dimensional RGB image.
According to a preferred embodiment of the present invention, mapping the three-dimensional point cloud data to a two-dimensional RGB image includes:
mapping the three-dimensional point cloud data to the R channel of the two-dimensional image from a first viewpoint;
mapping the three-dimensional point cloud data to the G channel of the two-dimensional image from a second viewpoint;
mapping the three-dimensional point cloud data to the B channel of the two-dimensional image from a third viewpoint;
generating the two-dimensional RGB image according to the mapping results.
According to a preferred embodiment of the present invention,
the first viewpoint is one of: a top view, a front view, a left side view;
the second viewpoint is one of: a top view, a front view, a left side view;
the third viewpoint is one of: a top view, a front view, a left side view;
and the first viewpoint, the second viewpoint and the third viewpoint are different viewpoints.
An obstacle type recognition apparatus, including: an acquiring unit, a mapping unit and a classifying unit;
the acquiring unit is configured to obtain three-dimensional point cloud data corresponding to an obstacle to be recognized and send it to the mapping unit;
the mapping unit is configured to map the three-dimensional point cloud data to a two-dimensional image and send the two-dimensional image to the classifying unit;
the classifying unit is configured to recognize, based on the two-dimensional image, the type of the obstacle with a deep learning algorithm.
According to a preferred embodiment of the present invention, the acquiring unit is further configured to:
obtain each obstacle detected from the three-dimensional point cloud data obtained by scanning;
take each detected obstacle, respectively, as the obstacle to be recognized;
wherein the three-dimensional point cloud data is obtained by scanning the surroundings of an autonomous vehicle.
According to a preferred embodiment of the present invention, the two-dimensional image includes a two-dimensional RGB image.
According to a preferred embodiment of the present invention, the mapping unit maps the three-dimensional point cloud data to the R channel of the two-dimensional image from a first viewpoint, maps the three-dimensional point cloud data to the G channel of the two-dimensional image from a second viewpoint, maps the three-dimensional point cloud data to the B channel of the two-dimensional image from a third viewpoint, and generates the two-dimensional RGB image according to the mapping results.
According to a preferred embodiment of the present invention,
the first viewpoint is one of: a top view, a front view, a left side view;
the second viewpoint is one of: a top view, a front view, a left side view;
the third viewpoint is one of: a top view, a front view, a left side view;
and the first viewpoint, the second viewpoint and the third viewpoint are different viewpoints.
A computer device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor implements the method described above when executing the program.
A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described above.
As can be seen from the above, with the scheme of the present invention, the three-dimensional point cloud data corresponding to an obstacle to be recognized is first mapped to a two-dimensional image, and then, based on the obtained two-dimensional image, the type of the obstacle is recognized with a deep learning algorithm. That is, the obstacle to be recognized is transformed from three-dimensional space into two-dimensional space, i.e., dimension reduction is performed. In the field of two-dimensional image recognition, deep learning algorithms are very mature, which guarantees the accuracy of the recognition result; compared with the prior art, the accuracy of the recognition result is thus improved.
【Description of the drawings】
Fig. 1 is a flowchart of an embodiment of the obstacle type recognition method of the present invention.
Fig. 2 is a schematic diagram of the correspondence between different viewpoints and different channels in the present invention.
Fig. 3 is a schematic diagram of a three-dimensional point cloud in the present invention.
Fig. 4 is a flowchart of a preferred embodiment of the obstacle type recognition method of the present invention.
Fig. 5 is a schematic structural diagram of an embodiment of the obstacle type recognition apparatus of the present invention.
Fig. 6 shows a block diagram of an exemplary computer system/server 12 suitable for implementing embodiments of the present invention.
【Detailed description of the embodiments】
To make the technical solution of the present invention clearer, the scheme of the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
Fig. 1 is a flowchart of an embodiment of the obstacle type recognition method of the present invention. As shown in Fig. 1, it includes the following specific implementation.
In 101, three-dimensional point cloud data corresponding to an obstacle to be recognized is obtained.
Before this, each obstacle detected from the three-dimensional point cloud data obtained by scanning can first be obtained, and each detected obstacle is taken, respectively, as an obstacle to be recognized; that is, for each detected obstacle, its type can be recognized in the manner described in the present invention.
The scanned three-dimensional point cloud data can be obtained by scanning the surroundings of an autonomous vehicle.
How obstacles are detected from the scanned three-dimensional point cloud data can be decided according to actual requirements; for example, a clustering algorithm can be used.
Clustering refers to dividing a data set into different classes or clusters according to a specific criterion, so that the similarity between the data within one class or cluster is as large as possible.
Common clustering algorithms fall into the following categories: partitioning methods, hierarchical methods, density-based methods, grid-based methods, model-based methods, etc.
From the three-dimensional point cloud data obtained by scanning, zero obstacles may be detected, one obstacle may be detected, or multiple obstacles may be detected.
For each obstacle, its corresponding three-dimensional point cloud data can be determined, respectively, according to the prior art; for one obstacle, its corresponding three-dimensional point cloud data is a part of the three-dimensional point cloud data obtained by scanning.
In 102, the obtained three-dimensional point cloud data is mapped to a two-dimensional image.
Preferably, the two-dimensional image obtained by the mapping can be a two-dimensional RGB (Red, Green, Blue) image.
The specific mapping mode can be:
mapping the three-dimensional point cloud data to the R channel of the two-dimensional image from a first viewpoint;
mapping the three-dimensional point cloud data to the G channel of the two-dimensional image from a second viewpoint;
mapping the three-dimensional point cloud data to the B channel of the two-dimensional image from a third viewpoint;
generating the two-dimensional RGB image according to the mapping results.
The first viewpoint can be one of: a top view, a front view, a left side view;
the second viewpoint can be one of: a top view, a front view, a left side view;
the third viewpoint can be one of: a top view, a front view, a left side view;
and the first viewpoint, the second viewpoint and the third viewpoint are different viewpoints.
For example, the first viewpoint can be the top view, the second viewpoint can be the front view, and the third viewpoint can be the left side view.
Correspondingly, the three-dimensional point cloud data can be mapped to the R channel of the two-dimensional image from the top view, mapped to the G channel of the two-dimensional image from the front view, and mapped to the B channel of the two-dimensional image from the left side view.
Thus, the correspondence between viewpoints and channels shown in Fig. 2 can be obtained. Fig. 2 is a schematic diagram of the correspondence between different viewpoints and different channels in the present invention. As shown in Fig. 2, the top view corresponds to the R channel, the front view corresponds to the G channel, and the left side view corresponds to the B channel.
Of course, the correspondence described above is by way of example only; which correspondence is used can be decided according to actual needs.
How the mapping is specifically performed can likewise be decided according to actual requirements. For example, for the top view, the following mapping mode can be used.
For a point in three-dimensional space, assume its coordinate position is (10, 20, 30), where 10 is the x-direction coordinate, 20 is the y-direction coordinate, and 30 is the z-direction coordinate.
When mapping from the top view, the z-direction coordinate can be set to 0, so that the x-direction coordinate and the y-direction coordinate identify a coordinate position (10, 20) in two-dimensional space. Correspondingly, in the two-dimensional image, the value of the pixel at coordinate position (10, 20) on the R channel can be set to 255, indicating the brightest color.
It should be noted that in three-dimensional space, the x-direction coordinate and the y-direction coordinate may be negative. In this case, a translation or the like also needs to be performed when mapping, which can be implemented according to the prior art.
Fig. 3 is a schematic diagram of a three-dimensional point cloud in the present invention. As shown in Fig. 3, a point cloud consists of discrete points: some positions contain a point, and some do not. Therefore, for each pixel on the two-dimensional image, if a corresponding point exists in three-dimensional space, the value on its R channel can be set to 255; if no corresponding point exists in three-dimensional space, the value on its R channel can be set to 0.
In the manner described above, the value of each pixel of the two-dimensional image on the R channel can be obtained.
In a similar manner, the value of each pixel on the G channel and the value of each pixel on the B channel can be obtained.
Since the value of each pixel on the R channel, the G channel and the B channel has been obtained, a two-dimensional RGB image can then be obtained.
In 103, based on the obtained two-dimensional image, the type of the obstacle is recognized with a deep learning algorithm.
After the two-dimensional image is obtained, the type of the obstacle can be recognized based on it.
Preferably, a deep learning algorithm can be used to recognize the type of the obstacle.
Which deep learning algorithm is used can be decided according to actual requirements; for example, the widely used convolutional neural network (CNN) algorithm can be adopted.
A convolutional neural network is a multilayer neural network that excels at machine learning problems involving images, especially large images.
Through a series of methods, convolutional neural networks continually reduce the dimensionality of the huge amount of data in an image recognition problem, so that the network can finally be trained.
A typical convolutional neural network can consist of convolutional layers, pooling layers and fully connected layers. The convolutional layers cooperate with the pooling layers to form multiple convolution groups that extract features layer by layer, and the classification is finally completed by several fully connected layers.
The operation performed by a convolutional layer can be regarded as inspired by the concept of the local receptive field, while the pooling layer mainly serves to reduce the data dimension.
In summary, a convolutional neural network simulates feature discrimination through convolution, reduces the order of magnitude of the network parameters through convolutional weight sharing and pooling, and finally completes tasks such as classification through a traditional neural network.
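For illustration, the two basic CNN building blocks named above, convolution and pooling, can be sketched in plain NumPy; this is a toy single-channel example showing the operations only, not the network actually used by the scheme.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2D convolution (cross-correlation, as in most CNN libraries):
    slide the kernel over the image and sum the elementwise products."""
    h = x.shape[0] - k.shape[0] + 1
    w = x.shape[1] - k.shape[1] + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling: keep the strongest response in
    each window, reducing the data dimension as described above."""
    h, w = x.shape[0] // s, x.shape[1] // s
    return x[:h * s, :w * s].reshape(h, s, w, s).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)   # toy single-channel "image"
k = np.array([[1.0, 0.0], [0.0, -1.0]])        # toy 2x2 kernel
fmap = conv2d(x, k)        # 3x3 feature map
pooled = max_pool(fmap)    # 1x1 after 2x2 pooling
```

Stacking several such convolution-plus-pooling groups, followed by fully connected layers, yields the typical structure described in the preceding paragraphs.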
Specifically, in the present invention, the obtained two-dimensional RGB image has three channels. With a deep learning algorithm such as a convolutional neural network, the features of each viewpoint can be fully learned, which guarantees the accuracy of the recognition result.
By recognizing the two-dimensional RGB image, the type of the obstacle can be determined, such as a person, a bicycle, a motor vehicle, etc.
Based on the above introduction, Fig. 4 is a flowchart of a preferred embodiment of the obstacle type recognition method of the present invention. As shown in Fig. 4, it includes the following specific implementation.
In 401, each obstacle detected from the three-dimensional point cloud data obtained by scanning is obtained; each detected obstacle is taken, respectively, as an obstacle to be recognized and is processed in the manner shown in 402 to 404.
For an autonomous vehicle, LiDAR is an important sensor used to perceive the three-dimensional environment. One LiDAR scan of the surrounding scene can return point cloud data of the scene's three-dimensional space, i.e., three-dimensional point cloud data.
After the three-dimensional point cloud data is obtained, obstacle detection can first be performed on it, i.e., the obstacles present in the scene around the autonomous vehicle are detected, and each obstacle can be marked out in a predetermined way.
Afterwards, the type of each detected obstacle can be further recognized. Correspondingly, each detected obstacle can be taken, respectively, as an obstacle to be recognized and processed in the manner shown in 402 to 404.
In 402, the three-dimensional point cloud data corresponding to the obstacle is obtained.
That is, the three-dimensional point cloud data constituting the obstacle can be obtained.
In 403, through three different viewpoints, the three-dimensional point cloud data corresponding to the obstacle is mapped to the R channel, the G channel and the B channel of a two-dimensional RGB image, respectively, and the two-dimensional RGB image is obtained.
The three different viewpoints may be the top view, the front view and the left side view, respectively.
The correspondence between viewpoints and channels can be decided according to actual requirements. For example, as shown in Fig. 2, the top view can correspond to the R channel, the front view can correspond to the G channel, and the left side view can correspond to the B channel.
In 404, based on the obtained two-dimensional RGB image, the type of the obstacle is recognized with a deep learning algorithm.
The deep learning algorithm can be a convolutional neural network, etc.
By recognizing the two-dimensional RGB image, the type of the obstacle can be determined, such as a person, a bicycle, a motor vehicle, etc.
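Steps 401 to 404 can be sketched as one pipeline; the `detect`, `to_rgb` and `classify` callables below are stand-in stubs for the clustering, three-view projection and trained network discussed earlier, named here only for this illustrative example.

```python
import numpy as np

def recognize_obstacles(scan, detect, to_rgb, classify):
    """Steps 401-404 as a pipeline: detect obstacles in the scanned
    cloud, map each obstacle's points to a 2D RGB image, and classify
    the image with the supplied (trained) model."""
    results = []
    for obstacle_points in detect(scan):          # 401: detected obstacles
        cloud = np.asarray(obstacle_points)       # 402: obstacle's 3D points
        image = to_rgb(cloud)                     # 403: three-view RGB image
        results.append(classify(image))           # 404: deep-learning type
    return results

# Stub components just to exercise the control flow.
scan = [np.zeros((5, 3)), np.ones((7, 3))]
types = recognize_obstacles(
    scan,
    detect=lambda s: s,                           # pretend clusters are given
    to_rgb=lambda c: np.zeros((64, 64, 3)),       # pretend projection
    classify=lambda img: "vehicle")               # pretend trained CNN
print(types)  # → ['vehicle', 'vehicle']
```

Each component can be replaced independently, which mirrors the statement above that the clustering method, the viewpoint-channel correspondence and the deep learning algorithm can all be decided according to actual requirements.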
As can be seen from the above, with the mode described in this embodiment, the obstacle to be recognized is transformed from three-dimensional space into two-dimensional space, i.e., dimension reduction is performed, to obtain a two-dimensional RGB image, and, based on the two-dimensional RGB image, the type of the obstacle is recognized with a deep learning algorithm. In the field of two-dimensional image recognition, deep learning algorithms are very mature, which guarantees the accuracy of the recognition result.
The above is an introduction to the method embodiments. The scheme of the present invention is further explained below through an apparatus embodiment.
Fig. 5 is a schematic structural diagram of an embodiment of the obstacle type recognition apparatus of the present invention. As shown in Fig. 5, it includes: an acquiring unit 501, a mapping unit 502 and a classifying unit 503.
The acquiring unit 501 is configured to obtain three-dimensional point cloud data corresponding to an obstacle to be recognized and send it to the mapping unit 502.
The mapping unit 502 is configured to map the three-dimensional point cloud data to a two-dimensional image and send the two-dimensional image to the classifying unit 503.
The classifying unit 503 is configured to recognize, based on the two-dimensional image, the type of the obstacle with a deep learning algorithm.
The acquiring unit 501 can obtain each obstacle detected from the three-dimensional point cloud data obtained by scanning and take each detected obstacle, respectively, as an obstacle to be recognized.
The three-dimensional point cloud data can be obtained by scanning the surroundings of an autonomous vehicle.
From the three-dimensional point cloud data obtained by scanning, zero, one or multiple obstacles may be detected. For each obstacle, its corresponding three-dimensional point cloud data can be determined, respectively, according to the prior art; for one obstacle, its corresponding three-dimensional point cloud data is a part of the three-dimensional point cloud data obtained by scanning.
For each obstacle to be recognized, the acquiring unit 501 can further obtain the three-dimensional point cloud data corresponding to the obstacle and send it to the mapping unit 502.
Correspondingly, the mapping unit 502 can map the three-dimensional point cloud data to a two-dimensional image, i.e., perform dimension reduction, transforming from three-dimensional space into two-dimensional space.
Preferably, the two-dimensional image obtained by the mapping is a two-dimensional RGB image.
Specifically, the mapping unit 502 can use the following mapping mode:
mapping the three-dimensional point cloud data to the R channel of the two-dimensional image from a first viewpoint;
mapping the three-dimensional point cloud data to the G channel of the two-dimensional image from a second viewpoint;
mapping the three-dimensional point cloud data to the B channel of the two-dimensional image from a third viewpoint;
generating the two-dimensional RGB image according to the mapping results.
The first viewpoint can be one of: a top view, a front view, a left side view;
the second viewpoint can be one of: a top view, a front view, a left side view;
the third viewpoint can be one of: a top view, a front view, a left side view;
and the first viewpoint, the second viewpoint and the third viewpoint are different viewpoints.
For example, the first viewpoint can be the top view, the second viewpoint can be the front view, and the third viewpoint can be the left side view.
Correspondingly, the mapping unit 502 can map the three-dimensional point cloud data to the R channel of the two-dimensional image from the top view, map the three-dimensional point cloud data to the G channel of the two-dimensional image from the front view, and map the three-dimensional point cloud data to the B channel of the two-dimensional image from the left side view.
After the two-dimensional RGB image is obtained, the classifying unit 503 can recognize the type of the obstacle with a deep learning algorithm based on the two-dimensional RGB image.
Preferably, the deep learning algorithm can be a convolutional neural network algorithm, etc.
The obtained two-dimensional RGB image has three channels; with a deep learning algorithm such as a convolutional neural network, the features of each viewpoint can be fully learned, which guarantees the accuracy of the recognition result.
By recognizing the two-dimensional RGB image, the type of the obstacle can be determined, such as a person, a bicycle, a motor vehicle, etc.
For the specific workflow of the apparatus embodiment shown in Fig. 5, please refer to the related description in the foregoing method embodiments, which is not repeated here.
As can be seen from the above, with the mode described in this embodiment, the obstacle to be recognized is transformed from three-dimensional space into two-dimensional space, i.e., dimension reduction is performed, to obtain a two-dimensional RGB image, and, based on the two-dimensional RGB image, the type of the obstacle is recognized with a deep learning algorithm. In the field of two-dimensional image recognition, deep learning algorithms are very mature, which guarantees the accuracy of the recognition result.
Fig. 6 shows a block diagram of an exemplary computer system/server 12 suitable for implementing embodiments of the present invention. The computer system/server 12 shown in Fig. 6 is only an example and should not impose any restriction on the function and scope of use of the embodiments of the present invention.
As shown in Fig. 6, the computer system/server 12 takes the form of a general-purpose computing device. The components of the computer system/server 12 can include, but are not limited to: one or more processors (processing units) 16, a memory 28, and a bus 18 connecting the different system components (including the memory 28 and the processors 16).
The bus 18 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The computer system/server 12 typically includes a variety of computer-system-readable media. These media can be any usable media that can be accessed by the computer system/server 12, including volatile and non-volatile media, removable and non-removable media.
The memory 28 may include computer-system-readable media in the form of volatile memory, such as a random access memory (RAM) 30 and/or a cache memory 32. The computer system/server 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, a storage system 34 can be used for reading and writing non-removable, non-volatile magnetic media (not shown in Fig. 6, commonly referred to as a "hard disk drive"). Although not shown in Fig. 6, a disk drive for reading and writing a removable non-volatile magnetic disk (such as a "floppy disk"), as well as an optical disk drive for reading and writing a removable non-volatile optical disk (such as a CD-ROM, DVD-ROM or other optical media), can be provided. In these cases, each drive can be connected to the bus 18 through one or more data media interfaces. The memory 28 may include at least one program product having a set of (for example, at least one) program modules, and these program modules are configured to perform the functions of the embodiments of the present invention.
A program/utility 40 having a set of (at least one) program modules 42 can be stored, for example, in the memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules and program data; each or some combination of these examples may include an implementation of a network environment. The program modules 42 usually execute the functions and/or methods in the embodiments described in the present invention.
The computer system/server 12 can also communicate with one or more external devices 14 (such as a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer system/server 12, and/or with any device (such as a network card, a modem, etc.) that enables the computer system/server 12 to communicate with one or more other computing devices. Such communication can be carried out through an input/output (I/O) interface 22. Moreover, the computer system/server 12 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network, such as the Internet) through a network adapter 20. As shown in Fig. 6, the network adapter 20 communicates with the other modules of the computer system/server 12 through the bus 18. It should be understood that, although not shown in the drawings, other hardware and/or software modules can be used in combination with the computer system/server 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, etc.
The processor 16 executes various functions, applications and data processing by running the programs stored in the memory 28, for example implementing the methods in the embodiments shown in Fig. 1 and Fig. 4, i.e., obtaining three-dimensional point cloud data corresponding to an obstacle to be recognized, mapping the three-dimensional point cloud data to a two-dimensional image, and, based on the two-dimensional image, recognizing the type of the obstacle with a deep learning algorithm, etc.
The two-dimensional image can be a two-dimensional RGB image.
Correspondingly, the three-dimensional point cloud data can be mapped to the R channel of the two-dimensional image from a first viewpoint, mapped to the G channel of the two-dimensional image from a second viewpoint, and mapped to the B channel of the two-dimensional image from a third viewpoint, and finally the two-dimensional RGB image is generated according to the mapping results.
The first viewpoint can be one of: a top view, a front view, a left side view;
the second viewpoint can be one of: a top view, a front view, a left side view;
the third viewpoint can be one of: a top view, a front view, a left side view;
and the first viewpoint, the second viewpoint and the third viewpoint are different viewpoints.
For the specific implementation, please refer to the respective descriptions in the foregoing method embodiments, which are not repeated here.
The present invention also discloses a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the methods in the embodiments shown in Fig. 1 and Fig. 4.
Any combination of one or more computer-readable media may be used. The computer-readable medium can be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium can be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In this document, a computer-readable storage medium can be any tangible medium that contains or stores a program, where the program can be used by or in connection with an instruction execution system, apparatus or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical cable, RF, or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatuses, methods, and the like may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is only a logical-function division, and other divisions may be used in actual implementation.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiment solutions.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
An integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
The foregoing describes merely preferred embodiments of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (12)
1. An obstacle type identification method, characterized by comprising:
obtaining three-dimensional point cloud data corresponding to an obstacle to be identified;
mapping the three-dimensional point cloud data to a two-dimensional image;
based on the two-dimensional image, identifying the type of the obstacle through a deep learning algorithm.
2. The method according to claim 1, characterized in that
the method further comprises:
obtaining each obstacle detected from three-dimensional point cloud data obtained by scanning;
respectively taking each detected obstacle as the obstacle to be identified;
wherein the three-dimensional point cloud data is obtained by scanning the environment surrounding an autonomous vehicle.
3. The method according to claim 1, characterized in that
the two-dimensional image comprises: a two-dimensional RGB image.
4. The method according to claim 3, characterized in that
mapping the three-dimensional point cloud data to the two-dimensional RGB image comprises:
mapping the three-dimensional point cloud data to the R channel of the two-dimensional image from a first viewing angle;
mapping the three-dimensional point cloud data to the G channel of the two-dimensional image from a second viewing angle;
mapping the three-dimensional point cloud data to the B channel of the two-dimensional image from a third viewing angle;
generating the two-dimensional RGB image according to the mapping results.
5. The method according to claim 4, characterized in that
the first viewing angle is one of the following: a top-down view, a front (vehicle-head) view, a left-side view;
the second viewing angle is one of the following: a top-down view, a front (vehicle-head) view, a left-side view;
the third viewing angle is one of the following: a top-down view, a front (vehicle-head) view, a left-side view;
the first viewing angle, the second viewing angle, and the third viewing angle are different viewing angles.
6. An obstacle type identification apparatus, characterized by comprising: an acquisition unit, a mapping unit, and a classification unit;
the acquisition unit is configured to obtain three-dimensional point cloud data corresponding to an obstacle to be identified and send it to the mapping unit;
the mapping unit is configured to map the three-dimensional point cloud data to a two-dimensional image and send the two-dimensional image to the classification unit;
the classification unit is configured to, based on the two-dimensional image, identify the type of the obstacle through a deep learning algorithm.
7. The apparatus according to claim 6, characterized in that
the acquisition unit is further configured to:
obtain each obstacle detected from three-dimensional point cloud data obtained by scanning;
respectively take each detected obstacle as the obstacle to be identified;
wherein the three-dimensional point cloud data is obtained by scanning the environment surrounding an autonomous vehicle.
8. The apparatus according to claim 6, characterized in that
the two-dimensional image comprises: a two-dimensional RGB image.
9. The apparatus according to claim 8, characterized in that
the mapping unit maps the three-dimensional point cloud data to the R channel of the two-dimensional image from a first viewing angle, maps the three-dimensional point cloud data to the G channel of the two-dimensional image from a second viewing angle, maps the three-dimensional point cloud data to the B channel of the two-dimensional image from a third viewing angle, and generates the two-dimensional RGB image according to the mapping results.
10. The apparatus according to claim 9, characterized in that
the first viewing angle is one of the following: a top-down view, a front (vehicle-head) view, a left-side view;
the second viewing angle is one of the following: a top-down view, a front (vehicle-head) view, a left-side view;
the third viewing angle is one of the following: a top-down view, a front (vehicle-head) view, a left-side view;
the first viewing angle, the second viewing angle, and the third viewing angle are different viewing angles.
11. A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the method according to any one of claims 1 to 5.
12. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710253581.3A CN108734058B (en) | 2017-04-18 | 2017-04-18 | Obstacle type identification method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710253581.3A CN108734058B (en) | 2017-04-18 | 2017-04-18 | Obstacle type identification method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108734058A true CN108734058A (en) | 2018-11-02 |
CN108734058B CN108734058B (en) | 2022-05-27 |
Family
ID=63924745
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710253581.3A Active CN108734058B (en) | 2017-04-18 | 2017-04-18 | Obstacle type identification method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108734058B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109919145A (en) * | 2019-01-21 | 2019-06-21 | 江苏徐工工程机械研究院有限公司 | A kind of mine card test method and system based on 3D point cloud deep learning |
CN109948448A (en) * | 2019-02-20 | 2019-06-28 | 苏州风图智能科技有限公司 | For the detection method of 3D barrier, device, system and computer storage medium |
CN110046569A (en) * | 2019-04-12 | 2019-07-23 | 北京百度网讯科技有限公司 | A kind of data processing method, device and electronic equipment |
CN110163064A (en) * | 2018-11-30 | 2019-08-23 | 腾讯科技(深圳)有限公司 | A kind of recognition methods of Sign for road, device and storage medium |
CN111160360A (en) * | 2018-11-07 | 2020-05-15 | 北京四维图新科技股份有限公司 | Image recognition method, device and system |
CN111310811A (en) * | 2020-02-06 | 2020-06-19 | 东华理工大学 | Large-scene three-dimensional point cloud classification method based on multi-dimensional feature optimal combination |
CN111325049A (en) * | 2018-12-13 | 2020-06-23 | 北京京东尚科信息技术有限公司 | Commodity identification method and device, electronic equipment and readable medium |
CN112016638A (en) * | 2020-10-26 | 2020-12-01 | 广东博智林机器人有限公司 | Method, device and equipment for identifying steel bar cluster and storage medium |
CN112037521A (en) * | 2020-07-24 | 2020-12-04 | 长沙理工大学 | Vehicle type identification method and hazardous chemical substance vehicle identification method |
CN114721404A (en) * | 2022-06-08 | 2022-07-08 | 超节点创新科技(深圳)有限公司 | Obstacle avoidance method, robot and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101408931A (en) * | 2007-10-11 | 2009-04-15 | Mv科技软件有限责任公司 | System and method for 3D object recognition |
CN103093191A (en) * | 2012-12-28 | 2013-05-08 | 中电科信息产业有限公司 | Object recognition method with three-dimensional point cloud data and digital image data combined |
CN104636725A (en) * | 2015-02-04 | 2015-05-20 | 华中科技大学 | Gesture recognition method based on depth image and gesture recognition system based on depth images |
- 2017-04-18 CN CN201710253581.3A patent/CN108734058B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101408931A (en) * | 2007-10-11 | 2009-04-15 | Mv科技软件有限责任公司 | System and method for 3D object recognition |
CN103093191A (en) * | 2012-12-28 | 2013-05-08 | 中电科信息产业有限公司 | Object recognition method with three-dimensional point cloud data and digital image data combined |
CN104636725A (en) * | 2015-02-04 | 2015-05-20 | 华中科技大学 | Gesture recognition method based on depth image and gesture recognition system based on depth images |
Non-Patent Citations (1)
Title |
---|
NIKOLAUS KARPINSKY, ET AL.: "Three-bit representation of three-dimensional range data", APPLIED OPTICS *
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111160360A (en) * | 2018-11-07 | 2020-05-15 | 北京四维图新科技股份有限公司 | Image recognition method, device and system |
CN111160360B (en) * | 2018-11-07 | 2023-08-01 | 北京四维图新科技股份有限公司 | Image recognition method, device and system |
CN110163064B (en) * | 2018-11-30 | 2022-04-05 | 腾讯科技(深圳)有限公司 | Method and device for identifying road marker and storage medium |
CN110163064A (en) * | 2018-11-30 | 2019-08-23 | 腾讯科技(深圳)有限公司 | A kind of recognition methods of Sign for road, device and storage medium |
CN111325049A (en) * | 2018-12-13 | 2020-06-23 | 北京京东尚科信息技术有限公司 | Commodity identification method and device, electronic equipment and readable medium |
CN109919145A (en) * | 2019-01-21 | 2019-06-21 | 江苏徐工工程机械研究院有限公司 | A kind of mine card test method and system based on 3D point cloud deep learning |
CN109948448B (en) * | 2019-02-20 | 2021-03-12 | 苏州风图智能科技有限公司 | Method, device and system for detecting 3D obstacle and computer storage medium |
CN109948448A (en) * | 2019-02-20 | 2019-06-28 | 苏州风图智能科技有限公司 | For the detection method of 3D barrier, device, system and computer storage medium |
CN110046569A (en) * | 2019-04-12 | 2019-07-23 | 北京百度网讯科技有限公司 | A kind of data processing method, device and electronic equipment |
CN110046569B (en) * | 2019-04-12 | 2022-04-12 | 北京百度网讯科技有限公司 | Unmanned driving data processing method and device and electronic equipment |
CN111310811A (en) * | 2020-02-06 | 2020-06-19 | 东华理工大学 | Large-scene three-dimensional point cloud classification method based on multi-dimensional feature optimal combination |
CN112037521A (en) * | 2020-07-24 | 2020-12-04 | 长沙理工大学 | Vehicle type identification method and hazardous chemical substance vehicle identification method |
CN112037521B (en) * | 2020-07-24 | 2021-10-22 | 长沙理工大学 | Vehicle type identification method and hazardous chemical substance vehicle identification method |
CN112016638A (en) * | 2020-10-26 | 2020-12-01 | 广东博智林机器人有限公司 | Method, device and equipment for identifying steel bar cluster and storage medium |
CN112016638B (en) * | 2020-10-26 | 2021-04-06 | 广东博智林机器人有限公司 | Method, device and equipment for identifying steel bar cluster and storage medium |
CN114721404A (en) * | 2022-06-08 | 2022-07-08 | 超节点创新科技(深圳)有限公司 | Obstacle avoidance method, robot and storage medium |
CN114721404B (en) * | 2022-06-08 | 2022-09-13 | 超节点创新科技(深圳)有限公司 | Obstacle avoidance method, robot and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108734058B (en) | 2022-05-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108734058A (en) | Obstacle identity recognition methods, device, equipment and storage medium | |
CN109101861A (en) | Obstacle identity recognition methods, device, equipment and storage medium | |
JP6842520B2 (en) | Object detection methods, devices, equipment, storage media and vehicles | |
US20210279503A1 (en) | Image processing method, apparatus, and device, and storage medium | |
JP6364049B2 (en) | Vehicle contour detection method, device, storage medium and computer program based on point cloud data | |
CN109271944A (en) | Obstacle detection method, device, electronic equipment, vehicle and storage medium | |
CN109214980A (en) | A kind of 3 d pose estimation method, device, equipment and computer storage medium | |
CN108319901B (en) | Biopsy method, device, computer equipment and the readable medium of face | |
CN109116374A (en) | Determine the method, apparatus, equipment and storage medium of obstacle distance | |
CN109344804A (en) | A kind of recognition methods of laser point cloud data, device, equipment and medium | |
CN109343061A (en) | Transducer calibration method, device, computer equipment, medium and vehicle | |
CN109492507A (en) | The recognition methods and device of the traffic light status, computer equipment and readable medium | |
CN110363817B (en) | Target pose estimation method, electronic device, and medium | |
WO2018170472A1 (en) | Joint 3d object detection and orientation estimation via multimodal fusion | |
CN105333883B (en) | A kind of guidance path track display method and device for head up display | |
CN108629231A (en) | Obstacle detection method, device, equipment and storage medium | |
CN108805979A (en) | A kind of dynamic model three-dimensional rebuilding method, device, equipment and storage medium | |
CN108491848B (en) | Image saliency detection method and device based on depth information | |
CN111247557A (en) | Method and system for detecting moving target object and movable platform | |
CN111369617B (en) | 3D target detection method of monocular view based on convolutional neural network | |
CN111931764B (en) | Target detection method, target detection frame and related equipment | |
US10915781B2 (en) | Scene reconstructing system, scene reconstructing method and non-transitory computer-readable medium | |
CN109229109A (en) | Judge the method, apparatus, equipment and computer storage medium of vehicle heading | |
CN109118532A (en) | Vision depth of field estimation method, device, equipment and storage medium | |
CN109085620A (en) | Automatic driving vehicle positions abnormal calibration method, apparatus, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |