CN111507339A - Target point cloud obtaining method based on intensity image - Google Patents


Publication number
CN111507339A
Authority
CN
China
Prior art keywords: data, intensity image, pixel, image data, pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010301597.9A
Other languages
Chinese (zh)
Other versions
CN111507339B (en)
Inventor
Zhu Xiang (朱翔)
Current Assignee
Beijing Shenzhen Survey Technology Co ltd
Original Assignee
Beijing Shenzhen Survey Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Shenzhen Survey Technology Co ltd
Priority to CN202010301597.9A
Publication of CN111507339A
Application granted
Publication of CN111507339B
Legal status: Active
Anticipated expiration: not listed

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/20: Image enhancement or restoration using local operators
    • G06T 5/30: Erosion or dilatation, e.g. thinning
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a target point cloud obtaining method based on an intensity image. The method comprises: obtaining original point cloud data and extracting intensity image data from it; sampling the intensity image data according to a preset grid step to determine sampling pixel points, and from these determining seed pixel points; adding a seed label and judgment data to each pixel point, then determining the seed pixel point to which each first pixel point belongs according to the first distance data between a first seed pixel point and the first pixel points in its second neighborhood and the judgment data of each first pixel point, and generating first intensity image data; calling a preset binarization algorithm to binarize the first intensity image data to obtain second intensity image data; calling a preset morphological processing algorithm to perform target identification on the second intensity image data to obtain target intensity image data; and acquiring target point cloud data from the original point cloud data according to the pixel coordinate value of each pixel in the target intensity image data.

Description

Target point cloud obtaining method based on intensity image
Technical Field
The invention relates to the field of data processing, in particular to a target point cloud obtaining method based on an intensity image.
Background
In the current field of three-dimensional imaging, time-of-flight research is one of the hot topics. Compared with other three-dimensional imaging technologies, the time-of-flight method can obtain a transient image, which means that little subsequent processing is required when calculating the depth of field; a higher frame rate can therefore be achieved, and cost is saved because the system overhead of subsequent processing is reduced. Current time-of-flight three-dimensional imaging research mostly focuses on fields such as transient imaging, super-resolution, non-line-of-sight detection imaging and time-of-flight mass spectrometry. In addition, because the ranging range of a time-of-flight system can be adjusted by changing the pulse frequency, field-of-view size and light-source intensity of the laser, its detection range is highly flexible: it is suitable for short-range operations such as face recognition, gesture recognition and tracking, motion-sensing recognition and game interaction, and is also suitable for detecting targets at long distances, so its potential application scenarios are very broad. However, these application scenarios require target objectification of the three-dimensional point cloud, i.e., extracting the target point cloud of interest from the background.
Because time-of-flight three-dimensional imaging was commercialized and entered household service relatively late, there are few target extraction schemes for the point clouds it produces, and extracting a target using only the spatial position information of the point cloud is difficult. At present, most methods for determining target point cloud data from three-dimensional point cloud data are designed for specific scenes: feature-value methods have a low extraction rate in scenes with large deviations and many outliers, and a single random sample consensus algorithm used to extract the target point cloud is prone to false extraction at boundaries.
Disclosure of Invention
Aiming at the defects of the prior art, the embodiments of the invention provide a target point cloud obtaining method based on an intensity image, in which target point cloud data is obtained from the original point cloud data captured by a time-of-flight camera, exploiting the characteristic that time-of-flight three-dimensional imaging can simultaneously obtain point cloud positions and intensity information.
In order to solve the above problem, in a first aspect, the present invention provides a method for obtaining a target point cloud based on an intensity image, the method including:
s1, acquiring original point cloud data, and extracting intensity image data from the original point cloud data;
s2, sampling the intensity image data according to a preset grid step length, and determining sampling pixel points;
s3, determining seed pixel points according to the gradient values of all the pixel points in the first neighborhood of the sampling pixel points;
s4, adding a seed label to each pixel point in the intensity image data, setting the seed label as a first label, adding judgment data to each pixel point, and setting the judgment data as initial judgment data;
s5, performing distance calculation processing on a first pixel point in a second neighborhood of a first seed pixel point and the first seed pixel point to obtain first distance data of the first pixel point and the first seed pixel point;
s6, when the first distance data is smaller than the judgment data of the first pixel point, setting the seed label of the first pixel point to the first seed pixel point, and setting the judgment data of the first pixel point to the first distance data;
s7, iteratively executing S5 and S6 until the first distance data between the first pixel point corresponding to each first seed pixel point and the first seed pixel point is larger than or equal to the judgment data of the first pixel point, and generating first intensity image data according to the processed pixel point data;
s8, calling a preset binarization algorithm to carry out binarization processing on the first intensity image data to obtain second intensity image data;
s9, calling a preset morphological processing algorithm to perform target identification processing on the second intensity image data to obtain target intensity image data;
and S10, acquiring target point cloud data from the original point cloud data according to the pixel coordinate value of each pixel in the target intensity image data.
Preferably, the determining the seed pixel according to the gradient value of each pixel in the first neighborhood of the sampling pixel specifically includes:
sorting the gradient values of all the pixel points in the first neighborhood of the sampling pixel points according to an ascending order, and determining a first gradient value arranged at a first position;
and determining the pixel point corresponding to the first gradient value as the seed pixel point.
Preferably, the distance calculation processing performed on the first pixel point in the second neighborhood of the first seed pixel point and the first seed pixel point, obtaining the first distance data between the first pixel point and the first seed pixel point, specifically includes:
performing spatial distance calculation processing according to the pixel coordinate value of the first seed pixel point and the pixel coordinate value of the first pixel point to obtain first spatial distance data of the first seed pixel point and the first pixel point;
performing first color distance data calculation processing according to the color value of the first seed pixel point and the color value of the first pixel point to obtain first color distance data of the first seed pixel point and the first pixel point;
and calling a preset distance calculation method to perform distance calculation processing on the first spatial distance data and the first color distance data to obtain the first distance data.
Preferably, the invoking a preset binarization algorithm to perform binarization processing on the first intensity image data to obtain second intensity image data specifically includes:
and calling a maximum between-class variance algorithm to carry out binarization processing on the first intensity image data, and generating second intensity image data according to each pixel data after processing.
Preferably, the step of calling a preset morphological processing algorithm to perform target identification processing on the second intensity image data to obtain target intensity image data specifically includes:
and performing expansion operation processing on the second intensity image data by adopting a disc expansion template with the radius of 3 to obtain the target intensity image data.
Preferably, the acquiring target point cloud data from the original point cloud data according to the pixel coordinate value of each pixel in the target intensity image data specifically includes:
mapping pixel coordinate values of all pixel points in the target intensity image data to the original point cloud data, and determining target points according to the mapped pixel coordinate values;
and generating the target point cloud data according to the point cloud data corresponding to the target point.
Preferably, before the sampling processing of the intensity image data according to the preset grid step length and the determination of the sampling pixel point, the method further includes:
and calling a first preset image model conversion method to perform first model conversion processing on the intensity image data to generate converted intensity image data.
Further preferably, after generating the first intensity image data from the processed pixel point data, the method further includes:
calling a second preset image model conversion method to perform second model conversion processing on the first intensity image data to generate converted first intensity image data; the second preset image model conversion method and the first preset image model conversion method have a corresponding relation.
In a second aspect, the present invention provides an apparatus comprising a memory for storing a program and a processor for performing the method of any of the first aspects.
In a third aspect, the invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any of the first aspects.
According to the target point cloud obtaining method based on the intensity image, the original point cloud data is obtained according to the imaging characteristics of the time-of-flight camera. In the process of obtaining the target point cloud, the target range is first determined in the intensity image data of the original point cloud data, and the target point cloud data is then obtained from the original point cloud data according to the pixel-coordinate correspondence between the target pixels in the intensity image data and the points of the original point cloud data. The error of the obtained target point cloud data is small, the extraction process is simplified, and the calculation amount and system overhead of obtaining the target point cloud are reduced.
Drawings
Fig. 1 is a flowchart of a target point cloud obtaining method based on an intensity image according to an embodiment of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be further noted that, for the convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
The invention discloses a target point cloud obtaining method based on an intensity image. Fig. 1 is a flowchart of the method according to an embodiment of the present invention. As shown, the method comprises the following steps:
s1, acquiring the original point cloud data and extracting the intensity image data from the original point cloud data.
Specifically, the original point cloud data is a point cloud data set obtained by scanning or shooting a target scene through a three-dimensional imaging system and then performing data processing. The raw point cloud data has geometric position information and intensity data information. The intensity data information of the point cloud data acquired by the time-of-flight camera is usually obtained by acquiring a color image through the camera, and then assigning the color information of the pixel at the corresponding position to the corresponding point in the point cloud. The intensity image data extracted from the original point cloud data in the embodiment of the present invention is actually image information data including color information.
The original point cloud data is generated by a time-of-flight camera acquiring image data of the target scene: the camera receives an image acquisition instruction and shoots the target scene according to that instruction to generate the original point cloud data. Since the resolution of the time-of-flight camera is M × N, the original point cloud data generated by the camera comprises M × N point cloud pixel points, where M and N are positive integers.
In a specific example of the embodiment of the present invention, the time-of-flight camera includes a sensor with a resolution of 320 × 240 and a matched time-of-flight controller, the time-of-flight camera uses an infrared light source of 850nm as a light emitting source, and the collected depth data includes phase information, intensity amplitude information, ambient light, and a flag bit.
And S2, sampling the intensity image data according to the preset grid step length, and determining sampling pixel points.
In the embodiment of the present invention, the intensity image data has M × N pixel points, and the intensity image data is sampled according to an S × S grid to obtain K sampling pixel points; the number K of sampling pixel points and the preset grid step then satisfy the relationship K × S × S = M × N, where K, S, M and N are positive integers.
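As an illustrative sketch (plain Python with hypothetical names, not code from the patent), the grid sampling of step S2, with one sampling pixel point per S × S cell, might look like:

```python
def grid_sample(M, N, S):
    """Return one sampling pixel point (row, col) at the centre of each
    S x S grid cell of an M x N intensity image."""
    samples = []
    for r in range(S // 2, M, S):
        for c in range(S // 2, N, S):
            samples.append((r, c))
    return samples

# For dimensions that are exact multiples of S, the number of sampling
# pixel points K satisfies K * S * S == M * N, as stated above.
samples = grid_sample(240, 320, 40)
print(len(samples))  # 48, and 48 * 40 * 40 == 240 * 320
```

The 240 × 320 size and step 40 are example values chosen only to make the K × S × S = M × N relation easy to verify.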
Before the method provided by the invention is implemented, the preset grid step is determined after comprehensively considering and evaluating the data processing amount and the expected quality of the processed image data. In the embodiment of the invention, the processing of the intensity image data only needs to determine the target position information in the point cloud data, so after weighing operation speed against classification effect, the preset grid step is set to 500. In this way the target range can be determined while the data processing speed is improved.
In a preferred embodiment of the present invention, before the intensity image data is sampled and the sampling pixel points are determined, the color model of the intensity image data is checked; when the intensity image data is RGB color model image data, a first preset image model conversion method is invoked to perform a first model conversion on the intensity image data and convert it into LAB color model image data.
And S3, determining a seed pixel according to the gradient value of each pixel in the first neighborhood of the sampling pixel.
Specifically, the gradient values of the pixel points in a first neighborhood of the sampling pixel point are obtained, and the pixel point with the minimum gradient value is determined as a seed pixel point. After each sampling pixel point collected in step S2 is processed, a plurality of seed pixel points are determined in the intensity image data; selecting the minimum-gradient pixel ensures that seed pixel points are not placed on edges or noise points.
In the preferred scheme of the embodiment of the invention, the gradient values of all the pixel points in the first neighborhood of the sampling pixel point are sorted according to the ascending order, and the first gradient value arranged at the first position is determined, and then the pixel point corresponding to the first gradient value is determined as the seed pixel point.
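The seed selection in step S3 can be sketched as follows (a minimal Python illustration with hypothetical names; a squared finite-difference gradient and a 3 × 3 first neighborhood are assumed, since the patent does not fix either):

```python
def gradient(img, r, c):
    """Squared gradient magnitude from simple central differences."""
    gx = img[r][c + 1] - img[r][c - 1]
    gy = img[r + 1][c] - img[r - 1][c]
    return gx * gx + gy * gy

def pick_seed(img, r, c):
    """Move sampling pixel (r, c) to the minimum-gradient pixel in its
    3 x 3 neighbourhood, so seeds avoid edges and noise points."""
    best = None
    for rr in range(r - 1, r + 2):
        for cc in range(c - 1, c + 2):
            g = gradient(img, rr, cc)
            if best is None or g < best[0]:
                best = (g, rr, cc)
    return best[1], best[2]

# A vertical edge at column 3: the sample at (2, 2) sits next to the edge,
# and the seed is moved to a flat, zero-gradient pixel instead.
img = [[0, 0, 0, 10, 10] for _ in range(5)]
print(pick_seed(img, 2, 2))  # (1, 1)
```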
And S4, adding a seed label to each pixel point in the intensity image data, setting the seed label as a first label, adding judgment data to each pixel point, and setting the judgment data as initial judgment data.
Specifically, each pixel point in the intensity image data is initialized, that is, a seed label and judgment data are added to each pixel point.
The seed label is used to indicate to which seed pixel point a pixel point belongs. At initialization, the seed label is set to indicate that the pixel point does not yet belong to any seed pixel point; that is, the first label means "belongs to no seed pixel point". For example, in a specific example of the embodiment of the present invention, the seed label of a pixel point i is initialized to -1.
The judgment data indicates the distance between a pixel point and the seed pixel point to which it belongs. Since at initialization a pixel point belongs to no seed pixel point, the judgment data of each pixel point is initialized to the initial judgment data, which is any value exceeding the maximum distance between any two pixel points in the intensity image data. For example, in a specific example of the embodiment of the present invention, the judgment data of the pixel point i is initialized to infinity.
S5, distance calculation processing is carried out on the first pixel points in the second neighborhood of the first seed pixel point and the first seed pixel point, and first distance data of each first pixel point and the first seed pixel point are obtained.
Specifically, the pixel points in the second neighborhood of a first seed pixel point are first pixel points, and distance calculation processing is performed between each first pixel point and the first seed pixel point to obtain their first distance data. The second neighborhood determines the range around the first seed pixel point, that is, which pixel points need to have their first distance data to the first seed pixel point calculated. The size of the second neighborhood may be determined, before executing the method provided by the embodiment of the invention, by evaluating the data calculation amount through a number of tests.
For example, if the preset grid step is S, the size of the second neighborhood is set to 2S; that is, the pixel points in the 2S × 2S neighborhood of the first seed pixel point are traversed, and the first distance data between each of these pixel points and the first seed pixel point is calculated.
In order to determine the seed pixel point to which each pixel point in the intensity image data belongs, traversing the first pixel point in the second neighborhood of each seed pixel point, and calculating the first distance between the first pixel point and the first seed pixel point, so as to determine the first seed pixel point to which the pixel point belongs according to the first distance in the following steps.
In a preferred embodiment of the present invention, calculating the first distance between the first seed pixel point and the pixel point in the second neighborhood includes the following steps:
firstly, spatial distance calculation processing is carried out according to the pixel coordinate value of the first seed pixel point and the pixel coordinate value of the first pixel point, and first spatial distance data of the first seed pixel point and the first pixel point are obtained.
Specifically, the first spatial distance data d_s is calculated from the pixel coordinate values (x, y) of the first pixel point i and the first seed pixel point j according to the formula:

d_s = sqrt((x_j - x_i)^2 + (y_j - y_i)^2)
and secondly, calculating and processing the first color distance data according to the color value of the first seed pixel point and the color value of the first pixel point to obtain the first color distance data of the first seed pixel point and the first pixel point.
Specifically, the first color distance data d_c is calculated from the l, a and b values of the first pixel point i and the first seed pixel point j according to the formula:

d_c = sqrt((l_j - l_i)^2 + (a_j - a_i)^2 + (b_j - b_i)^2)
and finally, calling a preset distance calculation method to perform distance calculation processing on the first spatial distance data and the first color distance data to obtain first distance data.
Specifically, the first distance data D is calculated from the first spatial distance data and the first color distance data using the formula:

D = sqrt(d_c^2 + (d_s / S)^2 × m^2)

where d_c is the first color distance data, d_s is the first spatial distance data, S is the preset grid step, and m is a constant used to set the importance of spatial proximity: the larger m is, the more spatial proximity outweighs color similarity. In a specific example of the embodiment of the present invention, m is 10 and S is 500.
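As a hedged sketch of the three distance calculations above (Python, hypothetical names; a SLIC-style combination with d_s normalised by S and scaled by m is assumed, consistent with the roles of d_c, d_s, S and m described in the text):

```python
import math

def first_distance(p_i, p_j, m=10.0, S=500.0):
    """p = (x, y, l, a, b). Returns (d_s, d_c, D) for pixel i and seed j."""
    d_s = math.hypot(p_j[0] - p_i[0], p_j[1] - p_i[1])       # spatial distance
    d_c = math.sqrt((p_j[2] - p_i[2]) ** 2 +
                    (p_j[3] - p_i[3]) ** 2 +
                    (p_j[4] - p_i[4]) ** 2)                  # colour distance (l, a, b)
    D = math.sqrt(d_c ** 2 + (d_s / S) ** 2 * m ** 2)        # combined distance
    return d_s, d_c, D

# Two pixels with identical colour, 5 pixels apart spatially:
d_s, d_c, D = first_distance((0, 0, 50, 0, 0), (3, 4, 50, 0, 0))
print(d_s, d_c)  # 5.0 0.0; D is approximately (5 / 500) * 10 = 0.1
```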
S6, when the first distance data is smaller than the judgment data of the first pixel point, setting the seed label of the first pixel point to the first seed pixel point, and setting the judgment data of the first pixel point to the first distance data.
Specifically, it is judged whether the first distance data is smaller than the judgment data of the first pixel point. When the first distance data is smaller, the first pixel point has found a seed pixel point closer to it; at this moment, the seed label of the first pixel point is set to the first seed pixel point, and the judgment data of the first pixel point is updated to the first distance data. When the first distance data is greater than or equal to the judgment data of the first pixel point, the seed pixel point to which the first pixel point already belongs is closer than the first seed pixel point, and the first pixel point is not processed.
It should be noted that when this step is executed for the first time, the judgment data of each pixel point is still the initial judgment data and is therefore certainly greater than the first distance data, so the seed label of the first pixel point is set to the first seed pixel point and its judgment data is updated to the first distance data.
Each first seed pixel point in the intensity image data is processed in this step, determining the seed pixel point to which every pixel point in the intensity image data belongs.
And S7, iteratively executing S5 and S6 until the first distance data between the first pixel points corresponding to each first seed pixel point and that first seed pixel point are larger than or equal to the judgment data of the first pixel points, and generating first intensity image data according to the processed pixel point data.
Specifically, after the processing of steps S5-S6, a seed pixel point is assigned to each pixel point in the intensity image data, but only one pass has been performed, so it cannot yet be concluded that the assigned seed pixel point is the final one. S5 and S6 are therefore executed iteratively on the intensity image data; when the judgment data of every pixel point in the intensity image data is smaller than the recalculated first distance data, the determination of the seed pixel point to which each pixel point belongs is complete, that is, the intensity image data has been divided into a plurality of region blocks centered on the seed pixel points. At this moment, the first intensity image data is generated from all the processed pixel point data.
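Steps S4 to S7 can be condensed into the following Python sketch on a tiny grayscale image (hypothetical names; seed positions are kept fixed, the colour distance is reduced to a grayscale difference, and assignment repeats until no label changes, mirroring the stopping condition above):

```python
import math

def assign_labels(img, seeds, S, m=10.0):
    """Assign every pixel to its nearest seed (combined distance) found in
    a 2S x 2S window; seed label -1 and infinite judgment data initially."""
    M, N = len(img), len(img[0])
    label = [[-1] * N for _ in range(M)]        # S4: seed label = first label
    judge = [[math.inf] * N for _ in range(M)]  # S4: initial judgment data
    changed = True
    while changed:                              # S7: iterate until stable
        changed = False
        for k, (sr, sc) in enumerate(seeds):
            for r in range(max(0, sr - S), min(M, sr + S + 1)):
                for c in range(max(0, sc - S), min(N, sc + S + 1)):
                    d_s = math.hypot(r - sr, c - sc)
                    d_c = abs(img[r][c] - img[sr][sc])
                    D = math.sqrt(d_c ** 2 + (d_s / S) ** 2 * m ** 2)
                    if D < judge[r][c]:         # S6: a closer seed was found
                        judge[r][c] = D
                        label[r][c] = k
                        changed = True
    return label

# Dark left half, bright right half, one seed in each region:
img = [[0, 0, 0, 9, 9, 9] for _ in range(3)]
labels = assign_labels(img, [(1, 1), (1, 4)], S=3)
print(labels[0][0], labels[2][5])  # 0 1
```

With fixed seeds the loop stabilises after one full pass; the patent's iteration covers the general case in which assignments keep improving across passes.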
In a preferred embodiment of the present invention, after the first intensity image data is generated, a second preset image model conversion method is invoked to perform a second model conversion on the first intensity image data, converting it from LAB color model image data back into RGB color model image data.
And S8, calling a preset binarization algorithm to carry out binarization processing on the first intensity image data to obtain second intensity image data.
Specifically, in order to extract the target data information in the first intensity image data, a preset binarization algorithm is called to binarize the first intensity image data, which increases the efficiency of identifying the target data.
In a preferred scheme of the embodiment of the invention, the maximum between-class variance (Otsu) algorithm is called to binarize the first intensity image data, and the second intensity image data is generated from the processed pixel data. The threshold selection when using the maximum between-class variance algorithm can be determined according to the number of targets: if the targets are all at the same depth, a single threshold is used; if not, additional thresholds are introduced according to the depth levels. For example, in a specific example of the embodiment of the present invention, a single target object needs to be extracted from the original point cloud data, so a single threshold is selected; the maximum between-class variance algorithm binarizes the first intensity image data by dividing each pixel point according to the threshold, and the binarized second intensity image data is obtained.
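A minimal pure-Python version of the maximum between-class variance (Otsu) threshold used in S8 (hypothetical names; 8-bit intensities and a single threshold are assumed):

```python
def otsu_threshold(pixels):
    """Return the threshold maximising between-class variance (8-bit input)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(256):
        w0 += hist[t]                     # pixels in class 0 (<= t)
        if w0 == 0:
            continue
        w1 = total - w0                   # pixels in class 1 (> t)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal data: the threshold falls between the two intensity clusters.
pixels = [10] * 50 + [200] * 50
t = otsu_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]
```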
And S9, calling a preset morphological processing algorithm to perform target identification processing on the second intensity image data to obtain target intensity image data.
Specifically, in a preferred embodiment of the present invention, a disc expansion template with a radius of 3 is used to perform a dilation operation on the second intensity image data to obtain the target intensity image data. The dilation appropriately enlarges the target region and retains target pixel points to the greatest extent, so that the obtained target intensity image keeps all the pixel points of the target object, greatly improving the completeness of the target point cloud extraction.
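The dilation in S9 can be sketched as follows (plain Python, hypothetical names; the disc structuring element is built from the circle equation, and radius 1 is shown in the demo for brevity where the embodiment uses radius 3):

```python
def disk_dilate(img, radius):
    """Binary dilation of img with a disc-shaped structuring element."""
    M, N = len(img), len(img[0])
    offs = [(dr, dc)
            for dr in range(-radius, radius + 1)
            for dc in range(-radius, radius + 1)
            if dr * dr + dc * dc <= radius * radius]
    out = [[0] * N for _ in range(M)]
    for r in range(M):
        for c in range(N):
            # A pixel turns on if any pixel under the disc is on.
            if any(0 <= r + dr < M and 0 <= c + dc < N and img[r + dr][c + dc]
                   for dr, dc in offs):
                out[r][c] = 1
    return out

# A single target pixel grows into a disc, slightly enlarging the target
# region while keeping every original target pixel.
img = [[0] * 5 for _ in range(5)]
img[2][2] = 1
out = disk_dilate(img, 1)
print(sum(map(sum, out)))  # 5 (a plus-shaped disc of radius 1)
```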
And S10, acquiring target point cloud data from the original point cloud data according to the pixel coordinate value of each pixel in the target intensity image data.
Specifically, the intensity image data is extracted from the original point cloud data, so that each pixel point of the intensity image data has a pixel coordinate corresponding relationship with each point cloud data of the original point cloud data, and in the process of processing the intensity image data, the pixel coordinate value of the intensity image data is not changed, so that each pixel in the target intensity image data obtained after processing has the same pixel coordinate corresponding relationship with the original point cloud data.
In a preferred embodiment of the present invention, the pixel coordinate values of the pixel points in the target intensity image data are mapped to the original point cloud data, the target points are determined according to the mapped pixel coordinate values, and then the target point cloud data is generated according to the point cloud data corresponding to the target points.
In a specific example of the embodiment of the present invention, the target intensity image data includes a pixel point k with a pixel coordinate value of (r, c), and then the point cloud data point with the pixel coordinate value of (r, c) in the original point cloud data is determined as the target point, so that all the target points can be determined in the original point cloud, and then the target point cloud data is generated according to the point cloud data corresponding to all the target points.
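Step S10 reduces to direct indexing, since the processing never changes pixel coordinates; a minimal Python sketch with hypothetical names:

```python
def extract_target_points(point_cloud, target_mask):
    """point_cloud[r][c] = (x, y, z); target_mask[r][c] is 1 for target
    pixels of the target intensity image. Pixel coordinates (r, c) map
    one-to-one onto the original M x N point cloud."""
    return [point_cloud[r][c]
            for r in range(len(target_mask))
            for c in range(len(target_mask[0]))
            if target_mask[r][c]]

# A 2 x 3 point cloud and a mask marking two target pixels:
cloud = [[(r, c, r + c) for c in range(3)] for r in range(2)]
mask = [[0, 1, 0], [0, 0, 1]]
print(extract_target_points(cloud, mask))  # [(0, 1, 1), (1, 2, 3)]
```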
The embodiment of the invention provides a target point cloud obtaining method based on an intensity image. In accordance with the imaging characteristics of a time-of-flight camera, the target point cloud is obtained from the original point cloud data: the target range is first determined in the intensity image data extracted from the original point cloud data, and the target point cloud data is then obtained from the original point cloud data according to the pixel coordinate correspondence between the target pixels in the intensity image data and the points of the original point cloud data. The obtained target point cloud data has a small error, the extraction process reduces the complexity of obtaining target point cloud data from point cloud data, and the calculation amount and system overhead of the target point cloud acquisition process are thereby reduced.
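The maximum between-class variance (Otsu) binarization that step S8 of the summarized pipeline invokes can be sketched as follows, assuming the intensity values are quantized to 8 bits. The function name and the toy bimodal image are illustrative assumptions, not part of the patent.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (Otsu's method)
    for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)                        # cumulative class-0 pixel counts
    cum_mean = np.cumsum(hist * np.arange(256))  # cumulative class-0 intensity sums
    best_t, best_var = 0, 0.0
    for t in range(255):
        w0, w1 = cum[t], total - cum[t]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / w0
        mu1 = (cum_mean[-1] - cum_mean[t]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2  # scaled between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal first intensity image: background near 10, target near 200.
first_intensity = np.full((4, 4), 10, dtype=np.uint8)
first_intensity[:, 2:] = 200
t = otsu_threshold(first_intensity)     # threshold falls between the two modes
second_intensity = first_intensity > t  # binary second intensity image
```

The threshold separating background from target is chosen automatically from the histogram, which is why the pipeline needs no hand-tuned cutoff; in practice the same result is obtained with `cv2.threshold(..., cv2.THRESH_OTSU)`.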
The second embodiment of the invention provides a device comprising a memory and a processor, where the memory stores a program and may be connected to the processor through a bus. The memory may be a non-volatile memory, such as a hard disk or a flash memory, in which a software program and a device driver are stored. The software program can perform the various functions of the methods provided by the embodiments of the present invention; the device driver may be a network or interface driver. The processor is used for executing the software program, which, when executed, implements the method provided by the first embodiment of the invention.
A third embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method provided in the first embodiment of the present invention is implemented.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, a software module executed by a processor, or a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A target point cloud obtaining method based on an intensity image is characterized by comprising the following steps:
s1, acquiring original point cloud data, and extracting intensity image data from the original point cloud data;
s2, sampling the intensity image data according to a preset grid step length, and determining sampling pixel points;
s3, determining seed pixel points according to the gradient values of all the pixel points in the first neighborhood of the sampling pixel points;
s4, adding a seed label to each pixel point in the intensity image data, setting the seed label as a first label, adding judgment data to each pixel point, and setting the judgment data as initial judgment data;
s5, performing distance calculation processing on a first pixel point in a second field of a first seed pixel point and the first seed pixel point to obtain first distance data of the first pixel point and the first seed pixel point;
s6, when the first distance data is smaller than the determination data of the first pixel point, setting the seed label of the first pixel point as a first seed pixel point, and setting the determination data of the first pixel point as the first distance data;
s7, iteratively executing S5 and S6 until the first distance data between the first pixel point corresponding to each first seed pixel point and the first seed pixel point is larger than or equal to the judgment data of the first pixel point, and generating first intensity image data according to the processed pixel point data;
s8, calling a preset binarization algorithm to carry out binarization processing on the first intensity image data to obtain second intensity image data;
s9, calling a preset morphological processing algorithm to perform target identification processing on the second intensity image data to obtain target intensity image data;
and S10, acquiring target point cloud data from the original point cloud data according to the pixel coordinate value of each pixel in the target intensity image data.
2. The method of claim 1, wherein the determining the seed pixel according to the gradient value of each pixel in the first neighborhood of the sampling pixel specifically comprises:
sorting the gradient values of all the pixel points in the first neighborhood of the sampling pixel point in ascending order, and determining the first gradient value in the sorted sequence (i.e., the smallest gradient value);
and determining the pixel point corresponding to the first gradient value as the seed pixel point.
3. The method for obtaining an intensity-image-based target point cloud according to claim 1, wherein the step of performing distance calculation processing on a first pixel point in a second neighborhood of a first seed pixel point and the first seed pixel point to obtain first distance data between the first pixel point and the first seed pixel point specifically comprises:
performing spatial distance calculation processing according to the pixel coordinate value of the first seed pixel point and the pixel coordinate value of the first pixel point to obtain first spatial distance data of the first seed pixel point and the first pixel point;
performing first color distance data calculation processing according to the color value of the first seed pixel point and the color value of the first pixel point to obtain first color distance data of the first seed pixel point and the first pixel point;
and calling a preset distance calculation method to perform distance calculation processing on the first spatial distance data and the first color distance data to obtain the first distance data.
4. The method for obtaining an intensity image-based target point cloud according to claim 1, wherein the step of invoking a preset binarization algorithm to binarize the first intensity image data to obtain second intensity image data specifically comprises:
and calling a maximum between-class variance algorithm to carry out binarization processing on the first intensity image data, and generating second intensity image data according to each pixel data after processing.
5. The method for obtaining a target point cloud based on an intensity image according to claim 1, wherein a preset morphological processing algorithm is invoked to perform target identification processing on the second intensity image data, and the obtaining of the target intensity image data specifically includes:
and performing expansion operation processing on the second intensity image data by adopting a disc expansion template with the radius of 3 to obtain the target intensity image data.
6. The method of claim 1, wherein the obtaining target point cloud data from the raw point cloud data according to the pixel coordinate value of each pixel in the target intensity image data specifically comprises:
mapping pixel coordinate values of all pixel points in the target intensity image data to the original point cloud data, and determining target points according to the mapped pixel coordinate values;
and generating the target point cloud data according to the point cloud data corresponding to the target point.
7. The method as claimed in claim 1, wherein before the sampling processing of the intensity image data according to the preset grid step length and determining the sampling pixel points, the method further comprises:
and calling a first preset image model conversion method to perform first model conversion processing on the intensity image data to generate converted intensity image data.
8. The method of claim 7, wherein after generating the first intensity image data from the processed pixel point data, the method further comprises:
calling a second preset image model conversion method to perform second model conversion processing on the first intensity image data to generate converted first intensity image data; the second preset image model conversion method and the first preset image model conversion method have a corresponding relation.
9. An apparatus, characterized in that the apparatus comprises a memory for storing a program and a processor for performing the intensity image based target point cloud acquisition method as claimed in any one of claims 1 to 8.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, which when executed by a processor implements the intensity image-based target point cloud acquisition method according to any one of claims 1 to 8.
CN202010301597.9A 2020-04-16 2020-04-16 Target point cloud acquisition method based on intensity image Active CN111507339B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010301597.9A CN111507339B (en) 2020-04-16 2020-04-16 Target point cloud acquisition method based on intensity image


Publications (2)

Publication Number Publication Date
CN111507339A true CN111507339A (en) 2020-08-07
CN111507339B CN111507339B (en) 2023-07-18

Family

ID=71864132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010301597.9A Active CN111507339B (en) 2020-04-16 2020-04-16 Target point cloud acquisition method based on intensity image

Country Status (1)

Country Link
CN (1) CN111507339B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110216951A1 (en) * 2010-03-03 2011-09-08 Medicsight Plc Medical Image Processing
US20120106799A1 (en) * 2009-07-03 2012-05-03 Shenzhen Taishan Online Technology Co., Ltd. Target detection method and apparatus and image acquisition device
CN102792124A (en) * 2009-10-17 2012-11-21 荷马里光电子公司 Enhanced imaging method and apparatus
CN104299260A (en) * 2014-09-10 2015-01-21 西南交通大学 Contact network three-dimensional reconstruction method based on SIFT and LBP point cloud registration
CN106909873A (en) * 2016-06-21 2017-06-30 湖南拓视觉信息技术有限公司 The method and apparatus of recognition of face
CN106951829A (en) * 2017-02-23 2017-07-14 南京邮电大学 A kind of notable method for checking object of video based on minimum spanning tree
CN106951905A (en) * 2017-02-16 2017-07-14 江苏大学 Apple identification and localization method on a kind of tree based on TOF camera
CN107203759A (en) * 2017-06-06 2017-09-26 浙江大学 A kind of branch's recursion road restructing algorithm based on two view geometries
CN107742113A (en) * 2017-11-08 2018-02-27 电子科技大学 One kind is based on the posterior SAR image complex target detection method of destination number
CN108549891A (en) * 2018-03-23 2018-09-18 河海大学 Multi-scale diffusion well-marked target detection method based on background Yu target priori
CN110232315A (en) * 2019-04-29 2019-09-13 华为技术有限公司 Object detection method and device
CN110659547A (en) * 2018-06-29 2020-01-07 比亚迪股份有限公司 Object recognition method, device, vehicle and computer-readable storage medium
CN110979321A (en) * 2019-12-30 2020-04-10 北京深测科技有限公司 Obstacle avoidance method for unmanned vehicle

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
孙雪桄: "Research on Medical Image Segmentation Algorithms Based on Component Trees", China Masters' Theses Full-text Database, Information Science and Technology Series *
李成浩: "Research on Target Tracking Algorithms Based on Weight Association and Environmental Adaptation", China Masters' Theses Full-text Database, Information Science and Technology Series *


Similar Documents

Publication Publication Date Title
CN108228798B (en) Method and device for determining matching relation between point cloud data
CN111222395B (en) Target detection method and device and electronic equipment
EP2783328B1 (en) Text detection using multi-layer connected components with histograms
WO2015184764A1 (en) Pedestrian detection method and device
CN108399424B (en) Point cloud classification method, intelligent terminal and storage medium
CN111507340B (en) Target point cloud data extraction method based on three-dimensional point cloud data
US20200191971A1 (en) Method and System for Vehicle Detection Using LIDAR
CN111681256A (en) Image edge detection method and device, computer equipment and readable storage medium
WO2022179549A1 (en) Calibration method and apparatus, computer device, and storage medium
KR101742115B1 (en) An inlier selection and redundant removal method for building recognition of multi-view images
US20140169684A1 (en) Distance Metric for Image Comparison
CN110852207A (en) Blue roof building extraction method based on object-oriented image classification technology
JP2009163682A (en) Image discrimination device and program
CN108960247B (en) Image significance detection method and device and electronic equipment
CN104616019A (en) Identification method for electronic equipment signboard image
JP2013037539A (en) Image feature amount extraction device and program thereof
JP2011165170A (en) Object detection device and program
US11275963B2 (en) Image identification apparatus, image identification method, and non-transitory computer-readable storage medium for storing image identification program
CN114119695A (en) Image annotation method and device and electronic equipment
JP6077785B2 (en) Object detection apparatus and program
JP2018180879A (en) Object recognition device, object recognition system, and object recognition method
JP6546385B2 (en) IMAGE PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND PROGRAM
CN111507339B (en) Target point cloud acquisition method based on intensity image
US11699303B2 (en) System and method of acquiring coordinates of pupil center point
WO2022096343A1 (en) Method and apparatus for distinguishing different configuration states of an object based on an image representation of the object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant