CN101825442A - Mobile platform-based color laser point cloud imaging system - Google Patents


Info

Publication number
CN101825442A
CN101825442A (application CN201010160499A)
Authority
CN
China
Prior art keywords
module
point
laser radar
camera
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 201010160499
Other languages
Chinese (zh)
Inventor
付梦印 (Fu Mengyin)
杨毅 (Yang Yi)
杨鑫 (Yang Xin)
朱昊 (Zhu Hao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN201010160499A
Publication of CN101825442A
Legal status: Pending

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a mobile platform-based color laser point cloud imaging system and belongs to the field of environment perception. The system comprises a mobile platform, a support rod, a camera, a camera calibration module, a motion control module, a human-computer interaction module, a spatial alignment module, a mechanical wrist, a depth map construction module, a filter module, a laser scanning radar, an attitude and azimuth reference module, and a laser point cloud coloring module; the mechanical wrist comprises connecting shaft A, connecting shaft B and a connecting cantilever. The system acquires three-dimensional point cloud data and image data with the laser scanning radar and the camera respectively, and produces a color laser point cloud by spatially aligning the laser scanning radar with the camera and then coloring the laser point cloud. Compared with traditional methods, the system improves the registration accuracy between the laser scanning radar and the camera and yields a more realistic color three-dimensional point cloud; at the same time, by means of the mobile platform, it can acquire color three-dimensional point clouds over a wide environment, so its range of application is broader.

Description

Color laser point cloud imaging system based on a mobile platform
Technical field
The present invention relates to a color laser point cloud imaging system based on a mobile platform, and belongs to the field of environment perception.
Background art
In recent years, laser scanning technology has developed rapidly. A laser scanning radar can acquire highly accurate 3D data of a real scene; these data directly reflect the true geometric characteristics of objects, so laser scanning has become an effective means of quickly obtaining three-dimensional point cloud data of a measured object. The point cloud contains metric information about the object (distance, geometric shape, and so on). Its shortcoming is that it provides no texture or color information about the scene, so tasks such as object recognition are difficult to accomplish with point cloud data alone. A camera, by contrast, can quickly capture image data of the measured object, including color, two-dimensional shape and texture, which makes object recognition in images comparatively easy; its shortcoming is that the accuracy of the spatial shape and position information derived from images is relatively poor.
Because the laser scanning radar and the camera have complementary strengths and weaknesses, combining them exploits the advantages of both: fusing the three-dimensional point cloud data with the image data yields a color three-dimensional point cloud, which plays an important role in three-dimensional scene reconstruction. A color three-dimensional point cloud contains metric, color and texture information.
Existing methods that combine a laser scanning radar with a camera to obtain a color three-dimensional point cloud can be divided, according to the scanning technique, into passive scanning and active scanning. In passive scanning, the measured object is rotated and translated by an auxiliary device while the laser scanning radar and the camera face the object, with the laser scanning plane at a fixed angle to the camera optical axis. As the object rotates and translates, the laser scanning radar and the camera collect data, obtaining the three-dimensional point cloud and the images. Obviously, this approach only suits objects that can be placed on a turntable; it is not applicable to large-scale environment perception or to the measurement of objects that cannot conveniently be moved. In active scanning, the laser scanning radar and the camera are fixed together facing the same scene; the line-structured light of the laser scanning radar scans the fixed scene to obtain the three-dimensional point cloud, and the camera then photographs the same field of view to obtain the image. With this mounting, however, the optical centers of the two sensors are necessarily at different positions, so their perceived fields of view do not coincide; the fused coloring result therefore contains errors, and the overlap of the two fields of view cannot be maximized. For large-scale environment perception, this active scanning arrangement still needs improvement.
At present, the literature concentrates mainly on algorithms for fusing the three-dimensional point cloud obtained by laser scanning with the images captured by the camera, or describes color laser point cloud imaging systems that only work on a stationary platform; no construction scheme for a color laser point cloud imaging system based on a mobile platform has been reported.
Summary of the invention
The objective of the invention is to overcome the defects of the prior art by proposing a color laser point cloud imaging system based on a mobile platform. The system acquires three-dimensional point cloud data and image data with a laser scanning radar and a camera respectively, and then produces a color laser point cloud by spatially aligning the laser scanning radar with the camera and coloring the laser point cloud.
The objective of the invention is achieved through the following technical solution.
A color laser point cloud imaging system comprises: a mobile platform, a support rod, a camera, a camera calibration module, a motion control module, a human-computer interaction module, a spatial alignment module, a mechanical wrist, a depth map construction module, a filter module, a laser scanning radar, an attitude and azimuth reference module, and a laser point cloud coloring module. The mechanical wrist comprises connecting shaft A, connecting shaft B and a connecting cantilever.
The support rod and the mechanical wrist form a support frame. The support rod and the connecting cantilever are connected to connecting shaft B through connecting shaft A; connecting shaft A is vertical, connecting shaft B is horizontal, and both shafts can rotate about their own axes. The support rod is not rigidly connected to connecting shafts A and B, whereas the connecting cantilever is rigidly connected to them; that is, when connecting shaft A rotates about its axis the support rod does not rotate with it while the connecting cantilever rotates at the same speed as shaft A, and when connecting shaft B rotates about its axis the support rod does not rotate with it while the connecting cantilever rotates at the same speed as shaft B. The support frame is fixed to the mobile platform through the support rod, and the connecting cantilever does not touch the mobile platform. The camera and the laser scanning radar are fixed on the connecting cantilever; the optical center of the camera and the optical center of the laser scanning radar are at equal distances from the axis of connecting shaft B, and the two optical centers lie in the same vertical plane.
The camera is connected to the human-computer interaction module, the camera calibration module, the laser point cloud coloring module and the mechanical wrist. The camera calibration module is connected to the camera, the spatial alignment module and the laser point cloud coloring module. The motion control module is connected to the mechanical wrist, the human-computer interaction module and the mobile platform. The human-computer interaction module is connected to the laser scanning radar, the camera, the motion control module, the depth map construction module, the spatial alignment module and the attitude and azimuth reference module. The spatial alignment module is connected to the human-computer interaction module, the filter module, the camera calibration module and the laser point cloud coloring module. The mechanical wrist is connected to the motion control module, the laser scanning radar and the camera. The depth map construction module is connected to the laser scanning radar and the human-computer interaction module. The filter module is connected to the human-computer interaction module and the spatial alignment module. The laser scanning radar is connected to the mechanical wrist, the human-computer interaction module and the laser point cloud coloring module. The attitude and azimuth reference module is connected to the human-computer interaction module and the laser point cloud coloring module. The laser point cloud coloring module is connected to the laser scanning radar, the camera, the human-computer interaction module, the attitude and azimuth reference module, the spatial alignment module and the camera calibration module.
The camera acquires camera image data and transmits it to the camera calibration module, the human-computer interaction module and the laser point cloud coloring module.
The camera calibration module receives the image data acquired by the camera, computes the camera projection matrix K, and sends K to the spatial alignment module and the laser point cloud coloring module.
The motion control module drives the rotation of the mechanical wrist and the motion of the mobile platform according to the commands of the human-computer interaction module.
The main functions of the human-computer interaction module are to provide the input and output functions of human-computer interaction, including but not limited to: (1) input of the rotation parameters of the mechanical wrist; (2) input of the motion parameters of the mobile platform; (3) input of the scan parameters of the laser scanning radar; (4) receiving and displaying the images acquired by the camera and the depth map produced by the depth map construction module; (5) receiving and displaying the color three-dimensional point cloud data transmitted by the attitude and azimuth reference module; (6) selecting feature points on the camera image and sending the feature point coordinates in the camera image to the spatial alignment module; (7) selecting, in the depth map, the point set of the candidate region of each corresponding feature point and sending its coordinates to the filter module. The feature point candidate region point set is the candidate region in the depth map corresponding to a feature point selected on the camera image.
The spatial alignment module receives the camera image feature point coordinates transmitted by the human-computer interaction module, the corresponding depth map feature point coordinates transmitted by the filter module, and the camera projection matrix K transmitted by the camera calibration module. Using the spatial transformation method disclosed in the document "Spatial alignment method for vision sensor and range lidar" (Fu Mengyin, Liu Mingyang; Infrared and Laser Engineering, 2009, 38(1)), it computes the coordinate transformation from the camera coordinate system to the laser scanning radar coordinate system, comprising a rotation matrix and a translation matrix, and sends the rotation matrix and the translation matrix to the laser point cloud coloring module.
Driven by the motion control module, the mechanical wrist can perform two kinds of motion: connecting shaft B rotates about its axis, rotating the laser scanning radar fixed on it in the pitch direction; and connecting shaft A rotates about its axis, rotating the laser scanning radar in the horizontal direction.
The depth map construction module receives the three-dimensional point cloud data from the laser scanning radar, processes it to obtain a depth map, and sends the depth map to the human-computer interaction module.
The filter module receives the depth map feature point candidate region point set transmitted by the human-computer interaction module, processes it to obtain the depth map feature point coordinates, and sends these coordinates to the spatial alignment module.
The laser scanning radar receives the scan parameters transmitted by the human-computer interaction module, scans the scene while being driven by the mechanical wrist, obtains three-dimensional point cloud data, and sends it to the depth map construction module and the laser point cloud coloring module.
The attitude and azimuth reference module measures the attitude of the laser scanning radar in the earth coordinate system and establishes the transformation between the earth coordinate system and the laser scanning radar coordinate system; the attitude information of the laser scanning radar includes but is not limited to the pitch angle, roll angle and azimuth angle. It then transforms the color three-dimensional point cloud data transmitted by the laser point cloud coloring module into the earth coordinate system and sends the transformed color three-dimensional point cloud data to the human-computer interaction module for display.
The laser point cloud coloring module performs the coloring of the laser point cloud: it receives the three-dimensional point cloud data transmitted by the laser scanning radar and the image data transmitted by the camera, together with the rotation matrix and translation matrix provided by the spatial alignment module and the camera projection matrix K provided by the camera calibration module; it finds the point in the camera image data corresponding to each three-dimensional point and extracts its color. The three-dimensional position coordinates of each point and the three RGB color components of its corresponding image point are merged into a six-dimensional coordinate, giving the color three-dimensional point cloud data, which is sent to the human-computer interaction module for display; this completes the coloring of the laser point cloud.
The workflow is as follows:
Step 1: calibrate the camera. Specifically: the mechanical wrist rotation parameters are entered through the human-computer interaction module and passed to the motion control module; the motion control module drives the mechanical wrist according to the received rotation parameters so that the camera faces straight ahead; images are acquired and sent to the camera calibration module for processing, which yields the camera calibration matrix K; the matrix K is sent to the spatial alignment module and the laser point cloud coloring module.
The rotation parameters of the mechanical wrist include but are not limited to: start angle, end angle and step angle.
The motion parameters of the mobile platform include but are not limited to: start, stop, direction and speed commands.
The scan parameters of the laser scanning radar include but are not limited to: scanning field of view and scanning resolution.
The camera calibration matrix K is obtained with the method disclosed in "Flexible camera calibration by viewing a plane from unknown orientations" (Zhang Zhengyou, Proceedings of International Conference on Computer Vision, 1999: 666-673).
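The patent does not give an implementation of this calibration step. As a rough illustration only, the following Python sketch estimates K from checkerboard images using OpenCV's planar-target calibration, which implements Zhang's method; the image list, board size and square size are assumptions, not details taken from the patent.

```python
# Illustrative sketch (not the patent's implementation): estimating the camera
# matrix K with OpenCV's planar-target calibration, which follows Zhang's method.
import cv2
import numpy as np

def calibrate_camera(image_files, board_size=(9, 6), square_size=0.025):
    """Estimate the intrinsic matrix K from checkerboard images (assumed setup)."""
    # 3D corner coordinates in the board's own plane (z = 0), in metres
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    objp *= square_size

    obj_points, img_points, image_shape = [], [], None
    for fname in image_files:
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        image_shape = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # K is the 3x3 intrinsic matrix used later for spatial alignment and coloring
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_shape, None, None)
    return K, dist
```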
Step 2: obtain the depth map of the spatial alignment template. The spatial alignment template is the one disclosed in "Spatial alignment method for vision sensor and range lidar" (Fu Mengyin, Liu Mingyang; Infrared and Laser Engineering, 2009, 38(1)).
This step and Step 1 may be performed in either order. Specifically: the mechanical wrist rotation parameters and the laser scanning radar scan parameters are entered through the human-computer interaction interface and sent to the motion control module and the laser scanning radar respectively; the motion control module drives the mechanical wrist to rotate while the spatial alignment template is laser-scanned; the resulting three-dimensional point cloud data is sent to the depth map construction module, which processes it into a depth map of the spatial alignment template and sends the depth map to the human-computer interaction module.
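The patent does not specify how the depth map construction module turns the point cloud into a depth map. A minimal sketch, assuming the laser scanning radar delivers an ordered m x n grid of ranges (rows corresponding to pitch steps of connecting shaft B, columns to in-scan angles), might simply normalize the ranges into a grayscale image for display and point picking:

```python
# Minimal sketch of a possible depth map construction, under the assumption that the
# scan is an ordered grid of ranges rho[i][j]; the patent does not fix this format.
import numpy as np

def build_depth_map(rho, max_range=30.0):
    """Map a grid of ranges (metres) to an 8-bit depth image for display."""
    rho = np.asarray(rho, dtype=np.float64)
    depth = np.clip(rho / max_range, 0.0, 1.0)      # normalise to [0, 1]
    return (255 * (1.0 - depth)).astype(np.uint8)   # nearer points appear brighter
```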
Step 3: on the basis of Step 2, obtain the camera image of the spatial alignment template.
The mechanical wrist rotation parameters are entered through the human-computer interaction interface and sent to the motion control module; the motion control module drives the mechanical wrist so that the camera is aimed at the scene scanned by the laser scanning radar in Step 2; the camera takes a picture and the resulting camera image is sent to the human-computer interaction module.
Step 4: on the basis of Step 3, obtain the camera image feature point coordinates and the corresponding depth map feature point coordinates.
Through the human-computer interaction module, feature points are selected in the camera image obtained in Step 3 using the spatial alignment method disclosed in "Spatial alignment method for vision sensor and range lidar" (Fu Mengyin, Liu Mingyang; Infrared and Laser Engineering, 2009, 38(1)), and the selected camera image feature point coordinates are sent to the spatial alignment module; the depth map feature points corresponding to the camera image feature points are then obtained in the depth map and their coordinates are sent to the spatial alignment module.
The method for obtaining the depth map feature point coordinates is:
Step 4.1: first give the relevant definitions:
The mixed pixels and noise pixels in the depth map feature point candidate region point set are defined as bad points, and their set is the bad point set A_cp. The remaining points of the candidate region point set, other than mixed pixels and noise pixels, are defined as good points, and their set is the good point set A_g. Good points comprise edge points and normal points; classified along the row direction and along the column direction, their sets are the edge point set A_ed1 and normal point set A_nr1, and the edge point set A_ed2 and normal point set A_nr2, respectively. An edge point is a scan point corresponding to an actual object boundary in the laser-scanned scene; a normal point is any other point of the good point set A_g that is not an edge point. Points whose type is not yet determined, comprising undetermined bad points and undetermined edge points, form the undetermined point set A_uc.
Step 4.2: number all the points in the depth map feature point candidate region point set as p_{i,j}, where i is the row index and j is the column index, 1 ≤ i ≤ m+1, 1 ≤ j ≤ n+1, and i, j, m, n are positive integers. Let ρ_{i,j} be the distance from the optical center of the laser scanning radar to p_{i,j}; let l_{i,j} be the perpendicular drawn from the point of p_{i,j} and p_{i,j+1} that is nearer to the optical center to the line joining the farther point and the optical center; let Δρ_{i,j} be the distance between p_{i,j} and p_{i,j+1}; and let Δθ_{i,j} be the angle between the segment joining p_{i,j} and p_{i,j+1} and l_{i,j}.
Step 4.3: set i = 1, j = 1, i.e. select the first point of the first row.
Step 4.4: classify this point as a good point.
Step 4.5: compute Δρ_{i,j} (the distance between p_{i,j} and p_{i,j+1}) and Δθ_{i,j} (the angle between the segment p_{i,j}p_{i,j+1} and l_{i,j}) from Equation (1):
\[
\begin{bmatrix} \Delta\rho_{i,j} \\ \Delta\theta_{i,j} \end{bmatrix}
=
\begin{bmatrix}
\left(\rho_{i,j+1}^{2} + \rho_{i,j}^{2} - 2\rho_{i,j}\rho_{i,j+1}\cos\theta_{rs}\right)^{1/2} \\
\arcsin\left(\left(l\rho_{i,j} - s\rho_{i,j}\cos\theta_{rs}\right)/\Delta\rho_{i,j}\right)
\end{bmatrix}
\qquad (1)
\]
where lρ_{i,j} is the larger of ρ_{i,j} and ρ_{i,j+1}, sρ_{i,j} is the smaller of ρ_{i,j} and ρ_{i,j+1}, and θ_rs is the angular scanning resolution of the laser scanning radar.
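For illustration, Equation (1) can be transcribed directly; the helper below computes Δρ and Δθ for one pair of horizontally adjacent scan points (the function and argument names are ours, not the patent's):

```python
# Direct transcription of Equation (1) for one pair of horizontally adjacent scan
# points; rho_a = rho_{i,j}, rho_b = rho_{i,j+1}, theta_rs is the angular resolution.
import math

def delta_rho_theta(rho_a, rho_b, theta_rs):
    l_rho, s_rho = max(rho_a, rho_b), min(rho_a, rho_b)
    d_rho = math.sqrt(rho_a**2 + rho_b**2 - 2.0 * rho_a * rho_b * math.cos(theta_rs))
    d_theta = math.asin((l_rho - s_rho * math.cos(theta_rs)) / d_rho)
    return d_rho, d_theta
```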
Step 4.6: determine the type of point p_{i,j+1}, i.e. whether it is a normal point, a bad point or an edge point. The specific operations are:
(a) If the distance Δρ_{i,j} between p_{i,j} and p_{i,j+1} obtained in Step 4.5 is greater than a manually set threshold Δρ_th, and the angle Δθ_{i,j} between the segment p_{i,j}p_{i,j+1} and l_{i,j} is greater than a manually set threshold Δθ_th, then set a flag μ_{i,j} and give it the value 1; otherwise set μ_{i,j} to 0.
(b) If p_{i,j} ∈ A_g and μ_{i,j} = 1, then p_{i,j} ∈ A_ed1 and p_{i,j+1} ∈ A_uc;
if p_{i,j} ∈ A_g and μ_{i,j} = 0, then p_{i,j} ∈ A_nr1 and p_{i,j+1} ∈ A_g;
if p_{i,j} ∈ A_uc and μ_{i,j} = 1, then p_{i,j} ∈ A_cp and p_{i,j+1} ∈ A_uc;
if p_{i,j} ∈ A_uc and μ_{i,j} = 0, then p_{i,j} ∈ A_ed1 and p_{i,j+1} ∈ A_g;
if p_{i,j} ∈ A_cp and μ_{i,j} = 1, then p_{i,j} ∈ A_cp and p_{i,j+1} ∈ A_uc;
if j = n-1 and μ_{i,j} = 1, then p_{i,n} ∈ A_cp;
if j = n-1 and μ_{i,j} = 0, then p_{i,n} ∈ A_ed1.
Step 4.7: if j = n, go to Step 4.8; otherwise set j = j+1 and return to Step 4.5.
Step 4.8: if i = m, end this pass; otherwise set i = i+1, j = 1 and return to Step 4.4.
Through the operations of Steps 4.2 to 4.8, all points in the depth map feature point candidate region point set are classified in row order as normal points, bad points or edge points.
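The row-direction pass of Steps 4.3 to 4.8 can be read as a small state machine over each row. The sketch below mirrors that reading for a single row (it reuses delta_rho_theta from the previous sketch; the string labels stand for A_g, A_nr1, A_ed1, A_cp and A_uc, and the end-of-row assignment follows the last two cases of Step 4.6):

```python
# Sketch of the row-direction pass (Steps 4.3-4.8) for a single row of candidate points.
def classify_row(rho_row, theta_rs, d_rho_th, d_theta_th):
    n = len(rho_row)                                  # assumes n >= 2
    label = ['good'] + ['undetermined'] * (n - 1)     # Step 4.4: first point is good
    mu = 0
    for j in range(n - 1):
        d_rho, d_theta = delta_rho_theta(rho_row[j], rho_row[j + 1], theta_rs)
        mu = 1 if (d_rho > d_rho_th and d_theta > d_theta_th) else 0   # Step 4.6(a)
        if label[j] == 'good':
            label[j] = 'edge' if mu else 'normal'
        elif label[j] == 'undetermined':
            label[j] = 'bad' if mu else 'edge'
        # (the patent also lists a rule keeping an already 'bad' point 'bad')
        label[j + 1] = 'undetermined' if mu else 'good'
    label[-1] = 'bad' if mu else 'edge'               # end-of-row rule of Step 4.6(b)
    return label
```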
Step 4.9: let l'_{i,j} be the perpendicular drawn from the point of p_{i,j} and p_{i+1,j} that is nearer to the optical center of the laser scanning radar to the line joining the farther point and the optical center; let Δρ'_{i,j} be the distance between p_{i,j} and p_{i+1,j}; and let Δθ'_{i,j} be the angle between the segment joining p_{i,j} and p_{i+1,j} and l'_{i,j}.
Step 4.10: reset i = 1, j = 1, i.e. select the first point of the first column.
Step 4.11: classify this point as a good point.
Step 4.12: compute Δρ'_{i,j} (the distance between p_{i,j} and p_{i+1,j}) and Δθ'_{i,j} (the angle between the segment p_{i,j}p_{i+1,j} and l'_{i,j}) from Equation (2):
\[
\begin{bmatrix} \Delta\rho'_{i,j} \\ \Delta\theta'_{i,j} \end{bmatrix}
=
\begin{bmatrix}
\left(\rho_{i+1,j}^{2} + \rho_{i,j}^{2} - 2\rho_{i,j}\rho_{i+1,j}\cos\theta_{rs}\right)^{1/2} \\
\arcsin\left(\left(l\rho'_{i,j} - s\rho'_{i,j}\cos\theta_{rs}\right)/\Delta\rho'_{i,j}\right)
\end{bmatrix}
\qquad (2)
\]
where lρ'_{i,j} is the larger of ρ_{i,j} and ρ_{i+1,j}, sρ'_{i,j} is the smaller of ρ_{i,j} and ρ_{i+1,j}, and θ_rs is the angular scanning resolution of the laser scanning radar.
Step 4.13: determine the type of point p_{i+1,j}, i.e. whether it is a normal point, a bad point or an edge point. The specific operations are:
(a) If the distance Δρ'_{i,j} between p_{i,j} and p_{i+1,j} obtained in Step 4.12 is greater than the manually set threshold Δρ_th, and the angle Δθ'_{i,j} between the segment p_{i,j}p_{i+1,j} and l'_{i,j} is greater than the manually set threshold Δθ_th, then set the flag μ_{i,j} to 1; otherwise set μ_{i,j} to 0.
(b) If p_{i,j} ∈ A_g and μ_{i,j} = 1, then p_{i,j} ∈ A_ed2 and p_{i+1,j} ∈ A_uc;
if p_{i,j} ∈ A_g and μ_{i,j} = 0, then p_{i,j} ∈ A_nr2 and p_{i+1,j} ∈ A_g;
if p_{i,j} ∈ A_uc and μ_{i,j} = 1, then p_{i,j} ∈ A_cp and p_{i+1,j} ∈ A_uc;
if p_{i,j} ∈ A_uc and μ_{i,j} = 0, then p_{i,j} ∈ A_ed2 and p_{i+1,j} ∈ A_g;
if p_{i,j} ∈ A_cp and μ_{i,j} = 1, then p_{i,j} ∈ A_cp and p_{i+1,j} ∈ A_uc;
if i = m-1 and μ_{i,j} = 1, then p_{m,j} ∈ A_cp;
if i = m-1 and μ_{i,j} = 0, then p_{m,j} ∈ A_ed2.
Step 4.14: if i = m, go to Step 4.15; otherwise set i = i+1 and return to Step 4.12.
Step 4.15: if j = n, end this pass; otherwise set i = 1, j = j+1 and return to Step 4.11.
Through the operations of Steps 4.9 to 4.15, all points in the depth map feature point candidate region point set are classified in column order as normal points, bad points or edge points.
Step 4.16: take the mean of the x and z coordinates of all points in the edge point set A_ed1 as the x and z coordinates of the depth map feature point, and take the mean of the y coordinates of all points in the edge point set A_ed2 as the y coordinate of the depth map feature point.
Through the above operations, the depth map feature point coordinates are obtained.
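As a small illustration of Step 4.16, assuming the row-pass and column-pass edge points are available as arrays of (x, y, z) lidar coordinates:

```python
# Sketch of Step 4.16: x and z come from the row-direction edge points (A_ed1),
# y from the column-direction edge points (A_ed2); arrays of (x, y, z) assumed.
import numpy as np

def depth_map_feature_point(edge_points_rowpass, edge_points_colpass):
    ed1 = np.asarray(edge_points_rowpass)   # points classified as edges row-wise
    ed2 = np.asarray(edge_points_colpass)   # points classified as edges column-wise
    x, z = ed1[:, 0].mean(), ed1[:, 2].mean()
    y = ed2[:, 1].mean()
    return np.array([x, y, z])
```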
Step 5: obtain the coordinate transformation from the camera coordinate system to the laser scanning radar coordinate system.
On the basis of Step 4, the spatial alignment module uses the obtained camera image feature point coordinates and depth map feature point coordinates, together with the camera calibration matrix K transmitted by the camera calibration module, to spatially align the laser scanning radar and the camera. This yields the coordinate transformation from the camera coordinate system to the laser scanning radar coordinate system, comprising a rotation matrix and a translation matrix, which are sent to the laser point cloud coloring module.
The coordinate transformation from the camera coordinate system to the laser scanning radar coordinate system is obtained with the method disclosed in "Spatial alignment method for vision sensor and range lidar" (Fu Mengyin, Liu Mingyang; Infrared and Laser Engineering, 2009, 38(1)).
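The patent relies on the alignment method of Fu and Liu (2009), which is not reproduced here. For orientation only, a conceptually similar computation can be sketched with OpenCV's PnP solver: given the matched 3D depth map feature points (lidar frame), the 2D camera image feature points and K, it returns a rotation and translation between the two frames. This is an assumption-laden stand-in, not the patented method, and the sign convention chosen in the code is stated in the comments.

```python
# Not the patent's alignment method (which follows Fu & Liu, 2009); a conceptually
# similar sketch that recovers a camera-to-lidar rotation R and translation T from
# matched feature points using OpenCV's PnP solver.
import cv2
import numpy as np

def camera_to_lidar_transform(lidar_pts_3d, image_pts_2d, K):
    """lidar_pts_3d: Nx3 depth map feature points (lidar frame); image_pts_2d: Nx2 pixels."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(lidar_pts_3d, dtype=np.float64),
        np.asarray(image_pts_2d, dtype=np.float64),
        np.asarray(K, dtype=np.float64),
        distCoeffs=None)
    if not ok:
        raise RuntimeError("PnP failed")
    R_lc, _ = cv2.Rodrigues(rvec)          # lidar frame -> camera frame
    # R, T defined here so that X_lidar = R @ X_camera + T (one common convention;
    # the patent's own convention follows the cited Fu & Liu method)
    R = R_lc.T
    T = (-R_lc.T @ tvec).reshape(3)
    return R, T
```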
Step 6: obtain the three-dimensional point cloud data of the target scene with the laser scanning radar.
On the basis of Step 5, the mechanical wrist rotation parameters and the laser scanning radar scan parameters are entered through the human-computer interaction interface and sent to the motion control module and the laser scanning radar respectively; the motion control module drives the mechanical wrist to rotate while the target scene is laser-scanned, and the resulting three-dimensional point cloud data is sent to the laser point cloud coloring module.
Step 7: obtain the camera image of the target scene with the camera.
On the basis of Step 6, the mechanical wrist rotation parameters are entered through the human-computer interaction interface and sent to the motion control module; the motion control module drives the mechanical wrist so that the camera faces the target scene scanned by the laser scanning radar; the camera image of the target scene is captured and then sent to the laser point cloud coloring module.
Step 8: obtain the color three-dimensional point cloud data.
On the basis of Step 7, the laser point cloud coloring module uses the target scene three-dimensional point cloud data transmitted by the laser scanning radar, the target scene image transmitted by the camera, the camera projection matrix K provided by the camera calibration module, and the rotation matrix and translation matrix provided by the spatial alignment module to find, by coordinate transformation, the point in the camera image corresponding to each three-dimensional point and to extract its color. The three-dimensional position coordinates of each point and the three RGB color components of its corresponding image point are merged into a six-dimensional coordinate, giving the color three-dimensional point cloud data, which is sent to the attitude and azimuth reference module.
Step 9: transform the color three-dimensional point cloud data from the laser scanning radar coordinate system to the earth coordinate system and display it.
On the basis of Step 8, the attitude and azimuth reference module transforms the color three-dimensional point cloud data transmitted by the laser point cloud coloring module into the earth coordinate system and sends the transformed color three-dimensional point cloud data to the human-computer interaction module for display.
Through the above steps, the color laser point cloud imaging based on the mobile platform is completed.
Beneficial effects
Compared with traditional methods, the present invention improves the registration accuracy between the laser scanning radar and the camera, and the color three-dimensional point cloud obtained is more realistic; at the same time, by means of the mobile platform, color three-dimensional point clouds of a large-scale environment can be acquired, so the range of application is wider.
Description of drawings
Fig. 1 is a structural block diagram of an embodiment of the color laser point cloud imaging system based on a mobile platform according to the present invention;
Fig. 2 is a front view of the support frame of an embodiment of the color laser point cloud imaging system based on a mobile platform according to the present invention;
Fig. 3 is a side view of the support frame of an embodiment of the color laser point cloud imaging system based on a mobile platform according to the present invention;
in which: 1 - connecting shaft A; 2 - connecting shaft B; 3 - connecting cantilever; 4 - support rod.
Embodiment
The present invention is described in detail below with reference to the drawings and a specific embodiment.
A color laser point cloud imaging system based on a mobile platform has the structure shown in Fig. 1 and comprises: a mobile platform, a support rod 4 (not shown in Fig. 1), a camera, a camera calibration module, a motion control module, a human-computer interaction module, a spatial alignment module, a mechanical wrist, a depth map construction module, a filter module, a laser scanning radar, an attitude and azimuth reference module, and a laser point cloud coloring module.
The support rod 4 and the mechanical wrist form a support frame; the mechanical wrist comprises connecting shaft A 1, connecting shaft B 2 and a connecting cantilever 3, its front view is shown in Fig. 2 and its side view in Fig. 3. The support rod 4 and the connecting cantilever 3 are connected to connecting shaft B 2 through connecting shaft A 1; connecting shaft A 1 is vertical, connecting shaft B 2 is horizontal, and both shafts can rotate about their own axes. The support rod 4 is not rigidly connected to connecting shafts A 1 and B 2, whereas the connecting cantilever 3 is rigidly connected to them; that is, when connecting shaft A 1 rotates about its axis the support rod 4 does not rotate with it while the connecting cantilever 3 rotates at the same speed as shaft A 1, and when connecting shaft B 2 rotates about its axis the support rod 4 does not rotate with it while the connecting cantilever 3 rotates at the same speed as shaft B 2. The support frame is fixed to the mobile platform through the support rod 4, and the connecting cantilever 3 does not touch the mobile platform. The camera and the laser scanning radar are fixed on the connecting cantilever 3; the optical center of the camera and the optical center of the laser scanning radar are at equal distances from the axis of connecting shaft B 2, and the two optical centers lie in the same vertical plane.
The camera is connected to the human-computer interaction module, the camera calibration module, the laser point cloud coloring module and the mechanical wrist. The camera calibration module is connected to the camera, the spatial alignment module and the laser point cloud coloring module. The motion control module is connected to the mechanical wrist, the human-computer interaction module and the mobile platform. The human-computer interaction module is connected to the laser scanning radar, the camera, the motion control module, the depth map construction module, the spatial alignment module and the attitude and azimuth reference module. The spatial alignment module is connected to the human-computer interaction module, the filter module, the camera calibration module and the laser point cloud coloring module. The mechanical wrist is connected to the motion control module, the laser scanning radar and the camera. The depth map construction module is connected to the laser scanning radar and the human-computer interaction module. The filter module is connected to the human-computer interaction module and the spatial alignment module. The laser scanning radar is connected to the mechanical wrist, the human-computer interaction module and the laser point cloud coloring module. The attitude and azimuth reference module is connected to the human-computer interaction module and the laser point cloud coloring module. The laser point cloud coloring module is connected to the laser scanning radar, the camera, the human-computer interaction module, the attitude and azimuth reference module, the spatial alignment module and the camera calibration module. The camera is a CMOS camera with an analog interface or a USB interface.
The workflow is as follows:
Step 1: calibrate the camera. Specifically: the mechanical wrist rotation parameters are entered through the human-computer interaction module and passed to the motion control module; the motion control module drives the mechanical wrist according to the received rotation parameters so that the camera faces straight ahead; images are acquired and sent to the camera calibration module for processing, which yields the camera calibration matrix K; the matrix K is sent to the spatial alignment module and the laser point cloud coloring module.
The rotation parameters of the mechanical wrist include but are not limited to: start angle, end angle and step angle.
The motion parameters of the mobile platform include but are not limited to: start, stop, direction and speed commands.
The scan parameters of the laser scanning radar include but are not limited to: scanning field of view and scanning resolution.
The camera calibration matrix K is obtained with the method disclosed in "Flexible camera calibration by viewing a plane from unknown orientations" (Zhang Zhengyou, Proceedings of International Conference on Computer Vision, 1999: 666-673).
Step 2: obtain the depth map of the spatial alignment template. The spatial alignment template is the one disclosed in "Spatial alignment method for vision sensor and range lidar" (Fu Mengyin, Liu Mingyang; Infrared and Laser Engineering, 2009, 38(1)).
This step and Step 1 may be performed in either order. Specifically: the mechanical wrist rotation parameters and the laser scanning radar scan parameters are entered through the human-computer interaction interface and sent to the motion control module and the laser scanning radar respectively; the motion control module drives the mechanical wrist to rotate while the spatial alignment template is laser-scanned; the resulting three-dimensional point cloud data is sent to the depth map construction module, which processes it into a depth map of the spatial alignment template and sends the depth map to the human-computer interaction module.
Step 3: on the basis of Step 2, obtain the camera image of the spatial alignment template.
The mechanical wrist rotation parameters are entered through the human-computer interaction interface and sent to the motion control module; the motion control module drives the mechanical wrist so that the camera is aimed at the scene scanned by the laser scanning radar in Step 2; the camera takes a picture and the resulting camera image is sent to the human-computer interaction module.
Step 4: on the basis of Step 3, obtain the camera image feature point coordinates and the corresponding depth map feature point coordinates.
Through the human-computer interaction module, feature points are selected in the camera image obtained in Step 3 using the spatial alignment method disclosed in "Spatial alignment method for vision sensor and range lidar" (Fu Mengyin, Liu Mingyang; Infrared and Laser Engineering, 2009, 38(1)), and the selected camera image feature point coordinates are sent to the spatial alignment module; the depth map feature points corresponding to the camera image feature points are then obtained in the depth map and their coordinates are sent to the spatial alignment module.
The method for obtaining the depth map feature point coordinates is:
Step 4.1: first give the relevant definitions:
The mixed pixels and noise pixels in the depth map feature point candidate region point set are defined as bad points, and their set is the bad point set A_cp. The remaining points of the candidate region point set, other than mixed pixels and noise pixels, are defined as good points, and their set is the good point set A_g. Good points comprise edge points and normal points; classified along the row direction and along the column direction, their sets are the edge point set A_ed1 and normal point set A_nr1, and the edge point set A_ed2 and normal point set A_nr2, respectively. An edge point is a scan point corresponding to an actual object boundary in the laser-scanned scene; a normal point is any other point of the good point set A_g that is not an edge point. Points whose type is not yet determined, comprising undetermined bad points and undetermined edge points, form the undetermined point set A_uc.
Step 4.2: number all the points in the depth map feature point candidate region point set as p_{i,j}, where i is the row index and j is the column index, 1 ≤ i ≤ m+1, 1 ≤ j ≤ n+1, and i, j, m, n are positive integers. Let ρ_{i,j} be the distance from the optical center of the laser scanning radar to p_{i,j}; let l_{i,j} be the perpendicular drawn from the point of p_{i,j} and p_{i,j+1} that is nearer to the optical center to the line joining the farther point and the optical center; let Δρ_{i,j} be the distance between p_{i,j} and p_{i,j+1}; and let Δθ_{i,j} be the angle between the segment joining p_{i,j} and p_{i,j+1} and l_{i,j}.
Step 4.3: set i = 1, j = 1, i.e. select the first point of the first row.
Step 4.4: classify this point as a good point.
Step 4.5: compute Δρ_{i,j} (the distance between p_{i,j} and p_{i,j+1}) and Δθ_{i,j} (the angle between the segment p_{i,j}p_{i,j+1} and l_{i,j}) from Equation (1):
\[
\begin{bmatrix} \Delta\rho_{i,j} \\ \Delta\theta_{i,j} \end{bmatrix}
=
\begin{bmatrix}
\left(\rho_{i,j+1}^{2} + \rho_{i,j}^{2} - 2\rho_{i,j}\rho_{i,j+1}\cos\theta_{rs}\right)^{1/2} \\
\arcsin\left(\left(l\rho_{i,j} - s\rho_{i,j}\cos\theta_{rs}\right)/\Delta\rho_{i,j}\right)
\end{bmatrix}
\qquad (1)
\]
where lρ_{i,j} is the larger of ρ_{i,j} and ρ_{i,j+1}, sρ_{i,j} is the smaller of ρ_{i,j} and ρ_{i,j+1}, and θ_rs is the angular scanning resolution of the laser scanning radar.
Step 4.6: determine the type of point p_{i,j+1}, i.e. whether it is a normal point, a bad point or an edge point. The specific operations are:
(a) If the distance Δρ_{i,j} between p_{i,j} and p_{i,j+1} obtained in Step 4.5 is greater than a manually set threshold Δρ_th, and the angle Δθ_{i,j} between the segment p_{i,j}p_{i,j+1} and l_{i,j} is greater than a manually set threshold Δθ_th, then set a flag μ_{i,j} and give it the value 1; otherwise set μ_{i,j} to 0.
(b) If p_{i,j} ∈ A_g and μ_{i,j} = 1, then p_{i,j} ∈ A_ed1 and p_{i,j+1} ∈ A_uc;
if p_{i,j} ∈ A_g and μ_{i,j} = 0, then p_{i,j} ∈ A_nr1 and p_{i,j+1} ∈ A_g;
if p_{i,j} ∈ A_uc and μ_{i,j} = 1, then p_{i,j} ∈ A_cp and p_{i,j+1} ∈ A_uc;
if p_{i,j} ∈ A_uc and μ_{i,j} = 0, then p_{i,j} ∈ A_ed1 and p_{i,j+1} ∈ A_g;
if p_{i,j} ∈ A_cp and μ_{i,j} = 1, then p_{i,j} ∈ A_cp and p_{i,j+1} ∈ A_uc;
if j = n-1 and μ_{i,j} = 1, then p_{i,n} ∈ A_cp;
if j = n-1 and μ_{i,j} = 0, then p_{i,n} ∈ A_ed1.
Step 4.7: if j = n, go to Step 4.8; otherwise set j = j+1 and return to Step 4.5.
Step 4.8: if i = m, end this pass; otherwise set i = i+1, j = 1 and return to Step 4.4.
Through the operations of Steps 4.2 to 4.8, all points in the depth map feature point candidate region point set are classified in row order as normal points, bad points or edge points.
Step 4.9: let l'_{i,j} be the perpendicular drawn from the point of p_{i,j} and p_{i+1,j} that is nearer to the optical center of the laser scanning radar to the line joining the farther point and the optical center; let Δρ'_{i,j} be the distance between p_{i,j} and p_{i+1,j}; and let Δθ'_{i,j} be the angle between the segment joining p_{i,j} and p_{i+1,j} and l'_{i,j}.
Step 4.10: reset i = 1, j = 1, i.e. select the first point of the first column.
Step 4.11: classify this point as a good point.
Step 4.12: compute Δρ'_{i,j} (the distance between p_{i,j} and p_{i+1,j}) and Δθ'_{i,j} (the angle between the segment p_{i,j}p_{i+1,j} and l'_{i,j}) from Equation (2):
\[
\begin{bmatrix} \Delta\rho'_{i,j} \\ \Delta\theta'_{i,j} \end{bmatrix}
=
\begin{bmatrix}
\left(\rho_{i+1,j}^{2} + \rho_{i,j}^{2} - 2\rho_{i,j}\rho_{i+1,j}\cos\theta_{rs}\right)^{1/2} \\
\arcsin\left(\left(l\rho'_{i,j} - s\rho'_{i,j}\cos\theta_{rs}\right)/\Delta\rho'_{i,j}\right)
\end{bmatrix}
\qquad (2)
\]
where lρ'_{i,j} is the larger of ρ_{i,j} and ρ_{i+1,j}, sρ'_{i,j} is the smaller of ρ_{i,j} and ρ_{i+1,j}, and θ_rs is the angular scanning resolution of the laser scanning radar.
Step 4.13: determine the type of point p_{i+1,j}, i.e. whether it is a normal point, a bad point or an edge point. The specific operations are:
(a) If the distance Δρ'_{i,j} between p_{i,j} and p_{i+1,j} obtained in Step 4.12 is greater than the manually set threshold Δρ_th, and the angle Δθ'_{i,j} between the segment p_{i,j}p_{i+1,j} and l'_{i,j} is greater than the manually set threshold Δθ_th, then set the flag μ_{i,j} to 1; otherwise set μ_{i,j} to 0.
(b) If p_{i,j} ∈ A_g and μ_{i,j} = 1, then p_{i,j} ∈ A_ed2 and p_{i+1,j} ∈ A_uc;
if p_{i,j} ∈ A_g and μ_{i,j} = 0, then p_{i,j} ∈ A_nr2 and p_{i+1,j} ∈ A_g;
if p_{i,j} ∈ A_uc and μ_{i,j} = 1, then p_{i,j} ∈ A_cp and p_{i+1,j} ∈ A_uc;
if p_{i,j} ∈ A_uc and μ_{i,j} = 0, then p_{i,j} ∈ A_ed2 and p_{i+1,j} ∈ A_g;
if p_{i,j} ∈ A_cp and μ_{i,j} = 1, then p_{i,j} ∈ A_cp and p_{i+1,j} ∈ A_uc;
if i = m-1 and μ_{i,j} = 1, then p_{m,j} ∈ A_cp;
if i = m-1 and μ_{i,j} = 0, then p_{m,j} ∈ A_ed2.
Step 4.14: if i = m, go to Step 4.15; otherwise set i = i+1 and return to Step 4.12.
Step 4.15: if j = n, end this pass; otherwise set i = 1, j = j+1 and return to Step 4.11.
Through the operations of Steps 4.9 to 4.15, all points in the depth map feature point candidate region point set are classified in column order as normal points, bad points or edge points.
Step 4.16: take the mean of the x and z coordinates of all points in the edge point set A_ed1 as the x and z coordinates of the depth map feature point, and take the mean of the y coordinates of all points in the edge point set A_ed2 as the y coordinate of the depth map feature point.
Through the above operations, the depth map feature point coordinates are obtained.
Step 5: on the basis of Step 4, obtain the coordinate transformation from the camera coordinate system to the laser scanning radar coordinate system.
The spatial alignment module uses the obtained camera image feature point coordinates and depth map feature point coordinates, together with the camera calibration matrix K transmitted by the camera calibration module, to spatially align the laser scanning radar and the camera. This yields the coordinate transformation from the camera coordinate system to the laser scanning radar coordinate system, comprising a rotation matrix and a translation matrix, which are sent to the laser point cloud coloring module.
The coordinate transformation from the camera coordinate system to the laser scanning radar coordinate system is obtained with the method disclosed in "Spatial alignment method for vision sensor and range lidar" (Fu Mengyin, Liu Mingyang; Infrared and Laser Engineering, 2009, 38(1)).
Step 6: obtain the three-dimensional point cloud data of the target scene with the laser scanning radar.
On the basis of Step 5, the mechanical wrist rotation parameters and the laser scanning radar scan parameters are entered through the human-computer interaction interface and sent to the motion control module and the laser scanning radar respectively; the motion control module drives the mechanical wrist to rotate while the target scene is laser-scanned, and the resulting three-dimensional point cloud data is sent to the laser point cloud coloring module.
Step 7: obtain the camera image of the target scene with the camera.
On the basis of Step 6, the mechanical wrist rotation parameters are entered through the human-computer interaction interface and sent to the motion control module; the motion control module drives the mechanical wrist so that the camera faces the target scene scanned by the laser scanning radar; the camera image of the target scene is captured and then sent to the laser point cloud coloring module.
Step 8: on the basis of Step 7, the laser point cloud coloring module uses the target scene three-dimensional point cloud data transmitted by the laser scanning radar, the target scene image transmitted by the camera, the camera projection matrix K provided by the camera calibration module, and the rotation matrix and translation matrix provided by the spatial alignment module to find, by coordinate transformation, the point in the camera image corresponding to each three-dimensional point and to extract its color. The three-dimensional position coordinates of each point and the three RGB color components of its corresponding image point are merged into a six-dimensional coordinate, giving the color three-dimensional point cloud data, which is sent to the attitude and azimuth reference module.
The specific steps of the laser point cloud coloring are:
(1) Let P be any scan point in the three-dimensional point cloud data obtained by the laser scanning radar scanning the scene, and let its three-dimensional coordinates in the laser scanning radar coordinate system be X_LMS = [x_LMS  y_LMS  z_LMS]^T.
(2) After rotation and translation, the coordinates of point P in the camera coordinate system are:
\[
\begin{bmatrix} x_{camera} \\ y_{camera} \\ z_{camera} \end{bmatrix}
= R^{-1}\begin{bmatrix} x_{LMS} \\ y_{LMS} \\ z_{LMS} \end{bmatrix} - T
\]
(3) Further, the coordinates of point P in the image coordinate system are:
\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= K \cdot \begin{bmatrix} x_{camera}/z_{camera} \\ y_{camera}/z_{camera} \\ 1 \end{bmatrix}
\]
(4) Using the image coordinates of point P, the color values R, G, B of point P can be indexed.
(5) Merging the rectangular space coordinates of point P with its color values gives the six-dimensional representation of the position and color of point P:
\[
X_{LMS\text{-}RGB} = \begin{bmatrix} x_{LMS} & y_{LMS} & z_{LMS} & R & G & B \end{bmatrix}^{T}
\]
After the above six-dimensional representation has been computed for every point scanned by the laser scanning radar, the color three-dimensional point cloud is obtained.
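A compact sketch of coloring steps (1) to (5), assuming the image is an undistorted array indexed as image[v, u] and that R, T and K are those produced by the earlier steps; points that project outside the image or behind the camera are skipped (a choice of ours, not stated in the patent):

```python
# Sketch of the point cloud coloring: project each lidar point into the image per
# steps (2)-(3) above and attach the pixel's color channels per steps (4)-(5).
import numpy as np

def color_point_cloud(points_lms, image, K, R, T):
    R_inv = np.linalg.inv(R)
    T = np.asarray(T, dtype=np.float64).reshape(3)
    h, w = image.shape[:2]
    colored = []
    for X_lms in points_lms:
        X_cam = R_inv @ np.asarray(X_lms, dtype=np.float64) - T      # step (2)
        if X_cam[2] <= 0:                     # behind the camera, not visible
            continue
        uvw = K @ np.array([X_cam[0] / X_cam[2], X_cam[1] / X_cam[2], 1.0])  # step (3)
        u, v = int(round(uvw[0])), int(round(uvw[1]))
        if 0 <= u < w and 0 <= v < h:
            rgb = image[v, u][:3]             # channel order follows the image storage
            colored.append([*X_lms, *rgb])    # step (5): six-dimensional point
    return np.array(colored)
```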
Step 9: complete the coloring of the laser point cloud. Specifically:
On the basis of Step 8, the attitude and azimuth reference module transforms the color three-dimensional point cloud data transmitted by the laser point cloud coloring module into the earth coordinate system and sends the transformed color three-dimensional point cloud data to the human-computer interaction module for display. This completes the coloring of the laser point cloud.
The method for transforming the color three-dimensional point cloud transmitted by the laser point cloud coloring module into the earth coordinate system is:
Let the yaw angle of the laser scanning radar measured by the attitude and azimuth reference module be α, the pitch angle β and the roll angle γ, and let the coordinates of a point of the three-dimensional point cloud in the laser scanning radar coordinate system be X_d:
\[
X_d = \begin{bmatrix} x_d \\ y_d \\ z_d \end{bmatrix}
\]
Then the coordinates of the laser scan point in the earth coordinate system are X_e:
\[
X_e = \begin{bmatrix} x_e \\ y_e \\ z_e \end{bmatrix}
=
\begin{bmatrix}
\cos\alpha\cos\beta & \sin\alpha\cos\beta & -\sin\beta \\
\cos\alpha\sin\beta\sin\gamma-\sin\alpha\cos\gamma & \sin\alpha\sin\beta\sin\gamma+\cos\alpha\cos\gamma & \cos\beta\sin\gamma \\
\cos\alpha\sin\beta\cos\gamma+\sin\alpha\sin\gamma & \sin\alpha\sin\beta\cos\gamma+\cos\alpha\sin\gamma & \cos\beta\cos\gamma
\end{bmatrix}
\begin{bmatrix} x_d \\ y_d \\ z_d \end{bmatrix}
\]
On the basis of Step 8, the color three-dimensional laser point cloud transformed into the earth coordinate system is X_e-RGB:
\[
X_{e\text{-}RGB} = \begin{bmatrix} x_e & y_e & z_e & R & G & B \end{bmatrix}^{T}
\]
The color three-dimensional laser point cloud transformed into the earth coordinate system is sent to the human-computer interaction interface and displayed with OpenGL.
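As an illustration of Step 9, the rotation written out above can be applied to every colored point; alpha, beta and gamma are the yaw, pitch and roll reported by the attitude and azimuth reference module, and an array layout of [x, y, z, R, G, B] per point is assumed:

```python
# Sketch of Step 9: rotate the coloured points into the earth coordinate system
# using the matrix written out above; the colour channels are left untouched.
import numpy as np

def to_earth_frame(colored_points, alpha, beta, gamma):
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    C = np.array([
        [ca * cb,                sa * cb,                -sb     ],
        [ca * sb * sg - sa * cg, sa * sb * sg + ca * cg,  cb * sg],
        [ca * sb * cg + sa * sg, sa * sb * cg + ca * sg,  cb * cg],
    ])
    out = np.array(colored_points, dtype=np.float64)
    out[:, :3] = out[:, :3] @ C.T        # X_e = C @ X_d, applied row-wise
    return out
```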
Through the above steps, the color laser point cloud imaging based on the mobile platform is completed.
The invention is not limited to this example; any simple modification made according to the design idea of the present invention falls within the protection scope of the present invention.

Claims (4)

1. A color laser point cloud imaging system, characterized in that it comprises: a mobile platform, a support rod (4), a camera, a camera calibration module, a motion control module, a human-computer interaction module, a spatial alignment module, a mechanical wrist, a depth map construction module, a filter module, a laser scanning radar, an attitude and azimuth reference module, and a laser point cloud coloring module; wherein the mechanical wrist comprises connecting shaft A (1), connecting shaft B (2) and a connecting cantilever (3);
the support rod (4) and the mechanical wrist form a support frame; the support rod (4) and the connecting cantilever (3) are connected to connecting shaft B (2) through connecting shaft A (1); connecting shaft A (1) is vertical, connecting shaft B (2) is horizontal, and both shafts can rotate about their own axes; the support rod (4) is not rigidly connected to connecting shaft A (1) and connecting shaft B (2), whereas the connecting cantilever (3) is rigidly connected to connecting shaft A (1) and connecting shaft B (2), that is: when connecting shaft A (1) rotates about its axis, the support rod (4) does not rotate with the shaft and the connecting cantilever (3) rotates at the same speed as connecting shaft A (1); when connecting shaft B (2) rotates about its axis, the support rod (4) does not rotate with the shaft and the connecting cantilever (3) rotates at the same speed as connecting shaft B (2); the support frame is fixed on the mobile platform through the support rod (4), and the connecting cantilever (3) does not touch the mobile platform; the camera and the laser scanning radar are fixed on the connecting cantilever (3); the optical center of the camera and the optical center of the laser scanning radar are at equal distances from the axis of connecting shaft B (2), and the optical center of the camera and the optical center of the laser scanning radar lie in the same vertical plane;
the camera is connected to the human-computer interaction module, the camera calibration module, the laser point cloud coloring module and the mechanical wrist; the camera calibration module is connected to the camera, the spatial alignment module and the laser point cloud coloring module; the motion control module is connected to the mechanical wrist, the human-computer interaction module and the mobile platform; the human-computer interaction module is connected to the laser scanning radar, the camera, the motion control module, the depth map construction module, the spatial alignment module and the attitude and azimuth reference module; the spatial alignment module is connected to the human-computer interaction module, the filter module, the camera calibration module and the laser point cloud coloring module; the mechanical wrist is connected to the motion control module, the laser scanning radar and the camera; the depth map construction module is connected to the laser scanning radar and the human-computer interaction module; the filter module is connected to the human-computer interaction module and the spatial alignment module; the laser scanning radar is connected to the mechanical wrist, the human-computer interaction module and the laser point cloud coloring module; the attitude and azimuth reference module is connected to the human-computer interaction module and the laser point cloud coloring module; the laser point cloud coloring module is connected to the laser scanning radar, the camera, the human-computer interaction module, the attitude and azimuth reference module, the spatial alignment module and the camera calibration module;
the camera acquires camera image data and transmits it to the camera calibration module, the human-computer interaction module and the laser point cloud coloring module;
the camera calibration module receives the image data acquired by the camera, computes the camera projection matrix K, and sends K to the spatial alignment module and the laser point cloud coloring module;
the motion control module drives the rotation of the mechanical wrist and the motion of the mobile platform according to the commands of the human-computer interaction module;
the main functions of the human-computer interaction module are to provide the input and output functions of human-computer interaction, including but not limited to: (1) input of the rotation parameters of the mechanical wrist; (2) input of the motion parameters of the mobile platform; (3) input of the scan parameters of the laser scanning radar; (4) receiving and displaying the images acquired by the camera and the depth map produced by the depth map construction module; (5) receiving and displaying the color three-dimensional point cloud data transmitted by the attitude and azimuth reference module; (6) selecting feature points on the camera image and sending the feature point coordinates in the camera image to the spatial alignment module; (7) selecting, in the depth map, the point set of the candidate region of each corresponding feature point and sending its coordinates to the filter module; the feature point candidate region point set is the candidate region in the depth map corresponding to a feature point selected on the camera image;
the spatial alignment module receives the camera image feature point coordinates transmitted by the human-computer interaction module, the corresponding depth map feature point coordinates transmitted by the filter module, and the camera projection matrix K transmitted by the camera calibration module, obtains the coordinate transformation from the camera coordinate system to the laser scanning radar coordinate system, comprising a rotation matrix and a translation matrix, with the spatial transformation method disclosed in the document "Spatial alignment method for vision sensor and range lidar", and sends the rotation matrix and the translation matrix to the laser point cloud coloring module;
driven by the motion control module, the mechanical wrist can perform the following two kinds of motion: connecting shaft B (2) rotates about its axis, rotating the laser scanning radar fixed on it in the pitch direction; and connecting shaft A (1) rotates about its axis, rotating the laser scanning radar in the horizontal direction;
the depth map construction module receives the three-dimensional point cloud data from the laser scanning radar, processes it to obtain a depth map, and sends the depth map to the human-computer interaction module;
the filter module receives the depth map feature point candidate region point set transmitted by the human-computer interaction module, processes it to obtain the depth map feature point coordinates, and sends the depth map feature point coordinates to the spatial alignment module;
the laser scanning radar receives the scan parameters transmitted by the human-computer interaction module, scans the scene while being driven by the mechanical wrist, obtains three-dimensional point cloud data, and sends it to the depth map construction module and the laser point cloud coloring module;
the attitude and azimuth reference module measures the attitude of the laser scanning radar in the earth coordinate system and establishes the transformation between the earth coordinate system and the laser scanning radar coordinate system, the attitude information of the laser scanning radar including but not limited to the pitch angle, roll angle and azimuth angle; it then transforms the color three-dimensional point cloud data transmitted by the laser point cloud coloring module into the earth coordinate system and sends the transformed color three-dimensional point cloud data to the human-computer interaction module for display;
the laser point cloud coloring module performs the coloring of the laser point cloud, that is: it receives the three-dimensional point cloud data transmitted by the laser scanning radar and the image data transmitted by the camera, together with the rotation matrix and translation matrix provided by the spatial alignment module and the camera projection matrix K provided by the camera calibration module, finds the point in the camera image data corresponding to each three-dimensional point, and extracts its color; the three-dimensional position coordinates of each point and the three RGB color components of its corresponding image point are merged into a six-dimensional coordinate, giving the color three-dimensional point cloud data, which is sent to the human-computer interaction module for display, completing the coloring of the laser point cloud;
Its workflow is:
Step 1: calibrate the camera. Specifically: the mechanical wrist rotation parameters are input through the human-computer interaction module and passed to the motion control module; the motion control module drives the mechanical wrist to rotate according to the received rotation parameters so that the camera faces straight ahead; images are acquired and sent to the camera calibration module for processing, which yields the camera calibration matrix K; the camera calibration matrix K is transmitted to the spatial alignment module and the laser point cloud dyeing module (a calibration sketch follows the parameter definitions below);
The rotation parameters of the mechanical wrist include but are not limited to: start angle, end angle, and step angle;
The motion parameters of the mobile platform include but are not limited to: start, stop, direction, and speed commands;
The laser scanning radar scan parameters include but are not limited to: scanning field of view and scanning resolution;
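A minimal sketch of planar-target calibration in the spirit of the method cited in claim 3; the use of OpenCV, a checkerboard target, and the particular board dimensions are assumptions of this illustration rather than requirements of the system:

```python
import cv2
import numpy as np

def calibrate_camera(images, board_size=(9, 6), square_size=0.03):
    """Estimate the camera matrix K from several views of a planar checkerboard.

    images      : list of grayscale images of the board in different poses.
    board_size  : inner-corner grid of the checkerboard (columns, rows).
    square_size : side length of one square in metres.
    """
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size
    obj_points, img_points = [], []
    for img in images:
        found, corners = cv2.findChessboardCorners(img, board_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
    # imageSize is (width, height) for a grayscale image
    _, K, dist, _, _ = cv2.calibrateCamera(
        obj_points, img_points, images[0].shape[::-1], None, None)
    return K, dist
```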
Step 2: obtain the depth map of the spatial alignment template; the spatial alignment template is the one disclosed in the document "Spatial alignment method for vision sensor and laser range radar";
This step and step 1 may be performed in either order. Specifically: the mechanical wrist rotation parameters and the laser scanning radar scan parameters are input through the human-computer interaction interface and transmitted to the motion control module and the laser scanning radar respectively; the motion control module drives the mechanical wrist to rotate while the laser scanning radar scans the spatial alignment template; the resulting three-dimensional point cloud data is sent to the depth map establishing module, which processes it into the depth map of the spatial alignment template and transmits it to the human-computer interaction module;
Step 3: on the basis of step 2, obtain the camera image of the spatial alignment template;
The mechanical wrist rotation parameters are input through the human-computer interaction interface and transmitted to the motion control module; the motion control module drives the mechanical wrist to rotate so that the camera is aimed at the scene scanned by the laser scanning radar in step 2; a picture is taken and the resulting camera image is transmitted to the human-computer interaction module;
Step 4: on the basis of step 3, obtain the camera-image feature point coordinates and the corresponding depth-map feature point coordinates;
Through the human-computer interaction module, feature points are selected in the camera image obtained in step 3 using the spatial alignment method disclosed in the document "Spatial alignment method for vision sensor and laser range radar", and the selected camera-image feature point coordinates are transmitted to the spatial alignment module; the depth-map feature points corresponding to the camera-image feature points are obtained in the depth map, and their coordinates are transmitted to the spatial alignment module;
Step 5: obtain the coordinate transformation matrix from the camera coordinate system to the laser scanning radar coordinate system;
On the basis of step 4, the spatial alignment module uses the obtained camera-image feature point coordinates and depth-map feature point coordinates, combined with the camera calibration matrix K transmitted by the camera calibration module, to perform the spatial alignment of the laser scanning radar and the camera, obtaining the coordinate transformation from the camera coordinate system to the laser scanning radar coordinate system, comprising a rotation matrix and a translation matrix, which is transmitted to the laser point cloud dyeing module;
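The spatial alignment method of the cited document is not reproduced here; as an illustrative alternative, the sketch below solves the same 2D-3D correspondence problem with OpenCV's perspective-n-point solver (function and variable names are assumptions of this sketch). Because claim 1 defines the stored transform as camera-to-radar, the solvePnP result (radar-to-camera) is inverted before being returned:

```python
import cv2
import numpy as np

def align_camera_to_lidar(lidar_pts_3d, image_pts_2d, K):
    """Estimate the camera-to-radar transform from 3D/2D feature correspondences.

    lidar_pts_3d : (N, 3) depth-map feature points in the radar frame.
    image_pts_2d : (N, 2) corresponding feature points in the camera image.
    K            : (3, 3) camera calibration matrix.
    """
    ok, rvec, tvec = cv2.solvePnP(
        lidar_pts_3d.astype(np.float64),
        image_pts_2d.astype(np.float64),
        K.astype(np.float64),
        None)                                 # no lens distortion assumed
    if not ok:
        raise RuntimeError("pose estimation failed")
    R_lc, _ = cv2.Rodrigues(rvec)             # radar frame -> camera frame
    R_cl = R_lc.T                              # camera frame -> radar frame
    t_cl = -R_cl @ tvec.reshape(3)
    return R_cl, t_cl
```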
Step 6: use the laser scanning radar to obtain the three-dimensional point cloud data of the target scene;
On the basis of step 5, the mechanical wrist rotation parameters and the laser scanning radar scan parameters are input through the human-computer interaction interface and transmitted to the motion control module and the laser scanning radar respectively; the motion control module drives the mechanical wrist to rotate while the laser scanning radar scans the target scene, and the resulting three-dimensional point cloud data is transmitted to the laser point cloud dyeing module;
Step 7: use the camera to obtain the camera image of the target scene;
On the basis of step 6, the mechanical wrist rotation parameters are input through the human-computer interaction interface and transmitted to the motion control module; the motion control module drives the mechanical wrist to rotate so that the camera faces the target scene scanned by the laser scanning radar; the camera image of the target scene is taken and then transmitted to the laser point cloud dyeing module;
Step 8: obtain the color three-dimensional point cloud data;
On the basis of step 7, the laser point cloud dyeing module uses the target-scene three-dimensional point cloud data transmitted by the laser scanning radar, the target-scene image transmitted by the camera, the camera projection matrix K provided by the camera calibration module, and the rotation matrix and translation matrix provided by the spatial alignment module to find, by coordinate transformation, the corresponding point of each three-dimensional point in the camera image and to extract the color information of that corresponding point; the three-dimensional position coordinates of each point and the three-dimensional RGB color of its corresponding image point are merged into a six-dimensional coordinate, giving the color three-dimensional point cloud data, which is transmitted to the attitude and azimuth reference module;
Step 9: transform the color three-dimensional point cloud data from the laser scanning radar coordinate system into the earth coordinate system and display it;
On the basis of step 8, the attitude and azimuth reference module converts the color three-dimensional point cloud data transmitted by the laser point cloud dyeing module into the earth coordinate system, and the converted color three-dimensional point cloud data is transmitted to the human-computer interaction module for display;
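A minimal sketch of the conversion in step 9, assuming the attitude is applied as a Z (azimuth), Y (pitch), X (roll) rotation sequence and that the radar position in the earth frame is known; the actual conventions of the attitude and azimuth reference module may differ:

```python
import numpy as np

def radar_to_earth(colored_points, yaw, pitch, roll, radar_position=np.zeros(3)):
    """Rotate colored points (x, y, z, r, g, b) from the radar frame into the
    earth frame using the attitude angles, then translate by the radar position.
    Angles are in radians; a Z (azimuth) - Y (pitch) - X (roll) order is assumed.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    R = Rz @ Ry @ Rx
    out = colored_points.copy()
    out[:, :3] = colored_points[:, :3] @ R.T + radar_position
    return out
```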
Through the above steps, color laser point cloud imaging based on a mobile platform is accomplished.
2. The color laser point cloud imaging system according to claim 1, characterized in that the method for obtaining the depth-map feature point coordinates described in step 4 of its workflow is:
Step 1: first give the relevant definitions:
The mixed pixels and noise pixels in the depth-map feature-point candidate-region point set are defined as bad points, whose set is the bad point set A_cp. All other points in the candidate-region point set, excluding the mixed pixels and noise pixels, are defined as good points, whose set is the good point set A_g. Good points are further divided into boundary points and normal points; corresponding to the row-wise and column-wise classifications, their sets are respectively the boundary point set A_ed1 and the normal point set A_nr1 (row-wise), and the boundary point set A_ed2 and the normal point set A_nr2 (column-wise). A boundary point is a scan point corresponding to the boundary of an actual object in the laser-scanned scene; a normal point is any other point in the good point set A_g that is not a boundary point. Points whose type is not yet determined, including candidate bad points and candidate boundary points, are defined as undetermined points, whose set is the undetermined point set A_uc;
Step 2: number all points in the depth-map feature-point candidate-region point set as p_i,j, where i is the row index and j the column index, 1 ≤ i ≤ m+1, 1 ≤ j ≤ n+1, and i, j, m, n are positive integers. Let ρ_i,j be the distance from the laser scanning radar optical center to p_i,j; let l_i,j be the perpendicular drawn from whichever of the two points p_i,j and p_i,j+1 is closer to the laser scanning radar optical center to the line joining the farther point and the optical center; let Δρ_i,j be the distance between p_i,j and p_i,j+1; and let Δθ_i,j be the angle between the line joining p_i,j and p_i,j+1 and l_i,j;
Step 3: set i = 1, j = 1, that is, choose the first point of the first row;
Step 4: classify this point as a good point;
Step 5: the values of Δρ_i,j and Δθ_i,j are calculated by formula (1):
\[
\begin{pmatrix}\Delta\rho_{i,j}\\ \Delta\theta_{i,j}\end{pmatrix}
=
\begin{pmatrix}\left(\rho_{i,j+1}^{2}+\rho_{i,j}^{2}-2\,\rho_{i,j}\,\rho_{i,j+1}\cos\theta_{rs}\right)^{1/2}\\[2pt]
\arcsin\!\left(\left(l\rho_{i,j}-s\rho_{i,j}\cos\theta_{rs}\right)/\Delta\rho_{i,j}\right)\end{pmatrix}
\qquad(1)
\]
where lρ_i,j is the larger of ρ_i,j and ρ_i,j+1, sρ_i,j is the smaller of ρ_i,j and ρ_i,j+1, and θ_rs is the scan angular resolution of the laser scanning radar;
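For illustration, formula (1) can be evaluated directly from two adjacent range returns (names are assumptions of this sketch):

```python
import numpy as np

def adjacent_point_gap(rho_a, rho_b, theta_rs):
    """Compute the gap distance and gap angle of formula (1) for two adjacent
    range returns rho_a = rho_i,j and rho_b = rho_i,j+1 separated by the scan
    angular resolution theta_rs (radians)."""
    l_rho, s_rho = max(rho_a, rho_b), min(rho_a, rho_b)
    d_rho = np.sqrt(rho_a**2 + rho_b**2 - 2.0 * rho_a * rho_b * np.cos(theta_rs))
    d_theta = np.arcsin((l_rho - s_rho * np.cos(theta_rs)) / max(d_rho, 1e-12))
    return d_rho, d_theta
```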
Step 6: determine the type of point p_i,j+1, that is, whether p_i,j+1 is a normal point, a bad point, or a boundary point; the specific operations are:
(1) if the distance Δρ_i,j between p_i,j and p_i,j+1 obtained in step 5 is greater than a manually set threshold Δρ_th, and the angle Δθ_i,j between the line joining p_i,j and p_i,j+1 and l_i,j is greater than a manually set threshold Δθ_th, then set a flag μ_i,j and set its value to 1; otherwise, set the value of μ_i,j to 0;
(2) if p_i,j ∈ A_g and μ_i,j = 1, then p_i,j ∈ A_ed1 and p_i,j+1 ∈ A_uc;
if p_i,j ∈ A_g and μ_i,j = 0, then p_i,j ∈ A_nr1 and p_i,j+1 ∈ A_g;
if p_i,j ∈ A_uc and μ_i,j = 1, then p_i,j ∈ A_cp and p_i,j+1 ∈ A_uc;
if p_i,j ∈ A_uc and μ_i,j = 0, then p_i,j ∈ A_ed1 and p_i,j+1 ∈ A_g;
if p_i,j ∈ A_cp and μ_i,j = 1, then p_i,j ∈ A_cp and p_i,j+1 ∈ A_uc;
if j = n-1 and μ_i,j = 1, then p_i,n ∈ A_cp;
if j = n-1 and μ_i,j = 0, then p_i,n ∈ A_ed1;
Step 7: check whether j = n; if so, go to step 8; otherwise set j = j+1 and return to step 5;
Step 8: check whether i = m; if so, the operation ends; otherwise set i = i+1, j = 1, and return to step 4;
Through the operations of steps 2 to 8, all points in the depth-map feature-point candidate-region point set have been classified as normal points, bad points, or boundary points in row order;
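A minimal sketch of this row-wise pass (steps 3 to 8) with the flag logic of step 6; the data layout, the string labels, and the handling of the one case the claim leaves unspecified (the successor of a bad point when μ = 0) are assumptions of this illustration:

```python
import numpy as np

NORMAL, BOUNDARY, BAD, GOOD, UNDETERMINED = "normal", "boundary", "bad", "good", "undetermined"

def classify_row(rho_row, theta_rs, d_rho_th, d_theta_th):
    """Row-wise pass: label each range return of one scan row as a normal,
    boundary, or bad point using the flag rules of step 6."""
    n = len(rho_row)
    labels = [GOOD] + [None] * (n - 1)              # step 4: the first point is a good point
    for j in range(n - 1):
        a, b = rho_row[j], rho_row[j + 1]
        l_rho, s_rho = max(a, b), min(a, b)
        d_rho = np.sqrt(a**2 + b**2 - 2.0 * a * b * np.cos(theta_rs))        # formula (1)
        d_theta = np.arcsin((l_rho - s_rho * np.cos(theta_rs)) / max(d_rho, 1e-12))
        mu = 1 if (d_rho > d_rho_th and d_theta > d_theta_th) else 0         # step 6, rule (1)
        state = labels[j]
        if state == GOOD:                            # step 6, rule (2)
            labels[j] = BOUNDARY if mu else NORMAL
        elif state == UNDETERMINED:
            labels[j] = BAD if mu else BOUNDARY
        # a bad point keeps its label; the claim does not specify the successor
        # of a bad point when mu == 0, so it is treated as good here
        labels[j + 1] = UNDETERMINED if mu else GOOD
        if j == n - 2:                               # last-point rule of step 6
            labels[j + 1] = BAD if mu else BOUNDARY
    return labels
```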
Step 9: let l'_i,j be the perpendicular drawn from whichever of the two points p_i,j and p_i+1,j is closer to the laser scanning radar optical center to the line joining the farther point and the optical center; let Δρ'_i,j be the distance between p_i,j and p_i+1,j; and let Δθ'_i,j be the angle between the line joining p_i,j and p_i+1,j and l'_i,j;
Step 10: reset i = 1, j = 1, that is, choose the first point of the first column;
Step 11: classify this point as a good point;
Step 12: the values of Δρ'_i,j and Δθ'_i,j are calculated by formula (2):
\[
\begin{pmatrix}\Delta\rho'_{i,j}\\ \Delta\theta'_{i,j}\end{pmatrix}
=
\begin{pmatrix}\left(\rho_{i+1,j}^{2}+\rho_{i,j}^{2}-2\,\rho_{i,j}\,\rho_{i+1,j}\cos\theta_{rs}\right)^{1/2}\\[2pt]
\arcsin\!\left(\left(l\rho'_{i,j}-s\rho'_{i,j}\cos\theta_{rs}\right)/\Delta\rho'_{i,j}\right)\end{pmatrix}
\qquad(2)
\]
where lρ'_i,j is the larger of ρ_i,j and ρ_i+1,j, sρ'_i,j is the smaller of ρ_i,j and ρ_i+1,j, and θ_rs is the scan angular resolution of the laser scanning radar;
Step 13: determine the type of point p_i+1,j, that is, whether p_i+1,j is a normal point, a bad point, or a boundary point; the specific operations are:
(1) if the distance Δρ'_i,j between p_i,j and p_i+1,j obtained in step 12 is greater than the manually set threshold Δρ_th, and the angle Δθ'_i,j between the line joining p_i,j and p_i+1,j and l'_i,j is greater than the manually set threshold Δθ_th, then set the flag μ_i,j to 1; otherwise, set the value of μ_i,j to 0;
(2) if p_i,j ∈ A_g and μ_i,j = 1, then p_i,j ∈ A_ed2 and p_i+1,j ∈ A_uc;
if p_i,j ∈ A_g and μ_i,j = 0, then p_i,j ∈ A_nr2 and p_i+1,j ∈ A_g;
if p_i,j ∈ A_uc and μ_i,j = 1, then p_i,j ∈ A_cp and p_i+1,j ∈ A_uc;
if p_i,j ∈ A_uc and μ_i,j = 0, then p_i,j ∈ A_ed2 and p_i+1,j ∈ A_g;
if p_i,j ∈ A_cp and μ_i,j = 1, then p_i,j ∈ A_cp and p_i+1,j ∈ A_uc;
if i = m-1 and μ_i,j = 1, then p_m,j ∈ A_cp;
if i = m-1 and μ_i,j = 0, then p_m,j ∈ A_ed2;
Step 14: check whether i = m; if so, go to step 15; otherwise set i = i+1 and return to step 12;
Step 15: check whether j = n; if so, the operation ends; otherwise set i = 1, j = j+1, and return to step 11;
Through the operations of steps 9 to 15, all points in the depth-map feature-point candidate-region point set have been classified as normal points, bad points, or boundary points in column order;
Step 16: take the mean of the x and z coordinates of all points in the boundary point set A_ed1 as the x and z coordinates of the depth-map feature point, and take the mean of the y coordinates of all points in the boundary point set A_ed2 as the y coordinate of the depth-map feature point;
Through the operations of the above steps, the depth-map feature point coordinates are obtained.
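The averaging of step 16 amounts to the following (hypothetical names, assuming each boundary point carries its x, y, z coordinates):

```python
import numpy as np

def feature_point_from_boundaries(boundary_rowwise_xyz, boundary_colwise_xyz):
    """Step 16: the feature point's x and z come from the row-wise boundary set
    A_ed1 and its y from the column-wise boundary set A_ed2 (coordinate means)."""
    a1 = np.asarray(boundary_rowwise_xyz, dtype=float)   # points of A_ed1, shape (N1, 3)
    a2 = np.asarray(boundary_colwise_xyz, dtype=float)   # points of A_ed2, shape (N2, 3)
    x, z = a1[:, 0].mean(), a1[:, 2].mean()
    y = a2[:, 1].mean()
    return np.array([x, y, z])
```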
3. The color laser point cloud imaging system according to claim 1 or 2, characterized in that the method for obtaining the camera calibration matrix K described in step 1 of its workflow adopts the method disclosed in the document "Flexible camera calibration by viewing a plane from unknown orientations".
4. The color laser point cloud imaging system according to claim 1 or 2, characterized in that the method for obtaining the coordinate transformation matrix from the camera coordinate system to the laser scanning radar coordinate system described in step 5 of its workflow adopts the method disclosed in the document "Spatial alignment method for vision sensor and laser range radar".
CN 201010160499 2010-04-30 2010-04-30 Mobile platform-based color laser point cloud imaging system Pending CN101825442A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010160499 CN101825442A (en) 2010-04-30 2010-04-30 Mobile platform-based color laser point cloud imaging system

Publications (1)

Publication Number Publication Date
CN101825442A true CN101825442A (en) 2010-09-08

Family

ID=42689497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010160499 Pending CN101825442A (en) 2010-04-30 2010-04-30 Mobile platform-based color laser point cloud imaging system

Country Status (1)

Country Link
CN (1) CN101825442A (en)

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
《光学技术》 (Optical Technique), Vol. 36, No. 1, January 2010, Deng Zhihong et al., "An improved feature matching point extraction algorithm for vision sensor and laser range radar", pp. 43-47 *
《华中科技大学学报(自然科学版 增刊I)》 (Journal of Huazhong University of Science and Technology, Natural Science Edition, Supplement I), Vol. 36, October 2008, Liu Daxue et al., "A calibration method for a single-line laser radar and a visible-light camera", pp. 68-71 *
《红外与激光工程》 (Infrared and Laser Engineering), Vol. 38, No. 1, February 2009, Fu Mengyin et al., "Spatial alignment method for vision sensor and laser range radar", pp. 74-78, relevant to claims 1-4 *

Cited By (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102074047A (en) * 2011-01-06 2011-05-25 天津市星际空间地理信息工程有限公司 High-fineness urban three-dimensional modeling method
CN102074047B (en) * 2011-01-06 2012-08-08 天津市星际空间地理信息工程有限公司 High-fineness urban three-dimensional modeling method
CN102168954A (en) * 2011-01-14 2011-08-31 浙江大学 Monocular-camera-based method for measuring depth, depth field and sizes of objects
CN102185996A (en) * 2011-05-11 2011-09-14 上海融磁电子有限公司 Shooting type scanner for imaging local imaging surface
CN102185996B (en) * 2011-05-11 2013-07-10 上海融磁电子有限公司 Shooting type scanner for imaging local imaging surface
CN102314674A (en) * 2011-08-29 2012-01-11 北京建筑工程学院 Registering method for data texture image of ground laser radar
CN102506830A (en) * 2011-11-21 2012-06-20 奇瑞汽车股份有限公司 Vision-based positioning method and device
CN102506830B (en) * 2011-11-21 2014-03-12 奇瑞汽车股份有限公司 Vision-based positioning method and device
CN104374376A (en) * 2014-11-05 2015-02-25 北京大学 Vehicle-mounted three-dimensional measurement system device and application thereof
CN104374376B (en) * 2014-11-05 2016-06-15 北京大学 A kind of vehicle-mounted three-dimension measuring system device and application thereof
CN104517270A (en) * 2014-12-25 2015-04-15 深圳市一体太赫兹科技有限公司 Terahertz image processing method and system
CN107850449A (en) * 2015-08-03 2018-03-27 通腾全球信息公司 Method and system for generating and using locating reference datum
CN105204015A (en) * 2015-09-14 2015-12-30 上海无线电设备研究所 Control display system and method for laser active imaging system
CN105204015B (en) * 2015-09-14 2018-07-10 上海无线电设备研究所 A kind of control display system and its method for Laser Active Imaging System Used
CN105225269A (en) * 2015-09-22 2016-01-06 浙江大学 Based on the object modelling system of motion
CN105225269B (en) * 2015-09-22 2018-08-17 浙江大学 Object modelling system based on motion
CN108603933A (en) * 2016-01-12 2018-09-28 三菱电机株式会社 The system and method exported for merging the sensor with different resolution
CN107204037A (en) * 2016-03-17 2017-09-26 中国科学院光电研究院 3-dimensional image generation method based on main passive 3-D imaging system
WO2017197617A1 (en) * 2016-05-19 2017-11-23 深圳市速腾聚创科技有限公司 Movable three-dimensional laser scanning system and movable three-dimensional laser scanning method
CN105928457A (en) * 2016-06-21 2016-09-07 大连理工大学 Omnidirectional three-dimensional laser color scanning system and method thereof
CN106043169A (en) * 2016-07-01 2016-10-26 百度在线网络技术(北京)有限公司 Environment perception device and information acquisition method applicable to environment perception device
CN106384362B (en) * 2016-10-13 2019-08-16 河南龙璟科技有限公司 A kind of control system of spatial digitizer
CN106384362A (en) * 2016-10-13 2017-02-08 河南龙璟科技有限公司 Control system of three-dimensional scanner
CN106842187A (en) * 2016-12-12 2017-06-13 西南石油大学 Positioner and its method are merged in a kind of phase-array scanning with Computer Vision
CN109212554B (en) * 2017-07-03 2024-05-10 百度在线网络技术(北京)有限公司 Vehicle-mounted information acquisition system and control method and device thereof
CN109212554A (en) * 2017-07-03 2019-01-15 百度在线网络技术(北京)有限公司 On-vehicle information acquisition system and its control method and device
CN107462881A (en) * 2017-07-21 2017-12-12 北京航空航天大学 A kind of laser range sensor scaling method
CN109683144A (en) * 2017-10-19 2019-04-26 通用汽车环球科技运作有限责任公司 The three-dimensional alignment of radar sensor and camera sensor
CN107909029A (en) * 2017-11-14 2018-04-13 福州瑞芯微电子股份有限公司 A kind of real scene virtualization acquisition method and circuit
CN108089199A (en) * 2017-12-26 2018-05-29 深圳慎始科技有限公司 A kind of semisolid three-dimensional colour imaging device
CN108182428A (en) * 2018-01-31 2018-06-19 福州大学 The method that front truck state recognition and vehicle follow
CN108415034A (en) * 2018-04-27 2018-08-17 绵阳天眼激光科技有限公司 A kind of laser radar real-time imaging devices
CN108802759A (en) * 2018-06-07 2018-11-13 北京大学 The nearly sensing system of movable type towards plant phenotype and data capture method
CN109870118A (en) * 2018-11-07 2019-06-11 南京林业大学 A kind of point cloud acquisition method of Oriented Green plant temporal model
CN109870118B (en) * 2018-11-07 2020-09-11 南京林业大学 Point cloud collection method for green plant time sequence model
CN110162089A (en) * 2019-05-30 2019-08-23 北京三快在线科技有限公司 A kind of unpiloted emulation mode and device
WO2021016891A1 (en) * 2019-07-30 2021-02-04 深圳市大疆创新科技有限公司 Method and apparatus for processing point cloud
CN111630520A (en) * 2019-07-30 2020-09-04 深圳市大疆创新科技有限公司 Method and device for processing point cloud
CN110766170A (en) * 2019-09-05 2020-02-07 国网江苏省电力有限公司 Image processing-based multi-sensor fusion and personnel positioning method
CN110766170B (en) * 2019-09-05 2022-09-20 国网江苏省电力有限公司 Image processing-based multi-sensor fusion and personnel positioning method
CN110764070A (en) * 2019-10-29 2020-02-07 北科天绘(合肥)激光技术有限公司 Data real-time fusion processing method and device based on three-dimensional data and image data
CN110865367A (en) * 2019-11-30 2020-03-06 山西禾源科技股份有限公司 Intelligent fusion method for radar video data
CN110865367B (en) * 2019-11-30 2023-05-05 山西禾源科技股份有限公司 Intelligent radar video data fusion method
CN111260781B (en) * 2020-01-15 2024-04-19 北京云迹科技股份有限公司 Method and device for generating image information and electronic equipment
CN111260781A (en) * 2020-01-15 2020-06-09 北京云迹科技有限公司 Method and device for generating image information and electronic equipment
CN112286231A (en) * 2020-06-20 2021-01-29 芜湖易来达雷达科技有限公司 Civil millimeter wave radar multi-antenna measurement and control system based on three-dimensional space scanning
CN114643599A (en) * 2020-12-18 2022-06-21 沈阳新松机器人自动化股份有限公司 Three-dimensional machine vision system and method based on point laser and area-array camera
CN113129590A (en) * 2021-04-12 2021-07-16 武汉理工大学 Traffic facility information intelligent analysis method based on vehicle-mounted radar and graphic measurement
CN113608234A (en) * 2021-07-30 2021-11-05 复旦大学 City data acquisition system
CN117029699A (en) * 2023-09-28 2023-11-10 东莞市兆丰精密仪器有限公司 Line laser measuring method, device and system and computer readable storage medium
CN117029699B (en) * 2023-09-28 2024-04-26 东莞市兆丰精密仪器有限公司 Line laser measuring method, device and system and computer readable storage medium
CN117492026A (en) * 2023-12-29 2024-02-02 天津华铁科为科技有限公司 Railway wagon loading state detection method and system combined with laser radar scanning
CN117492026B (en) * 2023-12-29 2024-03-15 天津华铁科为科技有限公司 Railway wagon loading state detection method and system combined with laser radar scanning

Similar Documents

Publication Publication Date Title
CN101825442A (en) Mobile platform-based color laser point cloud imaging system
CN109458928B (en) Laser line scanning 3D detection method and system based on scanning galvanometer and event camera
CN109029284B (en) A kind of three-dimensional laser scanner based on geometrical constraint and camera calibration method
CN111473739B (en) Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area
CN103759671B (en) A kind of dental model three-dimensional surface data non-contact scanning method
Fröhlich et al. Terrestrial laser scanning–new perspectives in 3D surveying
CN107016667B (en) A kind of device obtaining large parts three-dimensional point cloud using binocular vision
CN102927908B (en) Robot eye-on-hand system structured light plane parameter calibration device and method
CN105157566B (en) The method of 3 D stereo colour point clouds scanning
CN110223379A (en) Three-dimensional point cloud method for reconstructing based on laser radar
CN106767913B (en) Compound eye system calibration device and calibration method based on single LED luminous point and two-dimensional rotary table
CN1250942C (en) Construction optical visual sense transducer calibration method based on plane targets
CN108594245A (en) A kind of object movement monitoring system and method
CN101403606B (en) Large visual field dual-shaft measuring apparatus based on line-structured light
CN104346829A (en) Three-dimensional color reconstruction system and method based on PMD (photonic mixer device) cameras and photographing head
CN102506711B (en) Line laser vision three-dimensional rotate scanning method
CN101493526A (en) Lunar vehicle high speed three-dimensional laser imaging radar system and imaging method
CN108614277A (en) Double excitation single camera three-dimensional imaging scan table and scanning, imaging method
CN201293837Y (en) Moonmobile high speed three-dimensional laser imaging radar system
CN107560547A (en) A kind of scanning system and scan method
CN104976968A (en) Three-dimensional geometrical measurement method and three-dimensional geometrical measurement system based on LED tag tracking
CN113205603A (en) Three-dimensional point cloud splicing reconstruction method based on rotating platform
CN111062992B (en) Dual-view-angle line laser scanning three-dimensional imaging device and method
Kumar et al. An optical triangulation method for non-contact profile measurement
CN114279325A (en) System and method for calibrating spatial position relationship of vision measurement module measurement coordinate system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20100908