WO2023139946A1 - Distance measuring device and distance measuring method - Google Patents

Distance measuring device and distance measuring method

Info

Publication number
WO2023139946A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
target object
distance
distance measuring
measuring device
Application number
PCT/JP2022/044810
Other languages
French (fr)
Japanese (ja)
Inventor
光晴 大木
康平 原田
豊 三富
Original Assignee
ソニーセミコンダクタソリューションズ株式会社 (Sony Semiconductor Solutions Corporation)
Application filed by Sony Semiconductor Solutions Corporation (ソニーセミコンダクタソリューションズ株式会社)
Publication of WO2023139946A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497 Means for monitoring or calibrating

Definitions

  • the present disclosure relates to a ranging device and a ranging method.
  • Conventionally, distance measuring devices based on the ToF (Time-of-Flight) method are known (see Patent Document 1, for example).
  • Hereinafter, such a ToF-based distance measuring device will be referred to as a "ToF sensor" as appropriate.
  • the ToF sensor emits linear light.
  • the irradiated light is reflected by the target object for distance measurement and returns to the ToF sensor.
  • the ToF sensor measures the distance to the target object by receiving the returned light and measuring the time from irradiation of the light to reception of the light.
  • Specifically, the ToF sensor has a light emitting part consisting of a light source and a DOE (Diffractive Optical Element), a lens, and an image sensor.
  • the lens collects the light that has been reflected by the target object and returned from the linearly irradiated light. Some pixels of the image sensor then receive the light condensed by the lens.
  • the distance measured by the ToF sensor is the distance from the ToF sensor to the target object. Also, the direction of the target object can be known from the position of the pixel received by the image sensor.
  • Since the lens has distortion, it is difficult to accurately determine the direction of the target object from the position of the pixel that received the light on the image sensor.
  • For lens distortion, for example, Brown's lens distortion model can be used (see, for example, Non-Patent Document 1 and Non-Patent Document 2).
  • Duane Brown, "Close-Range Camera Calibration", Photogrammetric Engineering, vol. 37, no. 8, pp. 855-866, 1971. John G. Fryer, Duane Brown, "Lens Distortion for Close-Range Photogrammetry", Photogrammetric Engineering and Remote Sensing, vol. 52, no. 1, pp. 51-58, 1986.
  • the conventional technology has room for further improvement in accurately measuring the three-dimensional position of the target object.
  • the lens distortion model is just a model, so it does not perfectly represent the actual lens. Therefore, even if the lens distortion model is used, it is still difficult to accurately determine the direction of the target object.
  • the 3D position of the target object measured by the conventional ToF sensor contains errors.
  • the present disclosure proposes a ranging device and ranging method capable of accurately measuring the three-dimensional position of a target object.
  • A distance measuring device according to one embodiment of the present disclosure includes: a light emitting unit that irradiates a target object with linear light; a storage unit that stores an irradiation direction of the linear light measured in advance; and a calculation unit that measures a distance to the target object based on the linear light and reflected light of the linear light reflected by the target object, and calculates a three-dimensional position of the target object based on the distance and the irradiation direction.
  • FIG. 1 is a diagram showing how light is received by an image sensor in a ToF sensor;
  • FIG. 2 is an explanatory diagram of distances measured by a ToF sensor;
  • FIG. 3 is a block diagram showing a configuration example of a distance measuring device according to an embodiment of the present disclosure;
  • FIG. 4 is an explanatory diagram of a distance measuring device according to an embodiment of the present disclosure;
  • FIG. 5 is an explanatory diagram of calibration parameters according to an embodiment of the present disclosure;
  • FIG. 6 is a flowchart showing a processing procedure of ranging processing according to the embodiment of the present disclosure;
  • FIG. 7 is an explanatory diagram of a calibration device according to an embodiment of the present disclosure;
  • FIG. 8 is an explanatory diagram of a calibration method according to an embodiment of the present disclosure;
  • FIG. 9 is a flowchart showing a processing procedure of calibration processing according to the embodiment of the present disclosure;
  • FIG. 10 is a block diagram showing a first modification;
  • FIG. 11 is a block diagram showing a second modification.
  • FIG. 1 is a diagram showing how an image sensor in a ToF sensor receives light.
  • FIG. 2 is an explanatory diagram of distance data measured by the ToF sensor.
  • the ToF sensor has a light emitting part consisting of a light source and DOE, a lens and an image sensor.
  • Light emitted from the light emitting unit is linearly irradiated in a plurality of directions by the DOE.
  • the lens collects the light that has been reflected by the target object and returned from the linearly irradiated light. Some pixels of the image sensor then receive the light condensed by the lens.
  • Fig. 1 schematically shows how light is received by the image sensor. Rectangles in the figure are image sensors.
  • the black circles in the drawing represent pixels that have received light that has been reflected back from the target object.
  • the ToF sensor measures the time when the pixels indicated by the black circles receive light. Then, the distance is calculated from the difference between the "time of light reception” and the "time of light emission” which is the time when light is emitted from the light emitting unit. Note that the example of FIG. 1 shows a case where the number of linearly irradiated lights is six.
  • the distance measured by the ToF sensor is the distance between the ToF sensor and the target object.
  • the direction of the target object can be known from the position of the pixel received by the image sensor. Specifically, the direction of the target object is (u, v, F), where (u, v) is the pixel position where light is received by the image sensor and F is the focal length of the lens.
  • the ToF sensor outputs the distance measurement result in an orthogonal coordinate system (X, Y, Z) with the ToF sensor as the origin. Therefore, the ToF sensor calculates and outputs the position (x, y, z) in the orthogonal coordinate system from the calculated "distance between the ToF sensor and the target object” and "target object direction (u, v, F)" (see, for example, Patent Document 1).
  • Both the distance measured by the ToF sensor and the value of the Z coordinate of the position in the orthogonal coordinate system are generally called "depth", so it is ambiguous which of the two "depth" represents.
  • To eliminate this ambiguity, the distance measured by the ToF sensor is referred to herein as "radial_depth".
  • radial_depth is the distance from the origin of the aforementioned Cartesian coordinate system (X, Y, Z) to the target object.
  • In order to accurately obtain the position (x, y, z) of the target object, both the radial_depth and the direction of the target object must be accurate.
  • the lens has distortion, and even if the lens distortion model is used, it is just a model, so it is difficult to accurately determine the direction of the target object. Therefore, the three-dimensional position of the target object measured by the conventional ToF sensor contains an error.
  • a distance measuring device having a light emitting unit that irradiates a target object with linear light stores the irradiation direction of the linear light measured in advance, measures the distance to the target object based on the linear light and the reflected light reflected by the target object, and calculates the three-dimensional position of the target object based on the distance and the irradiation direction.
  • FIG. 3 is a block diagram showing a configuration example of the distance measuring device 110 according to the embodiment of the present disclosure.
  • FIG. 4 is an explanatory diagram of the distance measuring device 110 according to the embodiment of the present disclosure.
  • FIG. 5 is an explanatory diagram of calibration parameters according to an embodiment of the present disclosure.
  • FIG. 6 is a flowchart showing a processing procedure of distance measurement processing according to the embodiment of the present disclosure.
  • the distance measuring device 110 has a control section 111, a light emitting section 112, a lens 113, an image sensor 114, a computing section 115, a ROM (Read Only Memory) 116, and an output terminal 117.
  • the light emitting section 112 has a light source 112-1 and a DOE 112-2.
  • the image sensor 114 corresponds to an example of a "light receiving section”.
  • the ROM 116 corresponds to an example of a "storage unit".
  • the control unit 111 controls the entire distance measuring device 110 .
  • the control unit 111 controls light emission from the light source 112-1.
  • the control unit 111 also controls light reception by the image sensor 114 .
  • the control unit 111 controls the calculation in the calculation unit 115 .
  • the main components of the distance measuring device 110 are a light emitting unit 112 including a light source 112-1, and an image sensor 114 that receives reflected light from the light emitted from the light emitting unit 112 reaching a target object 120 and reflected.
  • the image sensor 114 has pixels arranged in a two-dimensional array. Each pixel has a light-receiving element, and the light-receiving element receives the aforementioned reflected light.
  • the light emitted from the light emitting section 112 is emitted in a plurality of directions as linear light 130 by the DOE 112-2.
  • Some pixels of the image sensor 114 receive reflected light 131 that is reflected by the target object 120 from the linear light 130 emitted in a plurality of directions.
  • a lens 113 is provided in front of the image sensor 114 so that the reflected light 131 that has been reflected by the target object 120 and returned can be efficiently focused onto the pixels of the image sensor 114.
  • although the linear light 130 and the reflected light 131 are each indicated by a single arrow in FIG. 3, there are actually a plurality of them.
  • a case where six linear lights 130 are emitted will be taken as an example, as described with reference to FIG. 1 .
  • embodiments of the present disclosure do not limit the number of such linear lights 130 .
  • FIG. 4 shows the main parts (light source 112-1, DOE 112-2, lens 113 and image sensor 114) in rangefinder 110 and target object 120.
  • the light emitted from the light source 112-1 becomes six linear lights 130 by the DOE 112-2. These linear lights 130 are reflected by the target object 120 .
  • the reflected light 131 is condensed by the lens 113 and received by six specific pixels of the image sensor 114 .
  • the six pixel positions are stored in the ROM 116 in advance, and the calculation unit 115 reads information on these six pixel positions ((u, v) described later) from the ROM 116 . Then, the calculation unit 115 acquires the information of the light receiving time at the six pixel positions corresponding to the information from the image sensor 114 .
  • the ROM 116 stores calibrated calibration parameters for each of the six linear lights 130 .
  • “Linear light number” is an identifier for the linear light 130 . Since this is an example in which there are six linear lights 130, they are numbered 1 to 6 here for convenience.
  • a and b are parameters indicating respective directions of the linear light 130 .
  • u and v are pixel positions on the image sensor 114 onto which the linear lights 130 are projected (hereinafter referred to as “spots" as appropriate).
  • the directions of the linear lights 130 emitted from the DOE 112-2 are stored in advance as parameters a and b.
  • the parameters a and b are set through calibration described later so as to indicate the correct direction of each linear light 130 .
  • Distance measuring device 110 obtains radial_depth in six directions from the difference between the light emission time when light is emitted from light emitting unit 112 and the light reception time when pixels of image sensor 114 receive light. This calculation is performed by the calculation unit 115 .
  • the calculation unit 115 reads parameters a and b indicating the direction of the linear light 130 from the ROM 116, and uses the aforementioned radial_depth to calculate (x, y, z) that satisfies the following equation (1).
  • the subscript i in equation (1) is the number of each linear light 130, and is 1 to 6 in the embodiment of the present disclosure.
  • the distance measuring device 110 outputs the calculated (x, y, z) from the output terminal 117. This is described again below using a flowchart.
  • due to parallax, the projected position (u_i, v_i) of each spot read from the ROM 116 and the actual projected position on the image sensor 114 may differ slightly. Therefore, the pixels read out in step S12 should include not only the position (u_i, v_i) but also the surrounding pixels. Then, the pixel with the largest amount of received light may be selected from the group of pixels at the position (u_i, v_i) and its surrounding positions, and radial_depth may be calculated from the data at that pixel.
  • the distance measuring device 110 including the light emitting unit 112 that irradiates the target object 120 with the linear light 130 stores the irradiation direction of the linear light 130 measured in advance, measures the distance radial_depth to the target object 120 based on the linear light 130 and the reflected light 131 of the linear light 130 reflected by the target object 120, and calculates the three-dimensional position of the target object 120 based on the distance radial_depth and the irradiation direction. Therefore, according to the ranging method according to the embodiment of the present disclosure, the three-dimensional position of the target object 120 can be accurately measured.
  • FIG. 7 is a diagram showing a configuration example of the calibration device 210 according to the embodiment of the present disclosure.
  • FIG. 8 is an explanatory diagram of the calibration method according to the embodiment of the present disclosure.
  • FIG. 9 is a flow chart showing the procedure of the calibration process according to the embodiment of the present disclosure.
  • the calibration device 210 has a distance measuring device 110, a high-pixel camera 140, and a graph paper 150.
  • Graph paper 150 is an example of a "calibration member." Note that FIG. 7 shows only the main parts (light source 112-1, DOE 112-2, lens 113, and image sensor 114) of the distance measuring device 110.
  • the high-pixel camera 140 is a camera with higher pixels than the image sensor 114.
  • the high-pixel camera 140 is provided near the distance measuring device 110 and arranged so as to photograph the graph paper 150 .
  • the graph paper 150 is arranged so that the linear light 130 emitted from the distance measuring device 110 is irradiated onto the graph paper 150.
  • the graph paper 150 is also arranged so that the image sensor 114 can photograph the graph paper 150.
  • the distance from the graph paper 150, after placement, to the DOE 112-2 is z_0.
  • the orthogonal coordinate system (X, Y, Z) of the distance measuring device 110 is defined as follows.
  • the X-axis is parallel to the horizontal axis (Xpaper axis) of the graph paper 150 .
  • the Y-axis is parallel to the vertical axis (Ypaper axis) of the graph paper 150 . Therefore, the Z-axis is an axis perpendicular to the graph paper 150 .
  • the origin of the orthogonal coordinate system (X, Y, Z) is set at a distance of z_0 from the graph paper 150.
  • the origin of the coordinate system (Xpaper, Ypaper) of the graph paper 150 is (0, 0, z_0) in the orthogonal coordinate system of the distance measuring device 110.
  • the distance measuring device 110 performs a distance measuring operation assuming that the graph paper 150 is the target object 120. Specifically, the distance measuring device 110 causes the light emitting unit 112 to emit light, and the image sensor 114 receives the light. At the same time, the high-pixel camera 140 photographs the graph paper 150.
  • the light from the light emitting unit 112 becomes linear light 130 through the DOE 112-2 and is irradiated onto the graph paper 150. Therefore, as shown in FIG. 8, six light spots indicated by black circles appear on the graph paper 150.
  • (h i , k i ) are accurate positions because they are obtained as six positions on the graph paper 150 by analyzing the captured image of the high-pixel camera 140 having higher pixels than the image sensor 114 .
  • the direction of the linear light 130 is ( hi , k i , z 0 ). Since the direction (h i , k i , z 0 ) is defined from the correct position (h i , k i ), it can be said that the direction (h i , k i , z 0 ) itself is also correct.
  • the ROM 116 stores, as the irradiation direction of the linear light 130, the direction of the trajectory of the linear light 130, which is measured through an image of the linear light 130 captured in advance by the high-pixel camera 140.
  • the image sensor 114 is also performing photographing. Therefore, the image captured by the image sensor 114 also includes the graph paper 150 and the six light spots.
  • the projection position for the accurate position (h i , k i ) on the graph paper 150 can be obtained.
  • ideally, in the image captured by the image sensor 114, the projected position of the position (h_i, k_i) on the graph paper 150 should exactly coincide with a high-luminance pixel position.
  • the image sensors used for ToF sensors have low resolution. Accurate positions cannot be determined by analyzing captured images captured by a low-resolution image sensor. In other words, the projected position of the position (h i , k i ) on the graph paper 150 cannot be obtained accurately.
  • the projection position of the position (hi , k i ) on the graph paper 150 is obtained on the image captured by the image sensor 114, and the pixel with the highest luminance is detected in the vicinity of this projection position.
  • the i-th linear light 130 is emitted in the direction (a_i, b_i, 1), and that light is received at the pixel position (u_i, v_i) of the image sensor 114, where a_i = h_i/z_0 and b_i = k_i/z_0.
  • in step S21, the calibration device 210 first performs distance measurement with the distance measurement device 110 and, at the same time, performs photographing with the high-pixel camera 140.
  • distance measurement by the distance measuring device 110 specifically means that the light emitting unit 112 emits light and the image sensor 114 receives light.
  • in step S22, six high-brightness pixel positions are detected from the image captured by the high-pixel camera 140.
  • in step S24, six high-brightness pixel positions are detected from the image captured by the image sensor 114.
  • in this way, the calibration of the distance measuring device 110 is performed, and the calibration parameters a_i, b_i, u_i, and v_i are written into the ROM 116.
  • FIG. 10 is a block diagram showing a first modified example. Also, FIG. 11 is a block diagram showing a second modification.
  • the configuration example in which the image sensor 114 and the arithmetic unit 115 are separated was given, but the image sensor 114 and the arithmetic unit 115 may be configured with a single semiconductor chip.
  • the image sensor 114 and the arithmetic unit 115 may be realized by one semiconductor chip 118.
  • alternatively, the configuration shown in FIG. 11 may be used. That is, as shown in FIG. 11, an APU (Application Processor Unit) 119 may be provided downstream of the distance measuring device 110A.
  • the APU 119 corresponds to an example of a "processing unit".
  • in this case, the processing may be distributed between the distance measuring device 110A and the APU 119; for example, the steps in FIG. 6 are performed as follows.
  • in step S11, data at the position (u_i, v_i) is sent from the ROM 116 to the calculation unit 115.
  • in step S12, data at the position (u_i, v_i) is read from the image sensor 114, and the calculation unit 115 calculates the distance radial_depth_i at that position.
  • in step S13, data on the direction (a_i, b_i) of each spot is sent from the ROM 116 to the APU 119 via the output terminal 117-2.
  • in step S14, the radial_depth_i calculated by the calculation unit 115 is sent to the APU 119 via the output terminal 117-1, so that the APU 119 solves the above equation (1), which is the set of simultaneous equations in x_i, y_i, and z_i.
  • the APU 119 then outputs x_i, y_i, and z_i.
  • the distance measuring devices 110 and 110A may sequentially emit the plurality of linear lights 130 one by one instead of emitting the plurality of linear lights 130 all at once.
  • in that case, the processing procedure shown in FIG. 6 is repeated for each linear light 130, incrementing the subscript i each time one linear light 130 is emitted.
  • in the above embodiment, the calibration member is the graph paper 150, but the calibration member is not limited to this. That is, the calibration member only needs to have a regular geometric pattern that allows positions on it to be accurately determined from a photographed image of the calibration member.
  • each component of each device illustrated is functionally conceptual and does not necessarily need to be physically configured as illustrated.
  • the specific form of distribution and integration of each device is not limited to the one shown in the figure, and all or part of them can be functionally or physically distributed and integrated in arbitrary units according to various loads and usage conditions.
  • as described above, the distance measuring device 110 includes the light emitting unit 112 that irradiates the target object 120 with the linear light 130, the ROM 116 (corresponding to an example of a "storage unit") that stores the previously measured irradiation direction of the linear light 130, and the calculation unit 115 that measures the distance radial_depth to the target object 120 based on the linear light 130 and the reflected light 131 of the linear light 130 reflected by the target object 120, and calculates the three-dimensional position of the target object 120 based on the distance radial_depth and the irradiation direction. Accordingly, the three-dimensional position of the target object 120 can be accurately measured.
  • the present technology can also take the following configuration.
  • (1) A distance measuring device comprising: a light emitting unit that irradiates a target object with linear light; a storage unit that stores an irradiation direction of the linear light measured in advance; and a calculation unit that measures a distance to the target object based on the linear light and reflected light of the linear light reflected by the target object, and calculates a three-dimensional position of the target object based on the distance and the irradiation direction.
  • (2) The distance measuring device according to (1) above, wherein the storage unit stores, as the irradiation direction, the direction of the trajectory of the linear light measured through an image of the linear light previously captured by a camera.
  • (3) The distance measuring device according to (1) or (2) above, further comprising a light-receiving unit having pixels arranged in a two-dimensional array and receiving the reflected light at any of the pixels, wherein the storage unit further stores a correspondence relationship between a pixel position receiving the reflected light and the irradiation direction, and the calculation unit measures the distance based on a light emission time of the linear light and a light reception time of the reflected light at the pixel position.
  • (4) The distance measuring device according to (3) above, wherein the storage unit stores the irradiation direction measured through a captured image captured by a high-pixel camera having a higher pixel count than the light-receiving unit.
  • (5) The distance measuring device according to (4) above, wherein the storage unit stores the pixel position with the highest brightness around the incident position of the reflected light in the light-receiving unit.
  • (6) The distance measuring device according to any one of (2) to (5) above, wherein the storage unit stores, as the irradiation direction, the direction of the trajectory of the linear light measured in advance based on a photographed image of a calibration member having a regular geometric pattern when the linear light is irradiated onto the calibration member.
  • (7) The distance measuring device according to (3), (4), or (5) above, wherein the light-receiving unit and the calculation unit are provided on one semiconductor chip.
  • (8) A distance measuring method performed by a distance measuring device having a light emitting unit that irradiates a target object with linear light, the method including: storing an irradiation direction of the linear light measured in advance; measuring a distance to the target object based on the linear light and reflected light of the linear light reflected by the target object; and calculating a three-dimensional position of the target object based on the distance and the irradiation direction.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

A distance measuring device (110, 110A) comprises: a light emitting unit (112) for emitting linear light (130) onto a target object (120); a ROM (116) (corresponding to an example of a "storage unit") for storing an emission direction of the linear light (130), measured in advance; and a calculating unit (115) for measuring a distance (radial_depth) to the target object (120) on the basis of reflected light (131) obtained by the linear light (130) being reflected from the target object (120), and the linear light (130), and calculating a position, in three dimensions, of the target object (120) on the basis of the distance (radial_depth) and the emission direction.

Description

Ranging device and ranging method
 The present disclosure relates to a ranging device and a ranging method.
 Conventionally, distance measuring devices based on the ToF (Time-of-Flight) method are known (see Patent Document 1, for example). Hereinafter, such a ToF-based distance measuring device will be referred to as a "ToF sensor" as appropriate.
 The ToF sensor emits linear light. The irradiated light is reflected by the target object for distance measurement and returns to the ToF sensor. The ToF sensor measures the distance to the target object by receiving the returned light and measuring the time from irradiation of the light to reception of the light.
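 For reference, the relation underlying this time-of-flight measurement can be written as follows, where c is the speed of light and Δt is the time from light emission to light reception (this is the general round-trip ToF relation, added here for clarity rather than quoted from the specification):

```latex
% The emitted light travels to the target and back, so the one-way
% distance is half of the round-trip path covered during \Delta t.
d = \frac{c \, \Delta t}{2}, \qquad \Delta t = t_{\mathrm{reception}} - t_{\mathrm{emission}}
```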
 Specifically, the ToF sensor has a light emitting part consisting of a light source and a DOE (Diffractive Optical Element), a lens, and an image sensor. Light emitted from the light emitting unit is linearly irradiated in a plurality of directions by the DOE. The lens collects the light that has been linearly irradiated, reflected by the target object, and returned. Some pixels of the image sensor then receive the light condensed by the lens.
 The distance measured by the ToF sensor is the distance from the ToF sensor to the target object. Also, the direction of the target object can be known from the position of the pixel that received the light on the image sensor.
 However, since the lens has distortion, it is difficult to accurately determine the direction of the target object from the position of the pixel that received the light. For lens distortion, for example, Brown's lens distortion model can be used (see, for example, Non-Patent Document 1 and Non-Patent Document 2).
 Patent Document 1: JP 2020-153865 A
 However, the conventional technology has room for further improvement in accurately measuring the three-dimensional position of the target object.
 For example, the lens distortion model is just a model, so it does not perfectly represent the actual lens. Therefore, even if the lens distortion model is used, it is still difficult to accurately determine the direction of the target object.
 As long as it is difficult to accurately determine the direction of the target object, the three-dimensional position of the target object measured by the conventional ToF sensor contains errors.
 Therefore, the present disclosure proposes a ranging device and a ranging method capable of accurately measuring the three-dimensional position of a target object.
 In order to solve the above problems, a distance measuring device according to one embodiment of the present disclosure includes: a light emitting unit that irradiates a target object with linear light; a storage unit that stores an irradiation direction of the linear light measured in advance; and a calculation unit that measures a distance to the target object based on the linear light and reflected light of the linear light reflected by the target object, and calculates a three-dimensional position of the target object based on the distance and the irradiation direction.
 FIG. 1 is a diagram showing how light is received by an image sensor in a ToF sensor. FIG. 2 is an explanatory diagram of the distance measured by the ToF sensor. FIG. 3 is a block diagram showing a configuration example of a distance measuring device according to an embodiment of the present disclosure. FIG. 4 is an explanatory diagram of the distance measuring device according to the embodiment of the present disclosure. FIG. 5 is an explanatory diagram of calibration parameters according to the embodiment of the present disclosure. FIG. 6 is a flowchart showing a processing procedure of distance measurement processing according to the embodiment of the present disclosure. FIG. 7 is an explanatory diagram of a calibration device according to the embodiment of the present disclosure. FIG. 8 is an explanatory diagram of a calibration method according to the embodiment of the present disclosure. FIG. 9 is a flowchart showing a processing procedure of calibration processing according to the embodiment of the present disclosure. FIG. 10 is a block diagram showing a first modification. FIG. 11 is a block diagram showing a second modification.
 Below, embodiments of the present disclosure will be described in detail based on the drawings. Note that, in each of the following embodiments, the same parts are denoted by the same reference numerals, and redundant explanations are omitted.
 Also, the present disclosure will be described according to the order of items shown below.
  1. Overview
  2. Configuration of the distance measuring device
  3. Calibration method
  4. Modifications
   4-1. First modification and second modification
   4-2. Third modification
   4-3. Other modifications
  5. Conclusion
<<1. Overview>>
 First, an overview of the distance measurement method according to the embodiment of the present disclosure will be described. Prior to this, the problems of the ToF sensor according to the conventional technology will be described more specifically with reference to FIGS. 1 and 2. FIG. 1 is a diagram showing how an image sensor in a ToF sensor receives light. FIG. 2 is an explanatory diagram of distance data measured by the ToF sensor.
 As already mentioned, the ToF sensor has a light emitting part consisting of a light source and a DOE, a lens, and an image sensor. Light emitted from the light emitting unit is linearly irradiated in a plurality of directions by the DOE. The lens collects the light that has been linearly irradiated, reflected by the target object, and returned. Some pixels of the image sensor then receive the light condensed by the lens.
 Fig. 1 schematically shows how light is received by the image sensor. The rectangle in the figure is the image sensor. The black circles in the drawing represent pixels that have received light that has been reflected back from the target object. The ToF sensor measures the time when the pixels indicated by the black circles receive light. Then, the distance is calculated from the difference between the "time of light reception" and the "time of light emission", which is the time when light is emitted from the light emitting unit. Note that the example of FIG. 1 shows a case where the number of linearly irradiated lights is six.
 The distance measured by the ToF sensor is the distance between the ToF sensor and the target object. Also, the direction of the target object can be known from the position of the pixel that received the light on the image sensor. Specifically, the direction of the target object is (u, v, F), where (u, v) is the pixel position where light is received by the image sensor and F is the focal length of the lens.
 Normally, the ToF sensor outputs the distance measurement result in an orthogonal coordinate system (X, Y, Z) with the ToF sensor as the origin. Therefore, the ToF sensor calculates and outputs the position (x, y, z) in the orthogonal coordinate system from the calculated "distance between the ToF sensor and the target object" and the "direction of the target object (u, v, F)" (see, for example, Patent Document 1).
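 As a rough illustration of this conventional conversion, the sketch below scales the pixel-based direction vector to the measured distance (a minimal sketch, not code from the specification; the function name and the treatment of the measured distance as the radial distance along the ray through pixel (u, v) are our assumptions):

```python
import math

def conventional_tof_position(distance, u, v, F):
    """Convert a measured radial distance and a pixel-based direction (u, v, F)
    into Cartesian coordinates (x, y, z) with the ToF sensor at the origin."""
    norm = math.sqrt(u * u + v * v + F * F)   # length of the direction vector
    scale = distance / norm                   # stretch the unit direction to the target
    return (u * scale, v * scale, F * scale)

# Example: a point seen at pixel offset (u, v) = (120, -40) with focal length F = 800
# (pixel units) at a measured radial distance of 2.5 m.
print(conventional_tof_position(2.5, 120.0, -40.0, 800.0))
```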
 Note that both the distance measured by the ToF sensor (the distance from the ToF sensor to the target object) and the value of the Z coordinate of the position in the orthogonal coordinate system are generally called "depth". Therefore, it is ambiguous whether "depth" represents the distance measured by the ToF sensor or the value of the Z coordinate of the position in the orthogonal coordinate system. In the embodiment of the present disclosure, to eliminate this ambiguity, the distance measured by the ToF sensor will be referred to as "radial_depth".
 Such "radial_depth" is shown in FIG. 2. As shown in the figure, radial_depth is the distance from the origin of the aforementioned orthogonal coordinate system (X, Y, Z) to the target object. Here, in order to accurately obtain the position (x, y, z) of the target object, both the radial_depth and the direction of the target object must be accurate.
 However, as already mentioned, the lens has distortion, and even if the lens distortion model is used, it is just a model, so it is difficult to accurately determine the direction of the target object. Therefore, the three-dimensional position of the target object measured by the conventional ToF sensor contains an error.
 Therefore, in the distance measuring method according to the embodiment of the present disclosure, a distance measuring device having a light emitting unit that irradiates a target object with linear light stores the irradiation direction of the linear light measured in advance, measures the distance to the target object based on the linear light and the reflected light of the linear light reflected by the target object, and calculates the three-dimensional position of the target object based on the distance and the irradiation direction. A more specific description will be given below with reference to FIGS. 3 to 6.
<<2. Configuration of the Distance Measuring Device>>
 FIG. 3 is a block diagram showing a configuration example of the distance measuring device 110 according to the embodiment of the present disclosure. FIG. 4 is an explanatory diagram of the distance measuring device 110 according to the embodiment of the present disclosure. FIG. 5 is an explanatory diagram of calibration parameters according to the embodiment of the present disclosure. FIG. 6 is a flowchart showing a processing procedure of distance measurement processing according to the embodiment of the present disclosure.
 As shown in FIG. 3, the distance measuring device 110 has a control unit 111, a light emitting unit 112, a lens 113, an image sensor 114, a calculation unit 115, a ROM (Read Only Memory) 116, and an output terminal 117. The light emitting unit 112 has a light source 112-1 and a DOE 112-2. The image sensor 114 corresponds to an example of a "light receiving unit". The ROM 116 corresponds to an example of a "storage unit".
 The control unit 111 controls the entire distance measuring device 110. The control unit 111 controls light emission from the light source 112-1. The control unit 111 also controls light reception by the image sensor 114. The control unit 111 also controls the calculation in the calculation unit 115.
 The main components of the distance measuring device 110 are the light emitting unit 112 including the light source 112-1, and the image sensor 114 that receives the reflected light produced when the light emitted from the light emitting unit 112 reaches the target object 120 and is reflected. The image sensor 114 has pixels arranged in a two-dimensional array. Each pixel has a light-receiving element, and the light-receiving element receives the aforementioned reflected light.
 In more detail, this is as follows. As shown in FIG. 3, the light emitted from the light emitting unit 112 is emitted in a plurality of directions as the linear light 130 by the DOE 112-2. Some pixels of the image sensor 114 then receive the reflected light 131 produced when the linear light 130 emitted in a plurality of directions is reflected by the target object 120.
 That is, the light emitted from the light emitting unit 112 reaches certain specific pixels of the image sensor 114. This situation is as already shown in FIG. 1. Note that a lens 113 is provided in front of the image sensor 114 so that the reflected light 131 that has been reflected by the target object 120 and returned can be efficiently focused onto the pixels of the image sensor 114.
 Although the linear light 130 and the reflected light 131 are each indicated by a single arrow in FIG. 3, there are actually a plurality of them. Hereinafter, a case where six linear lights 130 are emitted will be taken as an example, as described with reference to FIG. 1. Of course, the embodiment of the present disclosure does not limit the number of such linear lights 130.
 Next, the operation of the distance measuring device 110 will be described using FIG. 4. Note that FIG. 4 shows the main parts (light source 112-1, DOE 112-2, lens 113, and image sensor 114) in the distance measuring device 110 and the target object 120.
 As shown in FIG. 4, the light emitted from the light source 112-1 becomes six linear lights 130 by the DOE 112-2. These linear lights 130 are reflected by the target object 120. The reflected light 131 is condensed by the lens 113 and received by six specific pixels of the image sensor 114.
 The six pixel positions are stored in the ROM 116 in advance, and the calculation unit 115 reads information on these six pixel positions ((u, v) described later) from the ROM 116. Then, the calculation unit 115 acquires from the image sensor 114 the information on the light reception time at the six pixel positions corresponding to that information.
 Here, the ROM 116 will be explained. As shown in FIG. 5, the ROM 116 stores calibrated calibration parameters for each of the six linear lights 130. "Linear light number" is an identifier for the linear light 130. Since this is an example in which there are six linear lights 130, they are numbered 1 to 6 here for convenience.
 a and b are parameters indicating the respective directions of the linear lights 130. Also, u and v are the pixel positions on the image sensor 114 onto which the linear lights 130 are projected (hereinafter referred to as "spots" as appropriate).
 To describe the parameters a and b in detail, when a and b are given, the three-dimensional direction is expressed as (a, b, 1). That is, it is the direction (x, y, z) that satisfies x = a × z and y = b × z.
 One of the features of the embodiment of the present disclosure is that the directions of the linear lights 130 emitted from the DOE 112-2 are stored in advance as the parameters a and b. The parameters a and b are set, through calibration described later, so as to indicate the accurate direction of each linear light 130.
 Returning to the description of FIG. 4, the distance measuring device 110 then obtains radial_depth in the six directions from the difference between the light emission time when light is emitted from the light emitting unit 112 and the light reception time when the pixels of the image sensor 114 receive the light. This calculation is performed by the calculation unit 115.
 Furthermore, the calculation unit 115 reads the parameters a and b indicating the direction of the linear light 130 from the ROM 116, and uses the aforementioned radial_depth to calculate (x, y, z) that satisfies the following equation (1). Note that the subscript i in equation (1) is the number of each linear light 130, and is 1 to 6 in the embodiment of the present disclosure.
 Equation (1):
  x_i = a_i × z_i
  y_i = b_i × z_i
  x_i^2 + y_i^2 + z_i^2 = (radial_depth_i)^2
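 For reference, substituting the first two relations of equation (1) into the third gives the closed-form solution below (our derivation from the relations above, not text from the original specification; the positive root is taken since the target lies in front of the sensor):

```latex
% Solving equation (1): with x_i = a_i z_i and y_i = b_i z_i,
% the distance condition becomes z_i \sqrt{a_i^2 + b_i^2 + 1} = radial\_depth_i.
z_i = \frac{\mathrm{radial\_depth}_i}{\sqrt{a_i^{2} + b_i^{2} + 1}}, \qquad
x_i = a_i \, z_i, \qquad
y_i = b_i \, z_i
```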
 Then, the distance measuring device 110 outputs the calculated (x, y, z) from the output terminal 117. This will be described again below using a flowchart.
 As shown in FIG. 6, in step S11, the distance measuring device 110 first reads the projected positions (u_i, v_i) of the spots from the ROM 116 (i = 1 to 6).
 Subsequently, in step S12, the distance measuring device 110 reads out the data at the position (u_i, v_i) from the image sensor 114, and the calculation unit 115 calculates the distance (radial_depth_i) at that position (i = 1 to 6). Note that the calculation of radial_depth is an existing, well-known technique, so its description is omitted here.
 Then, in step S13, the distance measuring device 110 reads the direction (a_i, b_i) of each spot from the ROM 116 (i = 1 to 6).
 Then, in step S14, the distance measuring device 110 uses radial_depth_i obtained in step S12 and (a_i, b_i) obtained in step S13 to solve the above equation (1), which is a set of simultaneous equations in x_i, y_i, and z_i (i = 1 to 6).
 Then, in step S15, the distance measuring device 110 outputs x_i, y_i, and z_i from the output terminal 117 (i = 1 to 6). The processing then ends.
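 The processing of steps S11 to S15 can be sketched as follows (a minimal illustration under our own naming; the ToF timing readout is abstracted behind a hypothetical read_radial_depth helper, and the neighborhood search described in the following paragraph is included as strongest_pixel_near):

```python
import math

def strongest_pixel_near(frame, u, v, radius=1):
    """Return the pixel coordinates with the largest received light amount in a
    small window around (u, v), as suggested below for handling parallax."""
    h, w = len(frame), len(frame[0])
    candidates = [(uu, vv)
                  for uu in range(max(0, u - radius), min(w, u + radius + 1))
                  for vv in range(max(0, v - radius), min(h, v + radius + 1))]
    return max(candidates, key=lambda p: frame[p[1]][p[0]])

def measure_positions(spots, read_radial_depth, amplitude_frame):
    """spots: list of calibration entries {'a': a_i, 'b': b_i, 'u': u_i, 'v': v_i}.
    read_radial_depth(u, v): hypothetical readout returning radial_depth at a pixel.
    amplitude_frame: 2D array of received light amounts (rows indexed by v)."""
    results = []
    for s in spots:                                  # steps S11/S13: (u_i, v_i), (a_i, b_i) from ROM
        u, v = strongest_pixel_near(amplitude_frame, s['u'], s['v'])
        rd = read_radial_depth(u, v)                 # step S12: radial_depth_i at that pixel
        z = rd / math.sqrt(s['a'] ** 2 + s['b'] ** 2 + 1.0)   # step S14: solve equation (1)
        results.append((s['a'] * z, s['b'] * z, z))  # (x_i, y_i, z_i)
    return results                                   # step S15: output
```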
 Note that, due to parallax, the projected position (u_i, v_i) of each spot read from the ROM 116 and the actual projected position on the image sensor 114 may differ slightly. Therefore, in step S12, data should be read not only from the position (u_i, v_i) but also from the surrounding pixels. Then, the pixel with the largest amount of received light may be selected from the group of pixels at the position (u_i, v_i) and its surrounding positions, and radial_depth may be calculated from the data at that pixel.
 As described above, in the distance measuring method according to the embodiment of the present disclosure, the distance measuring device 110 including the light emitting unit 112 that irradiates the target object 120 with the linear light 130 stores the irradiation direction of the linear light 130 measured in advance, measures the distance radial_depth to the target object 120 based on the linear light 130 and the reflected light 131 of the linear light 130 reflected by the target object 120, and calculates the three-dimensional position of the target object 120 based on the distance radial_depth and the irradiation direction. Therefore, according to the distance measuring method of the embodiment of the present disclosure, the three-dimensional position of the target object 120 can be accurately measured.
<<3. Calibration Method>>
 By the way, the accuracy of the parameters a and b stored in advance in the ROM 116 is important in executing the distance measuring method according to the embodiment of the present disclosure described above. Next, a calibration method according to the embodiment of the present disclosure for ensuring the accuracy of the parameters a and b will be described.
 FIG. 7 is a diagram showing a configuration example of the calibration device 210 according to the embodiment of the present disclosure. FIG. 8 is an explanatory diagram of the calibration method according to the embodiment of the present disclosure. FIG. 9 is a flowchart showing the procedure of the calibration processing according to the embodiment of the present disclosure.
 As shown in FIG. 7, the calibration device 210 has the distance measuring device 110, a high-pixel camera 140, and a sheet of graph paper 150. The graph paper 150 is an example of a "calibration member." Note that FIG. 7 shows only the main parts (light source 112-1, DOE 112-2, lens 113, and image sensor 114) of the distance measuring device 110.
 The high-pixel camera 140 is a camera with a higher pixel count than the image sensor 114. The high-pixel camera 140 is provided near the distance measuring device 110 and arranged so as to photograph the graph paper 150.
 The graph paper 150 is arranged so that the linear light 130 emitted from the distance measuring device 110 is irradiated onto the graph paper 150. In addition, the graph paper 150 is arranged so that the image sensor 114 can photograph the graph paper 150. Here, the distance from the graph paper 150, after placement, to the DOE 112-2 is z_0.
 In the calibration device 210, the orthogonal coordinate system (X, Y, Z) of the distance measuring device 110 is defined as follows. The X-axis is parallel to the horizontal axis (Xpaper axis) of the graph paper 150. The Y-axis is parallel to the vertical axis (Ypaper axis) of the graph paper 150. Therefore, the Z-axis is an axis perpendicular to the graph paper 150. Also, the origin of the orthogonal coordinate system (X, Y, Z) is set at a distance of z_0 from the graph paper 150. The origin of the coordinate system (Xpaper, Ypaper) of the graph paper 150 is (0, 0, z_0) in the orthogonal coordinate system of the distance measuring device 110.
 With this setup, in the calibration method according to the embodiment of the present disclosure, the distance measuring device 110 performs a distance measuring operation assuming that the graph paper 150 is the target object 120. Specifically, the distance measuring device 110 causes the light emitting unit 112 to emit light, and the image sensor 114 receives the light. At the same time, the high-pixel camera 140 photographs the graph paper 150.
 At this time, the light from the light emitting unit 112 becomes the linear light 130 through the DOE 112-2 and is irradiated onto the graph paper 150. Therefore, as shown in FIG. 8, six light spots indicated by black circles appear on the graph paper 150.
 Since the graph paper 150 is photographed by the high-pixel camera 140 as described above, six high-luminance positions on the graph paper 150 can be obtained by analyzing this photographed image. In this manner, the six high-luminance positions on the graph paper 150 are obtained. These high-luminance positions are denoted (h_i, k_i) (i = 1 to 6).
 The positions (h_i, k_i) are accurate because they are obtained as six positions on the graph paper 150 by analyzing the image captured by the high-pixel camera 140, which has a higher pixel count than the image sensor 114.
 In addition, since the linear light 130 is irradiated at (h_i, k_i) on the plane Z = z_0 in the orthogonal coordinate system (X, Y, Z) of the distance measuring device 110, the direction of the linear light 130 is (h_i, k_i, z_0). Since the direction (h_i, k_i, z_0) is defined from the accurate position (h_i, k_i), the direction (h_i, k_i, z_0) itself is also accurate. In other words, the ROM 116 stores, as the irradiation direction of the linear light 130, the direction of the trajectory of the linear light 130 measured through an image of the linear light 130 captured in advance by the high-pixel camera 140.
 Since the distance measuring device 110 is performing a distance measuring operation, the image sensor 114 is also capturing an image, so the graph paper 150 and the six light spots appear in the image captured by the image sensor 114 as well. By analyzing this image, the projection of each accurate position (h_i, k_i) on the graph paper 150 can be obtained. In the image captured by the image sensor 114, the pixel with the highest luminance in the vicinity of this projected position is then detected. The detected pixel positions are denoted (u_i, v_i) (i = 1 to 6).
 As a supplementary note: ideally, in the image captured by the image sensor 114, the projected position of (h_i, k_i) on the graph paper 150 would coincide exactly with a high-luminance pixel position.
 In practice, however, the image sensors used in ToF sensors have low resolution, and analyzing an image captured by a low-resolution image sensor cannot yield an accurate position. In other words, the projected position of (h_i, k_i) on the graph paper 150 cannot be determined accurately.
 Therefore, in the calibration method according to the embodiment of the present disclosure, the projected position of (h_i, k_i) on the graph paper 150 is first located in the image captured by the image sensor 114, and the pixel with the highest luminance in the vicinity of this projected position is then detected.
 By searching the neighborhood of the projected position in this way, even if the projection of (h_i, k_i) onto the image captured by the image sensor 114 is not accurate, it is possible to determine accurately at which pixel position on the image sensor 114 the light striking (h_i, k_i) is actually received.
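 A minimal sketch of this neighborhood search: given the (possibly imprecise) projection of (h_i, k_i) onto the low-resolution sensor image, the brightest pixel within a small window is taken as (u_i, v_i). The window radius is an assumed tuning parameter chosen large enough to absorb the projection error.

```python
import numpy as np


def brightest_pixel_near(sensor_image, projected_uv, radius=3):
    """Return the (u, v) pixel with the highest intensity inside a
    (2*radius + 1)^2 window centred on the projected position.

    ``projected_uv`` is the rough projection of a spot position
    (h_i, k_i) onto the image sensor; ``radius`` is an assumed search
    radius.
    """
    h, w = sensor_image.shape
    u0, v0 = (int(round(c)) for c in projected_uv)

    # Clamp the search window to the sensor boundaries.
    u_lo, u_hi = max(u0 - radius, 0), min(u0 + radius + 1, w)
    v_lo, v_hi = max(v0 - radius, 0), min(v0 + radius + 1, h)

    window = sensor_image[v_lo:v_hi, u_lo:u_hi]
    dv, du = np.unravel_index(np.argmax(window), window.shape)
    return u_lo + du, v_lo + dv
```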
 In this way, the correspondence can be established that the i-th linear light 130 is emitted in the direction (a_i, b_i, 1) and received at the pixel position (u_i, v_i) of the image sensor 114 (i = 1 to 6), where a_i = h_i / z_0 and b_i = k_i / z_0.
 The procedure is described again with reference to the flowchart. As shown in FIG. 9, in step S21, the calibration device 210 first performs distance measurement with the distance measuring device 110 and, at the same time, captures an image with the high-pixel camera 140. Distance measurement by the distance measuring device 110 here means, specifically, that the light emitting unit 112 emits light and the image sensor 114 receives it.
 In step S22, six high-luminance pixel positions are detected in the image captured by the high-pixel camera 140. Since the graph paper 150 also appears in this image, the positions of the six points on the graph paper 150 are obtained by image analysis. These positions on the graph paper 150 are denoted (h_i, k_i) (i = 1 to 6).
 In step S23, a_i = h_i / z_0 and b_i = k_i / z_0 are computed (i = 1 to 6).
 In step S24, six high-luminance pixel positions are detected in the image captured by the image sensor 114. These pixel positions are denoted (u_i, v_i) (i = 1 to 6).
 In step S25, a_i, b_i, u_i, and v_i are written into the ROM 116 (i = 1 to 6), and the process ends.
 In this way, the distance measuring device 110 is calibrated, and the calibration parameters a_i, b_i, u_i, and v_i are written into the ROM 116.
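 Putting the pieces together, steps S21 to S25 reduce to the following compact sketch. Only the relations a_i = h_i / z_0 and b_i = k_i / z_0 and the stored tuple (a_i, b_i, u_i, v_i) come directly from the description above; the device and camera wrappers, the project_to_sensor and write_rom interfaces, and the reuse of the two helper sketches above are assumptions made for illustration.

```python
def calibrate(distance_device, high_pixel_camera, z0, mm_per_px):
    """Sketch of calibration steps S21-S25.

    ``distance_device`` and ``high_pixel_camera`` are hypothetical
    wrappers for the distance measuring device 110 and the camera 140;
    ``z0`` is the distance z_0 between the device origin and the
    graph paper, in the same units as the paper coordinates.
    """
    # S21: measure with the rangefinder while the camera takes a picture.
    sensor_image = distance_device.measure()      # emits light, reads the sensor
    camera_image = high_pixel_camera.capture()

    # S22: six accurate spot positions (h_i, k_i) on the graph paper.
    paper_positions = detect_spot_positions(camera_image, mm_per_px)

    params = []
    for h_i, k_i in paper_positions:
        # S23: direction of the i-th beam, (a_i, b_i, 1).
        a_i, b_i = h_i / z0, k_i / z0

        # S24: pixel (u_i, v_i) that actually receives the i-th beam.
        projected = distance_device.project_to_sensor(h_i, k_i)  # assumed helper
        u_i, v_i = brightest_pixel_near(sensor_image, projected)

        params.append((a_i, b_i, u_i, v_i))

    # S25: write the calibration parameters into the ROM 116.
    distance_device.write_rom(params)
    return params
```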
<<4. Modifications>>
 Several modifications of the embodiment of the present disclosure described above are possible.
<4-1. First Modification and Second Modification>
 FIG. 10 is a block diagram showing a first modification, and FIG. 11 is a block diagram showing a second modification.
 In the embodiment of the present disclosure described above, the image sensor 114 and the calculation unit 115 are separate, as shown in FIG. 3, but they may instead be implemented in a single semiconductor chip.
 That is, as shown in FIG. 10, the image sensor 114 and the calculation unit 115 may be realized by one semiconductor chip 118.
 Alternatively, the configuration shown in FIG. 11 may be used. That is, as shown in FIG. 11, an APU (Application Processor Unit) 119 may be provided downstream of the distance measuring device 110A. The APU 119 corresponds to an example of a "processing unit".
 Because the computational capacity of the calculation unit 115 in the semiconductor chip 118 is limited, it may be difficult for the calculation unit 115 to perform all of the processing shown in FIG. 6, for example. The processing may therefore be distributed with the APU 119: for example, step S12 in FIG. 6 may be executed by the calculation unit 115 and step S14 by the APU 119.
 This reduces the amount of processing in the calculation unit 115. The details, described with reference to FIG. 6, are as follows.
 First, in step S11, the data for the positions (u_i, v_i) are sent from the ROM 116 to the calculation unit 115. Then, in step S12, the data at the positions (u_i, v_i) are read from the image sensor 114, and the calculation unit 115 computes the distance radial_depth_i at each of those positions.
 Then, in step S13, the data for the direction (a_i, b_i) of each spot are sent from the ROM 116 to the APU 119 via the output terminal 117-2. In step S14, the radial_depth_i computed by the calculation unit 115 is sent to the APU 119 via the output terminal 117-1, and the APU 119 solves the above equation (1), the simultaneous equations in x_i, y_i, and z_i. The APU 119 then outputs x_i, y_i, and z_i.
 With this arrangement, the direction data (a_i, b_i) of each spot need not be sent to the calculation unit 115, and the calculation unit 115 need not perform the computations for x_i, y_i, and z_i. The processing load of the calculation unit 115 is thereby reduced.
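 To make the division of labour concrete, the following sketch separates the two stages. The on-chip stage turns the ToF timing at (u_i, v_i) into radial_depth_i using the textbook direct-ToF relation d = c·Δt/2, which is an assumption of this sketch since the measurement details lie outside this excerpt. The APU stage solves for (x_i, y_i, z_i) assuming that equation (1), which appears earlier in the document and is not reproduced here, states that the point lies on the ray (a_i, b_i, 1) at radial distance radial_depth_i from the origin.

```python
import math

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0


def radial_depth_from_timing(t_emit_s, t_receive_s):
    """On-chip stage (step S12): radial distance of the i-th spot from
    the round-trip time of its beam, using d = c * dt / 2.
    This simplified direct-ToF model is an assumption of this sketch."""
    return SPEED_OF_LIGHT_M_PER_S * (t_receive_s - t_emit_s) / 2.0


def position_from_radial_depth(a_i, b_i, radial_depth_i):
    """APU stage (step S14): solve for (x_i, y_i, z_i), assuming the
    point lies on the ray (a_i, b_i, 1) and satisfies
    x^2 + y^2 + z^2 = radial_depth^2 (assumed form of equation (1))."""
    z_i = radial_depth_i / math.sqrt(a_i ** 2 + b_i ** 2 + 1.0)
    return a_i * z_i, b_i * z_i, z_i
```

 With this split, only radial_depth_i crosses the output terminal 117-1 and only (a_i, b_i) crosses the output terminal 117-2, matching the data paths described above.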
<4-2. Third Modification>
 In addition, instead of emitting the plurality of linear lights 130 all at once, the distance measuring devices 110 and 110A may emit the plurality of linear lights 130 one by one in sequence (see the sketch at the end of this subsection). In this case, the processing procedure shown in FIG. 6, for example, is repeated once for each value of the index i, incrementing i each time one linear light 130 is emitted.
 Furthermore, in the embodiment of the present disclosure described above, the calibration member is the graph paper 150, but the calibration member is not limited to this. That is, any calibration member may be used as long as its position can be measured accurately in a captured image of it, i.e., as long as it has a regular geometric pattern.
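 A minimal sketch of the sequential-emission variant, under the assumption of a hypothetical per-beam measurement interface on the device; the per-spot position computation repeats the assumed form of equation (1) used above.

```python
import math


def measure_spots_sequentially(distance_device, rom_params):
    """Emit the linear beams one at a time (third modification) and
    compute the 3D position of each spot.

    ``rom_params`` holds the calibrated tuples (a_i, b_i, u_i, v_i);
    the per-beam ``measure_radial_depth`` method is an assumed device
    interface, not part of the original description.
    """
    positions = []
    for i, (a_i, b_i, u_i, v_i) in enumerate(rom_params):
        # One beam is emitted per iteration, so only the pixel (u_i, v_i)
        # needs to be read out for this spot.
        d_i = distance_device.measure_radial_depth(beam_index=i, pixel=(u_i, v_i))
        z_i = d_i / math.sqrt(a_i ** 2 + b_i ** 2 + 1.0)  # assumed form of eq. (1)
        positions.append((a_i * z_i, b_i * z_i, z_i))
    return positions
```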
<4-3. Other Modifications>
 Of the processes described in the embodiments of the present disclosure above, all or part of the processes described as being performed automatically can also be performed manually, and conversely, all or part of the processes described as being performed manually can be performed automatically by known methods. In addition, the processing procedures, specific names, and information including various data and parameters shown in the above description and drawings can be changed arbitrarily unless otherwise specified. For example, the various pieces of information shown in each drawing are not limited to the illustrated information.
 Each component of each illustrated device is a functional concept and need not be physically configured as illustrated. That is, the specific form of distribution and integration of each device is not limited to that illustrated, and all or part of each device can be functionally or physically distributed or integrated in arbitrary units according to various loads, usage conditions, and the like.
 The embodiments of the present disclosure described above can also be combined as appropriate as long as the processing contents do not contradict each other. The order of the steps shown in the sequence diagrams or flowcharts of the embodiments can likewise be changed as appropriate.
<<5. Conclusion>>
 As described above, according to an embodiment of the present disclosure, the distance measuring device 110 includes: the light emitting unit 112 that irradiates the target object 120 with the linear light 130; the ROM 116 (corresponding to an example of a "storage unit") that stores the irradiation direction of the linear light 130 measured in advance; and the calculation unit 115 that measures the distance radial_depth to the target object 120 based on the linear light 130 and the reflected light 131 produced when the linear light 130 is reflected by the target object 120, and calculates the three-dimensional position of the target object 120 based on the distance radial_depth and the irradiation direction. Accordingly, the three-dimensional position of the target object 120 can be measured accurately.
 Although the embodiments of the present disclosure have been described above, the technical scope of the present disclosure is not limited to the embodiments as they are, and various modifications are possible without departing from the gist of the present disclosure. Components of different embodiments and modifications may also be combined as appropriate.
 The effects described for each embodiment in this specification are merely examples and are not limiting; other effects may also be obtained.
Note that the present technology can also have the following configurations.
(1)
A distance measuring device comprising:
a light emitting unit that irradiates a target object with linear light;
a storage unit that stores an irradiation direction of the linear light measured in advance; and
a calculation unit that measures a distance to the target object based on the linear light and reflected light produced when the linear light is reflected by the target object, and calculates a three-dimensional position of the target object based on the distance and the irradiation direction.
(2)
The distance measuring device according to (1) above, wherein
the storage unit stores, as the irradiation direction, the direction of the trajectory of the linear light measured through an image of the linear light captured in advance by a camera.
(3)
The distance measuring device according to (1) or (2) above, further comprising
a light receiving unit that has pixels arranged in a two-dimensional array and receives the reflected light at any of the pixels, wherein
the storage unit further stores a correspondence relationship between the pixel position at which the reflected light is received and the irradiation direction, and
the calculation unit measures the distance based on the emission time of the linear light and the reception time of the reflected light at the pixel position.
(4)
The distance measuring device according to (3) above, wherein
the storage unit stores the irradiation direction measured through an image captured by a high-pixel camera having more pixels than the light receiving unit.
(5)
The distance measuring device according to (4) above, wherein
the storage unit stores the pixel position with the highest luminance around the incident position of the reflected light on the light receiving unit.
(6)
The distance measuring device according to any one of (2) to (5) above, wherein
the storage unit stores, as the irradiation direction, the direction of the trajectory of the linear light measured in advance based on an image captured of a calibration member having a regular geometric pattern when the linear light is irradiated onto the calibration member.
(7)
The distance measuring device according to (3), (4), or (5) above, wherein
the light receiving unit and the calculation unit are provided on one semiconductor chip.
(8)
The distance measuring device according to (7) above, further comprising
a processing unit that is provided downstream of the calculation unit and calculates the three-dimensional position of the target object based on the distance calculated by the calculation unit.
(9)
A distance measuring method performed by a distance measuring device including a light emitting unit that irradiates a target object with linear light, the method comprising:
storing an irradiation direction of the linear light measured in advance; and
measuring a distance to the target object based on the linear light and reflected light produced when the linear light is reflected by the target object, and calculating a three-dimensional position of the target object based on the distance and the irradiation direction.
 110, 110A  Distance measuring device
 111  Control unit
 112  Light emitting unit
 112-1  Light source
 112-2  DOE
 113  Lens
 114  Image sensor
 115  Calculation unit
 116  ROM
 117, 117-1, 117-2  Output terminal
 118  Semiconductor chip
 119  APU
 120  Target object
 130  Linear light
 131  Reflected light
 140  High-pixel camera
 210  Calibration device

Claims (9)

  1.  A distance measuring device comprising:
     a light emitting unit that irradiates a target object with linear light;
     a storage unit that stores an irradiation direction of the linear light measured in advance; and
     a calculation unit that measures a distance to the target object based on the linear light and reflected light produced when the linear light is reflected by the target object, and calculates a three-dimensional position of the target object based on the distance and the irradiation direction.
  2.  The distance measuring device according to claim 1, wherein
     the storage unit stores, as the irradiation direction, the direction of the trajectory of the linear light measured through an image of the linear light captured in advance by a camera.
  3.  The distance measuring device according to claim 1, further comprising
     a light receiving unit that has pixels arranged in a two-dimensional array and receives the reflected light at a specific one of the pixels, wherein
     the storage unit further stores a correspondence relationship between the pixel position at which the reflected light is received and the irradiation direction, and
     the calculation unit measures the distance based on the emission time of the linear light and the reception time of the reflected light at the pixel position.
  4.  The distance measuring device according to claim 3, wherein
     the storage unit stores the irradiation direction measured through an image captured by a high-pixel camera having more pixels than the light receiving unit.
  5.  The distance measuring device according to claim 4, wherein
     the storage unit stores the pixel position with the highest luminance around the incident position of the reflected light on the light receiving unit.
  6.  The distance measuring device according to claim 2, wherein
     the storage unit stores, as the irradiation direction, the direction of the trajectory of the linear light measured in advance based on an image captured of a calibration member having a regular geometric pattern when the linear light is irradiated onto the calibration member.
  7.  The distance measuring device according to claim 3, wherein
     the light receiving unit and the calculation unit are provided on one semiconductor chip.
  8.  The distance measuring device according to claim 7, further comprising
     a processing unit that is provided downstream of the calculation unit and calculates the three-dimensional position of the target object based on the distance calculated by the calculation unit.
  9.  A distance measuring method performed by a distance measuring device including a light emitting unit that irradiates a target object with linear light, the method comprising:
     storing an irradiation direction of the linear light measured in advance; and
     measuring a distance to the target object based on the linear light and reflected light produced when the linear light is reflected by the target object, and calculating a three-dimensional position of the target object based on the distance and the irradiation direction.
PCT/JP2022/044810 2022-01-19 2022-12-06 Distance measuring device and distance measuring method WO2023139946A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-006648 2022-01-19
JP2022006648 2022-01-19

Publications (1)

Publication Number Publication Date
WO2023139946A1 true WO2023139946A1 (en) 2023-07-27

Family

ID=87348164

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/044810 WO2023139946A1 (en) 2022-01-19 2022-12-06 Distance measuring device and distance measuring method

Country Status (1)

Country Link
WO (1) WO2023139946A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10239013A (en) * 1997-02-26 1998-09-11 Sefuto Kenkyusho:Kk Position measuring device
JP2007064723A (en) * 2005-08-30 2007-03-15 Hitachi Ltd Image input apparatus and calibration method
US20120050528A1 (en) * 2010-05-31 2012-03-01 University Of North Carolina At Charlotte Dimensional measurement through a combination of photogrammetry and optical scattering
JP2018155649A (en) * 2017-03-17 2018-10-04 京セラ株式会社 Electromagnetic wave detector, program, and electromagnetic wave detection system
JP2019207127A (en) * 2018-05-28 2019-12-05 三菱電機株式会社 Laser calibration device, calibration method therefor, and image input device including laser calibration device
JP2020153865A (en) * 2019-03-20 2020-09-24 株式会社リコー Three-dimensional information acquisition device, information processor, and system
US20200300977A1 (en) * 2019-03-22 2020-09-24 Viavi Solutions Inc. Time of flight-based three-dimensional sensing system
JP2021015113A (en) * 2019-07-11 2021-02-12 ローム株式会社 Three-dimensional sensing system



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22922097

Country of ref document: EP

Kind code of ref document: A1