WO2024004538A1 - Measurement device, irradiation device, and band-pass filter - Google Patents

Measurement device, irradiation device, and band-pass filter

Info

Publication number
WO2024004538A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
light emitting
measuring device
metalens
temperature
Prior art date
Application number
PCT/JP2023/020859
Other languages
French (fr)
Japanese (ja)
Inventor
Kazuya Motohashi (本橋 和也)
Original Assignee
KOITO MANUFACTURING CO., LTD. (株式会社小糸製作所)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2022106333A external-priority patent/JP2024005894A/en
Priority claimed from JP2022106334A external-priority patent/JP2024005895A/en
Priority claimed from JP2022127331A external-priority patent/JP2024024483A/en
Application filed by KOITO MANUFACTURING CO., LTD.
Publication of WO2024004538A1 publication Critical patent/WO2024004538A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00Measuring distances in line of sight; Optical rangefinders
    • G01C3/02Details
    • G01C3/06Use of electric means to obtain final indication
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/10Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/481Constructional features, e.g. arrangements of optical elements

Definitions

  • the present disclosure relates to a measurement device, an irradiation device, and a bandpass filter.
  • Patent Document 1 describes a distance measuring device that measures the distance to a reflective object based on the flight time of light from emitting pulsed light to receiving reflected light.
  • a bandpass filter may be provided in the light receiving optical system.
  • since the wavelength of the light emitted from the light emitting section changes depending on the temperature, it is necessary to widen the passband of the bandpass filter. As a result, there is a risk that noise mixed into the light reception data of the light receiving element will increase and measurement accuracy will deteriorate.
  • when a light emitting section (light source) is constructed by two-dimensionally arranging VCSELs serving as light emitting elements, the light emitting elements are spaced apart due to gaps between emitters. As a result, gaps appear in the light irradiated onto the measurement area, producing regions where measurement cannot be performed.
  • the primary objective of the present disclosure is to suppress the influence of noise.
  • a second objective of the present disclosure is to suppress the occurrence of areas where light cannot be irradiated.
  • One form of the present disclosure for achieving the above-mentioned first object is a measurement device including: a light emitting section that irradiates light with a wavelength corresponding to temperature within a predetermined wavelength band; a filter that allows reflected light of the light irradiated from the light emitting section to pass through; a light receiving sensor having a plurality of light receiving elements; and a dispersion element that is disposed between the filter and the light receiving sensor and disperses the light that has passed through the filter over two or more of the light receiving elements.
  • Another form of the present disclosure for achieving the above first object is a measurement device including: a light emitting section that irradiates light with a wavelength depending on the temperature; a band-pass filter that passes reflected light of the light irradiated from the light emitting section; and a light-receiving sensor that receives the light that has passed through the band-pass filter, wherein the wavelength of light that can pass through the band-pass filter changes depending on the temperature.
  • One form of the present disclosure for achieving the above-mentioned second object is a measuring device including: a light-emitting section having a plurality of light-emitting elements arranged at intervals; and an optical system having a plurality of metalenses with mutually shifted optical axes, the optical system irradiating light from a light emitting element through another metalens into the gaps between the beams from the plurality of light emitting elements irradiated through one metalens.
  • the influence of noise can be suppressed.
  • FIG. 1 is an explanatory diagram of the overall configuration of the measuring device 1.
  • FIG. 2 is a schematic explanatory diagram of the measuring device 1.
  • FIG. 3 is a timing chart for explaining an example of the measurement method.
  • FIG. 4 is an explanatory diagram of the light receiving section 20 of this embodiment.
  • FIG. 5 is an explanatory diagram of the light receiving section 20 of a reference example.
  • FIG. 6 is a correspondence table used in the first measurement method.
  • FIG. 7 is an explanatory diagram of the first measurement method.
  • FIG. 8 is an explanatory diagram of the first measurement method.
  • FIG. 9 is an explanatory diagram of the first measurement method.
  • FIG. 10 is a table used in the second measurement method.
  • FIG. 11 is an explanatory diagram of the second measurement method.
  • FIG. 12 is an explanatory diagram of the correspondence between the light emitting element 121 and the light receiving element 222.
  • FIG. 13 is an explanatory diagram of another correspondence relationship between the light emitting element 121 and the light receiving element 222.
  • FIG. 14 is an explanatory diagram of the overall configuration of the measuring device 1001.
  • FIG. 15 is a schematic explanatory diagram of the measuring device 1001.
  • FIG. 16 is a timing chart for explaining an example of the measurement method.
  • FIG. 17 is an explanatory diagram of the light receiving section 1020.
  • FIG. 18 is an explanatory diagram of the light receiving section 1020.
  • FIG. 19 is a schematic explanatory diagram showing the characteristics of the bandpass filter BPF of this embodiment.
  • FIG. 20 is an enlarged explanatory diagram of the structure of the bandpass filter BPF.
  • FIG. 21 is an enlarged explanatory diagram of the structure of the bandpass filter BPF.
  • FIG. 22 is a graph showing the characteristics of the bandpass filter BPF.
  • FIG. 23 is a graph showing another characteristic of the bandpass filter BPF.
  • FIG. 24 is an explanatory diagram of a modification of the arrangement of the bandpass filter BPF.
  • FIG. 25 is an explanatory diagram of the arrangement of another bandpass filter BPF'.
  • FIG. 26 is an explanatory diagram of the arrangement of another bandpass filter BPF'.
  • FIG. 27 is an explanatory diagram of the arrangement of another bandpass filter BPF'.
  • FIG. 28 is an explanatory diagram of the overall configuration of the measuring device 2001.
  • FIG. 29 is a schematic explanatory diagram of the measuring device 2001.
  • FIG. 30 is a timing chart for explaining an example of the measurement method.
  • FIG. 31 is an explanatory diagram of the arrangement of the light emitting elements 2121 of the light emitting section 2012.
  • FIG. 32 is a reference explanatory diagram of light irradiated onto the measurement area 2050.
  • FIG. 33 is an explanatory diagram of light irradiated onto the measurement area 2050 in this embodiment.
  • FIG. 34 is an explanatory diagram of the lens unit 2015.
  • FIG. 35 is an explanatory diagram of an image (irradiation spot) of a light emitting point of the light emitting element 2121 formed by one metalens 2016.
  • FIG. 36 is an explanatory diagram of an image of a light emitting point (irradiation spot group) of the light emitting element 2121 formed by a plurality of metalens 2016.
  • FIG. 37 is an explanatory diagram of images of light emitting points (a group of irradiation spots) of a plurality of light emitting elements 2121 formed by a plurality of metalens 2016.
  • FIG. 38 is an explanatory diagram of optical conditions for partially overlapping two irradiation spots adjacent in a predetermined direction.
  • FIG. 39 is an explanatory diagram of optical conditions for partially overlapping two irradiation spot groups adjacent in a predetermined direction.
  • FIG. 1 is an explanatory diagram of the overall configuration of the measuring device 1.
  • FIG. 2 is a schematic explanatory diagram of the measuring device 1.
  • each direction is defined as shown in FIG. 2.
  • the Z direction is a direction along the optical axis of the light receiving optical system 24.
  • the object 90 to be measured by the measuring device 1 is located away from the measuring device 1 in the Z direction.
  • the X direction and the Y direction are directions perpendicular to the Z direction.
  • the plurality of light emitting elements 121 constituting the light emitting section 12 are two-dimensionally arranged along the X direction and the Y direction.
  • the plurality of pixels 221 of the light receiving sensor 22 are also two-dimensionally arranged along the X direction and the Y direction.
  • the measuring device 1 is a device that measures the distance to the target object 90.
  • the measuring device 1 is a device having a function of so-called LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging).
  • the measuring device 1 emits measurement light, detects the reflected light reflected from the surface of the object 90, measures the time from emitting the measurement light until receiving the reflected light, and measures the distance using the TOF (time of flight) method.
  • the measuring device 1 includes an irradiation section 10, a light receiving section 20, and a control section 30. Furthermore, the measuring device 1 of this embodiment includes a temperature sensor 41.
  • the irradiation unit 10 is an irradiation device that irradiates measurement light toward the object 90.
  • the irradiation unit 10 irradiates the measurement area 50 (see FIG. 2) with measurement light at a predetermined angle of view.
  • the irradiation section 10 has a light emitting section 12 and a light projection optical system 14.
  • the light emitting unit 12 is a member (light source) that emits light.
  • the light emitting section 12 is composed of a surface emitting laser (VCSEL) array chip.
  • the light emitting unit 12 includes a plurality of light emitting elements 121 (for example, a surface emitting laser; VCSEL), and the plurality of light emitting elements 121 are two-dimensionally arranged along the X direction and the Y direction.
  • the light projection optical system 14 is an optical system that irradiates the measurement area 50 with the light emitted from the light emitting section 12 .
  • the light emitting unit 12 can cause each light emitting element 121 to emit light individually.
  • Each light emitting element 121 of the light emitting unit 12 is associated with a predetermined region of the measurement area 50 via the light projection optical system 14. Light emitted from a certain light emitting element 121 is irradiated onto a corresponding region of the measurement area 50 via the light projection optical system 14.
  • the irradiation section 10 may be configured to emit light from the entire light emitting surface of the light emitting section 12 and irradiate the entire measurement area 50 with the light at once. Note that the wavelength of the light emitted by the light emitting element 121 changes depending on the temperature. This point will be discussed later.
  • the light receiving unit 20 receives reflected light from the target object 90.
  • the light receiving unit 20 receives reflected light from the measurement area 50 (see FIG. 2).
  • the light receiving section 20 includes a light receiving sensor 22 and a light receiving optical system 24.
  • the light receiving sensor 22 has a plurality of pixels 221 arranged two-dimensionally. For example, in the case of a VGA light receiving sensor 22, 480 × 640 pixels 221 are two-dimensionally arranged. Each pixel 221 has a light receiving element 222, and the light receiving element 222 outputs a signal (light reception data) according to the amount of light received.
  • the light receiving optical system 24 is an optical system that causes the light receiving unit 20 to receive reflected light from the measurement area 50.
  • the light receiving optical system 24 forms an image of the measurement area 50 on the light receiving surface of the light receiving sensor 22 .
  • Each pixel 221 of the light receiving sensor 22 is associated with a predetermined region of the measurement area 50 via the light receiving optical system 24.
  • a certain pixel 221 of the light receiving sensor 22 receives light (reflected light and background light) from a corresponding region of the measurement area 50 via the light receiving optical system 24.
  • each pixel 221 of the light receiving sensor 22 is associated with a predetermined light emitting element 121 of the light emitting section 12. Light emitted from a certain light emitting element 121 is received by the corresponding pixel 221 via the light projecting optical system 14 and the light receiving optical system 24. Note that the light receiving sensor 22 and the light receiving optical system 24 will be described later.
  • the control unit 30 controls the measuring device 1.
  • the control unit 30 controls the irradiation unit 10 and controls the light emitted from the irradiation unit 10 . Furthermore, the control unit 30 measures the distance to the object 90 using the TOF method (Time of Flight) based on the output result of the light receiving unit 20.
  • the control unit 30 includes an arithmetic unit and a storage device (not shown).
  • the computing device is, for example, a computing processing device such as a CPU or a GPU. A part of the arithmetic device may be constituted by an analog arithmetic circuit.
  • a storage device is a device that is composed of a main storage device and an auxiliary storage device, and stores programs and data. Various processes for measuring the distance to the target object 90 are executed by the arithmetic device executing the program stored in the storage device.
  • FIG. 1 shows functional blocks for various processes.
  • the control section 30 includes a setting section 32, a timing control section 34, and a distance measuring section 36.
  • the setting unit 32 performs various settings.
  • the timing control section 34 controls the processing timing of each section. For example, the timing control section 34 controls the timing at which light is emitted from the light emitting section 12.
  • the distance measuring section 36 measures the distance to the target object 90.
  • the distance measurement section 36 includes a signal processing section 362, a time detection section 364, and a distance calculation section 366.
  • the signal processing unit 362 processes the output signal (light reception data) of the light reception sensor 22.
  • the time detection unit 364 detects the flight time of light (the time from when the light is irradiated until the reflected light arrives).
  • the distance calculation unit 366 calculates the distance to the target object 90.
  • FIG. 3 is a timing chart for explaining an example of the measurement method.
  • the control unit 30 causes the light emitting unit 12 of the irradiation unit 10 to emit pulsed light at a predetermined period.
  • the upper side of FIG. 3 shows the timing at which the light emitting section 12 emits pulsed light (emission timing).
  • the light emitted from the light emitting section 12 is irradiated onto the measurement area 50 via the light projection optical system 14.
  • the light reflected on the surface of the object 90 within the measurement area 50 is received by the light receiving sensor 22 via the light receiving optical system 24.
  • the pixel 221 of the light receiving sensor 22 receives the pulsed reflected light.
  • the timing at which the pulsed reflected light arrives (arrival timing) is shown.
  • Pixel data S of a certain pixel 221 of the light receiving sensor 22 (light reception data of the light receiving element 222 of a certain pixel 221) is shown.
  • Pixel data S of the light receiving sensor 22 is data indicating the amount of light received by the pixel 221.
  • the control unit 30 may cause all the light emitting elements 121 of the light emitting unit 12 to emit light so that the entire measurement area 50 is irradiated at once, or may cause only some of the light emitting elements 121 (for example, one light emitting element 121) to emit light so that only a predetermined region of the measurement area 50 is irradiated.
  • in the latter case, the control unit 30 acquires the pixel data S of the pixel 221 corresponding to the light emitting element 121 that emitted light.
  • the pixel data S acquired by the control unit 30 will be described later.
  • the distance measuring section 36 (signal processing section 362) of the control section 30 detects the arrival timing of the reflected light based on the pixel data S of each pixel 221. For example, the signal processing unit 362 detects the arrival timing of the reflected light based on the timing of the peak of pixel data of each pixel 221.
  • the distance measuring section 36 (time detecting section 364) detects the time Tf from when the light is irradiated until the reflected light arrives, based on the light emission timing and the light arrival timing. The time Tf corresponds to the time the light travels back and forth between the measuring device 1 and the target object 90. Then, the distance measuring section 36 (distance calculating section 366) calculates the distance L to the target object 90 based on the time Tf.
  • the control unit 30 generates a distance image by calculating the distance to the object 90 for each pixel 221 based on the time Tf detected for each pixel 221 of the light receiving unit 20.
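The TOF calculation described above (detect the arrival timing from the peak of the received-light data, then convert the round-trip time Tf to a distance) can be sketched as follows. This is an illustrative sketch, not the patent's actual implementation; the function names and the simple argmax peak detector are assumptions.

```python
# Illustrative sketch of the TOF measurement described above.
# Function names and the argmax peak detector are assumptions.

C = 299_792_458.0  # speed of light [m/s]

def detect_arrival_time(samples, t0, dt):
    """Arrival timing: time of the peak of the received-light data
    (pixel data S), sampled starting at t0 with interval dt."""
    peak_index = max(range(len(samples)), key=lambda i: samples[i])
    return t0 + peak_index * dt

def distance_from_tof(t_emit, t_arrive):
    """Distance L from the round-trip flight time Tf = t_arrive - t_emit."""
    tf = t_arrive - t_emit
    return C * tf / 2.0
```

For example, an echo peaking about 66.7 ns after emission corresponds to a target roughly 10 m away.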
  • FIG. 5 is an explanatory diagram of the light receiving section 20 of a reference example.
  • the light receiving optical system 24 includes a condensing lens 241 and a bandpass filter BPF.
  • the condensing lens 241 is an optical element that forms an image of the measurement area 50 on the light receiving surface of the light receiving sensor 22.
  • the bandpass filter BPF is a filter that passes light of a specific wavelength and cuts light of other than the specific wavelength.
  • the wavelength band of light that passes through the band-pass filter BPF may be referred to as a "pass band”
  • the wavelength band of light that is cut by the band-pass filter BPF may be referred to as a "cut-off band". Since the band-pass filter BPF must transmit the reflected light, its passband needs to include at least the wavelength of the light emitted from the light emitting section 12.
  • because the light-receiving optical system 24 includes the band-pass filter BPF, light in the cutoff band of background light such as sunlight is blocked, so the influence of noise caused by the background light can be suppressed.
  • the wavelength λ of the light emitted from the light emitting section 12 changes depending on the temperature. For example, when the temperature of the light emitting section 12 increases, the wavelength λ of the light emitted from the light emitting section 12 becomes longer.
  • the wavelength λ of the light emitted from the light emitting section 12 changes in the range of 875 to 935 nm depending on the temperature.
  • the temperature of the light emitting unit 12 changes in the range of T1 to T3; when the temperature is T1, the wavelength λ1 is assumed to be 875 nm, when the temperature is T2, the wavelength λ2 is assumed to be 905 nm, and when the temperature is T3, the wavelength λ3 is assumed to be 935 nm.
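Using the three sample points given above (875 nm at T1, 905 nm at T2, 935 nm at T3), the temperature dependence can be sketched with a simple linear model. The linearity and the function name are assumptions for illustration; the text states only the three values.

```python
def emission_wavelength_nm(temp, t1, t3, lam1=875.0, lam3=935.0):
    """Emission wavelength of the light emitting section 12, linearly
    interpolated between (T1, 875 nm) and (T3, 935 nm). The linear
    model is an assumption; the text gives only the endpoint values
    and the midpoint value of 905 nm at T2."""
    frac = (temp - t1) / (t3 - t1)
    return lam1 + frac * (lam3 - lam1)
```

At the midpoint temperature T2, this model reproduces the 905 nm stated above.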
  • FIG. 4 is an explanatory diagram of the light receiving section 20 of this embodiment.
  • the light receiving optical system 24 of this embodiment includes a bandpass filter BPF and a dispersion element 25.
  • although the light-receiving optical system 24 in the figure includes a condensing lens 241, the light-receiving optical system 24 need not include the condensing lens 241, as will be described later.
  • the dispersion element 25 is an optical element that disperses light.
  • the dispersion element 25 is arranged between the bandpass filter BPF and the light receiving sensor 22.
  • the light that has passed through the bandpass filter BPF is incident on the dispersion element 25
  • the light dispersed by the dispersion element 25 is incident on the light receiving element 222 of the light receiving sensor 22 .
  • the dispersion element 25 of this embodiment is set to disperse the light that has passed through the bandpass filter BPF (light in the passband of the bandpass filter BPF) over the plurality of light receiving elements 222.
  • in other words, the dispersion element 25 distributes the light that has passed through the band-pass filter BPF over the plurality of light receiving elements 222.
  • three light receiving elements 222 are associated with a certain light emitting element 121; these three light receiving elements 222 may be referred to as the first light receiving element 222A, the second light receiving element 222B, and the third light receiving element 222C. Further, the light reception data of the first light receiving element 222A may be denoted S1, the light reception data of the second light receiving element 222B may be denoted S2, and the light reception data of the third light receiving element 222C may be denoted S3.
  • the dispersion element 25 receives light from the predetermined region of the measurement area 50 corresponding to this light emitting element 121 (the region on the measurement area 50 that is irradiated with the light emitted from this light emitting element 121), and disperses the light in the passband of the band-pass filter BPF over the three light receiving elements 222 (first light receiving element 222A, second light receiving element 222B, and third light receiving element 222C) associated with this light emitting element 121.
  • the range over which the dispersion element 25 disperses the light in the passband of the band-pass filter BPF is not limited to a range covering three light receiving elements 222; it may be any range covering two or more light receiving elements 222.
  • the dispersion element 25 can be composed of a prism, a diffraction grating, a metamaterial, or the like.
  • a metamaterial is an optical element in which a microstructure smaller than the wavelength of light is arranged on a substrate (for example, a glass substrate).
  • the dispersion element 25 may be made of a metamaterial in which microstructures are arranged three-dimensionally, or may be made of a metamaterial (metasurface) in which microstructures are arranged two-dimensionally.
  • since the dispersive element 25 of this embodiment is made of a metamaterial, the dispersive element 25 can also be given the function of condensing the light that has passed through the band-pass filter BPF onto the light receiving sensor 22 (light condensing function). This makes it possible to reduce the size of the light receiving optical system 24. Note that when the dispersion element 25 made of a metamaterial has a light condensing function, the light receiving optical system 24 need not include the condensing lens 241 shown in the figure.
  • conversely, when the light-receiving optical system 24 includes the condensing lens 241, the dispersive element 25 does not need to have a condensing function, so the design constraints on the dispersive element 25 are relaxed and the dispersive element 25 becomes easier to configure.
  • since the dispersion element 25 of this embodiment is made of a metamaterial, the dispersion element 25 can also be given a polarizing filter function that passes light vibrating in a predetermined direction while absorbing light vibrating in a direction crossing that direction. For example, when the light emitted by the light emitting unit 12 vibrates in a predetermined direction, the dispersion element 25 made of a metamaterial can suppress the influence of noise by passing the light vibrating in the predetermined direction and absorbing light vibrating in a crossing direction.
  • for example, by giving the dispersion element 25 a polarizing filter function whose polarization axis is vertical and whose absorption axis is horizontal, the reflected light arriving directly from the object 90 is allowed to pass while light reflected from the road surface (light vibrating in the horizontal direction) is absorbed. In this way, giving the dispersion element 25 a polarizing filter function makes it possible to suppress the influence of noise caused by unnecessary light.
  • the dispersion element 25 may be composed of a prism or a diffraction grating.
  • when the dispersive element 25 is configured with a prism, the angle through which the light is dispersed is small, so in order to spread the light across the plurality of light receiving elements 222, the distance between the dispersive element 25 and the light receiving sensor 22 must be set long.
  • similarly, when the dispersive element 25 is formed of a diffraction grating with a small number of lines, the distance between the dispersive element 25 and the light receiving sensor 22 must be set long, as in the case of a prism.
  • conversely, when the dispersion element 25 is configured with a diffraction grating having a large number of lines, the intensity of the light received by the light receiving element 222 decreases due to light loss that depends on the diffraction efficiency.
  • the dispersive element 25 is made of a metamaterial, the distance between the dispersive element 25 and the light-receiving sensor 22 can be set short, and the decrease in the intensity of the light received by the light-receiving element 222 can also be suppressed.
  • for example, when the dispersive element 25 is composed of a prism, the distance between the dispersive element 25 and the light receiving sensor 22 must be set to about several tens of mm, whereas when the dispersive element 25 is composed of a metamaterial, the distance between the dispersion element 25 and the light receiving sensor 22 can be set to about 10 μm.
  • background light with wavelengths in the range λ1 to λ3 passes through the band-pass filter BPF.
  • the background light that has passed through the band-pass filter BPF enters the dispersion element 25 and is dispersed across the three light receiving elements 222. Therefore, with the configuration of this embodiment, the influence of noise included in the pixel data S (data indicating the amount of light received by the pixel 221; the light reception data of the light receiving element 222 that received the reflected light) can be reduced to about 1/3 compared with the reference example. With the configuration of this embodiment, the SN ratio is improved about threefold compared with the reference example.
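The roughly threefold SN-ratio improvement can be checked with a back-of-envelope calculation: the reflected signal lands on the one element matching its wavelength, while the background light is spread over all three elements. The numeric values below are illustrative, not taken from the text.

```python
signal = 1.0       # reflected-light power on the selected element (arbitrary units)
background = 0.6   # background-light power passing the band-pass filter
n_elements = 3     # elements over which the dispersion element spreads the light

# Reference example: all background light reaches the single element used.
snr_reference = signal / background

# This embodiment: background is split across n_elements, while the
# reflected signal is dispersed onto only the element matching its wavelength.
snr_embodiment = signal / (background / n_elements)

# Improvement factor: ~3x, matching the text.
improvement = snr_embodiment / snr_reference
```

The improvement factor equals the number of elements over which the background is dispersed, under the idealized assumption of an even split.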
  • the measuring device 1 includes a temperature sensor 41.
  • the temperature sensor 41 is a sensor that measures the temperature of the measuring device 1 (particularly the temperature of the light emitting section 12).
  • the temperature sensor 41 outputs temperature data indicating the measurement result to the control unit 30.
  • the control unit 30 measures the distance based on the light reception data of the light receiving element 222 that corresponds to the temperature data of the temperature sensor 41. This point will be explained below.
  • FIG. 6 is a correspondence table used in the first measurement method.
  • the signal processing unit 362 of the control unit 30 stores a correspondence table shown in FIG. 6 in advance.
  • in the correspondence table, the temperature T and the received light data to be used as the pixel data S are associated with each other.
  • when the temperature is T1 to T12, the light emitting unit 12 emits light with a wavelength of 875 to 895 nm, and the dispersion element 25 disperses the light in this wavelength band toward the first light receiving element 222A.
  • the temperature is between T12 and T23, the light emitting section 12 emits light with a wavelength of 895 to 915 nm, and the dispersion element 25 separates the light in this wavelength band toward the second light receiving element 222B.
  • the temperature is T23 to T3
  • the light emitting section 12 emits light with a wavelength of 915 to 935 nm, and the dispersion element 25 disperses the light in this wavelength band toward the third light receiving element 222C.
  • The signal processing unit 362 refers to the correspondence table based on the temperature data acquired from the temperature sensor 41 and determines which light reception data to use as the pixel data S. For example, when the temperature data of the temperature sensor 41 is in the range of T1 to T12 (here, T1 or more and T12 or less), as shown in FIG., the control unit 30 acquires the light reception data S1 of the first light receiving element 222A from among the three light receiving elements 222 (the first light receiving element 222A, the second light receiving element 222B, and the third light receiving element 222C) corresponding to the light emitting element 121.
  • Similarly, when the temperature data of the temperature sensor 41 is in the range of T12 to T23, the control unit 30 acquires the light reception data S2 of the second light receiving element 222B based on the correspondence table, as shown in FIG. Furthermore, when the temperature data of the temperature sensor 41 is in the range of T23 to T3, the control unit 30 acquires the light reception data S3 of the third light receiving element 222C based on the correspondence table, as shown in FIG.
  • the signal processing unit 362 obtains pixel data S of the pixel 221 corresponding to the light emitting element 121 by obtaining light reception data from the light receiving element 222 selected according to the temperature.
  • The distance measuring unit 36 (signal processing unit 362) of the control unit 30 detects the arrival timing of the reflected light based on the light reception data (pixel data S of the pixel 221; see FIG. 3) acquired from the light receiving element 222 selected according to the temperature. Then, the distance measuring section 36 (time detection section 364) detects the time Tf from when the light is irradiated until the reflected light arrives, based on the light emission timing and the light arrival timing. Further, the distance measuring section 36 (distance calculation section 366) calculates the distance L to the target object 90 based on the time Tf.
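The first measurement method can be sketched in code. The temperature ranges, numeric values, and function names below are illustrative assumptions; only the structure (temperature → selected light receiving element → time of flight Tf → distance L) follows the description above.

```python
# Sketch of the first measurement method (hypothetical numbers).
# The correspondence table maps a temperature range to the light receiving
# element whose light reception data is used as the pixel data S.

C = 299_792_458.0  # speed of light [m/s]

# (T_low, T_high, element index): the ranges T1-T12, T12-T23, T23-T3,
# with assumed numeric endpoints for illustration.
TABLE = [
    (-40.0,  10.0, 0),   # T1..T12  -> first light receiving element 222A
    ( 10.0,  60.0, 1),   # T12..T23 -> second light receiving element 222B
    ( 60.0, 105.0, 2),   # T23..T3  -> third light receiving element 222C
]

def select_element(temperature):
    """Return the index of the light receiving element for this temperature."""
    for t_low, t_high, idx in TABLE:
        if t_low <= temperature <= t_high:
            return idx
    raise ValueError("temperature outside operating range")

def measure_distance(temperature, arrival_times):
    """arrival_times[i]: time Tf measured by element i, in seconds."""
    tf = arrival_times[select_element(temperature)]
    return C * tf / 2.0  # light travels to the object and back

# Example: at 25 degC the second element's data is used (Tf = 200 ns).
print(measure_distance(25.0, [1e-7, 2e-7, 3e-7]))  # -> ~30 m
```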
  • When the temperature data of the temperature sensor 41 is changed under a situation (reference situation) in which the first light receiving element 222A to the third light receiving element 222C output predetermined light reception data, a phenomenon occurs in which the control unit 30 outputs different distances. Specifically, a situation is set in which the light emitting unit 12 is at a predetermined temperature (reference temperature), the target object 90 is at a predetermined distance (reference distance), and the first light receiving element 222A to the third light receiving element 222C output the predetermined light reception data S1 to S3; this situation is taken as the reference situation.
  • the reference temperature and reference distance can be set arbitrarily.
  • In the reference situation, the dispersion element 25 disperses the reflected light with a wavelength λ2 toward the second light receiving element 222B. Also, the light with wavelengths λ1 to λ3 that passes through the bandpass filter BPF is dispersed across the three light receiving elements 222 according to wavelength, and the three light receiving elements 222 output the light reception data S1 to S3, respectively.
  • In this reference situation, the control unit 30 acquires the light reception data S2 of the second light receiving element 222B as the pixel data S, as shown in FIG., and outputs the distance calculated from that pixel data S.
  • Next, the temperature data of the temperature sensor 41 is changed under the above reference situation. Note that here, only the temperature data acquired by the control unit 30 is changed (in other words, dummy temperature data is simply input to the control unit 30); the wavelength of the light emitted by the light emitting unit 12 remains the same as in the reference situation, and the light reception data output by the light receiving elements 222 also remain the same as in the reference situation.
  • In this case, for example when dummy temperature data in the range of T1 to T12 is input, the control unit 30 changes its output from the distance according to the light reception data S2 of the second light receiving element 222B to the distance according to the light reception data S1 of the first light receiving element 222A.
  • In this way, when the temperature data of the temperature sensor 41 changes under the condition in which the light receiving element 222 outputs predetermined light reception data (reference condition), the control unit 30 outputs different distances. In other words, by changing the temperature data of the temperature sensor 41 under the reference condition, it is possible to verify that the distance is measured based on the light reception data of the light receiving element 222 corresponding to the temperature data of the temperature sensor 41.
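The verification just described can be sketched as a small experiment. All names, temperature ranges, and numeric values below are assumptions for illustration: the light reception data (here represented as arrival times) are held fixed, and only the dummy temperature data is varied.

```python
# Sketch of the verification procedure (assumed names and numbers):
# under the reference situation the light reception data are fixed, and
# only the temperature data fed to the controller is a dummy value. If
# the output distance changes with the dummy temperature, the controller
# is indeed selecting light reception data by temperature.

C = 299_792_458.0  # speed of light [m/s]

def output_distance(temp_data, arrival_times):
    """Pick the element by temperature range (assumed ranges), then L = c*Tf/2."""
    if temp_data <= 10.0:        # T1..T12  -> element 222A
        tf = arrival_times[0]
    elif temp_data <= 60.0:      # T12..T23 -> element 222B
        tf = arrival_times[1]
    else:                        # T23..T3  -> element 222C
        tf = arrival_times[2]
    return C * tf / 2.0

# Reference situation: fixed light reception data for the three elements;
# only the dummy temperature differs between the two calls.
fixed_times = [1e-7, 2e-7, 3e-7]
d_ref = output_distance(25.0, fixed_times)     # reference temperature
d_dummy = output_distance(-20.0, fixed_times)  # dummy temperature data
print(d_ref != d_dummy)  # True: the output distance changed
```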
  • <Second measurement method> The angle of the light emitted from the dispersion element 25 toward the light receiving elements 222 changes gradually depending on the wavelength. Therefore, the dispersion element 25 may disperse the reflected light toward the boundary between two light receiving elements 222 (for example, the first light receiving element 222A and the second light receiving element 222B), so that the two light receiving elements 222 both receive the reflected light. In such a case, rather than measuring the distance based on the light reception data of one of the three light receiving elements 222 as in the first measurement method, it is advantageous to measure the distance based on the light reception data of the two light receiving elements 222 that receive the reflected light. In the second measurement method, the distance can be measured based on the light reception data of two or more light receiving elements 222 according to the temperature data.
  • FIG. 10 is a table used in the second measurement method.
  • FIG. 11 is an explanatory diagram of the second measurement method.
  • a weight table shown in FIG. 10 is stored in advance in the storage unit (not shown) of the control unit 30.
  • the weight table associates the temperature T with a weighting coefficient.
  • the weighting coefficient corresponds to weighting data associated with temperature data of the temperature sensor 41.
  • the weighting coefficients include a first weighting coefficient W1, a second weighting coefficient W2, and a third weighting coefficient W3.
  • the first weighting coefficient W1 is a weighting coefficient for the light reception data S1 of the first light receiving element 222A.
  • the second weighting coefficient W2 is a weighting coefficient for the light reception data S2 of the second light receiving element 222B.
  • the third weighting coefficient W3 is a weighting coefficient for the light reception data S3 of the third light receiving element 222C.
  • The signal processing unit 362 acquires the weighting coefficients (weighting data) corresponding to the temperature data acquired from the temperature sensor 41 from the weighting table stored in the storage unit. That is, the signal processing unit 362 refers to the weighting table based on the temperature data acquired from the temperature sensor 41 and acquires the weighting coefficients (first weighting coefficient W1, second weighting coefficient W2, and third weighting coefficient W3) corresponding to the temperature data. Then, the signal processing unit 362 acquires the respective light reception data S1 to S3 of the three light receiving elements 222 (first light receiving element 222A, second light receiving element 222B, and third light receiving element 222C) corresponding to the light emitting element 121 that emitted the light, and calculates the pixel data S by weighting them according to the weighting coefficients. Specifically, the signal processing unit 362 calculates the pixel data S based on the following equation: S = W1 × S1 + W2 × S2 + W3 × S3.
  • W1 and W2 are set such that the closer the temperature T is to T1 than to T2, the larger W1 becomes relative to W2 (W3 is set to zero).
  • W2 and W3 are set such that the closer the temperature T is to T2 than to T3, the larger W2 becomes relative to W3 (W1 is set to zero). Thereby, even if the dispersion element 25 disperses the reflected light at the boundary between the second light receiving element 222B and the third light receiving element 222C, the distance can be measured with high accuracy.
  • the distance measuring unit 36 (signal processing unit 362) of the control unit 30 detects the arrival timing of the reflected light based on the calculated pixel data S (see FIG. 3). Then, the distance measuring section 36 (time detection section 364) detects the time Tf from when the light is irradiated until the reflected light arrives, based on the light emission timing and the light arrival timing. Further, the distance measuring section 36 (distance calculating section 366) calculates the distance L to the target object 90 based on the time Tf.
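The weighted combination above can be sketched as follows. The weight-table values are assumptions chosen so that the weights at each temperature sum to 1 (the description does not give the actual coefficients); only the form S = W1·S1 + W2·S2 + W3·S3 follows the text.

```python
# Sketch of the second measurement method: pixel data S is a weighted sum
# S = W1*S1 + W2*S2 + W3*S3 of the three elements' light reception data.
# Temperature endpoints T1, T2, T3 below are assumed numeric values.

def weights(t, t1=-40.0, t2=30.0, t3=105.0):
    """Interpolate (W1, W2, W3) from temperature, as in the weight table."""
    if t <= t2:
        # Between T1 and T2: the closer t is to T1, the larger W1 (W3 = 0).
        w1 = (t2 - t) / (t2 - t1)
        return (w1, 1.0 - w1, 0.0)
    # Between T2 and T3: the closer t is to T2, the larger W2 (W1 = 0).
    w2 = (t3 - t) / (t3 - t2)
    return (0.0, w2, 1.0 - w2)

def pixel_data(t, s1, s2, s3):
    """Weighted pixel data S used for the arrival-timing detection."""
    w1, w2, w3 = weights(t)
    return w1 * s1 + w2 * s2 + w3 * s3

# Reflected light split across the boundary of elements 222A and 222B:
print(pixel_data(-5.0, 40.0, 60.0, 0.0))  # -> 50.0
```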
  • In the second measurement method as well, when the temperature data of the temperature sensor 41 changes under a situation in which the light receiving element 222 outputs predetermined light reception data (reference condition), the control unit 30 outputs different distances. In other words, by changing the temperature data of the temperature sensor 41 under the reference condition, it is possible to verify that the distance is measured based on the light reception data of the light receiving element 222 corresponding to the temperature data of the temperature sensor 41.
  • Note that in the second measurement method, as the temperature data is changed, the distance output by the control unit 30 changes gradually.
  • FIG. 12 is an explanatory diagram of the correspondence between the light emitting element 121 and the light receiving element 222. Further, FIG. 12 is also an explanatory diagram of the correspondence relationship between the pixel 221 and the light receiving element 222.
  • the light emitting section 12 includes a plurality of light emitting elements 121.
  • two adjacent light emitting elements 121 (#1, #2) among the plurality of light emitting elements 121 of the light emitting section 12 are shown.
  • Also shown are the regions (#1, #2) on the measurement area 50 corresponding to the two light emitting elements 121 (#1, #2).
  • Further, the two pixels 221 (#1, #2) of the light receiving sensor 22 corresponding to the two light emitting elements 121 (#1, #2) and the two regions (#1, #2) on the measurement area 50 are shown.
  • the two pixels 221 (#1, #2) correspond to the two light emitting elements 121 (#1, #2) of the light emitting section 12, respectively.
  • The light emitted from the light emitting element 121#1 is irradiated onto region #1 on the measurement area 50, and the light (reflected light and background light) from region #1 is received by the pixel 221#1. Similarly, the light emitted from the light emitting element 121#2 is irradiated onto region #2 on the measurement area 50, and the light (reflected light and background light) from region #2 is received by the pixel 221#2.
  • one pixel 221 of the light receiving sensor 22 includes a plurality of (here, three) light receiving elements 222.
  • The dispersion element 25 of the light receiving optical system 24 disperses the light (reflected light and background light) arriving from region #1 on the measurement area 50 across the plurality of light receiving elements 222 belonging to the pixel 221#1 (here, the first light receiving element 222A to the third light receiving element 222C).
  • the dispersion element 25 of the light-receiving optical system 24 disperses the light (reflected light and background light) arriving from the region #2 on the measurement area 50 across the plurality of light-receiving elements 222 belonging to the pixel 221 #2.
  • The plurality of light receiving elements 222 associated with the light emitting element 121#1 (the plurality of light receiving elements 222 belonging to pixel 221#1) and the plurality of light receiving elements 222 associated with the light emitting element 121#2 (the plurality of light receiving elements 222 belonging to pixel 221#2) are separate and do not overlap. In this way, when the plurality of light receiving elements 222 associated with one of two light emitting elements 121 is different from the plurality of light receiving elements 222 associated with the other light emitting element 121, the control section 30 (timing control section 34) can cause the two light emitting elements 121 to emit light simultaneously.
  • The control unit 30 may cause all the light emitting elements 121 of the light emitting unit 12 to emit light so as to irradiate the entire measurement area 50 with light at once, or may cause some of the light emitting elements 121 (for example, one light emitting element 121) of the light emitting unit 12 to emit light so as to irradiate only a predetermined region of the measurement area 50 with light.
  • FIG. 13 is an explanatory diagram of another correspondence relationship between the light emitting element 121 and the light receiving element 222.
  • FIG. 13 is also an explanatory diagram of another correspondence relationship between the pixel 221 and the light receiving element 222.
  • one pixel 221 of the light receiving sensor 22 is constituted by one light receiving element 222.
  • In this case, the dispersion element 25 of the light receiving optical system 24 disperses the light (reflected light and background light) arriving from region #1 on the measurement area 50 across three light receiving elements 222 constituting three pixels 221. Similarly, the dispersion element 25 of the light receiving optical system 24 disperses the light (reflected light and background light) arriving from region #2 on the measurement area 50 across three light receiving elements 222 constituting three pixels 221.
  • In the configuration shown in FIG. 13, the plurality of light receiving elements 222 associated with the light emitting element 121#1 and the plurality of light receiving elements 222 associated with the light emitting element 121#2 may overlap, so the control unit 30 does not cause the two light emitting elements 121 to emit light simultaneously.
  • the control unit 30 causes some of the light emitting elements 121 (for example, one light emitting element 121) of the light emitting unit 12 to emit light to irradiate only a predetermined region of the measurement area 50 with the light.
  • Alternatively, the control unit 30 may cause a plurality of light emitting elements 121 whose corresponding pluralities of light receiving elements 222 do not overlap to emit light.
  • It is desirable that a condensing lens 241 placed in front of the dispersion element 25 (on the measurement area 50 side) condenses the light (reflected light and background light) arriving from the different regions (#1, #2) on the measurement area 50 at different positions on the dispersion element 25. This makes it easy to configure the light receiving optical system 24 so that the plurality of light receiving elements 222 belonging to pixel 221#1 and the plurality of light receiving elements 222 belonging to pixel 221#2 do not overlap, as shown in FIG.
  • the measuring device 1 of this embodiment includes a light emitting section 12, a band pass filter BPF (corresponding to a filter), a light receiving sensor 22, and a dispersion element 25.
  • the light emitting unit 12 emits light of a wavelength depending on the temperature within a predetermined wavelength band.
  • the bandpass filter BPF passes light in a predetermined wavelength band, and passes reflected light of the light emitted from the light emitting unit 12.
  • the light receiving sensor 22 has a plurality of light receiving elements 222.
  • the dispersion element 25 is disposed between the bandpass filter BPF and the light receiving sensor 22, and disperses the light that has passed through the bandpass filter BPF to the two or more light receiving elements 222.
  • the measuring device 1 of this embodiment further includes a temperature sensor 41 and a control section 30, and the control section 30 determines the distance based on the light reception data of the light receiving element 222 corresponding to the temperature data of the temperature sensor 41. Measure. As a result, the distance can be measured based on the received light data with the influence of noise suppressed, thereby improving measurement accuracy.
  • Further, the measuring device 1 of the present embodiment includes a temperature sensor 41 and a control unit 30, and when the temperature data of the temperature sensor 41 changes under a situation in which the light receiving element 222 outputs predetermined light reception data (reference situation), the control unit 30 outputs a different distance. In other words, by changing the temperature data of the temperature sensor 41 under the reference situation, it is possible to verify that the distance is measured based on the light reception data of the light receiving element 222 corresponding to the temperature data of the temperature sensor 41.
  • Further, the measuring device 1 of this embodiment includes a storage unit that stores weighting coefficients (corresponding to weighting data) associated with the temperature data of the temperature sensor 41 (see FIG. 10). The control unit 30 acquires the weighting coefficients (weighting data) corresponding to the temperature data of the temperature sensor 41 from the storage unit, calculates the pixel data S by weighting the respective light reception data of the two or more light receiving elements 222 according to the weighting coefficients, and measures the distance based on the pixel data S. Thereby, even if the dispersion element 25 disperses the reflected light at the boundary of the light receiving elements 222, the distance can be measured with high accuracy.
  • the dispersion element 25 of this embodiment is made of a metamaterial. Thereby, the distance between the dispersion element 25 and the light receiving sensor 22 can be set short, and the measuring device 1 can be downsized.
  • the dispersive element 25 is made of a metamaterial, it is desirable that the dispersive element 25 has a light condensing function (a function of condensing the light that has passed through the filter onto the light receiving sensor 22). Thereby, it is possible to reduce the size of the light receiving optical system 24.
  • When the dispersive element 25 is made of a metamaterial, it is also desirable that the dispersive element 25 have a polarization filter function (a function of passing light vibrating in a predetermined direction and absorbing light vibrating in a direction crossing the predetermined direction). This makes it possible to suppress the influence of noise caused by unnecessary light.
  • Note that the dispersion element 25 may instead be configured by a prism or a diffraction grating.
  • the measuring device 1 of this embodiment includes a condenser lens 241 disposed between the bandpass filter BPF and the dispersion element 25. This eliminates the need for the dispersion element 25 to also have a light condensing function, so that restrictions on the design of the dispersion element 25 can be alleviated.
  • FIG. 14 is an explanatory diagram of the overall configuration of the measuring device 1001.
  • FIG. 15 is a schematic explanatory diagram of the measuring device 1001.
  • each direction is defined as shown in FIG. 15.
  • the Z direction is a direction along the optical axis of the light receiving optical system 1024. Note that the object 1090 to be measured by the measuring device 1001 is separated from the measuring device 1001 in the Z direction.
  • the X direction and the Y direction are directions perpendicular to the Z direction.
  • the X direction is a direction in which the optical axis of the light projecting optical system 1014 and the optical axis of the light receiving optical system 1024 are lined up.
  • the Y direction is a direction perpendicular to the X direction and the Z direction.
  • the measuring device 1001 is a device that measures the distance to the target object 1090.
  • the measuring device 1001 is a device having a function of so-called LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging).
  • The measuring device 1001 emits measurement light, detects the reflected light reflected from the surface of the target object 1090, and measures the distance by the TOF (Time of Flight) method based on the time from when the measurement light is emitted to when the reflected light is received.
  • the measuring device 1001 includes an irradiation section 1010, a light emitting section 1020, and a control section 1030.
  • the irradiation unit 1010 is an irradiation device that irradiates measurement light toward the object 1090.
  • the irradiation unit 1010 irradiates the measurement area 1050 (see FIG. 15) with measurement light at a predetermined angle of view.
  • the irradiation unit 1010 includes a light emitting unit 1012 and a light projection optical system 1014.
  • the light emitting unit 1012 is a member (light source) that emits light.
  • the light emitting unit 1012 is configured with a surface emitting laser (VCSEL) array chip.
  • the light emitting unit 1012 includes a plurality of light emitting elements 1121 (for example, a surface emitting laser; VCSEL), and the plurality of light emitting elements 1121 are two-dimensionally arranged along the X direction and the Y direction.
  • the light projection optical system 1014 is an optical system that irradiates the measurement area 1050 with the light emitted from the light emitting section 1012.
  • the light emitting unit 1012 can cause each light emitting element 1121 to emit light individually.
  • Each light emitting element 1121 of the light emitting unit 1012 is associated with a predetermined region of the measurement area 1050 via the light projection optical system 1014.
  • the irradiation unit 1010 may be configured to emit light from the entire light emitting surface of the light emitting unit 1012 and irradiate the entire measurement area 1050 with light at once.
  • the light projecting optical system 1014 may include a rotatable mirror (for example, a polygon mirror), and the light from the light emitting unit 1012 may be irradiated onto the measurement area 1050 by rotating the mirror. Note that the wavelength of the light emitted by the light emitting section 1012 changes depending on the temperature. This point will be discussed later.
  • The light receiving unit 1020 receives reflected light from the target object 1090.
  • Specifically, the light receiving unit 1020 receives reflected light from the measurement area 1050 (see FIG. 15).
  • The light receiving unit 1020 includes a light receiving sensor 1022 and a light receiving optical system 1024.
  • the light receiving sensor 1022 has a plurality of pixels 1221 arranged two-dimensionally. For example, in the case of the VGA light receiving sensor 1022, 480 ⁇ 640 pixels 1221 are two-dimensionally arranged. Each pixel 1221 has a light receiving element, and the light receiving element outputs a signal (light receiving data) according to the amount of light received.
  • The light receiving optical system 1024 is an optical system that causes the light receiving unit 1020 to receive reflected light from the measurement area 1050.
  • the light receiving optical system 1024 forms an image of the measurement area 1050 on the light receiving surface of the light receiving sensor 1022.
  • Each pixel 1221 of the light receiving sensor 1022 is associated with a predetermined region of the measurement area 1050 via the light receiving optical system 1024.
  • a certain pixel 1221 of the light receiving sensor 1022 receives light (reflected light and background light) from a corresponding region of the measurement area 1050 via the light receiving optical system 1024.
  • each pixel 1221 of the light receiving sensor 1022 is associated with a predetermined light emitting element 1121 of the light emitting section 1012. Light emitted from a certain light emitting element 1121 is received by the corresponding pixel 1221 via the light projecting optical system 1014 and the light receiving optical system 1024. Note that the light receiving optical system 1024 will be described later.
  • the control unit 1030 controls the measurement device 1001.
  • The control unit 1030 controls the irradiation unit 1010 and the light emitted from the irradiation unit 1010. Furthermore, the control unit 1030 measures the distance to the object 1090 by the TOF (Time of Flight) method based on the output result of the light receiving unit 1020.
  • the control unit 1030 includes an arithmetic unit and a storage device (not shown).
  • the arithmetic device is an arithmetic processing device composed of, for example, a CPU, GPU, MPU, ASIC, or the like. A part of the arithmetic device may be constituted by an analog arithmetic circuit.
  • a storage device is a device that is composed of a main storage device and an auxiliary storage device, and stores programs and data.
  • Various processes for measuring the distance to the target object 1090 are executed by the arithmetic device executing the program stored in the storage device.
  • FIG. 14 shows functional blocks for various processes.
  • the control section 1030 includes a setting section 1032, a timing control section 1034, and a distance measuring section 1036.
  • the setting unit 1032 performs various settings.
  • the timing control unit 1034 controls the processing timing of each unit. For example, the timing control unit 1034 controls the timing at which light is emitted from the light emitting unit 1012.
  • the distance measuring unit 1036 measures the distance to the target object 1090.
  • the distance measurement section 1036 includes a signal processing section 1362, a time detection section 1364, and a distance calculation section 1366.
  • the signal processing unit 1362 processes the output signal (light reception data) of the light reception sensor 1022.
  • the time detection unit 1364 detects the flight time of light (the time from when the light is irradiated until the reflected light arrives).
  • the distance calculation unit 1366 calculates the distance to the target object 1090.
  • FIG. 16 is a timing chart for explaining an example of the measurement method.
  • the control unit 1030 causes the light emitting unit 1012 of the irradiation unit 1010 to emit pulsed light at a predetermined period.
  • the upper side of FIG. 16 shows the timing at which the light emitting unit 1012 emits pulsed light (measuring light).
  • the light emitted from the light emitting unit 1012 is irradiated onto the measurement area 1050 via the light projection optical system 1014.
  • the light reflected from the surface of the object 1090 within the measurement area 1050 is received by the light receiving sensor 1022 via the light receiving optical system 1024.
  • the pixel 1221 of the light receiving sensor 1022 receives the pulsed reflected light.
  • The center of FIG. 16 shows the timing at which the pulsed reflected light arrives.
  • The bottom of FIG. 16 shows the pixel data S of a certain pixel 1221 of the light receiving sensor 1022 (the light reception data of the light receiving element of that pixel 1221).
  • Pixel data S of the light receiving sensor 1022 is data indicating the amount of light received by the pixel 1221.
  • The control unit 1030 may cause all the light emitting elements 1121 of the light emitting unit 1012 to emit light so as to irradiate the entire measurement area 1050 with light at once, or may cause some of the light emitting elements 1121 (for example, one light emitting element 1121) of the light emitting unit 1012 to emit light so as to irradiate only a predetermined region of the measurement area 1050 with light.
  • When emitting light from some of the light emitting elements 1121 (for example, one light emitting element 1121) of the light emitting unit 1012, the control unit 1030 (signal processing unit 1362) acquires the pixel data S of the pixel 1221 corresponding to the light emitting element 1121 that emitted the light.
  • the distance measuring section 1036 (signal processing section 1362) of the control section 1030 detects the arrival timing of the reflected light based on the pixel data S of each pixel 1221. For example, the signal processing unit 1362 detects the arrival timing of the reflected light based on the timing of the peak of the pixel data S.
  • the distance measurement unit 1036 (time detection unit 1364) detects the time Tf from when the light is irradiated until the reflected light arrives, based on the light emission timing and the light arrival timing. The time Tf corresponds to the time during which light travels back and forth between the measuring device 1001 and the target object 1090.
  • The control unit 1030 generates a distance image by calculating the distance to the object 1090 for each pixel 1221 based on the time Tf detected for each pixel 1221 of the light receiving unit 1020.
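The pixel-level TOF computation described above can be sketched as follows. The sampling interval and waveform are assumptions (not from this description): the arrival timing is taken as the peak of the sampled pixel data S, as stated for the signal processing unit 1362, and the distance is L = c × Tf / 2 since Tf covers the round trip.

```python
# Sketch of the per-pixel TOF computation (assumed sample rate and waveform):
# detect the reflected-light peak in the pixel data S, convert the sample
# offset to the time of flight Tf, then L = c * Tf / 2.

C = 299_792_458.0   # speed of light [m/s]
DT = 1e-9           # assumed sampling interval of the pixel data [s]

def distance_from_pixel_data(samples, emit_index=0):
    """Peak of pixel data S -> time of flight Tf -> distance in metres."""
    peak_index = max(range(len(samples)), key=lambda i: samples[i])
    tf = (peak_index - emit_index) * DT   # time from emission to arrival
    return C * tf / 2.0

# Pulse emitted at sample 0; reflected peak arrives at sample 100 (Tf = 100 ns).
samples = [1.0] * 200
samples[100] = 30.0
print(distance_from_pixel_data(samples))  # -> ~15 m
```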
  • the light receiving optical system 1024 includes a condenser lens 1241 and a bandpass filter BPF.
  • the condensing lens 1241 is an optical element that forms an image of the measurement area 1050 on the light receiving surface of the light receiving sensor 1022.
  • the bandpass filter BPF is a filter that passes light of a specific wavelength and cuts (attenuates) light of other wavelengths. In the bandpass filter BPF, the transmittance of light having a specific wavelength is high, and the transmittance of light other than the specific wavelength is low (the attenuation rate of light other than the specific wavelength is high).
  • the wavelength band of light that passes through the band-pass filter BPF is called the "pass band”
  • the wavelength band of light that is cut (attenuated) by the band-pass filter BPF is called the "cut-off band”.
  • the passband is a wavelength band in which the transmittance of the bandpass filter BPF is 50% or more
  • The cutoff band is a band in which the transmittance is less than 50%. Since the bandpass filter BPF needs to transmit the reflected light, it transmits at least light of the wavelength emitted from the light emitting section 1012 (the transmittance at the wavelength of the light emitted from the light emitting section 1012 is high).
  • In other words, the passband of the bandpass filter BPF needs to include at least the wavelength of the light emitted from the light emitting section 1012. Since the light receiving optical system 1024 includes the bandpass filter BPF, the cutoff-band component of background light such as sunlight can be cut, so the influence of noise caused by the background light can be suppressed.
  • the wavelength ⁇ of the light emitted from the light emitting section 1012 changes depending on the temperature. For example, when the temperature of the light emitting section 1012 increases, the wavelength ⁇ of light emitted from the light emitting section 1012 becomes longer.
  • the operating temperature range of the light emitting section 1012 is T1 to T2; the upper limit T2 is, for example, 105 °C.
  • the light emitting section 1012 emits light with a wavelength λ corresponding to the temperature within the wavelength band λ1 to λ2 over the operating temperature range T1 to T2.
  • the bandpass filter BPF needs to transmit the light emitted from the light emitting section 1012 with high transmittance. If the passband of the bandpass filter BPF were constant regardless of temperature, high transmittance would be required for all wavelengths in the wavelength band (λ1 to λ2) of light emitted from the light emitting unit 1012 over the operating temperature range T1 to T2. In other words, it would be necessary to widen the passband of the bandpass filter BPF so that it includes the entire wavelength band (λ1 to λ2) of light emitted from the light emitting section 1012. However, as a result of widening the passband, the amount of background light (light in the passband contained in the background light) that passes through the bandpass filter BPF increases.
  • if the passband of the bandpass filter BPF is constant regardless of temperature, noise included in the pixel data S (data indicating the amount of light received by the pixel 1221) will therefore increase. To avoid this, in this embodiment, the wavelength of light that can pass through the bandpass filter BPF changes depending on the temperature. In other words, the bandpass filter BPF of this embodiment changes its characteristics depending on the temperature (the bandpass filter BPF has temperature dependence). This point will be explained below.
  • FIG. 19 is a schematic explanatory diagram showing the characteristics of the bandpass filter BPF of this embodiment.
  • the horizontal axis of the graph in the figure shows the wavelength (unit: nm), and the vertical axis shows the transmittance (unit: %).
  • the two graphs in the figure show the relationship between wavelength and transmittance of the bandpass filter BPF at two different temperatures.
  • the thick line in the figure is a graph showing the characteristics of the bandpass filter BPF at the temperature T11.
  • the thin line in the figure is a graph showing the characteristics of the bandpass filter BPF at the temperature T12 (>T11).
  • the passband of the bandpass filter BPF (the wavelength band of light passing through the bandpass filter BPF) changes depending on the temperature.
  • when the temperature rises, the wavelength of the passband of the bandpass filter BPF becomes longer. That is, as the temperature increases, the passband of the bandpass filter BPF shifts to the right side (longer wavelength side) in the graph.
  • when the temperature is T11, the center wavelength of the passband of the bandpass filter BPF (the peak wavelength at which the transmittance peaks) is λ11.
  • when the temperature is T12 (>T11), the center wavelength of the passband of the bandpass filter BPF is λ12 (>λ11).
  • the transmittance of the bandpass filter BPF for the wavelength λ11 is higher when the temperature is T11 than when the temperature is T12 (>T11). Further, the transmittance of the bandpass filter BPF for the wavelength λ12 is higher when the temperature is T12 (>T11) than when the temperature is T11.
  • when the temperature is T1, the transmittance of the bandpass filter BPF for the wavelength λ1 becomes high, and the reflected light of wavelength λ1 passes through the bandpass filter BPF more easily.
  • likewise, when the temperature is T2 (105 °C), the transmittance of the bandpass filter BPF for the wavelength λ2 becomes high, and the reflected light of wavelength λ2 passes through the bandpass filter BPF more easily.
  • the center wavelength (peak wavelength) of the passband of the bandpass filter BPF at temperature T1 (≈T11) is set to λ1, and the center wavelength of the passband at temperature T2 (≈T12) is set to λ2. The structure of the bandpass filter BPF for achieving this will be described later.
  • as described above, when the temperature rises, the wavelength λ of the light emitted from the light emitting section 1012 becomes longer. Therefore, in this embodiment, a bandpass filter BPF whose center wavelength (the wavelength at which the transmittance peaks) becomes longer as the temperature increases is used. As a result, even if the wavelength λ of the light emitted from the light emitting unit 1012 changes depending on the temperature, the transmittance of the bandpass filter BPF for the wavelength λ remains high, while the transmittance for wavelengths other than λ remains low. Consequently, the influence of noise included in the pixel data S can be reduced, and the S/N ratio can be improved.
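The idea that the filter's center wavelength tracks the emitter can be sketched numerically. A minimal sketch assuming a linear drift between the endpoint values given for this embodiment (940 nm at 25 °C, 946 nm at 105 °C); the linear model and the function name are illustrative assumptions, not stated in the patent:

```python
# Sketch: a linear model (an assumption; the text only gives the endpoints)
# of how the emission wavelength drifts with temperature. A matched
# temperature-dependent BPF would keep its passband center at this value.

T1, T2 = 25.0, 105.0        # operating temperature range [deg C]
LAM1, LAM2 = 940.0, 946.0   # emission wavelength at T1 and T2 [nm]

def emitter_wavelength(temp_c: float) -> float:
    """Linearly interpolated emission wavelength at temp_c [nm]."""
    frac = (temp_c - T1) / (T2 - T1)
    return LAM1 + frac * (LAM2 - LAM1)

for t in (25.0, 65.0, 105.0):
    print(t, emitter_wavelength(t))  # 940 nm at 25 C, 946 nm at 105 C
```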
  • FIG. 20 is a top view of the bandpass filter BPF.
  • FIG. 21 is a cross-sectional view of the bandpass filter BPF. Although only seven protrusions 1027 are depicted in FIG. 20, in reality a very large number of protrusions 1027 are arranged on the surface of the bandpass filter BPF in the arrangement pattern shown in FIG. 20.
  • the bandpass filter BPF is composed of a base material 1025 and a thin film 1026.
  • the base material 1025 is made of a light-transmitting member.
  • the base material 1025 is composed of a glass substrate (here, organic glass).
  • a thin film 1026 is formed on the surface of the base material 1025.
  • the base material 1025 is made of a light-transmissive resin (resin with high transmittance), and here the base material 1025 is made of PMMA (Poly Methyl Methacrylate).
  • the base material 1025 may be made of other organic glass.
  • the base material 1025 is made of a material having a larger coefficient of linear expansion than the thin film 1026.
  • the base material 1025 may have a coefficient of linear expansion comparable to that of the thin film 1026, and does not need to be made of a material having a coefficient of linear expansion larger than that of the thin film 1026.
  • the thin film 1026 formed on the surface of the base material 1025 is made of a material having a refractive index different from that of the base material 1025. For example, the thin film 1026 is made of a high-refractive-index material so that the refractive index n2 of the thin film 1026 is higher than the refractive index n1 of the base material 1025.
  • the thin film 1026 is made of titanium oxide (TiO 2 ).
  • the thin film 1026 is not limited to titanium oxide (TiO 2 ), and may be made of, for example, amorphous silicon (α-Si).
  • the thin film 1026 is made of a material (inorganic material) having a smaller coefficient of linear expansion than the base material 1025.
  • the thin film 1026 may have a coefficient of linear expansion comparable to that of the base material 1025, and does not need to be made of a material having a coefficient of linear expansion smaller than that of the base material 1025.
  • a plurality of protrusions 1027 are provided on the surface of the thin film 1026.
  • the protrusions 1027 on the surface of the thin film 1026 are microstructures smaller than the wavelength of light, and are sometimes called nanoposts.
  • the protrusion 1027 has a columnar shape, and here the protrusion 1027 has a cylindrical shape.
  • the protrusion 1027 may be configured in a prismatic shape (for example, a hexagonal column, an octagonal column, etc.).
  • a large number of protrusions 1027 are arranged in a predetermined pattern.
  • columnar projections 1027 are arranged in a grid pattern. In other words, a nanopost array is formed on the surface of the thin film 1026.
  • by arranging a large number of protrusions 1027 in a grid pattern, the protrusions 1027 can be arranged at predetermined intervals. Furthermore, by arranging the columnar protrusions 1027 in a grid pattern, the polarization dependence of the bandpass filter BPF can be weakened. In addition, as shown in FIG. 20, by arranging a large number of protrusions 1027 in a triangular lattice (equilateral triangular lattice), each protrusion 1027 can be arranged at high density with equal spacing to the six surrounding protrusions 1027. Note that the arrangement pattern of the protrusions 1027 is not limited to the pattern shown in FIG. 20.
  • for example, a large number of protrusions 1027 may be arranged in a hexagonal lattice so that each protrusion 1027 is equally spaced from the three surrounding protrusions 1027.
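The equal-spacing property of the triangular lattice described above can be checked numerically. A minimal sketch (the pitch reuses the 557 nm example value given later in the text; the variable names are illustrative):

```python
# Sketch: a triangular-lattice site and its six nearest neighbours, at
# 60-degree steps, all at the same distance (the lattice pitch).
import math

pitch = 557e-9  # lattice spacing [m]; example value from the text

center = (0.0, 0.0)
neighbors = [(pitch * math.cos(math.radians(60 * k)),
              pitch * math.sin(math.radians(60 * k))) for k in range(6)]

distances = [math.hypot(x - center[0], y - center[1]) for x, y in neighbors]
print("six neighbours, spread in spacing:", max(distances) - min(distances))
```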
  • the protrusion 1027 is made of the same material as the thin film 1026, and here it is made of titanium oxide (TiO 2 ).
  • the protrusions 1027 control the phase and direction of the transmitted light, thereby determining the wavelengths at which the light interferes constructively (or destructively).
  • the base material 1025 and the thin film 1026 (and the protrusions 1027) expand and contract in response to temperature changes, so the interval between the protrusions 1027 changes (the density of the protrusions 1027 changes) in accordance with the temperature.
  • the wavelength of light that can pass through the band-pass filter BPF changes depending on the temperature, and the center wavelength of the passband of the band-pass filter BPF changes.
  • thereby, the passband of the bandpass filter BPF can be shifted to accommodate the fact that the wavelength λ of the light emitted from the light emitting section 1012 becomes longer when the temperature rises.
  • note that the shape of the protrusions 1027 (the height and diameter of the cylinders) also changes depending on the temperature, and this effect also changes the wavelength of light that can pass through the bandpass filter BPF.
  • the characteristics (temperature dependence) of the bandpass filter BPF can be adjusted by adjusting the interval between the protrusions 1027.
  • by adjusting the height and diameter of the protrusions 1027, it is also possible to adjust the characteristics of the bandpass filter BPF (such as the wavelengths of the passband and cutoff band).
  • a thin film 1026 having protrusions 1027 is formed on a base material 1025 having a large coefficient of linear expansion.
  • therefore, even if the coefficient of linear expansion of the protrusions 1027 is small, the change in the interval between the protrusions 1027 with temperature can be made large.
  • note that the linear expansion coefficient of the base material 1025 does not have to be larger than that of the thin film 1026 (and protrusions 1027); it may be about the same.
  • FIG. 22 is a graph showing the characteristics of the bandpass filter BPF.
  • the thick line in the figure is a graph showing the characteristics of the bandpass filter BPF when the temperature is 25° C. (corresponding to T1 and T11 described above).
  • the thin line in the figure is a graph showing the characteristics of the bandpass filter BPF when the temperature is 105° C. (corresponding to T2 and T12 described above).
  • the base material 1025 is made of PMMA (Poly Methyl Methacrylate), and the thin film 1026 (and protrusion 1027) is made of titanium oxide (TiO 2 ).
  • the linear expansion coefficient of the base material 1025 is 56×10⁻⁶/°C, and the linear expansion coefficient of the thin film 1026 is 10×10⁻⁶/°C.
  • the refractive index of the base material 1025 is 1.49, and the refractive index of the thin film 1026 is 2.5.
  • the thickness of thin film 1026 is 146 nm.
  • the protrusions 1027 are arranged in the pattern shown in FIG. 20, and the interval between the protrusions 1027 is 557 nm. Further, the radius of the cylindrical protrusion 1027 is 185 nm, and the height of the protrusion 1027 is 145 nm.
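Given the values above, the temperature dependence stems from the base material expanding and changing the protrusion spacing. A minimal sketch estimating the spacing at 105 °C from the 25 °C value, assuming (as an idealization) that the spacing simply follows the PMMA linear expansion:

```python
# Sketch: protrusion spacing vs. temperature, assuming it scales with the
# base material's linear expansion (alpha = 56e-6 per deg C, spacing
# 557 nm at 25 C, per the example values above).

ALPHA_BASE = 56e-6   # linear expansion coefficient of base material [1/degC]
D0 = 557.0           # protrusion spacing at the reference temperature [nm]

def spacing_at(temp_c: float, ref_c: float = 25.0) -> float:
    """Spacing [nm] assuming it scales with the base material's expansion."""
    return D0 * (1.0 + ALPHA_BASE * (temp_c - ref_c))

print(spacing_at(105.0))  # spacing grows by about 0.45% over the 80 C rise
```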
  • the bandpass filter BPF shown in FIG. 22 has a high transmittance at 940 nm (corresponding to ⁇ 1 and ⁇ 11 described above) at 25°C. Since the light emitting unit 1012 of this embodiment emits light with a wavelength of 940 nm at 25° C., this bandpass filter BPF easily transmits reflected light. Further, this bandpass filter BPF has a high transmittance at 946 nm (corresponding to the above-mentioned ⁇ 2 and ⁇ 12) at 105°C. Since the light emitting unit 1012 of this embodiment emits light with a wavelength of 946 nm when the temperature is 105° C., this bandpass filter BPF easily transmits reflected light.
  • the light emitting unit 1012 emits light with a wavelength of 940 nm when the temperature is 25°C, and emits light with a wavelength of 946 nm when the temperature is 105°C.
  • the temperature characteristics of the light emitting section 1012 may differ depending on the structure of the light emitting section 1012.
  • the arrangement of the protrusions 1027 of the bandpass filter BPF (the interval between the protrusions 1027) and their shape (height and diameter) may be changed depending on the temperature characteristics of the light emitting part 1012 (the materials of the base material 1025 and the thin film 1026 may also be changed).
  • FIG. 23 is a graph showing another characteristic of the bandpass filter BPF. Note that here, it is assumed that the light emitting unit 1012 emits light with a wavelength of 1298 nm when the temperature is 25°C, and emits light with a wavelength of 1308 nm when the temperature is 105°C.
  • the linear expansion coefficient of the base material 1025 is 56×10⁻⁶/°C
  • the linear expansion coefficient of the thin film 1026 is 10×10⁻⁶/°C.
  • the refractive index of the base material 1025 is 1.49
  • the refractive index of the thin film 1026 is 3.48.
  • the thickness of thin film 1026 is 270 nm.
  • the protrusions 1027 are arranged in the pattern shown in FIG. 20, and the interval between the protrusions 1027 is 860 nm. Further, the diameter of the cylindrical protrusion 1027 is 210 nm (radius is 105 nm), and the height of the protrusion 1027 is 160 nm.
  • the bandpass filter BPF shown in FIG. 23 has a high transmittance at 1298 nm (corresponding to ⁇ 1 and ⁇ 11 described above) at 25°C. Further, this bandpass filter BPF has a high transmittance at 1308 nm (corresponding to the above-mentioned ⁇ 2 and ⁇ 12) at 105°C. In this way, by appropriately changing the shape and arrangement of the protrusions 1027 of the bandpass filter BPF, it is possible to create a bandpass filter BPF adapted to the temperature characteristics of the light emitting section 1012.
  • the shape of the protrusions 1027 is not limited to the columnar shape.
  • for example, the protrusions 1027 may be configured as ridges (striped protrusions).
  • a large number of protrusions 1027 are arranged on the surface of the thin film 1026 at predetermined intervals in a predetermined direction.
  • the bandpass filter BPF in which the ridge-shaped protrusions 1027 are arranged in a predetermined direction has stronger polarization dependence than the bandpass filter BPF in which the columnar protrusions 1027 are arranged in a lattice pattern (see FIG. 20).
  • the bandpass filter BPF in which the protrusions 1027 are arranged in a predetermined direction is useful when passing or blocking light vibrating in a specific direction.
  • in this case as well, the characteristics of the bandpass filter BPF (passband, cutoff-band wavelengths, etc.) can be adjusted by adjusting the distance between the protrusions 1027 (the ridge spacing) and the shape of the protrusions 1027 (the height and width of the ridges).
  • FIG. 24 is an explanatory diagram of a modification of the arrangement of the bandpass filter BPF.
  • the bandpass filter BPF is arranged between the condenser lens 1241 and the light receiving sensor 1022.
  • by arranging the bandpass filter BPF between the condenser lens 1241 and the light receiving sensor 1022, the influence of the angle dependence of the bandpass filter BPF can be suppressed (in other words, the range of angles at which light is incident on the bandpass filter BPF can be kept small).
  • in contrast, when the bandpass filter BPF is placed in front of the condenser lens 1241 (on the object 1090 side), light enters the bandpass filter BPF over a wide range of angles, so the bandpass filter BPF becomes more susceptible to its angle dependence.
  • when the bandpass filter BPF is configured so that the transmittance changes depending on the temperature as in this embodiment, the angle dependence of the bandpass filter BPF may become strong, so it is particularly effective to arrange the bandpass filter BPF between the condenser lens 1241 and the light receiving sensor 1022.
  • the light receiving optical system 1024 may further include a bandpass filter BPF' that is different from the above-mentioned bandpass filter BPF.
  • the other band pass filter BPF' passes light in the wavelength band emitted by the light emitting unit 1012 and cuts (attenuates) light outside that wavelength band.
  • since the light emitting unit 1012 emits light with a wavelength λ corresponding to the temperature in the wavelength band λ1 to λ2 within the operating temperature range, the other bandpass filter BPF' passes light in the wavelength band λ1 to λ2 while attenuating light in wavelength bands shorter than λ1 and longer than λ2.
  • the other band-pass filter BPF' may be placed in front of the band-pass filter BPF (on the object 1090 side) or behind it (on the side opposite to the object 1090 as viewed from the band-pass filter BPF). However, as shown in FIGS. 25 to 27, if the other band-pass filter BPF' is placed in front of the band-pass filter BPF (on the object 1090 side), only light that has passed through the other band-pass filter BPF' enters the band-pass filter BPF, so light of wavelengths cut by the other band-pass filter BPF' need not enter the band-pass filter BPF.
  • when the bandpass filter BPF is placed between the condenser lens 1241 and the light receiving sensor 1022, the other bandpass filter BPF' may be placed on the front side of the condenser lens 1241 (the object 1090 side), or, as shown in FIG. 27, on the rear side of the condenser lens 1241 (the side opposite to the object 1090 when viewed from the condenser lens 1241; here, between the condenser lens 1241 and the bandpass filter BPF). Note that when the other bandpass filter BPF' is arranged as shown in FIG. 27, the influence of the angle dependence of the bandpass filter BPF' can also be suppressed.
  • the measuring device 1001 of this embodiment includes a light emitting section 1012, a band pass filter BPF, and a light receiving sensor 1022.
  • the light emitting unit 1012 emits light of a wavelength depending on the temperature.
  • the bandpass filter BPF allows the reflected light of the light emitted from the light emitting section 1012 to pass therethrough.
  • the light receiving sensor 1022 receives the light that has passed through the band pass filter BPF.
  • the wavelength of light that can pass through the bandpass filter BPF changes depending on the temperature.
  • even if the wavelength of the light emitted from the light emitting section 1012 changes depending on the temperature, the bandpass filter BPF can pass the reflected light, and the light receiving sensor 1022 can receive it.
  • the background light received by the light receiving sensor 1022 can be reduced, and the influence of noise caused by the background light can be suppressed.
  • the bandpass filter BPF includes a base material 1025 and a thin film 1026 formed on the base material 1025, and a plurality of protrusions 1027 are formed in a predetermined pattern on the surface of the thin film 1026.
  • the distance between the protrusions 1027 changes depending on the temperature. Thereby, the characteristics of the bandpass filter BPF can be changed depending on the temperature.
  • the linear expansion coefficient of the base material 1025 is larger than that of the thin film 1026. Therefore, even if the coefficient of linear expansion of the thin film 1026 is small, the change in the distance between the protrusions 1027 depending on the temperature can be increased.
  • the base material 1025 is made of a light-transmitting resin. This makes it easy to make the linear expansion coefficient of the base material 1025 larger than that of the thin film 1026.
  • the protrusions 1027 are configured in a columnar shape, and a plurality of protrusions 1027 are arranged in a grid pattern. Thereby, the polarization dependence of the bandpass filter BPF can be weakened.
  • alternatively, by configuring the protrusions 1027 as ridges (striped protrusions) and arranging a plurality of them in a predetermined direction, a bandpass filter BPF whose characteristics change depending on the temperature may be configured. Thereby, the bandpass filter BPF can be configured to pass or block light vibrating in a specific direction.
  • it is desirable that the bandpass filter BPF be placed between the condenser lens 1241 and the light receiving sensor 1022. Thereby, the influence of the angle dependence of the bandpass filter BPF can be suppressed.
  • the bandpass filter BPF passes light in the wavelength band emitted by the light emitting unit 1012.
  • the band pass filter BPF of this embodiment includes a base material 1025 and a thin film 1026 formed on the base material 1025, and a plurality of protrusions 1027 are arranged in a predetermined pattern on the surface of the thin film 1026.
  • the distance between the protrusions 1027 changes depending on the temperature. This makes it possible to realize a bandpass filter whose characteristics change depending on the temperature.
  • FIG. 28 is an explanatory diagram of the overall configuration of the measuring device 2001.
  • FIG. 29 is a schematic explanatory diagram of the measuring device 2001.
  • each direction is defined as shown in FIG. 29.
  • the Z direction is a direction along the optical axis of the light projection optical system 2014. Note that the object 2090 to be measured by the measuring device 2001 is separated from the measuring device 2001 in the Z direction. Further, the X direction and the Y direction are directions perpendicular to the Z direction. Note that the plurality of light emitting elements 2121 constituting the light emitting section 2012 are two-dimensionally arranged along the X direction and the Y direction (described later; see FIGS. 31 to 33). The plurality of pixels 2221 of the light receiving unit 2020 are also two-dimensionally arranged along the X direction and the Y direction.
  • the measuring device 2001 is a device that measures the distance to the target object 2090.
  • the measurement device 2001 is a device having a function of so-called LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging).
  • the measuring device 2001 emits measurement light, detects the reflected light reflected from the surface of the target object 2090, and measures the distance by the TOF method (Time of Flight), based on the time from when the measurement light is emitted to when the reflected light is received.
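The TOF relation above can be written as d = c·Tf/2, since the measured time Tf covers the round trip to the object and back. A minimal sketch (the function name is illustrative):

```python
# Sketch of the TOF (time-of-flight) relation used by the measuring device:
# the round-trip time Tf corresponds to twice the distance, so d = c * Tf / 2.

C = 299_792_458.0  # speed of light [m/s]

def distance_from_tof(tf_s: float) -> float:
    """Distance [m] to the object from the measured round-trip time [s]."""
    return C * tf_s / 2.0

# e.g. a round-trip time of 100 ns corresponds to roughly 15 m
print(distance_from_tof(100e-9))
```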
  • the measuring device 2001 includes an irradiation section 2010, a light emitting section 2020, and a control section 2030.
  • the irradiation unit 2010 is an irradiation device that irradiates measurement light toward the target object 2090.
  • the irradiation unit 2010 irradiates the measurement area 2050 (see FIG. 29) with measurement light at a predetermined angle of view.
  • the irradiation unit 2010 includes a light emitting unit 2012 and a light projection optical system 2014.
  • the light emitting unit 2012 is a member (light source) that emits light.
  • the light emitting unit 2012 is configured with a surface emitting laser (VCSEL) array chip.
  • the light projection optical system 2014 is an optical system that irradiates the measurement area 2050 with the light emitted from the light emitting section 2012. The detailed configuration of the irradiation unit 2010 will be described later.
  • the light receiving unit 2020 receives reflected light from the target object 2090.
  • the light receiving unit 2020 receives reflected light from the measurement area 2050 (see FIG. 29).
  • the light receiving unit 2020 includes a light receiving sensor 2022 and a light receiving optical system 2024.
  • the light receiving sensor 2022 has a plurality of pixels 2221 arranged two-dimensionally. For example, in the case of the VGA light receiving sensor 2022, 480 ⁇ 640 pixels 2221 are two-dimensionally arranged. Each pixel 2221 has a light receiving element, and the light receiving element outputs a signal according to the amount of received light.
  • the light receiving optical system 2024 is an optical system that causes the light receiving unit 2020 to receive reflected light from the measurement area 2050.
  • the light receiving optical system 2024 forms an image of the measurement area 2050 on the light receiving surface of the light receiving sensor 2022 .
  • the control unit 2030 controls the measurement device 2001.
  • the control unit 2030 controls the irradiation unit 2010 and controls the light emitted from the irradiation unit 2010. Further, the control unit 2030 measures the distance to the target object 2090 based on the output result of the light receiving unit 2020 using the TOF method (Time of Flight).
  • the control unit 2030 includes an arithmetic unit and a storage device (not shown).
  • the computing device is, for example, a computing processing device such as a CPU or a GPU. A part of the arithmetic device may be constituted by an analog arithmetic circuit.
  • a storage device is a device that is composed of a main storage device and an auxiliary storage device, and stores programs and data. Various processes for measuring the distance to the target object 2090 are executed by the arithmetic device executing the program stored in the storage device.
  • FIG. 28 shows functional blocks for various processes.
  • the control section 2030 includes a setting section 2032, a timing control section 2034, and a distance measuring section 2036.
  • the setting unit 2032 performs various settings.
  • the timing control unit 2034 controls the processing timing of each unit. For example, the timing control unit 2034 controls the timing at which light is emitted from the light emitting unit 2012.
  • the distance measuring unit 2036 measures the distance to the target object 2090.
  • the distance measurement section 2036 includes a signal processing section 2362, a time detection section 2364, and a distance calculation section 2366.
  • the signal processing unit 2362 processes the output signal of the light receiving sensor 2022.
  • the time detection unit 2364 detects the flight time of light (the time from when the light is irradiated until the reflected light arrives).
  • the distance calculation unit 2366 calculates the distance to the target object 2090.
  • FIG. 30 is a timing chart for explaining an example of the measurement method.
  • the control unit 2030 causes the light emitting unit 2012 of the irradiation unit 2010 to emit pulsed light at a predetermined period.
  • the upper side of FIG. 30 shows the timing at which the light emitting unit 2012 emits pulsed light (emission timing).
  • the light emitted from the light emitting unit 2012 is irradiated onto the measurement area 2050 via the light projection optical system 2014.
  • the light reflected from the surface of the object 2090 within the measurement area 2050 is received by the light receiving sensor 2022 via the light receiving optical system 2024.
  • Each pixel 2221 of the light receiving sensor 2022 receives pulsed reflected light.
  • FIG. 30 also shows the timing at which the pulsed reflected light arrives.
  • FIG. 30 further shows an output signal of a certain pixel 2221 of the light receiving sensor 2022.
  • Each pixel 2221 of the light receiving sensor 2022 outputs a signal according to the amount of light received.
  • the control unit 2030 may emit light from all of the light emitting elements 2121 of the light emitting unit 2012 to irradiate the entire measurement area 2050 with light at once, or may emit light from only some of the light emitting elements 2121 (for example, one light emitting element 2121) to irradiate only a predetermined region of the measurement area 2050 with light.
  • the control unit 2030 acquires the signals output from the pixels 2221 corresponding to the light emitting elements 2121 caused to emit light (for example, one light emitting element 2121), and does not process the signals of the pixels 2221 that do not correspond to the light emitting elements 2121 used.
  • the irradiation unit 2010 of this embodiment can irradiate a predetermined region of the measurement area 2050 (corresponding to a group of irradiation spots described later) with light by emitting light from some of the light emitting elements 2121 (for example, one light emitting element 2121) of the light emitting unit 2012.
  • the distance measuring section 2036 (signal processing section 2362) of the control section 2030 detects the arrival timing of the reflected light based on the output signal of each pixel 2221.
  • the signal processing unit 2362 detects the arrival timing of the reflected light based on the peak timing of the output signal of each pixel 2221.
  • the signal processing unit 2362 may determine the arrival timing of the reflected light based on the peak of the signal obtained by cutting the DC component of the output signal of the pixel 2221 in order to remove the influence of disturbance light (for example, sunlight).
  • the distance measurement unit 2036 (time detection unit 2364) detects the time Tf from when the light is irradiated until the reflected light arrives, based on the light emission timing and the light arrival timing.
  • the time Tf corresponds to the time during which light travels back and forth between the measuring device 2001 and the target object 2090.
  • the control unit 2030 generates a distance image by calculating the distance to the object 2090 for each pixel 2221 based on the time Tf detected for each pixel 2221 of the light receiving unit 2020.
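The per-pixel distance-image step above can be sketched as applying d = c·Tf/2 to a 2-D map of detected times. A minimal sketch (the 2×2 "sensor" and its Tf values are made-up illustration data):

```python
# Sketch: building a distance image from a per-pixel map of round-trip
# times Tf, as the control unit does for each pixel 2221.

C = 299_792_458.0  # speed of light [m/s]

def distance_image(tf_map):
    """Convert a 2-D list of round-trip times [s] to distances [m]."""
    return [[C * tf / 2.0 for tf in row] for row in tf_map]

tf_map = [[20e-9, 40e-9],
          [60e-9, 80e-9]]   # hypothetical per-pixel flight times
img = distance_image(tf_map)
print(img)  # distances of roughly 3, 6, 9 and 12 m
```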
  • FIG. 31 is an explanatory diagram of the arrangement of the light emitting elements 2121 of the light emitting section 2012.
  • the light emitting unit 2012 is composed of, for example, a surface emitting laser (VCSEL) array chip.
  • the light emitting unit 2012 includes a plurality of light emitting elements 2121 (for example, a surface emitting laser; VCSEL), and the plurality of light emitting elements 2121 are two-dimensionally arranged.
  • the plurality of light emitting elements 2121 are arranged at intervals in the X direction and the Y direction.
  • the plurality of light-emitting elements 2121 are arranged at intervals due to the gaps between the emitters.
  • in FIG. 31, three light emitting elements 2121 are shown in each of the X direction and the Y direction, but in reality a larger number of light emitting elements 2121 are arranged along the X direction and the Y direction.
  • FIG. 32 is a reference explanatory diagram of light irradiated onto the measurement area 2050.
  • One circle in the figure indicates a range in which light emitted from a certain light emitting element 2121 (one light emitting element 2121) is irradiated onto the measurement area 2050 via one lens.
  • One circle in the figure corresponds to an image of a light emitting point of one light emitting element 2121.
  • if the light projection optical system 2014 projects the images of the light emitting points of the light emitting elements 2121 directly onto the measurement area 2050 as shown in FIG. 32, there may be gaps in the light irradiated onto the measurement area 2050.
  • in that case, regions on the measurement area 2050 that cannot be irradiated with light arise, creating regions that cannot be measured.
  • FIG. 33 is an explanatory diagram of light irradiated onto the measurement area 2050 in this embodiment.
  • the light projection optical system 2014 of this embodiment includes a plurality of lenses whose optical axes are shifted in predetermined directions (X direction and Y direction). This suppresses the occurrence of areas on the measurement area 2050 that cannot be irradiated with light, as shown in FIG. 33. This point will be explained below.
  • FIG. 34 is an explanatory diagram of the lens unit 2015.
  • the light projection optical system 2014 includes a lens unit 2015 having a plurality of lens elements.
  • the lens element is constituted by a metalens 2016, and the lens unit 2015 has a plurality of metalens 2016 (the light projection optical system 2014 has a plurality of metalens 2016).
  • the metalens 2016 is an optical element that changes the transmission intensity and phase of light by arranging microstructures smaller than the wavelength of light in a predetermined pattern, and functions as a lens (here, a convex lens). In the figure, an arrangement pattern of microstructures forming a plurality of metalens 2016 is shown.
  • the microstructures forming the metalens 2016 are arranged on a plane (XY plane) parallel to the X direction and the Y direction.
  • the metalens 2016 of this embodiment is composed of a microstructure provided on the surface of a substrate (for example, a glass substrate).
  • the metalens 2016 of this embodiment is composed of a metasurface.
  • the metalens 2016 is not limited to a metasurface, and may be made of a metamaterial in which microstructures are three-dimensionally arranged.
  • a plurality of metalens 2016 are arranged in the lens unit 2015 along the X direction and the Y direction, respectively.
  • two metalens 2016 are arranged side by side along the X direction
  • two metalens 2016 are arranged side by side along the Y direction.
  • the number of metalens 2016 arranged in the X direction or the Y direction may be two or more.
  • the plurality of metalens 2016 are arranged with their optical axes shifted in the X direction and the Y direction.
  • the figure shows that metalens 2016 are arranged at intervals t in the X and Y directions.
  • the optical axis of each metalens 2016 is parallel to the Z direction.
  • a plurality of metalens 2016 are arranged on a common plane by disposing their microstructures on a common plane (for example, on the same glass substrate). If multiple convex lenses were instead arranged two-dimensionally, streak-like recesses would be formed at the boundaries between adjacent convex lenses, since a convex lens varies in thickness with the distance from its optical axis, and shadows might be formed in the measurement area 2050 under the influence of these streak-like recesses. In contrast, when a plurality of metalens 2016 are provided on a common plane as in this embodiment, the boundaries between the lenses can be made flat, so that the formation of shadows in the measurement area 2050 can be prevented. Note that by using the metalens 2016, it is possible to set the diameter of the lens small and to set the shift amount of the optical axis of the lens small; for example, the distance t between optical axes in the figure can be set small.
  • FIG. 35 is an explanatory diagram of an image of a light emitting point of the light emitting element 2121 formed by one metalens 2016.
  • a circle in the figure indicates a range in which a certain light emitting element 2121 (one light emitting element 2121) irradiates light onto the measurement area 2050 via a certain metalens 2016 (one metalens 2016).
  • the range (corresponding to one circle in the figure) in which a certain light emitting element 2121 (one light emitting element 2121) irradiates light onto the measurement area 2050 via a certain metalens 2016 (one metalens 2016) is called an "irradiation spot". As shown in FIG. 35, the light emitting point of one light emitting element 2121 forms one image (irradiation spot) in the measurement area 2050 via one metalens 2016.
  • FIG. 36 is an explanatory diagram of an image of a light emitting point of a light emitting element 2121 formed by a plurality of metalens 2016.
  • a light emitting point of one light emitting element 2121 forms a plurality of images (irradiation spots) in a measurement area 2050 via a plurality of metalens 2016.
  • a plurality of images (irradiation spots) formed by a certain light emitting element 2121 (one light emitting element 2121) via a plurality of metalens 2016 will be referred to as an irradiation spot group.
  • the irradiation spot group in this case is composed of a plurality of irradiation spots arranged in a 2×2 matrix.
  • when N metalens 2016 are arranged side by side in each of the X direction and the Y direction, a certain light emitting element 2121 (one light emitting element 2121) forms N images in each of the X direction and the Y direction on the measurement area 2050. That is, the irradiation spot group in this case is composed of a plurality of irradiation spots arranged in an N×N matrix.
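The N×N irradiation spot group described above can be modeled as one spot replicated at the lens pitch. The sketch below assumes, purely for illustration (this is not stated in the patent), that each metalens 2016 shifts its image by its own optical-axis offset, so the spots of one group sit on a grid of pitch t:

```python
# Irradiation spot group (cf. FIG. 36): one light emitting element 2121
# imaged through an N x N array of metalenses 2016 spaced at interval t.
# Simplifying assumption (not stated in the patent): each metalens shifts
# its image by its own optical-axis offset, so spots replicate at pitch t.
def spot_group_centers(n, t, origin=(0.0, 0.0)):
    """Return the (x, y) centers of the N x N irradiation spots."""
    ox, oy = origin
    return [(ox + i * t, oy + j * t) for j in range(n) for i in range(n)]

# A 2 x 2 group, matching the 2 x 2 matrix of irradiation spots in FIG. 36.
centers = spot_group_centers(n=2, t=1.5)
```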
  • two irradiation spots adjacent in the X direction and the Y direction partially overlap.
  • by overlapping adjacent irradiation spots, it is possible to suppress the formation of a gap between two adjacent irradiation spots, and it is possible to suppress the formation of a region on the measurement area 2050 that cannot be irradiated with light.
  • FIG. 38 is an explanatory diagram of optical conditions for partially overlapping two irradiation spots adjacent in a predetermined direction (X direction or Y direction), i.e., two images of the light emitting point of a certain light emitting element 2121 formed by two metalens 2016 adjacent in the predetermined direction. In the figure, two irradiation spots belonging to the same irradiation spot group are shown partially overlapping.
  • S1 in the figure indicates the light-emitting point of a certain light-emitting element 2121 (for the purpose of explanation, the light-emitting point is drawn in the direction perpendicular to the plane of the paper, but the light-emitting point is actually perpendicular to the Z direction).
  • L 1 and L 2 in the figure indicate two metalens 2016 adjacent to each other in a predetermined direction (X direction or Y direction) (for explanation, the metalens is drawn in a convex lens shape). In the following description, these two metalens may be referred to as a first metalens L1 and a second metalens L2 .
  • I in the figure indicates an image (irradiation spot) formed on the measurement area 2050 via the metalens 2016 (for the sake of explanation, the image is drawn in the direction perpendicular to the plane of the paper, but the image is actually perpendicular to the Z direction).
  • the image (irradiation spot) of the light emitting point S 1 formed by the first metalens L 1 is referred to as image I 11 .
  • the image (irradiation spot) of the light emitting point S 1 formed by the second metalens L 2 is referred to as image I 12 .
  • the focal length of the metalens 2016 is f
  • the diameter of the light emitting point of the light emitting element 2121 is D
  • the interval (distance between optical axes) between the metalens 2016 arranged in a predetermined direction (X direction or Y direction) is t.
  • θ1 in the figure is the angle formed by the line connecting the center of the first metalens L 1 and the center of the image I 11 (dotted chain line in the figure) and the line connecting the center of the first metalens L 1 and the end of the image I 11 (the end on the adjacent image I 12 side).
  • the line connecting the center of the first metalens L1 and the center of the image I11 (dotted chain line in the figure) is on an extension of the line connecting the center of the light emitting point S1 and the center of the first metalens L1 .
  • the angle θ1 corresponds to half the angle of view of the irradiation spot.
  • the angle θ1 can be expressed as follows.
  • θ2 in the figure is the angle formed by the line connecting the center of the first metalens L 1 and the center of the image I 11 (dotted chain line in the figure) and the line connecting the center of the second metalens L 2 and the lower end of the image I 12 (the end on the adjacent image I 11 side).
  • the line connecting the center of the second metalens L2 and the lower end of the image I12 is on the extension of the line connecting the upper end of the light emitting point S1 and the center of the second metalens L2 .
  • the angle θ2 can be expressed as follows.
  • two adjacent irradiation spots (two irradiation spots belonging to the same irradiation spot group) partially overlap.
  • thereby, it is possible to suppress the formation of a gap between two adjacent irradiation spots, and it is possible to suppress the formation of a region on the measurement area 2050 that cannot be irradiated with light.
  • by using the metalens 2016, it is possible to set the diameter of the lens small and to set the shift amount of the optical axis of the lens small, so that even if the diameter D of the light emitting point of the light emitting element 2121 is small, it is easy to realize a light projecting optical system 2014 that satisfies the condition of the above expression.
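The overlap condition discussed here (later referred to as Equation 3, equivalently t ≤ D) can be checked numerically. Since the equation images are not reproduced in this text, the expressions below for θ1 and θ2 are reconstructions from the figure description and are assumptions; under them, θ1 ≥ θ2 reduces to t ≤ D:

```python
import math

# Overlap condition for two adjacent irradiation spots (cf. FIG. 38).
# The formulas for theta1 and theta2 are reconstructed assumptions: the
# patent's equation images are not reproduced in this text.
def spots_overlap(D, t, f):
    """True if two spots from adjacent metalenses partially overlap.

    D: diameter of the light emitting point of the light emitting element
    t: distance between optical axes of adjacent metalenses 2016
    f: focal length of the metalens 2016
    """
    theta1 = math.atan((D / 2) / f)      # half angle of view of one spot
    theta2 = math.atan((t - D / 2) / f)  # angle to the near end of the
                                         # neighbouring spot
    return theta1 >= theta2              # equivalent to t <= D

# Under these assumed expressions, the result depends only on t vs D:
assert spots_overlap(D=10e-6, t=10e-6, f=1e-3)      # t == D: spots touch
assert not spots_overlap(D=10e-6, t=12e-6, f=1e-3)  # t > D: a gap remains
```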
  • FIG. 37 is an explanatory diagram of images of the light emitting points of a plurality of light emitting elements 2121 (a plurality of irradiation spot groups) formed by a plurality of metalens 2016. The large number of circular images (irradiation spots) in the figure are similar to the image shown in FIG. 33. Further, the area surrounded by the thick line in FIG. 37 is the image of a certain light emitting point (one light emitting point) formed by a plurality of metalens 2016, and indicates one irradiation spot group. The area surrounded by the thick line in the figure is similar to the irradiation spot group shown in FIG. 36.
  • since two metalens 2016 are arranged side by side in the X direction and two in the Y direction, two irradiation spots are arranged in the X direction and two in the Y direction in the area surrounded by the thick line in the figure.
  • when N metalens 2016 are arranged side by side in each of the X direction and the Y direction, N irradiation spots are arranged side by side in each of the X direction and the Y direction in the area surrounded by the thick line in the figure.
  • since the plurality of light emitting elements 2121 are two-dimensionally arranged along the X direction and the Y direction, the plurality of irradiation spot groups (areas surrounded by thick lines in FIG. 37) and the hatched images are also two-dimensionally arranged along the X direction and the Y direction.
  • the plurality of hatched images (a plurality of irradiation spots arranged at intervals) in FIG. 37 are the images (irradiation spots) of the light emitting points of the plurality of light emitting elements 2121 formed by a certain metalens 2016 (one metalens 2016).
  • the hatched image is similar to the image shown in FIG. 32.
  • Images (irradiation spots) indicated by white circles are arranged between the hatched images.
  • when N metalens 2016 are arranged side by side in the X direction and the Y direction, N−1 images (images shown by white circles) are placed between two adjacent images shown by hatching.
  • two irradiation spot groups (areas surrounded by thick lines) that are adjacent in the X direction and the Y direction partially overlap.
  • by overlapping adjacent irradiation spot groups, it is possible to suppress the formation of a gap between two adjacent irradiation spot groups, and it is possible to suppress the formation of a region on the measurement area 2050 that cannot be irradiated with light.
  • FIG. 39 is an explanatory diagram of optical conditions for partially overlapping two irradiation spot groups adjacent in a predetermined direction (X direction or Y direction).
  • S2 in the figure indicates a light emitting point adjacent to the light emitting point S1 in a predetermined direction (X direction or Y direction).
  • the distance between the light emitting elements 2121 adjacent to each other in a predetermined direction is P.
  • the metalens indicated by LN will be referred to as the N-th metalens.
  • I 1N in the figure indicates an image of the light emitting point S 1 by the N-th metalens L N.
  • I 21 in the figure indicates an image of the light emitting point S 2 by the first metalens L 1 .
  • Images I 11 to I 1N belong to the same irradiation spot group, and image I 21 belongs to an adjacent irradiation spot group to the irradiation spot group of images I 11 to I 1N .
  • θ3 in the figure is the angle formed by the line connecting the center of the first metalens L 1 and the center of the image I 11 (dotted chain line in the figure) and the line connecting the center of the N-th metalens L N (second metalens L 2 ) and the upper end of the image I 1N (I 12 ) (the end on the adjacent image I 21 side).
  • the line connecting the center of the N-th metalens L N and the upper end of the image I 1N (I 12 ) is on an extension of the line connecting the lower end of the light-emitting point S 1 and the center of the N-th metalens L N (second metalens L 2 ).
  • the angle θ3 corresponds to half the angle of view of the irradiation spot group.
  • θ4 in the figure is the angle formed by the line connecting the center of the first metalens L 1 and the center of the image I 11 and the line connecting the center of the first metalens L 1 and the lower end of the image I 21 (the end on the adjacent image I 1N side).
  • the line connecting the center of the first metalens L1 and the lower end of the image I21 is an extension of the line connecting the upper end of the light emitting point S2 and the center of the first metalens L1 .
  • the angle θ4 can be expressed as the following equation.
  • the measuring device 2001 of this embodiment includes a light emitting unit 2012 and a light projecting optical system 2014 that irradiates the measurement area 2050 with light from the light emitting unit 2012.
  • the light emitting section 2012 has a plurality of light emitting elements 2121 arranged at intervals, as shown in FIGS. 31 to 33.
  • the light projection optical system 2014 includes a plurality of metalens 2016 whose optical axes are shifted.
  • the light projection optical system 2014 of this embodiment irradiates light from a light emitting element 2121 through another metalens 2016 between the lights from a plurality of light emitting elements 2121 irradiated through a certain metalens 2016.
  • the light projection optical system 2014 of this embodiment forms a plurality of irradiation spots at intervals (the hatched irradiation spots in FIG. 37) with the light emitted from each of the plurality of light emitting elements 2121 via a certain metalens 2016, and irradiates light from a light emitting element 2121 via another metalens 2016 between those irradiation spots (the irradiation spots indicated by white circles in FIG. 37).
  • the plurality of metalens 2016 are provided on a common plane. Thereby, formation of a shadow in the measurement area 2050 can be suppressed.
  • images of the light emitting point of a certain light emitting element 2121 (one light emitting element 2121) formed by two metalens 2016 adjacent in a predetermined direction partially overlap. That is, in this embodiment, the irradiation spots shown in FIG. 36 partially overlap. Thereby, it is possible to further suppress the occurrence of areas that cannot be irradiated with light. Note that even if the images (irradiation spots) of the light emitting point of a certain light emitting element 2121 (one light emitting element 2121) formed by two metalens 2016 adjacent to each other in a predetermined direction do not overlap, light from the light emitting element 2121 is irradiated via another metalens 2016 between the light from the plurality of light emitting elements 2121 irradiated via a certain metalens 2016 (the hatched irradiation spots in FIG. 37), so the occurrence of areas that cannot be irradiated with light can be suppressed.
  • the light emitting unit 2012 and the light projection optical system 2014 satisfy the condition shown in Equation 3 above (or t ≤ D) (see FIG. 38).
  • thereby, the images (irradiation spots) of the light emitting point of a certain light emitting element 2121 (one light emitting element 2121) formed by two metalens 2016 adjacent to each other in a predetermined direction can be partially overlapped, and it is possible to suppress the occurrence of areas that cannot be irradiated with light.
  • images irradiated from the light emitting points of two light emitting elements 2121 adjacent in a predetermined direction and formed by a plurality of metalens 2016 partially overlap. That is, in this embodiment, the irradiation spot groups shown in FIG. 37 partially overlap. Thereby, it is possible to further suppress the occurrence of areas that cannot be irradiated with light. Note that even if the images (irradiation spot groups) irradiated from the light emitting points of two light emitting elements 2121 adjacent in a predetermined direction and formed by a plurality of metalens 2016 do not overlap, light from another light emitting element 2121 is irradiated between the light from the light emitting elements 2121 irradiated via a certain metalens 2016 (see the irradiation spots indicated by white circles in FIG. 37), so the occurrence of areas that cannot be irradiated with light can be suppressed.
  • the light emitting unit 2012 and the light projecting optical system 2014 satisfy the condition shown in Equation 6 above (see FIG. 39). This makes it possible to partially overlap the images (irradiation spot groups) irradiated from the light emitting points of two light emitting elements 2121 adjacent to each other in a predetermined direction and formed by the plurality of metalens 2016, and it is possible to suppress the occurrence of areas that cannot be irradiated with light.
  • the light emitting unit 2012 and the light projecting optical system 2014 satisfy both the condition shown in Equation 3 above (or t ≤ D) and the condition shown in Equation 6 (see FIGS. 38 and 39).
  • thereby, the irradiation spots in the same irradiation spot group overlap, and the irradiation spots also overlap between adjacent irradiation spot groups, so it is possible to suppress the occurrence of areas that cannot be irradiated with light.
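The two overlap conditions summarized above can be combined into a one-dimensional coverage check. The sketch below uses a simplified model (unit magnification, spot centers at the emitter position plus the lens offset) that is an assumption, not taken from the patent's unreproduced equations; under this model, no unlit gaps remain when t ≤ D and (N − 1)·t + D ≥ P, the latter being this model's analogue of the group-overlap condition:

```python
# 1-D sketch of the combined overlap conditions (cf. FIGS. 38 and 39):
# spots within one group must overlap, and adjacent spot groups must
# overlap. Unit magnification and the spot-center model are simplifying
# assumptions, not taken from the patent's (unreproduced) equations.
def coverage_gaps(num_emitters, P, n, t, D):
    """Return the sizes of unlit gaps along one axis of the area.

    num_emitters: emitters along this axis, at pitch P
    n: metalenses along this axis, optical axes at pitch t
    D: spot width (diameter of the light emitting point, magnification 1)
    """
    centers = sorted(i * P + j * t
                     for i in range(num_emitters) for j in range(n))
    gaps = []
    for a, b in zip(centers, centers[1:]):
        gap = (b - a) - D  # distance between the edges of adjacent spots
        if gap > 0:
            gaps.append(gap)
    return gaps

# Both conditions hold (t <= D and (n - 1) * t + D >= P): no gaps remain.
assert coverage_gaps(num_emitters=3, P=4.0, n=3, t=1.5, D=1.5) == []
# Violating the group condition ((n - 1) * t + D < P) leaves unlit gaps.
assert coverage_gaps(num_emitters=3, P=6.0, n=3, t=1.5, D=1.5) != []
```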
  • This application is based on a Japanese patent application filed on June 30, 2022 (Japanese Patent Application No. 2022-106333), a Japanese patent application filed on June 30, 2022 (Japanese Patent Application No. 2022-106334), and a Japanese patent application filed on August 9, 2022 (Japanese Patent Application No. 2022-127331), the contents of which are incorporated herein by reference.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radiation Pyrometers (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

Provided is a measurement device comprising: a light-emitting unit (12) that emits light having a wavelength corresponding to the temperature, within a prescribed wavelength band range; a filter that allows light of a prescribed wavelength band to pass therethrough, and that allows reflection light of the light which has been emitted from the light-emitting unit (12) to pass therethrough; a light-receiving sensor (22) that has a plurality of light-receiving elements (222); and a splitting element (25) that is disposed between the filter and the light-receiving sensor (22) and that splits light which has been passed through the filter to two or more light-receiving elements.

Description

Measuring device, irradiation device, and bandpass filter
 The present disclosure relates to a measurement device, an irradiation device, and a bandpass filter.
 Patent Document 1 describes a distance measuring device that measures the distance to a reflective object based on the flight time of light from emitting pulsed light to receiving the reflected light.
Japanese Patent Application Publication No. 2021-152536
 In order to reduce the influence of noise caused by background light such as sunlight, a bandpass filter may be provided in the light receiving optical system. However, if the wavelength of the light emitted from the light emitting section changes depending on the temperature, it is necessary to widen the passband of the bandpass filter. As a result, noise mixed into the light reception data of the light receiving element may increase, and measurement accuracy may deteriorate.
 Furthermore, when a light emitting section (light source) is constructed by two-dimensionally arranging VCSELs serving as light emitting elements, the plurality of light emitting elements are arranged at intervals due to the gaps between the emitters. As a result, gaps appear in the light irradiated onto the measurement area, and regions that cannot be measured are created.
 The first objective of the present disclosure is to suppress the influence of noise.
 The second objective of the present disclosure is to suppress the occurrence of areas that cannot be irradiated with light.
 One aspect of the present disclosure for achieving the first objective is a measurement device comprising: a light emitting section that irradiates light with a wavelength corresponding to temperature within a predetermined wavelength band; a filter that passes light of the predetermined wavelength band, through which the reflected light of the light irradiated from the light emitting section passes; a light receiving sensor having a plurality of light receiving elements; and a dispersion element that is disposed between the filter and the light receiving sensor and disperses the light that has passed through the filter onto two or more of the light receiving elements.
 Another aspect of the present disclosure for achieving the first objective is a measurement device comprising: a light emitting section that irradiates light with a wavelength corresponding to temperature; a bandpass filter through which the reflected light of the light irradiated from the light emitting section passes; and a light receiving sensor that receives the light that has passed through the bandpass filter, wherein the wavelength of light that can pass through the bandpass filter changes depending on the temperature.
 One aspect of the present disclosure for achieving the second objective is a measurement device comprising: a light emitting section having a plurality of light emitting elements arranged at intervals; and an optical system that has a plurality of metalenses with shifted optical axes and that irradiates light from a light emitting element via another metalens between the light from the plurality of light emitting elements irradiated via a certain metalens.
 Other problems disclosed in the present application, and the methods for solving them, will be made clear by the detailed description and the drawings.
 According to the present disclosure, the influence of noise can be suppressed.
 Furthermore, according to the present disclosure, it is possible to suppress the occurrence of areas that cannot be irradiated with light.
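The temperature-tracking bandpass filter described above can be modeled simply: if the emission wavelength and the filter's passband center both drift with temperature at similar rates, a narrow passband remains usable over the whole temperature range. All numbers below (center wavelength, drift rate, passband width, temperature range) are illustrative assumptions, not values from the disclosure:

```python
# Sketch of the temperature-tracking bandpass filter idea: the emission
# wavelength and the filter passband both shift with temperature, so the
# reflected light stays inside a narrow passband. All numbers here are
# illustrative assumptions, not values from the disclosure.
def emission_wavelength_nm(temp_c, lam0=940.0, drift=0.07, t0=25.0):
    """Assumed linear emission-wavelength drift (nm per deg C)."""
    return lam0 + drift * (temp_c - t0)

def filter_passband_nm(temp_c, center0=940.0, drift=0.07, t0=25.0,
                       half_width=2.0):
    """Assumed passband whose center drifts at the same rate."""
    center = center0 + drift * (temp_c - t0)
    return (center - half_width, center + half_width)

def passes(temp_c):
    """True if the reflected light stays within the filter passband."""
    lo, hi = filter_passband_nm(temp_c)
    return lo <= emission_wavelength_nm(temp_c) <= hi

# A matched drift keeps the signal in-band across a wide temperature range.
assert all(passes(t) for t in range(-40, 106, 5))
```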
FIG. 1 is an explanatory diagram of the overall configuration of the measuring device 1.
FIG. 2 is a schematic explanatory diagram of the measuring device 1.
FIG. 3 is a timing chart for explaining an example of the measurement method.
FIG. 4 is an explanatory diagram of the light receiving section 20 of this embodiment.
FIG. 5 is an explanatory diagram of the light receiving section 20 of a reference example.
FIG. 6 is a correspondence table used in the first measurement method.
FIG. 7 is an explanatory diagram of the first measurement method.
FIG. 8 is an explanatory diagram of the first measurement method.
FIG. 9 is an explanatory diagram of the first measurement method.
FIG. 10 is a table used in the second measurement method.
FIG. 11 is an explanatory diagram of the second measurement method.
FIG. 12 is an explanatory diagram of the correspondence between the light emitting element 121 and the light receiving element 222.
FIG. 13 is an explanatory diagram of another correspondence between the light emitting element 121 and the light receiving element 222.
FIG. 14 is an explanatory diagram of the overall configuration of the measuring device 1001.
FIG. 15 is a schematic explanatory diagram of the measuring device 1001.
FIG. 16 is a timing chart for explaining an example of the measurement method.
FIG. 17 is an explanatory diagram of the light receiving section 1020.
FIG. 18 is an explanatory diagram of the light receiving section 1020.
FIG. 19 is a schematic explanatory diagram showing the characteristics of the bandpass filter BPF of this embodiment.
FIG. 20 is an enlarged explanatory diagram of the structure of the bandpass filter BPF.
FIG. 21 is an enlarged explanatory diagram of the structure of the bandpass filter BPF.
FIG. 22 is a graph showing the characteristics of the bandpass filter BPF.
FIG. 23 is a graph showing another characteristic of the bandpass filter BPF.
FIG. 24 is an explanatory diagram of a modification of the arrangement of the bandpass filter BPF.
FIG. 25 is an explanatory diagram of the arrangement of another bandpass filter BPF'.
FIG. 26 is an explanatory diagram of the arrangement of another bandpass filter BPF'.
FIG. 27 is an explanatory diagram of the arrangement of another bandpass filter BPF'.
FIG. 28 is an explanatory diagram of the overall configuration of the measuring device 2001.
FIG. 29 is a schematic explanatory diagram of the measuring device 2001.
FIG. 30 is a timing chart for explaining an example of the measurement method.
FIG. 31 is an explanatory diagram of the arrangement of the light emitting elements 2121 of the light emitting section 2012.
FIG. 32 is a reference explanatory diagram of light irradiated onto the measurement area 2050.
FIG. 33 is an explanatory diagram of light irradiated onto the measurement area 2050 in this embodiment.
FIG. 34 is an explanatory diagram of the lens unit 2015.
FIG. 35 is an explanatory diagram of an image (irradiation spot) of a light emitting point of the light emitting element 2121 formed by one metalens 2016.
FIG. 36 is an explanatory diagram of an image of a light emitting point (irradiation spot group) of the light emitting element 2121 formed by a plurality of metalens 2016.
FIG. 37 is an explanatory diagram of images of the light emitting points of a plurality of light emitting elements 2121 (a plurality of irradiation spot groups) formed by a plurality of metalens 2016.
FIG. 38 is an explanatory diagram of optical conditions for partially overlapping two irradiation spots adjacent in a predetermined direction.
FIG. 39 is an explanatory diagram of optical conditions for partially overlapping two irradiation spot groups adjacent in a predetermined direction.
Hereinafter, embodiments for carrying out the present disclosure will be described with reference to the drawings. In the following description, the same or similar configurations may be given the same reference numerals and redundant descriptions may be omitted.
[First embodiment]
<Overall configuration>
FIG. 1 is an explanatory diagram of the overall configuration of the measuring device 1. FIG. 2 is a schematic explanatory diagram of the measuring device 1.
 以下の説明では、図2に示すように各方向を定めている。Z方向は、受光用光学系24の光軸に沿った方向である。なお、測定装置1の測定対象となる対象物90は、測定装置1に対してZ方向に離れていることになる。また、X方向及びY方向は、Z方向に対して垂直な方向である。なお、発光部12を構成する複数の発光素子121は、X方向及びY方向に沿って2次元配置されている。受光センサ22の複数の画素221も、X方向及びY方向に沿って2次元配置されている。 In the following explanation, each direction is defined as shown in FIG. 2. The Z direction is a direction along the optical axis of the light receiving optical system 24. Note that the object 90 to be measured by the measuring device 1 is located away from the measuring device 1 in the Z direction. Further, the X direction and the Y direction are directions perpendicular to the Z direction. Note that the plurality of light emitting elements 121 constituting the light emitting section 12 are two-dimensionally arranged along the X direction and the Y direction. The plurality of pixels 221 of the light receiving sensor 22 are also two-dimensionally arranged along the X direction and the Y direction.
 測定装置1は、対象物90までの距離を測定する装置である。測定装置1は、いわゆるLiDAR(Light Detection and Ranging、Laser Imaging Detection and Ranging)としての機能を有する装置である。測定装置1は、測定光を出射し、対象物90の表面で反射した反射光を検出し、測定光を出射してから反射光を受光するまでの時間を計測することによって、対象物90までの距離をTOF方式(Time of flight)で測定する。測定装置1は、照射部10と、受光部20と、制御部30とを有する。また、本実施形態の測定装置1は、温度センサ41を有する。 The measuring device 1 is a device that measures the distance to the target object 90. The measuring device 1 is a device having a function of so-called LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging). The measuring device 1 emits measurement light, detects the reflected light reflected from the surface of the object 90, and measures the time from emitting the measurement light until receiving the reflected light. Measure the distance using the TOF method (Time of flight). The measuring device 1 includes an irradiation section 10, a light receiving section 20, and a control section 30. Furthermore, the measuring device 1 of this embodiment includes a temperature sensor 41.
The irradiation section 10 is an irradiation device that irradiates measurement light toward the object 90. It irradiates the measurement area 50 (see FIG. 2) with measurement light at a predetermined angle of view. The irradiation section 10 has a light emitting section 12 and a light projection optical system 14. The light emitting section 12 is a member (light source) that emits light; for example, it is composed of a vertical-cavity surface-emitting laser (VCSEL) array chip. The light emitting section 12 includes a plurality of light emitting elements 121 (for example, VCSELs), which are two-dimensionally arranged along the X and Y directions. The light projection optical system 14 is an optical system that irradiates the measurement area 50 with the light emitted from the light emitting section 12. The light emitting section 12 can cause each light emitting element 121 to emit light individually. Each light emitting element 121 is associated, via the light projection optical system 14, with a predetermined region of the measurement area 50: light emitted from a given light emitting element 121 is irradiated onto the corresponding region of the measurement area 50 via the light projection optical system 14. However, the irradiation section 10 may instead be configured to emit light from the entire light emitting surface of the light emitting section 12 and irradiate the entire measurement area 50 at once. Note that the wavelength of the light emitted by the light emitting elements 121 changes depending on the temperature; this point is discussed later.
The light receiving section 20 receives the reflected light from the object 90, that is, the reflected light from the measurement area 50 (see FIG. 2). The light receiving section 20 includes a light receiving sensor 22 and a light receiving optical system 24. The light receiving sensor 22 has a plurality of pixels 221 arranged two-dimensionally; for example, a VGA light receiving sensor 22 has 480×640 pixels 221. Each pixel 221 has a light receiving element 222, which outputs a signal (light reception data) according to the amount of light received. The light receiving optical system 24 is an optical system that causes the light receiving section 20 to receive the reflected light from the measurement area 50; it forms an image of the measurement area 50 on the light receiving surface of the light receiving sensor 22. Each pixel 221 of the light receiving sensor 22 is associated, via the light receiving optical system 24, with a predetermined region of the measurement area 50: a given pixel 221 receives light (reflected light and background light) from the corresponding region of the measurement area 50 via the light receiving optical system 24. Further, each pixel 221 of the light receiving sensor 22 is associated with a predetermined light emitting element 121 of the light emitting section 12: light emitted from a given light emitting element 121 is received by the corresponding pixel 221 via the light projection optical system 14 and the light receiving optical system 24. The light receiving sensor 22 and the light receiving optical system 24 are described later.
The control section 30 governs control of the measuring device 1. It controls the irradiation section 10 and thus the light emitted from it, and measures the distance to the object 90 by the TOF (time of flight) method based on the output of the light receiving section 20. The control section 30 includes an arithmetic device and a storage device (not shown). The arithmetic device is a processing device such as a CPU or GPU; part of it may be constituted by an analog arithmetic circuit. The storage device, composed of a main storage device and an auxiliary storage device, stores programs and data. The arithmetic device executes the programs stored in the storage device, thereby carrying out the various processes for measuring the distance to the object 90. FIG. 1 shows the functional blocks of these processes.
The control section 30 includes a setting section 32, a timing control section 34, and a distance measuring section 36. The setting section 32 performs various settings. The timing control section 34 controls the processing timing of each section; for example, it controls the timing at which light is emitted from the light emitting section 12. The distance measuring section 36 measures the distance to the object 90 and includes a signal processing section 362, a time detection section 364, and a distance calculation section 366. The signal processing section 362 processes the output signals (light reception data) of the light receiving sensor 22. The time detection section 364 detects the flight time of the light (the time from when the light is emitted until the reflected light arrives). The distance calculation section 366 calculates the distance to the object 90.
FIG. 3 is a timing chart for explaining an example of the measurement method.
The control section 30 (timing control section 34) causes the light emitting section 12 of the irradiation section 10 to emit pulsed light at a predetermined period. The upper part of FIG. 3 shows the timing at which the light emitting section 12 emits pulsed light (emission timing). The light emitted from the light emitting section 12 is irradiated onto the measurement area 50 via the light projection optical system 14. The light reflected at the surface of the object 90 within the measurement area 50 is received by the light receiving sensor 22 via the light receiving optical system 24, so the pixels 221 of the light receiving sensor 22 receive pulsed reflected light. The middle part of FIG. 3 shows the timing at which the pulsed reflected light arrives (arrival timing). The lower part of FIG. 3 shows the pixel data S of a given pixel 221 of the light receiving sensor 22 (the light reception data of that pixel's light receiving element 222); the pixel data S indicates the amount of light received by the pixel 221.
The control section 30 (timing control section 34) may emit light from all the light emitting elements 121 of the light emitting section 12 to irradiate the entire measurement area 50 at once, or may emit light from only some of the light emitting elements 121 (for example, one light emitting element 121) to irradiate only a predetermined region of the measurement area 50. When emitting light from some of the light emitting elements 121 (for example, one light emitting element 121), the control section 30 (signal processing section 362) acquires the pixel data S of the pixel 221 corresponding to the light emitting element 121 that emitted the light. The pixel data S acquired by the control section 30 is described later.
The distance measuring section 36 (signal processing section 362) of the control section 30 detects the arrival timing of the reflected light based on the pixel data S of each pixel 221. For example, the signal processing section 362 detects the arrival timing of the reflected light based on the timing of the peak of the pixel data of each pixel 221.
The distance measuring section 36 (time detection section 364) detects the time Tf from the emission of the light until the arrival of the reflected light, based on the emission timing and the arrival timing. The time Tf corresponds to the time the light takes to travel back and forth between the measuring device 1 and the object 90. The distance measuring section 36 (distance calculation section 366) then calculates the distance L to the object 90 based on the time Tf: with Tf the round-trip time and C the speed of light, L = C×Tf/2. The control section 30 generates a distance image by calculating, for each pixel 221 of the light receiving section 20, the distance to the object 90 based on the time Tf detected for that pixel.
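The TOF relationship L = C×Tf/2 can be sketched as follows; the function name and the sample flight time are illustrative only, not part of the embodiment.

```python
# Sketch of the TOF distance calculation: L = C * Tf / 2,
# where Tf is the round-trip flight time of the pulse.
C = 299_792_458.0  # speed of light [m/s]

def distance_from_tof(tf_seconds: float) -> float:
    """Distance L to the object, given the round-trip flight time Tf."""
    return C * tf_seconds / 2.0

# A round trip of about 66.7 ns corresponds to roughly 10 m.
print(distance_from_tof(66.7e-9))
```

A distance image then amounts to applying this calculation per pixel, using the Tf detected for each pixel 221.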
<About the light receiving section 20>
FIG. 5 is an explanatory diagram of the light receiving section 20 of a reference example.
The light receiving optical system 24 includes a condensing lens 241 and a bandpass filter BPF. The condensing lens 241 is an optical element that forms an image of the measurement area 50 on the light receiving surface of the light receiving sensor 22. The bandpass filter BPF is a filter that passes light of a specific wavelength band and cuts light outside it. In the following explanation, the wavelength band of light that passes through the bandpass filter BPF is called the "pass band," and the wavelength band of light that is cut by the bandpass filter BPF is called the "cutoff band." Since the bandpass filter BPF must transmit the reflected light, it must at least be able to transmit the wavelength of the light emitted from the light emitting section 12; that is, the pass band of the bandpass filter BPF must include at least the wavelength of the light emitted from the light emitting section 12. Because the light receiving optical system 24 includes the bandpass filter BPF, the cutoff-band components of background light such as sunlight can be cut, suppressing the influence of noise caused by background light.
On the other hand, the wavelength λ of the light emitted from the light emitting section 12 changes depending on the temperature; for example, when the temperature of the light emitting section 12 rises, the wavelength λ becomes longer. Here, as an example, it is assumed that the wavelength λ changes in the range of 875 to 935 nm depending on the temperature. It is further assumed that the temperature of the light emitting section 12 varies in the range T1 to T3, and that the wavelength λ1 at temperature T1 is 875 nm, the wavelength λ2 at temperature T2 is 905 nm, and the wavelength λ3 at temperature T3 is 935 nm. The light emitting section 12 thus emits light of a wavelength λ that depends on the temperature, within the wavelength band from λ1 (=875 nm) to λ3 (=935 nm).
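As an illustration of the temperature dependence just described, the following sketch assumes a simple linear drift from 875 nm at T1 to 935 nm at T3; the linearity and the concrete temperatures (T1 = 25 °C, T2 = 55 °C, T3 = 85 °C) are assumptions for illustration, not values given in the embodiment.

```python
# Illustrative linear model of the temperature-dependent emission wavelength.
T1, T3 = 25.0, 85.0              # assumed temperature range [deg C]
LAMBDA1, LAMBDA3 = 875.0, 935.0  # wavelength range [nm] from the text

def emission_wavelength(temp: float) -> float:
    """Wavelength lambda emitted at the given temperature (linear model)."""
    frac = (temp - T1) / (T3 - T1)
    return LAMBDA1 + frac * (LAMBDA3 - LAMBDA1)

print(emission_wavelength(25.0))  # 875.0 (lambda1 at T1)
print(emission_wavelength(55.0))  # 905.0 (lambda2 at T2, midway)
print(emission_wavelength(85.0))  # 935.0 (lambda3 at T3)
```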
Since the bandpass filter BPF must transmit the wavelength of the light emitted from the light emitting section 12, and since that wavelength varies with temperature over a predetermined wavelength band (λ1 (=875 nm) to λ3 (=935 nm)), the bandpass filter BPF must transmit light over that entire band (λ1 to λ3). That is, the pass band of the bandpass filter BPF must be widened so as to include the entire wavelength band (λ1 to λ3) of the light emitted from the light emitting section 12. However, as a result of widening the pass band, the amount of background light passing through the bandpass filter BPF (the pass-band components of the background light) increases. Therefore, in the configuration of the reference example, the noise contained in the pixel data S (the data indicating the amount of light received by the pixel 221) increases.
FIG. 4 is an explanatory diagram of the light receiving section 20 of this embodiment.
The light receiving optical system 24 of this embodiment includes a bandpass filter BPF and a dispersion element 25. Although the light receiving optical system 24 in the figure includes a condensing lens 241, the light receiving optical system 24 need not include one, as described later. In this embodiment as well, the pass band of the bandpass filter BPF is set to include the entire wavelength band (λ1 (=875 nm) to λ3 (=935 nm)) of the light emitted from the light emitting section 12.
The dispersion element 25 is an optical element that disperses light. It is arranged between the bandpass filter BPF and the light receiving sensor 22: light that has passed through the bandpass filter BPF is incident on the dispersion element 25, and the light dispersed by the dispersion element 25 is incident on the light receiving elements 222 of the light receiving sensor 22. The dispersion element 25 of this embodiment is set so as to disperse the light that has passed through the bandpass filter BPF (light in the pass band of the bandpass filter BPF) over a plurality of light receiving elements 222; in other words, the dispersion element 25 spectrally separates the light that has passed through the bandpass filter BPF onto the plurality of light receiving elements 222. Here, three light receiving elements 222 are assumed to be associated with a given light emitting element 121; these three are called the first light receiving element 222A, the second light receiving element 222B, and the third light receiving element 222C, and their light reception data are denoted S1, S2, and S3, respectively. The dispersion element 25 disperses the light arriving from the predetermined region of the measurement area 50 corresponding to this light emitting element 121 (the region of the measurement area 50 irradiated with the light emitted from this light emitting element 121), that is, the pass-band light of the bandpass filter BPF, over the three light receiving elements 222 (first light receiving element 222A, second light receiving element 222B, and third light receiving element 222C) associated with this light emitting element 121.
Note that the range over which the dispersion element 25 disperses the pass-band light of the bandpass filter BPF is not limited to a range spanning three light receiving elements 222; any range spanning two or more light receiving elements 222 suffices.
The dispersion element 25 can be composed of a prism, a diffraction grating, a metamaterial, or the like. A metamaterial is an optical element in which microstructures smaller than the wavelength of light are arranged on a substrate (for example, a glass substrate). The dispersion element 25 may be made of a metamaterial in which the microstructures are arranged three-dimensionally, or of a metamaterial (metasurface) in which they are arranged two-dimensionally.
A metamaterial makes it possible to construct an optical element (metalens) having a light condensing function. Therefore, when the dispersion element 25 of this embodiment is made of a metamaterial, the dispersion element 25 can additionally be given the function of condensing the light that has passed through the bandpass filter BPF onto the light receiving sensor 22 (condensing function), which makes it possible to reduce the size of the light receiving optical system 24. When the dispersion element 25 made of a metamaterial has a condensing function, the light receiving optical system 24 need not include the condensing lens 241 shown in the figure. Conversely, when the light receiving optical system 24 includes the condensing lens 241, the dispersion element 25 need not also have a condensing function; the design constraints on the dispersion element 25 are relaxed, making it easier to construct the dispersion element 25 from a metamaterial.
A metamaterial also makes it possible to construct an optical element having a polarizing filter function. Therefore, when the dispersion element 25 of this embodiment is made of a metamaterial, the dispersion element 25 can additionally be given the function of passing light that oscillates in a predetermined direction while absorbing light that oscillates in a direction crossing it (polarizing filter function). For example, when the light emitted by the light emitting section 12 oscillates in a predetermined direction, the metamaterial dispersion element 25 can suppress the influence of noise by passing the light oscillating in that direction while absorbing light oscillating in the crossing direction. Also, for example, by giving the dispersion element 25 a polarizing filter function whose polarization axis is vertical and whose absorption axis is horizontal, reflected light arriving directly from the object 90 can be passed while reflected light from the road surface (light oscillating horizontally) is absorbed. In this way, giving the dispersion element 25 a polarizing filter function suppresses the influence of noise caused by unwanted light.
The dispersion element 25 may also be composed of a prism or a diffraction grating. However, when the dispersion element 25 is a prism, the deflection angle of the light is small, so the distance between the dispersion element 25 and the light receiving sensor 22 must be set long in order to disperse the light over a plurality of light receiving elements 222. When the dispersion element 25 is a diffraction grating with few grooves, the distance between the dispersion element 25 and the light receiving sensor 22 must likewise be set long, as with a prism; and even when it is a diffraction grating with many grooves, the intensity of the light received by the light receiving elements 222 falls due to the light loss corresponding to the diffraction efficiency. In contrast, when the dispersion element 25 is made of a metamaterial, the distance between the dispersion element 25 and the light receiving sensor 22 can be set short, and the decrease in the intensity of the light received by the light receiving elements 222 can also be suppressed. For example, when the dispersion element 25 is a prism, the distance between the dispersion element 25 and the light receiving sensor 22 must be set to several tens of millimeters, whereas when the dispersion element 25 is a metamaterial, that distance can be set to about 10 μm.
For example, when the temperature is T1, the light emitting section 12 emits light of wavelength λ1 (=875 nm), and the reflected light of wavelength λ1 is directed by the dispersion element 25 to the first light receiving element 222A. When the temperature is T2, the light emitting section 12 emits light of wavelength λ2 (=905 nm), and the reflected light of wavelength λ2 is directed by the dispersion element 25 to the second light receiving element 222B. When the temperature is T3, the light emitting section 12 emits light of wavelength λ3 (=935 nm), and the reflected light of wavelength λ3 is directed by the dispersion element 25 to the third light receiving element 222C. In this way, the dispersion element 25 directs the reflected light (light of the wavelength emitted by the light emitting section 12) to the light receiving element 222 in the direction corresponding to its wavelength.
Of the background light, the components with wavelengths in the range λ1 to λ3 pass through the bandpass filter BPF. In this embodiment, the background light that has passed through the bandpass filter BPF enters the dispersion element 25 and is dispersed over the three light receiving elements 222. Therefore, with the configuration of this embodiment, the influence of the noise contained in the pixel data S (the data indicating the amount of light received by the pixel 221; the light reception data of the light receiving element 222 that received the reflected light) is reduced to about one third compared with the reference example, and the SN ratio is improved by a factor of about three.
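The roughly threefold SN improvement can be checked with idealized numbers, assuming the in-band background spreads evenly over the three light receiving elements while the reflected pulse lands on a single one of them (both simplifying assumptions for illustration, with arbitrary signal levels):

```python
# Idealized comparison: reference example (all in-band background on the
# pixel that also receives the signal) versus this embodiment (background
# split evenly over three light receiving elements).
signal = 100.0      # arbitrary reflected-light level on the selected element
background = 90.0   # arbitrary in-band background level

snr_reference = signal / background           # background concentrated on one pixel
snr_embodiment = signal / (background / 3.0)  # background split over 3 elements

print(round(snr_embodiment / snr_reference, 6))  # 3.0: about threefold improvement
```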
As shown in FIG. 1, the measuring device 1 includes a temperature sensor 41. The temperature sensor 41 measures the temperature of the measuring device 1 (particularly the temperature of the light emitting section 12) and outputs temperature data indicating the measurement result to the control section 30. The control section 30 measures the distance based on the light reception data of the light receiving element 222 corresponding to the temperature data of the temperature sensor 41. This point is explained below.
<First measurement method>
FIG. 6 is a correspondence table used in the first measurement method.
The signal processing section 362 of the control section 30 stores the correspondence table shown in FIG. 6 in advance. The correspondence table associates the temperature T with the light reception data to be used as the pixel data S.
When the temperature is in the range T1 to T12, the light emitting section 12 emits light with a wavelength of 875 to 895 nm, and the dispersion element 25 directs light in this wavelength band toward the first light receiving element 222A. When the temperature is in the range T12 to T23, the light emitting section 12 emits light with a wavelength of 895 to 915 nm, and the dispersion element 25 directs light in this wavelength band toward the second light receiving element 222B. When the temperature is in the range T23 to T3, the light emitting section 12 emits light with a wavelength of 915 to 935 nm, and the dispersion element 25 directs light in this wavelength band toward the third light receiving element 222C.
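The correspondence table of FIG. 6 amounts to a simple lookup from temperature to the light receiving element whose data becomes the pixel data S. In the sketch below, the numeric boundary temperatures standing in for T12 and T23 and the function name are placeholders, not values from the embodiment:

```python
# Sketch of the FIG. 6 correspondence table: temperature -> receiving element.
T12, T23 = 45.0, 65.0  # placeholder boundary temperatures [deg C]

def select_receiving_element(temp: float) -> str:
    """Return which light receiving element's data becomes pixel data S."""
    if temp <= T12:
        return "222A"  # 875-895 nm band -> first light receiving element
    elif temp <= T23:
        return "222B"  # 895-915 nm band -> second light receiving element
    else:
        return "222C"  # 915-935 nm band -> third light receiving element

print(select_receiving_element(40.0))  # 222A
print(select_receiving_element(55.0))  # 222B
print(select_receiving_element(80.0))  # 222C
```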
FIGS. 7 to 9 are explanatory diagrams of the first measurement method.
The signal processing section 362 refers to the correspondence table based on the temperature data acquired from the temperature sensor 41 and determines which light reception data to use as the pixel data S. For example, when the temperature data of the temperature sensor 41 is in the range T1 to T12 (here, T1 or more and T12 or less), the signal processing section 362, as shown in FIG. 7, acquires, based on the correspondence table, the light reception data S1 of the first light receiving element 222A from among the three light receiving elements 222 (first light receiving element 222A, second light receiving element 222B, and third light receiving element 222C) corresponding to the light emitting element 121 that emits the light. Similarly, when the temperature data of the temperature sensor 41 is in the range T12 to T23, the control section 30 acquires the light reception data S2 of the second light receiving element 222B based on the correspondence table, as shown in FIG. 8; and when the temperature data is in the range T23 to T3, it acquires the light reception data S3 of the third light receiving element 222C based on the correspondence table, as shown in FIG. 9. By acquiring the light reception data from the light receiving element 222 selected according to the temperature, the signal processing section 362 acquires the pixel data S of the pixel 221 corresponding to the light emitting element 121.
 制御部30の測距部36(信号処理部362)は、温度に応じて選択した受光素子222から取得した受光データ(画素221の画素データS;図3参照)に基づいて、反射光の到達タイミングを検出する。そして、測距部36(時間検出部364)は、光の出射タイミングと、光の到達タイミングとに基づいて、光を照射してから反射光が到達するまでの時間Tfを検出する。また、測距部36(距離算出部366)は、時間Tfに基づいて、対象物90までの距離Lを算出する。 The distance measuring unit 36 (signal processing unit 362) of the control unit 30 detects the arrival timing of the reflected light based on the light reception data (pixel data S of the pixel 221; see FIG. 3) acquired from the light receiving element 222 selected according to the temperature. Then, the distance measuring unit 36 (time detection unit 364) detects the time Tf from when the light is emitted until the reflected light arrives, based on the light emission timing and the light arrival timing. Further, the distance measuring unit 36 (distance calculation unit 366) calculates the distance L to the object 90 based on the time Tf.
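The time-to-distance conversion performed by the distance calculation unit 366 follows the standard round-trip TOF relation L = c·Tf/2; the sketch below assumes only that relation and is not a transcription of the embodiment's implementation:

```python
# Distance from round-trip time of flight: the light travels to the
# object and back, so the one-way distance is half the optical path.

C = 299_792_458.0  # speed of light in m/s

def distance_from_tof(tf_seconds):
    return C * tf_seconds / 2.0

# A round-trip time Tf of 200 ns corresponds to roughly 30 m.
```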
 ところで、第1測定方法が採用された測定装置1の場合、第1受光素子222A~第3受光素子222Cが所定の受光データを出力する状況(基準状況)の下において温度センサ41の温度データを変化させると、制御部30は、異なる距離を出力するという現象が現れる。以下、この点について説明する。
 まず、発光部12が所定の温度(基準温度)であり、測定装置1と対象物90との間に所定の距離(基準距離)をあけ、第1受光素子222A~第3受光素子222Cが所定の受光データS1~S3を出力する状況を設定し、この状況を基準状況とする。なお、基準温度や基準距離は、任意に設定可能である。例えば、基準温度がT2の場合、発光部12は波長λ2(=905nm)の光を出射し、分散素子25は、波長λ2の反射光を第2受光素子222Bに向けて分光するとともに、バンドパスフィルタBPFを通過する波長λ1~λ3の光を波長に応じて3個の受光素子222にわたって分散させる状況になり、3個の受光素子222は、この基準状況に応じた受光データS1~S3をそれぞれ出力することになる。なお、制御部30は、温度T2が温度T12~T23の範囲であることから、図8に示すように第2受光素子222Bの受光データS2を画素データSとして取得し、画素データSに基づいて算出した距離を出力することになる。
 次に、上記の基準状況下において、温度センサ41の温度データを変化させる。なお、ここでは、制御部30が取得する温度データを変化させるだけであり(言い換えると、制御部30にダミーの温度データを入力するだけであり)、発光部12が出射する光の波長は基準状況と同じであり、受光素子222が出力する受光データも基準状況と同じである。例えば、基準温度がT2の場合、温度センサ41の温度データをT1に変化させるときにも、発光部12は波長λ2(=905nm)の光を出射し、3個の受光素子222は、それぞれ基準状況と同じ受光データS1~S3をそれぞれ出力することになる。基準状況下において温度センサ41の温度データを変化させると、制御部30が取得する受光データが変化するため、制御部30は、異なる距離を出力することになる。例えば、基準状況下において温度センサ41の温度データをT2からT1に変化させると、発光部12の温度や、測定装置1と対象物90との距離が保たれており、第1受光素子222A~第3受光素子222Cが出力する受光データS1~S3が同じ状況下であるにも関わらず、制御部30は、出力する距離を、第2受光素子222Bの受光データS2に応じた距離から、第1受光素子222Aの受光データS1に応じた距離に変化させることになる。このように、第1測定方法が採用された測定装置1の場合、受光素子222が所定の受光データを出力する状況(基準状況)の下において温度センサ41の温度データが変化すると、制御部30は、異なる距離を出力することになる。つまり、受光素子222が所定の受光データを出力する状況(基準状況)の下において温度センサ41の温度データを変化させることによって、温度センサ41の温度データに対応する受光素子222の受光データに基づいて距離を測定することを検証することが可能である。
By the way, in the case of the measuring device 1 employing the first measurement method, a phenomenon appears in which, if the temperature data of the temperature sensor 41 is changed under a situation (reference situation) in which the first light receiving element 222A to the third light receiving element 222C output predetermined light reception data, the control unit 30 outputs a different distance. This point will be explained below.
First, a situation is set in which the light emitting unit 12 is at a predetermined temperature (reference temperature), a predetermined distance (reference distance) is provided between the measuring device 1 and the object 90, and the first light receiving element 222A to the third light receiving element 222C output predetermined light reception data S1 to S3; this situation is taken as the reference situation. Note that the reference temperature and the reference distance can be set arbitrarily. For example, when the reference temperature is T2, the light emitting unit 12 emits light with the wavelength λ2 (=905 nm), and the dispersion element 25 directs the reflected light with the wavelength λ2 toward the second light receiving element 222B while dispersing the light with the wavelengths λ1 to λ3 passing through the bandpass filter BPF across the three light receiving elements 222 according to wavelength; the three light receiving elements 222 then output light reception data S1 to S3 corresponding to this reference situation. Note that since the temperature T2 is in the range of temperatures T12 to T23, the control unit 30 acquires the light reception data S2 of the second light receiving element 222B as the pixel data S as shown in FIG. 8, and outputs the distance calculated based on the pixel data S.
Next, under the above reference situation, the temperature data of the temperature sensor 41 is changed. Note that here, only the temperature data acquired by the control unit 30 is changed (in other words, dummy temperature data is simply input to the control unit 30); the wavelength of the light emitted by the light emitting unit 12 is the same as in the reference situation, and the light reception data output by the light receiving elements 222 are also the same as in the reference situation. For example, when the reference temperature is T2, even when the temperature data of the temperature sensor 41 is changed to T1, the light emitting unit 12 emits light with the wavelength λ2 (=905 nm), and the three light receiving elements 222 each output the same light reception data S1 to S3 as in the reference situation. When the temperature data of the temperature sensor 41 is changed under the reference situation, the light reception data acquired by the control unit 30 changes, so the control unit 30 outputs a different distance. For example, when the temperature data of the temperature sensor 41 is changed from T2 to T1 under the reference situation, even though the temperature of the light emitting unit 12 and the distance between the measuring device 1 and the object 90 are maintained and the light reception data S1 to S3 output by the first light receiving element 222A to the third light receiving element 222C remain the same, the control unit 30 changes the distance it outputs from the distance corresponding to the light reception data S2 of the second light receiving element 222B to the distance corresponding to the light reception data S1 of the first light receiving element 222A.
As described above, in the case of the measuring device 1 employing the first measurement method, when the temperature data of the temperature sensor 41 changes under a situation (reference situation) in which the light receiving elements 222 output predetermined light reception data, the control unit 30 outputs a different distance. In other words, by changing the temperature data of the temperature sensor 41 under the situation (reference situation) in which the light receiving elements 222 output predetermined light reception data, it is possible to verify that the distance is measured based on the light reception data of the light receiving element 222 corresponding to the temperature data of the temperature sensor 41.
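This verification step can be mimicked with a small self-contained harness; the function name and thresholds below are invented for illustration, and the point is only that holding the reception data fixed while feeding different dummy temperatures changes which element's data (and hence which distance) is produced:

```python
# Verification sketch: under a fixed reference situation (constant light
# reception data), changing only the dummy temperature given to the
# controller changes which element's data is used. Thresholds are
# illustrative placeholders.

def selected_data(reception, dummy_temperature, t12=20.0, t23=60.0):
    if dummy_temperature <= t12:
        return reception[0]  # S1 of the first light receiving element 222A
    elif dummy_temperature <= t23:
        return reception[1]  # S2 of the second light receiving element 222B
    return reception[2]      # S3 of the third light receiving element 222C

fixed_reception = (0.1, 0.8, 0.2)  # S1 to S3 held at the reference values
# A dummy temperature in the T12-T23 range picks S2; a dummy temperature
# below T12 picks S1 instead, so the measured distance changes even
# though S1 to S3 did not.
```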
 <第2測定方法>
 分散素子25が受光素子222に向けて出射する光の角度は、波長に応じて徐々に変化する。このため、分散素子25が2つの受光素子222(例えば第1受光素子222Aと第2受光素子222B)の境界部に向けて反射光を分光し、2つの受光素子222が反射光を受光することがある。このような場合、第1測定方法のように3個の受光素子222のうちの1つの受光素子222の受光データに基づいて距離を測定するよりも、反射光を受光する2つの受光素子222の受光データに基づいて距離を測定する方が有利である。第2測定方法では、温度データに応じて2以上の受光素子222の受光データに基づいて距離を測定することが可能である。
<Second measurement method>
The angle at which the dispersion element 25 emits light toward the light receiving elements 222 changes gradually depending on the wavelength. For this reason, the dispersion element 25 may disperse the reflected light toward the boundary between two light receiving elements 222 (for example, the first light receiving element 222A and the second light receiving element 222B), so that both light receiving elements 222 receive the reflected light. In such a case, it is more advantageous to measure the distance based on the light reception data of the two light receiving elements 222 that receive the reflected light than to measure the distance based on the light reception data of only one of the three light receiving elements 222 as in the first measurement method. In the second measurement method, the distance can be measured based on the light reception data of two or more light receiving elements 222 according to the temperature data.
 図10は、第2測定方法に用いられるテーブルである。図11は、第2測定方法の説明図である。 FIG. 10 is a table used in the second measurement method. FIG. 11 is an explanatory diagram of the second measurement method.
 制御部30の記憶部(不図示)には、図10に示す重みテーブルが予め記憶されている。重みテーブルには、温度Tと、重み係数とが対応付けられている。重み係数は、温度センサ41の温度データに対応付けた重みデータに相当する。重み係数には、第1重み係数W1と、第2重み係数W2と、第3重み係数W3とが含まれている。第1重み係数W1は、第1受光素子222Aの受光データS1に対する重み係数である。第2重み係数W2は、第2受光素子222Bの受光データS2に対する重み係数である。第3重み係数W3は、第3受光素子222Cの受光データS3に対する重み係数である。 A weight table shown in FIG. 10 is stored in advance in the storage unit (not shown) of the control unit 30. The weight table associates the temperature T with a weighting coefficient. The weighting coefficient corresponds to weighting data associated with temperature data of the temperature sensor 41. The weighting coefficients include a first weighting coefficient W1, a second weighting coefficient W2, and a third weighting coefficient W3. The first weighting coefficient W1 is a weighting coefficient for the light reception data S1 of the first light receiving element 222A. The second weighting coefficient W2 is a weighting coefficient for the light reception data S2 of the second light receiving element 222B. The third weighting coefficient W3 is a weighting coefficient for the light reception data S3 of the third light receiving element 222C.
 信号処理部362は、温度センサ41から取得した温度データに対応する重み係数(重みデータ)を、記憶部に記憶されている重みテーブルから取得する。すなわち、信号処理部362は、温度センサ41から取得した温度データに基づいて重みテーブルを参照し、温度データに対応する重み係数(第1重み係数W1、第2重み係数W2及び第3重み係数W3)を取得する。そして、信号処理部362は、光を出射した発光素子121に対応する3個の受光素子222(第1受光素子222A、第2受光素子222B及び第3受光素子222C)の受光データS1~S3をそれぞれ取得し、それぞれの受光素子222の受光データS1~S3に対して重み係数に応じた重み付けを行うことによって、画素データSを算出する。具体的には、信号処理部362は、次式に基づいて画素データSを算出する。
S = W1 × S1 + W2 × S2 + W3 × S3
The signal processing unit 362 acquires the weighting coefficients (weighting data) corresponding to the temperature data acquired from the temperature sensor 41 from the weight table stored in the storage unit. That is, the signal processing unit 362 refers to the weight table based on the temperature data acquired from the temperature sensor 41 and acquires the weighting coefficients corresponding to the temperature data (the first weighting coefficient W1, the second weighting coefficient W2, and the third weighting coefficient W3). Then, the signal processing unit 362 acquires the light reception data S1 to S3 of the three light receiving elements 222 (the first light receiving element 222A, the second light receiving element 222B, and the third light receiving element 222C) corresponding to the light emitting element 121 that emitted the light, and calculates the pixel data S by weighting the light reception data S1 to S3 of the respective light receiving elements 222 according to the weighting coefficients. Specifically, the signal processing unit 362 calculates the pixel data S based on the following equation.
S = W1 × S1 + W2 × S2 + W3 × S3
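The weighted combination S = W1·S1 + W2·S2 + W3·S3 can be sketched as follows. The linear interpolation of the weights between the table temperatures is an assumption for illustration; the embodiment only specifies the endpoint weights and that the weight of the nearer table temperature dominates in between:

```python
# Second measurement method: pixel data as a weighted sum of the three
# elements' reception data, with weights looked up by temperature.
# T1, T2, T3 below are placeholder table temperatures; the linear
# interpolation between them is an illustrative assumption.

T1, T2, T3 = 0.0, 50.0, 100.0

def weights(t):
    if t <= T1:
        return (1.0, 0.0, 0.0)
    if t < T2:                          # between T1 and T2, W3 = 0
        w2 = (t - T1) / (T2 - T1)
        return (1.0 - w2, w2, 0.0)
    if t < T3:                          # between T2 and T3, W1 = 0
        w3 = (t - T2) / (T3 - T2)
        return (0.0, 1.0 - w3, w3)
    return (0.0, 0.0, 1.0)

def pixel_data(s1, s2, s3, t):
    w1, w2, w3 = weights(t)
    return w1 * s1 + w2 * s2 + w3 * s3
```

With this interpolation, the second measurement method degenerates to the first one exactly at the table temperatures (one weight is 1, the others 0) and blends the two neighboring elements in between.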
 例えば、温度T1の場合(波長λ1(=875nm)の反射光が分散素子25によって第1受光素子222Aに分光される場合)に対して、重み係数はそれぞれW1=1、W2=0、W3=0に設定されている。また、温度T2の場合(波長λ2(=905nm)の反射光が分散素子25によって第2受光素子222Bに分光される場合)に対して、重み係数はそれぞれW1=0、W2=1、W3=0に設定されている。また、温度T3の場合(波長λ3(=935nm)の反射光が分散素子25によって第3受光素子222Cに分光される場合)、重み係数はそれぞれW1=0、W2=0、W3=1に設定されている。そして、温度TがT1とT2の中間の温度の場合、温度TがT2よりもT1に近いほど、W1はW2よりも大きくなるようにW1,W2が設定されている(W3はゼロに設定されている)。これにより、分散素子25が第1受光素子222Aと第2受光素子222Bとの境界部に反射光を分光する状態であっても、精度良く距離を測定することができる。また、温度TがT2とT3の中間の温度の場合、温度TがT3よりもT2に近いほど、W2はW3よりも大きくなるようにW2,W3が設定されている(W1はゼロに設定されている)。 For example, for the case of temperature T1 (when the reflected light with the wavelength λ1 (=875 nm) is directed by the dispersion element 25 to the first light receiving element 222A), the weighting coefficients are set to W1=1, W2=0, and W3=0. For the case of temperature T2 (when the reflected light with the wavelength λ2 (=905 nm) is directed by the dispersion element 25 to the second light receiving element 222B), the weighting coefficients are set to W1=0, W2=1, and W3=0. For the case of temperature T3 (when the reflected light with the wavelength λ3 (=935 nm) is directed by the dispersion element 25 to the third light receiving element 222C), the weighting coefficients are set to W1=0, W2=0, and W3=1. When the temperature T is between T1 and T2, W1 and W2 are set such that the closer the temperature T is to T1 than to T2, the larger W1 becomes relative to W2 (W3 is set to zero). Thereby, even when the dispersion element 25 disperses the reflected light at the boundary between the first light receiving element 222A and the second light receiving element 222B, the distance can be measured with high accuracy. Likewise, when the temperature T is between T2 and T3, W2 and W3 are set such that the closer the temperature T is to T2 than to T3, the larger W2 becomes relative to W3 (W1 is set to zero).
Thereby, even if the dispersion element 25 disperses the reflected light at the boundary between the second light receiving element 222B and the third light receiving element 222C, the distance can be measured with high accuracy.
 なお、制御部30の測距部36(信号処理部362)は、算出した画素データSに基づいて、反射光の到達タイミングを検出する(図3参照)。そして、測距部36(時間検出部364)は、光の出射タイミングと、光の到達タイミングとに基づいて、光を照射してから反射光が到達するまでの時間Tfを検出する。また、測距部36(距離算出部366)は、時間Tfに基づいて、対象物90までの距離Lを算出する。 Note that the distance measuring unit 36 (signal processing unit 362) of the control unit 30 detects the arrival timing of the reflected light based on the calculated pixel data S (see FIG. 3). Then, the distance measuring section 36 (time detection section 364) detects the time Tf from when the light is irradiated until the reflected light arrives, based on the light emission timing and the light arrival timing. Further, the distance measuring section 36 (distance calculating section 366) calculates the distance L to the target object 90 based on the time Tf.
 ところで、第2測定方法が採用された測定装置1の場合であっても、受光素子222が所定の受光データを出力する状況(基準状況)の下において温度センサ41の温度データが変化すると、制御部30は、異なる距離を出力することになる。つまり、受光素子222が所定の受光データを出力する状況(基準状況)の下において温度センサ41の温度データを変化させることによって、温度センサ41の温度データに対応する受光素子222の受光データに基づいて距離を測定することを検証することが可能である。なお、第2測定方法が採用された測定装置1の場合、温度センサ41の温度データを徐々に変化させると、制御部30が出力する距離が徐々に変化する。 Even in the case of the measuring device 1 employing the second measurement method, if the temperature data of the temperature sensor 41 changes under a situation (reference situation) in which the light receiving elements 222 output predetermined light reception data, the control unit 30 outputs a different distance. In other words, by changing the temperature data of the temperature sensor 41 under the situation (reference situation) in which the light receiving elements 222 output predetermined light reception data, it is possible to verify that the distance is measured based on the light reception data of the light receiving element 222 corresponding to the temperature data of the temperature sensor 41. Note that in the case of the measuring device 1 employing the second measurement method, when the temperature data of the temperature sensor 41 is changed gradually, the distance output by the control unit 30 also changes gradually.
 <照射方法について>
 図12は、発光素子121と受光素子222との対応関係の説明図である。また、図12は、画素221と受光素子222との対応関係の説明図でもある。
<About the irradiation method>
FIG. 12 is an explanatory diagram of the correspondence between the light emitting element 121 and the light receiving element 222. Further, FIG. 12 is also an explanatory diagram of the correspondence relationship between the pixel 221 and the light receiving element 222.
 既に説明した通り、発光部12は、複数の発光素子121を備えている。図中には、発光部12の複数の発光素子121のうちの隣接する2つの発光素子121(#1、#2)が示されている。また、図中には、2つの発光素子121(#1、#2)に対応する測定エリア50上の領域(#1、#2)が示されている。また、図中には、2つの発光素子121(#1、#2)や測定エリア50上の2つの領域(#1、#2)と対応する2つの受光センサ22の画素221(#1、#2)が示されている。2つの画素221(#1、#2)は、発光部12の2つの発光素子121(#1、#2)とそれぞれ対応する。発光素子121#1から出射した光は、測定エリア50上の領域#1に照射されるとともに、領域#1からの光(反射光及び背景光)は、画素221#1に受光されることになる。また、発光素子121#2から出射した光は、測定エリア50上の領域#2に照射されるとともに、領域#2からの光(反射光及び背景光)は、画素221#2に受光されることになる。 As already explained, the light emitting unit 12 includes a plurality of light emitting elements 121. The figure shows two adjacent light emitting elements 121 (#1, #2) among the plurality of light emitting elements 121 of the light emitting unit 12, the regions (#1, #2) on the measurement area 50 corresponding to the two light emitting elements 121 (#1, #2), and the two pixels 221 (#1, #2) of the light receiving sensor 22 corresponding to the two light emitting elements 121 (#1, #2) and the two regions (#1, #2) on the measurement area 50. The two pixels 221 (#1, #2) correspond to the two light emitting elements 121 (#1, #2) of the light emitting unit 12, respectively. The light emitted from the light emitting element 121#1 is irradiated onto the region #1 on the measurement area 50, and the light (reflected light and background light) from the region #1 is received by the pixel 221#1. Similarly, the light emitted from the light emitting element 121#2 is irradiated onto the region #2 on the measurement area 50, and the light (reflected light and background light) from the region #2 is received by the pixel 221#2.
 図12に示す形態では、受光センサ22の1つの画素221に複数(ここでは3個)の受光素子222が含まれている。ここでは、受光用光学系24の分散素子25は、測定エリア50上の領域#1から届く光(反射光及び背景光)を、画素221#1に属する複数の受光素子222(ここでは第1受光素子222A~第3受光素子222C)にわたって分光する。同様に、受光用光学系24の分散素子25は、測定エリア50上の領域#2から届く光(反射光及び背景光)を、画素221#2に属する複数の受光素子222にわたって分光する。 In the form shown in FIG. 12, one pixel 221 of the light receiving sensor 22 includes a plurality of (here, three) light receiving elements 222. Here, the dispersion element 25 of the light receiving optical system 24 disperses the light (reflected light and background light) arriving from the region #1 on the measurement area 50 across the plurality of light receiving elements 222 belonging to the pixel 221#1 (here, the first light receiving element 222A to the third light receiving element 222C). Similarly, the dispersion element 25 of the light receiving optical system 24 disperses the light (reflected light and background light) arriving from the region #2 on the measurement area 50 across the plurality of light receiving elements 222 belonging to the pixel 221#2.
 図12では、発光素子121#1に対応付けられている複数の受光素子222(画素221#1に属する複数の受光素子222)と、発光素子121#2に対応付けられている複数の受光素子222(画素221#2に属する複数の受光素子222)は、別々であり、重複していない。このように、2つの発光素子121のうちの一方の発光素子121に対応付けられている複数の受光素子222と、他方の発光素子121に対応付けられている複数の受光素子222とが異なる場合、制御部30(タイミング制御部34)は、その2つの発光素子121から光を同時に出射させることが可能である。なお、図12に示す場合、制御部30(タイミング制御部34)は、発光部12の全ての発光素子121から光を出射させて測定エリア50の全体に光を一括して照射しても良いし、発光部12の一部の発光素子121(例えば1つの発光素子121)から光を出射させて測定エリア50の所定の領域のみに光を照射しても良い。 In FIG. 12, the plurality of light receiving elements 222 associated with the light emitting element 121#1 (the plurality of light receiving elements 222 belonging to the pixel 221#1) and the plurality of light receiving elements 222 associated with the light emitting element 121#2 (the plurality of light receiving elements 222 belonging to the pixel 221#2) are separate and do not overlap. In this way, when the plurality of light receiving elements 222 associated with one of the two light emitting elements 121 differ from the plurality of light receiving elements 222 associated with the other light emitting element 121, the control unit 30 (timing control unit 34) can cause the two light emitting elements 121 to emit light simultaneously. In the case shown in FIG. 12, the control unit 30 (timing control unit 34) may emit light from all the light emitting elements 121 of the light emitting unit 12 to irradiate the entire measurement area 50 with light at once, or may emit light from some of the light emitting elements 121 (for example, one light emitting element 121) of the light emitting unit 12 to irradiate only a predetermined region of the measurement area 50.
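The rule that two light emitting elements may fire simultaneously only when their associated light receiving elements do not overlap reduces to a set-disjointness check; the mapping below is an invented example, not the actual element layout of the embodiment:

```python
# Simultaneous-emission rule: two emitters may be driven at the same
# time only if the sets of light receiving elements associated with
# them are disjoint. The mapping is illustrative.

associated = {
    "121#1": {"222A-1", "222B-1", "222C-1"},  # elements of pixel 221#1
    "121#2": {"222A-2", "222B-2", "222C-2"},  # elements of pixel 221#2
}

def can_emit_simultaneously(emitter_a, emitter_b):
    return associated[emitter_a].isdisjoint(associated[emitter_b])
```

In the arrangement of FIG. 12 the two sets are disjoint, so simultaneous emission is allowed; if any receiving element were shared (as in FIG. 13), the check would fail and the emitters would have to fire at different times.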
 図13は、発光素子121と受光素子222との別の対応関係の説明図である。図13は、画素221と受光素子222との別の対応関係の説明図でもある。 FIG. 13 is an explanatory diagram of another correspondence relationship between the light emitting element 121 and the light receiving element 222. FIG. 13 is also an explanatory diagram of another correspondence relationship between the pixel 221 and the light receiving element 222.
 図13に示す形態では、受光センサ22の1つの画素221は、1個の受光素子222により構成されている。ここでは、受光用光学系24の分散素子25は、測定エリア50上の領域#1から届く光(反射光及び背景光)を、3個の画素221により構成されている3個の受光素子222にわたって分光する。また、受光用光学系24の分散素子25は、測定エリア50上の領域#2から届く光(反射光及び背景光)を、3個の画素221により構成されている3個の受光素子222にわたって分光する。
 図13では、発光素子121#1に対応付けられている複数の受光素子222と、発光素子121#2に対応付けられている複数の受光素子222は、一部が重複している。このように、2つの発光素子121のそれぞれに対応付けられている複数の受光素子222が一部重複する場合、制御部30(タイミング制御部34)は、その2つの発光素子121から光を同時に出射させることを行わない。この場合、制御部30は、発光部12の一部の発光素子121(例えば1つの発光素子121)から光を出射させて測定エリア50の所定の領域のみに光を照射する。なお、2以上の発光素子121から光を同時に出射させる場合には、制御部30(タイミング制御部34)は、対応する複数の受光素子222が重複しない複数の発光素子121から光を照射する。
In the form shown in FIG. 13, one pixel 221 of the light receiving sensor 22 is constituted by one light receiving element 222. Here, the dispersion element 25 of the light receiving optical system 24 disperses the light (reflected light and background light) arriving from the region #1 on the measurement area 50 across three light receiving elements 222 constituting three pixels 221. Similarly, the dispersion element 25 of the light receiving optical system 24 disperses the light (reflected light and background light) arriving from the region #2 on the measurement area 50 across three light receiving elements 222 constituting three pixels 221.
In FIG. 13, the plurality of light receiving elements 222 associated with the light emitting element 121#1 and the plurality of light receiving elements 222 associated with the light emitting element 121#2 partially overlap. In this way, when the pluralities of light receiving elements 222 associated with the two light emitting elements 121 partially overlap, the control unit 30 (timing control unit 34) does not cause the two light emitting elements 121 to emit light simultaneously. In this case, the control unit 30 causes some of the light emitting elements 121 (for example, one light emitting element 121) of the light emitting unit 12 to emit light so as to irradiate only a predetermined region of the measurement area 50. Note that when light is to be emitted simultaneously from two or more light emitting elements 121, the control unit 30 (timing control unit 34) emits light from a plurality of light emitting elements 121 whose corresponding pluralities of light receiving elements 222 do not overlap.
 なお、図4に示すように、分散素子25の前(測定エリア50の側)に集光レンズ241を配置し、測定エリア50上の異なる領域(#1、#2)から届く光(反射光及び背景光)を集光レンズ241によって分散素子25の異なる位置に集光させることが望ましい。これにより、図12に示すように、画素221#1に属する複数の受光素子222と、画素221#2に属する複数の受光素子222とが重複しないように、受光用光学系24を構成し易くなる。 As shown in FIG. 4, it is desirable to place a condensing lens 241 in front of the dispersion element 25 (on the measurement area 50 side) so that the light (reflected light and background light) arriving from the different regions (#1, #2) on the measurement area 50 is condensed by the condensing lens 241 at different positions on the dispersion element 25. This makes it easier to configure the light receiving optical system 24 so that the plurality of light receiving elements 222 belonging to the pixel 221#1 and the plurality of light receiving elements 222 belonging to the pixel 221#2 do not overlap, as shown in FIG. 12.
 ===小括===
 本実施形態の測定装置1は、発光部12と、バンドパスフィルタBPF(フィルタに相当)と、受光センサ22と、分散素子25とを備えている。発光部12は、所定の波長帯域の範囲で温度に応じた波長の光を照射する。バンドパスフィルタBPFは、所定の波長帯域の光を通過させ、発光部12から照射された光の反射光を通過させる。受光センサ22は、複数の受光素子222を有する。分散素子25は、バンドパスフィルタBPFと受光センサ22との間に配置され、バンドパスフィルタBPFを通過した光を2以上の受光素子222に分散させる。このような構成によれば、温度に応じた波長の光が発光部12から照射され、その光は、バンドパスフィルタBPFを通過した後、分散素子25によって波長に応じた特定の受光素子222に分光される。一方、バンドパスフィルタBPFを通過した背景光は、分散素子25によって、2以上の受光素子222にわたって分散される。このため、反射光を受光する受光素子222が受光する背景光を軽減させることができ、反射光を受光する受光素子222から出力される受光データに含まれるノイズの影響を抑制することができる。
===Summary===
The measuring device 1 of this embodiment includes a light emitting unit 12, a bandpass filter BPF (corresponding to a filter), a light receiving sensor 22, and a dispersion element 25. The light emitting unit 12 emits light of a wavelength depending on the temperature within a predetermined wavelength band. The bandpass filter BPF passes light in the predetermined wavelength band, and passes the reflected light of the light emitted from the light emitting unit 12. The light receiving sensor 22 has a plurality of light receiving elements 222. The dispersion element 25 is disposed between the bandpass filter BPF and the light receiving sensor 22, and disperses the light that has passed through the bandpass filter BPF across two or more light receiving elements 222. According to such a configuration, light with a wavelength corresponding to the temperature is emitted from the light emitting unit 12, and after passing through the bandpass filter BPF, the light is directed by the dispersion element 25 to a specific light receiving element 222 according to its wavelength. On the other hand, the background light that has passed through the bandpass filter BPF is dispersed across two or more light receiving elements 222 by the dispersion element 25. Therefore, the background light received by the light receiving element 222 that receives the reflected light can be reduced, and the influence of noise included in the light reception data output from the light receiving element 222 that receives the reflected light can be suppressed.
 また、本実施形態の測定装置1は、温度センサ41と、制御部30とを更に備えており、制御部30は、温度センサ41の温度データに対応する受光素子222の受光データに基づいて距離を測定する。これにより、ノイズの影響を抑制した受光データに基づいて距離を測定できるため、測定精度が向上する。 The measuring device 1 of this embodiment further includes a temperature sensor 41 and a control unit 30, and the control unit 30 measures the distance based on the light reception data of the light receiving element 222 corresponding to the temperature data of the temperature sensor 41. As a result, the distance can be measured based on light reception data in which the influence of noise is suppressed, thereby improving measurement accuracy.
 また、本実施形態の測定装置1は、温度センサ41と、制御部30とを更に備えており、受光素子222が所定の受光データを出力する状況(基準状況)の下において温度センサ41の温度データが変化すると、制御部30は、異なる距離を出力する。つまり、受光素子222が所定の受光データを出力する状況(基準状況)の下において温度センサ41の温度データを変化させることによって、温度センサ41の温度データに対応する受光素子222の受光データに基づいて距離を測定することを検証することが可能である。 The measuring device 1 of this embodiment further includes a temperature sensor 41 and a control unit 30, and when the temperature data of the temperature sensor 41 changes under a situation (reference situation) in which the light receiving elements 222 output predetermined light reception data, the control unit 30 outputs a different distance. In other words, by changing the temperature data of the temperature sensor 41 under the situation (reference situation) in which the light receiving elements 222 output predetermined light reception data, it is possible to verify that the distance is measured based on the light reception data of the light receiving element 222 corresponding to the temperature data of the temperature sensor 41.
 また、本実施形態の測定装置1は、温度センサ41の温度データに対応付けた重み係数(重みデータに相当)を記憶する記憶部を備える(図10参照)。そして、制御部30は、温度センサ41の温度データに対応する重み係数(重みデータ)を記憶部から取得し、2以上の受光素子222のそれぞれの受光データに対して重み係数に応じた重み付けを行うことによって、受光データSを算出し、その受光データSに基づいて距離を測定する。これにより、分散素子25が受光素子222の境界部に反射光を分光する状態であっても、精度良く距離を測定することができる。 Furthermore, the measuring device 1 of this embodiment includes a storage unit that stores weighting coefficients (corresponding to weighting data) associated with temperature data of the temperature sensor 41 (see FIG. 10). Then, the control unit 30 acquires a weighting coefficient (weighting data) corresponding to the temperature data of the temperature sensor 41 from the storage unit, and weights each light reception data of the two or more light receiving elements 222 according to the weighting coefficient. By doing so, the received light data S is calculated, and the distance is measured based on the received light data S. Thereby, even if the dispersion element 25 disperses the reflected light at the boundary of the light receiving element 222, the distance can be measured with high accuracy.
 また、本実施形態の分散素子25は、メタマテリアルにより構成されている。これにより、分散素子25と受光センサ22との間隔を短く設定することができ、測定装置1の小型化を図ることができる。
 分散素子25をメタマテリアルにより構成する場合、分散素子25が集光機能(フィルタを通過した光を受光センサ22に集光させる機能)を有することが望ましい。これにより、受光用光学系24の小型化を図ることが可能である。また、分散素子25をメタマテリアルにより構成する場合、分散素子25が偏光フィルタ機能(所定方向に振動する光を通過させつつ、所定方向と交差する方向に振動する光を吸収する機能)を有することが望ましい。これにより、不要な光によるノイズの影響を抑制できる。
 なお、分散素子25は、プリズム又は回折格子により構成されても良い。これにより、分散素子25を安価に構成できる。
Further, the dispersion element 25 of this embodiment is made of a metamaterial. Thereby, the distance between the dispersion element 25 and the light receiving sensor 22 can be set short, and the measuring device 1 can be downsized.
When the dispersion element 25 is made of a metamaterial, it is desirable that the dispersion element 25 has a light condensing function (a function of condensing the light that has passed through the filter onto the light receiving sensor 22). This makes it possible to reduce the size of the light receiving optical system 24. Further, when the dispersion element 25 is made of a metamaterial, it is desirable that the dispersion element 25 has a polarization filter function (a function of passing light vibrating in a predetermined direction while absorbing light vibrating in a direction crossing the predetermined direction). This makes it possible to suppress the influence of noise caused by unnecessary light.
Note that the dispersion element 25 may be configured by a prism or a diffraction grating. Thereby, the dispersion element 25 can be constructed at low cost.
 また、本実施形態の測定装置1は、バンドパスフィルタBPFと分散素子25との間に配置された集光レンズ241を備える。これにより、分散素子25が集光機能を兼ね備える必要が無いため、分散素子25の設計上の制約を軽減できる。 Furthermore, the measuring device 1 of this embodiment includes a condenser lens 241 disposed between the bandpass filter BPF and the dispersion element 25. This eliminates the need for the dispersion element 25 to also have a light condensing function, so that restrictions on the design of the dispersion element 25 can be alleviated.
 [第二実施形態]
 <全体構成>
 図14は、測定装置1001の全体構成の説明図である。図15は、測定装置1001の概略説明図である。
[Second embodiment]
<Overall configuration>
FIG. 14 is an explanatory diagram of the overall configuration of the measuring device 1001. FIG. 15 is a schematic explanatory diagram of the measuring device 1001.
 以下の説明では、図15に示すように各方向を定めている。Z方向は、受光用光学系1024の光軸に沿った方向である。なお、測定装置1001の測定対象となる対象物1090は、測定装置1001に対してZ方向に離れていることになる。また、X方向及びY方向は、Z方向に対して垂直な方向である。X方向は、投光用光学系1014の光軸と受光用光学系1024の光軸の並ぶ方向である。Y方向は、X方向及びZ方向に垂直な方向である。 In the following description, each direction is defined as shown in FIG. 15. The Z direction is a direction along the optical axis of the light receiving optical system 1024. Note that the object 1090 to be measured by the measuring device 1001 is separated from the measuring device 1001 in the Z direction. Further, the X direction and the Y direction are directions perpendicular to the Z direction. The X direction is a direction in which the optical axis of the light projecting optical system 1014 and the optical axis of the light receiving optical system 1024 are lined up. The Y direction is a direction perpendicular to the X direction and the Z direction.
 測定装置1001は、対象物1090までの距離を測定する装置である。測定装置1001は、いわゆるLiDAR(Light Detection and Ranging、Laser Imaging Detection and Ranging)としての機能を有する装置である。測定装置1001は、測定光を出射し、対象物1090の表面で反射した反射光を検出し、測定光を出射してから反射光を受光するまでの時間を計測することによって、対象物1090までの距離をTOF方式(Time of flight)で測定する。測定装置1001は、照射部1010と、受光部1020と、制御部1030とを有する。 The measuring device 1001 is a device that measures the distance to an object 1090. The measuring device 1001 has a function as a so-called LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging). The measuring device 1001 emits measurement light, detects the reflected light reflected from the surface of the object 1090, and measures the distance to the object 1090 by the TOF (time of flight) method by measuring the time from when the measurement light is emitted to when the reflected light is received. The measuring device 1001 includes an irradiation unit 1010, a light receiving unit 1020, and a control unit 1030.
 The irradiation unit 1010 is an irradiation device that irradiates measurement light toward the target object 1090. The irradiation unit 1010 irradiates the measurement area 1050 (see FIG. 15) with measurement light at a predetermined angle of view. The irradiation unit 1010 includes a light emitting unit 1012 and a light projecting optical system 1014. The light emitting unit 1012 is a member (light source) that emits light; for example, it is configured as a surface emitting laser (VCSEL) array chip. The light emitting unit 1012 has a plurality of light emitting elements 1121 (for example, surface emitting lasers; VCSELs), and the plurality of light emitting elements 1121 are arranged two-dimensionally along the X direction and the Y direction. The light projecting optical system 1014 is an optical system that irradiates the measurement area 1050 with the light emitted from the light emitting unit 1012. The light emitting unit 1012 can cause each light emitting element 1121 to emit light individually. Each light emitting element 1121 of the light emitting unit 1012 is associated, via the light projecting optical system 1014, with a predetermined region of the measurement area 1050; light emitted from a given light emitting element 1121 is irradiated onto the corresponding region of the measurement area 1050 via the light projecting optical system 1014. However, the irradiation unit 1010 may be configured to emit light from the entire light emitting surface of the light emitting unit 1012 and irradiate the entire measurement area 1050 at once. Alternatively, the light projecting optical system 1014 may include a rotatable mirror (for example, a polygon mirror), and the light from the light emitting unit 1012 may be irradiated onto the measurement area 1050 by rotating the mirror. Note that the wavelength of the light emitted by the light emitting unit 1012 changes depending on the temperature; this point will be described later.
 The light receiving unit 1020 receives the reflected light from the target object 1090, that is, the reflected light from the measurement area 1050 (see FIG. 15). The light receiving unit 1020 includes a light receiving sensor 1022 and a light receiving optical system 1024. The light receiving sensor 1022 has a plurality of pixels 1221 arranged two-dimensionally. For example, in the case of a VGA light receiving sensor 1022, 480 × 640 pixels 1221 are arranged two-dimensionally. Each pixel 1221 has a light receiving element, and the light receiving element outputs a signal (light reception data) corresponding to the amount of light received. The light receiving optical system 1024 is an optical system that causes the light receiving unit 1020 to receive the reflected light from the measurement area 1050; it forms an image of the measurement area 1050 on the light receiving surface of the light receiving sensor 1022. Each pixel 1221 of the light receiving sensor 1022 is associated, via the light receiving optical system 1024, with a predetermined region of the measurement area 1050. A given pixel 1221 of the light receiving sensor 1022 thus receives light (reflected light and background light) from the corresponding region of the measurement area 1050 via the light receiving optical system 1024. Each pixel 1221 of the light receiving sensor 1022 is also associated with a predetermined light emitting element 1121 of the light emitting unit 1012: light emitted from a given light emitting element 1121 is received by the corresponding pixel 1221 via the light projecting optical system 1014 and the light receiving optical system 1024. The light receiving optical system 1024 will be described later.
 The control unit 1030 governs the control of the measuring device 1001. The control unit 1030 controls the irradiation unit 1010 and the light irradiated from it. The control unit 1030 also measures the distance to the target object 1090 by the TOF method (time of flight) based on the output of the light receiving unit 1020. The control unit 1030 includes an arithmetic device and a storage device (not shown). The arithmetic device is an arithmetic processing device composed of, for example, a CPU, GPU, MPU, or ASIC; part of it may be constituted by an analog arithmetic circuit. The storage device is composed of a main storage device and an auxiliary storage device and stores programs and data. Various processes for measuring the distance to the target object 1090 are executed by the arithmetic device executing the programs stored in the storage device. FIG. 14 shows the functional blocks of these processes.
 The control unit 1030 includes a setting unit 1032, a timing control unit 1034, and a distance measuring unit 1036. The setting unit 1032 performs various settings. The timing control unit 1034 controls the processing timing of each unit; for example, it controls the timing at which light is emitted from the light emitting unit 1012. The distance measuring unit 1036 measures the distance to the target object 1090 and includes a signal processing unit 1362, a time detection unit 1364, and a distance calculation unit 1366. The signal processing unit 1362 processes the output signal (light reception data) of the light receiving sensor 1022. The time detection unit 1364 detects the flight time of the light (the time from when the light is emitted until the reflected light arrives). The distance calculation unit 1366 calculates the distance to the target object 1090.
 FIG. 16 is a timing chart for explaining an example of the measurement method.
 The control unit 1030 (timing control unit 1034) causes the light emitting unit 1012 of the irradiation unit 1010 to emit pulsed light at a predetermined period. The upper part of FIG. 16 shows the timing at which the light emitting unit 1012 emits the pulsed light (measurement light). The light emitted from the light emitting unit 1012 is irradiated onto the measurement area 1050 via the light projecting optical system 1014. The light reflected from the surface of the target object 1090 within the measurement area 1050 is received by the light receiving sensor 1022 via the light receiving optical system 1024, so that a pixel 1221 of the light receiving sensor 1022 receives the pulsed reflected light. The center of FIG. 16 shows the timing at which the pulsed reflected light arrives. The lower part of FIG. 16 shows the pixel data S of a given pixel 1221 of the light receiving sensor 1022 (the light reception data of that pixel's light receiving element). The pixel data S is data indicating the amount of light received by the pixel 1221.
 The control unit 1030 (timing control unit 1034) may cause all of the light emitting elements 1121 of the light emitting unit 1012 to emit light so that the entire measurement area 1050 is irradiated at once, or may cause only some of the light emitting elements 1121 (for example, a single light emitting element 1121) to emit light so that only a predetermined region of the measurement area 1050 is irradiated. When only some of the light emitting elements 1121 (for example, one light emitting element 1121) emit light, the control unit 1030 (signal processing unit 1362) acquires the pixel data S of the pixels 1221 corresponding to the light emitting elements 1121 that emitted light.
 The distance measuring unit 1036 (signal processing unit 1362) of the control unit 1030 detects the arrival timing of the reflected light based on the pixel data S of each pixel 1221. For example, the signal processing unit 1362 detects the arrival timing of the reflected light based on the timing of the peak of the pixel data S.
 The distance measuring unit 1036 (time detection unit 1364) detects the time Tf from when the light is emitted until the reflected light arrives, based on the light emission timing and the light arrival timing. The time Tf corresponds to the round-trip travel time of light between the measuring device 1001 and the target object 1090. The distance measuring unit 1036 (distance calculation unit 1366) then calculates the distance L to the target object 1090 based on the time Tf. When the time from emission of the light until the reflected light arrives is Tf and the speed of light is C, the distance L is L = C × Tf / 2. The control unit 1030 generates a distance image by calculating the distance to the target object 1090 for each pixel 1221, based on the time Tf detected for each pixel 1221 of the light receiving unit 1020.
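As an illustrative aid (not part of the claimed embodiment), the stated relationship L = C × Tf / 2 can be sketched as a short calculation; the function name is hypothetical.

```python
# Illustrative sketch of the stated relationship L = C * Tf / 2.
# The function name is hypothetical, not from the embodiment.

C = 299_792_458.0  # speed of light in m/s

def distance_from_tof(tf_seconds: float) -> float:
    """Distance L to the object, given the round-trip flight time Tf."""
    return C * tf_seconds / 2.0

# A round-trip time of 100 ns corresponds to roughly 15 m.
print(distance_from_tof(100e-9))
```

Applying this per pixel 1221, using the time Tf detected for each pixel, yields the distance image described above.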
 <About the light receiving unit 1020>
 FIGS. 17 and 18 are explanatory diagrams of the light receiving unit 1020.
 The light receiving optical system 1024 includes a condenser lens 1241 and a bandpass filter BPF. The condenser lens 1241 is an optical element that forms an image of the measurement area 1050 on the light receiving surface of the light receiving sensor 1022. The bandpass filter BPF is a filter that passes light of a specific wavelength and cuts (attenuates) light of other wavelengths: its transmittance is high for the specific wavelength and low for other wavelengths (the attenuation rate for other wavelengths is high). In the following description, the wavelength band of light that passes through the bandpass filter BPF is sometimes called the "passband", and the wavelength band of light that is cut (attenuated) by the bandpass filter BPF is sometimes called the "cutoff band". For example, the passband is the wavelength band in which the transmittance of the bandpass filter BPF is 50% or more, and the cutoff band is the band in which the transmittance is less than 50%. Since the bandpass filter BPF needs to transmit the reflected light, it can transmit at least light of the wavelength emitted from the light emitting unit 1012 (its transmittance for that wavelength is high). In other words, the passband of the bandpass filter BPF needs to include at least the wavelength of the light emitted from the light emitting unit 1012. Because the light receiving optical system 1024 includes the bandpass filter BPF, cutoff-band components of background light such as sunlight can be cut, so the influence of noise caused by background light can be suppressed.
 On the other hand, the wavelength λ of the light emitted from the light emitting unit 1012 changes depending on the temperature. For example, when the temperature of the light emitting unit 1012 rises, the wavelength λ of the emitted light becomes longer. Here, as an example, the light emitting unit 1012 emits light of wavelength λ1 (for example, λ1 = 940 nm) when the temperature is T1 (for example, T1 = 25°C), and emits light of wavelength λ2 (for example, λ2 = 946 nm) when the temperature is T2 (for example, T2 = 105°C). In other words, within the operating temperature range T1 to T2, the light emitting unit 1012 emits light of a temperature-dependent wavelength λ within the wavelength band λ1 to λ2.
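As an illustrative sketch (not part of the embodiment), if the drift between the two stated operating points is assumed to be linear, the example values above imply a slope of 6 nm / 80°C = 0.075 nm per °C:

```python
# Sketch: emission wavelength vs. temperature, ASSUMING linear drift
# between the two stated points (940 nm at 25 degC, 946 nm at 105 degC).
# The linearity assumption and function name are illustrative only.

T1_C, LAMBDA1_NM = 25.0, 940.0
T2_C, LAMBDA2_NM = 105.0, 946.0
SLOPE_NM_PER_C = (LAMBDA2_NM - LAMBDA1_NM) / (T2_C - T1_C)  # 0.075 nm/degC

def emitter_wavelength_nm(temp_c: float) -> float:
    """Estimated emission wavelength of the light emitting unit 1012."""
    return LAMBDA1_NM + SLOPE_NM_PER_C * (temp_c - T1_C)

print(emitter_wavelength_nm(105.0))  # reproduces the stated 946.0 nm
```

A real VCSEL's drift need not be exactly linear; the sketch only interpolates the two example values given above.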
 The bandpass filter BPF needs to transmit the light emitted from the light emitting unit 1012 with high transmittance. If the passband of the bandpass filter BPF were constant regardless of temperature, high transmittance would be required for every wavelength in the band (λ1 to λ2) that the light emitting unit 1012 emits over the operating temperature range T1 to T2. In other words, the passband of the bandpass filter BPF would need to be widened so as to include the entire wavelength band (the range λ1 to λ2) of the light emitted from the light emitting unit 1012. However, widening the passband increases the amount of background light (the passband components of the background light) that passes through the bandpass filter BPF. For this reason, if the passband of the bandpass filter BPF is constant regardless of temperature, the noise included in the pixel data S (the data indicating the amount of light received by the pixel 1221) increases.
 Therefore, in this embodiment, the wavelength of light that can pass through the bandpass filter BPF changes depending on the temperature. In other words, the bandpass filter BPF of this embodiment changes its characteristics depending on the temperature (the bandpass filter BPF has temperature dependence). This point is explained below.
 FIG. 19 is a schematic explanatory diagram showing the characteristics of the bandpass filter BPF of this embodiment. The horizontal axis of the graph shows the wavelength (unit: nm), and the vertical axis shows the transmittance (unit: %). The two curves show the relationship between wavelength and transmittance of the bandpass filter BPF at two different temperatures: the thick line shows the characteristics at temperature T11, and the thin line shows the characteristics at temperature T12 (> T11).
 As shown in the figure, the passband of the bandpass filter BPF (the wavelength band of light that passes through it) changes depending on the temperature. Here, as the temperature rises, the passband of the bandpass filter BPF moves to longer wavelengths; that is, the curve shifts to the right (long-wavelength) side of the graph. In the figure, the center wavelength of the passband (the peak wavelength at which the transmittance peaks) is λ11 when the temperature is T11, whereas it is λ12 (> λ11) when the temperature is T12 (> T11). The transmittance of the bandpass filter BPF at the wavelength λ11 is higher when the temperature is T11 than when it is T12 (> T11). Conversely, the transmittance at the wavelength λ12 is higher when the temperature is T12 (> T11) than when it is T11.
 The temperature T11 in FIG. 19 is set to be approximately equal to the above-mentioned temperature T1 (for example, T1 = 25°C), and λ11 is set to be approximately equal to the above-mentioned wavelength λ1 (for example, λ1 = 940 nm). Similarly, the temperature T12 is set to be approximately equal to the above-mentioned temperature T2 (for example, T2 = 105°C), and λ12 is set to be approximately equal to the above-mentioned λ2 (for example, λ2 = 946 nm). As a result, when the temperature is T1 (for example, T1 = 25°C) and the light emitting unit 1012 emits light of wavelength λ1 (for example, λ1 = 940 nm), the transmittance of the bandpass filter BPF at the wavelength λ1 is high, so the reflected light of wavelength λ1 passes through the bandpass filter BPF easily. Likewise, when the temperature is T2 (for example, T2 = 105°C) and the light emitting unit 1012 emits light of wavelength λ2 (for example, λ2 = 946 nm), the transmittance of the bandpass filter BPF at the wavelength λ2 is high, so the reflected light of wavelength λ2 passes through the bandpass filter BPF easily. The structure of the bandpass filter BPF that sets the center wavelength (peak wavelength) of its passband to λ1 at temperature T1 (≈ T11) and to λ2 at temperature T2 (≈ T12) will be described later.
 As described above, when the temperature of the light emitting unit 1012 rises, the wavelength λ of the light emitted from it becomes longer. Therefore, this embodiment uses a bandpass filter BPF whose peak-transmittance wavelength (center wavelength) becomes longer as the temperature rises. As a result, even if the wavelength λ of the light emitted from the light emitting unit 1012 changes with temperature, the transmittance of the bandpass filter BPF at that wavelength λ remains high while the transmittance at other wavelengths remains low. Consequently, the influence of noise included in the pixel data S can be reduced, and the S/N ratio is improved.
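The benefit of a temperature-tracking passband over a fixed one can be sketched numerically. In the sketch below, the passband is modeled as a simple interval around a center wavelength, and the 3 nm half-width is an assumed illustrative value, not a figure from the embodiment; the emitter wavelengths are the example values given above.

```python
# Sketch contrasting a fixed passband with a temperature-tracking passband.
# ASSUMPTIONS: the 3 nm half-width and the interval model of the passband
# are illustrative only; only the emitter numbers come from the text.

HALF_WIDTH_NM = 3.0  # assumed passband half-width

def emitter_nm(temp_c: float) -> float:
    # Linear drift between the stated points: 940 nm at 25 C, 946 nm at 105 C.
    return 940.0 + (946.0 - 940.0) / (105.0 - 25.0) * (temp_c - 25.0)

def tracking_center_nm(temp_c: float) -> float:
    # Filter center designed to shift at the same rate as the emitter.
    return emitter_nm(temp_c)

def in_band(wavelength_nm: float, center_nm: float) -> bool:
    return abs(wavelength_nm - center_nm) <= HALF_WIDTH_NM

for t in (25.0, 105.0):
    lam = emitter_nm(t)
    print(t, in_band(lam, 940.0), in_band(lam, tracking_center_nm(t)))
```

Under these assumptions, the 946 nm emission at 105°C falls outside a fixed passband centered at 940 nm but stays inside the tracking passband, which is the situation the embodiment avoids without having to widen the passband and admit more background light.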
 <About the structure of the bandpass filter BPF>
 FIGS. 20 and 21 are enlarged explanatory diagrams of the structure of the bandpass filter BPF. FIG. 20 is a top view of the bandpass filter BPF, and FIG. 21 is a cross-sectional view of the bandpass filter BPF. Although only seven protrusions 1027 are depicted in FIG. 20, a vast number of protrusions 1027 are actually arranged on the surface of the bandpass filter BPF in the arrangement pattern shown in FIG. 20.
 The bandpass filter BPF is composed of a base material 1025 and a thin film 1026.
 The base material 1025 is made of a light-transmitting member; for example, it is a glass substrate (here, organic glass). A thin film 1026 is formed on the surface of the base material 1025. The base material 1025 is made of a light-transmitting resin (a resin with high transmittance); here, it is made of PMMA (polymethyl methacrylate). However, the base material 1025 may be made of another organic glass. Also, here, the base material 1025 is made of a material having a larger coefficient of linear expansion than the thin film 1026; however, the base material 1025 may instead have a coefficient of linear expansion comparable to that of the thin film 1026, and need not be made of a material with a larger coefficient.
 The thin film 1026 is formed on the surface of the base material 1025 and is made of a material having a refractive index different from that of the base material 1025. For example, the thin film 1026 is made of a high-refractive-index material so that its refractive index n2 is higher than the refractive index n1 of the base material 1025. Here, the thin film 1026 is made of titanium oxide (TiO₂); however, it is not limited to titanium oxide and may be made of, for example, amorphous silicon (α-Si). Also, here, the thin film 1026 is made of a material (an inorganic material) having a smaller coefficient of linear expansion than the base material 1025; however, the thin film 1026 may instead have a coefficient of linear expansion comparable to that of the base material 1025, and need not be made of a material with a smaller coefficient.
 A plurality of protrusions 1027 are provided on the surface of the thin film 1026. The protrusions 1027 are microstructures smaller than the wavelength of the light and are sometimes called nanoposts. Each protrusion 1027 is columnar; here it is cylindrical, but it may instead be prismatic (for example, hexagonal or octagonal). The many protrusions 1027 are arranged in a predetermined pattern; here, as shown in FIG. 20, the columnar protrusions 1027 are arranged in a lattice. In other words, a nanopost array is formed on the surface of the thin film 1026. Arranging the many protrusions 1027 in a lattice allows them to be placed at predetermined intervals, and arranging the columnar protrusions 1027 in a lattice weakens the polarization dependence of the bandpass filter BPF. Furthermore, as shown in FIG. 20, arranging the many protrusions 1027 in a triangular lattice (regular triangular lattice) allows each protrusion 1027 to be placed at high density such that its distances to the six surrounding protrusions 1027 are equal. The arrangement pattern of the protrusions 1027 is not limited to the pattern shown in FIG. 20; for example, the many protrusions 1027 may be arranged in a hexagonal lattice such that each protrusion is equidistant from its three surrounding protrusions 1027. The protrusions 1027 are made of the same material as the thin film 1026, here titanium oxide (TiO₂).
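The geometry of the triangular (regular triangular) lattice described above can be sketched as follows; the pitch value is the embodiment's 557 nm example, and the function name is hypothetical. In such a lattice, every interior post has six neighbors at exactly the pitch distance.

```python
import math

# Sketch: center coordinates of nanoposts on a regular triangular lattice
# with pitch p (557 nm in the embodiment's example). Names are hypothetical.

def triangular_lattice(pitch: float, rows: int, cols: int):
    """Return (x, y) post centers; every other row is offset by half a pitch."""
    row_height = pitch * math.sqrt(3) / 2.0
    points = []
    for r in range(rows):
        x_offset = pitch / 2.0 if r % 2 else 0.0
        for c in range(cols):
            points.append((x_offset + c * pitch, r * row_height))
    return points

pts = triangular_lattice(557.0, 3, 3)
# Distance from the post at (0, 0) to the nearest post of the next row
# equals the pitch, confirming the equal-spacing property.
print(math.dist(pts[0], pts[3]))
```

The half-pitch row offset combined with the √3/2 row height is what makes all six nearest-neighbor distances equal.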
 By arranging the protrusions 1027, which are smaller than the wavelength of the light, in a predetermined pattern, the phase and direction of the light are controlled, and the wavelengths that interfere constructively (or destructively) are controlled. In this embodiment, the base material 1025 and the thin film 1026 (and the protrusions 1027) expand and contract in response to temperature changes, so the spacing of the protrusions 1027 changes with temperature (the density of the protrusions 1027 changes). As a result, the wavelength of light that can pass through the bandpass filter BPF changes with temperature, and so does the center wavelength of its passband. In this embodiment, as the temperature rises, the spacing of the protrusions 1027 widens, so the center wavelength of the passband of the bandpass filter BPF becomes longer (shifts to the long-wavelength side). The passband of the bandpass filter BPF can therefore change to match the lengthening of the wavelength λ of the light emitted from the light emitting unit 1012 as the temperature rises. Note that the shape of each protrusion 1027 (the height and diameter of the cylinder) also changes with temperature, and this effect likewise changes the wavelength of light that can pass through the bandpass filter BPF and hence the center wavelength of its passband. In this embodiment, the characteristics (temperature dependence) of the bandpass filter BPF can be adjusted by adjusting the spacing between the protrusions 1027, and the characteristics of the bandpass filter BPF (such as the wavelengths of the passband and cutoff band) can be adjusted by adjusting the height and diameter of the protrusions 1027.
 In this embodiment, the thin film 1026 with the protrusions 1027 is formed on the base material 1025, which has a large coefficient of linear expansion. As a result, even if the coefficient of linear expansion of the protrusions 1027 (thin film 1026) is small, the temperature-dependent change in the spacing of the protrusions 1027 can be made large. However, the coefficient of linear expansion of the base material 1025 need not be larger than that of the thin film 1026 (and the protrusions 1027); for example, it may be comparable to that of the thin film 1026 (and the protrusions 1027).
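As a rough numerical sketch (an illustration, not a statement about the embodiment's actual behavior), if the post spacing is assumed to simply follow the base material's linear expansion, the PMMA coefficient of 56 × 10⁻⁶/°C given below scales a 557 nm pitch by about 2.5 nm over an 80°C rise:

```python
# Sketch: temperature dependence of the nanopost spacing, ASSUMING the
# spacing simply follows the base material's linear expansion; the real
# device also involves film and post shape changes, as noted above.

ALPHA_PMMA = 56e-6    # /degC, linear expansion coefficient of the base material
PITCH_NM_25C = 557.0  # nanopost spacing at 25 degC (embodiment's example)

def pitch_nm(temp_c: float) -> float:
    """Spacing p(T) = p0 * (1 + alpha * (T - 25 degC))."""
    return PITCH_NM_25C * (1.0 + ALPHA_PMMA * (temp_c - 25.0))

# Over the 25 degC -> 105 degC rise the spacing grows by about 2.5 nm
# (a fractional change of about 0.45%).
print(pitch_nm(105.0) - PITCH_NM_25C)
```

This substrate-expansion term is only one of the effects that shift the passband; as noted above, the posts' own dimensions also change with temperature.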
 FIG. 22 is a graph showing the characteristics of the bandpass filter BPF. The thick line is the characteristic of the bandpass filter BPF at a temperature of 25°C (corresponding to T1 and T11 described above), and the thin line is the characteristic at 105°C (corresponding to T2 and T12 described above).
 In the bandpass filter BPF shown in FIG. 22, the base material 1025 is made of PMMA (polymethyl methacrylate), and the thin film 1026 (and the protrusions 1027) is made of titanium oxide (TiO₂). The linear expansion coefficient of the base material 1025 is 56×10⁻⁶/°C, and that of the thin film 1026 is 10×10⁻⁶/°C. The refractive index of the base material 1025 is 1.49, and that of the thin film 1026 is 2.5. The thickness of the thin film 1026 is 146 nm. The protrusions 1027 are arranged in the pattern shown in FIG. 20, and the interval between the protrusions 1027 is 557 nm. The radius of the cylindrical protrusions 1027 is 185 nm, and their height is 145 nm.
 The bandpass filter BPF shown in FIG. 22 has a high transmittance at 940 nm (corresponding to λ1 and λ11 described above) at 25°C. Since the light emitting section 1012 of this embodiment emits light with a wavelength of 940 nm at 25°C, the bandpass filter BPF readily transmits the reflected light. Likewise, the bandpass filter BPF has a high transmittance at 946 nm (corresponding to λ2 and λ12 described above) at 105°C. Since the light emitting section 1012 emits light with a wavelength of 946 nm at 105°C, the bandpass filter BPF readily transmits the reflected light at that temperature as well.
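As a rough consistency check of the figures above (a back-of-the-envelope sketch, not a value from the patent's simulations), the implied emitter temperature coefficient and the substrate-driven pitch expansion can be computed side by side:

```python
# Rough consistency check of the numbers quoted above: VCSEL wavelength
# drift versus thermal expansion of the protrusion pitch. Illustrative only.

lam_25, lam_105 = 940.0, 946.0        # emission wavelength (nm) at 25/105 degC
laser_coeff = (lam_105 - lam_25) / (105.0 - 25.0)   # nm per degC

pitch_25 = 557.0                       # protrusion interval (nm) at 25 degC
alpha_substrate = 56e-6                # PMMA linear expansion (/degC)
pitch_105 = pitch_25 * (1.0 + alpha_substrate * (105.0 - 25.0))

print(f"laser drift: {laser_coeff:.3f} nm/degC")                 # 0.075 nm/degC
print(f"pitch at 105 degC: {pitch_105:.1f} nm (+{pitch_105 - pitch_25:.1f} nm)")
```

The quoted wavelengths imply a drift of about 0.075 nm/°C, while the 557 nm pitch expands by about 2.5 nm over the same 80°C range.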
 Here, it is assumed that the light emitting section 1012 emits light with a wavelength of 940 nm at 25°C and 946 nm at 105°C. However, depending on the structure of the light emitting section 1012, its temperature characteristics may differ. In that case, the arrangement (interval) and shape (height and diameter) of the protrusions 1027 of the bandpass filter BPF are changed in accordance with the temperature characteristics of the light emitting section 1012 (the materials of the base material 1025 and the thin film 1026 may also be changed).
 FIG. 23 is a graph showing another characteristic of the bandpass filter BPF. Here, it is assumed that the light emitting section 1012 emits light with a wavelength of 1298 nm at 25°C and 1308 nm at 105°C.
 In the bandpass filter BPF shown in FIG. 23, the linear expansion coefficient of the base material 1025 is 56×10⁻⁶/°C, and that of the thin film 1026 is 10×10⁻⁶/°C. The refractive index of the base material 1025 is 1.49, and that of the thin film 1026 is 3.48. The thickness of the thin film 1026 is 270 nm. The protrusions 1027 are arranged in the pattern shown in FIG. 20, and the interval between the protrusions 1027 is 860 nm. The diameter of the cylindrical protrusions 1027 is 210 nm (radius 105 nm), and their height is 160 nm.
 The bandpass filter BPF shown in FIG. 23 has a high transmittance at 1298 nm (corresponding to λ1 and λ11 described above) at 25°C, and a high transmittance at 1308 nm (corresponding to λ2 and λ12 described above) at 105°C. In this way, by appropriately changing the shape and arrangement of the protrusions 1027, it is possible to create a bandpass filter BPF adapted to the temperature characteristics of the light emitting section 1012.
 Incidentally, although the protrusions 1027 described above are columnar (cylindrical or prismatic), the shape of the protrusions 1027 is not limited to a columnar shape. For example, the protrusions 1027 may be configured as ridges (stripe-shaped protrusions). In this case, a large number of protrusions 1027 are arranged on the surface of the thin film 1026 at predetermined intervals along a predetermined direction. A bandpass filter BPF in which ridge-shaped protrusions 1027 are arranged in a predetermined direction has a stronger polarization dependence than a bandpass filter BPF in which columnar protrusions 1027 are arranged in a lattice pattern (see FIG. 20). For this reason, a bandpass filter BPF with ridge-shaped protrusions 1027 is preferably used when light vibrating in a specific direction is to be passed or blocked. Note that even when the protrusions 1027 are configured as ridges, the characteristics of the bandpass filter BPF (such as the wavelengths of the passband and cutoff band) can be adjusted by adjusting the interval between the protrusions 1027 (the interval between the ridges) and their shape (the height and width of the ridges).
 FIG. 24 is an explanatory diagram of a modification of the arrangement of the bandpass filter BPF.
 In this modification, the bandpass filter BPF is arranged between the condenser lens 1241 and the light receiving sensor 1022. By arranging the bandpass filter BPF behind the condenser lens 1241 (on the side opposite the object 1090 as viewed from the condenser lens 1241), the influence of the angle dependence of the bandpass filter BPF can be suppressed (in other words, the angle of the light incident on the bandpass filter BPF can be kept small). If, instead, the bandpass filter BPF is placed in front of the condenser lens 1241 (on the object 1090 side), light enters the bandpass filter BPF over a wide range of angles, making it more susceptible to angle dependence. When the bandpass filter BPF is configured so that its transmittance changes with temperature, as in this embodiment, its angle dependence can become strong, so arranging the bandpass filter BPF between the condenser lens 1241 and the light receiving sensor 1022 as in this modification is particularly effective.
 FIGS. 25 to 27 are explanatory diagrams of the arrangement of another bandpass filter BPF'.
 When the bandpass filter BPF is configured so that its transmittance changes with temperature, as in this embodiment, its cutoff band may become narrow. For this reason, it is desirable that the light receiving optical system 1024 further include a bandpass filter BPF' separate from the bandpass filter BPF described above. The other bandpass filter BPF' passes light in the wavelength band emitted by the light emitting section 1012 while cutting (attenuating) light outside that band. For example, when the light emitting section 1012 emits light of a wavelength λ that varies with temperature within the wavelength band λ1 to λ2 over the operating temperature range, the other bandpass filter BPF' passes light in the band λ1 to λ2 while attenuating light at wavelengths shorter than λ1 and longer than λ2. As a result, even if the cutoff band of the bandpass filter BPF, whose passband changes with temperature, is narrow, background light such as sunlight can be cut by the other bandpass filter BPF', so the influence of noise caused by background light can be suppressed.
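The role of the wide fixed filter BPF' alongside the temperature-tracking filter BPF can be sketched as the product of two transmission curves. The sketch below uses idealized rectangular passbands with illustrative center wavelengths and widths (the real curves are smooth, as in FIGS. 22 and 23):

```python
# Idealized sketch of cascading the temperature-tracking filter BPF with a
# fixed wide filter BPF'. Each is modeled as a rectangular passband; the
# combined transmittance is their product. All numbers are illustrative.

def rect_filter(center_nm, width_nm):
    """Return a transmittance function: 1.0 inside the band, 0.0 outside."""
    lo, hi = center_nm - width_nm / 2, center_nm + width_nm / 2
    return lambda wavelength_nm: 1.0 if lo <= wavelength_nm <= hi else 0.0

bpf = rect_filter(center_nm=946.0, width_nm=4.0)        # tracks the laser at 105 degC
bpf_prime = rect_filter(center_nm=943.0, width_nm=12.0)  # fixed, spans 940-946 nm

def combined(wavelength_nm):
    return bpf(wavelength_nm) * bpf_prime(wavelength_nm)

print(combined(946.0))  # laser line passes both filters: 1.0
print(combined(960.0))  # background well outside both bands: 0.0
```

Even where the narrow tracking filter's cutoff band ends, the fixed filter continues to block background light, which is the effect described above.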
 The other bandpass filter BPF' may be placed in front of the bandpass filter BPF (on the object 1090 side) or behind it (on the side opposite the object 1090 as viewed from the bandpass filter BPF). However, as shown in FIGS. 25 to 27, when the other bandpass filter BPF' is placed in front of the bandpass filter BPF (on the object 1090 side), only light that has passed through the other bandpass filter BPF' enters the bandpass filter BPF, so light of wavelengths cut by the other bandpass filter BPF' does not enter the bandpass filter BPF. When the bandpass filter BPF is placed between the condenser lens 1241 and the light receiving sensor 1022, the other bandpass filter BPF' may be placed in front of the condenser lens 1241 (on the object 1090 side) as shown in FIG. 26, or behind the condenser lens 1241 (on the side opposite the object 1090 as viewed from the condenser lens 1241; here, between the condenser lens 1241 and the bandpass filter BPF) as shown in FIG. 27. When the other bandpass filter BPF' is arranged as shown in FIG. 27, the influence of the angle dependence of the bandpass filter BPF' can be suppressed.
 ===Summary===
 The measuring device 1001 of this embodiment includes a light emitting section 1012, a bandpass filter BPF, and a light receiving sensor 1022. The light emitting section 1012 emits light of a wavelength that depends on the temperature. The bandpass filter BPF passes the reflected light of the light emitted from the light emitting section 1012. The light receiving sensor 1022 receives the light that has passed through the bandpass filter BPF. In this embodiment, the wavelength of light that can pass through the bandpass filter BPF changes depending on the temperature. With this configuration, even if light of a temperature-dependent wavelength is emitted from the light emitting section 1012, the bandpass filter BPF can pass its reflected light, and the light receiving sensor 1022 can receive that reflected light. Moreover, the background light received by the light receiving sensor 1022 can be reduced, and the influence of noise caused by the background light can be suppressed.
 In this embodiment, the bandpass filter BPF includes a base material 1025 and a thin film 1026 formed on the base material 1025; a plurality of protrusions 1027 are arranged in a predetermined pattern on the surface of the thin film 1026, and the interval between the protrusions 1027 changes with temperature. Thereby, the characteristics of the bandpass filter BPF can be changed depending on the temperature.
 In this embodiment, the linear expansion coefficient of the base material 1025 is larger than that of the thin film 1026. Thereby, even if the linear expansion coefficient of the thin film 1026 is small, the temperature-dependent change in the interval between the protrusions 1027 can be made large.
 In this embodiment, the base material 1025 is made of a light-transmitting resin. This makes it easy to give the base material 1025 a larger linear expansion coefficient than that of the thin film 1026.
 In this embodiment, the protrusions 1027 are columnar, and a plurality of protrusions 1027 are arranged in a lattice pattern. Thereby, the polarization dependence of the bandpass filter BPF can be weakened.
 Alternatively, a bandpass filter BPF whose characteristics change with temperature may be configured by forming the protrusions 1027 as ridges and arranging a plurality of ridges side by side in a predetermined direction. In this way, the bandpass filter BPF can be configured to pass or block light vibrating in a specific direction.
 As shown in FIG. 24, the bandpass filter BPF is desirably placed between the condenser lens 1241 and the light receiving sensor 1022. Thereby, the influence of the angle dependence of the bandpass filter BPF can be suppressed.
 It is also desirable to further provide another bandpass filter BPF' that passes light in the wavelength band emitted by the light emitting section 1012. As a result, even if the cutoff band of the bandpass filter BPF, whose passband changes with temperature, is narrow, background light such as sunlight can be cut by the other bandpass filter BPF', so the influence of noise caused by background light can be suppressed.
 The bandpass filter BPF of this embodiment includes a base material 1025 and a thin film 1026 formed on the base material 1025; a plurality of protrusions 1027 are arranged in a predetermined pattern on the surface of the thin film 1026, and the interval between the protrusions 1027 changes with temperature. This makes it possible to realize a bandpass filter whose characteristics change depending on the temperature.
 [Third embodiment]
 <Overall configuration>
 FIG. 28 is an explanatory diagram of the overall configuration of the measuring device 2001. FIG. 29 is a schematic explanatory diagram of the measuring device 2001.
 In the following description, the directions are defined as shown in FIG. 29. The Z direction is the direction along the optical axis of the light projection optical system 2014. The object 2090 to be measured by the measuring device 2001 is therefore separated from the measuring device 2001 in the Z direction. The X direction and the Y direction are directions perpendicular to the Z direction. The plurality of light emitting elements 2121 constituting the light emitting section 2012 are two-dimensionally arranged along the X direction and the Y direction (described later; see FIGS. 31 to 33). The plurality of pixels 2221 of the light receiving section 2020 are likewise two-dimensionally arranged along the X direction and the Y direction.
 The measuring device 2001 is a device that measures the distance to an object 2090. The measuring device 2001 functions as a so-called LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging). The measuring device 2001 emits measurement light, detects the light reflected from the surface of the object 2090, and measures the distance to the object 2090 by the TOF (time-of-flight) method, that is, by measuring the time from emitting the measurement light to receiving the reflected light. The measuring device 2001 includes an irradiation section 2010, a light receiving section 2020, and a control section 2030.
 The irradiation section 2010 is an irradiation device that irradiates measurement light toward the object 2090. The irradiation section 2010 irradiates the measurement area 2050 (see FIG. 29) with measurement light at a predetermined angle of view. The irradiation section 2010 includes a light emitting section 2012 and a light projection optical system 2014. The light emitting section 2012 is a member (light source) that emits light; for example, it is configured as a surface emitting laser (VCSEL) array chip. The light projection optical system 2014 is an optical system that irradiates the measurement area 2050 with the light emitted from the light emitting section 2012. The detailed configuration of the irradiation section 2010 will be described later.
 The light receiving section 2020 receives the reflected light from the object 2090; that is, it receives the reflected light from the measurement area 2050 (see FIG. 29). The light receiving section 2020 includes a light receiving sensor 2022 and a light receiving optical system 2024. The light receiving sensor 2022 has a plurality of pixels 2221 arranged two-dimensionally. For example, a VGA light receiving sensor 2022 has 480×640 pixels 2221 arranged two-dimensionally. Each pixel 2221 has a light receiving element, which outputs a signal according to the amount of light received. The light receiving optical system 2024 is an optical system that causes the light receiving section 2020 to receive the reflected light from the measurement area 2050; it forms an image of the measurement area 2050 on the light receiving surface of the light receiving sensor 2022.
 The control section 2030 controls the measuring device 2001. The control section 2030 controls the irradiation section 2010 and the light emitted from it. The control section 2030 also measures the distance to the object 2090 by the TOF (time-of-flight) method based on the output of the light receiving section 2020. The control section 2030 includes an arithmetic unit and a storage device (not shown). The arithmetic unit is, for example, a processing unit such as a CPU or GPU; part of it may be configured as an analog arithmetic circuit. The storage device consists of a main storage device and an auxiliary storage device and stores programs and data. Various processes for measuring the distance to the object 2090 are executed by the arithmetic unit executing the programs stored in the storage device. FIG. 28 shows the functional blocks of these processes.
 The control section 2030 includes a setting section 2032, a timing control section 2034, and a distance measuring section 2036. The setting section 2032 performs various settings. The timing control section 2034 controls the processing timing of each section; for example, it controls the timing at which light is emitted from the light emitting section 2012. The distance measuring section 2036 measures the distance to the object 2090 and includes a signal processing section 2362, a time detection section 2364, and a distance calculation section 2366. The signal processing section 2362 processes the output signal of the light receiving sensor 2022. The time detection section 2364 detects the flight time of the light (the time from irradiating the light until the reflected light arrives). The distance calculation section 2366 calculates the distance to the object 2090.
 FIG. 30 is a timing chart for explaining an example of the measurement method.
 The control section 2030 (timing control section 2034) causes the light emitting section 2012 of the irradiation section 2010 to emit pulsed light at a predetermined period. The upper part of FIG. 30 shows the timing at which the light emitting section 2012 emits the pulsed light (emission timing). The light emitted from the light emitting section 2012 is irradiated onto the measurement area 2050 via the light projection optical system 2014. The light reflected from the surface of the object 2090 within the measurement area 2050 is received by the light receiving sensor 2022 via the light receiving optical system 2024. Each pixel 2221 of the light receiving sensor 2022 receives the pulsed reflected light. The middle part of FIG. 30 shows the timing at which the pulsed reflected light arrives (arrival timing). The lower part of FIG. 30 shows the output signal of a certain pixel 2221 of the light receiving sensor 2022. Each pixel 2221 of the light receiving sensor 2022 outputs a signal according to the amount of light received.
 The control section 2030 (timing control section 2034) may cause all the light emitting elements 2121 of the light emitting section 2012 to emit light so as to irradiate the entire measurement area 2050 at once, or may cause only some of the light emitting elements 2121 (for example, one light emitting element 2121) to emit light so as to irradiate only a predetermined region of the measurement area 2050. When only some of the light emitting elements 2121 emit light, the control section 2030 acquires the signals output from the pixels 2221 corresponding to the light emitting elements 2121 that are made to emit light (the signals of pixels 2221 that do not correspond to those light emitting elements 2121 are not processed). The irradiation section 2010 of this embodiment can emit light from some of the light emitting elements 2121 of the light emitting section 2012 (for example, one light emitting element 2121) to irradiate a predetermined region of the measurement area 2050 (corresponding to a group of irradiation spots described later).
 The distance measuring section 2036 (signal processing section 2362) of the control section 2030 detects the arrival timing of the reflected light based on the output signal of each pixel 2221. For example, the signal processing section 2362 detects the arrival timing of the reflected light based on the timing of the peak of the output signal of each pixel 2221. To remove the influence of disturbance light (for example, sunlight), the signal processing section 2362 may determine the arrival timing of the reflected light based on the peak of a signal obtained by cutting the DC component of the output signal of the pixel 2221.
 The distance measuring section 2036 (time detection section 2364) detects, based on the emission timing and the arrival timing of the light, the time Tf from irradiating the light until the reflected light arrives. The time Tf corresponds to the time for the light to make a round trip between the measuring device 2001 and the object 2090. The distance measuring section 2036 (distance calculation section 2366) then calculates the distance L to the object 2090 based on the time Tf. When the time from irradiating the light until the reflected light arrives is Tf and the speed of light is C, the distance L is L = C×Tf/2. The control section 2030 generates a distance image by calculating, based on the time Tf detected for each pixel 2221, the distance to the object 2090 for each pixel 2221 of the light receiving sensor 2022.
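The relation L = C×Tf/2 above can be shown in a short sketch; the timestamps used here are hypothetical, not values from the embodiment:

```python
# Minimal TOF distance calculation as described above: the measured flight
# time Tf covers the round trip, so the one-way distance is C * Tf / 2.

C = 299_792_458.0  # speed of light in vacuum (m/s)

def distance_m(emit_time_s, arrival_time_s):
    """One-way distance from emission/arrival timestamps (seconds)."""
    tf = arrival_time_s - emit_time_s  # round-trip flight time Tf
    return C * tf / 2.0

# Hypothetical example: reflected pulse arrives 200 ns after emission
print(f"{distance_m(0.0, 200e-9):.2f} m")  # 29.98 m
```

In the device, `emit_time_s` corresponds to the emission timing from the timing control section 2034 and `arrival_time_s` to the arrival timing detected by the signal processing section 2362, evaluated per pixel 2221 to build the distance image.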
 <About the irradiation section 2010>
 FIG. 31 is an explanatory diagram of the arrangement of the light emitting elements 2121 of the light emitting section 2012.
 The light emitting section 2012 is configured as, for example, a surface emitting laser (VCSEL) array chip. The light emitting section 2012 has a plurality of light emitting elements 2121 (for example, surface emitting lasers; VCSELs), and the plurality of light emitting elements 2121 are two-dimensionally arranged. The plurality of light emitting elements 2121 are arranged at intervals in the X direction and the Y direction. When the VCSELs serving as the light emitting elements 2121 are two-dimensionally arranged in this way, the gaps between the emitters cause the light emitting elements 2121 to be spaced apart. In the figure, three light emitting elements 2121 are arranged in each of the X direction and the Y direction, but in practice three or more light emitting elements 2121 are arranged along each of the X direction and the Y direction.
 FIG. 32 is a reference explanatory diagram of the light irradiated onto the measurement area 2050. Each circle in the figure indicates the range over which the light emitted from one light emitting element 2121 is irradiated onto the measurement area 2050 via one lens; each circle corresponds to the image of the light emitting point of one light emitting element 2121.
 When a plurality of light emitting elements 2121 are arranged at intervals as shown in FIG. 31, if the light projection optical system 2014 projects the images of the light emitting points of the light emitting elements 2121 onto the measurement area 2050 as they are, gaps may appear in the light irradiated onto the measurement area 2050, as shown in FIG. 32. As a result, regions that cannot be irradiated with light, and therefore cannot be measured, arise in the measurement area 2050.
 FIG. 33 is an explanatory diagram of the light irradiated onto the measurement area 2050 in this embodiment.
 The light projection optical system 2014 of this embodiment has a plurality of lenses whose optical axes are shifted in predetermined directions (the X direction and the Y direction). This suppresses the occurrence of regions in the measurement area 2050 that cannot be irradiated with light, as shown in FIG. 33. This point will be explained below.
FIG. 34 is an explanatory diagram of the lens unit 2015.
The light projection optical system 2014 includes a lens unit 2015 having a plurality of lens elements. In this embodiment, each lens element is a metalens 2016, so the lens unit 2015 (and thus the light projection optical system 2014) has a plurality of metalenses 2016. A metalens 2016 is an optical element that changes the transmission intensity and phase of light by arranging microstructures smaller than the wavelength of the light in a predetermined pattern, and it functions here as a convex lens.
The figure shows the arrangement pattern of the microstructures forming the plurality of metalenses 2016. The microstructures are arranged on a plane parallel to the X direction and the Y direction (the XY plane). Each metalens 2016 of this embodiment is composed of microstructures provided on the surface of a substrate (for example, a glass substrate); in other words, it is constituted by a metasurface. However, the metalens 2016 is not limited to a metasurface and may instead be constituted by a metamaterial in which the microstructures are arranged three-dimensionally.
A plurality of metalenses 2016 are arranged in the lens unit 2015 along each of the X direction and the Y direction. In the figure, two metalenses 2016 are arranged side by side along the X direction and two along the Y direction; however, the number of metalenses 2016 aligned in the X direction or the Y direction may be two or more.
The plurality of metalenses 2016 are arranged with their optical axes shifted in the X direction and the Y direction. The figure shows the metalenses 2016 arranged at an interval t in the X and Y directions. The optical axis of each metalens 2016 is parallel to the Z direction.
In this embodiment, the microstructures are disposed on a common plane (for example, on the same glass substrate), so the plurality of metalenses 2016 are arranged on a common plane. If a plurality of convex lenses were instead arranged two-dimensionally, streak-like recesses would form at the boundaries between adjacent lenses, because the thickness of a convex lens varies with the distance from its optical axis, and these recesses could cast shadows onto the measurement area 2050. In contrast, when a plurality of metalenses 2016 are provided on a common plane as in this embodiment, the boundaries between the lenses can be made flat, preventing shadows from forming on the measurement area 2050. Moreover, using the metalenses 2016 makes it possible to set a small lens diameter and a small optical-axis shift; for example, the distance t between optical axes in the figure can be made small.
FIG. 35 is an explanatory diagram of the image of the light emitting point of a light emitting element 2121 formed by one metalens 2016. The circle in the figure indicates the range over which a given light emitting element 2121 irradiates light onto the measurement area 2050 via a given metalens 2016. In the following description, this range (corresponding to one circle in the figure) is called an "irradiation spot." As shown in FIG. 35, the light emitting point of one light emitting element 2121 forms one image (one irradiation spot) on the measurement area 2050 via one metalens 2016.
FIG. 36 is an explanatory diagram of the images of the light emitting point of a light emitting element 2121 formed by a plurality of metalenses 2016. As shown in FIG. 36, the light emitting point of one light emitting element 2121 forms a plurality of images (irradiation spots) on the measurement area 2050 via the plurality of metalenses 2016. In the following description, the plurality of images formed by a given light emitting element 2121 via the plurality of metalenses 2016 is called an irradiation spot group. As shown in FIG. 34, when two metalenses 2016 are arranged side by side along the X direction and two along the Y direction, the light emitting point of a given light emitting element 2121 forms a group of images on the measurement area 2050 with two aligned in the X direction and two in the Y direction; that is, the irradiation spot group in this case consists of a plurality of irradiation spots in a 2×2 arrangement. More generally, when N metalenses 2016 are arranged side by side in each of the X direction and the Y direction, a given light emitting element 2121 forms N images aligned in each of those directions on the measurement area 2050; that is, the irradiation spot group consists of a plurality of irradiation spots in an N×N arrangement.
As shown in FIG. 36, two irradiation spots adjacent in the X direction or the Y direction (two images of the light emitting point of a given light emitting element 2121 formed by two metalenses 2016 adjacent in a predetermined direction) partially overlap. Because adjacent irradiation spots overlap, the formation of a gap between two adjacent irradiation spots is suppressed, and so is the occurrence of regions on the measurement area 2050 that cannot be irradiated with light.
FIG. 38 is an explanatory diagram of the optical condition under which two irradiation spots adjacent in a predetermined direction (the X direction or the Y direction), i.e., the images of the light emitting point of a given light emitting element 2121 formed by two adjacent metalenses 2016, partially overlap. The figure depicts two irradiation spots belonging to the same irradiation spot group partially overlapping.
S1 in the figure indicates the light emitting point of a given light emitting element 2121 (for ease of illustration, the light emitting point is drawn facing perpendicular to the page, but it is actually perpendicular to the Z direction). L1 and L2 in the figure indicate two metalenses 2016 adjacent in a predetermined direction (the X direction or the Y direction); for ease of illustration, they are drawn as convex lenses. In the following description, these two metalenses may be called the first metalens L1 and the second metalens L2. I in the figure indicates an image (irradiation spot) formed on the measurement area 2050 via a metalens 2016 (again drawn perpendicular to the page for illustration, although it is actually perpendicular to the Z direction). In the following description, the image (irradiation spot) of the light emitting point S1 formed by the first metalens L1 is called image I11, and the image (irradiation spot) of the light emitting point S1 formed by the second metalens L2 is called image I12.
Here, let f be the focal length of the metalenses 2016, D the diameter of the light emitting point of the light emitting element 2121, and t the interval (distance between optical axes) of the metalenses 2016 aligned in the predetermined direction (the X direction or the Y direction).
The angle θ1 in the figure is the angle between the line connecting the center of the first metalens L1 and the center of the image I11 (the dash-dotted line in the figure) and the line connecting the center of the first metalens L1 and the upper end of the image I11 (the end on the side of the adjacent image I12). The line connecting the center of the first metalens L1 and the center of the image I11 is an extension of the line connecting the center of the light emitting point S1 and the center of the first metalens L1. The line connecting the center of the first metalens L1 and the upper end of the image I11 is an extension of the line connecting the lower end of the light emitting point S1 and the center of the first metalens L1. The angle θ1 corresponds to half the angle of view of an irradiation spot and can be expressed as follows.
$$\theta_1 = \tan^{-1}\left(\frac{D}{2f}\right) \qquad \text{(Equation 1)}$$
The angle θ2 in the figure is the angle between the line connecting the center of the first metalens L1 and the center of the image I11 (the dash-dotted line in the figure) and the line connecting the center of the second metalens L2 and the lower end of the image I12 (the end on the side of the adjacent image I11). The line connecting the center of the second metalens L2 and the lower end of the image I12 is an extension of the line connecting the upper end of the light emitting point S1 and the center of the second metalens L2. The angle θ2 can be expressed as follows.
$$\theta_2 = \tan^{-1}\left(\frac{2t - D}{2f}\right) \qquad \text{(Equation 2)}$$
When the angle θ2 is smaller than the angle θ1 (θ2 < θ1), the images I11 and I12 partially overlap. The condition for the images I11 and I12 to partially overlap is therefore as follows.
$$\tan^{-1}\left(\frac{2t - D}{2f}\right) < \tan^{-1}\left(\frac{D}{2f}\right) \qquad \text{(Equation 3)}$$
Since the inequality between the arguments in the above expression gives (2t − D)/2f < D/2f, the condition for the images I11 and I12 to partially overlap can also be expressed as t < D.
By satisfying the above condition (or t < D), two adjacent irradiation spots (two irradiation spots belonging to the same irradiation spot group) partially overlap. This suppresses the formation of a gap between two adjacent irradiation spots, and thus the occurrence of regions on the measurement area 2050 that cannot be irradiated with light; it is therefore desirable to satisfy this condition. In this embodiment, using the metalenses 2016 makes it possible to set a small lens diameter and a small optical-axis shift, so even when the diameter D of the light emitting point of the light emitting element 2121 is small, a light projection optical system 2014 that satisfies the above condition is easy to realize.
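The upper-limit condition above can be checked numerically. The following Python sketch (not part of the patent; the function names and the example values of D, f, and t are hypothetical) evaluates θ1 and θ2 per Equations 1 and 2 and confirms that θ2 < θ1 holds exactly when t < D:

```python
import math

def theta1(D, f):
    # Equation 1: half the angular size of one irradiation spot.
    return math.atan(D / (2 * f))

def theta2(D, f, t):
    # Equation 2: angle to the near edge of the spot formed by the
    # adjacent metalens, whose optical axis is shifted by t.
    return math.atan((2 * t - D) / (2 * f))

def spots_overlap(D, f, t):
    # Equation 3: adjacent spots of the same spot group overlap
    # when theta2 < theta1, which is equivalent to t < D.
    return theta2(D, f, t) < theta1(D, f)

# Hypothetical values: 10 um emitting-point diameter, 1 mm focal length.
D, f = 10e-6, 1e-3
assert spots_overlap(D, f, t=8e-6)       # t < D -> spots overlap
assert not spots_overlap(D, f, t=12e-6)  # t > D -> a gap remains
```

Because the arctangent is monotonically increasing, comparing the two angles reduces to comparing their arguments, which is how the equivalent condition t < D follows.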
FIG. 37 is an explanatory diagram of the images of the light emitting points of a plurality of light emitting elements 2121 (a plurality of irradiation spot groups) formed by the plurality of metalenses 2016. The many circular images (irradiation spots) in the figure are the same as those shown in FIG. 33.
The region surrounded by the thick line in FIG. 37 is the image of a given light emitting point formed by the plurality of metalenses 2016, i.e., one irradiation spot group; it is the same as the irradiation spot group shown in FIG. 36. Here, since two metalenses 2016 are arranged side by side in each of the X direction and the Y direction, two irradiation spots are aligned in each of those directions within the thick-lined region. When N metalenses 2016 are arranged side by side in each of the X direction and the Y direction, N irradiation spots are aligned in each direction within the thick-lined region. As shown in FIG. 31, since the plurality of light emitting elements 2121 are arranged two-dimensionally along the X direction and the Y direction, the plurality of irradiation spot groups (the regions surrounded by thick lines in FIG. 37) are also arranged two-dimensionally along the X direction and the Y direction.
The plurality of hatched images in FIG. 37 (a plurality of irradiation spots arranged at intervals) are the images (irradiation spots) of the light emitting points of the plurality of light emitting elements 2121 formed by a given metalens 2016; they are the same as the images shown in FIG. 32. Between the hatched images, images (irradiation spots) indicated by white circles are arranged. Here, since two metalenses 2016 are arranged side by side in each of the X direction and the Y direction, one white-circle image lies between two hatched images. When N metalenses 2016 are arranged side by side in each of the X direction and the Y direction, N−1 white-circle images lie between two hatched images.
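The interleaving described above can be illustrated with a short one-dimensional sketch (not part of the patent; the function names and numeric values are hypothetical, and simple chief-ray geometry is assumed): each emitter j at pitch P forms a spot through each lens k at axis spacing t, and the white-circle spots of one lens fall between the hatched spots of another, closing the gaps.

```python
import math

def spot_intervals(num_emitters, N, D, f, t, P):
    # One-dimensional sketch: the spot of emitter j (pitch P) formed through
    # lens k (axis spacing t) is centered near angle arctan((j*P + k*t)/f)
    # and has an angular half-width of roughly arctan(D/(2*f)).
    half = math.atan(D / (2 * f))
    spots = []
    for j in range(num_emitters):
        for k in range(N):
            center = math.atan((j * P + k * t) / f)
            spots.append((center - half, center + half))
    return sorted(spots)

def fully_covered(intervals):
    # True when every spot overlaps the next one, i.e. no dark gap remains.
    return all(start2 <= end1
               for (_, end1), (start2, _) in zip(intervals, intervals[1:]))

# Hypothetical values roughly satisfying both bounds discussed in the text
# (P - D < t < D): D = 10 um, f = 1 mm, P = 15 um, t = 8 um.
D, f, P = 10e-6, 1e-3, 15e-6
assert fully_covered(spot_intervals(3, 2, D, f, 8e-6, P))       # interleaved, no gaps
assert not fully_covered(spot_intervals(3, 2, D, f, 12e-6, P))  # t > D leaves gaps
```

With t = 8 µm the three hatched spots per lens and the shifted spots from the second lens alternate along the angle axis, which is the 1-D analogue of the 2×2 interleaving in FIG. 37.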
As shown in FIG. 37, two irradiation spot groups (regions surrounded by thick lines) adjacent in the X direction or the Y direction partially overlap. Because adjacent irradiation spot groups overlap, the formation of a gap between two adjacent irradiation spot groups is suppressed, and so is the occurrence of regions on the measurement area 2050 that cannot be irradiated with light.
FIG. 39 is an explanatory diagram of the optical condition under which two irradiation spot groups adjacent in a predetermined direction (the X direction or the Y direction) partially overlap.
S2 in the figure indicates the light emitting point adjacent to the light emitting point S1 in the predetermined direction (the X direction or the Y direction). Here, let P be the distance between light emitting elements 2121 adjacent in the predetermined direction (the center-to-center distance between light emitting points). L1 and LN in the figure indicate the metalenses 2016 located at the ends in the predetermined direction, where N is the number of metalenses 2016 aligned in that direction (for simplicity, the figure is drawn with N = 2, so the second metalens L2 is shown). In the following description, the metalens indicated by LN is called the N-th metalens. I1N in the figure indicates the image of the light emitting point S1 formed by the N-th metalens LN, and I21 indicates the image of the light emitting point S2 formed by the first metalens L1. The images I11 to I1N belong to the same irradiation spot group, while the image I21 belongs to the irradiation spot group adjacent to that of the images I11 to I1N.
The angle θ3 in the figure is the angle between the line connecting the center of the first metalens L1 and the center of the image I11 (the dash-dotted line in the figure) and the line connecting the center of the N-th metalens LN (the second metalens L2) and the upper end of the image I1N (I12) (the end on the side of the adjacent image I21). The line connecting the center of the N-th metalens LN and the upper end of the image I1N (I12) is an extension of the line connecting the lower end of the light emitting point S1 and the center of the N-th metalens LN (the second metalens L2). The angle θ3 corresponds to half the angle of view of an irradiation spot group and can be expressed as the following equation (which shows θ3 for the case N = 2; the first term on the right side is the image-center shift contributed by one lens).
$$\theta_3 = \tan^{-1}\left(\frac{t}{f}\right) + \tan^{-1}\left(\frac{D}{2f}\right) \qquad \text{(Equation 4)}$$
The angle θ4 in the figure is the angle between the line connecting the center of the first metalens L1 and the center of the image I11 and the line connecting the center of the first metalens L1 and the lower end of the image I21 (the end on the side of the adjacent image I1N). The line connecting the center of the first metalens L1 and the lower end of the image I21 is an extension of the line connecting the upper end of the light emitting point S2 and the center of the first metalens L1. The angle θ4 can be expressed as the following equation.
$$\theta_4 = \tan^{-1}\left(\frac{2P - D}{2f}\right) \qquad \text{(Equation 5)}$$
When the angle θ4 is smaller than the angle θ3 (θ4 < θ3), the images I1N and I21 overlap. The condition for the images I1N and I21 to partially overlap is therefore as follows (the expression has been rearranged so that the term in t remains on the right side).
$$\tan^{-1}\left(\frac{2P - D}{2f}\right) - \tan^{-1}\left(\frac{D}{2f}\right) < \tan^{-1}\left(\frac{t}{f}\right) \qquad \text{(Equation 6)}$$
By satisfying the above condition, two adjacent irradiation spot groups partially overlap. This suppresses the formation of a gap between two adjacent irradiation spot groups, and thus the occurrence of regions on the measurement area 2050 that cannot be irradiated with light; it is therefore desirable to satisfy this condition. Note that the condition shown in Equation 3 above (or t < D) defines the upper limit of t, whereas the condition shown in Equation 6 defines the lower limit of t.
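The lower-limit condition can be checked numerically in the same way. The following Python sketch (not part of the patent; the names and example values are hypothetical, and Equation 4 is taken in its N = 2 form) evaluates θ3 and θ4 and confirms that adjacent spot groups overlap only when t is large enough:

```python
import math

def theta3(D, f, t):
    # Equation 4 (N = 2): arctan(t/f) is the image-center shift for one
    # lens pitch; arctan(D/2f) is the half-width of one spot.
    return math.atan(t / f) + math.atan(D / (2 * f))

def theta4(D, f, P):
    # Equation 5: angle to the near edge of the adjacent emitter's spot
    # (emitter pitch P) formed through the first metalens.
    return math.atan((2 * P - D) / (2 * f))

def spot_groups_overlap(D, f, t, P):
    # Equation 6: adjacent irradiation spot groups overlap when
    # theta4 < theta3, i.e. when t exceeds a lower limit.
    return theta4(D, f, P) < theta3(D, f, t)

# Hypothetical values: D = 10 um, f = 1 mm, emitter pitch P = 15 um.
D, f, P = 10e-6, 1e-3, 15e-6
assert spot_groups_overlap(D, f, 8e-6, P)      # t above the lower limit
assert not spot_groups_overlap(D, f, 4e-6, P)  # t below the lower limit
```

With these example numbers, t = 8 µm satisfies both the lower limit above and the upper limit t < D from Equation 3, so a workable axis spacing exists between the two bounds.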
===Summary===
The measuring device 2001 of this embodiment includes a light emitting unit 2012 and a light projection optical system 2014 that irradiates the measurement area 2050 with light from the light emitting unit 2012. As shown in FIGS. 31 to 33, the light emitting unit 2012 has a plurality of light emitting elements 2121 arranged at intervals. The light projection optical system 2014 has a plurality of metalenses 2016 with shifted optical axes. The light projection optical system 2014 of this embodiment irradiates light from a light emitting element 2121 via one metalens 2016 into the spaces between the lights of the plurality of light emitting elements 2121 irradiated via another metalens 2016. That is, the light projection optical system 2014 forms a plurality of spaced irradiation spots (the hatched irradiation spots in FIG. 37) with the light emitted from each of the plurality of light emitting elements 2121 via a given metalens 2016, and forms further irradiation spots (the white-circle irradiation spots in FIG. 37) between them with light irradiated from the light emitting elements 2121 via another metalens 2016. Compared with the case shown in FIG. 32, this suppresses the occurrence of regions that cannot be irradiated with light.
In this embodiment, the plurality of metalenses 2016 are provided on a common plane. This suppresses the formation of shadows on the measurement area 2050.
In this embodiment, the images of the light emitting point of a given light emitting element 2121 formed by two metalenses 2016 adjacent in a predetermined direction partially overlap; that is, the irradiation spots shown in FIG. 36 partially overlap. This further suppresses the occurrence of regions that cannot be irradiated with light. Even when the images (irradiation spots) of the light emitting point of a given light emitting element 2121 formed by two adjacent metalenses 2016 do not overlap, as long as light from a light emitting element 2121 can be irradiated via one metalens 2016 (the white-circle irradiation spots in FIG. 37) between the lights of the plurality of light emitting elements 2121 irradiated via another metalens 2016 (the hatched irradiation spots in FIG. 37), the occurrence of regions that cannot be irradiated with light is still suppressed compared with the case shown in FIG. 32.
In this embodiment, the light emitting unit 2012 and the light projection optical system 2014 satisfy the condition shown in Equation 3 above (or t < D; see FIG. 38). This allows the images (irradiation spots) of the light emitting point of a given light emitting element 2121 formed by two metalenses 2016 adjacent in a predetermined direction to partially overlap, suppressing the occurrence of regions that cannot be irradiated with light.
In this embodiment, the images formed by the plurality of metalenses 2016 from the light emitting points of two light emitting elements 2121 adjacent in a predetermined direction partially overlap; that is, the irradiation spot groups shown in FIG. 37 partially overlap. This further suppresses the occurrence of regions that cannot be irradiated with light. Even when the images (irradiation spot groups) formed by the plurality of metalenses 2016 from the light emitting points of two adjacent light emitting elements 2121 do not overlap, as long as light from a light emitting element 2121 can be irradiated via one metalens 2016 (the white-circle irradiation spots in FIG. 37) between the lights of the plurality of light emitting elements 2121 irradiated via another metalens 2016 (the hatched irradiation spots in FIG. 37), the occurrence of regions that cannot be irradiated with light is still suppressed compared with the case shown in FIG. 32.
In this embodiment, the light emitting unit 2012 and the light projection optical system 2014 satisfy the condition shown in Equation 6 above (see FIG. 39). This allows the images (irradiation spot groups) formed by the plurality of metalenses 2016 from the light emitting points of two light emitting elements 2121 adjacent in a predetermined direction to partially overlap, suppressing the occurrence of regions that cannot be irradiated with light.
In this embodiment, the light emitting unit 2012 and the light projection optical system 2014 satisfy both the condition shown in Equation 3 above (or t < D) and the condition shown in Equation 6 (see FIGS. 38 and 39). As a result, the irradiation spots within the same irradiation spot group overlap, and irradiation spots also overlap between adjacent irradiation spot groups, suppressing the occurrence of regions that cannot be irradiated with light.
Although embodiments of the present disclosure have been described in detail above, the present disclosure is not limited to the above embodiments and includes various modifications. The above embodiments describe configurations in detail in order to explain the present disclosure clearly, and the disclosure is not necessarily limited to embodiments having all of the described configurations. Part of the configuration of each embodiment may be added to, deleted from, or replaced with other configurations.
This application is based on a Japanese patent application filed on June 30, 2022 (Japanese Patent Application No. 2022-106333), a Japanese patent application filed on June 30, 2022 (Japanese Patent Application No. 2022-106334), and a Japanese patent application filed on August 9, 2022 (Japanese Patent Application No. 2022-127331), the contents of which are incorporated herein by reference.

Claims (26)

  1.  所定の波長帯域の範囲で温度に応じた波長の光を照射する発光部と、
     前記所定の波長帯域の光を通過させ、前記発光部から照射された光の反射光を通過させるフィルタと、
     複数の受光素子を有する受光センサと、
     前記フィルタと前記受光センサとの間に配置され、前記フィルタを通過した光を2以上の前記受光素子に分散させる分散素子と、を備える測定装置。
    A measuring device comprising:
    a light emitting unit that emits light of a wavelength corresponding to temperature within a predetermined wavelength band;
    a filter that passes light in the predetermined wavelength band and passes reflected light of the light emitted from the light emitting unit;
    a light receiving sensor having a plurality of light receiving elements; and
    a dispersion element disposed between the filter and the light receiving sensor, the dispersion element dispersing light that has passed through the filter onto two or more of the light receiving elements.
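To illustrate the role of the dispersion element in claim 1, the sketch below models a linear wavelength-to-position mapping on the sensor: as the emission wavelength drifts with temperature, a different subset of (at least two) light receiving elements is illuminated. Every constant and name below is a hypothetical assumption, not a value from the specification.

```python
# Hypothetical sketch of the dispersion element in claim 1: a linear
# wavelength-to-position mapping spreads the filtered return light
# across the light receiving elements, so the set of illuminated
# elements encodes the (temperature-dependent) emission wavelength.

def illuminated_pixels(wavelength_nm: float,
                       lambda0_nm: float = 905.0,
                       nm_per_pixel: float = 0.5,
                       center_pixel: int = 16,
                       spot_radius_px: float = 1.4,
                       n_pixels: int = 32) -> list[int]:
    """Indices of light receiving elements hit by the dispersed spot."""
    pos = center_pixel + (wavelength_nm - lambda0_nm) / nm_per_pixel
    return [i for i in range(n_pixels) if abs(i - pos) <= spot_radius_px]

cool = illuminated_pixels(904.0)  # shorter wavelength (low temperature)
hot = illuminated_pixels(906.0)   # longer wavelength (high temperature)
```

In this toy model each wavelength lands on three adjacent elements, satisfying the "two or more" dispersion of the claim, and the illuminated subset shifts with wavelength.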
  2.  請求項1に記載の測定装置であって、
     温度センサと、
     前記温度センサの温度データに対応する前記受光素子の受光データに基づいて距離を測定する制御部と、を備える測定装置。
    The measuring device according to claim 1, further comprising:
    a temperature sensor; and
    a control unit that measures a distance based on light reception data of the light receiving elements corresponding to temperature data of the temperature sensor.
  3.  請求項1に記載の測定装置であって、
     温度センサと、
     前記受光素子が所定の受光データを出力する状況下において、前記温度センサの温度データが変化すると、異なる距離を出力する制御部と、を備える測定装置。
    The measuring device according to claim 1, further comprising:
    a temperature sensor; and
    a control unit that, in a situation where the light receiving element outputs predetermined light reception data, outputs a different distance when the temperature data of the temperature sensor changes.
  4.  請求項2又は3に記載の測定装置であって、
     前記温度センサの前記温度データに対応付けた重みデータを記憶する記憶部を備え、
     前記制御部は、
      前記温度センサの前記温度データに対応する前記重みデータを前記記憶部から取得し、
      2以上の前記受光素子のそれぞれの受光データに、前記重みデータに応じた重み付けを行うことによって、前記距離を測定する、測定装置。
    The measuring device according to claim 2 or 3, further comprising a storage unit that stores weight data associated with the temperature data of the temperature sensor,
    wherein the control unit:
    acquires, from the storage unit, the weight data corresponding to the temperature data of the temperature sensor; and
    measures the distance by weighting the light reception data of each of the two or more light receiving elements according to the weight data.
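A minimal sketch of the temperature-indexed weighting described in claim 4 might look as follows. The weight table, element count, and data values are invented for illustration, and the final conversion of the weighted sum into a distance is omitted.

```python
# Minimal sketch of the weighting scheme in claim 4 (all values invented).

WEIGHTS_BY_TEMP = {   # storage unit: temperature (deg C) -> weight data
    20: [0.1, 0.8, 0.1],
    40: [0.0, 0.3, 0.7],  # hotter emitter -> longer wavelength -> other elements
}

def weighted_signal(temp_c: int, element_data: list[float]) -> float:
    """Weight each element's light reception data by the weights
    looked up for the current temperature, then sum."""
    weights = WEIGHTS_BY_TEMP[temp_c]
    return sum(w * d for w, d in zip(weights, element_data))

data = [10.0, 50.0, 30.0]        # light reception data of three elements
s20 = weighted_signal(20, data)  # 0.1*10 + 0.8*50 + 0.1*30 = 44.0
s40 = weighted_signal(40, data)  # 0.0*10 + 0.3*50 + 0.7*30 = 36.0
```

The same raw element data yields a different weighted signal at a different temperature, matching the behavior of claims 3 and 4.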
  5.  請求項1に記載の測定装置であって、
     前記分散素子は、メタマテリアルにより構成されている、測定装置。
    The measuring device according to claim 1,
    wherein the dispersion element is made of a metamaterial.
  6.  請求項5に記載の測定装置であって、
     前記分散素子は、前記フィルタを通過した光を前記受光センサに集光させる機能を有する、測定装置。
    The measuring device according to claim 5,
    wherein the dispersion element has a function of condensing light that has passed through the filter onto the light receiving sensor.
  7.  請求項5又は6に記載の測定装置であって、
     前記分散素子は、所定方向に振動する光を通過させつつ、前記所定方向と交差する方向に振動する光を吸収する機能を有する、測定装置。
    The measuring device according to claim 5 or 6,
    wherein the dispersion element has a function of passing light oscillating in a predetermined direction while absorbing light oscillating in a direction intersecting the predetermined direction.
  8.  請求項1に記載の測定装置であって、
     前記分散素子は、プリズム又は回折格子により構成されている、測定装置。
    The measuring device according to claim 1,
    wherein the dispersion element is configured by a prism or a diffraction grating.
  9.  請求項1に記載の測定装置であって、
     前記フィルタと前記分散素子との間に配置され、前記フィルタを通過した光を前記分散素子に集光する集光レンズを備える、測定装置。
    The measuring device according to claim 1, further comprising a condenser lens disposed between the filter and the dispersion element, the condenser lens condensing light that has passed through the filter onto the dispersion element.
  10.  温度に応じた波長の光を照射する発光部と、
     前記発光部から照射された光の反射光を通過させるバンドパスフィルタと、
     前記バンドパスフィルタを通過した光を受光する受光センサと
    を備え、
     温度に応じて前記バンドパスフィルタを通過可能な光の波長が変化する、
    測定装置。
    A measuring device comprising:
    a light emitting unit that emits light of a wavelength corresponding to temperature;
    a bandpass filter that passes reflected light of the light emitted from the light emitting unit; and
    a light receiving sensor that receives the light that has passed through the bandpass filter,
    wherein the wavelength of light that can pass through the bandpass filter changes depending on temperature.
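The matched drift in claim 10 — an emitter whose wavelength shifts with temperature paired with a bandpass filter whose passband shifts in step — can be sketched numerically. The drift coefficients and wavelengths below are assumptions for illustration, not values from the specification.

```python
# Numeric sketch of claim 10's matched drift: the emitter wavelength and
# the passband center both drift linearly with temperature at the same
# assumed rate, so a narrow passband keeps admitting the reflected signal.

def emitter_wavelength(temp_c: float, lambda0: float = 905.0,
                       drift: float = 0.07) -> float:
    """Emitted wavelength in nm; drift is nm per deg C (assumed)."""
    return lambda0 + drift * (temp_c - 25.0)

def passband(temp_c: float, center0: float = 905.0,
             drift: float = 0.07, half_width: float = 1.0):
    """(low, high) edges in nm of the temperature-tracking passband."""
    c = center0 + drift * (temp_c - 25.0)
    return (c - half_width, c + half_width)

def passes(wavelength_nm: float, band) -> bool:
    lo, hi = band
    return lo <= wavelength_nm <= hi

# Tracking filter: the signal passes at every temperature checked.
ok = all(passes(emitter_wavelength(t), passband(t))
         for t in range(-40, 105, 5))
```

By contrast, a fixed ±1 nm passband set for one temperature would reject the emitter's wavelength at a sufficiently different temperature, which is what the temperature-tracking filter avoids.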
  11.  請求項10に記載の測定装置であって、
     前記バンドパスフィルタは、基材と、前記基材の上に形成された薄膜とを有し、
     前記薄膜の表面には、複数の突起が所定のパターンで配置されており、
     温度に応じて、前記突起の間隔が変化する、
    測定装置。
    The measuring device according to claim 10,
    wherein the bandpass filter has a base material and a thin film formed on the base material,
    a plurality of protrusions are arranged in a predetermined pattern on a surface of the thin film, and
    the spacing between the protrusions changes depending on temperature.
  12.  請求項11に記載の測定装置であって、
     前記基材の線膨張係数は、前記薄膜の線膨張係数よりも大きい、測定装置。
    The measuring device according to claim 11,
    wherein the linear expansion coefficient of the base material is larger than the linear expansion coefficient of the thin film.
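A back-of-the-envelope sketch for claims 11 and 12: the protrusion pitch follows the thermal expansion of the base material (whose linear expansion coefficient exceeds the thin film's), and the transmitted wavelength of such a periodic structure is assumed here to scale linearly with the pitch. The coefficients are typical textbook-order values, not values from the patent.

```python
# Thermal-expansion sketch for claims 11-12 (all coefficients assumed).

ALPHA_RESIN = 70e-6  # 1/K, light-transmitting resin base material (assumed)
ALPHA_FILM = 5e-6    # 1/K, thin film (assumed smaller, per claim 12)

def pitch_nm(temp_c: float, p0: float = 600.0,
             alpha: float = ALPHA_RESIN, t_ref: float = 25.0) -> float:
    """Protrusion spacing, dominated by base-material expansion."""
    return p0 * (1.0 + alpha * (temp_c - t_ref))

def passband_center_nm(temp_c: float, scale: float = 905.0 / 600.0) -> float:
    """Assumed linear pitch-to-wavelength scaling."""
    return scale * pitch_nm(temp_c)

# Passband shift over a 60 K rise: 905 nm * 70e-6 * 60 K, roughly 3.8 nm.
shift = passband_center_nm(85.0) - passband_center_nm(25.0)
```

A few nanometres of passband shift over tens of kelvin is the same order as the temperature drift of typical laser-diode emission wavelengths, which is why pairing the two (claim 10) is plausible.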
  13.  請求項12に記載の測定装置であって、
     前記基材は、光透過性の樹脂で構成されている、測定装置。
    The measuring device according to claim 12,
    wherein the base material is made of a light-transmitting resin.
  14.  請求項11又は12に記載の測定装置であって、
     前記突起は、柱状に構成されており、
     複数の前記突起が格子状に配置されている、
    測定装置。
    The measuring device according to claim 11 or 12,
    wherein the protrusions are columnar, and
    a plurality of the protrusions are arranged in a grid pattern.
  15.  請求項11又は12に記載の測定装置であって、
     前記突起は、凸条に構成されており、
     複数の前記凸条が所定方向に並んで配置されている、
    測定装置。
    The measuring device according to claim 11 or 12,
    wherein the protrusions are configured as ridges, and
    a plurality of the ridges are arranged side by side in a predetermined direction.
  16.  請求項10~12のいずれかに記載の測定装置であって、
     前記受光センサに光を集光する集光レンズを更に備え、
     前記バンドパスフィルタは、前記集光レンズと前記受光センサとの間に配置されている、測定装置。
    The measuring device according to any one of claims 10 to 12, further comprising a condenser lens that condenses light onto the light receiving sensor,
    wherein the bandpass filter is disposed between the condenser lens and the light receiving sensor.
  17.  請求項10~12のいずれかに記載の測定装置であって、
     前記発光部は、所定の波長帯域の範囲で温度に応じた波長の光を照射し、
     前記所定の波長帯域の光を通過させ前記バンドパスフィルタとは別のバンドパスフィルタを更に備える、測定装置。
    The measuring device according to any one of claims 10 to 12,
    wherein the light emitting unit emits light of a wavelength corresponding to temperature within a predetermined wavelength band, and
    the measuring device further comprises another bandpass filter, different from said bandpass filter, that passes light in the predetermined wavelength band.
  18.  基材と、
     前記基材の上に形成された薄膜と、
    を有し、
     前記薄膜の表面には、複数の突起が所定のパターンで配置されており、
     温度に応じて、前記突起の間隔が変化することによって、通過可能な光の波長が変化する、バンドパスフィルタ。 
    A bandpass filter comprising:
    a base material; and
    a thin film formed on the base material,
    wherein a plurality of protrusions are arranged in a predetermined pattern on a surface of the thin film, and
    the wavelength of light that can pass through the bandpass filter changes as the spacing between the protrusions changes depending on temperature.
  19.  間隔をあけて配置された複数の発光素子を有する発光部と、
     光軸をシフトさせた複数のメタレンズを有し、或る前記メタレンズを介して照射される複数の前記発光素子の光の間に、別の前記メタレンズを介して前記発光素子からの光を照射する光学系とを備える、測定装置。
    A measuring device comprising:
    a light emitting section having a plurality of light emitting elements arranged at intervals; and
    an optical system having a plurality of metalenses whose optical axes are shifted from one another, the optical system irradiating, between the beams of light from the plurality of light emitting elements irradiated through one of the metalenses, light from a light emitting element through another of the metalenses.
  20.  請求項19に記載の測定装置であって、
     複数の前記メタレンズは、共通の平面上に設けられている、測定装置。
    The measuring device according to claim 19,
    wherein the plurality of metalenses are provided on a common plane.
  21.  請求項19又は20に記載の測定装置であって、
     所定方向に隣接する2つの前記メタレンズによって形成された或る前記発光素子の発光点の像の一部が重なり合う、測定装置。
    The measuring device according to claim 19 or 20,
    wherein images of a light emitting point of one of the light emitting elements, formed by two of the metalenses adjacent in a predetermined direction, partially overlap.
  22.  請求項19又は20に記載の測定装置であって、
     前記発光素子の発光点の直径をD、
     所定方向に並ぶ前記メタレンズの間隔をt、
    としたとき、
     t<Dの条件を満たす、測定装置。
    The measuring device according to claim 19 or 20,
    wherein, where
    D is the diameter of the light emitting point of the light emitting element, and
    t is the spacing between the metalenses arranged in a predetermined direction,
    the condition t < D is satisfied.
  23.  請求項19又は20に記載の測定装置であって、
     所定方向に隣接する2つの前記発光素子の発光点から照射され複数の前記メタレンズによって形成された像の一部が重なり合う、測定装置。
    The measuring device according to claim 19 or 20,
    wherein images formed by the plurality of metalenses from light emitted from the light emitting points of two of the light emitting elements adjacent in a predetermined direction partially overlap.
  24.  請求項19又は20に記載の測定装置であって、
     前記メタレンズの焦点距離をf、
     前記発光素子の発光点の直径をD、
     所定方向に並ぶ前記発光素子の間隔をP、
     前記所定方向に並ぶ前記メタレンズの間隔をt、
     前記所定方向に並ぶ前記メタレンズの数をN、
    としたとき、
    Figure JPOXMLDOC01-appb-M000001

    の条件を満たす、測定装置。
    The measuring device according to claim 19 or 20,
    wherein, where
    f is the focal length of the metalens,
    D is the diameter of the light emitting point of the light emitting element,
    P is the spacing between the light emitting elements arranged in a predetermined direction,
    t is the spacing between the metalenses arranged in the predetermined direction, and
    N is the number of the metalenses arranged in the predetermined direction,
    the measuring device satisfies the condition:
    Figure JPOXMLDOC01-appb-M000001
  25.  請求項19又は20に記載の測定装置であって、
     前記メタレンズの焦点距離をf、
     前記発光素子の発光点の直径をD、
     所定方向に並ぶ前記発光素子の間隔をP、
     前記所定方向に並ぶ前記メタレンズの間隔をt、
     前記所定方向に並ぶ前記メタレンズの数をN、
    としたとき、
     t<D
    且つ
    Figure JPOXMLDOC01-appb-M000002

    の条件を満たす、測定装置。
    The measuring device according to claim 19 or 20,
    wherein, where
    f is the focal length of the metalens,
    D is the diameter of the light emitting point of the light emitting element,
    P is the spacing between the light emitting elements arranged in a predetermined direction,
    t is the spacing between the metalenses arranged in the predetermined direction, and
    N is the number of the metalenses arranged in the predetermined direction,
    the measuring device satisfies both the condition
    t < D
    and the condition:
    Figure JPOXMLDOC01-appb-M000002
  26.  間隔をあけて配置された複数の発光素子を有する発光部と、
     光軸をシフトさせた複数のメタレンズを有し、或る前記メタレンズを介して照射される複数の前記発光素子の光の間に、別の前記メタレンズを介して前記発光素子からの光を照射する光学系とを備える、照射装置。
     
    An irradiation device comprising:
    a light emitting section having a plurality of light emitting elements arranged at intervals; and
    an optical system having a plurality of metalenses whose optical axes are shifted from one another, the optical system irradiating, between the beams of light from the plurality of light emitting elements irradiated through one of the metalenses, light from a light emitting element through another of the metalenses.
PCT/JP2023/020859 2022-06-30 2023-06-05 Measurement device, irradiation device, and band-pass filter WO2024004538A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2022106333A JP2024005894A (en) 2022-06-30 2022-06-30 Measurement device
JP2022106334A JP2024005895A (en) 2022-06-30 2022-06-30 Measurement device and irradiation device
JP2022-106333 2022-06-30
JP2022-106334 2022-06-30
JP2022127331A JP2024024483A (en) 2022-08-09 2022-08-09 Measurement device and bandpass filter
JP2022-127331 2022-08-09

Publications (1)

Publication Number Publication Date
WO2024004538A1 true WO2024004538A1 (en) 2024-01-04

Family

ID=89382776

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/020859 WO2024004538A1 (en) 2022-06-30 2023-06-05 Measurement device, irradiation device, and band-pass filter

Country Status (1)

Country Link
WO (1) WO2024004538A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120170114A1 (en) * 2011-01-04 2012-07-05 Triton Systems, Inc. Metamaterial filter
WO2019017243A1 (en) * 2017-07-18 2019-01-24 パイオニア株式会社 Optical device
US20200067278A1 (en) * 2018-08-22 2020-02-27 Samsung Electronics Co., Ltd. Back side emitting light source array device and electronic apparatus having the same
JP2021509956A (en) * 2018-01-05 2021-04-08 トルンプ フォトニック コンポーネンツ ゲゼルシャフト ミット ベシュレンクテル ハフツング Laser configuration with optical filter
US20210124019A1 (en) * 2019-10-24 2021-04-29 Lookit.ai Lidar optical system with flat optics and rotating mirror enabling 360-degree field-of-view at high frame rate, high spatial resolution and low power consumption
JP2021513087A (en) * 2018-02-13 2021-05-20 センス・フォトニクス, インコーポレイテッドSense Photonics, Inc. Methods and systems for high resolution long range flash LIDAR


Similar Documents

Publication Publication Date Title
CN103673887B Confocal measuring device
CN107192349B (en) Optical detection device
CN116893160A (en) Path resolved optical sampling architecture
US11940263B2 (en) Detector for determining a position of at least one object
KR20180095659A Lidar scanning device and lidar scanning device system
JP2022534950A (en) Active illumination system that changes the illumination wavelength according to the angle of view
US11061243B2 (en) Dichroic-mirror array
WO2024004538A1 (en) Measurement device, irradiation device, and band-pass filter
US20230333319A1 (en) Optical Signal Routing Devices and Systems
TWI431321B (en) Optical system and method for shaping a profile of a laser beam
US20230266167A1 (en) Optical sensor device
US10598910B2 (en) Waveguide for multispectral fusion
JP7091131B2 (en) Electromagnetic wave detection device and information acquisition system
KR20170015108A (en) Image sensor
US20220283271A1 (en) Beam scanning system for lidar
JP2024005894A (en) Measurement device
JP2024024483A (en) Measurement device and bandpass filter
KR102188699B1 (en) Optical element for spatial beam shaping
JP3593030B2 (en) Light emitting unit and photoelectric sensor
JP7171454B2 (en) INSPECTION DEVICE, LIGHTING DEVICE, AND SHADING SUPPRESSION METHOD
WO2022224851A1 (en) Electromagnetic wave detecting device
JP6775196B2 (en) Photodetector
JP2024005895A (en) Measurement device and irradiation device
JP2020046340A (en) Light projection device, light projection/reception device, ranging device, method of controlling light projection device, program, and recording medium
CN117859070A (en) Coaxial lidar system using diffractive waveguide

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23830988

Country of ref document: EP

Kind code of ref document: A1