WO2020156085A1 - Image processing device, imaging device, unmanned aerial vehicle, image processing method, and program - Google Patents

Image processing device, imaging device, unmanned aerial vehicle, image processing method, and program

Info

Publication number
WO2020156085A1
WO2020156085A1 (PCT/CN2020/071175)
Authority
WO
WIPO (PCT)
Prior art keywords
lens
image
positions
image processing
exposure period
Prior art date
Application number
PCT/CN2020/071175
Other languages
English (en)
French (fr)
Inventor
高宫诚
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN202080002874.4A priority Critical patent/CN112204947A/zh
Publication of WO2020156085A1 publication Critical patent/WO2020156085A1/zh

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/36Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B13/00Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B13/32Means for focusing
    • G03B13/34Power focusing
    • G03B13/36Autofocus systems
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B15/00Special procedures for taking photographs; Apparatus therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Definitions

  • The present invention relates to an image processing device, an imaging device, an unmanned aerial vehicle, an image processing method, and a program.
  • Patent Document 1 describes a technique that uses a distortion correction parameter table to correct the aberration of inputted screen coordinate pixels, and the distortion correction parameter table is used to store each pixel position coordinate data corresponding to lens parameters.
  • Patent Document 1 Japanese Patent Laid-Open No. 2011-61444.
  • If a lens included in the optical system, such as the focus lens, is moved, it may cause unexpected changes in the angle of view due to distortion and the like.
  • An image processing device includes an acquisition unit for acquiring information representing a plurality of positions of the lens that moves within the exposure period of the imaging element.
  • the image processing device includes a correction unit that corrects an image obtained by exposing the image sensor through the lens during an exposure period based on a plurality of positions.
  • the correction unit may calculate the average position of the lens during the exposure period based on a plurality of positions, and correct the distortion of the image based on the calculated average position of the lens.
  • the correction unit may correct the distortion of the image based on the average position of the lens and the distortion coefficient corresponding to the lens position.
  • The correction unit may calculate, based on the plurality of positions and the positions of the plurality of pixels included in the imaging element, a plurality of pixel positions on the image corresponding to the plurality of pixels, one for each combination of a lens position and a pixel, and may correct the image by calculating, for each pixel, the average of the plurality of pixel positions.
  • The imaging element may be exposed with exposure periods that differ among the plurality of pixel columns included in the imaging element.
  • The acquisition unit may acquire information indicating a plurality of positions of the lens that moves within the exposure period of each of the plurality of pixel columns.
  • The correction unit may correct the images respectively acquired by the plurality of pixel columns based on the plurality of positions within the exposure period of each of the plurality of pixel columns.
  • the lens may be a focusing lens movable in the direction of the optical axis.
  • the lens may reciprocate in the optical axis direction during the exposure period.
  • the acquiring unit may acquire information representing multiple positions of the reciprocating lens during the exposure period.
  • In order to adjust the focus of the optical system including the lens, the lens may be moved in one direction along the optical axis during the exposure period.
  • the acquiring unit may acquire information representing multiple positions of the lens moving in one direction during the exposure period.
  • The exposure period may be a period during which the imaging element is repeatedly exposed in order to acquire each of a plurality of moving image constituent images that constitute a moving image.
  • the correcting unit may correct each of the moving image constituting images based on a plurality of positions within the exposure period for each of the plurality of moving image constituting images.
  • the imaging device may include the above-mentioned image processing device.
  • the camera device may include an image sensor.
  • the lens may be a focusing lens movable in the direction of the optical axis.
  • the imaging device may include a focus adjustment part that adjusts the focus of the optical system including the lens based on the image corrected by the correction part.
  • An unmanned aerial vehicle according to one aspect of the present invention includes the above-described imaging device and moves.
  • An image processing method includes a stage of acquiring information indicating a plurality of positions of a lens that moves within an exposure period of an imaging element.
  • the image processing method includes a stage of correcting an image obtained by exposing an imaging element through a lens during an exposure period based on a plurality of positions.
  • the program according to one aspect of the present invention may be a program for causing a computer to function as the above-mentioned image processing apparatus.
  • FIG. 1 is a diagram showing an example of an external perspective view of an imaging device 100 according to this embodiment.
  • FIG. 2 is a diagram showing functional blocks of the imaging device 100 according to this embodiment.
  • FIG. 3 is a diagram schematically illustrating processing performed by the correction unit 140.
  • Fig. 4 shows an example of the distortion coefficient.
  • FIG. 5 schematically shows an example of the positional relationship between the coordinates on the image sensor 120 and the image coordinates (x, y).
  • FIG. 6 is a diagram illustrating a calculation method of the position average value of the focus lens 210.
  • FIG. 7 is a flowchart showing an example of the execution procedure of the imaging device 100.
  • FIG. 8 is a diagram illustrating another correction method executed by the correction unit 140.
  • FIG. 9 is a diagram schematically illustrating correction processing when reading pixel data by scroll reading.
  • FIG. 10 shows an unmanned aerial vehicle (UAV) equipped with a camera device 100.
  • UAV unmanned aerial vehicle
  • FIG. 11 shows an example of a computer 1200 that may fully or partially embody aspects of the present invention.
  • the blocks can represent (1) the stages of the process of performing operations or (2) the "parts" of the device that perform operations.
  • Specific stages and "parts" can be implemented by programmable circuits and/or processors.
  • Dedicated circuits may include digital and/or analog hardware circuits, and may include integrated circuits (ICs) and/or discrete circuits.
  • The programmable circuit may include a reconfigurable hardware circuit.
  • A reconfigurable hardware circuit may include logical operations such as logical AND, logical OR, logical XOR, logical NAND, and logical NOR, as well as memory elements such as flip-flops, registers, field-programmable gate arrays (FPGA), and programmable logic arrays (PLA).
  • the computer-readable medium may include any tangible device that can store instructions for execution by a suitable device.
  • the computer-readable medium with instructions stored thereon includes a product including instructions that can be executed to create means for performing operations specified by the flowchart or block diagram.
  • Examples of a computer-readable medium may include electronic storage media, magnetic storage media, optical storage media, electromagnetic storage media, semiconductor storage media, and the like.
  • More specific examples of a computer-readable medium may include floppy (registered trademark) disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), electrically erasable programmable read-only memory (EEPROM), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile discs (DVD), Blu-ray (registered trademark) discs, memory sticks, integrated circuit cards, and the like.
  • The computer-readable instructions may include either source code or object code described in any combination of one or more programming languages.
  • The source code or object code may be written in a traditional procedural programming language such as assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, or state-setting data, in an object-oriented programming language such as Smalltalk, JAVA (registered trademark), or C++, or in the "C" programming language or a similar programming language.
  • The computer-readable instructions may be provided, locally or via a local area network (LAN) or a wide area network (WAN) such as the Internet, to a processor or programmable circuit of a general-purpose computer, a special-purpose computer, or another programmable data processing device.
  • the processor or programmable circuit can execute computer-readable instructions to create means for performing the operations specified in the flowchart or block diagram.
  • Examples of processors include computer processors, processing units, microprocessors, digital signal processors, controllers, microcontrollers, and the like.
  • FIG. 1 is a diagram showing an example of an external perspective view of an imaging device 100 according to this embodiment.
  • FIG. 2 is a diagram showing functional blocks of the imaging device 100 according to this embodiment.
  • the imaging device 100 includes an imaging unit 102 and a lens unit 200.
  • the imaging unit 102 includes an image sensor 120, an image processing unit 104, an imaging control unit 110, a memory 130, an instruction unit 162, and a display unit 160.
  • the image sensor 120 is an imaging element such as CCD or CMOS.
  • the image sensor 120 receives light through an optical system included in the lens unit 200.
  • the image sensor 120 outputs image data of an optical image formed by the optical system of the lens unit 200 to the image processing unit 104.
  • the imaging control unit 110 and the image processing unit 104 may be constituted by a microprocessor such as a CPU or an MPU, a microcontroller such as an MCU, or the like.
  • The memory 130 may be a computer-readable recording medium, and may include at least one of SRAM, DRAM, EPROM, EEPROM, and flash memory such as USB memory.
  • the memory 130 stores a program necessary for the imaging control unit 110 to control the image sensor 120 and the like, a program necessary for the image processing unit 104 to execute image processing, and the like.
  • the memory 130 may be provided inside the housing of the imaging device 100.
  • The memory 130 may be configured to be detachable from the housing of the imaging device 100.
  • the instruction unit 162 is a user interface that accepts instructions to the imaging device 100 from the user.
  • the display unit 160 displays images captured by the image sensor 120 and processed by the image processing unit 104, various setting information of the imaging device 100, and the like.
  • the display part 160 may be composed of a touch panel.
  • the imaging control unit 110 controls the lens unit 200 and the image sensor 120.
  • the imaging control unit 110 controls the adjustment of the focal position and focal length of the optical system included in the lens unit 200.
  • the imaging control unit 110 outputs a control command to the lens control unit 220 included in the lens unit 200 based on the information indicating the user's instruction, thereby controlling the lens unit 200.
  • the lens unit 200 includes a focus lens 210, a zoom lens 211, a lens drive unit 212, a lens drive unit 213, a lens control unit 220, a memory 222, a position sensor 214, and a position sensor 215.
  • the focus lens 210 and the zoom lens 211 may include at least one lens.
  • the focus lens 210 and the zoom lens 211 are lenses included in the optical system. At least a part or all of the focus lens 210 and the zoom lens 211 are configured to be movable along the optical axis of the optical system.
  • the imaging control unit 110 includes a focus adjustment unit 112.
  • the focus adjustment unit 112 controls the focus lens 210 to adjust the focus of the optical system included in the lens unit 200.
  • the lens unit 200 may be an interchangeable lens provided to be detachable from the imaging unit 102.
  • the lens driving part 212 may include a driving device such as a stepping motor, a DC motor, a coreless motor, or an ultrasonic motor. At least a part or all of the focus lens 210 receives power from a driving device included in the lens driving unit 212 via a cam ring, a guide shaft, and other mechanism members, and moves.
  • the lens driving part 213 may include a driving device such as a stepping motor, a DC motor, a coreless motor, or an ultrasonic motor. At least a part or all of the zoom lens 211 receives power from a driving device included in the lens driving unit 213 via a cam ring, a guide shaft, and other mechanism members, and moves.
  • The lens control unit 220 drives at least one of the lens drive unit 212 and the lens drive unit 213 in accordance with a lens control command from the imaging unit 102, and moves at least one of the focus lens 210 and the zoom lens 211 along the optical axis direction via the mechanism members, thereby performing at least one of a zoom operation and a focus operation.
  • the lens control commands are, for example, zoom control commands and focus control commands.
  • the memory 222 stores control values for the focus lens and zoom lens that are moved by the lens drive unit 212.
  • The memory 222 may include at least one of SRAM, DRAM, EPROM, EEPROM, and flash memory such as USB memory.
  • the position sensor 214 detects the position of the focus lens 210.
  • the position sensor 215 detects the position of the zoom lens 211.
  • the position sensor 214 and the position sensor 215 may be magnetoresistive (MR) sensors or the like.
  • the imaging control unit 110 outputs a control command to the image sensor 120 based on information indicating an instruction from the user through the instruction unit 162 or the like, thereby causing the image sensor 120 to perform control including imaging operation control.
  • the image captured by the image sensor 120 is processed by the image processing unit 104 and stored in the memory 130.
  • the image acquired by the image sensor 120 is input to the image processing unit 104.
  • the correction unit 140 corrects the image acquired by the image sensor 120.
  • the display unit 160 displays the image corrected by the correction unit 140.
  • the memory 130 stores the image corrected by the correction unit 140.
  • the image corrected by the correction unit 140 may be transferred from the memory 130 to a recording medium such as a memory card.
  • the image processing unit 104 includes a correction unit 140 and an acquisition unit 142.
  • the acquisition section 142 acquires information representing a plurality of positions of the focus lens 210 that moves within the exposure period of the image sensor 120.
  • the acquisition unit 142 acquires information indicating a plurality of positions of the focus lens 210 from the focus adjustment unit 112.
  • the correction unit 140 corrects the image acquired by exposing the image sensor 120 through the focus lens 210 during the exposure period based on a plurality of positions.
  • the correction unit 140 calculates the average position of the focus lens 210 during the exposure period based on the acquired positions of the focus lens 210, and corrects the distortion of the image based on the calculated average position of the focus lens 210. Specifically, the correction unit 140 corrects the distortion of the image based on the average position of the focus lens 210 and the distortion coefficient corresponding to the position of the focus lens 210.
  • The correction unit 140 may calculate, based on the plurality of positions of the focus lens 210 and the positions of the plurality of pixels included in the image sensor 120, a pixel position on the image for each combination of a focus lens position and a pixel.
  • The correction unit 140 may then correct the image by calculating, for each pixel, the average of the calculated pixel positions.
  • The image sensor 120 may be exposed with exposure periods that differ among the plurality of pixel columns included in the image sensor 120.
  • For example, the imaging control unit 110 may read pixel information from the image sensor 120 by rolling readout.
  • the acquisition unit 142 acquires information indicating a plurality of positions of the focus lens 210 moved within the exposure period of each of the plurality of pixel columns.
  • The correction unit 140 corrects the images respectively acquired by the plurality of pixel columns based on the plurality of positions within the exposure period of each of the plurality of pixel columns.
  • the focus lens 210 reciprocates in the optical axis direction during the exposure period.
  • For example, the focus adjustment unit 112 causes the focus lens 210 to wobble within the exposure period.
  • the acquisition section 142 acquires information indicating a plurality of positions of the focus lens 210 that reciprocate within the exposure period.
  • the correction section 140 corrects the image based on the multiple positions of the focus lens 210 that reciprocate within the exposure period.
  • the focus lens 210 may be moved in one direction in the optical axis direction during the exposure period.
  • the acquisition section 142 acquires information representing a plurality of positions of the focus lens 210 moving in one direction within the exposure period.
  • the correction part 140 corrects the image based on a plurality of positions of the focus lens 210 moving in one direction within the exposure period.
  • The exposure period may be a period during which the image sensor 120 is repeatedly exposed in order to acquire each of a plurality of moving image constituent images constituting a moving image.
  • the correcting unit 140 may correct each of the moving image constituting images based on a plurality of positions within the exposure period for each of the plurality of moving image constituting images.
  • the focus adjustment unit 112 adjusts the focus of the optical system including the focus lens 210 based on the image corrected by the correction unit 140. For example, the focus adjustment unit 112 determines the position of the focus lens 210 in the optical axis direction based on the contrast value of the image corrected by the correction unit 140, and moves the focus lens 210 to the determined position.
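For illustration, the following is a minimal sketch of how a contrast value could be computed from the corrected image for contrast-detection focus control. The metric used here (variance of a discrete Laplacian) and the region-of-interest handling are assumptions for the example, not a metric specified by this description.

```python
import numpy as np

def contrast_value(image, roi=None):
    """Hypothetical contrast metric: variance of a discrete Laplacian inside
    an optional region of interest given as (top, bottom, left, right)."""
    if roi is not None:
        top, bottom, left, right = roi
        image = image[top:bottom, left:right]
    img = image.astype(np.float64)
    # 4-neighbour Laplacian of the interior pixels.
    lap = (img[1:-1, 2:] + img[1:-1, :-2] + img[2:, 1:-1] + img[:-2, 1:-1]
           - 4.0 * img[1:-1, 1:-1])
    return float(lap.var())
```

The focus adjustment unit would then move the focus lens toward the position that maximizes such a value.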
  • FIG. 3 is a diagram schematically illustrating processing performed by the correction unit 140.
  • FIG. 3 relates to the processing performed by the correction unit 140 when the exposure periods of all the horizontal pixel columns included in the image sensor 120 are the same; specifically, the processing when the image sensor 120 performs continuous shooting with global readout is described.
  • the image sensor 120 has N (N is a natural number) horizontal pixel columns.
  • the imaging control unit 110 exposes the horizontal pixel column 1 to the horizontal pixel column N included in the image sensor 120 at a timing based on the vertical synchronization signal VD.
  • FIG. 3 shows the exposure period from time t1 to time t7 and the exposure period from time t9 to time t15.
  • During the period from one trigger of the vertical synchronization signal VD to the next, the focus adjustment unit 112 detects the position of the focus lens 210 in the optical axis direction multiple times. Specifically, the focus adjustment unit 112 detects the position of the focus lens 210 in the optical axis direction a predetermined number of times at a predetermined time interval, starting from the vertical synchronization signal VD at time t0. Likewise, the focus adjustment unit 112 detects the position of the focus lens 210 in the optical axis direction a predetermined number of times at a predetermined time interval, starting from the vertical synchronization signal VD at time t7.
  • the lens control unit 220 detects LP1, LP2, LP3, LP4, LP5, LP6, and LP7 during the exposure period from time t1 to time t7.
  • the lens control unit 220 detects LP9, LP10, LP11, LP12, LP13, LP14, and LP15 during the exposure period from time t9 to time t15.
  • LPi (i is a natural number) represents the position of the focus lens 210 in the optical axis direction.
  • In response to the vertical synchronization signal VD at time t7, the focus adjustment unit 112 obtains from the lens control unit 220 lens position information indicating LP1, LP2, LP3, LP4, LP5, LP6, and LP7 detected during the exposure period from time t1 to time t7, and outputs it to the image processing unit 104.
  • the acquisition unit 142 acquires lens position information output from the focus adjustment unit 112.
  • the correction unit 140 calculates the average value of LP1, LP2, LP3, LP4, LP5, LP6, and LP7 based on the lens position information.
  • the correction unit 140 calculates the distortion coefficient based on the average value of LP1, LP2, LP3, LP4, LP5, LP6, and LP7.
  • The distortion coefficient is information indicating the distortion determined according to the position of the focus lens 210; the distortion coefficient is described in relation to FIG. 4 and elsewhere.
  • The average value is a value calculated by dividing the sum of LP1 to LP7 by the time from time t1 to time t7; the specific calculation method of the average is described in relation to FIG. 6 and elsewhere.
  • the image processing unit 104 acquires the pixel data 310 of the horizontal pixel column 1 to the horizontal pixel column N from the image sensor 120 based on the vertical synchronization signal VD at time t7.
  • the image processing unit 104 corrects the image of the pixel data 310 based on the pixel data 310 acquired from the image sensor 120 and the distortion coefficient corresponding to the average position of the focus lens 210 to generate a corrected image 311.
  • the corrected image 311 generated by the correction unit 140 is stored in the memory 130 and is output to the display unit 160 as a display image of the display unit 160.
  • Similarly, the focus adjustment unit 112 obtains from the lens control unit 220 lens position information indicating LP9, LP10, LP11, LP12, LP13, LP14, and LP15 detected during the exposure period from time t9 to time t15, and outputs it to the image processing unit 104.
  • the acquisition unit 142 acquires lens position information output from the focus adjustment unit 112.
  • the correction unit 140 calculates the average value of LP9, LP10, LP11, LP12, LP13, LP14, and LP15 based on the lens position information.
  • the correction unit 140 calculates the distortion coefficient based on the average value of LP9, LP10, LP11, LP12, LP13, LP14, and LP15.
  • the image processing unit 104 acquires the pixel data 320 of the horizontal pixel column 1 to the horizontal pixel column N from the image sensor 120 based on the vertical synchronization signal VD at time t15.
  • the image processing unit 104 corrects the image of the pixel data 320 based on the pixel data 320 acquired from the image sensor 120 and the distortion coefficient corresponding to the average position of the focus lens 210 to generate a corrected image 321.
  • the corrected image 321 generated by the correction unit 140 is stored in the memory 130 and is output to the display unit 160 as a display image of the display unit 160.
  • Based on the vertical synchronization signal VD, the imaging device 100 performs one round of exposure processing, image data readout processing, image correction processing, and processing for display on the display unit 160 as described above. The imaging device 100 repeatedly executes the above-described processing for each vertical synchronization signal VD.
  • Fig. 4 shows an example of the distortion coefficient.
  • the horizontal axis of FIG. 4 is the position of the focus lens 210, and the vertical axis is the distortion coefficient value.
  • the distortion coefficients include k1, k2, and k3.
  • The memory 130 stores distortion coefficient data indicating the dependence of k1, k2, and k3 on the position of the focus lens 210.
  • the distortion coefficient data can be calculated in advance based on the lens design data of the optical system of the lens unit 200 and stored in the memory 130.
  • the distortion coefficient data may also be calculated in advance through experiments and stored in the memory 130.
  • the distortion coefficient data of k1, k2, and k3 may be data representing a function with the position of the focus lens 210 as a variable.
  • Such a function can be obtained by fitting the pre-calculated distortion coefficients k1, k2, and k3 to a function with the position of the focus lens 210 as a variable.
  • the distortion coefficient data of k1, k2, and k3 may be mapping data for mapping the position of the focus lens 210 to k1, k2, and k3.
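As a concrete illustration of distortion coefficient data stored as a mapping from lens position to k1, k2, and k3, the sketch below interpolates a hypothetical table. The lens positions and coefficient values are placeholders for illustration, not values taken from this patent.

```python
import numpy as np

# Hypothetical table: k1, k2, k3 sampled at several focus-lens positions
# (in practice derived from lens design data or calibration, per the text).
LENS_POSITIONS = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
K1_TABLE = np.array([-0.020, -0.015, -0.010, -0.006, -0.003])
K2_TABLE = np.array([0.004, 0.003, 0.002, 0.001, 0.000])
K3_TABLE = np.array([0.000, 0.000, -0.001, -0.001, -0.002])

def distortion_coefficients(lens_position):
    """Return (k1(LP), k2(LP), k3(LP)) by linear interpolation of the table."""
    k1 = float(np.interp(lens_position, LENS_POSITIONS, K1_TABLE))
    k2 = float(np.interp(lens_position, LENS_POSITIONS, K2_TABLE))
    k3 = float(np.interp(lens_position, LENS_POSITIONS, K3_TABLE))
    return k1, k2, k3
```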
  • The distortion coefficients k1, k2, and k3 can be used to express the relationship between the coordinates (x_distorted, y_distorted) on the image sensor 120 and the normalized image coordinates (x, y) with Equation 1 below.
  • the coordinates (x, y) represent the coordinates in the corrected image.
  • LP in Equation 1 represents the position of the focus lens 210.
  • the distortion coefficients k1, k2, and k3 depend on LP, and therefore, are expressed as k1(LP), k2(LP), and k3(LP) in Equation 1.
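The original Equation 1 appears only as an image in this text and is not reproduced here. A plausible reconstruction, assuming the standard three-coefficient radial distortion model relating the normalized image coordinates (x, y) to the sensor coordinates (x_distorted, y_distorted), would be:

```latex
% Assumed form of Equation 1 (standard radial model); the text only states
% that k1, k2 and k3 depend on the focus lens position LP.
r^2 = x^2 + y^2, \qquad
x_{\mathrm{distorted}} = x\left(1 + k_1(LP)\,r^2 + k_2(LP)\,r^4 + k_3(LP)\,r^6\right), \qquad
y_{\mathrm{distorted}} = y\left(1 + k_1(LP)\,r^2 + k_2(LP)\,r^4 + k_3(LP)\,r^6\right)
```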
  • FIG. 5 schematically shows an example of the positional relationship between the coordinates (x_distorted, y_distorted) on the image sensor 120 and the image coordinates (x, y).
  • The correction unit 140 applies the coordinates of each pixel in the pixel data to (x_distorted, y_distorted) in Equation 1, applies the values of k1, k2, and k3 calculated from the average position of the focus lens 210 and the above-described distortion coefficient data to k1(LP), k2(LP), and k3(LP), and calculates the coordinates (x, y) that satisfy Equation 1. The correction unit 140 then uses the pixel value at the coordinates (x_distorted, y_distorted) in the image data acquired from the image sensor 120 as the pixel value at the coordinates (x, y), thereby generating a corrected image.
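A minimal sketch of this correction, assuming the radial model given above and simple nearest-neighbour sampling. The normalization by half the image size and the helper names are illustrative assumptions, not details from the patent.

```python
import numpy as np

def undistort_image(raw, k1, k2, k3):
    """For each pixel of the corrected image, evaluate the (assumed) radial
    model to find the corresponding sensor coordinate (x_distorted,
    y_distorted) and copy that sensor pixel value (nearest neighbour)."""
    h, w = raw.shape[:2]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    scale = max(cx, cy)  # crude normalization of image coordinates

    ys, xs = np.mgrid[0:h, 0:w]            # corrected-image pixel grid
    x = (xs - cx) / scale
    y = (ys - cy) / scale
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3

    xd = np.clip(np.rint(x * factor * scale + cx).astype(int), 0, w - 1)
    yd = np.clip(np.rint(y * factor * scale + cy).astype(int), 0, h - 1)
    return raw[yd, xd]
```

For example, combined with the coefficient lookup sketched earlier, one could write corrected = undistort_image(raw, *distortion_coefficients(lp_average)). Filling the corrected image by sampling the sensor image is the usual way the per-pixel assignment described above is implemented in practice.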
  • the correction unit 140 uses the distortion coefficient determined according to the average position of the focus lens 210 during the exposure period to correct the image. As a result, as shown in FIG. 3, it is possible to generate the corrected image 311 and the corrected image 321 in which the change in the angle of view caused by the movement of the focus lens 210 is suppressed.
  • FIG. 6 is a diagram for explaining a calculation method of the position average value of the focus lens 210.
  • In FIG. 6, T is the exposure time, that is, the time from time t1 to time t7.
  • Td is the time interval at which the position of the focus lens 210 is detected.
  • In the example shown in FIG. 6, T = 6Td.
  • the position average value of the focus lens 210 is a value calculated from the time average value of LP1 to LP7.
  • The weighted sum of LP1 to LP7 is divided by T to calculate the average position of the focus lens 210. The weighted sum of LP1 to LP7 is calculated as Σ αi × LPi, where i is a natural number from 1 to 7.
  • αi is the weight coefficient of the weighted sum.
  • The sum of α1 to α7 is T.
  • αi may be specified based on how much of the period from time ti - Td/2 to time ti + Td/2 is included in the exposure period. Specifically, α1 and α7 are Td/2, and α2 to α6 are Td.
  • The weight coefficient αi is determined according to the time at which the position of the focus lens 210 was detected, so that the time average of the position of the focus lens 210 can be calculated.
  • Alternatively, the average value may be a value calculated by dividing the sum of LP1 to LP7 by 7. That is, the average value may be a value calculated by dividing the sum of the positions of the focus lens 210 detected during the exposure period by the number of detections of the position of the focus lens 210.
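A small sketch of the two averaging methods described above (the time-weighted average of FIG. 6 and the simple arithmetic mean); the function signatures and the example values are illustrative.

```python
def weighted_average_position(samples, td, exposure_time):
    """Time-weighted average of LP1..LPn sampled every td during an exposure
    of length exposure_time: the first and last samples get weight td/2, the
    inner samples weight td (the FIG. 6 weighting), and the weighted sum is
    divided by the exposure time T."""
    weights = [td] * len(samples)
    weights[0] = weights[-1] = td / 2.0
    return sum(w * lp for w, lp in zip(weights, samples)) / exposure_time

def simple_average_position(samples):
    """Arithmetic mean: sum of detected positions divided by their number."""
    return sum(samples) / len(samples)

# Example: 7 samples at interval Td over an exposure T = 6*Td.
lp = [100, 102, 104, 103, 101, 99, 98]
avg = weighted_average_position(lp, td=1.0, exposure_time=6.0)
```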
  • FIG. 7 is a flowchart showing an example of the execution procedure of the imaging device 100. When the vertical synchronization signal VD is triggered, this flowchart starts.
  • the focus adjustment unit 112 outputs a timing signal for detecting the position of the focus lens 210 to the lens control unit 220, thereby detecting the position of the focus lens 210.
  • the imaging control unit 110 starts to expose the image sensor 120.
  • the lens control unit 220 detects the position of the focus lens 210 based on the timing signal output from the focus adjustment unit 112.
  • the imaging control unit 110 determines whether to end the exposure. For example, when a trigger of a new vertical synchronization signal VD is detected, the imaging control unit 110 determines that the exposure has ended. Until it is determined that the exposure is ended, the image sensor 120 is kept exposed, and the process of S604 is repeatedly executed.
  • the correction unit 140 obtains the lens position information of the focus lens 210 from the lens control unit 220 through the focus adjustment unit 112, and calculates the average position of the focus lens 210.
  • the correction unit 140 acquires pixel data output from the image sensor 120.
  • The correction unit 140 calculates the image coordinates (x, y) corresponding to the pixel coordinates (x_distorted, y_distorted) of the image sensor 120 based on the distortion coefficient data.
  • The correction unit 140 assigns the pixel value at each coordinate of the pixel data output from the image sensor 120 to the pixel value at the image coordinates calculated in S612, thereby generating a corrected image.
  • the image processing unit 104 stores the corrected image generated by the correction unit 140 in the memory 130. After the processing of S616 is completed, the processing of this flowchart ends.
  • The corrected image stored in the memory 130 in S616 is output to the display unit 160, for example, as a moving image constituent image for display.
  • Alternatively, the corrected image stored in the memory 130 is recorded in the memory 130 as a moving image constituent image of moving image data.
  • the correction unit 140 may associate the image coordinates calculated in S612 with the average position of the focus lens 210 and store them in the memory 130.
  • the correction unit 140 may use the image coordinates stored in the memory 130 without performing the processing of S612. This can reduce the amount of calculation used to calculate the image coordinates.
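Putting the flowchart steps together, the per-frame flow might look like the following sketch. The sensor, lens, and memory helper objects and every method on them are hypothetical; only the step numbers S604, S612, and S616 are taken from the text, and the helpers weighted_average_position, distortion_coefficients, and undistort_image are the ones sketched earlier.

```python
def process_one_frame(sensor, lens, memory, td):
    """One vertical-sync period: expose, sample lens positions, correct."""
    positions = []
    sensor.start_exposure()
    while not sensor.exposure_finished():        # until the next VD trigger
        positions.append(lens.read_position())   # S604: detect lens position
    lp_avg = weighted_average_position(
        positions, td, exposure_time=td * (len(positions) - 1))
    k1, k2, k3 = distortion_coefficients(lp_avg)
    raw = sensor.read_pixels()
    corrected = undistort_image(raw, k1, k2, k3)  # S612: map pixel coordinates
    memory.store(corrected)                       # S616: store corrected image
    return corrected
```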
  • the correction unit 140 corrects image distortion based on the average position of the focus lens 210 during the exposure period. It is thereby possible to generate a corrected image in which the influence of the change in the angle of view that occurs due to the movement of the focus lens 210 during the exposure period is suppressed.
  • FIG. 8 is a diagram illustrating another correction method executed by the correction unit 140.
  • FIG. 8 shows an enlarged part of the image coordinates, illustrating the relationship between the pixel coordinates (x_distorted, y_distorted) of the image sensor 120 and the corrected image coordinates (x, y).
  • the correction unit 140 calculates the image coordinates (x, y) based on each position of the focus lens 210 detected during the exposure period.
  • the correction processing performed based on other correction methods will be described with reference to the example shown in FIG. 3.
  • The correction unit 140 calculates image coordinates (x, y) based on each of LP1, LP2, LP3, LP4, LP5, LP6, and LP7 in the exposure period from time t1 to time t7. Specifically, the correction unit 140 applies the coordinates of a pixel in the pixel data acquired from the image sensor 120 to (x_distorted, y_distorted) in Equation 1, applies k1, k2, and k3 calculated from LP1 and the distortion coefficient data to k1(LP), k2(LP), and k3(LP), and calculates the coordinates (x, y) that satisfy Equation 1.
  • the calculated coordinates (x, y) are shown as (x1, y1) in FIG. 8.
  • Similarly, the correction unit 140 applies the coordinates of the pixel in the pixel data to (x_distorted, y_distorted) in Equation 1, applies k1, k2, and k3 calculated from LP2 and the distortion coefficient data to k1(LP), k2(LP), and k3(LP), and calculates the coordinates (x, y) that satisfy Equation 1.
  • the calculated coordinates (x, y) are shown as (x2, y2) in FIG. 8.
  • The correction unit 140 then uses the pixel value at the coordinates (x_distorted, y_distorted) in the pixel data as the pixel value at the average coordinates (x, y) of (xi, yi) (i is a natural number from 1 to 7).
  • the correction unit 140 performs the same processing on each pixel of the pixel data acquired from the image sensor 120 to generate a corrected image.
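A sketch of this per-position variant: for one sensor pixel, a corrected coordinate is computed once per detected lens position LPi (here by numerically inverting the assumed radial model) and the coordinates are then averaged. The fixed-point inversion and the coeff_fn callback are illustrative assumptions.

```python
def invert_radial_model(xd, yd, k1, k2, k3, iterations=10):
    """Invert x_d = x*(1 + k1*r^2 + k2*r^4 + k3*r^6) for one normalized point
    by fixed-point iteration (adequate for moderate distortion)."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        x, y = xd / factor, yd / factor
    return x, y

def average_corrected_coordinate(xd, yd, lens_positions, coeff_fn):
    """One corrected coordinate (xi, yi) per detected lens position LPi,
    averaged; coeff_fn(LP) returns (k1, k2, k3), e.g. from a stored table."""
    coords = [invert_radial_model(xd, yd, *coeff_fn(lp)) for lp in lens_positions]
    xs, ys = zip(*coords)
    return sum(xs) / len(xs), sum(ys) / len(ys)
```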
  • The correction method described in relation to FIG. 8 can also provide a corrected image in which the influence of the change in the angle of view due to the movement of the focus lens 210 during the exposure period is suppressed. Compared with the correction method described in relation to FIG. 3 and FIG. 5, it can generate a corrected image in which the influence of the change in the angle of view is further reduced.
  • FIG. 9 is a diagram schematically illustrating the correction processing of the correction unit 140 when pixel data is read from the image sensor 120 by rolling readout.
  • The imaging control unit 110 performs exposure while sequentially shifting the exposure start times of horizontal pixel column 1 through horizontal pixel column N. Therefore, the exposure period differs among the pixel columns included in the image sensor 120.
  • the exposure period of the horizontal pixel column 1 is from time t1 to time t7.
  • the exposure period of the horizontal pixel column N is from time t7 to time t13.
  • The exposure period of horizontal pixel column i is from time t1 + ΔT × (i - 1) to time t7 + ΔT × (i - 1) (i is a natural number from 1 to N).
  • ΔT is the interval between the exposure start times of adjacent horizontal pixel columns.
  • the positions of the focus lens 210 detected during the exposure period of the horizontal pixel column 1 are LP1, LP2, LP3, LP4, LP5, LP6, and LP7.
  • The correction unit 140 calculates the distortion coefficients k1, k2, and k3 corresponding to the average value of LP1, LP2, LP3, LP4, LP5, LP6, and LP7, based on that average value and the distortion coefficient data.
  • the correction unit 140 applies the calculated distortion coefficients k1, k2, and k3 to Equation 1, and corrects the pixel data 810 of the horizontal pixel column 1 to generate corrected pixel data 811.
  • the positions of the focus lens 210 detected during the exposure period of the horizontal pixel column 2 are LP2, LP3, LP4, LP5, LP6, and LP7.
  • the correction unit 140 calculates the distortion coefficients k1, k2, and k3 corresponding to the average value of LP2, LP3, LP4, LP5, LP6, and LP7 based on the average value of LP2, LP3, LP4, LP5, LP6, and LP7 and the distortion coefficient data.
  • The correction unit 140 applies the calculated distortion coefficients k1, k2, and k3 to Equation 1, and corrects the pixel data of horizontal pixel column 2 to generate corrected pixel data.
  • Here, each average value is a value calculated by dividing the weighted sum of the detected lens positions by the length of the corresponding exposure period; specifically, it is calculated by the method described in relation to FIG. 6 and elsewhere.
  • the correction unit 140 performs the same processing on the horizontal pixel column 3 to the horizontal pixel column N, and generates corrected pixel data.
  • The positions of the focus lens 210 detected during the exposure period of horizontal pixel column N are LP7, LP8, LP9, LP10, LP11, LP12, and LP13.
  • The correction unit 140 calculates the average value of LP7, LP8, LP9, LP10, LP11, LP12, and LP13, and calculates the distortion coefficients k1, k2, and k3 corresponding to the calculated average value based on that average value and the distortion coefficient data.
  • the correction unit 140 applies the calculated distortion coefficients k1, k2, and k3 to Equation 1, and corrects the pixel data 820 of the horizontal pixel column N, thereby generating corrected pixel data 821.
  • the correction unit 140 generates one corrected image based on corrected pixel data generated from each horizontal pixel column.
  • For a group of pixel columns for which the same positions of the focus lens 210 were detected during the exposure period, the correction unit 140 applies the distortion coefficients k1, k2, and k3 based on the average position of the focus lens 210 during that exposure period to Equation 1, and generates corrected pixel data.
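A sketch of how per-column averages for rolling readout could be obtained: only the lens-position samples that fall inside a given pixel column's own exposure window are averaged. The timing parameters, the use of a simple arithmetic mean, and the fallback for an empty window are assumptions for the example.

```python
def per_column_average_positions(column_count, delta_t, exposure_time,
                                 sample_times, sample_positions):
    """Average lens position for each horizontal pixel column i whose exposure
    window is [i*delta_t, i*delta_t + exposure_time] (time measured from the
    start of the first column's exposure)."""
    averages = []
    for i in range(column_count):
        start = i * delta_t
        end = start + exposure_time
        inside = [lp for t, lp in zip(sample_times, sample_positions)
                  if start <= t <= end]
        if not inside:                 # no sample in this window: reuse previous
            inside = [averages[-1]] if averages else [sample_positions[0]]
        averages.append(sum(inside) / len(inside))
    return averages
```

Each column's average would then be converted to distortion coefficients and applied to Equation 1 for that column's pixel data, as described above.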
  • FIG. 9 and the related description explain correction processing, based on the average value of the positions of the focus lens 210, of image data read out by rolling readout.
  • However, the correction method described in relation to FIG. 8 and elsewhere can also be applied to the correction of image data read out by rolling readout.
  • In this case, the correction unit 140 may calculate, for each pixel column, the image coordinates corresponding to each position of the focus lens 210, and assign the pixel value of the corrected image to the average of the calculated image coordinates.
  • With the imaging device 100 described above, it is possible to provide an image in which the influence of the change in the angle of view caused by the movement of the focus lens 210 is suppressed.
  • This effect is particularly useful for a lens device that is small relative to the size of the image sensor.
  • When the optical system is miniaturized relative to the size of the image sensor, the effect of distortion caused by the movement of the focus lens becomes significant. Therefore, for example, if the focus lens wobbles during live view shooting, a change in the angle of view accompanying the wobbling may be observed in the live view image.
  • With the imaging device 100, the influence of the change in the angle of view caused by the movement of the focus lens 210 can be suppressed, so that the change in the angle of view caused by the wobbling is not easily observed in the live view image.
  • In addition, since the contrast value used for focus control is detected from the corrected image, changes in the image area targeted for contrast value detection due to the movement of the focus lens 210 can be suppressed. Therefore, focus control based on the contrast value can be performed more accurately.
  • The processing described in relation to the imaging device 100 of this embodiment is applicable not only to the wobbling of the focus lens 210 but also to the processing when the focus lens 210 is moved in one direction during the exposure period.
  • The processing described in relation to the imaging device 100 of this embodiment is applicable not only to images during live view shooting but also to correction processing when generating moving image data for recording and correction processing when generating still image data for recording.
  • The processing described in relation to the imaging device 100 of this embodiment is also applicable to the movement of a lens other than the focus lens 210. That is, the correction unit 140 corrects the image acquired by exposing the image sensor 120 during the relevant exposure period based on a plurality of positions of a lens other than the focus lens 210 that moves during the exposure period of the image sensor 120.
  • the aforementioned imaging device 100 may be mounted on a mobile body.
  • The imaging device 100 may also be mounted on an unmanned aerial vehicle (UAV) as shown in FIG. 10.
  • The UAV 10 may include a UAV main body 20, a gimbal 50, a plurality of imaging devices 60, and the imaging device 100.
  • The gimbal 50 and the imaging device 100 are an example of an imaging system.
  • The UAV 10 is an example of a mobile body propelled by a propulsion unit.
  • The concept of a mobile body includes, in addition to UAVs, other aircraft moving in the air, vehicles moving on the ground, ships moving on the water, and the like.
  • the UAV main body 20 includes a plurality of rotors. Multiple rotors are an example of a propulsion section.
  • the UAV main body 20 makes the UAV 10 fly by controlling the rotation of a plurality of rotors.
  • the UAV main body 20 uses, for example, four rotors to fly the UAV 10. The number of rotors is not limited to four.
  • The UAV 10 can also be a fixed-wing aircraft without rotors.
  • the imaging device 100 is an imaging camera that captures a subject included in a desired imaging range.
  • The gimbal 50 rotatably supports the imaging device 100.
  • The gimbal 50 is an example of a supporting mechanism.
  • For example, the gimbal 50 uses an actuator to rotatably support the imaging device 100 around the pitch axis.
  • The gimbal 50 uses actuators to further rotatably support the imaging device 100 around the roll axis and the yaw axis, respectively.
  • the gimbal 50 can change the posture of the imaging device 100 by rotating the imaging device 100 around at least one of the yaw axis, the pitch axis, and the roll axis.
  • the plurality of imaging devices 60 are sensing cameras that photograph the surroundings of the UAV 10 in order to control the flight of the UAV 10.
  • the two camera devices 60 can be installed on the nose of the UAV 10, that is, on the front.
  • the other two camera devices 60 may be provided on the bottom surface of the UAV 10.
  • the two camera devices 60 on the front side may be paired to function as a so-called stereo camera.
  • the two imaging devices 60 on the bottom side may also be paired to function as a stereo camera.
  • the three-dimensional spatial data around the UAV 10 can be generated based on the images captured by the plurality of camera devices 60.
  • the number of imaging devices 60 included in the UAV 10 is not limited to four.
  • the UAV 10 may include at least one camera device 60.
  • the UAV 10 may also include at least one camera 60 on the nose, tail, side, bottom and top surfaces of the UAV 10, respectively.
  • the viewing angle that can be set in the camera device 60 may be larger than the viewing angle that can be set in the camera device 100.
  • the imaging device 60 may have a single focus lens or a fisheye lens.
  • the remote operation device 300 communicates with the UAV 10 to remotely operate the UAV 10.
  • the remote operation device 300 can perform wireless communication with the UAV 10.
  • The remote operation device 300 transmits to the UAV 10 instruction information indicating various commands related to the movement of the UAV 10, such as ascending, descending, accelerating, decelerating, moving forward, moving backward, and rotating.
  • the instruction information includes, for example, instruction information for raising the height of the UAV 10.
  • The instruction information may indicate the altitude at which the UAV 10 should be located.
  • the UAV 10 moves to be located at the height indicated by the instruction information received from the remote operation device 300.
  • The instruction information may include an ascending instruction to raise the UAV 10. The UAV 10 ascends while receiving the ascending instruction. When the altitude of the UAV 10 has reached its upper limit, the ascent of the UAV 10 may be restricted even if the ascending instruction is accepted.
  • FIG. 11 shows an example of a computer 1200 that can embody various aspects of the present invention in whole or in part.
  • A program installed on the computer 1200 can cause the computer 1200 to function as operations associated with the device according to the embodiment of the present invention or as one or more "parts" of that device.
  • For example, a program installed on the computer 1200 can cause the computer 1200 to function as the correction unit 140 or as the image processing unit 104.
  • The program can cause the computer 1200 to execute the relevant operations or the relevant functions of one or more "parts".
  • The program can cause the computer 1200 to execute the processes, or stages of the processes, involved in the embodiment of the present invention.
  • Such a program may be executed by the CPU 1212, so that the computer 1200 executes specified operations associated with some or all blocks in the flowcharts and block diagrams described in this specification.
  • the computer 1200 of this embodiment includes a CPU 1212 and a RAM 1214, which are connected to each other through a host controller 1210.
  • The computer 1200 also includes a communication interface 1222 and an input/output unit, which are connected to the host controller 1210 through an input/output controller 1220.
  • the computer 1200 also includes a ROM 1230.
  • the CPU 1212 operates according to the programs stored in the ROM 1230 and RAM 1214 to control each unit.
  • the communication interface 1222 communicates with other electronic devices via a network.
  • A hard disk drive may store programs and data used by the CPU 1212 in the computer 1200.
  • the ROM 1230 stores therein a boot program executed by the computer 1200 during operation, and/or a program that depends on the hardware of the computer 1200.
  • The program is provided via a computer-readable recording medium such as a CD-ROM, a USB memory, or an IC card, or via a network.
  • the program is installed in RAM 1214 or ROM 1230 which is also an example of a computer-readable recording medium, and is executed by CPU 1212.
  • The information processing described in these programs is read by the computer 1200 and brings about cooperation between the programs and the various types of hardware resources described above.
  • An apparatus or method may be constituted by realizing operations or processing of information in accordance with the use of the computer 1200.
  • the CPU 1212 may execute a communication program loaded in the RAM 1214, and based on the processing described in the communication program, instruct the communication interface 1222 to perform communication processing.
  • Under the control of the CPU 1212, the communication interface 1222 reads transmission data stored in a transmission buffer provided in a recording medium such as the RAM 1214 or a USB memory, transmits the read transmission data to the network, and writes reception data received from the network into a reception buffer provided in the recording medium.
  • the CPU 1212 can make the RAM 1214 read all or necessary parts of files or databases stored in an external recording medium such as a USB memory, and perform various types of processing on the data on the RAM 1214. Then, the CPU 1212 can write the processed data back to the external recording medium.
  • The CPU 1212 can perform, on the data read from the RAM 1214, various types of processing described throughout this disclosure and specified by the instruction sequence of the program, including various types of operations, information processing, conditional judgment, conditional branching, unconditional branching, and information retrieval/replacement, and write the results back to the RAM 1214.
  • In addition, the CPU 1212 can search for information in files, databases, and the like in the recording medium. For example, when a plurality of entries, each having an attribute value of a first attribute associated with an attribute value of a second attribute, are stored in the recording medium, the CPU 1212 may search the plurality of entries for an entry whose attribute value of the first attribute matches a specified condition, read the attribute value of the second attribute stored in that entry, and thereby obtain the attribute value of the second attribute associated with the first attribute satisfying the predetermined condition.
  • the above-mentioned programs or software modules may be stored on the computer 1200 or on a computer-readable storage medium near the computer 1200.
  • a recording medium such as a hard disk or RAM provided in a server system connected to a dedicated communication network or the Internet can be used as a computer-readable storage medium so that the program can be provided to the computer 1200 via the network.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Studio Devices (AREA)
  • Automatic Focus Adjustment (AREA)
  • Focusing (AREA)

Abstract

If a lens included in an optical system, such as a focus lens, is moved, unexpected changes in the angle of view may occur due to distortion and the like. An image processing device of the present invention includes: an acquisition unit that acquires information representing a plurality of positions of a lens that moves within an exposure period of an imaging element; and a correction unit that, based on the plurality of positions, corrects an image acquired by exposing the imaging element through the lens during the exposure period. An image processing method of the present invention includes: a stage of acquiring information representing a plurality of positions of a lens that moves within an exposure period of an imaging element; and a stage of correcting, based on the plurality of positions, an image acquired by exposing the imaging element through the lens during the exposure period.

Description

Image processing device, imaging device, unmanned aerial vehicle, image processing method, and program
Technical Field
The present invention relates to an image processing device, an imaging device, an unmanned aerial vehicle, an image processing method, and a program.
Background Art
Patent Document 1 describes a technique that corrects the aberration of input screen-coordinate pixels using a distortion correction parameter table, the distortion correction parameter table storing position coordinate data of each pixel corresponding to lens parameters.
Patent Document 1: Japanese Patent Laid-Open No. 2011-61444.
Summary of the Invention
Technical problem to be solved by the invention:
If a lens included in an optical system, such as a focus lens, is moved, unexpected changes in the angle of view may occur due to distortion and the like.
Technical means for solving the problem:
An image processing device according to one aspect of the present invention includes an acquisition unit that acquires information representing a plurality of positions of a lens that moves within an exposure period of an imaging element. The image processing device includes a correction unit that, based on the plurality of positions, corrects an image acquired by exposing the imaging element through the lens during the exposure period.
The correction unit may calculate the average position of the lens during the exposure period based on the plurality of positions, and correct the distortion of the image based on the calculated average position of the lens.
The correction unit may correct the distortion of the image based on the average position of the lens and a distortion coefficient corresponding to the lens position.
The correction unit may calculate, based on the plurality of positions and the positions of a plurality of pixels included in the imaging element, a plurality of pixel positions on the image corresponding to the plurality of pixels, one for each combination of a lens position and a pixel, and may correct the image by calculating, for each pixel, the average of the plurality of pixel positions.
The imaging element may be exposed with exposure periods that differ among the plurality of pixel columns included in the imaging element. The acquisition unit may acquire information representing a plurality of positions of the lens that moves within the exposure period of each of the plurality of pixel columns. The correction unit may correct the images respectively acquired by the plurality of pixel columns based on the plurality of positions within the exposure period of each of the plurality of pixel columns.
The lens may be a focus lens movable in the optical axis direction.
In order to adjust the focus of the optical system including the lens, the lens may reciprocate in the optical axis direction during the exposure period. The acquisition unit may acquire information representing a plurality of positions of the lens that reciprocates during the exposure period.
In order to adjust the focus of the optical system including the lens, the lens may move in one direction along the optical axis during the exposure period. The acquisition unit may acquire information representing a plurality of positions of the lens that moves in one direction during the exposure period.
The exposure period may be a period during which the imaging element is repeatedly exposed in order to acquire each of a plurality of moving image constituent images constituting a moving image. The correction unit may correct each moving image constituent image based on a plurality of positions within the exposure period used to acquire that moving image constituent image.
An imaging device according to one aspect of the present invention may include the above-described image processing device. The imaging device may include an image sensor.
The lens may be a focus lens movable in the optical axis direction. The imaging device may include a focus adjustment unit that adjusts the focus of the optical system including the lens based on the image corrected by the correction unit.
An unmanned aerial vehicle according to one aspect of the present invention includes the above-described imaging device and moves.
An image processing method according to one aspect of the present invention includes a stage of acquiring information representing a plurality of positions of a lens that moves within an exposure period of an imaging element. The image processing method includes a stage of correcting, based on the plurality of positions, an image acquired by exposing the imaging element through the lens during the exposure period.
A program according to one aspect of the present invention may be a program for causing a computer to function as the above-described image processing device.
According to one aspect of the present invention, the influence of the change in the angle of view that accompanies the movement of the lens can be suppressed.
In addition, the above summary of the invention does not enumerate all the necessary features of the present invention. Sub-combinations of these feature groups may also constitute inventions.
Brief Description of the Drawings
FIG. 1 is a diagram showing an example of an external perspective view of an imaging device 100 according to this embodiment.
FIG. 2 is a diagram showing functional blocks of the imaging device 100 according to this embodiment.
FIG. 3 is a diagram schematically illustrating the processing performed by the correction unit 140.
FIG. 4 shows an example of the distortion coefficients.
FIG. 5 schematically shows an example of the positional relationship between the coordinates on the image sensor 120 and the image coordinates (x, y).
FIG. 6 is a diagram illustrating a method of calculating the average position of the focus lens 210.
FIG. 7 is a flowchart showing an example of the execution procedure of the imaging device 100.
FIG. 8 is a diagram illustrating another correction method executed by the correction unit 140.
FIG. 9 is a diagram schematically illustrating correction processing when pixel data is read by rolling readout.
FIG. 10 shows an unmanned aerial vehicle (UAV) equipped with the imaging device 100.
FIG. 11 shows an example of a computer 1200 that may embody aspects of the present invention in whole or in part.
Description of Reference Numerals:
10 UAV
20 UAV main body
50 Gimbal
60 Imaging device
100 Imaging device
102 Imaging unit
104 Image processing unit
110 Imaging control unit
112 Focus adjustment unit
120 Image sensor
130 Memory
140 Correction unit
142 Acquisition unit
160 Display unit
162 Instruction unit
200 Lens unit
210 Focus lens
211 Zoom lens
212 Lens drive unit
213 Lens drive unit
220 Lens control unit
222 Memory
214, 215 Position sensors
300 Remote operation device
310 Pixel data
311 Corrected image
320 Pixel data
321 Corrected image
810 Pixel data
811 Corrected pixel data
820 Pixel data
821 Corrected pixel data
1200 Computer
1210 Host controller
1212 CPU
1214 RAM
1220 Input/output controller
1222 Communication interface
1230 ROM
Detailed Description of the Embodiments
Hereinafter, the present invention will be described through embodiments of the invention, but the following embodiments do not limit the invention according to the claims. In addition, not all combinations of the features described in the embodiments are necessarily essential to the solution of the invention. It will be apparent to those of ordinary skill in the art that various changes or improvements can be made to the following embodiments. It is apparent from the description of the claims that embodiments incorporating such changes or improvements are also included within the technical scope of the present invention.
The claims, the specification, the drawings of the specification, and the abstract of the specification contain matters subject to copyright protection. The copyright holder will not object to the reproduction of these documents by any person as long as it is done as indicated in the files or records of the Patent Office. However, in all other cases, all copyrights are reserved.
Various embodiments of the present invention may be described with reference to flowcharts and block diagrams, where a block may represent (1) a stage of a process in which operations are performed or (2) a "part" of a device that is responsible for performing operations. Specific stages and "parts" may be implemented by programmable circuits and/or processors. Dedicated circuits may include digital and/or analog hardware circuits, and may include integrated circuits (ICs) and/or discrete circuits. Programmable circuits may include reconfigurable hardware circuits. Reconfigurable hardware circuits may include logical operations such as logical AND, logical OR, logical XOR, logical NAND, and logical NOR, as well as memory elements such as flip-flops, registers, field-programmable gate arrays (FPGA), and programmable logic arrays (PLA).
A computer-readable medium may include any tangible device that can store instructions to be executed by a suitable device. As a result, a computer-readable medium having instructions stored thereon constitutes a product that includes instructions which can be executed to create means for performing the operations specified in the flowcharts or block diagrams. Examples of a computer-readable medium may include electronic storage media, magnetic storage media, optical storage media, electromagnetic storage media, semiconductor storage media, and the like. More specific examples of a computer-readable medium may include floppy (registered trademark) disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), electrically erasable programmable read-only memory (EEPROM), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile discs (DVD), Blu-ray (registered trademark) discs, memory sticks, integrated circuit cards, and the like.
Computer-readable instructions may include either source code or object code described in any combination of one or more programming languages. The source code or object code may be written in a traditional procedural programming language such as assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, or state-setting data, in an object-oriented programming language such as Smalltalk, JAVA (registered trademark), or C++, or in the "C" programming language or a similar programming language. The computer-readable instructions may be provided, locally or via a local area network (LAN) or a wide area network (WAN) such as the Internet, to a processor or programmable circuit of a general-purpose computer, a special-purpose computer, or another programmable data processing device. The processor or programmable circuit may execute the computer-readable instructions to create means for performing the operations specified in the flowcharts or block diagrams. Examples of processors include computer processors, processing units, microprocessors, digital signal processors, controllers, microcontrollers, and the like.
FIG. 1 is a diagram showing an example of an external perspective view of the imaging device 100 according to this embodiment. FIG. 2 is a diagram showing functional blocks of the imaging device 100 according to this embodiment.
The imaging device 100 includes an imaging unit 102 and a lens unit 200. The imaging unit 102 includes an image sensor 120, an image processing unit 104, an imaging control unit 110, a memory 130, an instruction unit 162, and a display unit 160.
The image sensor 120 is an imaging element such as a CCD or a CMOS sensor. The image sensor 120 receives light through the optical system included in the lens unit 200. The image sensor 120 outputs image data of an optical image formed by the optical system of the lens unit 200 to the image processing unit 104.
The imaging control unit 110 and the image processing unit 104 may be constituted by a microprocessor such as a CPU or an MPU, a microcontroller such as an MCU, or the like. The memory 130 may be a computer-readable recording medium and may include at least one of SRAM, DRAM, EPROM, EEPROM, and flash memory such as USB memory. The memory 130 stores programs necessary for the imaging control unit 110 to control the image sensor 120 and the like, programs necessary for the image processing unit 104 to execute image processing, and the like. The memory 130 may be provided inside the housing of the imaging device 100. The memory 130 may be configured to be detachable from the housing of the imaging device 100.
The instruction unit 162 is a user interface that accepts instructions for the imaging device 100 from the user. The display unit 160 displays images captured by the image sensor 120 and processed by the image processing unit 104, various setting information of the imaging device 100, and the like. The display unit 160 may be composed of a touch panel.
The imaging control unit 110 controls the lens unit 200 and the image sensor 120. The imaging control unit 110 controls the adjustment of the focal position and focal length of the optical system included in the lens unit 200. The imaging control unit 110 outputs control commands to the lens control unit 220 included in the lens unit 200 based on information indicating the user's instructions, thereby controlling the lens unit 200.
The lens unit 200 includes a focus lens 210, a zoom lens 211, a lens drive unit 212, a lens drive unit 213, a lens control unit 220, a memory 222, a position sensor 214, and a position sensor 215. The focus lens 210 and the zoom lens 211 may each include at least one lens. The focus lens 210 and the zoom lens 211 are lenses included in the optical system. At least a part or all of the focus lens 210 and the zoom lens 211 are configured to be movable along the optical axis of the optical system.
The imaging control unit 110 includes a focus adjustment unit 112. The focus adjustment unit 112 adjusts the focus of the optical system included in the lens unit 200 by controlling the focus lens 210.
The lens unit 200 may be an interchangeable lens provided so as to be detachable from the imaging unit 102. The lens drive unit 212 may include a drive device such as a stepping motor, a DC motor, a coreless motor, or an ultrasonic motor. At least a part or all of the focus lens 210 receives power from the drive device included in the lens drive unit 212 via mechanism members such as a cam ring and a guide shaft, and moves. The lens drive unit 213 may include a drive device such as a stepping motor, a DC motor, a coreless motor, or an ultrasonic motor. At least a part or all of the zoom lens 211 receives power from the drive device included in the lens drive unit 213 via mechanism members such as a cam ring and a guide shaft, and moves.
The lens control unit 220 drives at least one of the lens drive unit 212 and the lens drive unit 213 in accordance with a lens control command from the imaging unit 102, and moves at least one of the focus lens 210 and the zoom lens 211 along the optical axis direction via the mechanism members, thereby performing at least one of a zoom operation and a focus operation. The lens control commands are, for example, a zoom control command and a focus control command.
The memory 222 stores control values for the focus lens and the zoom lens that are moved via the lens drive unit 212. The memory 222 may include at least one of SRAM, DRAM, EPROM, EEPROM, and flash memory such as USB memory.
The position sensor 214 detects the position of the focus lens 210. The position sensor 215 detects the position of the zoom lens 211. The position sensor 214 and the position sensor 215 may be magnetoresistive (MR) sensors or the like.
The imaging control unit 110 outputs control commands to the image sensor 120 based on information indicating instructions from the user via the instruction unit 162 or the like, thereby causing the image sensor 120 to execute control including control of the imaging operation. Images captured by the image sensor 120 are processed by the image processing unit 104 and stored in the memory 130.
The image acquired by the image sensor 120 is input to the image processing unit 104. The correction unit 140 corrects the image acquired by the image sensor 120. The display unit 160 displays the image corrected by the correction unit 140. The memory 130 stores the image corrected by the correction unit 140. The image corrected by the correction unit 140 may be transferred from the memory 130 to a recording medium such as a memory card.
图像处理部104包括校正部140和获取部142。获取部142获取表示在图像传感器120的曝光时段内移动的聚焦镜头210的多个位置的信息。例如,获取部142从焦点调节部112获取表示聚焦镜头210的多个位置的信息。校正部140基于多个位置,对曝光时段内通过聚焦镜头210使图像传感器120曝光而获取的图像进行校正。
例如,校正部140基于所获取的聚焦镜头210的多个位置,计算出曝光时段内聚焦镜头210的平均位置,并基于计算出的聚焦镜头210的平均位置,对图像的畸变进行校正。具体而言,校正部140基于聚焦镜头210的平均位置和对应于聚焦镜头210的位置的畸变系数,对图像的畸变进行校正。
校正部140可以基于聚焦镜头210的多个位置及图像传感器120所包括的多个像素位置,针对聚焦镜头210的多个位置及图像传感器120的多个像素的各个组合,计算出与图像上多个像素分别对应的多个像素位置。校正部140可以分别对多个像素计算出所计算的多个像素位置的平均位置,从而对图像进行校正。
可以按照图像传感器120所包括的多个像素列各不相同的曝光期对图像传感器120进行曝光。例如,摄像控制部110可以通过滚动读取,从图像传感器120读取像素信息。这时,获取部142获取表示在多个像素列各自的曝光时段内移动的聚焦镜头210的多个位置的信息。校正部140基于多个像素列各自的曝光时段内的多个位置,对通过多个像素列分别所获取的图像进行校正。
为了调节包括聚焦镜头210的光学系统的焦点，聚焦镜头210在曝光时段内沿光轴方向往复移动。例如，焦点调节部112使聚焦镜头210在曝光时段内进行摆动(wobbling)。获取部142获取表示在曝光时段内往复移动的聚焦镜头210的多个位置的信息。校正部140基于在曝光时段内往复移动的聚焦镜头210的多个位置，对图像进行校正。
为了调节包括聚焦镜头210的光学系统的焦点，聚焦镜头210可以在曝光时段内在光轴方向沿一个方向移动。获取部142获取表示在曝光时段内沿一个方向移动的聚焦镜头210的多个位置的信息。校正部140基于在曝光时段内沿一个方向移动的聚焦镜头210的多个位置，对图像进行校正。
曝光时段可以是为了分别获取构成动态影像的多个动态影像构成图像的每一个而使图像传感器120反复曝光的时段。校正部140可以基于用于分别获取多个动态影像构成图像的每一个图像的曝光时段内的多个位置,对各个动态影像构成图像进行校正。
焦点调节部112基于经校正部140校正后的图像，对包括聚焦镜头210的光学系统的焦点进行调节。例如，焦点调节部112基于经校正部140校正后的图像的对比度值，确定聚焦镜头210的光轴方向位置，并使聚焦镜头210移动到所确定的位置。
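作为参考，下面给出一个基于校正后图像的对比度值确定聚焦镜头位置的简化示意代码（Python）。其中的对比度定义（相邻像素差分的平方和）、函数名及数据的组织方式均为说明用的假设，并非本实施方式的限定实现。

```python
import numpy as np

def contrast_value(image: np.ndarray) -> float:
    """以相邻像素差分的平方和作为对比度评价值（示意用的一种简单定义）。"""
    img = image.astype(np.float64)
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return float((gx ** 2).sum() + (gy ** 2).sum())

def choose_focus_position(corrected_images: dict) -> float:
    """corrected_images：{聚焦镜头位置: 校正后图像}。返回对比度评价值最大的镜头位置。"""
    return max(corrected_images, key=lambda lp: contrast_value(corrected_images[lp]))
```

由于对比度值是在校正后的图像上计算的，对比度值检测对象的图像区域不易因聚焦镜头210的移动而变动。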
图3是概略说明校正部140进行的处理的图。图3关联地说明图像传感器120所包括的所有水平像素列的曝光时段相同时校正部140进行的处理。具体说明图像传感器120通过全局读取以进行连续拍摄时的处理。在本实施方式中,图像传感器120具有N个(N为自然数)水平像素列。
摄像控制部110在基于垂直同步信号VD的时刻,使图像传感器120所包括的水平像素列1~水平像素列N曝光。图3示出了从时刻t1到时刻t7为止的曝光时段以及从时刻t9到时刻t15为止的曝光时段。
从垂直同步信号VD的触发到下一个垂直同步信号VD为止的期间内,焦点调节部112多次检测聚焦镜头210的光轴方向位置。具体为,焦点调节部112从时刻t0的垂直同步信号VD开始,以预先规定的时间间隔,并按预先规定的次数对聚焦镜头210的光轴方向位置进行检测。并且,焦点调节部112从时刻t7的垂直同步信号VD开始,以预先规定的时间间隔,并按预先规定的次数对聚焦镜头210的光轴方向位置进行检测。
如图3所示,镜头控制部220在从时刻t1到时刻t7为止的曝光时段内,检测LP1、LP2、LP3、LP4、LP5、LP6及LP7。并且,镜头控制部220在从时刻t9到时刻t15为止的曝光时段内,检测LP9、LP10、LP11、LP12、LP13、LP14及LP15。LPi(i为自然数)表示聚焦镜头210的光轴方向位置。
根据时刻t7的垂直同步信号VD,焦点调节部112从镜头控制部220获取镜头位置信息,并输出到图像处理部104,所述镜头位置信息表示在从时刻t1到时刻t7为止的曝光时段内所检测的LP1、LP2、LP3、LP4、LP5、LP6及LP7。获取部142获取从焦点调节部112输出的镜头位置信息。
校正部140基于镜头位置信息，计算出LP1、LP2、LP3、LP4、LP5、LP6及LP7的平均值。校正部140基于LP1、LP2、LP3、LP4、LP5、LP6及LP7的平均值，计算出畸变系数。畸变系数是表示根据聚焦镜头210的位置而确定的畸变失真的信息。关于畸变系数，图4等有关联说明。平均值是将LP1到LP7的加权和除以时刻t1到时刻t7的时间而计算出的值。关于平均值的具体计算方法，图6等有关联说明。
图像处理部104根据时刻t7的垂直同步信号VD,从图像传感器120获取水平像素列1~水平像素列N的像素数据310。图像处理部104基于从图像传感器120获取的像素数据310和对应于聚焦镜头210的位置平均值的畸变系数,对像素数据310的图像进行校正,生成校正图像311。将校正部140所生成的校正图像311存储于存储器130中,并作为显示部160的显示用图像输出到显示部160。
同样，根据时刻t15的垂直同步信号VD，焦点调节部112从镜头控制部220获取镜头位置信息，并输出到图像处理部104，所述镜头位置信息表示在从时刻t9到时刻t15为止的曝光时段内所检测的LP9、LP10、LP11、LP12、LP13、LP14及LP15。获取部142获取从焦点调节部112输出的镜头位置信息。
校正部140基于镜头位置信息,计算出LP9、LP10、LP11、LP12、LP13、LP14及LP15的平均值。校正部140基于LP9、LP10、LP11、LP12、LP13、LP14及LP15的平均值,计算出畸变系数。
图像处理部104根据时刻t15的垂直同步信号VD,从图像传感器120获取水平像素列1~水平像素列N的像素数据320。图像处理部104基于从图像传感器120获取的像素数据320和对应于聚焦镜头210的位置平均值的畸变系数,对像素数据320的图像进行校正,生成校正图像321。将校正部140所生成的校正图像321存储于存储器130中,并作为显示部160的显示用图像输出到显示部160。
摄像装置100根据垂直同步信号VD,如上所述执行一次曝光处理、图像数据的读取处理、图像的校正处理以及显示于显示部160的处理。摄像装置100针对各垂直同步信号VD,反复执行上述处理。
图4示出畸变系数的一个示例。图4的横轴为聚焦镜头210的位置，纵轴为畸变系数值。畸变系数包括k1、k2及k3。存储器130存储表示k1、k2及k3对聚焦镜头210的依赖性的畸变系数数据。可以基于镜头部200所具有的光学系统的镜头设计数据，预先计算出畸变系数数据，并存储到存储器130中。也可以通过实验预先计算出畸变系数数据，并存储到存储器130中。k1、k2及k3的畸变系数数据可以是表示以聚焦镜头210的位置为变量的函数的数据。可以通过将预先计算出的畸变系数k1、k2及k3拟合成以聚焦镜头210的位置为变量的函数，从而获取相关函数。k1、k2及k3的畸变系数数据可以是将聚焦镜头210的位置映射到k1、k2及k3的映射数据。
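下面以Python给出一个按映射数据对畸变系数进行线性插值的示意实现。表中的镜头位置及k1、k2、k3的数值均为说明用的假设值，实际数值应根据镜头设计数据或实验预先求出并存储。

```python
import numpy as np

# 假设的映射数据：聚焦镜头位置LP与畸变系数k1、k2、k3的对应关系（示意值）
LP_TABLE = np.array([0.0, 10.0, 20.0, 30.0])
K1_TABLE = np.array([0.010, 0.012, 0.015, 0.019])
K2_TABLE = np.array([-0.002, -0.002, -0.003, -0.004])
K3_TABLE = np.array([0.0001, 0.0001, 0.0002, 0.0002])

def distortion_coeffs(lp: float):
    """根据镜头位置LP在映射数据之间线性插值，求出畸变系数k1、k2、k3。"""
    k1 = float(np.interp(lp, LP_TABLE, K1_TABLE))
    k2 = float(np.interp(lp, LP_TABLE, K2_TABLE))
    k3 = float(np.interp(lp, LP_TABLE, K3_TABLE))
    return k1, k2, k3
```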
可以使用畸变系数k1、k2及k3，用下列式1来表示图像传感器120上的坐标(x_distorted, y_distorted)和归一化的图像坐标(x, y)之间的关系。
【式1】
x_distorted = x × (1 + k1(LP)·r^2 + k2(LP)·r^4 + k3(LP)·r^6)
y_distorted = y × (1 + k1(LP)·r^2 + k2(LP)·r^4 + k3(LP)·r^6)
其中，r^2 = x^2 + y^2
坐标(x, y)表示校正图像内的坐标。式1中的LP表示聚焦镜头210的位置。畸变系数k1、k2及k3依赖于LP，因此，式1中表示为k1(LP)、k2(LP)及k3(LP)。图5示意性地示出图像传感器120上的坐标(x_distorted, y_distorted)和图像坐标(x, y)的位置关系的一个示例。
校正部140将像素数据中各像素的坐标应用到式1的(x_distorted, y_distorted)，将根据聚焦镜头210的位置平均值和上述畸变系数数据而计算出的k1、k2及k3分别应用到k1(LP)、k2(LP)及k3(LP)，计算出满足式1的坐标(x, y)。然后，校正部140将从图像传感器120获取的图像数据中的坐标(x_distorted, y_distorted)的像素值用作坐标(x, y)的像素值，从而生成校正图像。
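下面以Python/numpy给出基于式1生成校正图像的一个示意实现：对校正图像的每个归一化坐标(x, y)按式1计算对应的传感器坐标(x_distorted, y_distorted)，再以最近邻方式取样。其中以图像中心为原点的归一化方式与最近邻取样均为说明用的假设，并非限定实现。

```python
import numpy as np

def undistort(image, k1, k2, k3):
    """基于式1的径向畸变校正（最近邻取样的示意实现）。"""
    h, w = image.shape[:2]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    scale = max(cx, cy)                      # 假设：以图像中心为原点进行坐标归一化
    ys, xs = np.mgrid[0:h, 0:w]
    x = (xs - cx) / scale
    y = (ys - cy) / scale
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * factor * scale + cx             # 式1：由校正坐标求传感器坐标
    yd = y * factor * scale + cy
    xi = np.clip(np.round(xd).astype(int), 0, w - 1)
    yi = np.clip(np.round(yd).astype(int), 0, h - 1)
    return image[yi, xi]                     # 将传感器坐标处的像素值用作校正坐标处的像素值
```

将该函数与前述按镜头位置插值畸变系数的示意函数组合，即可按聚焦镜头210的位置平均值生成校正图像。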
如图3、图4及图5等的关联说明所述,校正部140使用根据曝光时段内聚焦镜头210的平均位置而确定的畸变系数,对图像进行校正。借此,如图3所示,能够生成因聚焦镜头210的移动而出现的视角变动受到抑制的校正图像311及校正图像321。
图6是用于说明聚焦镜头210的位置平均值的计算方法的图。在图6中，T为曝光时间，也就是从t1到t7的时间。Td是检测聚焦镜头210的位置的时间间隔。在图6所示的示例中，T=6Td。聚焦镜头210的位置平均值是根据LP1到LP7的时间平均值而计算出的值。
具体为,将LP1到LP7的加权和除以T,计算出聚焦镜头210的位置平均值。根据Σαi×LPi,计算出LP1到LP7的加权和。i为1到7的自然数。
αi为加权和的权重系数。α1到α7之和为T。可以基于从时刻ti-Td/2到时刻ti+Td/2的时段中包含于曝光时段内的时间长度来规定αi。具体而言，α1及α7为Td/2，α2到α6为Td。如上所述，根据检测出聚焦镜头210的位置的时刻，确定权重系数αi，从而可以计算出聚焦镜头210的位置的时间平均值。
另外,平均值可以是LP1到LP7之和除以7而计算出的值。也就是说,平均值可以是曝光时段内所检测出的聚焦镜头210的位置和除以聚焦镜头210的位置检测次数而计算出的值。
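下面以Python给出图6所示位置平均值计算方法的一个示意实现，假设各位置按固定间隔Td依次检测；同时给出上述简单平均的替代计算，仅作说明。

```python
def average_lens_position(positions, td, exposure_time):
    """按图6的思路计算曝光时段内镜头位置的时间平均值（示意实现）。
    positions: 曝光时段内按固定间隔td依次检测出的镜头位置LP1…LPn。"""
    weights = [td] * len(positions)
    weights[0] = weights[-1] = td / 2.0      # 首尾的检测时刻只覆盖曝光时段内的半个td
    weighted_sum = sum(a * lp for a, lp in zip(weights, positions))
    return weighted_sum / exposure_time

# 简单平均的替代方案（对应上文的另一种计算方式）
def simple_average(positions):
    return sum(positions) / len(positions)

# 使用示例：7次检测、td为曝光时间的1/6时，两种平均值
lp = [10.0, 10.5, 11.0, 11.5, 11.0, 10.5, 10.0]
print(average_lens_position(lp, td=1.0, exposure_time=6.0), simple_average(lp))
```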
图7是示出摄像装置100的执行步骤的一个示例的流程图。垂直同步信号VD被触发时,开始本流程图。
在S600中,焦点调节部112通过将检测聚焦镜头210的位置的定时信号输出到镜头控制部220,从而检测聚焦镜头210的位置。
在S602中，从垂直同步信号VD经过预先规定的时间后，摄像控制部110开始使图像传感器120曝光。在S604中，镜头控制部220依据从焦点调节部112输出的定时信号，检测聚焦镜头210的位置。
在S606中,摄像控制部110判断是否结束曝光。例如,当检测到新的垂直同步信号VD的触发时,摄像控制部110判断为结束曝光。在判断为结束曝光之前,将保持使图像传感器120曝光,反复执行S604的处理。
在S606的判断中,如果判断为结束曝光,则在S608中,校正部140通过焦点调节部112从镜头控制部220获取聚焦镜头210的镜头位置信息,计算出聚焦镜头210的位置平均值。
在S610中，校正部140获取从图像传感器120输出的像素数据。在S612中，校正部140基于S608中计算出的聚焦镜头210的位置平均值和畸变系数数据，计算出对应于图像传感器120的各像素坐标(x_distorted, y_distorted)的图像坐标(x, y)。
在S614中,校正部140将从图像传感器120输出的像素数据的各坐标像素值反映为S612中计算出的图像坐标的像素值,从而生成校正图像。
在S616中,图像处理部104将校正部140所生成的校正图像存储于存储器130中。S616的处理完成后,结束本流程图的处理。
将S616中存储于存储器130中的校正图像例如输出到显示部160作为显示用动态影像构成图像。在拍摄动态影像期间进行上述校正处理时,在动态影像拍摄结束后,将存储于存储器130中的校正图像作为动态影像数据的动态影像构成图像记录到存储器130中。
另外,校正部140可以使S612中计算出的图像坐标与聚焦镜头210的位置平均值相关联地存储到存储器130中。当新计算出的聚焦镜头210的位置平均值与存储于存储器130中的平均值之差小于预先规定的值时,校正部140可以使用存储于存储器130中的图像坐标而不执行S612的处理。从而可以减少用于计算图像坐标的运算量。
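下面以Python给出按聚焦镜头210的位置平均值缓存图像坐标的一个示意实现；其中的阈值与接口形式均为说明用的假设。

```python
class CoordinateCache:
    """按镜头位置平均值缓存图像坐标的示意实现。"""

    def __init__(self, threshold=0.5):
        self.threshold = threshold       # 判断是否复用缓存的平均位置差阈值（假设值）
        self.cached_lp = None
        self.cached_coords = None

    def get(self, lp_average, compute_coords):
        """lp_average: 本次的镜头位置平均值；compute_coords: 按平均位置计算图像坐标的函数。"""
        if (self.cached_lp is not None
                and abs(lp_average - self.cached_lp) < self.threshold):
            return self.cached_coords    # 差小于阈值时复用已存储的图像坐标，省去S612的运算
        self.cached_lp = lp_average
        self.cached_coords = compute_coords(lp_average)
        return self.cached_coords
```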
如图3至图7等的关联说明所述,校正部140基于曝光时段内聚焦镜头210的位置平均值,对图像畸变进行校正。从而可以生成曝光时段内因聚焦镜头210的移动而出现的视角变化的影响受到抑制的校正图像。
图8是说明校正部140执行的其他校正方法的图。图8放大示出部分图像坐标，即图像传感器120的像素坐标(x_distorted, y_distorted)和校正后的图像坐标(x, y)的关系。
校正部140基于曝光时段内所检测出的聚焦镜头210的各个位置,计算出图像坐标(x,y)。以下将参照图3所示的示例,对基于其他校正方法执行的校正处理进行说明。
校正部140分别根据在时刻t1~t7的曝光时段内的LP1、LP2、LP3、LP4、LP5、LP6及LP7，计算出图像坐标(x, y)。具体为，校正部140将从图像传感器120获取的像素数据中的像素的坐标应用到式1的(x_distorted, y_distorted)，将根据LP1和畸变系数数据计算出的k1、k2及k3分别应用到式1的k1(LP)、k2(LP)及k3(LP)，计算出满足式1的坐标(x, y)。计算出的坐标(x, y)在图8中用(x1, y1)示出。
对LP2也进行同样的操作，校正部140将像素数据中的像素的坐标应用到式1的(x_distorted, y_distorted)，将根据LP2和畸变系数数据计算出的k1、k2及k3分别应用到式1的k1(LP)、k2(LP)及k3(LP)，计算出满足式1的坐标(x, y)。计算出的坐标(x, y)在图8中用(x2, y2)示出。对于LP3、LP4、LP5、LP6及LP7也进行同样的操作，使用根据LPi（i为3到7的自然数）和畸变系数数据计算出的k1、k2及k3，计算出满足式1的坐标(x, y)，并将其计算为对应各LPi的(xi, yi)。
校正部140将像素数据中的坐标(x_distorted, y_distorted)的像素值用作(xi, yi)（i为1到7的自然数）的平均坐标(x, y)的像素值。校正部140对于从图像传感器120获取的像素数据的各像素进行同样的处理，生成校正图像。
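下面以Python/numpy给出这种按各镜头位置分别计算坐标再取平均的校正方法的一个示意实现。为便于直接取样，此处对校正图像的每个坐标按各LPi分别计算式1的传感器坐标并取其平均，与上文从传感器坐标求校正坐标的叙述方向相反，但体现的是同一种对各LPi所得坐标取平均的思路；coeffs_of为按镜头位置返回(k1, k2, k3)的假设函数。

```python
import numpy as np

def undistort_by_position_samples(image, lens_positions, coeffs_of):
    """对曝光时段内的各镜头位置分别应用式1并对坐标取平均的示意实现。"""
    h, w = image.shape[:2]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    scale = max(cx, cy)
    ys, xs = np.mgrid[0:h, 0:w]
    x = (xs - cx) / scale
    y = (ys - cy) / scale
    r2 = x * x + y * y
    xd_sum = np.zeros_like(x, dtype=np.float64)
    yd_sum = np.zeros_like(y, dtype=np.float64)
    for lp in lens_positions:                # 对LP1…LPn分别计算式1的坐标
        k1, k2, k3 = coeffs_of(lp)
        factor = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        xd_sum += x * factor
        yd_sum += y * factor
    n = len(lens_positions)
    xd = xd_sum / n * scale + cx             # 各LPi所得坐标的平均
    yd = yd_sum / n * scale + cy
    xi = np.clip(np.round(xd).astype(int), 0, w - 1)
    yi = np.clip(np.round(yd).astype(int), 0, h - 1)
    return image[yi, xi]
```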
通过图8中关联说明的校正方法,也能提供曝光时段内因聚焦镜头210的移动而出现的视角变化的影响得到抑制的校正图像。和图3及图5等关联说明的校正方法相比,还能生成进一步降低视角变化的影响的校正图像。
图9是概略说明通过滚动读取从图像传感器120读取像素数据时校正部140的校正处理的图。进行滚动读取时,摄像控制部110依次错开水平像素列1到水平像素列N的曝光开始时刻进行曝光。从而使得图像传感器120的曝光时段按图像传感器120所包括的像素列而不同。
如图9所示，水平像素列1的曝光时段为时刻t1到时刻t7。水平像素列N的曝光时段为时刻t7到时刻t13。一般而言，水平像素列i的曝光时段为时刻t1+δT×(i-1)到时刻t7+δT×(i-1)（i为1到N的自然数）。δT是相邻的水平像素列的曝光开始时刻的间隔。
在水平像素列1的曝光时段内所检测的聚焦镜头210的位置为LP1、LP2、LP3、LP4、LP5、LP6及LP7。校正部140基于LP1、LP2、LP3、LP4、LP5、LP6及LP7的平均值和畸变系数数据,计算出对应于LP1、LP2、LP3、LP4、LP5、LP6及LP7的平均值的畸变系数k1、k2及k3。校正部140将计算出的畸变系数k1、k2及k3应用到式1,对水平像素列1的像素数据810进行校正,从而生成校正像素数据811。
同样，在水平像素列2的曝光时段内所检测的聚焦镜头210的位置为LP2、LP3、LP4、LP5、LP6及LP7。校正部140基于LP2、LP3、LP4、LP5、LP6及LP7的平均值和畸变系数数据，计算出对应于LP2、LP3、LP4、LP5、LP6及LP7的平均值的畸变系数k1、k2及k3。校正部140将计算出的畸变系数k1、k2及k3应用到式1，对水平像素列2的像素数据进行校正，从而生成校正像素数据。平均值是将相应曝光时段内所检测出的聚焦镜头210的位置的加权和除以该曝光时段的时间而计算出的值。具体而言，平均值为通过图6等关联说明的计算方法计算出的值。
校正部140对水平像素列3到水平像素列N进行同样的处理，生成校正像素数据。例如，在水平像素列N的曝光时段内所检测的聚焦镜头210的位置为LP7、LP8、LP9、LP10、LP11、LP12及LP13。校正部140计算出LP7、LP8、LP9、LP10、LP11、LP12及LP13的平均值，并基于该平均值和畸变系数数据，计算出与所计算的平均值相对应的畸变系数k1、k2及k3。校正部140将计算出的畸变系数k1、k2及k3应用到式1，对水平像素列N的像素数据820进行校正，从而生成校正像素数据821。校正部140基于根据各水平像素列而生成的校正像素数据，生成一个校正图像。
如上所述，从图像传感器120进行滚动读取时，由于各水平像素列的曝光时段互不相同，所检测的聚焦镜头210的位置也可能会因像素列而异。对此，校正部140针对曝光时段内所检测的聚焦镜头210的位置相同的像素列组，将基于曝光时段内聚焦镜头210的位置平均值的畸变系数k1、k2及k3应用到式1，生成校正像素数据。
图9等关联地说明基于聚焦镜头210的位置平均值对通过滚动读取而读取的图像数据进行校正处理。可以将图8等关联说明的校正方法应用到通过滚动读取而读取的图像数据的校正。例如,校正部140可以针对各像素列计算出对应于聚焦镜头210的各个位置的图像坐标,并将计算出的图像坐标的平均坐标应用作校正后的图像的像素值。
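滚动读取时按行应用各自的位置平均值的处理，可用如下Python/numpy示意代码表示。row_avg_lp为按行预先算出的镜头位置平均值，coeffs_of为按镜头位置返回(k1, k2, k3)的假设函数；逐行循环与最近邻取样仅为说明用的简化。

```python
import numpy as np

def undistort_rolling(image, row_avg_lp, coeffs_of):
    """按各水平像素列（行）的曝光时段内镜头位置平均值进行校正的示意实现。"""
    h, w = image.shape[:2]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    scale = max(cx, cy)
    xs = (np.arange(w) - cx) / scale
    out = np.empty_like(image)
    for j in range(h):                        # 每一行使用该行曝光时段对应的畸变系数
        k1, k2, k3 = coeffs_of(row_avg_lp[j])
        y = (j - cy) / scale
        r2 = xs * xs + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        xd = np.clip(np.round(xs * factor * scale + cx).astype(int), 0, w - 1)
        yd = np.clip(np.round(y * factor * scale + cy).astype(int), 0, h - 1)
        out[j] = image[yd, xd]                # 从传感器图像中取样，得到该行的校正像素数据
    return out
```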
如以上说明所述，根据摄像装置100，可以提供因聚焦镜头210的移动而出现的视角变化的影响得到抑制的图像。该效果对于相对图像传感器的尺寸而言较小的小型镜头装置尤其有效。例如，如果相对于图像传感器的尺寸，使光学系统小型化，则聚焦镜头的移动所致的畸变像差的影响显著。因此，例如若在实时取景拍摄期间聚焦镜头进行摆动，可能会在实时取景图像上观察到伴随摆动而出现的视角变动。如上所述，根据摄像装置100，可以抑制因聚焦镜头210的移动而出现的视角变动的影响，从而在实时取景图像上不易观察到摆动所致的视角变动。此外，根据校正图像检测对焦控制所使用的对比度值，从而可以抑制对比度值检测对象的图像区域因聚焦镜头210的移动而变动。因此，能够更准确地基于对比度值进行对焦控制。
另外，本实施方式的摄像装置100关联说明的处理不仅适用于聚焦镜头210的摆动，也适用于曝光时段内使聚焦镜头210沿一个方向移动时的处理。并且，本实施方式的摄像装置100关联说明的处理不仅适用于实时取景拍摄期间的图像，也适用于生成记录用动态影像数据时的校正处理及生成记录用静态图像数据时的校正处理。此外，本实施方式的摄像装置100关联说明的处理还适用于聚焦镜头210以外的镜头的移动。即，校正部140基于在图像传感器120曝光时段内移动的聚焦镜头210以外的镜头的多个位置，对在相关曝光时段内使图像传感器120曝光而获取的图像进行校正。
上述摄像装置100可以搭载于移动体上。摄像装置100还可以搭载于如图10所示的无人驾驶航空器（UAV）上。UAV10可以包括UAV主体20、万向节50、多个摄像装置60及摄像装置100。万向节50及摄像装置100为摄像系统的一个示例。UAV10为由推进部推进的移动体的一个示例。移动体的概念是指除UAV之外，包括在空中移动的其他飞机等飞行体、在地面上移动的车辆、在水上移动的船舶等。
UAV主体20包括多个旋翼。多个旋翼为推进部的一个示例。UAV主体20通过控制多个旋翼的旋转而使UAV10飞行。UAV主体20使用例如四个旋翼来使UAV10飞行。旋翼的数量不限于四个。另外,UAV10也可以是没有旋翼的固定翼机。
摄像装置100为对包含在所期望的摄像范围内的被摄体进行拍摄的摄像用相机。万向节50可旋转地支撑摄像装置100。万向节50为支撑机构的一个示例。例如，万向节50使用致动器以俯仰轴为中心可旋转地支撑摄像装置100。万向节50使用致动器进一步分别以滚转轴和偏航轴为中心可旋转地支撑摄像装置100。万向节50可通过使摄像装置100以偏航轴、俯仰轴以及滚转轴中的至少一个为中心旋转，来变更摄像装置100的姿势。
多个摄像装置60是为了控制UAV10的飞行而对UAV10的周围进行拍摄的传感用相机。两个摄像装置60可以设置于UAV10的机头、即正面。并且,其它两个摄像装置60可以设置于UAV10的底面。正面侧的两个摄像装置60可以成对,起到所谓的立体相机的作用。底面侧的两个摄像装置60也可以成对,起到立体相机的作用。可以根据由多个摄像装置60所拍摄的图像来生成UAV10周围的三维空间数据。UAV10所包括的摄像装置60的数量不限于四个。UAV10包括至少一个摄像装置60即可。UAV10也可以在UAV10的机头、机尾、侧面、底面及顶面分别包括至少一个摄像装置60。摄像装置60中可设置的视角可大于摄像装置100中可设置的视角。摄像装置60也可以具有单焦点镜头或鱼眼镜头。
远程操作装置300与UAV10通信,以远程操作UAV10。远程操作装置300可以与UAV10进行无线通信。远程操作装置300向UAV10发送表示上升、下降、加速、减速、前进、后退、旋转等与UAV10的移动有关的各种指令的指示信息。指示信息包括例如使UAV10的高度上升的指示信息。指示信息可以表示UAV10应该位于的高度。UAV10进行移动,以位于从远程操作装置300接收的指示信息所表示的高度。指示信息可以包括使UAV10上升的上升指令。UAV10在接受上升指令的期间上升。在UAV10的高度已达到上限高度时,即使接受上升指令,也可以限制UAV10上升。
图11表示可整体或部分地体现本发明的多个方面的计算机1200的一个示例。安装在计算机1200上的程序能够使计算机1200作为与本发明的实施方式所涉及的装置相关联的操作或者该装置的一个或多个“部”而起作用。例如,安装在计算机1200上的程序能够使计算机1200作为校正部140或图像处理部104而起作用。或者,该程序能够使计算机1200执行相关操作或者相关一个或多个“部”的功能。该程序能够使计算机1200执行本发明的实施方式所涉及的过程或者该过程的阶段。这种程序可以由CPU1212执行,以使计算机1200执行与本说明书所述的流程图及框图中的一些或者全部方框相关联的指定操作。
本实施方式的计算机1200包括CPU1212和RAM1214,它们通过主机控制器1210相互连接。计算机1200还包括通信接口1222、输入/输出单元,它们通过输入/输出控制器1220与主机控制器1210连接。计算机1200还包括ROM1230。CPU1212根据存储在ROM1230和RAM1214中的程序进行操作,从而控制各单元。
通信接口1222经由网络与其他电子设备通信。硬盘驱动器可以存储计算机1200内的CPU1212所使用的程序及数据。ROM1230在其中存储启动时由计算机1200执行的引导程序等、和/或依赖于计算机1200的硬件的程序。程序通过CD-ROM、USB存储器或IC卡之类的计算机可读记录介质或者网络来提供。程序安装在也作为计算机可读记录介质的示例的RAM1214或ROM1230中，并通过CPU1212执行。这些程序中记述的信息处理由计算机1200读取，并引起程序与上述各种类型的硬件资源之间的协作。可以随着计算机1200的使用而实现信息的操作或者处理，从而构成装置或方法。
例如,当在计算机1200和外部设备之间执行通信时,CPU1212可以执行加载在RAM1214中的通信程序,并且基于通信程序中描述的处理,命令通信接口1222进行通信处理。在CPU1212的控制下,通信接口1222读取存储在诸如RAM1214或USB存储器之类的记录介质中所提供的发送缓冲区中的发送数据,并将读取的发送数据发送到网络,或者将从网络接收的接收数据写入记录介质上所提供的接收缓冲区等。
另外,CPU1212可以使RAM1214读取存储在诸如USB存储器等外部记录介质中的文件或数据库的全部或必要部分,并对RAM1214上的数据执行各种类型的处理。接着,CPU1212可以将处理过的数据写回到外部记录介质中。
诸如各种类型的程序、数据、表格和数据库的各种类型的信息可以存储在记录介质中并且接受信息处理。对于从RAM1214读取的数据,CPU1212可执行在本公开的各处所描述的、包括由程序的指令序列所指定的各种类型的操作、信息处理、条件判断、条件转移、无条件转移、信息的检索/替换等各种类型的处理,并将结果写回到RAM1214中。此外,CPU1212可以检索记录介质内的文件、数据库等中的信息。例如,在记录介质中存储具有分别与第二属性的属性值建立了关联的第一属性的属性值的多个条目时,CPU1212可以从相关多个条目中检索出与指定第一属性的属性值的条件相匹配的条目,并读取该条目内存储的第二属性的属性值,从而获取与满足预定条件的第一属性相关联的第二属性的属性值。
上述程序或软件模块可以存储在计算机1200上或计算机1200附近的计算机可读存储介质上。此外，连接到专用通信网络或因特网的服务器系统中提供的诸如硬盘或RAM之类的记录介质可以用作计算机可读存储介质，从而可以经由网络将程序提供给计算机1200。
应该注意的是,权利要求书、说明书以及附图中所示的装置、***、程序以及方法中的动作、顺序、步骤以及阶段等各项处理的执行顺序,只要没有特别明示“在…之前”、“事先”等,且只要前面处理的输出并不用在后面的处理中,则可以任意顺序实现。关于权利要求书、说明书以及附图中的操作流程,为方便起见而使用“首先”、“接着”等进行了说明,但并不意味着必须按照这样的顺序实施。
以上使用实施方式对本发明进行了说明,但是本发明的技术范围并不限于上述实施方式所描述的范围。对本领域普通技术人员来说,显然可对上述实施方式加以各种变更或改良。从权利要求书的描述显而易见的是,加以了这样的变更或改良的方式都可包含在本发明的技术范围之内。

Claims (14)

  1. 一种图像处理装置,其特征在于,包括:
    获取部,其用于获取表示在摄像元件曝光时段内移动的镜头的多个位置的信息;以及
    校正部,其基于所述多个位置,对所述曝光时段内通过所述镜头使所述摄像元件曝光而获取的图像进行校正。
  2. 如权利要求1所述的图像处理装置,其特征在于,
    所述校正部基于所述多个位置,计算出所述曝光时段内所述镜头的平均位置,并基于计算出的所述镜头的平均位置,对所述图像的畸变进行校正。
  3. 如权利要求2所述的图像处理装置,其特征在于,
    所述校正部基于所述镜头的平均位置和对应于所述镜头位置的畸变系数,对所述图像的畸变进行校正。
  4. 如权利要求1所述的图像处理装置,其特征在于,
    所述校正部基于所述多个位置及所述摄像元件所包括的多个像素的位置,针对所述多个位置及所述多个像素的各个组合,计算出与所述图像上所述多个像素分别对应的多个像素位置,并对所述多个像素分别计算出所述多个像素位置的平均位置,从而对所述图像进行校正。
  5. 如权利要求1或2所述的图像处理装置,其特征在于,
    在按照所述摄像元件所包括的多个像素列各不相同的曝光时段对所述摄像元件进行曝光的情况下，
    所述获取部获取表示在所述多个像素列各自的曝光时段内移动的镜头的多个位置的信息,
    所述校正部基于所述多个像素列各自的曝光时段内的所述多个位置,对通过所述多个像素列分别获取的所述图像进行校正。
  6. 如权利要求1或2所述的图像处理装置,其特征在于,
    所述镜头是可沿光轴方向移动的聚焦镜头。
  7. 如权利要求6所述的图像处理装置,其特征在于,
    为了调节包括所述镜头的光学系统的焦点，所述镜头在所述曝光时段内沿所述光轴方向往复移动，
    所述获取部获取表示在所述曝光时段内往复移动的所述镜头的多个位置的信息。
  8. 如权利要求6所述的图像处理装置,其特征在于,
    为了调节包括所述镜头的光学系统的焦点，所述镜头在所述曝光时段内沿所述光轴方向且沿一个方向移动，
    所述获取部获取表示在所述曝光时段内沿所述一个方向移动的所述镜头的多个位置的信息。
  9. 如权利要求1或2所述的图像处理装置,其特征在于,
    所述曝光时段是为了获取构成动态影像的多个动态影像构成图像的每一个图像而使所述摄像元件反复曝光的时段,
    所述校正部基于用于获取所述多个动态影像构成图像的每一个图像的所述曝光时段内的所述多个位置,对各个动态影像构成图像进行校正。
  10. 一种摄像装置,其特征在于,包括:
    如权利要求1至9中任一项所述的图像处理装置;以及
    所述摄像元件。
  11. 如权利要求10所述的摄像装置,其特征在于,
    所述镜头是可沿光轴方向移动的聚焦镜头，
    所述摄像装置还包括焦点调节部，所述焦点调节部基于经所述校正部校正后的所述图像，对包括所述镜头的光学系统的焦点进行调节。
  12. 一种无人驾驶航空器,其特征在于,包括如权利要求10所述的摄像装置并进行移动。
  13. 一种图像处理方法,其特征在于,包括:
    获取表示在摄像元件曝光时段内移动的镜头的多个位置的信息的阶段;以及
    基于所述多个位置,对所述曝光时段内通过所述镜头使所述摄像元件曝光而获取的图像进行校正的阶段。
  14. 一种程序,其特征在于,其是用于使计算机作为如权利要求1或2所述的图像处理装置而发挥功能的程序。
PCT/CN2020/071175 2019-01-31 2020-01-09 图像处理装置、摄像装置、无人驾驶航空器、图像处理方法以及程序 WO2020156085A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202080002874.4A CN112204947A (zh) 2019-01-31 2020-01-09 图像处理装置、摄像装置、无人驾驶航空器、图像处理方法以及程序

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019015752A JP6746857B2 (ja) 2019-01-31 2019-01-31 画像処理装置、撮像装置、無人航空機、画像処理方法、及びプログラム
JP2019-015752 2019-01-31

Publications (1)

Publication Number Publication Date
WO2020156085A1 true WO2020156085A1 (zh) 2020-08-06

Family

ID=71840865

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/071175 WO2020156085A1 (zh) 2019-01-31 2020-01-09 图像处理装置、摄像装置、无人驾驶航空器、图像处理方法以及程序

Country Status (3)

Country Link
JP (1) JP6746857B2 (zh)
CN (1) CN112204947A (zh)
WO (1) WO2020156085A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011061444A (ja) * 2009-09-09 2011-03-24 Hitachi Information & Communication Engineering Ltd 収差補正装置及び収差補正方法
CN103038689A (zh) * 2011-05-16 2013-04-10 松下电器产业株式会社 透镜单元以及摄像装置
US20140300799A1 (en) * 2013-04-05 2014-10-09 Olympus Corporation Imaging device, method for controlling imaging device, and information storage device
CN104380709A (zh) * 2012-06-22 2015-02-25 富士胶片株式会社 摄像装置及其动作控制方法

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1320813C (zh) * 2003-06-20 2007-06-06 北京中星微电子有限公司 一种镜头成像畸变校正的方法
US7596286B2 (en) * 2003-08-06 2009-09-29 Sony Corporation Image processing apparatus, image processing system, imaging apparatus and image processing method
JP4522207B2 (ja) * 2004-09-17 2010-08-11 キヤノン株式会社 カメラシステム、カメラ本体及び交換レンズ
JP4310645B2 (ja) * 2004-12-28 2009-08-12 ソニー株式会社 撮像画像信号の歪み補正方法および撮像画像信号の歪み補正装置
JP2009296561A (ja) * 2008-05-02 2009-12-17 Olympus Imaging Corp 撮像装置及び撮像方法
JP5272699B2 (ja) * 2008-12-15 2013-08-28 株式会社ニコン 画像処理装置、撮像装置、プログラムおよび画像処理方法
JP5934940B2 (ja) * 2012-05-17 2016-06-15 パナソニックIpマネジメント株式会社 撮像装置、半導体集積回路および撮像方法
JP5963542B2 (ja) * 2012-05-30 2016-08-03 キヤノン株式会社 画像処理装置、その制御方法及びプログラム
JP6136019B2 (ja) * 2014-02-03 2017-05-31 パナソニックIpマネジメント株式会社 動画像撮影装置、および、動画像撮影装置の合焦方法
JP6313685B2 (ja) * 2014-05-01 2018-04-18 キヤノン株式会社 撮像装置およびその制御方法
JP6516443B2 (ja) * 2014-11-10 2019-05-22 オリンパス株式会社 カメラシステム
WO2017122348A1 (ja) * 2016-01-15 2017-07-20 オリンパス株式会社 フォーカス制御装置、内視鏡装置及びフォーカス制御装置の作動方法
WO2018025659A1 (ja) * 2016-08-05 2018-02-08 ソニー株式会社 撮像装置、固体撮像素子、カメラモジュール、駆動制御部、および撮像方法
JP6906947B2 (ja) * 2016-12-22 2021-07-21 キヤノン株式会社 画像処理装置、撮像装置、画像処理方法およびコンピュータのプログラム
US10705312B2 (en) * 2017-02-02 2020-07-07 Canon Kabushiki Kaisha Focus control apparatus, image capturing apparatus, and focus control method
CN110337805B (zh) * 2017-03-01 2021-03-23 富士胶片株式会社 摄像装置、图像处理装置、图像处理方法及存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011061444A (ja) * 2009-09-09 2011-03-24 Hitachi Information & Communication Engineering Ltd 収差補正装置及び収差補正方法
CN103038689A (zh) * 2011-05-16 2013-04-10 松下电器产业株式会社 透镜单元以及摄像装置
CN104380709A (zh) * 2012-06-22 2015-02-25 富士胶片株式会社 摄像装置及其动作控制方法
US20140300799A1 (en) * 2013-04-05 2014-10-09 Olympus Corporation Imaging device, method for controlling imaging device, and information storage device

Also Published As

Publication number Publication date
JP6746857B2 (ja) 2020-08-26
CN112204947A (zh) 2021-01-08
JP2020123897A (ja) 2020-08-13

Similar Documents

Publication Publication Date Title
WO2018185939A1 (ja) 撮像制御装置、撮像装置、撮像システム、移動体、撮像制御方法、及びプログラム
WO2019120082A1 (zh) 控制装置、***、控制方法以及程序
WO2021013143A1 (zh) 装置、摄像装置、移动体、方法以及程序
WO2020156085A1 (zh) 图像处理装置、摄像装置、无人驾驶航空器、图像处理方法以及程序
WO2019174343A1 (zh) 活动体检测装置、控制装置、移动体、活动体检测方法及程序
WO2020216037A1 (zh) 控制装置、摄像装置、移动体、控制方法以及程序
WO2018185940A1 (ja) 撮像制御装置、撮像装置、撮像システム、移動体、撮像制御方法、及びプログラム
WO2021031833A1 (zh) 控制装置、摄像***、控制方法以及程序
US11125970B2 (en) Method for lens autofocusing and imaging device thereof
WO2019223614A1 (zh) 控制装置、摄像装置、移动体、控制方法以及程序
JP7043706B2 (ja) 制御装置、撮像システム、制御方法、及びプログラム
WO2021204020A1 (zh) 装置、摄像装置、摄像***、移动体、方法以及程序
WO2021052216A1 (zh) 控制装置、摄像装置、控制方法以及程序
WO2021249245A1 (zh) 装置、摄像装置、摄像***及移动体
WO2020244440A1 (zh) 控制装置、摄像装置、摄像***、控制方法以及程序
JP6961888B1 (ja) 装置、撮像装置、移動体、プログラム及び方法
JP6569157B1 (ja) 制御装置、撮像装置、移動体、制御方法、及びプログラム
WO2018163300A1 (ja) 制御装置、撮像装置、撮像システム、移動体、制御方法、及びプログラム
WO2021233177A1 (zh) 图像处理装置、摄像装置、移动体、程序以及方法
WO2021031840A1 (zh) 装置、摄像装置、移动体、方法以及程序
JP6878738B1 (ja) 制御装置、撮像システム、移動体、制御方法、及びプログラム
WO2021143425A1 (zh) 控制装置、摄像装置、移动体、控制方法以及程序
WO2020083342A1 (zh) 控制装置、摄像装置、移动体、控制方法以及程序
WO2020088438A1 (zh) 控制装置、摄像装置、***、控制方法以及程序
WO2019085794A1 (zh) 控制装置、摄像装置、飞行体、控制方法以及程序

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20748134

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20748134

Country of ref document: EP

Kind code of ref document: A1