WO2024067428A1 - High-resolution high-frame-rate photographing method, and image processing apparatus

Info

Publication number: WO2024067428A1
Application number: PCT/CN2023/120896 (CN2023120896W)
Authority: WO (WIPO/PCT)
Prior art keywords: photosensitive, image, images, data, points
Other languages: French (fr), Chinese (zh)
Inventors: 王海军 (Wang Haijun), 刘远通 (Liu Yuantong)
Original assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)


Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H04N 5/222: Studio circuitry; studio devices; studio equipment
    • G06T 2207/10004: Image acquisition modality; still image; photographic image
    • G06T 2207/20221: Image combination; image fusion; image merging

Definitions

  • the present application relates to the technical field of video data acquisition and processing, and in particular to a high-resolution, high-frame-rate photographing method and an image processing apparatus.
  • the process of video data acquisition on a mobile device is usually as follows: the raw data of the light signal is collected by the camera's photosensitive element (for example, a sensor), transmitted via the data bus to an image processing module (for example, an image signal processor (ISP)), processed into normal color image data, and finally stored on the device or presented as a real-time preview.
  • however, limited by the data transmission bandwidth, the total amount of raw light-signal data output by the photosensitive element per unit time has an upper limit; that is, without special processing, the photosensitive image sequence output by the photosensitive element cannot simultaneously meet the requirements of high resolution and high frame rate.
  • existing technologies usually convert captured low-resolution, high-frame-rate photosensitive images into high-resolution, high-frame-rate photosensitive images in one of the following two ways: (i) capturing with a binocular camera to obtain additional photosensitive data and thereby achieve high resolution; (ii) capturing low-resolution, high-frame-rate photosensitive images and then using a trained neural network model to perform image processing (for example, upsampling) to obtain high-resolution photosensitive images.
  • the embodiments of the present application provide a high-resolution, high-frame-rate photographing method and an image processing device, which can use inter-frame information to fuse low-resolution photosensitive images, without increasing the data transmission bandwidth, into corresponding high-resolution, high-frame-rate photosensitive images, thereby effectively improving the output image quality of the electronic device.
  • the present application provides a high-resolution, high-frame-rate photographing method, the method comprising: acquiring N photosensitive images, the N photosensitive images being obtained by exposing a photosensitive element N times in succession; wherein the N photosensitive images include a first photosensitive image and a second photosensitive image, the first photosensitive image including P photosensitive data, the P photosensitive data being obtained by exposing P photosensitive points on the photosensitive element in a single exposure process, the second photosensitive image including Q photosensitive data, the Q photosensitive data being obtained by exposing Q photosensitive points on the photosensitive element in a single exposure process, the P photosensitive points and the Q photosensitive points having different positions on the photosensitive element, P and Q being integers greater than or equal to 1, and N being an integer greater than or equal to 2; and fusing the N photosensitive images to obtain a third photosensitive image; wherein the third photosensitive image includes the P photosensitive data corresponding to the P photosensitive points, and the Q photosensitive data corresponding to the Q photosensitive points.
  • the present application obtains multiple frames of low-resolution original photosensitive images by exposing different photosensitive points on the photosensitive element at high frequency in a continuous time domain, and the exposed photosensitive points contained in any two photosensitive images have different positions (this exposure method does not require increasing the transmission bandwidth of the data bus in order to obtain low-resolution, high-frame-rate photosensitive images). The photosensitive data corresponding to all the exposed photosensitive points are then fused to obtain a third photosensitive image, so that the third photosensitive image contains the photosensitive data corresponding to all or most of the photosensitive points, that is, a corresponding photosensitive image containing high-resolution information is obtained.
  • in this way, the present application can obtain a high-resolution photosensitive image (i.e., the third photosensitive image) corresponding to the low-resolution photosensitive images through the cooperation of a specific exposure method and a fusion method, without increasing hardware overhead or data transmission bandwidth, thereby effectively improving the quality of the video or images output by the terminal device.
  • the present application uses adjacent frames in the time domain for fusion, whose correlation in the time domain and the spatial domain is stronger, so that the photosensitive data in the fused high-resolution photosensitive image are closer to reality and the output image/video quality is better.
  • the N photosensitive images are N consecutive images in the time domain among K*E photosensitive images, wherein the K*E photosensitive images are obtained by sequential exposure over K consecutive photosensitive cycles, and in each of the K photosensitive cycles the photosensitive element is exposed E consecutive times, K being a positive integer greater than or equal to 1;
  • the photosensitive element includes M photosensitive units, each of the M photosensitive units includes C photosensitive points, and within each photosensitive cycle, all photosensitive points within each photosensitive unit are sequentially exposed once, and C is a positive integer greater than or equal to 2.
  • in this way, N consecutive photosensitive images can be selected at an arbitrary position in the acquisition time domain.
  • the photosensitive data contained in each of the N photosensitive images are obtained by exposing different photosensitive points.
  • fusion can then be performed to obtain a high-resolution photosensitive image containing the photosensitive data corresponding to most or all of the photosensitive points.
  • only N consecutive photosensitive images need to be selected each time; that is, the terminal device only needs to save at least the N collected photosensitive images, reducing the cache overhead of the terminal device.
  • the photosensitive data contained in the N selected photosensitive images are obtained by exposing each photosensitive point on the photosensitive element once. In this case, they are fused directly, and the third photosensitive image obtained contains the photosensitive data corresponding to each photosensitive point on the photosensitive element, that is, a photosensitive image containing high-resolution information is obtained (a fusion sketch is given below).
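  • A minimal sketch of this direct fusion, assuming each frame is stored as a full-size NumPy array together with a boolean mask that records which photosites were exposed; the array layout, function name and NumPy dependency are illustrative assumptions, not part of the application:

```python
import numpy as np

def fuse_frames(frames, masks):
    """Fuse N sparse photosensitive images into one third photosensitive image.

    frames: list of N float arrays of shape (H, W); values at unexposed positions are ignored.
    masks:  list of N bool arrays of shape (H, W); True where the photosite was exposed.
    Returns (fused, covered): the fused data and a mask of positions that received data.
    """
    fused = np.zeros_like(frames[0], dtype=float)
    covered = np.zeros_like(masks[0], dtype=bool)
    for frame, mask in zip(frames, masks):
        # Copy each exposed photosite's data into the pixel at the same position of the
        # fused image; exposed positions differ between frames, so nothing is overwritten.
        fused[mask] = frame[mask]
        covered |= mask
    return fused, covered
```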
  • the M photosensitive units include a first photosensitive unit, and the method further includes: when a first photosensitive point in the first photosensitive unit has not been exposed in the N exposures, interpolation is performed based on the photosensitive data corresponding to the photosensitive points of the first photosensitive unit that were exposed in the N exposures, and the photosensitive data corresponding to the first photosensitive point is calculated; wherein the first photosensitive point is any photosensitive point in the first photosensitive unit that has not been exposed, and the first photosensitive unit is any one of the M photosensitive units; and the third photosensitive image also includes the photosensitive data corresponding to the first photosensitive point.
  • in this way, the photosensitive data corresponding to the exposed photosensitive points in a photosensitive unit can be used to perform spatial interpolation to obtain the photosensitive data corresponding to the unexposed photosensitive points in that unit, and the interpolated data are used as the corresponding photosensitive data contained in the third photosensitive image, so that the third photosensitive image contains the photosensitive data corresponding to each photosensitive point, and a photosensitive image containing high-resolution information is obtained.
  • the M photosensitive units include a first photosensitive unit, and the method further includes: when a first photosensitive point in the first photosensitive unit has not been exposed in the N exposures corresponding to the N photosensitive images, a motion vector is calculated based on the photosensitive data contained in the N photosensitive images, and the photosensitive data corresponding to the first photosensitive point is calculated based on the motion vector; wherein the first photosensitive point is any photosensitive point in the first photosensitive unit that has not been exposed, and the first photosensitive unit is any one of the M photosensitive units; and the third photosensitive image also includes the photosensitive data corresponding to the first photosensitive point.
  • the edge feature points of the object in multiple photosensitive images can also be matched to calculate the motion vector (i.e., the direction and speed of movement) of each pixel on the photosensitive image in the time domain, and then to determine which unexposed photosensitive point and which exposed photosensitive point correspond to the same position on the object, so that the photosensitive data of the exposed photosensitive point is used as the photosensitive data of the unexposed photosensitive point.
  • in this way, the photosensitive data corresponding to all unexposed photosensitive points are determined by means of motion vectors, so that the third photosensitive image contains the photosensitive data corresponding to each photosensitive point, and a photosensitive image containing high-resolution information is obtained (a motion-compensation sketch is given below).
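  • The application does not prescribe a particular motion-estimation algorithm; the sketch below uses simple exhaustive block matching to obtain one motion vector between two frames and then copies data from exposed photosites along that vector into unexposed positions. The function names, the single-vector simplification and the search radius are illustrative assumptions:

```python
import numpy as np

def estimate_motion_vector(ref, cur, block, search=2):
    """Exhaustive block matching: find (dy, dx) minimising the SAD between the
    block of `cur` at block=(y, x, h, w) and the shifted block of `ref`."""
    y, x, h, w = block
    target = cur[y:y + h, x:x + w]
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + h > ref.shape[0] or xx + w > ref.shape[1]:
                continue
            sad = np.abs(ref[yy:yy + h, xx:xx + w] - target).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

def fill_with_motion(fused, covered, ref_frame, ref_mask, mv):
    """Fill still-unexposed positions of `fused` from `ref_frame` shifted by `mv`,
    taking data only from photosites actually exposed in the reference frame."""
    dy, dx = mv
    out = fused.copy()
    H, W = fused.shape
    for yy in range(H):
        for xx in range(W):
            sy, sx = yy + dy, xx + dx
            if not covered[yy, xx] and 0 <= sy < H and 0 <= sx < W and ref_mask[sy, sx]:
                # The exposed reference photosite saw the same scene point; reuse its data.
                out[yy, xx] = ref_frame[sy, sx]
    return out
```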
  • the method further includes: fusing N-1 photosensitive images, which are consecutive in the time domain among the N photosensitive images, with a fourth photosensitive image to obtain a fifth photosensitive image adjacent to the third photosensitive image; wherein the fourth photosensitive image is the next photosensitive image adjacent to the N photosensitive images in the time domain, and the fourth photosensitive image is adjacent to the N-1 photosensitive images in the time domain.
  • N low-resolution photosensitive images are fused in sequence according to the order of the time domain.
  • the images are collected and fused in real time, and only the most recent N photosensitive images need to be cached, which can effectively reduce cache overhead and user preview delay (a sliding-window sketch is given below).
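  • A sliding-window sketch of this successive fusion, reusing the hypothetical fuse_frames from the earlier sketch; the generator interface and buffer type are assumptions:

```python
from collections import deque

def streaming_fusion(frame_stream, n):
    """Buffer only the most recent N (frame, mask) pairs and emit one fused
    high-resolution image per incoming frame once the window is full."""
    window = deque(maxlen=n)
    for frame, mask in frame_stream:
        window.append((frame, mask))
        if len(window) == n:
            frames, masks = zip(*window)
            yield fuse_frames(list(frames), list(masks))  # fuse_frames: see earlier sketch
```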
  • the method further includes: processing each photosensitive data contained in the third photosensitive image to obtain a pixel value corresponding to each photosensitive data on the third photosensitive image.
  • since each photosensitive point on the photosensitive element corresponds to a pixel on the photosensitive image, the photosensitive data corresponding to each photosensitive point is also the photosensitive data contained in the pixel corresponding to that photosensitive point on the photosensitive image; this photosensitive data is used to calculate the pixel value of the pixel (a simple placeholder for this step is sketched below).
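  • The conversion from raw photosensitive data to pixel values belongs to the image signal processing stage, which the application does not detail; the stand-in below merely rescales raw values to 8-bit intensities (a real ISP would also demosaic, white-balance, etc.). The 10-bit raw depth is an illustrative assumption:

```python
import numpy as np

def raw_to_pixel_values(fused, bit_depth=10):
    """Placeholder ISP step: map each photosite's raw value in the third
    photosensitive image to an 8-bit pixel value by linear scaling."""
    scale = 255.0 / (2 ** bit_depth - 1)
    return np.clip(fused * scale, 0.0, 255.0).astype(np.uint8)
```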
  • the present application provides an image processing device, the device comprising: an acquisition unit, configured to acquire N photosensitive images, wherein the N photosensitive images are obtained by exposing a photosensitive element N times in succession; wherein the N photosensitive images include a first photosensitive image and a second photosensitive image, the first photosensitive image includes P photosensitive data, the P photosensitive data are obtained by respectively exposing P photosensitive points on the photosensitive element during a single exposure process, the second photosensitive image includes Q photosensitive data, the Q photosensitive data are obtained by respectively exposing Q photosensitive points on the photosensitive element during a single exposure process, the P photosensitive points and the Q photosensitive points have different positions on the photosensitive element, P and Q are integers greater than or equal to 1, and N is an integer greater than or equal to 2; and a fusion unit, configured to fuse the N photosensitive images to obtain a third photosensitive image; wherein the third photosensitive image includes the P photosensitive data corresponding to the P photosensitive points, and the Q photosensitive data corresponding to the Q photosensitive points.
  • the N photosensitive images are N consecutive images in the time domain among K*E photosensitive images, and the K*E photosensitive images are obtained by sequentially exposing through K consecutive photosensitive cycles, and each of the K photosensitive cycles exposes the photosensitive element for E consecutive times, and K is a positive integer greater than or equal to 1;
  • the photosensitive element includes M photosensitive units, each of the M photosensitive units contains C photosensitive points, and in each photosensitive cycle, all photosensitive points in each photosensitive unit are exposed once in sequence, and C and M are positive integers greater than or equal to 2.
  • the M photosensitive units include a first photosensitive unit, and the fusion unit is further configured to: when a first photosensitive point in the first photosensitive unit has not been exposed in the N exposures, perform interpolation based on the photosensitive data corresponding to the photosensitive points of the first photosensitive unit that were exposed in the N exposures, and calculate the photosensitive data corresponding to the first photosensitive point; wherein the first photosensitive point is any photosensitive point in the first photosensitive unit that has not been exposed, and the first photosensitive unit is any one of the M photosensitive units; and the third photosensitive image also includes the photosensitive data corresponding to the first photosensitive point.
  • the M photosensitive units include a first photosensitive unit, and the fusion unit is further configured to: when a first photosensitive point in the first photosensitive unit has not been exposed in the N exposures corresponding to the N photosensitive images, calculate a motion vector based on the photosensitive data contained in the N photosensitive images, and calculate the photosensitive data corresponding to the first photosensitive point based on the motion vector; wherein the first photosensitive point is any photosensitive point in the first photosensitive unit that has not been exposed, and the first photosensitive unit is any one of the M photosensitive units; and the third photosensitive image also includes the photosensitive data corresponding to the first photosensitive point.
  • the fusion unit is also configured to: fuse N-1 photosensitive images, which are consecutive in the time domain among the N photosensitive images, with a fourth photosensitive image to obtain a fifth photosensitive image adjacent to the third photosensitive image; wherein the fourth photosensitive image is the next photosensitive image adjacent to the N photosensitive images in the time domain, and the fourth photosensitive image is adjacent to the N-1 photosensitive images in the time domain.
  • the device further includes: an image signal processing unit, configured to process each photosensitive data contained in the third photosensitive image to obtain a pixel value corresponding to each photosensitive data on the third photosensitive image.
  • the present application provides an electronic device, wherein the electronic device includes at least one processor, a memory and a communication interface; the memory, the communication interface and the at least one processor are interconnected via lines, and instructions are stored in the memory; when the instructions are executed by the processor, any method described in the first aspect above is implemented.
  • an embodiment of the present application provides a chip system, which includes at least one processor, a memory and a communication interface, wherein the memory, the communication interface and the at least one processor are interconnected through lines, and instructions are stored in the memory; when the instructions are executed by the processor, any method described in the first aspect above is implemented.
  • an embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program; when the computer program is executed, the method described in any one of the above-mentioned first aspects is implemented.
  • an embodiment of the present application provides a computer program, which includes instructions. When the computer program is executed, the method described in any one of the first aspects above is implemented.
  • FIG. 1 is a network environment of an electronic device provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the structure of an electronic device provided by an embodiment of the present application.
  • FIG. 3 is a flow chart of a high-resolution, high-frame-rate photographing method provided in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a data collection method provided in an embodiment of the present application.
  • FIG. 5 is a pipeline architecture for generating a high-resolution, high-frame-rate video sequence provided in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an execution process of a high-resolution, high-frame-rate imaging method provided in an embodiment of the present application.
  • FIG. 7 is an image processing device provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of the hardware structure of an image processing device provided by an embodiment of the present application.
  • the photosensitive element is the core device of the camera on the electronic device. It converts the light signal into an electrical signal, and then converts the electrical signal into a digital signal. These digital signals are then processed by image processing modules such as ISP and converted into pixel values of the image.
  • the photosensitive element contains multiple photosites. The number of photosites on the photosensitive element is the same as the number of pixels on the photosensitive image, and the positions correspond one to one.
  • the photosensitive data output by each photosite participating in the exposure is the photosensitive data contained in the pixel at the corresponding position on the photosensitive image. In any one exposure process, a photosite may or may not participate in the exposure and produce output.
  • the image processing device provided by the embodiment of the present application may be an electronic device as described below, which is used to execute the high-resolution, high-frame-rate photographing method provided by the embodiment of the present application, and the electronic device may be a device having a communication function.
  • the electronic device may include at least one of the following items: a terminal, a smart phone, a tablet personal computer (PC), a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop PC, a netbook computer, a personal digital assistant (PDA), a portable multimedia player (PMP), a Moving Picture Experts Group (MPEG-1 or MPEG-2) Audio Layer 3 (MP3) player, a mobile medical device, a camera or a wearable device (for example, a head-mounted device (HMD) (such as electronic glasses), electronic clothing, an electronic bracelet, an electronic necklace, an electronic application accessory, an electronic tattoo, a smart watch, etc.).
  • the electronic device in the embodiment of the present application can be any device with image capture or video recording function. Therefore, it is obvious to those skilled in the art that the electronic device is not limited to the above-mentioned device.
  • the term "user" used in various embodiments disclosed in the present application may indicate a person using an electronic device or a device (e.g., an artificial intelligence electronic device) that uses the electronic device.
  • FIG. 1 is a network environment of an electronic device provided by an embodiment of the present invention.
  • the electronic device 101 may include a bus 110, a processor 120, a memory 130, an input/output (I/O) interface 140, a display 150, a communication interface 160, a camera 1 (170), a camera 2 (171), etc.
  • the camera 1 (170) and the camera 2 (171) may be variously referred to as a first camera module and a second camera module or a first image capturing module and a second image capturing module, etc. It should be understood that the electronic device 101 may also include only the camera 1 (170) without including the camera 2 (171).
  • Camera 1 (170) may be a front camera that captures the front from the display 150, and camera 2 (171) may be a rear camera that captures the back and may cooperate with the processor 120.
  • the bus 110 may be a circuit that connects the above elements to each other and transmits communications (e.g., control messages) between the above elements.
  • camera 1 (170) and camera 2 (171) may also be rear cameras and may cooperate with the processor 120.
  • the processor 120 may receive (for example) instructions from the above-mentioned other elements (for example, the memory 130, the I/O interface 140, the display 150, the communication interface 160, etc.) via the bus 110, decipher the received instructions, and perform operations or data processing corresponding to the deciphered instructions.
  • the processor 120 may include at least one of a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), and an image signal processor (ISP), for example, may include a CPU, a GPU, a DSP, and an ISP.
  • the memory 130 may store data received from the processor 120 or other components (e.g., the I/O interface 140, the display 150, the communication interface 160, etc.).
  • the memory 130 may include, for example, programming modules such as a kernel 131, middleware 132, an application programming interface (API) 133, and an application 134.
  • the programming modules may be configured using software, firmware, hardware, or a combination of two or more of software, firmware, and hardware.
  • the kernel 131 may control or manage system resources (e.g., bus 110, processor 120, memory 130, etc.) used to perform operations or functions implemented in the remaining programming modules (e.g., middleware 132, API 133, or application 134).
  • the kernel 131 may provide an interface that allows the middleware 132, API 133, or application 134 to access various components of the electronic device 101 and control or manage them.
  • the middleware 132 may perform a mediation role so that the API 133 or the application 134 can communicate with the kernel 131 to provide and obtain data.
  • the middleware 132 may perform control (e.g., scheduling or load balancing) for the task request using a method of allocating a priority of using system resources (e.g., bus 110, processor 120, or memory 130, etc.) of the electronic device to at least one of the applications 134.
  • API 133 is an interface that allows application 134 to control functions provided by kernel 131 or middleware 132, and may include at least one interface or function (e.g., instruction) for file control, window control, image processing, character control, etc.
  • application 134 may include a short message service (SMS)/multimedia message service (MMS) application, an email application, a calendar application, an alarm application, a health care application (e.g., an application for measuring exercise volume or blood sugar, etc.), or an environmental information application (e.g., an application providing atmospheric pressure, humidity, or temperature information, etc.).
  • application 134 may be an application related to information exchange between electronic device 101 and an external electronic device (e.g., electronic device 104).
  • Applications related to the information exchange may include, for example, a notification relay application for transmitting specific information to an external electronic device or a device management application for managing an external electronic device.
  • the notification relay application may include functionality for transmitting notification information generated from different applications (e.g., SMS/MMS applications, email applications, health care applications, or environmental information applications) of the electronic device 101 to an external electronic device (e.g., the electronic device 104). Additionally or alternatively, for example, the notification relay application may receive notification information from an external electronic device (e.g., the electronic device 104) and provide the notification information to a user.
  • the device management application may manage (e.g., install, delete, or update) functions (e.g., turning on or off the external electronic device itself (or some constituent components) or controlling the brightness (or resolution) of a display) and applications running in the external electronic device or services (e.g., communication services or messaging services) provided by the external electronic device.
  • the application 134 may include a specified application according to a property of an external electronic device (e.g., the electronic device 104) (e.g., the type of the electronic device).
  • the application 134 may include an application related to music reproduction.
  • the external electronic device is a mobile medical health care device
  • the application 134 may include an application related to health care.
  • the application 134 may include at least one of an application specified in the electronic device 101 and an application received from an external electronic device (e.g., the server 106 or the electronic device 104).
  • the I/O interface 140 may transmit instructions or data input by a user via an I/O unit (e.g., a sensor, a keyboard, or a touch screen) to the processor 120, the memory 130, and the communication interface 160 via, for example, the bus 110.
  • the I/O interface 140 may provide the processor 120 with data about a user touch input via the touch screen.
  • the I/O interface 140 may output instructions or data received from the processor 120, the memory 130, and the communication interface 160 via the bus 110 via an I/O unit (e.g., a speaker or a display).
  • the I/O interface 140 may output voice data processed by the processor 120 to the user via a speaker.
  • the display 150 can display various information (e.g., multimedia data or text data, etc.) to the user.
  • the communication interface 160 can connect the communication between the electronic device 101 and the external device (e.g., the electronic device 104 or the server 106).
  • the communication interface 160 can be connected to the network 162 via wireless communication or wired communication to communicate with the external device.
  • the wireless communication may include, for example, at least one of wireless fidelity (Wi-Fi), Bluetooth (BT), near field communication (NFC), GPS, or cellular communication (e.g., long term evolution (LTE), advanced LTE (LTE-A), code division multiple access (CDMA), wideband CDMA (WCDMA), universal mobile telecommunications system (UMTS), wireless broadband (WiBro), or global system for mobile communications (GSM), etc.).
  • the wired communication may include at least one of a universal serial bus (USB), a high-definition multimedia interface (HDMI), a recommended standard 232 (RS-232), and a plain old telephone service (POTS).
  • the network 162 may be a telecommunication network.
  • the telecommunication network may include at least one of a computer network, the Internet, an Internet of Things, and a telephone network.
  • a protocol e.g., a transport layer protocol, a data link layer protocol, or a physical layer protocol
  • a protocol for communication between the electronic device 101 and an external device may be supported by at least one of the application 134, the application programming interface (API) 133, the middleware 132, the kernel 131, or the communication interface 160.
  • Fig. 2 is a schematic diagram of the structure of an electronic device provided by an embodiment of the present invention.
  • the electronic device may be configured with all or part of the electronic device 101 shown in Fig. 1 .
  • the electronic device 201 may include one or more application processors (AP) 210, a communication module 220, a subscriber identity module (SIM) card 224, a memory 230, a sensor module 240, an input device 250, a display 260, an interface 270, an audio module 280, a camera module 290, a camera module 291, a power management module 295, a battery 296, an indicator 297, and a motor 298.
  • the AP 210 may drive an operating system (OS) or an application to control a plurality of hardware or software elements connected to the AP 210, and perform various data processing and operations including multimedia data.
  • the AP 210 may be implemented as a system on a chip (SoC).
  • the AP 210 may further include at least one of a graphics processing unit (GPU) and a DSP (not shown).
  • the communication module 220 may perform data transmission/reception in communication between the electronic device 201 (e.g., the electronic device 101) and other electronic devices (e.g., the electronic device 104 or the server 106) connected via a network.
  • the communication module 220 may include a cellular module 221, a Wi-Fi module 223, a BT module 225, a GPS module 227, an NFC module 228, and a radio frequency (RF) module 229.
  • the cellular module 221 can provide voice communication, image communication, short message service or Internet service, etc. via a communication network (e.g., LTE, LTE-A, CDMA, WCDMA, UMTS, WiBro or GSM, etc.).
  • the cellular module 221 can use (for example) a user identification module (e.g., SIM card 224) to perform identification and authentication on electronic devices within the communication network.
  • the cellular module 221 can perform at least a portion of the functions that the AP 210 can provide.
  • the cellular module 221 can perform at least a portion of the multimedia control functions.
  • the cellular module 221 may include a communication processor (CP).
  • the cellular module 221 may be implemented as a SoC.
  • although elements such as the cellular module 221 (e.g., a communication processor), the memory 230, the power management module 295, etc. are shown as elements independent of the AP 210 in FIG. 2, the AP 210 may be implemented to include at least a portion of the above elements (e.g., the cellular module 221).
  • the AP 210 or the cellular module 221 may load instructions or data received from a non-volatile memory connected thereto and at least one of other elements onto a volatile memory and process them.
  • the AP 210 or the cellular module 221 may store data received from at least one of the other elements or data generated by at least one of the other elements in the non-volatile memory.
  • the Wi-Fi module 223, the BT module 225, the GPS module 227, or the NFC module 228 may include, for example, a processor for processing data sent/received via the relevant module.
  • although the cellular module 221, the Wi-Fi module 223, the BT module 225, the GPS module 227, and the NFC module 228 are shown as separate blocks in FIG. 2, at least a portion (e.g., two or more elements) of the cellular module 221, the Wi-Fi module 223, the BT module 225, the GPS module 227, or the NFC module 228 may be included in one integrated circuit (IC) or IC package.
  • each corresponding processor in the cellular module 221, the Wi-Fi module 223, the BT module 225, the GPS module 227, or the NFC module 228 may be implemented as one SoC.
  • the RF module 229 can perform data transmission/reception, for example, transmission/reception of RF signals.
  • the RF module 229 may include, for example, a transceiver, a power amplifier module (PAM), a frequency filter, or a low noise amplifier (LNA), etc.
  • the RF module 229 may also include a component (e.g., a conductor, a wire, etc.) for transmitting/receiving electromagnetic waves through free space in wireless communication.
  • although the cellular module 221, the Wi-Fi module 223, the BT module 225, the GPS module 227, and the NFC module 228 share one RF module 229 in this example, at least one of the cellular module 221, the Wi-Fi module 223, the BT module 225, the GPS module 227, or the NFC module 228 may perform transmission/reception of RF signals via a separate RF module.
  • the SIM card 224 may be a card including a subscriber identification module and may be inserted into a slot formed in a specific position of the electronic device.
  • the SIM card 224 may include unique identification information (e.g., an integrated circuit card identification code (ICCID)) or subscriber information (e.g., an international mobile subscriber identity code (IMSI)).
  • the memory 230 may include an internal memory 232 (or referred to as an embedded memory) or an external memory 234.
  • the internal memory 232 may include, for example, at least one of a volatile memory (e.g., a dynamic random access memory (DRAM), a static RAM (SRAM), a synchronous dynamic RAM (SDRAM)) and a non-volatile memory (e.g., a one-time programmable read-only memory (OTPROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a mask ROM, a flash ROM, a NAND flash memory, or a NOR flash memory, etc.).
  • the internal memory 232 may be a solid state drive (SSD).
  • the external memory 234 may also include a flash drive (e.g., Compact Flash (CF), Secure Digital (SD), Micro Secure Digital (Micro-SD), Mini Secure Digital (Mini-SD), Extreme Digital (xD) or Memory Stick).
  • the external memory 234 may be functionally connected to the electronic device 201 via various interfaces.
  • the electronic device 201 may also include a storage device (or storage medium), such as a hard disk drive.
  • the sensor module 240 can measure physical quantities or detect the operating state of the electronic device 201, and convert the measured or detected information into an electrical signal.
  • the sensor module 240 may include, for example, at least one of the following items: a gesture sensor 240A, a gyroscope sensor 240B, an atmospheric pressure sensor 240C, a magnetic sensor 240D, an acceleration sensor 240E, a grip sensor 240F, a proximity sensor 240G, a color sensor 240H (e.g., a red, green, and blue (RGB) sensor), a biosensor 240I, a temperature/humidity sensor 240J, a light sensor 240K, or an ultraviolet (UV) sensor 240M.
  • the sensor module 240 may include, for example, an electronic nose sensor (not shown), an electromyogram (EMG) sensor (not shown), electroencephalogram (EEG) sensor (not shown), electrocardiogram (ECG) sensor (not shown), infrared (IR) sensor (not shown), iris sensor (not shown) or fingerprint sensor (not shown), etc.
  • the sensor module 240 may also include a control circuit for controlling at least one sensor belonging thereto.
  • the input device 250 may include a touch panel 252, a (digital) pen sensor 254, a key 256, or an ultrasonic input device 258.
  • the touch panel 252 may detect touch input using at least one of a capacitive, resistive, infrared, or ultrasonic method.
  • the touch panel 252 may also include a control circuit.
  • a capacitive touch panel may perform physical contact detection or proximity detection.
  • the touch panel 252 may also include a tactile layer. In this case, the touch panel 252 may provide a tactile response to the user.
  • the (digital) pen sensor 254 may be implemented using a method that is the same as or similar to the method of receiving the user's touch input or using a separate panel for detection.
  • the key 256 may include (for example) a physical button, an optical key, or a keypad.
  • the ultrasonic input device 258 recognizes data by using a microphone (for example, the microphone 288) of the electronic device 201 to detect the sound waves produced by an input tool that generates an ultrasonic signal, and is capable of wireless detection.
  • the electronic device 201 may use the communication module 220 to receive user input from an external device (for example, a computer or server) connected to the communication module 220.
  • the display 260 may include a panel 262, a hologram device 264, or a projector 266.
  • the panel 262 may be, for example, a liquid crystal display (LCD) or an active matrix organic light emitting diode (AM-OLED), etc.
  • the panel 262 may be implemented as a flexible, transparent, or wearable device.
  • the panel 262 may be configured as a module together with the touch panel 252.
  • the hologram device 264 may display a three-dimensional image in the air using interference of light.
  • the projector 266 may project light onto a screen to display an image.
  • the screen may be located inside or outside the electronic device 201.
  • the display 260 may also include a control circuit for controlling the panel 262, the hologram device 264, or the projector 266.
  • the interface 270 may include, for example, an HDMI 272, a USB 274, an optical interface 276, or a D-subminiature (D-sub) 278.
  • the interface 270 may be included in, for example, the communication interface 160 shown in FIG1 . Additionally or alternatively, the interface 270 may include a mobile high-definition link (MHL) interface, an SD card/multimedia card (MMC) interface, or an infrared data association (IrDA) standard interface.
  • the audio module 280 can convert sound and electrical signals bidirectionally. At least a portion of the audio module 280 may be included in, for example, the I/O interface 140 shown in FIG. 1.
  • the audio module 280 may process sound information input or output via, for example, a speaker 282, a receiver 284, an earphone 286, or a microphone 288.
  • Camera module 290 and camera module 291 are devices that can capture still images and moving pictures, and can be manufactured as one module, which can be camera 1 (170) and camera 2 (171) in Figure 1, respectively.
  • camera module 290 and camera module 291 may include one or more image sensors (e.g., front sensors or back sensors), lenses (not shown), image signal processors (ISPs) (not shown), DSPs (not shown), or flashes (e.g., LEDs or xenon lamps).
  • the ISP or DSP may be independent of the components of AP 210, but AP 210 may be implemented to include at least one of the ISP or DSP.
  • the power management module 295 may manage power of the electronic device 201. Although not shown, the power management module 295 may include, for example, a power management integrated circuit (PMIC), a charger integrated circuit (IC), or a battery gauge or a fuel gauge.
  • the PMIC may be installed inside an integrated circuit or a SoC semiconductor.
  • the charging method may be classified into a wired charging method and a wireless charging method.
  • the charging IC may charge the battery and may prevent the introduction of an overvoltage or overcurrent from the charger.
  • the charging IC may include a charging IC for at least one of a wired charging method and a wireless charging method.
  • the wireless charging method may be, for example, a magnetic resonance method, a magnetic induction method, or an electromagnetic wave method, and may additionally include an additional circuit for wireless charging, for example, a circuit (such as a coil ring, a resonant circuit, or a rectifier, etc.).
  • the battery gauge can measure, for example, the remaining amount of the battery 296 and the voltage, current, or temperature when charging.
  • the battery 296 can store or generate electricity and use the stored or generated electricity to power the electronic device 201.
  • the battery 296 may include, for example, a rechargeable battery or a solar cell.
  • the indicator 297 may display a specific state of the electronic device 201 or a portion thereof (e.g., the AP 210), such as a startup state, a message state, or a charging state.
  • the motor 298 may convert an electrical signal into a mechanical vibration.
  • the electronic device 201 may include a processor (e.g., a GPU) for supporting a mobile TV.
  • the processor for supporting a mobile TV may process media data corresponding to a standard such as digital multimedia broadcasting (DMB), digital video broadcasting (DVB), or media streaming.
  • Each of the above-mentioned elements of the electronic device may be configured using one or more components, and the names of the related elements may change according to the type of the electronic device.
  • the electronic device may include at least one of the above-mentioned elements, and may omit a portion of the elements, or may further include additional other elements.
  • a portion of the elements of the electronic device may be combined to form an entity and similarly perform the functions of the related elements before the combination.
  • the above-mentioned camera or camera module may also be referred to as a camera head, a lens module or a lens, wherein the camera or camera module may also include at least one of a focus motor and an anti-shake motor.
  • FIG. 3 is a flow chart of a high-resolution, high-frame-rate photographing method provided in an embodiment of the present application.
  • the method can be performed by an image processing device (e.g., an electronic device) as described above, where the image processing device includes a camera.
  • the photosensitive element in the camera can use a periodic exposure method to convert the captured light signal into a corresponding digital signal, thereby obtaining a corresponding photosensitive image.
  • the method includes steps S310 and S320.
  • Step S310 Acquire N photosensitive images, wherein the N photosensitive images are obtained by exposing the photosensitive element N times in succession; wherein the N photosensitive images include a first photosensitive image and a second photosensitive image, the first photosensitive image includes P photosensitive data, the P photosensitive data are obtained by exposing P photosensitive points on the photosensitive element in one exposure process, the second photosensitive image includes Q photosensitive data, the Q photosensitive data are obtained by exposing Q photosensitive points on the photosensitive element in one exposure process, the P photosensitive points and the Q photosensitive points are at different positions on the photosensitive element, P and Q are integers greater than or equal to 1, and N is an integer greater than or equal to 2.
  • the photosensitive data corresponding to each photosensitive point (also called the photosensitive data output by each photosensitive point through exposure) is the photosensitive data contained in the corresponding pixel point on the obtained photosensitive image.
  • each photosensitive image in this application is the original photosensitive data generated by exposing the photosensitive element once, usually referred to as RAW Data.
  • the first photosensitive image and the second photosensitive image are any two of the above-mentioned N photosensitive images.
  • the first photosensitive image is obtained by exposing P photosensitive points on the photosensitive element in a single exposure process.
  • the P photosensitive points correspond one-to-one to the P pixel points on the first photosensitive image.
  • Each of the P pixel points contains the photosensitive data output by the corresponding photosensitive point through exposure.
  • each of the Q pixel points on the second photosensitive image contains the photosensitive data output by the corresponding photosensitive point through exposure.
  • in any two of the N exposures, the exposed photosensitive points on the photosensitive element are different; that is, among the N photosensitive images obtained by the N exposures, the photosensitive data contained in each photosensitive image are obtained by exposing photosensitive points at different positions.
  • the following describes in detail the exposure method of the terminal device in the time domain and the spatial domain.
  • the terminal device collects data by exposing periodically (i.e., over the K photosensitive cycles in this application); the number of exposures of the photosensitive element is the same in each photosensitive cycle, and each exposure of the photosensitive element generates one photosensitive image. The photosensitive element is then exposed over consecutive photosensitive cycles to obtain multiple photosensitive images.
  • the N photosensitive images are N consecutive images in the time domain among K*E photosensitive images, and the K*E photosensitive images are obtained by exposing in sequence through K consecutive photosensitive cycles, and each of the K photosensitive cycles exposes the photosensitive element for E consecutive times, where K is a positive integer greater than or equal to 1.
  • the photosensitive element is exposed through K consecutive photosensitive cycles, and the photosensitive element is exposed E times in each photosensitive cycle, so that K*E photosensitive images that are continuous in the time domain are obtained.
  • the above N photosensitive images are N images that are consecutive in the time domain among the K*E photosensitive images.
  • the specific exposure method in each photosensitive cycle is as follows:
  • the photosensitive element includes M photosensitive units, each of the M photosensitive units includes C photosensitive points, and in each photosensitive cycle, all photosensitive points in each photosensitive unit are exposed once in sequence, and C and M are positive integers greater than or equal to 2.
  • the photosensitive element is divided into at least one photosensitive unit, each of which includes at least one photosensitive point.
  • at least one photosensitive point in each photosensitive unit is exposed during each exposure, and the exposed photosensitive points in the same photosensitive unit are different during any two exposures.
  • for example, each photosensitive unit contains 4 photosites: photosite 1, photosite 2, photosite 3, and photosite 4. If each photosensitive cycle contains 4 exposure processes, and each exposure process exposes one photosite in the photosensitive unit, then in the 4 exposure processes of one photosensitive cycle, the exposure order of the 4 photosites in each photosensitive unit can be: photosite 3 in the first exposure, photosite 2 in the second, photosite 4 in the third, and photosite 1 in the fourth; or photosite 4 in the first exposure, photosite 1 in the second, photosite 2 in the third, and photosite 3 in the fourth.
  • as another example, each photosensitive unit contains 4 photosites: photosite 1, photosite 2, photosite 3, and photosite 4. If each photosensitive cycle contains 3 exposure processes, and each exposure process exposes one or two photosites in the photosensitive unit, then in the 3 exposure processes of one photosensitive cycle, the exposure order of the 4 photosites in each photosensitive unit can be: photosite 3 in the first exposure, photosites 2 and 4 in the second, and photosite 1 in the third; or photosites 2 and 4 in the first exposure, photosite 1 in the second, and photosite 3 in the third (both orderings are expressed as schedules in the sketch below).
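  • The example orderings above can be written down as per-cycle exposure schedules; the list-of-tuples representation below is only an illustrative convention (entry e names the photosites of a unit exposed in the e-th exposure of the cycle):

```python
# Entry e lists the photosites of a unit (numbered 1-4) exposed in the e-th exposure of a cycle.
schedule_four_exposures = [(3,), (2,), (4,), (1,)]   # one photosite per exposure
schedule_three_exposures = [(3,), (2, 4), (1,)]      # one or two photosites per exposure

def covers_unit_once(schedule, c=4):
    """Every photosite of a unit must be exposed exactly once per photosensitive cycle."""
    exposed = sorted(p for group in schedule for p in group)
    return exposed == list(range(1, c + 1))

assert covers_unit_once(schedule_four_exposures)
assert covers_unit_once(schedule_three_exposures)
```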
  • FIG. 4 is a schematic diagram of a data acquisition method provided in an embodiment of the present application, which shows a schematic diagram of the process of exposing a photosensitive element four times continuously within a photosensitive cycle in the time domain t.
  • the four exposures respectively obtain four photosensitive images: photosensitive images 1-4, and the four photosensitive images are four consecutive images in a low-resolution and high-frame rate photosensitive image sequence.
  • the plane where the photosensitive element is located is represented by the X-0-Y coordinate system, and the length of each photosensitive point in the x direction and the y direction is 1 respectively. It can be seen that the photosensitive element contains 16 photosensitive points. The 16 photosensitive points are divided into 4 photosensitive units: photosensitive units 1-4. Each photosensitive unit contains 4 photosensitive points.
  • the coordinates of the 4 photosensitive points contained in photosensitive unit 1 are (1, 1) (2, 1) (1, 2) (2, 2) respectively; the coordinates of the 4 photosensitive points contained in photosensitive unit 2 are (3, 1) (3, 2) (4, 1) (4, 2) respectively; the coordinates of the 4 photosensitive points contained in photosensitive unit 3 are (1, 3) (2, 3) (1, 4) (2, 4) respectively; the coordinates of the 4 photosensitive points contained in photosensitive unit 4 are (3, 3) (4, 3) (3, 4) (4, 4) respectively.
  • the photosites within each photosensitive unit are numbered 1, 2, 3, and 4 from left to right and from top to bottom.
  • the photosites in the shaded part are photosites that are not exposed, and the photosites in the blank part are photosites that are exposed. It can be seen that in the first exposure process in FIG4 , photosites 1 in each photosensitive unit are exposed, in the second exposure process, photosites 2 in each photosensitive unit are exposed, in the third exposure process, photosites 4 in each photosensitive unit are exposed, and in the fourth exposure process, photosites 3 in each photosensitive unit are exposed.
  • in this way, each photosensitive point on the photosensitive element is exposed once within one photosensitive cycle.
  • after the above four exposures, four photosensitive images are obtained as shown in FIG. 4: photosensitive images 1-4.
  • a cell on each photosensitive image is a pixel, that is, the number of pixel points on the photosensitive image is the same as the number of photosensitive points on the photosensitive element, and the positions correspond one to one.
  • the photosensitive data of the photosensitive point with coordinates (1, 1) on the photosensitive element after exposure is the photosensitive data contained in the pixel with coordinates (1, 1) on the photosensitive image.
  • the pixels in the shaded part are pixels containing photosensitive data
  • the pixels in the blank part are pixels not containing photosensitive data.
  • the pixels at coordinates (1, 1), (3, 1), (1, 3) and (3, 3) in the photosensitive image 1 contain photosensitive data.
  • the photosensitive data contained in these four pixels are obtained by exposing the photosensitive points at the corresponding position coordinates on the photosensitive element during the first exposure process, while the pixels at the remaining positions in the photosensitive image 1 do not contain photosensitive data.
  • the processes of obtaining the photosensitive image 2 obtained by the second exposure, the photosensitive image 3 obtained by the third exposure, and the photosensitive image 4 obtained by the fourth exposure correspond to the same process as the photosensitive image 1 obtained by the first exposure mentioned above, and will not be repeated here.
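  • The 4x4 example of FIG. 4 can be reproduced with per-exposure boolean masks; the coordinate convention below (0-based rows and columns, photosites 1-4 laid out left-to-right, top-to-bottom inside each 2x2 unit) is an assumption made only for this sketch:

```python
import numpy as np

H = W = 4                        # the 4x4 photosensitive element of FIG. 4
UNIT = 2                         # each photosensitive unit is a 2x2 block of photosites
OFFSETS = {1: (0, 0), 2: (0, 1), 3: (1, 0), 4: (1, 1)}   # within-unit position of photosites 1-4
EXPOSURE_ORDER = [1, 2, 4, 3]    # photosite exposed in exposures 1-4 of the cycle (FIG. 4)

def exposure_masks():
    """One boolean mask per exposure: True where a photosite is exposed."""
    masks = []
    for p in EXPOSURE_ORDER:
        mask = np.zeros((H, W), dtype=bool)
        dy, dx = OFFSETS[p]
        mask[dy::UNIT, dx::UNIT] = True   # the same photosite in every 2x2 unit
        masks.append(mask)
    return masks

# Over one photosensitive cycle every photosite is exposed exactly once.
assert (sum(m.astype(int) for m in exposure_masks()) == 1).all()
```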
  • Step S320 Fusing the N photosensitive images to obtain a third photosensitive image; wherein the third photosensitive image includes the P photosensitive data corresponding to the P photosensitive points, and the Q photosensitive data corresponding to the Q photosensitive points.
  • the N photosensitive images and the third photosensitive image have the same size and contain the same number of pixels.
  • the pixels contained in each of the N photosensitive images correspond one-to-one to the photosensitive points contained in the photosensitive element.
  • the pixels in the third photosensitive image correspond one-to-one to the photosensitive points in the photosensitive element.
  • the one-to-one correspondence is described below using a second photosensitive point on the photosensitive element as an example: the position of the pixel corresponding to the second photosensitive point in the first photosensitive image, the position of the pixel corresponding to the second photosensitive point in the second photosensitive image, and the position of the pixel corresponding to the second photosensitive point in the third photosensitive image are all the same.
  • the process of obtaining the third photosensitive image by fusion processing is as follows: taking the first photosensitive image as an example, the P photosensitive points correspond to the P pixels on the first photosensitive image, and the P pixels on the first photosensitive image correspond to the P photosensitive data output by the P photosensitive points through exposure, and each of the P photosensitive data is used to describe the pixel information of the corresponding pixel.
  • the P photosensitive points also correspond to the P pixels on the third photosensitive image, and the positions of the P pixels on the third photosensitive image correspond to the same positions of the P pixels on the first photosensitive image.
  • the process of fusing the first photosensitive image is as follows: taking the first pixel among the P pixels on the first photosensitive image as an example, the photosensitive data contained in the first pixel on the first photosensitive image is used as the photosensitive data contained in the first pixel in the third photosensitive image.
  • the first pixel is any one of the P pixels.
  • the above process is a process of simply fusing the N acquired photosensitive images.
  • N is equal to the number of exposures in each photosensitive cycle.
  • since the photosensitive points in each photosensitive unit are exposed in sequence in each photosensitive cycle, when N consecutive photosensitive images are selected in the time domain and N is equal to the number of exposures in one photosensitive cycle, the photosensitive data contained in the N photosensitive images are obtained by exposing each photosensitive point on the photosensitive element exactly once.
  • in this case, the simple fusion method in the aforementioned embodiment can be used directly to fuse the N photosensitive images, and each pixel in the obtained third photosensitive image contains one piece of photosensitive data output by the exposure of the corresponding photosensitive point.
  • Mode 1: during each exposure process in a photosensitive cycle, one photosensitive point in each photosensitive unit is exposed, and N is equal to C. In this exposure mode, by making N equal to C, the photosensitive data contained in the N photosensitive images are obtained by exposing each photosensitive point on the photosensitive element once.
  • Mode 2: during each exposure process in a photosensitive cycle, one or more photosensitive points in each photosensitive unit are exposed, and N is equal to the number of exposures in a photosensitive cycle. In this exposure mode, by making N equal to the number of exposures in a photosensitive cycle, the photosensitive data contained in the N photosensitive images are obtained by exposing each photosensitive point on the photosensitive element once.
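As a hedged illustration of the "simple fusion" described above (not taken from the original text), the helper below merges N sparse photosensitive images by copying, for every pixel, the photosensitive data from whichever image was exposed at that position; the toy 2x2 example assumes Mode 1 with C = N = 2 photosensitive points per unit.

```python
import numpy as np

def simple_fuse(images):
    """Simple fusion: each pixel of the fused image takes the photosensitive
    data from whichever input image contains data (non-NaN) at that position."""
    fused = np.full_like(images[0], np.nan)
    for img in images:
        mask = ~np.isnan(img)
        fused[mask] = img[mask]
    return fused

# Toy example: two photosensitive images whose exposed pixels complement each
# other, so the fused image contains one piece of photosensitive data per pixel.
a = np.array([[1.0, np.nan], [np.nan, 4.0]])
b = np.array([[np.nan, 2.0], [3.0, np.nan]])
print(simple_fuse([a, b]))   # [[1. 2.] [3. 4.]]
```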
  • the photosensitive data corresponding to the photosites that are not exposed in this continuous time domain can be calculated by spatial interpolation or calculation of motion vectors. The following is a detailed discussion of these two calculation methods.
  • when a photosensitive point in a photosensitive unit has not been exposed in the above-mentioned continuous time domain (i.e., the N exposures), its photosensitive data can be calculated by interpolating based on the photosensitive data of the photosensitive points in that unit that were exposed; the third photosensitive image then also includes the photosensitive data corresponding to this unexposed photosensitive point.
  • the interpolation process may be nearest neighbor interpolation, bilinear interpolation, high-order interpolation or other feasible interpolation methods, which are not limited in the present application.
  • the calculated photosensitive data is used as the photosensitive data contained in the corresponding pixel in the third photosensitive image, so that each pixel in the third photosensitive image contains one piece of photosensitive data.
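The snippet below sketches one possible spatial interpolation step under stated assumptions (NaN marks pixels without photosensitive data, as in the earlier sketches); it uses a simple neighbour average as a stand-in for the nearest neighbor or bilinear interpolation mentioned above and is not the application's prescribed implementation.

```python
import numpy as np

def fill_by_neighbour_mean(fused):
    """Fill pixels that still lack photosensitive data (NaN) with the mean of
    their available immediate neighbours -- a simple stand-in for the nearest
    neighbor / bilinear interpolation mentioned above."""
    out = fused.copy()
    h, w = out.shape
    for y, x in zip(*np.where(np.isnan(out))):
        neighbours = [out[j, i]
                      for j in range(max(0, y - 1), min(h, y + 2))
                      for i in range(max(0, x - 1), min(w, x + 2))
                      if not np.isnan(out[j, i])]
        out[y, x] = np.mean(neighbours) if neighbours else 0.0
    return out

# Example: one photosensitive point of a 2 x 2 unit was never exposed during
# the N exposures; its value is interpolated from the three exposed ones.
partial = np.array([[10.0, 12.0], [14.0, np.nan]])
print(fill_by_neighbour_mean(partial))   # the missing corner becomes 12.0
```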
  • a motion vector is calculated based on the photosensitive data contained in the N photosensitive images, and the photosensitive data corresponding to the first photosensitive point is calculated based on the motion vector; wherein the first photosensitive point is any unexposed photosensitive point in the first photosensitive unit, and the first photosensitive unit is any one of the M photosensitive units; the third photosensitive image also includes the photosensitive data corresponding to the first photosensitive point.
  • the previous H photosensitive images adjacent to the N photosensitive images in the time domain are obtained, and the H photosensitive images are fused using the simple fusion method to obtain a sixth photosensitive image.
  • H is equal to the number of exposures in each photosensitive cycle, that is, each pixel in the sixth photosensitive image contains a piece of photosensitive data; in this case, H is greater than N.
  • the N photosensitive images are also fused using the simple fusion method to obtain a third photosensitive image.
  • feature point matching is performed on the third photosensitive image and the sixth photosensitive image (for example, by matching feature points on the edge of the object, etc.), specifically: the positions of pixels representing the same object part on the third photosensitive image and the sixth photosensitive image and the corresponding displacements are determined; and then the moving direction and speed of the pixels are calculated through the displacement and the time difference between the third photosensitive image and the sixth photosensitive image, that is, the moving direction and speed (or motion vector) of the object in the photosensitive image.
  • the photosensitive data corresponding to the photosensitive points in the first photosensitive unit that were not exposed during the N exposures is then determined.
  • the specific calculation process is described by taking the first photosensitive point in the first photosensitive unit as an example.
  • the first photosensitive point corresponds to the first pixel on the third photosensitive image.
  • based on the motion vector obtained by the above calculation, it is determined that the first pixel corresponds to the second pixel on the sixth photosensitive image.
  • the second pixel is a pixel representing the same part of the object as the first pixel.
  • the photosensitive data contained in the second pixel is used as the photosensitive data contained in the first pixel, that is, as the photosensitive data corresponding to the first photosensitive point.
  • in this way, the photosensitive data corresponding to each unexposed photosensitive point in each of the M photosensitive units is calculated.
  • the calculated photosensitive data of each unexposed photosensitive point is used as the photosensitive data contained in the corresponding pixel on the third photosensitive image, so that each pixel on the third photosensitive image contains one piece of photosensitive data.
  • the time difference between the third photosensitive image and the sixth photosensitive image is obtained by performing a difference calculation between the moment corresponding to the third photosensitive image and the moment corresponding to the sixth photosensitive image.
  • the moment corresponding to the third photosensitive image may be the middle moment of the N exposures corresponding to the third photosensitive image in the time domain.
  • the calculation method of the moment corresponding to the sixth photosensitive image is the same as the calculation method of the moment corresponding to the third photosensitive image.
  • the size of the feature points may be the same as the size of the pixels.
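As a rough, assumption-laden sketch of the motion-vector approach (again not the original implementation), the function below fills an unexposed pixel of the fused third image from the pixel of the earlier fused sixth image that depicts the same object part, displaced by a motion vector (dy, dx); in practice the vector would come from feature-point matching and the time difference between the two images, and it would generally vary per pixel rather than being a single global value as assumed here.

```python
import numpy as np

def fill_by_motion_vector(third, sixth, motion_vec):
    """Fill pixels of `third` that lack photosensitive data (NaN) by fetching
    the pixel in the earlier fused image `sixth` that depicts the same part of
    the object, displaced by the estimated motion vector (dy, dx)."""
    dy, dx = motion_vec
    out = third.copy()
    h, w = out.shape
    for y, x in zip(*np.where(np.isnan(out))):
        sy, sx = y - dy, x - dx                # same object part at the earlier moment
        if 0 <= sy < h and 0 <= sx < w and not np.isnan(sixth[sy, sx]):
            out[y, x] = sixth[sy, sx]
    return out

# Toy example with a global one-pixel rightward motion between the sixth and
# the third fused images (an assumption made purely for illustration).
sixth = np.arange(16.0).reshape(4, 4)
third = np.roll(sixth, 1, axis=1)
third[2, 2] = np.nan                           # an unexposed photosensitive point
print(fill_by_motion_vector(third, sixth, (0, 1))[2, 2])   # recovered value 9.0
```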
  • in this way, each pixel in the fused third photosensitive image contains one piece of photosensitive data, that is, a high-resolution photosensitive image is obtained.
  • the above embodiment introduces the process of fusing N photosensitive images that are continuous in the time domain among K*E photosensitive images to obtain a corresponding high-resolution photosensitive image.
  • the following describes how to generate a corresponding high-resolution and high-frame-rate photosensitive image sequence based on K*E photosensitive images (i.e., a low-resolution and high-frame-rate photosensitive image sequence).
  • the K*E photosensitive images are acquired through continuous acquisition of K photosensitive cycles, and a group of photosensitive images (i.e., N images) that are continuous in time domain can be selected from the K*E photosensitive images each time for the above fusion processing to obtain a high-resolution photosensitive image.
  • for any two groups of photosensitive images selected from the K*E photosensitive images, if the two groups contain N-1 common photosensitive images, then the two photosensitive images obtained after the above fusion processing is applied to the two groups respectively are adjacent in the high-resolution and high-frame-rate photosensitive image sequence.
  • a pipeline architecture can be used for fusion to obtain a high-resolution and high-frame-rate photosensitive image sequence, that is, according to the acquisition time domain of the low-resolution and high-frame-rate photosensitive image sequence, with one photosensitive image as a step size, N adjacent images in the time domain are selected in turn for fusion processing.
  • the first time, the 1st to Nth photosensitive images among the K*E photosensitive images are selected for the above-mentioned fusion processing to obtain a high-resolution photosensitive image.
  • the second time, the 2nd to (N+1)th photosensitive images are selected for fusion processing to obtain another high-resolution photosensitive image, and so on.
  • the following describes a process of obtaining, after obtaining the third photosensitive image, the next photosensitive image adjacent to the third photosensitive image in a high-resolution and high-frame-rate photosensitive image sequence: fusing N-1 photosensitive images and a fourth photosensitive image that are continuous in time domain among the N photosensitive images to obtain a fifth photosensitive image adjacent to the third photosensitive image; wherein the fourth photosensitive image is the next photosensitive image adjacent to the N photosensitive images in time domain, and the fourth photosensitive image is adjacent to the N-1 photosensitive images in time domain.
  • FIG. 5 is a pipeline architecture for generating a high-resolution and high-frame-rate video sequence provided in an embodiment of the present application.
  • the high frame rate low resolution photosensitive image sequence collected by the photosensitive element includes photosensitive images 1-6, wherein photosensitive image 1 is the photosensitive image obtained by the first exposure.
  • the high frame rate low resolution photosensitive image sequence is stored in the storage unit.
  • the digital number on the data stream represents the transmission process of the corresponding photosensitive image.
  • the pipeline architecture is an architecture for real-time acquisition and real-time fusion. Assuming that four consecutive photosensitive images in the time domain are selected for fusion each time, the specific process of the above pipeline architecture is as follows: During the first fusion process, the fusion unit obtains photosensitive images 1-4 from the storage unit for fusion to obtain photosensitive image 7; During the second fusion process, the fusion unit obtains photosensitive images 2-5 from the storage unit for fusion to obtain photosensitive image 8; During the third fusion process, the fusion unit obtains photosensitive images 3-6 from the storage unit for fusion to obtain photosensitive image 9. Then the fusion is performed in sequence according to the above order, and finally a high frame rate and high resolution photosensitive image sequence is obtained.
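A minimal sketch of the sliding-window pipeline just described (the NumPy representation and the helper names are assumptions; `simple_fuse` is the helper from the earlier sketch): the low-resolution sequence is traversed with a step size of one image, and every window of N time-adjacent images is fused into one high-resolution image.

```python
def pipeline_fusion(low_res_images, n, fuse):
    """Slide a window of n time-adjacent photosensitive images over the
    low-resolution, high-frame-rate sequence with a step size of one image,
    fusing each window (with the given fuse function) into one image."""
    return [fuse(low_res_images[start:start + n])
            for start in range(len(low_res_images) - n + 1)]

# With photosensitive images 1-6 and n = 4 this yields three fused images,
# corresponding to photosensitive images 7, 8 and 9 in the FIG. 5 example:
#   high_res_sequence = pipeline_fusion(images_1_to_6, 4, simple_fuse)
```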
  • the method further includes: processing each photosensitive data contained in the third photosensitive image to obtain a pixel value corresponding to each photosensitive data on the third photosensitive image.
  • the above process of processing the photosensitive data to obtain the corresponding pixel value can be implemented by a feasible module such as an image signal processing (ISP) module on the terminal device.
  • the pixel value corresponding to each of the above-mentioned photosensitive data is the pixel value corresponding to the pixel point where the photosensitive data is located.
  • the pixel value obtained by processing each photosensitive data can be in RGB format, YUV format or other feasible format, which is not limited in this application.
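Purely for illustration, the snippet below maps each piece of photosensitive (raw) data in the fused image to an 8-bit pixel value; the black and white levels are invented placeholders, and a real ISP pipeline would additionally perform demosaicing, white balance, colour-space conversion to RGB or YUV, and so on.

```python
import numpy as np

def raw_to_pixel_values(raw, black_level=64, white_level=1023):
    """Tiny stand-in for the ISP step: normalise each photosensitive datum of
    the fused image and map it to an 8-bit pixel value. The levels are
    hypothetical; a real ISP does far more than this."""
    norm = (raw.astype(float) - black_level) / (white_level - black_level)
    return np.clip(norm * 255.0, 0, 255).astype(np.uint8)

print(raw_to_pixel_values(np.array([[64, 543], [1023, 300]])))
```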
  • Figure 6 is a schematic diagram of the execution process of a high-resolution and high-frame-rate camera method provided in an embodiment of the present application.
  • the method can be implemented by a camera module as shown in Figure 6, and the camera module can be the camera in the embodiment of Figure 1.
  • the execution process of the method includes: obtaining a high frame rate and low resolution photosensitive image sequence by periodically exposing the photosensitive element (i.e., the aforementioned multiple continuous photosensitive cycles), and caching by the storage unit, and then the fusion unit uses the aforementioned pipeline architecture to fuse the high frame rate and low resolution photosensitive image sequence in the time domain order to obtain a high resolution and high frame rate photosensitive image sequence. Finally, the image signal processing unit (such as ISP, etc.) processes the photosensitive data contained in each photosensitive image in the high resolution and high frame rate photosensitive image sequence to obtain the corresponding pixel value, that is, to obtain a high resolution and high frame rate image sequence/video for display to the user.
  • the modules included in the camera module in FIG. 6 are only examples and do not limit how the modules are integrated; that is, the storage unit, the fusion unit and the image signal processing unit may be independent modules or may be integrated into one or more modules.
  • FIG. 7 is an image processing device provided in an embodiment of the present application.
  • the device includes an acquisition unit 701 and a fusion unit 702; wherein,
  • the acquisition unit 701 is used to acquire N photosensitive images, wherein the N photosensitive images are obtained by exposing the photosensitive element N times in succession; wherein the N photosensitive images include a first photosensitive image and a second photosensitive image, wherein the first photosensitive image includes P photosensitive data, wherein the P photosensitive data are obtained by exposing P photosensitive points on the photosensitive element in one exposure process, and the second photosensitive image includes Q photosensitive data, wherein the Q photosensitive data are obtained by exposing Q photosensitive points on the photosensitive element in one exposure process, wherein the P photosensitive points and the Q photosensitive points have different positions on the photosensitive element, wherein P and Q are integers greater than or equal to 1, and N is an integer greater than or equal to 2.
  • the fusion unit 702 is used to fuse the N photosensitive images to obtain a third photosensitive image; wherein the third photosensitive image includes the P photosensitive data corresponding to the P photosensitive points, and the Q photosensitive data corresponding to the Q photosensitive points.
  • the image processing device also includes a storage unit (not shown in Figure 7) for storing the above-mentioned N photosensitive images or storing the N photosensitive images and the previous H photosensitive images adjacent to the N photosensitive images in the time domain, where H is equal to the number of exposures in one photosensitive cycle.
  • the N photosensitive images are N consecutive images in the time domain among K*E photosensitive images, and the K*E photosensitive images are obtained by sequentially exposing through K consecutive photosensitive cycles, and each of the K photosensitive cycles exposes the photosensitive element for E consecutive times, and K is a positive integer greater than or equal to 1;
  • the photosensitive element includes M photosensitive units, each of the M photosensitive units contains C photosensitive points, and in each photosensitive cycle, all photosensitive points in each photosensitive unit are exposed once in sequence, and C and M are positive integers greater than or equal to 2.
  • the M photosensitive units include a first photosensitive unit
  • the fusion unit 702 is further used for: when a first photosensitive point in the first photosensitive unit has not been exposed in the N exposures, interpolation is performed based on the photosensitive data corresponding to the photosensitive points of the first photosensitive unit exposed in the N exposures, and the photosensitive data corresponding to the first photosensitive point is calculated; wherein the first photosensitive point is any unexposed photosensitive point in the first photosensitive unit, and the first photosensitive unit is any one of the M photosensitive units; and the third photosensitive image also includes the photosensitive data corresponding to the first photosensitive point.
  • the M photosensitive units include a first photosensitive unit
  • the fusion unit 702 is further used to: when a first photosensitive point in the first photosensitive unit has not been exposed in the N exposures corresponding to the N photosensitive images, calculate a motion vector based on the photosensitive data contained in the N photosensitive images, and calculate the photosensitive data corresponding to the first photosensitive point based on the motion vector; wherein the first photosensitive point is any unexposed photosensitive point in the first photosensitive unit, and the first photosensitive unit is any one of the M photosensitive units; and the third photosensitive image also includes the photosensitive data corresponding to the first photosensitive point.
  • the fusion unit 702 is also used to: fuse N-1 photosensitive images and a fourth photosensitive image that are continuous in time domain among the N photosensitive images to obtain a fifth photosensitive image adjacent to the third photosensitive image; wherein the fourth photosensitive image is the next photosensitive image adjacent to the N photosensitive images in time domain, and the fourth photosensitive image is adjacent to the N-1 photosensitive images in time domain.
  • the device further includes: an image signal processing unit, configured to process each photosensitive data contained in the third photosensitive image to obtain a pixel value corresponding to each photosensitive data on the third photosensitive image.
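The unit structure of FIG. 7 could be mirrored in software roughly as follows; this is a structural sketch only, with hypothetical callables standing in for the acquisition unit 701, the fusion unit 702 and the optional image signal processing unit.

```python
class ImageProcessingDevice:
    """Sketch of the device in FIG. 7: an acquisition unit feeding a fusion
    unit, with an optional image-signal-processing stage (all hypothetical)."""

    def __init__(self, acquire, fuse, isp=None):
        self.acquire = acquire   # acquisition unit 701: returns N photosensitive images
        self.fuse = fuse         # fusion unit 702: fuses N images into a third image
        self.isp = isp           # optional unit turning photosensitive data into pixel values

    def process_window(self, n):
        images = self.acquire(n)           # N time-adjacent photosensitive images
        fused = self.fuse(images)          # third photosensitive image
        return self.isp(fused) if self.isp else fused
```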
  • the specific execution process of the above-mentioned image processing device 700 can refer to the execution process of the high-resolution and high-frame rate camera method described in the embodiment of Figure 3 in the above-mentioned embodiment, and will not be repeated here.
  • FIG. 8 is a schematic diagram of the hardware structure of an image processing device provided by an embodiment of the present invention.
  • the image processing device 800 may include all or part of the components or modules in the electronic device 101 and the electronic device 201. As shown in FIG. 8, the image processing device 800 may be used as an implementation of the image processing device 700.
  • the image processing device 800 includes a processor 802, a memory 804, an input/output interface 806, a communication interface 808, and a bus 810.
  • the processor 802, the memory 804, the input/output interface 806, and the communication interface 808 are connected to each other through the bus 810.
  • the processor 802 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, which are used to execute relevant programs to implement the functions required to be performed by the units included in the image processing device 800 provided in the embodiment of the present invention, or to perform the high-resolution and high-frame-rate camera method provided in the method embodiments and the summary of the present invention.
  • the processor 802 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 802 or an instruction in the form of software.
  • the above-mentioned processor 802 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • it may implement or execute the methods, steps and logical block diagrams disclosed in the embodiments of the present invention.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor, etc.
  • the software module may be located in a random access memory, flash memory, read-only memory, programmable read-only memory, or electrically erasable programmable memory, register, or other mature storage media in the art.
  • the storage medium is located in the memory 804, and the processor 802 reads the information in the memory 804 and completes the steps in the above method embodiment in combination with its hardware.
  • the memory 804 may be a read-only memory (ROM), a static storage device, a dynamic storage device or a random access memory (RAM).
  • the memory 804 may store an operating system and other application programs.
  • the program code for implementing the technical solution provided in the embodiment of the present invention is stored in the memory 804, and the processor 802 executes the operations required to be performed by the units included in the image processing device 700, or executes the high-resolution and high-frame-rate camera method provided in the method embodiment of the present invention.
  • the input/output interface 806 is used to receive input data and information, and output data such as operation results.
  • the communication interface 808 uses a transceiver device such as, but not limited to, a transceiver to implement communication between the image processing apparatus 800 and other devices or a communication network.
  • the bus 810 may include a path for transmitting information between the various components of the image processing device 800 (e.g., the processor 802, the memory 804, the input/output interface 806, and the communication interface 808).
  • although the image processing device 800 shown in FIG. 8 only shows the processor 802, the memory 804, the input/output interface 806, the communication interface 808 and the bus 810, those skilled in the art should understand that in a specific implementation the image processing device 800 also includes other devices necessary for normal operation, such as a display and a camera. At the same time, according to specific needs, those skilled in the art should understand that the image processing device 800 may also include hardware devices for implementing other additional functions. In addition, those skilled in the art should understand that the image processing device 800 may also include only the devices necessary to implement the embodiments of the present invention, and does not necessarily include all the devices shown in FIG. 8.
  • the embodiment of the present application provides a chip system, the chip system includes at least one processor, a memory and a communication interface, the memory, the communication interface and the at least one processor are interconnected by lines, and the at least one memory stores instructions; when the instructions are executed by the processor, some or all of the steps recorded in any one of the above method embodiments are implemented.
  • the embodiment of the present application provides a computer storage medium, the computer storage medium stores a computer program, and when the computer program is executed, some or all of the steps recorded in any one of the above method embodiments are implemented.
  • An embodiment of the present application provides a computer program, which includes instructions.
  • when the computer program is executed by a processor, some or all of the steps of any one of the above method embodiments are implemented.
  • the disclosed device can be implemented in other ways.
  • the device embodiments described above are only schematic, and the division of the above units is only a division by logical function; in actual implementation there may be other ways of dividing, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the coupling or direct coupling or communication connection between each other shown or discussed can be an indirect coupling or communication connection through some interface, device or unit, which can be electrical or other forms.
  • the units described above as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.


Abstract

Disclosed in the present application are a high-resolution high-frame-rate photographing method, and an image processing apparatus. The method comprises: acquiring N photosensitive images, wherein the N photosensitive images are respectively obtained by performing N consecutive instances of exposure on a photosensitive element, a first photosensitive image among the N photosensitive images comprises P pieces of photosensitive data, and a second photosensitive image therein comprises Q pieces of photosensitive data, the P pieces of photosensitive data and the Q pieces of photosensitive data being respectively obtained by exposing P photosensitive points and Q photosensitive points, and the positions of the P photosensitive points and Q photosensitive points on the photosensitive element being different; and performing fusion processing on the N photosensitive images to obtain a third photosensitive image, wherein the third photosensitive image comprises the P pieces of photosensitive data respectively corresponding to the P photosensitive points, and the Q pieces of photosensitive data respectively corresponding to the Q photosensitive points. By means of the present application, a high-resolution high-frame-rate photosensitive image can be obtained without increasing data transmission bandwidths, thereby improving the output image quality of an electronic device.

Description

High-resolution and high-frame-rate imaging method and image processing device

This application claims priority to the Chinese patent application filed with the China Patent Office on September 26, 2022, with application number 202211187446.0 and application name "High-resolution and high-frame-rate camera method and image processing device", the entire contents of which are incorporated by reference in this application.

Technical Field

The present application relates to the technical field of video data acquisition and processing, and in particular to a high-resolution and high-frame-rate camera method and an image processing device.

Background

With the popularization of mobile devices and the rise of the short video industry, mobile phone photography has higher and higher requirements for imaging quality. The video specification requirements for recording daily life have increased from 1080 progressive scan (Progressive Scan, P) + 30 frames per second (Frame Per Second, FPS) to 4K + 30 FPS, or even 8K + 30 FPS.

The process of video data acquisition on mobile devices is usually as follows: the raw light-signal data is collected by the camera's photosensitive element (for example, a sensor), transmitted via the data bus to the image processing module (for example, an image signal processor (ISP)), processed into normal color image data, and finally transmitted to the device for data storage or real-time preview.

Due to the limitation of the transmission bandwidth and transmission power consumption of the data bus from the photosensitive element to the image processing module, the total amount of raw light-signal data output by the photosensitive element per unit time has an upper limit. That is, without special processing, the photosensitive image sequence output by the photosensitive element cannot meet the requirements of high resolution and high frame rate at the same time.

Existing technologies usually convert captured low-resolution and high-frame-rate photosensitive images into high-resolution and high-frame-rate photosensitive images in the following two ways: (i) collecting through a binocular camera to obtain more additional photosensitive data to achieve high resolution; (ii) collecting low-resolution and high-frame-rate photosensitive images, and then using a trained neural network model to perform image processing (for example, upsampling) to obtain high-resolution photosensitive images.

However, the above existing technologies rely on specific hardware (i.e., a binocular camera), and therefore have a narrow scope of application and high cost, or rely on model prior information, resulting in uncertain generalization effects and requiring an additional neural network processing unit in hardware.
Summary of the Invention

The embodiments of the present application provide a high-resolution and high-frame-rate camera method and an image processing device, which can use inter-frame information to fuse low-resolution photosensitive images without increasing the data transmission bandwidth, so as to obtain corresponding high-resolution and high-frame-rate photosensitive images and thereby effectively improve the output image quality of an electronic device.

In a first aspect, the present application provides a high-resolution and high-frame-rate camera method, the method comprising: acquiring N photosensitive images, the N photosensitive images being obtained by exposing a photosensitive element N times in succession; wherein the N photosensitive images include a first photosensitive image and a second photosensitive image, the first photosensitive image includes P photosensitive data, the P photosensitive data are obtained by respectively exposing P photosensitive points on the photosensitive element during one exposure process, the second photosensitive image includes Q photosensitive data, the Q photosensitive data are obtained by respectively exposing Q photosensitive points on the photosensitive element during one exposure process, the P photosensitive points and the Q photosensitive points have different positions on the photosensitive element, P and Q are integers greater than or equal to 1, and N is an integer greater than or equal to 2; and fusing the N photosensitive images to obtain a third photosensitive image; wherein the third photosensitive image includes the P photosensitive data corresponding to the P photosensitive points and the Q photosensitive data corresponding to the Q photosensitive points.

In terms of technical effect, the present application exposes different photosensitive points on the photosensitive element at high frequency over a continuous time domain, thereby obtaining multiple frames of low-resolution original photosensitive images in which the exposed photosensitive points of any two images are located at different positions (this exposure mode obtains low-resolution and high-frame-rate photosensitive images without any additional requirement on the transmission bandwidth of the data bus). The photosensitive data corresponding to all exposed photosensitive points are then combined into a third photosensitive image, so that the third photosensitive image contains the photosensitive data corresponding to all or most of the photosensitive points, that is, a corresponding photosensitive image containing high-resolution information is obtained. In summary, without increasing hardware overhead or data transmission bandwidth, the present application can obtain a high-resolution photosensitive image (i.e., the third image) corresponding to the low-resolution photosensitive images through the cooperation of a specific exposure mode and a specific fusion mode, thereby effectively improving the quality of the video or images output by the terminal device. In addition, the present application fuses frames that are adjacent in the time domain, whose correlation in the time domain and in the spatial domain is stronger, so that the photosensitive data in the fused high-resolution photosensitive image is closer to reality and the output image/video effect is better.

In a feasible implementation, the N photosensitive images are N consecutive images in the time domain among K*E photosensitive images, the K*E photosensitive images are obtained by sequentially exposing through K consecutive photosensitive cycles, and each of the K photosensitive cycles exposes the photosensitive element for E consecutive times; the photosensitive element includes M photosensitive units, each of the M photosensitive units contains C photosensitive points, and within each photosensitive cycle all photosensitive points within each photosensitive unit are sequentially exposed once, C being a positive integer greater than or equal to 2.

In terms of technical effect, since the photosensitive points in each photosensitive unit are exposed once in sequence during one photosensitive cycle, N consecutive photosensitive images can be selected arbitrarily in the acquisition time domain, and the photosensitive data contained in each of the N photosensitive images are obtained by exposing different photosensitive points; fusing them yields a high-resolution photosensitive image containing the photosensitive data corresponding to most or all of the photosensitive points. In addition, only N consecutive photosensitive images need to be selected each time, that is, the terminal device needs to store at least only the N collected photosensitive images, which reduces the cache overhead of the terminal device.

In a feasible implementation, in each exposure within a photosensitive cycle, one photosensitive point in each photosensitive unit is exposed, and N is equal to C.

In terms of technical effect, when only one photosensitive point per photosensitive unit is exposed in each exposure within a photosensitive cycle, and N is equal to the number of exposures within a photosensitive cycle, the photosensitive data contained in the N selected photosensitive images are obtained by exposing each photosensitive point on the photosensitive element exactly once. Direct fusion then yields a third photosensitive image containing the photosensitive data corresponding to each photosensitive point on the photosensitive element, that is, a photosensitive image containing high-resolution information.

In a feasible implementation, the M photosensitive units include a first photosensitive unit, and the method further includes: when a first photosensitive point in the first photosensitive unit has not been exposed in the N exposures, performing interpolation based on the photosensitive data corresponding to the photosensitive points of the first photosensitive unit that were exposed in the N exposures, to calculate the photosensitive data corresponding to the first photosensitive point; wherein the first photosensitive point is any unexposed photosensitive point in the first photosensitive unit, and the first photosensitive unit is any one of the M photosensitive units; the third photosensitive image also includes the photosensitive data corresponding to the first photosensitive point.

In terms of technical effect, when some photosensitive points on the photosensitive element are not exposed within the time domain corresponding to the N photosensitive images, that is, the photosensitive data contained in the acquired N photosensitive images do not include the photosensitive data corresponding to these unexposed photosensitive points, then for an unexposed photosensitive point in a photosensitive unit, the photosensitive data corresponding to the exposed photosensitive points in that unit can be spatially interpolated to obtain the photosensitive data corresponding to the unexposed photosensitive point, and this photosensitive data is used as the corresponding photosensitive data contained in the third photosensitive image, so that the third photosensitive image contains the photosensitive data corresponding to each photosensitive point and a photosensitive image containing high-resolution information is obtained.

In a feasible implementation, the M photosensitive units include a first photosensitive unit, and the method further includes: when a first photosensitive point in the first photosensitive unit has not been exposed in the N exposures corresponding to the N photosensitive images, calculating a motion vector based on the photosensitive data contained in the N photosensitive images, and calculating the photosensitive data corresponding to the first photosensitive point based on the motion vector; wherein the first photosensitive point is any unexposed photosensitive point in the first photosensitive unit, and the first photosensitive unit is any one of the M photosensitive units; the third photosensitive image also includes the photosensitive data corresponding to the first photosensitive point.

In terms of technical effect, in addition to calculating the photosensitive data of unexposed photosensitive points by interpolation as above, the object-edge feature points in multiple photosensitive images can also be matched to calculate the motion vector (i.e., moving direction and speed) of each pixel of the photosensitive image in the time domain, and thereby determine which exposed photosensitive point exposes the same position of the object as a given unexposed photosensitive point, so that the photosensitive data of that exposed photosensitive point is used as the photosensitive data of the unexposed photosensitive point. By means of motion vectors, the photosensitive data corresponding to all unexposed photosensitive points are determined, so that the third photosensitive image contains the photosensitive data corresponding to each photosensitive point and a photosensitive image containing high-resolution information is obtained.

In a feasible implementation, the method further includes: fusing N-1 photosensitive images that are continuous in the time domain among the N photosensitive images and a fourth photosensitive image, to obtain a fifth photosensitive image adjacent to the third photosensitive image; wherein the fourth photosensitive image is the next photosensitive image adjacent to the N photosensitive images in the time domain, and the fourth photosensitive image is adjacent to the N-1 photosensitive images in the time domain.

In terms of technical effect, for photosensitive images continuously exposed in the time domain, N low-resolution photosensitive images are fused in turn in time-domain order, that is, the images are collected and fused in real time and at least only N photosensitive images need to be cached, which can effectively reduce the cache overhead and the user's preview delay.

In a feasible implementation, the method further includes: processing each photosensitive data contained in the third photosensitive image to obtain a pixel value corresponding to each photosensitive data on the third photosensitive image.

Each photosensitive point on the photosensitive element corresponds to one pixel on the photosensitive image, and the photosensitive data corresponding to each photosensitive point is the photosensitive data contained in the pixel corresponding to that photosensitive point on the photosensitive image; this photosensitive data is used to calculate the pixel value of that pixel.
In a second aspect, the present application provides an image processing device, the device comprising: an acquisition unit, configured to acquire N photosensitive images, the N photosensitive images being obtained by exposing a photosensitive element N times in succession; wherein the N photosensitive images include a first photosensitive image and a second photosensitive image, the first photosensitive image includes P photosensitive data, the P photosensitive data are obtained by respectively exposing P photosensitive points on the photosensitive element during one exposure process, the second photosensitive image includes Q photosensitive data, the Q photosensitive data are obtained by respectively exposing Q photosensitive points on the photosensitive element during one exposure process, the P photosensitive points and the Q photosensitive points have different positions on the photosensitive element, P and Q are integers greater than or equal to 1, and N is an integer greater than or equal to 2; and a fusion unit, configured to fuse the N photosensitive images to obtain a third photosensitive image; wherein the third photosensitive image includes the P photosensitive data corresponding to the P photosensitive points and the Q photosensitive data corresponding to the Q photosensitive points.

In a feasible implementation, the N photosensitive images are N consecutive images in the time domain among K*E photosensitive images, the K*E photosensitive images are obtained by sequentially exposing through K consecutive photosensitive cycles, each of the K photosensitive cycles exposes the photosensitive element for E consecutive times, and K is a positive integer greater than or equal to 1; the photosensitive element includes M photosensitive units, each of the M photosensitive units contains C photosensitive points, and within each photosensitive cycle all photosensitive points within each photosensitive unit are sequentially exposed once, C and M being positive integers greater than or equal to 2.

In a feasible implementation, in each exposure within a photosensitive cycle, one photosensitive point in each photosensitive unit is exposed, and N is equal to C.

In a feasible implementation, the M photosensitive units include a first photosensitive unit, and the fusion unit is further configured to: when a first photosensitive point in the first photosensitive unit has not been exposed in the N exposures, perform interpolation based on the photosensitive data corresponding to the photosensitive points of the first photosensitive unit that were exposed in the N exposures, to calculate the photosensitive data corresponding to the first photosensitive point; wherein the first photosensitive point is any unexposed photosensitive point in the first photosensitive unit, and the first photosensitive unit is any one of the M photosensitive units; the third photosensitive image also includes the photosensitive data corresponding to the first photosensitive point.

In a feasible implementation, the M photosensitive units include a first photosensitive unit, and the fusion unit is further configured to: when a first photosensitive point in the first photosensitive unit has not been exposed in the N exposures corresponding to the N photosensitive images, calculate a motion vector based on the photosensitive data contained in the N photosensitive images, and calculate the photosensitive data corresponding to the first photosensitive point based on the motion vector; wherein the first photosensitive point is any unexposed photosensitive point in the first photosensitive unit, and the first photosensitive unit is any one of the M photosensitive units; the third photosensitive image also includes the photosensitive data corresponding to the first photosensitive point.

In a feasible implementation, the fusion unit is further configured to: fuse N-1 photosensitive images that are continuous in the time domain among the N photosensitive images and a fourth photosensitive image, to obtain a fifth photosensitive image adjacent to the third photosensitive image; wherein the fourth photosensitive image is the next photosensitive image adjacent to the N photosensitive images in the time domain, and the fourth photosensitive image is adjacent to the N-1 photosensitive images in the time domain.

In a feasible implementation, the device further includes: an image signal processing unit, configured to process each photosensitive data contained in the third photosensitive image to obtain a pixel value corresponding to each photosensitive data on the third photosensitive image.
In a third aspect, the present application provides an electronic device, the device comprising at least one processor, a memory and a communication interface, the memory, the communication interface and the at least one processor being interconnected by lines, and the at least one memory storing instructions; when the instructions are executed by the processor, any one of the methods described in the first aspect above is implemented.

In a fourth aspect, an embodiment of the present application provides a chip system, the chip system comprising at least one processor, a memory and a communication interface, the memory, the communication interface and the at least one processor being interconnected by lines, and the at least one memory storing instructions; when the instructions are executed by the processor, any one of the methods described in the first aspect above is implemented.

In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, the computer-readable storage medium storing a computer program; when the computer program is executed, any one of the methods described in the first aspect above is implemented.

In a sixth aspect, an embodiment of the present application provides a computer program, the computer program comprising instructions; when the computer program is executed, any one of the methods described in the first aspect above is implemented.
Brief Description of the Drawings

The following is an introduction to the drawings used in the embodiments of the present application.

FIG. 1 is a network environment of an electronic device provided by an embodiment of the present invention;

FIG. 2 is a schematic diagram of the structure of an electronic device provided by an embodiment of the present invention;

FIG. 3 is a flow chart of a high-resolution and high-frame-rate camera method provided in an embodiment of the present application;

FIG. 4 is a schematic diagram of a data acquisition method provided in an embodiment of the present application;

FIG. 5 is a pipeline architecture for generating a high-resolution and high-frame-rate video sequence provided in an embodiment of the present application;

FIG. 6 is a schematic diagram of the execution process of a high-resolution and high-frame-rate camera method provided in an embodiment of the present application;

FIG. 7 is an image processing device provided in an embodiment of the present application;

FIG. 8 is a schematic diagram of the hardware structure of an image processing device provided by an embodiment of the present invention.
具体实施方式Detailed ways
下面结合本申请实施例中的附图对本申请实施例进行描述。其中,在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;文本中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况,另外,在本申请实施例的描述中,“多个”是指两个或多于两个。The embodiments of the present application are described below in conjunction with the drawings in the embodiments of the present application. In the description of the embodiments of the present application, unless otherwise specified, "/" means or, for example, A/B can mean A or B; "and/or" in the text is only a description of the association relationship of the associated objects, indicating that there can be three relationships, for example, A and/or B can mean: A exists alone, A and B exist at the same time, and B exists alone. In addition, in the description of the embodiments of the present application, "multiple" means two or more than two.
本申请的说明书和权利要求书及所述附图中的术语“第一”、“第二”、“第三”和“第四”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、***、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本申请的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。The terms "first", "second", "third" and "fourth" in the specification and claims of the present application and the drawings are used to distinguish different objects, rather than to describe a specific order. In addition, the terms "including" and "having" and any of their variations are intended to cover non-exclusive inclusions. For example, a process, method, system, product or device that includes a series of steps or units is not limited to the listed steps or units, but optionally includes steps or units that are not listed, or optionally includes other steps or units inherent to these processes, methods, products or devices. Mentioning "embodiment" in this article means that the specific features, structures or characteristics described in conjunction with the embodiment may be included in at least one embodiment of the present application. The appearance of this phrase at various locations in the specification does not necessarily refer to the same embodiment, nor is it an independent or alternative embodiment that is mutually exclusive with other embodiments. It is explicitly and implicitly understood by those skilled in the art that the embodiments described herein can be combined with other embodiments.
下面介绍本申请中涉及的专业术语The following are the professional terms involved in this application:
(1)感光元件:感光元件是电子装置上摄像头的核心设备,通过将光信号转化为电信号,再将电信号转化为数字信号,这些数字信号后续经过ISP等图像处理模块处理即可转化为图像的像素值。具体地,感光元件上包含多个感光点,感光元件上感光点的数量与感光图像上像素点的数量相同,位置一一对应。每个参与曝光的感光点所输出的感光数据即是感光图像上对应位置处像素点中包含的感光数据。通常在一次曝光过程中,一个感光点可以参与感光输出,也可以不参与感光输出。(1) Photosensitive element: The photosensitive element is the core device of the camera on the electronic device. It converts the light signal into an electrical signal, and then converts the electrical signal into a digital signal. These digital signals are then processed by image processing modules such as ISP and converted into pixel values of the image. Specifically, the photosensitive element contains multiple photosites. The number of photosites on the photosensitive element is the same as the number of pixels on the photosensitive image, and the positions correspond one to one. The photosensitive data output by each photosite participating in the exposure is the photosensitive data contained in the pixel at the corresponding position on the photosensitive image. Usually in one exposure process, a photosite may participate in the photosensitive output or not.
The image processing apparatus provided in the embodiments of the present invention may be an electronic device as described below, configured to execute the high-resolution, high-frame-rate photographing method provided in the embodiments of the present invention. The electronic device may be a device having a communication function. For example, the electronic device may include at least one of the following: a terminal, a smartphone, a tablet personal computer (PC), a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop PC, a netbook computer, a personal digital assistant (PDA), a portable multimedia player (PMP), a Moving Picture Experts Group (MPEG-1 or MPEG-2) Audio Layer 3 (MP3) player, a mobile medical device, a camera, or a wearable device (for example, a head-mounted device (HMD) such as electronic glasses, electronic clothing, an electronic bracelet, an electronic necklace, an electronic application accessory, an electronic tattoo, or a smart watch).
It should be understood that the electronic device in the embodiments of the present application may be any device having an image capture or video recording function. It is therefore apparent to those skilled in the art that the electronic device is not limited to the above-mentioned devices.
Hereinafter, the electronic device is described with reference to the accompanying drawings. The term "user" used in the various embodiments disclosed in the present application may refer to a person using the electronic device or a device (for example, an artificial-intelligence electronic device) using the electronic device.
FIG. 1 shows a network environment of an electronic device according to an embodiment of the present invention.
Referring to FIG. 1, the electronic device 101 may include a bus 110, a processor 120, a memory 130, an input/output (I/O) interface 140, a display 150, a communication interface 160, a camera 1 (170), a camera 2 (171), and the like. The camera 1 (170) and the camera 2 (171) may variously be referred to as a first camera module and a second camera module, a first image capturing module and a second image capturing module, and so on. It should be understood that the electronic device 101 may also include only the camera 1 (170) without the camera 2 (171).
The camera 1 (170) may be a front camera that captures the scene in front of the display 150, and the camera 2 (171) may be a rear camera that captures the scene behind the device and may cooperate with the processor 120. The bus 110 may be a circuit that connects the above elements to one another and transfers communications (for example, control messages) between them. In another implementation, the camera 1 (170) and the camera 2 (171) may both be rear cameras and may cooperate with the processor 120.
The processor 120 may receive, for example, instructions from the other elements described above (for example, the memory 130, the I/O interface 140, the display 150, the communication interface 160, and the like) via the bus 110, decode the received instructions, and perform operations or data processing corresponding to the decoded instructions. The processor 120 may include at least one of a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), and an image signal processor (ISP); for example, it may include a CPU, a GPU, a DSP, and an ISP.
The memory 130 may store instructions or data received from, or generated by, the processor 120 or other elements (for example, the I/O interface 140, the display 150, the communication interface 160, and the like). The memory 130 may include, for example, programming modules such as a kernel 131, middleware 132, an application programming interface (API) 133, and applications 134. Each programming module may be configured using software, firmware, hardware, or a combination of two or more thereof.
The kernel 131 may control or manage the system resources (for example, the bus 110, the processor 120, the memory 130, and the like) used to execute the operations or functions implemented in the other programming modules (for example, the middleware 132, the API 133, or the applications 134). In addition, the kernel 131 may provide an interface that allows the middleware 132, the API 133, or the applications 134 to access, control, or manage the individual elements of the electronic device 101.
The middleware 132 may act as an intermediary so that the API 133 or the applications 134 can communicate with the kernel 131 to provide and obtain data. In addition, in connection with task requests received from the applications 134, the middleware 132 may control the task requests (for example, scheduling or load balancing) by assigning at least one of the applications 134 a priority for using the system resources of the electronic device (for example, the bus 110, the processor 120, the memory 130, and the like).
The API 133 is an interface that allows the applications 134 to control functions provided by the kernel 131 or the middleware 132, and may include at least one interface or function (for example, an instruction) for file control, window control, image processing, character control, and the like.
According to various embodiments disclosed in the present application, the applications 134 may include a short message service (SMS)/multimedia message service (MMS) application, an e-mail application, a calendar application, an alarm application, a health-care application (for example, an application for measuring the amount of exercise or blood sugar), or an environmental information application (for example, an application providing atmospheric pressure, humidity, or temperature information). Additionally or alternatively, the applications 134 may be an application related to information exchange between the electronic device 101 and an external electronic device (for example, the electronic device 104). The application related to the information exchange may include, for example, a notification relay application for transferring specific information to the external electronic device or a device management application for managing the external electronic device.
For example, the notification relay application may include a function of transferring, to the external electronic device (for example, the electronic device 104), notification information generated by another application of the electronic device 101 (for example, the SMS/MMS application, the e-mail application, the health-care application, or the environmental information application). Additionally or alternatively, the notification relay application may, for example, receive notification information from the external electronic device (for example, the electronic device 104) and provide it to the user. The device management application may manage (for example, install, delete, or update) a function of the external electronic device (for example, turning on or off the external electronic device itself (or some of its components) or controlling the brightness (or resolution) of its display), an application running in the external electronic device, or a service provided by the external electronic device (for example, a communication service or a message service).
According to various embodiments of the present disclosure, the applications 134 may include an application specified according to an attribute (for example, the type) of the external electronic device (for example, the electronic device 104). For example, when the external electronic device is an MP3 player, the applications 134 may include an application related to music playback. Similarly, when the external electronic device is a mobile medical health-care device, the applications 134 may include an application related to health care. According to an embodiment of the present disclosure, the applications 134 may include at least one of an application specified in the electronic device 101 and an application received from an external electronic device (for example, the server 106 or the electronic device 104).
The I/O interface 140 may transfer instructions or data input by a user via an I/O unit (for example, a sensor, a keyboard, or a touchscreen) to the processor 120, the memory 130, and the communication interface 160 via, for example, the bus 110. For example, the I/O interface 140 may provide the processor 120 with data on a user touch input via the touchscreen. In addition, the I/O interface 140 may output, via an I/O unit (for example, a speaker or a display), instructions or data received from the processor 120, the memory 130, or the communication interface 160 via the bus 110. For example, the I/O interface 140 may output voice data processed by the processor 120 to the user via a speaker.
The display 150 may display various kinds of information (for example, multimedia data or text data) to the user. The communication interface 160 may connect communication between the electronic device 101 and an external device (for example, the electronic device 104 or the server 106). For example, the communication interface 160 may be connected to a network 162 via wireless or wired communication to communicate with the external device. The wireless communication may include at least one of, for example, wireless fidelity (Wi-Fi), Bluetooth (BT), near field communication (NFC), GPS, or cellular communication (for example, long term evolution (LTE), LTE-Advanced (LTE-A), code division multiple access (CDMA), wideband CDMA (WCDMA), universal mobile telecommunications system (UMTS), wireless broadband (WiBro), or global system for mobile communications (GSM)). The wired communication may include at least one of a universal serial bus (USB), a high-definition multimedia interface (HDMI), recommended standard 232 (RS-232), and plain old telephone service (POTS).
According to embodiments disclosed in the present application, the network 162 may be a telecommunication network. The telecommunication network may include at least one of a computer network, the Internet, the Internet of Things, and a telephone network. According to an embodiment of the present disclosure, a protocol (for example, a transport layer protocol, a data link layer protocol, or a physical layer protocol) for communication between the electronic device 101 and the external device may be supported by at least one of the applications 134, the application programming interface (API) 133, the middleware 132, the kernel 131, or the communication interface 160.
FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. For example, the electronic device may constitute all or part of the electronic device 101 shown in FIG. 1.
Referring to FIG. 2, the electronic device 201 may include one or more application processors (AP) 210, a communication module 220, a subscriber identification module (SIM) card 224, a memory 230, a sensor module 240, an input device 250, a display 260, an interface 270, an audio module 280, a camera module 290, a camera module 291, a power management module 295, a battery 296, an indicator 297, and a motor 298.
The AP 210 may drive an operating system (OS) or applications to control the multiple hardware or software elements connected to the AP 210, and may perform various kinds of data processing and operations, including on multimedia data. For example, the AP 210 may be implemented as a system on a chip (SoC). According to an embodiment of the present disclosure, the AP 210 may further include at least one of a graphics processing unit (GPU) and a DSP (not shown).
The communication module 220 (for example, the communication interface 160) may perform data transmission/reception in communication between the electronic device 201 (for example, the electronic device 101) and other electronic devices (for example, the electronic device 104 or the server 106) connected via a network. According to an embodiment of the present disclosure, the communication module 220 may include a cellular module 221, a Wi-Fi module 223, a BT module 225, a GPS module 227, an NFC module 228, and a radio frequency (RF) module 229.
The cellular module 221 may provide voice communication, image communication, a short message service, an Internet service, and the like via a communication network (for example, LTE, LTE-A, CDMA, WCDMA, UMTS, WiBro, or GSM). In addition, the cellular module 221 may perform identification and authentication of the electronic device within the communication network using, for example, a subscriber identification module (for example, the SIM card 224). According to an embodiment of the present disclosure, the cellular module 221 may perform at least some of the functions that the AP 210 can provide. For example, the cellular module 221 may perform at least part of the multimedia control functions.
According to an embodiment of the present disclosure, the cellular module 221 may include a communication processor (CP). The cellular module 221 may also, for example, be implemented as an SoC. Although elements such as the cellular module 221 (for example, the communication processor), the memory 230, and the power management module 295 are shown in FIG. 2 as elements separate from the AP 210, the AP 210 may be implemented to include at least some of the above elements (for example, the cellular module 221).
According to embodiments disclosed in the present application, the AP 210 or the cellular module 221 (for example, the communication processor) may load instructions or data received from a non-volatile memory connected thereto or from at least one of the other elements onto a volatile memory and process them. In addition, the AP 210 or the cellular module 221 may store, in the non-volatile memory, data received from or generated by at least one of the other elements.
Each of the Wi-Fi module 223, the BT module 225, the GPS module 227, and the NFC module 228 may include, for example, a processor for processing the data transmitted/received via the corresponding module. Although the cellular module 221, the Wi-Fi module 223, the BT module 225, the GPS module 227, and the NFC module 228 are shown as separate blocks in FIG. 2, at least some of them (for example, two or more elements) may be included in one integrated circuit (IC) or IC package. For example, at least some of the processors corresponding to the cellular module 221, the Wi-Fi module 223, the BT module 225, the GPS module 227, and the NFC module 228 (for example, the communication processor corresponding to the cellular module 221 and the Wi-Fi processor corresponding to the Wi-Fi module 223) may be implemented as one SoC.
The RF module 229 may perform data transmission/reception, for example, transmission/reception of RF signals. Although not shown, the RF module 229 may include, for example, a transceiver, a power amplifier module (PAM), a frequency filter, a low noise amplifier (LNA), and the like. In addition, the RF module 229 may further include components (for example, conductors or wires) for transmitting/receiving electromagnetic waves through free space in wireless communication. Although FIG. 2 shows the cellular module 221, the Wi-Fi module 223, the BT module 225, the GPS module 227, and the NFC module 228 sharing one RF module 229, at least one of them may transmit/receive RF signals via a separate RF module.
The SIM card 224 may be a card including a subscriber identification module and may be inserted into a slot formed at a specific position of the electronic device. The SIM card 224 may include unique identification information (for example, an integrated circuit card identifier (ICCID)) or subscriber information (for example, an international mobile subscriber identity (IMSI)).
The memory 230 (for example, the memory 130) may include an internal memory 232 (also called an embedded memory) or an external memory 234. The internal memory 232 may include at least one of, for example, a volatile memory (for example, a dynamic random access memory (DRAM), a static RAM (SRAM), or a synchronous dynamic RAM (SDRAM)) and a non-volatile memory (for example, a one-time programmable read-only memory (OTPROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a mask ROM, a flash ROM, a NAND flash memory, or a NOR flash memory).
According to an embodiment of the present disclosure, the internal memory 232 may be a solid state drive (SSD). The external memory 234 may further include a flash drive (for example, compact flash (CF), secure digital (SD), micro secure digital (Micro-SD), mini secure digital (Mini-SD), extreme digital (xD), or a memory stick). The external memory 234 may be functionally connected to the electronic device 201 via various interfaces. According to an embodiment of the present disclosure, the electronic device 201 may further include a storage device (or storage medium) such as a hard disk drive.
The sensor module 240 may measure physical quantities or detect the operating state of the electronic device 201 and convert the measured or detected information into an electrical signal. The sensor module 240 may include, for example, at least one of the following: a gesture sensor 240A, a gyroscope sensor 240B, an atmospheric pressure sensor 240C, a magnetic sensor 240D, an acceleration sensor 240E, a grip sensor 240F, a proximity sensor 240G, a color sensor 240H (for example, a red-green-blue (RGB) sensor), a biometric sensor 240I, a temperature/humidity sensor 240J, a light sensor 240K, or an ultraviolet (UV) sensor 240M. Additionally or alternatively, the sensor module 240 may include, for example, an electronic nose sensor (not shown), an electromyography (EMG) sensor (not shown), an electroencephalogram (EEG) sensor (not shown), an electrocardiogram (ECG) sensor (not shown), an infrared (IR) sensor (not shown), an iris sensor (not shown), a fingerprint sensor (not shown), and the like. The sensor module 240 may further include a control circuit for controlling at least one of the sensors belonging to it.
The input device 250 may include a touch panel 252, a (digital) pen sensor 254, keys 256, or an ultrasonic input device 258. The touch panel 252 may detect a touch input using at least one of a capacitive, resistive, infrared, or ultrasonic method. The touch panel 252 may further include a control circuit. A capacitive touch panel may perform physical contact detection or proximity detection. The touch panel 252 may further include a tactile layer, in which case the touch panel 252 may provide a tactile response to the user.
For example, the (digital) pen sensor 254 may be implemented using a method that is the same as or similar to that for receiving a user's touch input, or using a separate panel for detection. The keys 256 may include, for example, physical buttons, optical keys, or a keypad. The ultrasonic input device 258 is a unit that identifies data by detecting, with a microphone (for example, the microphone 288) of the electronic device 201, the sound waves produced by an input tool that generates ultrasonic signals, and it is capable of wireless detection. According to an embodiment of the present disclosure, the electronic device 201 may use the communication module 220 to receive user input from an external device (for example, a computer or a server) connected to the communication module 220.
The display 260 (for example, the display 150) may include a panel 262, a hologram device 264, or a projector 266. The panel 262 may be, for example, a liquid crystal display (LCD), an active-matrix organic light-emitting diode (AM-OLED) display, or the like. The panel 262 may be implemented, for example, as flexible, transparent, or wearable, and may be configured as one module together with the touch panel 252. The hologram device 264 may display a three-dimensional image in the air using interference of light. The projector 266 may project light onto a screen to display an image; the screen may be located, for example, inside or outside the electronic device 201. According to an embodiment of the present disclosure, the display 260 may further include a control circuit for controlling the panel 262, the hologram device 264, or the projector 266.
The interface 270 may include, for example, an HDMI 272, a USB 274, an optical interface 276, or a D-subminiature (D-sub) 278. The interface 270 may be included in, for example, the communication interface 160 shown in FIG. 1. Additionally or alternatively, the interface 270 may include a mobile high-definition link (MHL) interface, an SD card/multimedia card (MMC) interface, or an infrared data association (IrDA) standard interface.
The audio module 280 may convert bidirectionally between sound and electrical signals. At least part of the audio module 280 may be included in, for example, the I/O interface 140 shown in FIG. 1. The audio module 280 may process sound information input or output via, for example, a speaker 282, a receiver 284, an earphone 286, or a microphone 288.
The camera module 290 and the camera module 291 are devices capable of capturing still images and moving pictures and may be manufactured as one module; they may be, respectively, the camera 1 (170) and the camera 2 (171) in FIG. 1. According to embodiments disclosed in the present application, the camera module 290 and the camera module 291 may include one or more image sensors (for example, a front sensor or a rear sensor), a lens (not shown), an image signal processor (ISP) (not shown), a DSP (not shown), or a flash (for example, an LED or a xenon lamp). The ISP or the DSP may be an element independent of the AP 210, or the AP 210 may be implemented to include at least one of the ISP and the DSP.
The power management module 295 may manage the power of the electronic device 201. Although not shown, the power management module 295 may include, for example, a power management integrated circuit (PMIC), a charger integrated circuit (IC), or a battery gauge or fuel gauge.
For example, the PMIC may be mounted inside an integrated circuit or an SoC semiconductor. Charging methods may be classified into wired charging methods and wireless charging methods. The charger IC may charge the battery and may prevent overvoltage or overcurrent from being introduced from the charger.
According to embodiments disclosed in the present application, the charger IC may include a charger IC for at least one of a wired charging method and a wireless charging method. The wireless charging method may be, for example, a magnetic resonance method, a magnetic induction method, or an electromagnetic wave method, and additional circuitry for wireless charging, for example a coil loop, a resonant circuit, or a rectifier, may further be included.
The battery gauge may measure, for example, the remaining charge of the battery 296 and the voltage, current, or temperature during charging. The battery 296 may store or generate electricity and supply power to the electronic device 201 using the stored or generated electricity. The battery 296 may include, for example, a rechargeable battery or a solar cell.
The indicator 297 may display a specific state of the electronic device 201 or a part thereof (for example, the AP 210), such as a booting state, a message state, or a charging state. The motor 298 may convert an electrical signal into a mechanical vibration. Although not shown, the electronic device 201 may include a processor (for example, a GPU) for supporting mobile TV, which may process media data conforming to standards such as digital multimedia broadcasting (DMB), digital video broadcasting (DVB), or media streaming.
Each of the above elements of the electronic device may be configured with one or more components, and the name of the corresponding element may vary with the type of electronic device. The electronic device may include at least one of the above elements, may omit some of them, or may further include additional elements. Moreover, some of the elements of the electronic device may be combined into one entity that performs the same functions as the corresponding elements before the combination.
It should be noted that the camera or camera module above may also be referred to as a camera head, a lens module, or a lens, and the camera or camera module may further include at least one of a focus motor and an anti-shake motor.
Referring to FIG. 3, FIG. 3 is a flowchart of a high-resolution, high-frame-rate photographing method provided by an embodiment of the present application. The method may be executed by the image processing apparatus described above (for example, an electronic device), which includes a camera. The photosensitive element in the camera may adopt a periodic exposure scheme to convert the incoming light signal into corresponding digital signals, thereby obtaining corresponding photosensitive images. As shown in FIG. 3, the method includes step S310 and step S320.
Step S310: acquire N photosensitive images, where the N photosensitive images are obtained by exposing the photosensitive element N consecutive times. The N photosensitive images include a first photosensitive image and a second photosensitive image. The first photosensitive image includes P pieces of photosensitive data obtained by exposing P photosites on the photosensitive element in one exposure; the second photosensitive image includes Q pieces of photosensitive data obtained by exposing Q photosites on the photosensitive element in one exposure. The P photosites and the Q photosites are located at different positions on the photosensitive element; P and Q are integers greater than or equal to 1, and N is an integer greater than or equal to 2.
The photosites on the photosensitive element and the pixels in a photosensitive image correspond one to one; that is, the number of photosites on the photosensitive element equals the number of pixels in the photosensitive image, and the position of each photosite corresponds to the position of one pixel. Specifically, in each exposure, the photosensitive data corresponding to a photosite (also called the photosensitive data output by the photosite through exposure) is the photosensitive data contained in the corresponding pixel of the resulting photosensitive image.
Each photosensitive image in the present application is the original photosensitive data generated by one exposure of the photosensitive element, commonly referred to as RAW data.
The first photosensitive image and the second photosensitive image are any two of the above N photosensitive images. The first photosensitive image is obtained by exposing the P photosites on the photosensitive element in one exposure. The P photosites correspond one to one to P pixels in the first photosensitive image, and each of the P pixels contains the photosensitive data output by the corresponding photosite through exposure. Similarly, each of the Q pixels in the second photosensitive image contains the photosensitive data output by the corresponding photosite through exposure.
In each of the above N exposures, different photosites on the photosensitive element are exposed; that is, among the N photosensitive images obtained by the N exposures, the photosensitive data contained in each photosensitive image is obtained by exposing photosites at different positions.
The exposure scheme of the terminal device in the time domain and the spatial domain is described in detail below.
Specifically, the terminal device collects data through periodic exposure (that is, the K photosensitive cycles in the present application). The number of exposures of the photosensitive element is the same in each photosensitive cycle, and each exposure of the photosensitive element produces one photosensitive image. The photosensitive element is then exposed over consecutive photosensitive cycles to obtain multiple photosensitive images.
Optionally, the N photosensitive images are N images that are consecutive in the time domain among K*E photosensitive images, where the K*E photosensitive images are obtained by sequential exposure over K consecutive photosensitive cycles, the photosensitive element is exposed E consecutive times in each of the K photosensitive cycles, and K is a positive integer greater than or equal to 1.
Specifically, the photosensitive element is exposed over K consecutive photosensitive cycles, with E exposures in each cycle, yielding K*E photosensitive images that are consecutive in the time domain. The above N photosensitive images are N images that are consecutive in the time domain among the K*E photosensitive images.
The specific exposure scheme within each photosensitive cycle is as follows:
Optionally, the photosensitive element includes M photosensitive units, each of the M photosensitive units contains C photosites, and within each photosensitive cycle all photosites in each photosensitive unit are exposed once in sequence, where C and M are positive integers greater than or equal to 2.
Specifically, the photosensitive element is divided into at least one photosensitive unit, and each photosensitive unit contains at least one photosite. For the E exposures within one photosensitive cycle, at least one photosite in each photosensitive unit is exposed in each exposure, and the photosites exposed within the same photosensitive unit differ between any two exposures.
Further, after the E exposures of one photosensitive cycle, every photosite in each photosensitive unit has been exposed once, and during the E exposures of one photosensitive cycle the photosites in each photosensitive unit are exposed sequentially in a preset order.
For example, each photosensitive unit contains four photosites: photosite 1, photosite 2, photosite 3, and photosite 4. If each photosensitive cycle contains four exposures and each exposure exposes one photosite in the unit, then within the four exposures of one photosensitive cycle the exposure order of the four photosites in each unit may be: photosite 3 in the first exposure, photosite 2 in the second, photosite 4 in the third, and photosite 1 in the fourth; or photosite 4 in the first exposure, photosite 1 in the second, photosite 2 in the third, and photosite 3 in the fourth.
As another example, each photosensitive unit contains four photosites: photosite 1, photosite 2, photosite 3, and photosite 4. If each photosensitive cycle contains three exposures and each exposure exposes one or two photosites in the unit, then within the three exposures of one photosensitive cycle the exposure order may be: photosite 3 in the first exposure, photosites 2 and 4 in the second, and photosite 1 in the third; or photosites 2 and 4 in the first exposure, photosite 1 in the second, and photosite 3 in the third.
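To make the notion of a per-unit exposure schedule concrete, the following is a minimal sketch (not part of the patent; the representation and names are illustrative assumptions): a schedule is encoded as one set of photosite indices per exposure within a cycle, and a helper checks that the cycle exposes every photosite of the unit exactly once, using the two example orderings above.

```python
# Illustrative only: a per-unit exposure schedule as a list of sets, one set of
# photosite indices (1..C) per exposure within one photosensitive cycle.

def schedule_is_valid(schedule, num_points):
    """True if, over one cycle, every photosite of the unit is exposed exactly once."""
    exposed = [p for step in schedule for p in step]
    return sorted(exposed) == list(range(1, num_points + 1))

# 4 exposures per cycle, one photosite per exposure (order 3, 2, 4, 1)
print(schedule_is_valid([{3}, {2}, {4}, {1}], 4))   # True
# 3 exposures per cycle, photosites 2 and 4 sharing the second exposure
print(schedule_is_valid([{3}, {2, 4}, {1}], 4))     # True
# invalid cycle: photosite 1 exposed twice, photosite 3 never exposed
print(schedule_is_valid([{1}, {2, 4}, {1}], 4))     # False
```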
Referring to FIG. 4, FIG. 4 is a schematic diagram of a data acquisition scheme provided by an embodiment of the present application. It illustrates the process of exposing the photosensitive element four consecutive times within one photosensitive cycle in the time domain t. The four exposures produce four photosensitive images, photosensitive images 1-4, which are four consecutive images in a low-resolution, high-frame-rate photosensitive image sequence.
In FIG. 4, the plane of the photosensitive element is represented by an X-0-Y coordinate system, and each photosite has a length of 1 in both the x and y directions. As can be seen, the photosensitive element contains 16 photosites, which are divided into 4 photosensitive units: photosensitive units 1-4, each containing 4 photosites. The coordinates of the 4 photosites in photosensitive unit 1 are (1, 1), (2, 1), (1, 2), and (2, 2); those in photosensitive unit 2 are (3, 1), (3, 2), (4, 1), and (4, 2); those in photosensitive unit 3 are (1, 3), (2, 3), (1, 4), and (2, 4); and those in photosensitive unit 4 are (3, 3), (4, 3), (3, 4), and (4, 4).
As shown in FIG. 4, the photosites in each photosensitive unit are numbered 1, 2, 3, and 4 from left to right and from top to bottom.
In each exposure of the photosensitive cycle shown in FIG. 4, the shaded photosites are those that are not exposed and the blank photosites are those that are exposed. It can be seen that in the first exposure in FIG. 4, photosite 1 of each photosensitive unit is exposed; in the second exposure, photosite 2 of each unit is exposed; in the third exposure, photosite 4 of each unit is exposed; and in the fourth exposure, photosite 3 of each unit is exposed.
It can be seen that with this exposure scheme, every photosite on the photosensitive element is exposed once within one photosensitive cycle.
After the above four exposures, the four photosensitive images shown in FIG. 4, photosensitive images 1-4, are obtained. Each cell of a photosensitive image is one pixel; that is, the number of pixels in a photosensitive image equals the number of photosites on the photosensitive element, with positions in one-to-one correspondence. For example, the photosensitive data produced by exposing the photosite at coordinates (1, 1) on the photosensitive element is the photosensitive data contained in the pixel at coordinates (1, 1) of the photosensitive image.
In each photosensitive image, the shaded pixels contain photosensitive data and the blank pixels do not. In photosensitive image 1, the pixels at coordinates (1, 1), (3, 1), (1, 3), and (3, 3) contain photosensitive data obtained in the first exposure by exposing the photosites at the corresponding coordinates on the photosensitive element, while the pixels at the remaining positions of photosensitive image 1 contain no photosensitive data.
Similarly, photosensitive image 2 obtained by the second exposure, photosensitive image 3 obtained by the third exposure, and photosensitive image 4 obtained by the fourth exposure are produced in the same way as photosensitive image 1 obtained by the first exposure, which is not repeated here.
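The FIG. 4 acquisition can be illustrated with a short, hypothetical simulation. Assuming that unexposed pixels are marked as NaN and that the sensor is the 4x4 array of 2x2 units described above with the per-unit exposure order 1, 2, 4, 3, the sketch below produces the four sparse photosensitive images of one cycle; all names and the NaN convention are assumptions made for illustration only.

```python
import numpy as np

H, W, UNIT = 4, 4, 2                       # 4x4 sensor made of 2x2 photosensitive units
OFFSETS = {1: (0, 0), 2: (0, 1), 3: (1, 0), 4: (1, 1)}   # photosites 1..4, left-to-right, top-to-bottom
ORDER = [1, 2, 4, 3]                       # per-unit exposure order within one cycle (as in FIG. 4)

def capture_cycle(scene):
    """scene: full H x W light field; returns the four sparse RAW frames of one cycle."""
    frames = []
    for point in ORDER:
        frame = np.full((H, W), np.nan)    # NaN marks pixels with no photosensitive data
        dy, dx = OFFSETS[point]
        frame[dy::UNIT, dx::UNIT] = scene[dy::UNIT, dx::UNIT]   # expose one photosite per unit
        frames.append(frame)
    return frames

frames = capture_cycle(np.arange(H * W, dtype=float).reshape(H, W))
print(np.argwhere(~np.isnan(frames[0])))   # frame 1 holds data at (0,0), (0,2), (2,0), (2,2)
```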
Step S320: fuse the N photosensitive images to obtain a third photosensitive image, where the third photosensitive image includes the P pieces of photosensitive data corresponding to the P photosites and the Q pieces of photosensitive data corresponding to the Q photosites.
The N photosensitive images and the third photosensitive image have the same size and contain the same number of pixels. The pixels in each of the N photosensitive images correspond one to one to the photosites on the photosensitive element, and likewise the pixels in the third photosensitive image correspond one to one to the photosites on the photosensitive element.
Specifically, taking a second photosite on the photosensitive element as an example of this one-to-one correspondence: the position of the pixel corresponding to the second photosite in the first photosensitive image, the position of the pixel corresponding to the second photosite in the second photosensitive image, and the position of the pixel corresponding to the second photosite in the third photosensitive image are all the same.
Specifically, the process of obtaining the third photosensitive image through fusion is as follows. Taking the first photosensitive image as an example, the P photosites correspond to P pixels in the first photosensitive image, and these P pixels contain the P pieces of photosensitive data output by the P photosites through exposure, where each piece of photosensitive data describes the pixel information of the corresponding pixel. Similarly, the P photosites also correspond to P pixels in the third photosensitive image, and the positions of these P pixels in the third photosensitive image are the same as those of the P pixels in the first photosensitive image. The fusion of the first photosensitive image proceeds as follows: taking a first pixel among the P pixels of the first photosensitive image as an example, the photosensitive data contained in the first pixel of the first photosensitive image is used as the photosensitive data contained in the first pixel of the third photosensitive image, where the first pixel is any one of the P pixels. By fusing the N photosensitive images in this way, every pixel of the third photosensitive image contains the photosensitive data output through exposure by the corresponding photosite on the photosensitive element.
The above process is the simple fusion of the N acquired photosensitive images.
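As an illustration only, the simple fusion described above can be sketched as follows, under the same assumption that unexposed pixels are stored as NaN: each pixel of the third image takes the photosensitive data from whichever of the N sparse frames contains it.

```python
import numpy as np

def fuse(frames):
    """Simple fusion: each output pixel takes the photosensitive data from
    whichever input frame contains it (NaN = no data)."""
    fused = np.full(frames[0].shape, np.nan)
    for frame in frames:
        mask = ~np.isnan(frame)            # pixels exposed in this frame
        fused[mask] = frame[mask]
    return fused

# With the four frames of one full cycle, every pixel of the fused image is filled:
# fused = fuse(frames); assert not np.isnan(fused).any()
```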
Further, depending on the relationship between the number of selected photosensitive images (that is, N) and the number of exposures per photosensitive cycle, different fusion processing may be used to obtain the corresponding third photosensitive image. Based on this relationship, the fusion processing in the present application corresponds to two different scenarios, which are described in detail below.
Scenario 1: N equals the number of exposures per photosensitive cycle
Since the photosites in each photosensitive unit are exposed in sequence within each photosensitive cycle, when N consecutive photosensitive images are selected in the time domain and N equals the number of exposures in one photosensitive cycle, the total amount of photosensitive data contained in these N photosensitive images exactly equals the number of photosites on the photosensitive element, and this data is obtained by exposing each photosite on the photosensitive element exactly once. In this scenario, the simple fusion described in the foregoing embodiment can be applied directly to the N photosensitive images, and every pixel of the resulting third photosensitive image then contains one piece of photosensitive data output by the exposure of the corresponding photosite.
In this scenario, different exposure modes may be used within one exposure cycle:
(1) Mode 1: in each exposure within a photosensitive cycle, one photosite in each photosensitive unit is exposed, and N equals C. With this exposure mode, setting N equal to C ensures that the photosensitive data jointly contained in the N photosensitive images is obtained by exposing each photosite on the photosensitive element exactly once.
(2) Mode 2: in each exposure within a photosensitive cycle, one or more photosites in each photosensitive unit are exposed, and N equals the number of exposures in one photosensitive cycle. With this exposure mode, setting N equal to the number of exposures per cycle ensures that the photosensitive data jointly contained in the N photosensitive images is obtained by exposing each photosite on the photosensitive element exactly once.
Scenario 2: N is less than the number of exposures per photosensitive cycle
When N is less than the number of exposures per photosensitive cycle, some photosites on the photosensitive element are not exposed during the continuous time-domain segment corresponding to the N photosensitive images; that is, the photosensitive data contained in the N photosensitive images does not include the data corresponding to these photosites. Consequently, after the N photosensitive images are fused using the simple fusion described above, the third photosensitive image lacks the photosensitive data corresponding to these photosites. In this case, the photosensitive data corresponding to the photosites not exposed during this continuous time-domain segment can be calculated by spatial interpolation or by computing motion vectors. These two calculation methods are discussed below.
(1) The process of calculation by spatial interpolation is described below, taking the first photosensitive unit among the M photosensitive units as an example:
When a first photosite in the first photosensitive unit has not been exposed in any of the N exposures, interpolation is performed based on the photosensitive data corresponding to the photosites of the first photosensitive unit that were exposed in the N exposures, to calculate the photosensitive data corresponding to the first photosite. Here, the first photosite is any unexposed photosite in the first photosensitive unit, and the first photosensitive unit is any one of the M photosensitive units. The third photosensitive image further includes the photosensitive data corresponding to the first photosite.
Specifically, the interpolation may be nearest-neighbour interpolation, bilinear interpolation, higher-order interpolation, or any other feasible interpolation method, which is not limited in the present application.
After interpolating the photosensitive data corresponding to the photosites in each photosensitive unit, the photosensitive data corresponding to the photosites of each unit that were not exposed during the above continuous time-domain segment (that is, the N exposures) is calculated, and the calculated photosensitive data is used as the photosensitive data contained in the corresponding pixels of the third photosensitive image, so that every pixel of the third photosensitive image contains one piece of photosensitive data.
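A rough sketch of one possible interpolation fill is given below. It is not the patent's implementation: it simply fills each missing pixel with the mean of the exposed pixels of its own photosensitive unit, and nearest-neighbour, bilinear, or higher-order interpolation could be substituted, as the text allows; the NaN convention and names are assumptions.

```python
import numpy as np

def fill_by_unit_mean(fused, unit=2):
    """Fill each NaN pixel with the mean of the exposed pixels of its own
    unit-sized block (one possible spatial interpolation)."""
    out = fused.copy()
    h, w = out.shape
    for y0 in range(0, h, unit):
        for x0 in range(0, w, unit):
            block = out[y0:y0 + unit, x0:x0 + unit]     # one photosensitive unit
            known = block[~np.isnan(block)]
            if known.size:                              # at least one exposed photosite
                block[np.isnan(block)] = known.mean()
    return out
```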
(2) The process of calculation using motion vectors is described below, taking the first photosensitive unit among the M photosensitive units as an example:
当所述第一感光单元中的第一感光点在所述N张感光图像对应的N次曝光中都未被曝光时,基于所述N张感光图像包含的感光数据计算运动矢量,并基于所述运动矢量计算所述第一感光点对应的感光数据;其中,所述第一感光点为所述第一感光单元中的任意一个未被曝光的感光点,所述第一感光单元为所述M个感光单元中的任意一个;所述第三感光图像还包括与所述第一感光点对应的感光数据。When the first photosensitive point in the first photosensitive unit has not been exposed in the N exposures corresponding to the N photosensitive images, a motion vector is calculated based on the photosensitive data contained in the N photosensitive images, and the photosensitive data corresponding to the first photosensitive point is calculated based on the motion vector; wherein the first photosensitive point is any one of the first photosensitive units that has not been exposed, and the first photosensitive unit is any one of the M photosensitive units; and the third photosensitive image also includes the photosensitive data corresponding to the first photosensitive point.
具体地,获取与上述N张感光图像在时域上相邻的前H张感光图像,对该H张感光图像采用上述简单融合方式进行融合处理,得到第六感光图像。其中,H等于每个感光周期内的曝光次数,即第六感光图像中每个像素点都包含一个感光数据;在此种情况下,H大于N。同时,对N张感光图像也采用上述简单融合方式进行融合处理,得到第三感光图像。Specifically, the first H photosensitive images adjacent to the N photosensitive images in the time domain are obtained, and the H photosensitive images are fused using the simple fusion method to obtain a sixth photosensitive image. H is equal to the number of exposures in each photosensitive cycle, that is, each pixel in the sixth photosensitive image contains a photosensitive data; in this case, H is greater than N. At the same time, the N photosensitive images are also fused using the simple fusion method to obtain a third photosensitive image.
进一步,可选地,对将第三感光图像和第六感光图像进行特征点匹配(例如,可以是通过对物体边缘特征点进行匹配等方式),具体地:确定表征同一物体部位的像素点在第三感光图像和第六感光图像上的位置以及对应的位移;然后通过位移、第三感光图像和第六感光图像之间时间差计算像素点的移动方位和速度,也即感光图像中物体的移动方位和速度(或称为运动矢量)。Further, optionally, feature point matching is performed on the third photosensitive image and the sixth photosensitive image (for example, by matching feature points on the edge of the object, etc.), specifically: the positions of pixels representing the same object part on the third photosensitive image and the sixth photosensitive image and the corresponding displacements are determined; and then the moving direction and speed of the pixels are calculated through the displacement and the time difference between the third photosensitive image and the sixth photosensitive image, that is, the moving direction and speed (or motion vector) of the object in the photosensitive image.
最后基于计算出的运动矢量确定N次曝光过程中第一感光单元上未曝光的感光点对应的感光数据,具体地:以第一感光单元中的第一感光点为例来描述具体计算过程,第一感光点在第三感光图像上对应第一像素点,通过前述计算得到的运动矢量确定第一像素点在第六感光图像上对应第二像素点,第二像素点是与第一像素点表征物体同一部位的像素点,此时,将第二像素点包含的感光数据作为第一像素点所包含的感光数据,也即是作为与第一感光点对应的感光数据。参照上述步骤,计算得到M个感光单元中每个感光 单元内每个未曝光的感光点所对应的感光数据,并将未被曝光感光点的感光数据作为第三感光图像上对应像素点中包含的感光数据,使得第三感光图像上每个像素点都包含一个感光数据。Finally, based on the calculated motion vector, the photosensitive data corresponding to the unexposed photosites on the first photosensitive unit during the N exposures is determined. Specifically: the specific calculation process is described by taking the first photosites in the first photosensitive unit as an example. The first photosites correspond to the first pixel on the third photosensitive image. The motion vector obtained by the above calculation determines that the first pixel corresponds to the second pixel on the sixth photosensitive image. The second pixel is a pixel representing the same part of the object as the first pixel. At this time, the photosensitive data contained in the second pixel is used as the photosensitive data contained in the first pixel, that is, as the photosensitive data corresponding to the first photosites. Referring to the above steps, the photosensitive data of each photosensitive point in the M photosensitive units is calculated. The photosensitive data corresponding to each unexposed photosensitive point in the unit is used, and the photosensitive data of the unexposed photosensitive point is used as the photosensitive data contained in the corresponding pixel point on the third photosensitive image, so that each pixel point on the third photosensitive image contains a photosensitive data.
其中,第三感光图像和第六感光图像之间时间差是通过第三感光图像对应的时刻与第六感光图像对应的时刻进行差值计算得到的。第三感光图像对应的时刻可以是其在时域上对应的N次曝光的中间时刻,同理,第六感光图像对应的时刻的计算方式与第三感光图像对应时刻的计算方式相同。The time difference between the third photosensitive image and the sixth photosensitive image is obtained by performing a difference calculation between the moment corresponding to the third photosensitive image and the moment corresponding to the sixth photosensitive image. The moment corresponding to the third photosensitive image may be the middle moment of the N exposures corresponding to the third photosensitive image in the time domain. Similarly, the calculation method of the moment corresponding to the sixth photosensitive image is the same as the calculation method of the moment corresponding to the third photosensitive image.
其中,上述特征点的尺寸可以和像素点的尺寸相同。The size of the feature points may be the same as the size of the pixels.
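Purely as an illustrative sketch of the motion-vector scenario above (the function names, the NaN marking of unexposed photosites, the exhaustive search and the single global motion vector are all assumptions of this sketch, not the embodiment's actual procedure), the displacement estimation and the fill step might look like:

```python
import numpy as np

def fused_frame_time(exposure_times):
    """Moment assigned to a fused image: the middle of its exposure moments."""
    return (exposure_times[0] + exposure_times[-1]) / 2.0

def estimate_motion_vector(third, sixth, search=3):
    """Estimate a single global displacement (dy, dx) between the third and the
    sixth fused images by minimising the sum of absolute differences over the
    photosites that were exposed (non-NaN) in the third image."""
    valid = ~np.isnan(third)
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(sixth, shift=(dy, dx), axis=(0, 1))
            cost = np.abs(third[valid] - shifted[valid]).sum()
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv

def fill_unexposed_with_motion(third, sixth, mv):
    """Copy, into every unexposed (NaN) photosite of the third image, the value
    of the pixel of the sixth image that the motion vector maps onto it."""
    shifted = np.roll(sixth, shift=mv, axis=(0, 1))
    filled = third.copy()
    holes = np.isnan(filled)
    filled[holes] = shifted[holes]
    return filled
```

In practice the displacement would be estimated per feature point (for example on object edges, as described above) and could be converted to a velocity by dividing by the time difference between the middle moments of the two exposure groups.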
综上,通过上述两种场景下的融合处理过程,便可使得融合得到的第三感光图像中的每个像素点上都包含一个感光数据,也即得到了一张高分辨率的感光图像。In summary, through the fusion processing in the above two scenarios, each pixel in the fused third photosensitive image can contain a photosensitive data, that is, a high-resolution photosensitive image is obtained.
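The merge step that both scenarios start from, copying every exposed photosite of the N images into its position in the third image, can be pictured with the following sketch; representing unexposed positions as NaN is an assumption of the sketch, not of the embodiment.

```python
import numpy as np

def simple_fuse(frames):
    """Simple fusion: each of the N sparse frames contributes the photosensitive
    data of the photosites it exposed; positions that remain NaN are the
    unexposed photosites handled by the interpolation or motion-vector step."""
    fused = np.full(frames[0].shape, np.nan)
    for frame in frames:
        exposed = ~np.isnan(frame)
        fused[exposed] = frame[exposed]
    return fused
```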
上述实施例介绍了利用K*E张感光图像中在时域上连续的N张感光图像融合得到一张对应的高分辨率感光图像的过程。下面将介绍如何基于K*E张感光图像（即低分辨率高帧率感光图像序列）生成对应高分辨率高帧率的感光图像序列的过程。The above embodiment describes the process of fusing N photosensitive images that are consecutive in the time domain among the K*E photosensitive images to obtain one corresponding high-resolution photosensitive image. The following describes how a corresponding high-resolution and high-frame-rate photosensitive image sequence is generated from the K*E photosensitive images (i.e., the low-resolution and high-frame-rate photosensitive image sequence).
具体地,上述K*E张感光图像是通过K个感光周期连续采集得到的,可以每次在K*E张感光图像中选取时域上连续的一组感光图像(即N张)进行上述融合处理,得到一张高分辨率感光图像。对于从K*E张感光图像中任意选取的两组感光图像而言,如果该两组感光图像包含共同的N-1张感光图像,则此两组感光图像在分别进行上述融合处理后得到的两张感光图像在高分辨高帧率感光图像序列中相邻。Specifically, the K*E photosensitive images are acquired through continuous acquisition of K photosensitive cycles, and a group of photosensitive images (i.e., N images) that are continuous in time domain can be selected from the K*E photosensitive images each time for the above fusion processing to obtain a high-resolution photosensitive image. For two groups of photosensitive images arbitrarily selected from the K*E photosensitive images, if the two groups of photosensitive images contain N-1 common photosensitive images, then the two photosensitive images obtained after the two groups of photosensitive images are respectively subjected to the above fusion processing are adjacent in the high-resolution and high-frame-rate photosensitive image sequence.
可选地,可以采用流水线架构进行融合,得到高分辨率高帧率感光图像序列,即按照低分辨率高帧率感光图像序列的采集时域,以一张感光图像为步长,依次选取时域上相邻的N张进行融合处理。Optionally, a pipeline architecture can be used for fusion to obtain a high-resolution and high-frame-rate photosensitive image sequence, that is, according to the acquisition time domain of the low-resolution and high-frame-rate photosensitive image sequence, with one photosensitive image as a step size, N adjacent images in the time domain are selected in turn for fusion processing.
例如,第一次选取K*E张感光图像中第1-第N张感光图像进行前述融合处理,得到一张高分辨率的感光图像,第二次选取第2-第N+1张感光图像进行融合处理,得到一张高分辨率的感光图像,然后依次类推。For example, the first time, the 1st to Nth photosensitive images among K*E photosensitive images are selected for the above-mentioned fusion processing to obtain a high-resolution photosensitive image, and the second time, the 2nd to N+1th photosensitive images are selected for fusion processing to obtain a high-resolution photosensitive image, and so on.
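A minimal sketch of this one-frame-step window selection (the helper name is an assumption of the sketch):

```python
def sliding_windows(frames, n):
    """Group the captured low-resolution frames with a step of one frame:
    windows [1..N], [2..N+1], ...; consecutive windows share N-1 frames, so
    the images fused from them are adjacent in the output sequence."""
    return [frames[i:i + n] for i in range(len(frames) - n + 1)]

# with 6 captured frames and N = 4 this yields the windows (1-4), (2-5), (3-6)
```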
下面介绍在得到第三感光图像后,如何得到,在高分辨率高帧率感光图像序列中,与第三感光图像相邻的下一张感光图像的过程:将所述N张感光图像中在时域上连续的N-1张感光图像、第四感光图像进行融合处理,得到与所述第三感光图像相邻的第五感光图像;其中,所述第四感光图像为在时域上与所述N张感光图像相邻的下一张感光图像,且所述第四感光图像与所述N-1张感光图像在时域上相邻。The following describes a process of obtaining, after obtaining the third photosensitive image, the next photosensitive image adjacent to the third photosensitive image in a high-resolution and high-frame-rate photosensitive image sequence: fusing N-1 photosensitive images and a fourth photosensitive image that are continuous in time domain among the N photosensitive images to obtain a fifth photosensitive image adjacent to the third photosensitive image; wherein the fourth photosensitive image is the next photosensitive image adjacent to the N photosensitive images in time domain, and the fourth photosensitive image is adjacent to the N-1 photosensitive images in time domain.
请参见图5,图5为本申请实施例提供的一种生成高分辨率高帧率视频序列的流水线架构。Please refer to FIG. 5 , which is a pipeline architecture for generating a high-resolution and high-frame-rate video sequence provided in an embodiment of the present application.
如图5所示,感光元件采集的高帧率低分辨率感光图像序列中包含感光图像1-6,其中感光图像1为第一曝光得到的感光图像。该高帧率低分辨率感光图像序列存储于存储单元中。其中,数据流上的数字编号代表对应感光图像的传输过程。As shown in FIG5 , the high frame rate low resolution photosensitive image sequence collected by the photosensitive element includes photosensitive images 1-6, wherein photosensitive image 1 is the photosensitive image obtained by the first exposure. The high frame rate low resolution photosensitive image sequence is stored in the storage unit. The digital number on the data stream represents the transmission process of the corresponding photosensitive image.
流水线架构为一种即时采集,即时融合的架构。假设每次选取时域上连续的四张感光图像进行融合,则上述流水线架构运行的具体过程如下:第一次融合过程中,融合单元从存储单元中获取感光图像1-4进行融合,得到感光图像7;第二次融合过程中,融合单元从存储单元中获取感光图像2-5进行融合,得到感光图像8;第三次融合过程中,融合单元从存储单元中获取感光图像3-6进行融合,得到感光图像9。然后依照上述顺序依次进行融合,最后得到高帧率高分辨率感光图像序列。The pipeline architecture is an architecture for real-time acquisition and real-time fusion. Assuming that four consecutive photosensitive images in the time domain are selected for fusion each time, the specific process of the above pipeline architecture is as follows: During the first fusion process, the fusion unit obtains photosensitive images 1-4 from the storage unit for fusion to obtain photosensitive image 7; During the second fusion process, the fusion unit obtains photosensitive images 2-5 from the storage unit for fusion to obtain photosensitive image 8; During the third fusion process, the fusion unit obtains photosensitive images 3-6 from the storage unit for fusion to obtain photosensitive image 9. Then the fusion is performed in sequence according to the above order, and finally a high frame rate and high resolution photosensitive image sequence is obtained.
可以看出通过上述流水线架构，可以使得存储单元中至少只需要保存4张采集到的感光图像，在第一次融合过程之前，保存感光图像1-4，第一次融合过程结束后，存储单元中保存的感光图像更新为：感光图像2-5，然后依次类推。通过此种流水线架构，可以有效减少需要缓存的低分辨率感光图像，减少存储开销；同时即采即融合的方式可以有效降低预览延迟。It can be seen that with this pipeline architecture the storage unit only needs to hold as few as 4 captured photosensitive images at a time: before the first fusion it holds photosensitive images 1-4; after the first fusion completes, the stored images are updated to photosensitive images 2-5, and so on. This pipeline architecture effectively reduces the number of low-resolution photosensitive images that need to be cached, lowering storage overhead; at the same time, the capture-and-fuse-immediately approach effectively reduces preview latency.
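One possible way to realise such a capture-and-fuse pipeline is sketched below with a fixed-size buffer of N frames; the names and the generator form are assumptions of the sketch rather than the embodiment's implementation.

```python
from collections import deque

def pipeline_fuse(capture_stream, n, fuse):
    """Keep only the newest N low-resolution frames; every newly captured frame
    completes a window that is fused at once, so at most N frames are buffered
    (e.g. frames 1-4 give image 7, frames 2-5 give image 8, and so on)."""
    buffer = deque(maxlen=n)          # plays the role of the storage unit
    for frame in capture_stream:      # frames arrive in exposure order
        buffer.append(frame)          # the oldest frame is evicted automatically
        if len(buffer) == n:
            yield fuse(list(buffer))  # the fusion unit emits one output frame
```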
可选地,所述方法还包括:对所述第三感光图像中包含的每个感光数据进行处理,得到所述第三感光图像上每个感光数据对应的像素值。Optionally, the method further includes: processing each photosensitive data contained in the third photosensitive image to obtain a pixel value corresponding to each photosensitive data on the third photosensitive image.
其中,上述对感光数据进行处理,得到对应像素值的过程可以是终端设备上的图像信号处理ISP模块等可行的模块实现的。Among them, the above process of processing the photosensitive data to obtain the corresponding pixel value can be implemented by a feasible module such as an image signal processing ISP module on the terminal device.
其中,上述每个感光数据对应的像素值即是感光数据所位于的像素点对应的像素值,每个感光数据处理得到的像素值可以是RGB格式、YUV格式或其它可行的格式,本申请对此不限定。Among them, the pixel value corresponding to each of the above-mentioned photosensitive data is the pixel value corresponding to the pixel point where the photosensitive data is located. The pixel value obtained by processing each photosensitive data can be in RGB format, YUV format or other feasible format, which is not limited in this application.
通过对高分辨率高帧率的感光图像序列进行上述处理后,便可得到可以直接向用户展示的高分辨率高帧率视频或图像序列。 By performing the above processing on the high-resolution and high-frame-rate photosensitive image sequence, a high-resolution and high-frame-rate video or image sequence that can be directly displayed to the user can be obtained.
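The application does not fix the ISP processing (the resulting pixel values may be RGB, YUV or another format). Purely for illustration, and assuming an RGGB colour filter arrangement that the application itself does not specify, a minimal conversion of photosensitive data into RGB pixel values could look like:

```python
import numpy as np

def naive_demosaic_rggb(raw):
    """Very rough demosaic: average the photosensitive data of each 2x2 RGGB
    cell per channel and broadcast the result back to the four pixels of the
    cell. A real ISP additionally performs black-level, white-balance, noise
    reduction and gamma steps."""
    assert raw.shape[0] % 2 == 0 and raw.shape[1] % 2 == 0
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2]
    rgb = np.empty(raw.shape + (3,), dtype=float)
    for channel, plane in enumerate((r, g, b)):
        rgb[..., channel] = np.kron(plane, np.ones((2, 2)))
    return rgb
```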
请参见图6,图6为本申请实施例提供的一种高分辨率高帧率摄像方法的执行过程示意图。该方法可以通过如图6所示的摄像模块完成,该摄像模块可以是图1实施例中的相机。Please refer to Figure 6, which is a schematic diagram of the execution process of a high-resolution and high-frame-rate camera method provided in an embodiment of the present application. The method can be implemented by a camera module as shown in Figure 6, and the camera module can be the camera in the embodiment of Figure 1.
该方法的执行过程包括:通过对感光元件进行周期性曝光(即前述多个连续感光周期),得到高帧率低分辨率感光图像序列,并由存储单元进行缓存,然后由融合单元利用前述流水线式架构按照时域顺序,对高帧率低分辨率感光图像序列进行融合处理,得到高分辨率高帧率感光图像序列。最后,由图像信号处理单元(如ISP等)对高分辨率高帧率感光图像序列中每张感光图像上包含的感光数据进行处理,得到对应的像素值,即得到用于展示给用户的高分辨率高帧率图像序列/视频等。The execution process of the method includes: obtaining a high frame rate and low resolution photosensitive image sequence by periodically exposing the photosensitive element (i.e., the aforementioned multiple continuous photosensitive cycles), and caching by the storage unit, and then the fusion unit uses the aforementioned pipeline architecture to fuse the high frame rate and low resolution photosensitive image sequence in the time domain order to obtain a high resolution and high frame rate photosensitive image sequence. Finally, the image signal processing unit (such as ISP, etc.) processes the photosensitive data contained in each photosensitive image in the high resolution and high frame rate photosensitive image sequence to obtain the corresponding pixel value, that is, to obtain a high resolution and high frame rate image sequence/video for display to the user.
具体地,上述执行过程具体可以参见前述实施例中相应描述,此处不再赘述。Specifically, the above execution process can be specifically referred to the corresponding description in the aforementioned embodiment, which will not be repeated here.
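Chaining the earlier sketches gives a hypothetical end-to-end flow corresponding to FIG. 6; it reuses pipeline_fuse, simple_fuse and naive_demosaic_rggb from the sketches above and is again an assumption-laden illustration, not the embodiment's actual implementation.

```python
def high_res_high_fps_stream(sensor_frames, n):
    """Low-resolution high-frame-rate frames in, displayable high-resolution
    high-frame-rate frames out. The hole-filling step (interpolation or motion
    vector) described earlier is omitted here for brevity."""
    for fused in pipeline_fuse(sensor_frames, n, simple_fuse):
        yield naive_demosaic_rggb(fused)
```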
应当理解，图6中摄像模块所包含的各个模块只是列出的一个示例，并不构成对各个模块集成情况的限定。即存储单元、融合单元和图像信号处理单元可以是各自独立的模块，或者是集成在一个或多个模块内。It should be understood that the modules of the camera module shown in FIG. 6 are listed only as an example and do not constitute a limitation on how the modules are integrated. That is, the storage unit, the fusion unit and the image signal processing unit may each be an independent module, or may be integrated into one or more modules.
请参见图7,图7为本申请实施例提供的一种图像处理装置。如图7所示,该装置包括获取单元701和融合单元702;其中,Please refer to FIG. 7 , which is an image processing device provided in an embodiment of the present application. As shown in FIG. 7 , the device includes an acquisition unit 701 and a fusion unit 702; wherein,
获取单元701,用于获取N张感光图像,所述N张感光图像是通过对感光元件进行连续N次曝光分别得到的;其中,所述N张感光图像中包括第一感光图像和第二感光图像,所述第一感光图像包括P个感光数据,所述P个感光数据是在一次曝光过程中对所述感光元件上的P个感光点分别进行曝光得到的,所述第二感光图像包括Q个感光数据,所述Q个感光数据是通在一次曝光过程中对所述感光元件上的Q个感光点分别进行曝光得到的,所述P个感光点和所述Q个感光点在所述感光元件上的位置不同,P、Q为大于或等于1的整数,N为大于或等于2的整数。融合单元702,用于将所述N张感光图像进行融合处理,得到第三感光图像;其中,所述第三感光图像包括与所述P个感光点分别对应的所述P个感光数据,以及与所述Q个感光点分别对应的所述Q个感光数据。The acquisition unit 701 is used to acquire N photosensitive images, wherein the N photosensitive images are obtained by exposing the photosensitive element N times in succession; wherein the N photosensitive images include a first photosensitive image and a second photosensitive image, wherein the first photosensitive image includes P photosensitive data, wherein the P photosensitive data are obtained by exposing P photosensitive points on the photosensitive element in one exposure process, and the second photosensitive image includes Q photosensitive data, wherein the Q photosensitive data are obtained by exposing Q photosensitive points on the photosensitive element in one exposure process, wherein the P photosensitive points and the Q photosensitive points have different positions on the photosensitive element, wherein P and Q are integers greater than or equal to 1, and N is an integer greater than or equal to 2. The fusion unit 702 is used to fuse the N photosensitive images to obtain a third photosensitive image; wherein the third photosensitive image includes the P photosensitive data corresponding to the P photosensitive points, and the Q photosensitive data corresponding to the Q photosensitive points.
其中,图像处理装置还包含存储单元(图7未示出),用于存储上述N张感光图像或存储N张感光图像和与N张感光图像在时域上相邻的前H张感光图像,H等于一个感光周期内的曝光次数。Among them, the image processing device also includes a storage unit (not shown in Figure 7) for storing the above-mentioned N photosensitive images or storing the N photosensitive images and the previous H photosensitive images adjacent to the N photosensitive images in the time domain, where H is equal to the number of exposures in one photosensitive cycle.
在一种可行的实施方式中,所述N张感光图像为K*E张感光图像中在时域上连续的N张,所述K*E张感光图像是通过连续的K个感光周期依次曝光得到的,所述K个感光周期中的每个感光周期都对所述感光元件进行连续E次曝光,K为大于或等于1的正整数;所述感光元件包括M个感光单元,所述M个感光单元中的每个感光单元包含C个感光点,且在所述每个感光周期内,所述每个感光单元内的所有感光点都按序进行了一次曝光,C和M为大于或等于2的正整数。In a feasible implementation, the N photosensitive images are N consecutive images in the time domain among K*E photosensitive images, and the K*E photosensitive images are obtained by sequentially exposing through K consecutive photosensitive cycles, and each of the K photosensitive cycles exposes the photosensitive element for E consecutive times, and K is a positive integer greater than or equal to 1; the photosensitive element includes M photosensitive units, each of the M photosensitive units contains C photosensitivity points, and in each photosensitive cycle, all photosensitivity points in each photosensitive unit are exposed once in sequence, and C and M are positive integers greater than or equal to 2.
在一种可行的实施方式中,在一个感光周期内的每次曝光中,所述每个感光单元内的一个感光点被曝光,所述N等于所述C。In a feasible implementation, in each exposure within a photosensitive cycle, a photosensitivity point in each photosensitive unit is exposed, and N is equal to C.
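To make the relation N = C concrete, the following sketch simulates one photosensitive cycle for 2x2 photosensitive units (C = 4), producing four sparse frames; for a static scene the simple_fuse sketch above tiles them back into one full-resolution frame. The 2x2 layout and the exposure order are assumptions of the sketch.

```python
import numpy as np

def split_into_exposures(full_frame, unit_h=2, unit_w=2):
    """Simulate one photosensitive cycle: each exposure reads exactly one
    photosite per unit, so C = N = unit_h * unit_w sparse frames result."""
    frames = []
    for dy in range(unit_h):
        for dx in range(unit_w):
            sparse = np.full(full_frame.shape, np.nan)
            sparse[dy::unit_h, dx::unit_w] = full_frame[dy::unit_h, dx::unit_w]
            frames.append(sparse)
    return frames

# for a static scene the simple fusion recovers the full-resolution frame
scene = np.arange(16, dtype=float).reshape(4, 4)
assert np.array_equal(simple_fuse(split_into_exposures(scene)), scene)
```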
在一种可行的实施方式中,所述M个感光单元中包括第一感光单元,所述融合单元702还用于:当所述第一感光单元中的第一感光点在所述N次曝光中都未被曝光时,基于所述第一感光单元在所述N次曝光中被曝光感光点对应的感光数据进行插值,计算得到所述第一感光点对应的感光数据;其中,所述第一感光点为所述第一感光单元中的任意一个未被曝光的感光点,所述第一感光单元为所述M个感光单元中的任意一个;所述第三感光图像还包括与所述第一感光点对应的感光数据。In a feasible implementation, the M photosensitive units include a first photosensitive unit, and the fusion unit 702 is further used for: when a first photosensitive point in the first photosensitive unit has not been exposed in the N exposures, interpolation is performed based on the photosensitive data corresponding to the photosensitive points of the first photosensitive unit exposed in the N exposures, and the photosensitive data corresponding to the first photosensitive point is calculated; wherein the first photosensitive point is any one of the first photosensitive units that has not been exposed, and the first photosensitive unit is any one of the M photosensitive units; and the third photosensitive image also includes the photosensitive data corresponding to the first photosensitive point.
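As one possible illustration of this interpolation (mean-of-unit averaging is just one choice; the application does not fix the interpolation method):

```python
import numpy as np

def fill_by_unit_interpolation(fused, unit_h=2, unit_w=2):
    """Fill every unexposed (NaN) photosite with the mean of the exposed
    photosites belonging to the same photosensitive unit."""
    out = fused.copy()
    height, width = out.shape
    for y in range(0, height, unit_h):
        for x in range(0, width, unit_w):
            unit = out[y:y + unit_h, x:x + unit_w]   # a view onto `out`
            holes = np.isnan(unit)
            if holes.any() and not holes.all():
                unit[holes] = unit[~holes].mean()
    return out
```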
在一种可行的实施方式中,所述M个感光单元中包括第一感光单元,所述融合单元702还用于:当所述第一感光单元中的第一感光点在所述N张感光图像对应的N次曝光中都未被曝光时,基于所述N张感光图像包含的感光数据计算运动矢量,并基于所述运动矢量计算所述第一感光点对应的感光数据;其中,所述第一感光点为所述第一感光单元中的任意一个未被曝光的感光点,所述第一感光单元为所述M个感光单元中的任意一个;所述第三感光图像还包括与所述第一感光点对应的感光数据。In a feasible embodiment, the M photosensitive units include a first photosensitive unit, and the fusion unit 702 is further used to: when a first photosensitive point in the first photosensitive unit has not been exposed in the N exposures corresponding to the N photosensitive images, calculate a motion vector based on the photosensitive data contained in the N photosensitive images, and calculate the photosensitive data corresponding to the first photosensitive point based on the motion vector; wherein the first photosensitive point is any one of the first photosensitive units that has not been exposed, and the first photosensitive unit is any one of the M photosensitive units; and the third photosensitive image also includes photosensitive data corresponding to the first photosensitive point.
在一种可行的实施方式中,所述融合单元702还用于:将所述N张感光图像中在时域上连续的N-1张感光图像、第四感光图像进行融合处理,得到与所述第三感光图像相邻的第五感光图像;其中,所述第四感光图像为在时域上与所述N张感光图像相邻的下一张感光图像,且所述第四感光图像与所述N-1张感光图像在时域上相邻。In a feasible implementation, the fusion unit 702 is also used to: fuse N-1 photosensitive images and a fourth photosensitive image that are continuous in time domain among the N photosensitive images to obtain a fifth photosensitive image adjacent to the third photosensitive image; wherein the fourth photosensitive image is the next photosensitive image adjacent to the N photosensitive images in time domain, and the fourth photosensitive image is adjacent to the N-1 photosensitive images in time domain.
在一种可行的实施方式中,所述装置还包括:图像信号处理单元,用于对所述第三感光图像中包含的每个感光数据进行处理,得到所述第三感光图像上每个感光数据对应的像素值。In a feasible implementation manner, the device further includes: an image signal processing unit, configured to process each photosensitive data contained in the third photosensitive image to obtain a pixel value corresponding to each photosensitive data on the third photosensitive image.
具体地,上述图像处理装置700的具体执行过程可以参见前述实施例中图3实施例描述的高分辨率高帧率摄像方法的执行流程,此处不再赘述。 Specifically, the specific execution process of the above-mentioned image processing device 700 can refer to the execution process of the high-resolution and high-frame rate camera method described in the embodiment of Figure 3 in the above-mentioned embodiment, and will not be repeated here.
请参见图8,图8为本发明实施例提供的一种图像处理装置的硬件结构示意图,图像处理装置800作为一种电子装置,可以包括电子装置101和电子装置201中全部或者部分元件或者模块。如图8所示,图像处理装置800可以作为图像处理装置700的一种实现方式,图像处理装置800包括处理器802、存储器804、输入/输出接口806、通信接口808和总线810。其中,处理器802、存储器804、输入/输出接口806和通信接口808通过总线810实现彼此之间的通信连接。Please refer to FIG8 , which is a schematic diagram of the hardware structure of an image processing device provided by an embodiment of the present invention. As an electronic device, the image processing device 800 may include all or part of the components or modules in the electronic device 101 and the electronic device 201. As shown in FIG8 , the image processing device 800 may be used as an implementation of the image processing device 700. The image processing device 800 includes a processor 802, a memory 804, an input/output interface 806, a communication interface 808, and a bus 810. The processor 802, the memory 804, the input/output interface 806, and the communication interface 808 are connected to each other through the bus 810.
处理器802可以采用通用的中央处理器(Central Processing Unit,CPU),微处理器,应用专用集成电路(Application Specific Integrated Circuit,ASIC),或者一个或多个集成电路,用于执行相关程序,以实现本发明实施例所提供的图像处理装置800中包括的单元所需执行的功能,或者执行本发明方法实施例和发明内容提供的高分辨率高帧率摄像方法。处理器802可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器802中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器802可以是通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现成可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本发明实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。软件模块可以位于随机存取存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器804,处理器802读取存储器804中的信息,结合其硬件完成上述方法实施例中的步骤。The processor 802 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, which are used to execute relevant programs to implement the functions required to be performed by the units included in the image processing device 800 provided in the embodiment of the present invention, or to perform the high-resolution and high-frame-rate camera method provided in the method embodiment and the content of the invention. The processor 802 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 802 or an instruction in the form of software. The above-mentioned processor 802 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), an off-the-shelf programmable gate array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components. The disclosed methods, steps and logic block diagrams in the embodiments of the present invention may be implemented or executed. The general-purpose processor may be a microprocessor or the processor may also be any conventional processor, etc. The software module may be located in a random access memory, flash memory, read-only memory, programmable read-only memory, or electrically erasable programmable memory, register, or other mature storage media in the art. The storage medium is located in the memory 804, and the processor 802 reads the information in the memory 804 and completes the steps in the above method embodiment in combination with its hardware.
存储器804可以是只读存储器(Read Only Memory,ROM),静态存储设备,动态存储设备或者随机存取存储器(Random Access Memory,RAM)。存储器804可以存储操作***以及其他应用程序。在通过软件或者固件来实现本发明实施例提供的图像处理装置700中包括的单元所需执行的功能,或者执行本发明方法实施例和发明内容提供的高分辨率高帧率摄像方法时,用于实现本发明实施例提供的技术方案的程序代码保存在存储器804中,并由处理器802来执行图像处理装置700中包括的单元所需执行的操作,或者执行本发明方法实施例提供的高分辨率高帧率摄像方法。The memory 804 may be a read-only memory (ROM), a static storage device, a dynamic storage device or a random access memory (RAM). The memory 804 may store an operating system and other application programs. When the functions required to be performed by the units included in the image processing device 700 provided in the embodiment of the present invention are implemented by software or firmware, or the high-resolution and high-frame-rate camera method provided in the method embodiment and the content of the present invention is executed, the program code for implementing the technical solution provided in the embodiment of the present invention is stored in the memory 804, and the processor 802 executes the operations required to be performed by the units included in the image processing device 700, or executes the high-resolution and high-frame-rate camera method provided in the method embodiment of the present invention.
输入/输出接口806用于接收输入的数据和信息,输出操作结果等数据。The input/output interface 806 is used to receive input data and information, and output data such as operation results.
通信接口808使用例如但不限于收发器一类的收发装置,来实现图像处理装置800与其他设备或通信网络之间的通信。The communication interface 808 uses a transceiver device such as, but not limited to, a transceiver to implement communication between the image processing apparatus 800 and other devices or a communication network.
总线810可包括在图像处理装置800各个部件(例如处理器802、存储器804、输入/输出接口806和通信接口808)之间传送信息的通路。The bus 810 may include a path for transmitting information between the various components of the image processing device 800 (eg, the processor 802 , the memory 804 , the input/output interface 806 , and the communication interface 808 ).
应注意,尽管图8所示的图像处理装置800仅仅示出了处理器802、存储器804、输入/输出接口806、通信接口808以及总线810,但是在具体实现过程中,本领域的技术人员应当明白,图像处理装置800还包含实现正常运行所必须的其他器件,例如显示器,相机。同时,根据具体需要,本领域的技术人员应当明白,图像处理装置800还可包含实现其他附加功能的硬件器件。此外,本领域的技术人员应当明白,图像处理装置800也可仅仅包含实现本发明实施例所必须的器件,而不必包含图8中所示的全部器件。It should be noted that although the image processing device 800 shown in FIG8 only shows the processor 802, the memory 804, the input/output interface 806, the communication interface 808 and the bus 810, in the specific implementation process, those skilled in the art should understand that the image processing device 800 also includes other devices necessary for normal operation, such as a display and a camera. At the same time, according to specific needs, those skilled in the art should understand that the image processing device 800 may also include hardware devices for implementing other additional functions. In addition, those skilled in the art should understand that the image processing device 800 may also only include the devices necessary for implementing the embodiment of the present invention, and does not necessarily include all the devices shown in FIG8.
可以理解的是,本实施例的图像处理装置800的更多的执行操作可以参照上述实施例以及发明内容中的相关描述,此处不再赘述。It can be understood that more execution operations of the image processing device 800 of this embodiment can refer to the above embodiments and related descriptions in the content of the invention, which will not be repeated here.
本申请实施例提供了一种芯片系统，所述芯片系统包括至少一个处理器，存储器和通信接口，所述存储器、所述通信接口和所述至少一个处理器通过线路互联，所述至少一个存储器中存储有指令；所述指令被所述处理器执行时，上述方法实施例中记载的任意一种的部分或全部步骤得以实现。本申请实施例提供了一种计算机存储介质，所述计算机存储介质存储有计算机程序，该计算机程序被执行时，使得上述方法实施例中记载的任意一种的部分或全部步骤得以实现。An embodiment of the present application provides a chip system. The chip system includes at least one processor, a memory and a communication interface; the memory, the communication interface and the at least one processor are interconnected through lines, and the at least one memory stores instructions. When the instructions are executed by the processor, some or all of the steps described in any one of the above method embodiments are implemented. An embodiment of the present application further provides a computer storage medium storing a computer program; when the computer program is executed, some or all of the steps described in any one of the above method embodiments are implemented.
本申请实施例提供了一种计算机程序,该计算机程序包括指令,当该计算机程序被处理器执行时,使得上述方法实施例中记载的任意一种的部分或全部步骤得以实现。An embodiment of the present application provides a computer program, which includes instructions. When the computer program is executed by a processor, part or all of the steps of any one of the above method embodiments are implemented.
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其它实施例的相关描述。需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可能可以采用其它顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。In the above embodiments, the description of each embodiment has its own emphasis. For the parts that are not described in detail in a certain embodiment, please refer to the relevant description of other embodiments. It should be noted that for the aforementioned method embodiments, for the sake of simple description, they are all expressed as a series of action combinations, but those skilled in the art should be aware that this application is not limited to the described order of actions, because according to this application, some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also be aware that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by this application.
在本申请所提供的几个实施例中,应该理解到,所揭露的装置,可通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如上述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以 有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性或其它的形式。In the several embodiments provided in this application, it should be understood that the disclosed device can be implemented in other ways. For example, the device embodiments described above are only schematic, and the division of the above units is only a logical function division. There are other ways of dividing, for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not performed. Another point, the coupling or direct coupling or communication connection between each other shown or discussed can be an indirect coupling or communication connection through some interface, device or unit, which can be electrical or other forms.
上述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described above as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。 As described above, the above embodiments are only used to illustrate the technical solutions of the present application, rather than to limit them. Although the present application has been described in detail with reference to the aforementioned embodiments, those skilled in the art should understand that they can still modify the technical solutions described in the aforementioned embodiments, or make equivalent replacements for some of the technical features therein. However, these modifications or replacements do not deviate the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (18)

  1. 一种高分辨率高帧率摄像方法,其特征在于,所述方法包括:A high-resolution and high-frame-rate video recording method, characterized in that the method comprises:
    获取N张感光图像,所述N张感光图像是通过对感光元件进行连续N次曝光分别得到的;其中,所述N张感光图像中包括第一感光图像和第二感光图像,所述第一感光图像包括P个感光数据,所述P个感光数据是在一次曝光过程中对所述感光元件上的P个感光点分别进行曝光得到的,所述第二感光图像包括Q个感光数据,所述Q个感光数据是通在一次曝光过程中对所述感光元件上的Q个感光点分别进行曝光得到的,所述P个感光点和所述Q个感光点在所述感光元件上的位置不同,P、Q为大于或等于1的整数,N为大于或等于2的整数;Acquire N photosensitive images, wherein the N photosensitive images are respectively obtained by exposing a photosensitive element N times in succession; wherein the N photosensitive images include a first photosensitive image and a second photosensitive image, the first photosensitive image includes P photosensitive data, the P photosensitive data are respectively obtained by exposing P photosensitive points on the photosensitive element in one exposure process, the second photosensitive image includes Q photosensitive data, the Q photosensitive data are respectively obtained by exposing Q photosensitive points on the photosensitive element in one exposure process, the P photosensitive points and the Q photosensitive points are at different positions on the photosensitive element, P and Q are integers greater than or equal to 1, and N is an integer greater than or equal to 2;
    将所述N张感光图像进行融合处理,得到第三感光图像;其中,所述第三感光图像包括与所述P个感光点分别对应的所述P个感光数据,以及与所述Q个感光点分别对应的所述Q个感光数据。The N photosensitive images are fused to obtain a third photosensitive image; wherein the third photosensitive image includes the P photosensitive data corresponding to the P photosensitive points, and the Q photosensitive data corresponding to the Q photosensitive points.
  2. 根据权利要求1所述的方法,其特征在于,The method according to claim 1, characterized in that
    所述N张感光图像为K*E张感光图像中在时域上连续的N张,所述K*E张感光图像是通过连续的K个感光周期依次曝光得到的,所述K个感光周期中的每个感光周期都对所述感光元件进行连续E次曝光,K为大于或等于1的正整数;The N photosensitive images are N consecutive images in the time domain among K*E photosensitive images, and the K*E photosensitive images are obtained by sequentially exposing through K consecutive photosensitive cycles, and each photosensitive cycle in the K photosensitive cycles exposes the photosensitive element for E consecutive times, where K is a positive integer greater than or equal to 1;
    所述感光元件包括M个感光单元,所述M个感光单元中的每个感光单元包含C个感光点,且在所述每个感光周期内,所述每个感光单元内的所有感光点都按序进行了一次曝光,C和M为大于或等于2的正整数。The photosensitive element includes M photosensitive units, each of the M photosensitive units includes C photosensitive points, and in each photosensitive cycle, all photosensitive points in each photosensitive unit are exposed once in sequence, and C and M are positive integers greater than or equal to 2.
  3. 根据权利要求2所述的方法,其特征在于,The method according to claim 2, characterized in that
    在一个感光周期内的每次曝光中,所述每个感光单元内的一个感光点被曝光,所述N等于所述C。In each exposure within a photosensitive cycle, a photosensitivity point in each photosensitive unit is exposed, and N is equal to C.
  4. 根据权利要求2所述的方法,其特征在于,所述M个感光单元中包括第一感光单元,所述方法还包括:The method according to claim 2, characterized in that the M photosensitive units include a first photosensitive unit, and the method further comprises:
    当所述第一感光单元中的第一感光点在所述N次曝光中都未被曝光时,基于所述第一感光单元在所述N次曝光中被曝光感光点对应的感光数据进行插值,计算得到所述第一感光点对应的感光数据;其中,所述第一感光点为所述第一感光单元中的任意一个未被曝光的感光点,所述第一感光单元为所述M个感光单元中的任意一个;When a first photosensitive point in the first photosensitive unit is not exposed in the N exposures, interpolation is performed based on the photosensitive data corresponding to the photosensitive points of the first photosensitive unit exposed in the N exposures to calculate and obtain the photosensitive data corresponding to the first photosensitive point; wherein the first photosensitive point is any one of the first photosensitive units that is not exposed, and the first photosensitive unit is any one of the M photosensitive units;
    所述第三感光图像还包括与所述第一感光点对应的感光数据。The third photosensitive image also includes photosensitive data corresponding to the first photosensitive point.
  5. 根据权利要求2所述的方法,其特征在于,所述M个感光单元中包括第一感光单元,所述方法还包括:The method according to claim 2, characterized in that the M photosensitive units include a first photosensitive unit, and the method further comprises:
    当所述第一感光单元中的第一感光点在所述N张感光图像对应的N次曝光中都未被曝光时,基于所述N张感光图像包含的感光数据计算运动矢量,并基于所述运动矢量计算所述第一感光点对应的感光数据;其中,所述第一感光点为所述第一感光单元中的任意一个未被曝光的感光点,所述第一感光单元为所述M个感光单元中的任意一个;When a first photosensitive point in the first photosensitive unit has not been exposed in N exposures corresponding to the N photosensitive images, a motion vector is calculated based on the photosensitive data contained in the N photosensitive images, and photosensitive data corresponding to the first photosensitive point is calculated based on the motion vector; wherein the first photosensitive point is any one of the first photosensitive units that has not been exposed, and the first photosensitive unit is any one of the M photosensitive units;
    所述第三感光图像还包括与所述第一感光点对应的感光数据。The third photosensitive image also includes photosensitive data corresponding to the first photosensitive point.
  6. 根据权利要求1-5中任一项所述的方法,其特征在于,所述方法还包括:The method according to any one of claims 1 to 5, characterized in that the method further comprises:
    将所述N张感光图像中在时域上连续的N-1张感光图像、第四感光图像进行融合处理,得到与所述第三感光图像相邻的第五感光图像;Fusing N-1 photosensitive images that are continuous in the time domain and the fourth photosensitive image among the N photosensitive images to obtain a fifth photosensitive image adjacent to the third photosensitive image;
    其中,所述第四感光图像为在时域上与所述N张感光图像相邻的下一张感光图像,且所述第四感光图像与所述N-1张感光图像在时域上相邻。The fourth photosensitive image is the next photosensitive image adjacent to the N photosensitive images in the time domain, and the fourth photosensitive image is adjacent to the N-1 photosensitive images in the time domain.
  7. 根据权利要求3-6中任一项所述的方法,其特征在于,所述方法还包括:The method according to any one of claims 3 to 6, characterized in that the method further comprises:
    对所述第三感光图像中包含的每个感光数据进行处理,得到所述第三感光图像上每个感光数据对应的像素值。 Each photosensitive data contained in the third photosensitive image is processed to obtain a pixel value corresponding to each photosensitive data on the third photosensitive image.
  8. 一种图像处理装置,其特征在于,所述装置包括:An image processing device, characterized in that the device comprises:
    获取单元,用于获取N张感光图像,所述N张感光图像是通过对感光元件进行连续N次曝光分别得到的;其中,所述N张感光图像中包括第一感光图像和第二感光图像,所述第一感光图像包括P个感光数据,所述P个感光数据是在一次曝光过程中对所述感光元件上的P个感光点分别进行曝光得到的,所述第二感光图像包括Q个感光数据,所述Q个感光数据是通在一次曝光过程中对所述感光元件上的Q个感光点分别进行曝光得到的,所述P个感光点和所述Q个感光点在所述感光元件上的位置不同,P、Q为大于或等于1的整数,N为大于或等于2的整数;An acquisition unit, used for acquiring N photosensitive images, wherein the N photosensitive images are respectively obtained by exposing the photosensitive element N times in succession; wherein the N photosensitive images include a first photosensitive image and a second photosensitive image, wherein the first photosensitive image includes P photosensitive data, wherein the P photosensitive data are respectively obtained by exposing P photosensitive points on the photosensitive element in one exposure process, and the second photosensitive image includes Q photosensitive data, wherein the Q photosensitive data are respectively obtained by exposing Q photosensitive points on the photosensitive element in one exposure process, wherein the P photosensitive points and the Q photosensitive points are located at different positions on the photosensitive element, wherein P and Q are integers greater than or equal to 1, and N is an integer greater than or equal to 2;
    融合单元,用于将所述N张感光图像进行融合处理,得到第三感光图像;其中,所述第三感光图像包括与所述P个感光点分别对应的所述P个感光数据,以及与所述Q个感光点分别对应的所述Q个感光数据。A fusion unit is used to fuse the N photosensitive images to obtain a third photosensitive image; wherein the third photosensitive image includes the P photosensitive data corresponding to the P photosensitive points, and the Q photosensitive data corresponding to the Q photosensitive points.
  9. 根据权利要求8所述的装置,其特征在于,The device according to claim 8, characterized in that
    所述N张感光图像为K*E张感光图像中在时域上连续的N张,所述K*E张感光图像是通过连续的K个感光周期依次曝光得到的,所述K个感光周期中的每个感光周期都对所述感光元件进行连续E次曝光,K为大于或等于1的正整数;The N photosensitive images are N consecutive images in the time domain among K*E photosensitive images, and the K*E photosensitive images are obtained by sequentially exposing through K consecutive photosensitive cycles, and each photosensitive cycle in the K photosensitive cycles exposes the photosensitive element for E consecutive times, where K is a positive integer greater than or equal to 1;
    所述感光元件包括M个感光单元,所述M个感光单元中的每个感光单元包含C个感光点,且在所述每个感光周期内,所述每个感光单元内的所有感光点都按序进行了一次曝光,C和M为大于或等于2的正整数。The photosensitive element includes M photosensitive units, each of the M photosensitive units includes C photosensitive points, and in each photosensitive cycle, all photosensitive points in each photosensitive unit are exposed once in sequence, and C and M are positive integers greater than or equal to 2.
  10. 根据权利要求9所述的装置,其特征在于,The device according to claim 9, characterized in that
    在一个感光周期内的每次曝光中,所述每个感光单元内的一个感光点被曝光,所述N等于所述C。In each exposure within a photosensitive cycle, a photosensitivity point in each photosensitive unit is exposed, and N is equal to C.
  11. 根据权利要求9所述的装置,其特征在于,所述M个感光单元中包括第一感光单元,所述融合单元还用于:The device according to claim 9, characterized in that the M photosensitive units include a first photosensitive unit, and the fusion unit is further used to:
    当所述第一感光单元中的第一感光点在所述N次曝光中都未被曝光时,基于所述第一感光单元在所述N次曝光中被曝光感光点对应的感光数据进行插值,计算得到所述第一感光点对应的感光数据;其中,所述第一感光点为所述第一感光单元中的任意一个未被曝光的感光点,所述第一感光单元为所述M个感光单元中的任意一个;When a first photosensitive point in the first photosensitive unit is not exposed in the N exposures, interpolation is performed based on the photosensitive data corresponding to the photosensitive points of the first photosensitive unit exposed in the N exposures to calculate and obtain the photosensitive data corresponding to the first photosensitive point; wherein the first photosensitive point is any one of the first photosensitive units that is not exposed, and the first photosensitive unit is any one of the M photosensitive units;
    所述第三感光图像还包括与所述第一感光点对应的感光数据。The third photosensitive image also includes photosensitive data corresponding to the first photosensitive point.
  12. 根据权利要求9所述的装置,其特征在于,所述M个感光单元中包括第一感光单元,所述融合单元还用于:The device according to claim 9, characterized in that the M photosensitive units include a first photosensitive unit, and the fusion unit is further used to:
    当所述第一感光单元中的第一感光点在所述N张感光图像对应的N次曝光中都未被曝光时,基于所述N张感光图像包含的感光数据计算运动矢量,并基于所述运动矢量计算所述第一感光点对应的感光数据;其中,所述第一感光点为所述第一感光单元中的任意一个未被曝光的感光点,所述第一感光单元为所述M个感光单元中的任意一个;When a first photosensitive point in the first photosensitive unit has not been exposed in N exposures corresponding to the N photosensitive images, a motion vector is calculated based on the photosensitive data contained in the N photosensitive images, and photosensitive data corresponding to the first photosensitive point is calculated based on the motion vector; wherein the first photosensitive point is any one of the first photosensitive units that has not been exposed, and the first photosensitive unit is any one of the M photosensitive units;
    所述第三感光图像还包括与所述第一感光点对应的感光数据。The third photosensitive image also includes photosensitive data corresponding to the first photosensitive point.
  13. 根据权利要求8-12中任一项所述的装置,其特征在于,所述融合单元还用于:The device according to any one of claims 8 to 12, characterized in that the fusion unit is further used for:
    将所述N张感光图像中在时域上连续的N-1张感光图像、第四感光图像进行融合处理,得到与所述第三感光图像相邻的第五感光图像;Fusing N-1 photosensitive images that are continuous in the time domain and the fourth photosensitive image among the N photosensitive images to obtain a fifth photosensitive image adjacent to the third photosensitive image;
    其中,所述第四感光图像为在时域上与所述N张感光图像相邻的下一张感光图像,且所述第四感光图像与所述N-1张感光图像在时域上相邻。The fourth photosensitive image is the next photosensitive image adjacent to the N photosensitive images in the time domain, and the fourth photosensitive image is adjacent to the N-1 photosensitive images in the time domain.
  14. 根据权利要求10-13中任一项所述的装置,其特征在于,所述装置还包括:The device according to any one of claims 10 to 13, characterized in that the device further comprises:
    图像信号处理单元,用于对所述第三感光图像中包含的每个感光数据进行处理,得到所述第三感光图像上每个感光数据对应的像素值。The image signal processing unit is used to process each photosensitive data contained in the third photosensitive image to obtain a pixel value corresponding to each photosensitive data on the third photosensitive image.
  15. 一种芯片系统，其特征在于，所述芯片系统包括至少一个处理器，存储器和通信接口，所述存储器、所述通信接口和所述至少一个处理器通过线路互联，所述至少一个存储器中存储有指令；所述指令被所述处理器执行时，权利要求1-7中任一所述的方法得以实现。A chip system, characterized in that the chip system comprises at least one processor, a memory and a communication interface, wherein the memory, the communication interface and the at least one processor are interconnected through lines, and the at least one memory stores instructions; when the instructions are executed by the processor, the method according to any one of claims 1 to 7 is implemented.
  16. 一种电子装置,其特征在于,所述电子装置包括至少一个处理器,存储器和通信接口,所述存储器、所述通信接口和所述至少一个处理器通过线路互联,所述至少一个存储器中存储有指令;所述指令被所述处理器执行时,权利要求1-7中任一所述的方法得以实现。An electronic device, characterized in that the electronic device includes at least one processor, a memory and a communication interface, the memory, the communication interface and the at least one processor are interconnected by lines, and instructions are stored in the at least one memory; when the instructions are executed by the processor, any method described in claims 1-7 is implemented.
  17. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,该计算机程序被执行时,权利要求1-7中任意一项所述的方法得以实现。A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, and when the computer program is executed, the method described in any one of claims 1 to 7 is implemented.
  18. 一种计算机程序,其特征在于,该计算机程序包括指令,当该计算机程序被执行时,权利要求1-7中任意一项所述的方法得以实现。 A computer program, characterized in that the computer program comprises instructions, and when the computer program is executed, the method described in any one of claims 1 to 7 is implemented.
PCT/CN2023/120896 2022-09-26 2023-09-23 High-resolution high-frame-rate photographing method, and image processing apparatus WO2024067428A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211187446.0 2022-09-26
CN202211187446.0A CN117808688A (en) 2022-09-26 2022-09-26 High-resolution high-frame-rate image pickup method and image processing apparatus

Publications (1)

Publication Number Publication Date
WO2024067428A1 true WO2024067428A1 (en) 2024-04-04

Family

ID=90428660

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/120896 WO2024067428A1 (en) 2022-09-26 2023-09-23 High-resolution high-frame-rate photographing method, and image processing apparatus

Country Status (2)

Country Link
CN (1) CN117808688A (en)
WO (1) WO2024067428A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100026825A1 (en) * 2007-01-23 2010-02-04 Nikon Corporation Image processing device, electronic camera, image processing method, and image processing program
JP2014236251A (en) * 2013-05-31 2014-12-15 キヤノン株式会社 Imaging apparatus
CN107409180A (en) * 2015-03-09 2017-11-28 三星电子株式会社 Electronic installation with camera model and the image processing method for electronic installation
CN109863742A (en) * 2017-01-25 2019-06-07 华为技术有限公司 Image processing method and terminal device
CN112492228A (en) * 2020-12-15 2021-03-12 维沃移动通信有限公司 Exposure method, camera module and electronic equipment

Also Published As

Publication number Publication date
CN117808688A (en) 2024-04-02

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23870659

Country of ref document: EP

Kind code of ref document: A1