WO2021253186A1 - Information processing method, device, and imaging system - Google Patents

Information processing method, device, and imaging system

Info

Publication number
WO2021253186A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
event
processing
gradient
gradient information
Prior art date
Application number
PCT/CN2020/096194
Other languages
English (en)
French (fr)
Inventor
伦朝林
杨景景
李静
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to PCT/CN2020/096194 priority Critical patent/WO2021253186A1/zh
Priority to CN202080005340.7A priority patent/CN112771843A/zh
Publication of WO2021253186A1 publication Critical patent/WO2021253186A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to an information processing method, device, and imaging system.
  • the embodiments of the present disclosure provide an information processing method, device, and imaging system, which can obtain video information through an imaging device, obtain event information through an event camera, and obtain multiple interpolated frame images based on the video information and event information, so that frames can be interpolated into the video information based on the multiple interpolated frame images to generate video information with a high frame rate.
  • an embodiment of the present disclosure provides an information processing method, including: obtaining first video information, where the first video information includes multiple image frames; obtaining event information, where the event information includes multiple event points, and each of the multiple event points includes coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information; generating multiple gradient information maps based on the multiple event points, where the gradient information maps include the gradient information of each pixel; processing the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images; and interpolating frames into the first video information based on the multiple interpolated frame images to obtain second video information, wherein the frame rate of the second video information is higher than the frame rate of the first video information.
  • embodiments of the present disclosure provide an information processing device, which includes a first obtaining module, a second obtaining module, a generating module, a first processing module, and a frame inserting module.
  • the first obtaining module is used to obtain first video information
  • the first video information includes a plurality of image frames.
  • the second obtaining module is configured to obtain event information, the event information includes a plurality of event points, and each event point of the plurality of event points includes coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information.
  • the generating module is configured to generate multiple gradient information maps based on the multiple event points, the gradient information maps including gradient information of each pixel.
  • the first processing module is configured to process the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images.
  • the frame insertion module is configured to perform frame insertion on the first video information based on the plurality of interpolated frame images to obtain second video information, wherein the frame rate of the second video information is higher than the frame rate of the first video information.
  • embodiments of the present disclosure provide an imaging system including an imaging device, an event camera, and a processor.
  • the imaging device is used to obtain the first video information.
  • the event camera is used to obtain event information.
  • the processor is configured to operate as follows: obtain first video information, where the first video information includes multiple image frames; obtain event information, where the event information includes multiple event points, and each of the multiple event points includes coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information; generate multiple gradient information maps based on the multiple event points, the gradient information maps including the gradient information of each pixel; process the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images; and interpolate frames into the first video information based on the multiple interpolated frame images to obtain second video information, wherein the frame rate of the second video information is higher than the frame rate of the first video information.
  • embodiments of the present disclosure provide a computer system, including: one or more processors; and a computer-readable storage medium for storing one or more programs, wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method described above.
  • embodiments of the present disclosure provide a computer-readable storage medium having executable instructions stored thereon, and when the instructions are executed by a processor, the processor implements the method described above.
  • embodiments of the present disclosure provide a computer program product, including computer-readable instructions, where the computer-readable instructions are used to execute the above-mentioned method when executed.
  • Fig. 1 schematically shows an exemplary system architecture to which an information processing method can be applied in an embodiment of the present disclosure
  • Fig. 2 schematically shows a flowchart of an information processing method according to an embodiment of the present disclosure
  • Fig. 3 schematically shows a schematic diagram of an image acquisition device according to an embodiment of the present disclosure
  • FIG. 4 schematically shows a schematic diagram of an event point set of an embodiment of the present disclosure
  • Fig. 5 schematically shows a block diagram of an information processing device according to an embodiment of the present disclosure
  • FIG. 6 schematically shows a block diagram of an imaging system of an embodiment of the present disclosure.
  • Fig. 7 schematically shows a block diagram of a computer system according to an embodiment of the present disclosure.
  • the present disclosure provides an information processing method.
  • the method includes obtaining first video information by an imaging device, and the first video information may include a plurality of image frames.
  • the event information is obtained through the event camera, and the event information may include multiple event points, and each event point may include coordinate information, time information, and brightness information of a pixel corresponding to the coordinate information.
  • the embodiments of the present disclosure can generate multiple gradient information graphs based on multiple event points in the event information.
  • the gradient information map includes the gradient information of each pixel; based on the multiple image frames in the first video information, the generated gradient information maps are processed to obtain multiple interpolated frame images.
  • the first video information can be interpolated according to the multiple interpolated frame images to obtain the second video information, so that video information with a high frame rate can be obtained.
  • Fig. 1 schematically shows an exemplary system architecture to which an information processing method according to an embodiment of the present disclosure can be applied.
  • FIG. 1 is only an example of a system architecture to which the embodiments of the present disclosure can be applied, to help those skilled in the art understand the technical content of the present disclosure; it does not mean that the embodiments of the present disclosure cannot be used for other equipment, systems, environments, or scenarios.
  • the system architecture 100 may include an imaging device 101, an event camera 102, a network 103, and a server 104.
  • the network 103 is a medium used to provide a communication link between the imaging device 101 and the server 104, or between the event camera 102 and the server 104.
  • the network 103 may include various connection types, such as wired, wireless communication links, or fiber optic cables, and so on.
  • the imaging device 101 may be, for example, a device that forms an image using the principle of optical imaging and records the image on film.
  • the imaging device 101 may be various imaging cameras equipped with image sensors.
  • the event camera (Event Camera) 102 may be, for example, a device that outputs an asynchronous signal by measuring the brightness change of each pixel.
  • the event camera 102 may be various event cameras equipped with dynamic vision sensors, including but not limited to DVS (Dynamic Vision Sensor), ATIS (Asynchronous Time Based Image Sensor), DAVIS (Dynamic and Active Pixel Vision Sensor), etc.
  • the server 104 may be a server that provides various services. For example, it may analyze and process the received video information and event information, and feed back the processing result to the user.
  • the information processing method provided by the embodiments of the present disclosure can generally be executed by the server 104.
  • the information processing device provided by the embodiment of the present disclosure may generally be set in the server 104.
  • the information processing method provided by the embodiments of the present disclosure may also be executed by a server or a server cluster that is different from the server 104 and can communicate with the imaging device 101, the event camera 102, and/or the server 104.
  • the information processing apparatus provided in the embodiments of the present disclosure may also be set in a server or a server cluster that is different from the server 104 and can communicate with the imaging device 101, the event camera 102, and/or the server 104.
  • the information processing method provided by the embodiments of the present disclosure may also be executed by the imaging device 101 or the event camera 102, or may also be executed by other terminal devices (for example, user terminals) different from the imaging device 101 and the event camera 102.
  • the imaging device 101 may be used to obtain video information
  • the event camera 102 may be used to obtain event information.
  • the server 104 may obtain video information from the imaging device 101 through the network 103 and obtain event information from the event camera 102 through the network 103.
  • the server 104 may obtain a plurality of interpolated frame images based on the obtained video information and event information, and perform interpolating processing on the video information according to the obtained plurality of interpolated frame images, so as to obtain video information with a higher frame rate.
  • Fig. 2 schematically shows a flowchart of an information processing method according to an embodiment of the present disclosure.
  • the method includes operations S201 to S205.
  • first video information is obtained, where the first video information includes a plurality of image frames.
  • the first video information can be obtained by shooting with an imaging device.
  • the imaging device can shoot and record the first video information by, for example, the principle of optical imaging.
  • a digital camera, a single-lens reflex camera, or various electronic devices with a camera can be used to capture the first video information.
  • the first video information may also include video information stored in a device such as a non-volatile storage device, or virtual video information, for example, video information generated by artificial intelligence technology, which is not limited here.
  • the first video information may be, for example, a video sequence composed of multiple consecutive image frames, and the first video information may, for example, reflect brightness information of a certain scene within a period of time.
  • the first video information may include color video or grayscale video.
  • the embodiment of the present disclosure does not limit the format and color channel of the first video information, and those skilled in the art can set it according to actual needs.
  • event information is obtained, the event information includes a plurality of event points, and each event point of the plurality of event points includes coordinate information, time information, and brightness information of a pixel corresponding to the coordinate information.
  • event information can be obtained through an event camera.
  • the event camera can output a series of event sequences, for example, by independently collecting the brightness information of several points in the scene, or intensity information such as the brightness of each pixel.
  • various event cameras with dynamic vision sensors such as DVS, ATIS, or DAVIS, can be used to obtain event information.
  • the event information may be, for example, an event sequence composed of multiple event points, and the event information may, for example, reflect the brightness change information of the current scene.
  • Each event point can include coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information.
  • the brightness information may be, for example, brightness change information.
  • the brightness change includes that the brightness remains unchanged, the brightness becomes higher, or the brightness becomes lower.
  • the brightness change information corresponding to the brightness remaining unchanged is 0, the brightness change information corresponding to the higher brightness is 1, and the brightness change information corresponding to the lower brightness is -1.
  • each event can contain the features (x, y, t, p), where x and y represent spatial position information (for example, the coordinate information of a pixel), t represents the timestamp at which the event was triggered, and p represents the data polarity (for example, 0 means that the brightness of the pixel does not change, 1 means that the brightness of the pixel increases, and -1 means that the brightness of the pixel decreases).
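As an illustration, a minimal sketch of how such an (x, y, t, p) event stream could be represented in code; the class and function names here are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class EventPoint:
    x: int      # pixel column (coordinate information)
    y: int      # pixel row (coordinate information)
    t: float    # timestamp at which the event was triggered
    p: int      # polarity: 0 = unchanged, 1 = brighter, -1 = darker

def split_by_polarity(events: List[EventPoint]) -> Tuple[List[EventPoint], List[EventPoint]]:
    """Separate brightness-increase events from brightness-decrease events."""
    brighter = [e for e in events if e.p == 1]
    darker = [e for e in events if e.p == -1]
    return brighter, darker
```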
  • the photosensitive element of the imaging camera and the photosensitive element of the event camera can be placed as close together as possible, so as to prevent factors such as incident light and parallax from causing an excessively large difference between the brightness of the first video information acquired by the imaging camera and the brightness of the event information acquired by the event camera.
  • FIG. 3 schematically shows a schematic diagram of an image acquisition device 300 according to an embodiment of the present disclosure.
  • the image acquisition device 300 may include an imaging camera 310 and an event camera 320.
  • the imaging camera 310 and the event camera 320 may be detachably fixed in the image acquisition device 300 by a fixing device as close to each other as possible.
  • the imaging camera 310 and the event camera 320 can be controlled to acquire the first video information and event information substantially synchronously to eliminate problems caused by time differences, such as brightness differences in the same scene at different times.
  • the imaging camera 310 and the event camera 320 may be respectively controlled to obtain the first video information and event information, and then the time difference and/or brightness difference may be corrected through an algorithm.
  • the mapping relationship between the first video information and the event information can also be determined based on the parameters of the imaging camera and the parameters of the event camera.
  • the imaging camera and the event camera each introduce certain distortion in the process of acquiring images, and their fields of view (FOV) may differ.
  • the embodiments of the present disclosure can perform internal parameter estimation on the imaging camera and the event camera, and external parameter estimation between the two cameras.
  • a calibration method can be used to calibrate the imaging camera and the event camera to obtain the internal and external parameters of the two.
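For example, a checkerboard-based calibration along the following lines could recover each camera's internal parameters; this uses standard OpenCV calls, and treating it as the patent's calibration method is an assumption:

```python
import cv2
import numpy as np

def calibrate_intrinsics(images, board_size=(9, 6), square_size=0.02):
    """Estimate a camera's internal parameter matrix K and distortion
    coefficients from checkerboard views (a generic sketch; the patent
    does not specify the calibration target or tooling).
    `images` is assumed to be a non-empty list of BGR checkerboard views."""
    # 3D positions of the checkerboard corners in the board frame
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    objp *= square_size

    obj_points, img_points = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    _, K, dist, _, _ = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    return K, dist
```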
  • based on the internal and external parameters, a mapping relationship between the data of the two cameras can be established, for example, a homography matrix (H matrix), an affine matrix, and so on.
  • the parallax problem between the imaging camera and the event camera can be corrected through the mapping relationship.
  • the formula for mapping through the H matrix can be expressed, in homogeneous coordinates, as P2 = H·P1 with H = K2·R⁻¹·K1⁻¹, where:
  • K 1 represents the internal parameter matrix of the event camera
  • K 2 represents the internal parameter matrix of the imaging camera
  • R represents the rotation matrix from the imaging camera coordinate system to the event camera coordinate system
  • H represents the homography from the event camera coordinate system to the imaging camera coordinate system.
  • P 1 represents the coordinates of a certain pixel of the image acquired by the event camera
  • P 2 represents the coordinates of the corresponding pixel of P 1 on the image acquired by the imaging camera.
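Putting these definitions together, a sketch of the mapping in NumPy; the pure-rotation homography H = K2·R⁻¹·K1⁻¹ is reconstructed from the definitions above and is an assumption, since the patent's formula image is not reproduced in this text:

```python
import numpy as np

def homography_event_to_imaging(K1, K2, R):
    """H maps event-camera pixels to imaging-camera pixels.

    K1: 3x3 internal parameter matrix of the event camera
    K2: 3x3 internal parameter matrix of the imaging camera
    R:  3x3 rotation from the imaging-camera frame to the event-camera frame
    """
    return K2 @ np.linalg.inv(R) @ np.linalg.inv(K1)

def map_pixel(H, p1):
    """Map event-camera pixel p1 = (u, v) into imaging-camera coordinates."""
    q = H @ np.array([p1[0], p1[1], 1.0])   # homogeneous coordinates
    return q[:2] / q[2]                      # dehomogenize
```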
  • the embodiments of the present disclosure can eliminate the problem caused by the time difference by controlling the imaging camera and the event camera to obtain video information and event information at the same time.
  • the embodiments of the present disclosure can eliminate the problem of parallax by establishing a mapping relationship between an imaging camera and an event camera.
  • the embodiments of the present disclosure can eliminate the problem of different fields of view corresponding to the first video information and the event information by performing at least one of cropping, stretching, splicing, and rotation on the first video information and/or event information.
  • the imaging camera may correspond to multiple event cameras, and the field of view formed by the splicing of the multiple event cameras may match the field of view of the imaging camera.
  • the framing range of the image stitched by multiple event cameras is the same as the framing range of the imaging camera, so as to eliminate the problem of different fields of view corresponding to the first video information and the event information.
  • a plurality of gradient information maps are generated based on a plurality of event points, and the gradient information map includes gradient information of each pixel.
  • the first processing, the second processing, and the third processing can be sequentially performed on the multiple event points to obtain the multiple gradient information maps, where the first processing includes one of gradient processing, up-sampling processing, and mapping processing, the second processing includes another one of the three, and the third processing includes the remaining one.
  • the embodiments of the present disclosure do not limit the sequence between gradient processing, up-sampling processing, and mapping processing, and those skilled in the art can set it according to actual needs. For example, gradient processing can be performed on multiple event points first, then up-sampling processing, and finally mapping processing, to obtain a gradient information map.
  • the gradient information obtained after gradient processing can be further modified; for example, the gradient information can be suppressed or attenuated to meet high-dynamic-range requirements.
  • the gradient attenuation can use a Gaussian pyramid, selecting an attenuation function that attenuates large gradients strongly and small gradients weakly, thereby ensuring that the gradient information will not overflow.
  • gradient suppression or attenuation can be performed at various steps according to requirements; for example, after the mapping process, the map can be gradually up-sampled while gradient attenuation is performed at the same time.
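A minimal sketch of such an attenuation function, in the style of gradient-domain dynamic-range compression; the functional form and parameter values are illustrative assumptions, and the Gaussian-pyramid multi-scale variant mentioned above is omitted for brevity:

```python
import numpy as np

def attenuate_gradients(grad, alpha=0.1, beta=0.85):
    """Scale a gradient field so that large gradients are attenuated
    strongly and small gradients weakly, keeping values from overflowing.

    The scale s = (|g| / alpha) ** (beta - 1), with 0 < beta < 1, is
    below 1 for |g| > alpha (strong attenuation of large gradients)
    and above 1 for |g| < alpha (small gradients are barely touched).
    """
    mag = np.abs(grad) + 1e-8               # avoid division by zero
    scale = (mag / alpha) ** (beta - 1.0)
    return grad * scale
```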
  • if the mapping process is performed after the gradient process, the accuracy of image spatial registration can be improved, and performing the up-sampling process as the last of the three steps reduces the amount of data handled during processing, thereby further improving the image processing speed.
  • a plurality of event points may be divided into a plurality of event point sets according to a time sequence, and the plurality of event point sets respectively correspond to different time periods. Then, gradient processing is performed on multiple event point sets to generate multiple initial gradient information maps. Then, based on the mapping relationship between the first video information and the event information, the multiple initial gradient information maps are mapped to obtain multiple mapped gradient information maps. Finally, perform up-sampling processing on multiple mapped gradient information graphs to obtain multiple gradient information graphs, where each gradient information graph corresponds to a set of event points.
  • the event points with a time stamp between t0 and t1 can be divided into the event point set 401.
  • Gradient processing is performed on each event point in the event point set 401, and an initial gradient information map corresponding to the time period t0 to t1 is generated.
  • the initial gradient information map is mapped to obtain the mapped gradient information map.
  • the up-sampling process is performed on the mapped gradient information graph to obtain the gradient information graph.
  • the gradient information graphs corresponding to the time period t1 to t2, the time period t2 to t3, the time period t3 to t4, the time period t4 to t5, etc. can be generated in sequence.
  • the event point set can be divided according to a preset time interval. For example, all event points in each time interval ⁇ t are grouped into one event point set.
  • the event point sets can also be divided according to a preset number of event points; for example, event points are collected in chronological order, and every 1,000 event points form one set.
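Both grouping strategies can be sketched as follows, reusing the EventPoint sketch from earlier; `events` is assumed to be sorted by timestamp, and the names are illustrative:

```python
def split_by_interval(events, dt):
    """Group time-sorted events into sets covering successive time
    intervals of length dt, matching the preset-interval division."""
    sets, current = [], []
    t_end = events[0].t + dt
    for e in events:
        while e.t >= t_end:          # close out elapsed intervals
            sets.append(current)
            current = []
            t_end += dt
        current.append(e)
    sets.append(current)
    return sets

def split_by_count(events, n=1000):
    """Group time-sorted events into sets of n event points each."""
    return [events[i:i + n] for i in range(0, len(events), n)]
```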
  • Kalman filtering may be used to perform gradient processing on each event point set to obtain the initial gradient information map corresponding to the set.
  • the initial gradient information map can be mapped through the mapping relationship between the first video information and the event information obtained in the description of operation S202, so as to obtain the mapped gradient information map and correct the parallax between the gradient information map and the video information.
  • the mapping gradient information map can be up-sampled by a backward mapping method or an interpolation method (for example, a bicubic interpolation method), so that a gradient information map with a higher spatial resolution can be obtained.
  • it can be up-sampled to the same resolution as the first video information.
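For instance, bicubic up-sampling to the video resolution could be done with OpenCV; the use of `cv2.resize` here is an illustrative choice, not specified by the patent:

```python
import cv2

def upsample_gradient_map(grad_map, video_width, video_height):
    """Up-sample a mapped gradient information map to the spatial
    resolution of the first video information using bicubic
    interpolation (cv2.INTER_CUBIC)."""
    return cv2.resize(grad_map, (video_width, video_height),
                      interpolation=cv2.INTER_CUBIC)
```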
  • At least one of cropping, stretching, splicing, and rotation may also be performed on the gradient information map to make the gradient information map consistent with the FOV of the first video information.
  • at least one of cropping, stretching, splicing, and rotation is performed on the first video information, so that the video information is consistent with the FOV of the gradient information map.
  • filtering processing may also be performed on at least one gradient information graph of the multiple gradient information graphs.
  • edge-preserving filtering can be performed on the gradient information map to fine-tune the gradient information map.
  • the edge-preserving filtering may be, for example, bilateral filtering or guided filtering.
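As an illustration, OpenCV's bilateral filter can serve as the edge-preserving step; the parameter values below are placeholders:

```python
import cv2

def edge_preserving_filter(grad_map):
    """Fine-tune a gradient information map with bilateral filtering,
    which smooths noise while preserving edges. The parameters
    (neighborhood diameter, color sigma, space sigma) are illustrative."""
    return cv2.bilateralFilter(grad_map, d=9, sigmaColor=75, sigmaSpace=75)
```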
  • a plurality of gradient information maps are processed based on a plurality of image frames to obtain a plurality of interpolated frame images.
  • consecutive first and second image frames are obtained from the first video information, and the gradient information map that matches the first image frame and the second image frame in time is determined from the multiple gradient information maps as the to-be-processed gradient information map; based on the first image frame and the second image frame, the to-be-processed gradient information map is processed to obtain the interpolated frame image to be inserted between the first image frame and the second image frame.
  • the to-be-processed gradient information maps can be processed through a Poisson equation of the form ∇²I = ∇·G to obtain the interpolated frame image to be inserted between the first image frame and the second image frame, where one to-be-processed gradient information map corresponds to one interpolated frame image, I is the video frame of the imaging camera, and G is the to-be-processed gradient information map.
  • the initial condition of the Poisson equation can be set according to the interpolation frame image or the image frame adjacent to the gradient information map to be processed.
  • 0 may represent the first frame of two adjacent frames of the imaging camera
  • N may represent the second frame of two adjacent frames of the imaging camera. That is, the first image frame and the second image frame of the first video corresponding in time to the gradient information graph to be processed are used as initial conditions for processing the Poisson equation of the gradient information graph to be processed.
  • the initial conditions may be both the first image frame and the second image frame of the first video.
  • 0 may represent the adjacent interpolated frame image generated at the previous moment
  • N may represent the second frame of the two adjacent frames of the imaging camera. That is, the interpolation frame image of the previous frame and the second image frame of the first video of the next frame adjacent to the gradient information graph to be processed are used as initial conditions for processing the Poisson equation of the gradient information graph to be processed.
  • for example, the initial conditions may be the interpolated frame image corresponding to the event point set 401 and the second image frame of the first video.
  • the embodiment of the present disclosure does not limit the processing method of the gradient information graph.
  • the embodiment of the present disclosure only needs to process the gradient information map through the first video information so that the processed gradient information map conforms to the brightness information of the scene and a qualified interpolated frame image can be generated; those skilled in the art can set the processing method according to actual needs.
  • the above-mentioned partial differential equation discretizes into a sparse linear system, for which multiple solution methods exist: the system of equations can be solved directly, or the equation can be transformed and an approximately optimal solution obtained using an iterative optimization method.
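A sketch of the direct sparse solve, assuming the discretized Poisson equation ∇²I = ∇·G with an adjacent frame pinned as a Dirichlet boundary condition; the exact discretization and boundary handling in the patent may differ:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def poisson_reconstruct(gx, gy, boundary):
    """Reconstruct an image I from a target gradient field (gx, gy) by
    solving lap(I) = div(gx, gy), with boundary values taken from
    `boundary` (e.g. an adjacent video frame). Direct sparse solve;
    iterative solvers would work as well."""
    h, w = boundary.shape
    # divergence of the target gradient field (backward differences)
    div = np.zeros((h, w))
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[1:, :] += gy[1:, :] - gy[:-1, :]

    idx = lambda y, x: y * w + x
    A = lil_matrix((h * w, h * w))
    b = np.zeros(h * w)
    for y in range(h):
        for x in range(w):
            i = idx(y, x)
            if y in (0, h - 1) or x in (0, w - 1):
                A[i, i] = 1.0            # pin boundary pixels
                b[i] = boundary[y, x]
            else:
                A[i, i] = -4.0           # 5-point Laplacian stencil
                for j in (idx(y - 1, x), idx(y + 1, x),
                          idx(y, x - 1), idx(y, x + 1)):
                    A[i, j] = 1.0
                b[i] = div[y, x]
    return spsolve(A.tocsr(), b).reshape(h, w)
```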
  • the first video information is frame-inserted based on the plurality of frame-insertion images to obtain second video information, wherein the frame rate of the second video information is higher than the frame rate of the first video information.
  • each interpolated frame image can be inserted into the first video based on the time period information corresponding to each interpolated frame image.
  • the interpolated images corresponding to the event point sets 401, 402, 403, 404, and 405 can be inserted in sequence between the first image frame and the second image frame of the first video.
  • the second video obtained after the frame insertion processing has a higher frame rate, and is suitable for high-speed motion scenes.
  • the second video information may include slow motion video information, which is used to play slow motion information in a high-speed motion scene.
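A minimal sketch of the frame-insertion step itself, assuming both the original frames and the interpolated frames carry timestamps; the (timestamp, image) data layout is an assumption:

```python
def insert_frames(video_frames, interpolated_frames):
    """Merge the original image frames and the interpolated frame images
    into a single sequence ordered by timestamp, yielding the second,
    higher-frame-rate video. Each element is a (timestamp, image) pair."""
    return sorted(video_frames + interpolated_frames, key=lambda pair: pair[0])
```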
  • the event camera in the embodiment of the present disclosure only compares the brightness changes of pixels, which greatly shortens the pixel response time (for example, the pixel delay can be shortened to 1 us); its output signal is a temporally dense stream of event points, which is suitable for recording fast motion and forming a high-frame-rate video stream.
  • the event information, whether collected by the event camera, pre-stored, or virtually generated, can be used to generate interpolated frame images for frame insertion, which are then inserted into the first video, so as to obtain high-resolution video information with a higher frame rate that truly reflects the actual state of motion in high-speed scenes and avoids blur and discontinuity in high-speed dynamic scenes.
  • FIG. 5 schematically shows a block diagram of an information processing apparatus 500 according to an embodiment of the present disclosure.
  • the information processing device 500 may include a first obtaining module 510, a second obtaining module 520, a generating module 530, a first processing module 540, and a frame inserting module 550.
  • the information processing apparatus 500 may be placed in a flight device, for example.
  • the information processing apparatus 500 may be used, for example, to control the navigation system of the flight device.
  • for example, the information processing apparatus 500 may control the navigation system of a drone to track a target object, and so on.
  • the first obtaining module 510 is configured to obtain first video information, where the first video information includes a plurality of image frames.
  • the second obtaining module 520 is configured to obtain event information.
  • the event information includes a plurality of event points, and each event point of the plurality of event points includes coordinate information, time information, and information about a pixel corresponding to the coordinate information. Brightness information.
  • the generating module 530 is configured to generate multiple gradient information maps based on the multiple event points, and the gradient information maps include gradient information of each pixel.
  • the first processing module 540 is configured to process the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images.
  • the frame insertion module 550 is configured to perform frame insertion on the first video information based on the plurality of interpolated frame images to obtain second video information, wherein the frame rate of the second video information is higher than the frame rate of the first video information.
  • the first processing module 540 is further configured to: obtain consecutive first and second image frames in the first video information; determine, from the plurality of gradient information maps, the to-be-processed gradient information map that matches the first image frame and the second image frame in time; and, based on the first image frame and the second image frame, process the to-be-processed gradient information map to obtain an interpolated frame image between the first image frame and the second image frame.
  • processing the gradient information graph to be processed includes: processing the gradient information graph to be processed through a Poisson equation to obtain a corresponding interpolated frame image.
  • one gradient information map to be processed corresponds to one interpolated frame image.
  • the processing of the gradient information graph to be processed through the Poisson equation includes: setting the initial conditions of the Poisson equation according to an interpolated image or image frame adjacent to the gradient information graph to be processed .
  • the generating module 530 is further configured to: divide the multiple event points into multiple event point sets in chronological order, the multiple event point sets corresponding to different time periods, and generate the multiple gradient information maps based on the multiple event point sets, where each gradient information map corresponds to one event point set.
  • generating the multiple gradient information maps based on the multiple event point sets includes: performing gradient processing on the multiple event point sets respectively to generate multiple initial gradient information maps; performing mapping processing on the multiple initial gradient information maps based on the mapping relationship between the first video information and the event information to obtain multiple mapped gradient information maps; and performing up-sampling processing on the multiple mapped gradient information maps to obtain the multiple gradient information maps.
  • performing up-sampling processing on the multiple mapped gradient information maps includes: performing up-sampling processing on the multiple mapped gradient information maps through a backward mapping method, or through an interpolation method.
  • generating multiple gradient information maps based on the multiple event points includes: sequentially performing first processing, second processing, and third processing on the multiple event points to obtain the multiple gradient information maps, wherein the first processing includes one of gradient processing, up-sampling processing, and mapping processing, the second processing includes another one of the three, and the third processing includes the remaining one.
  • the device 500 further includes: a filtering module, configured to perform filtering processing on at least one gradient information graph of the plurality of gradient information graphs, wherein the filtering processing includes edge-preserving filtering processing.
  • obtaining the first video information includes obtaining the first video information through an imaging device.
  • obtaining event information includes obtaining the event information through an event camera.
  • the event camera can be used to control the navigation system of the flight device, and the imaging camera and the event camera are placed in the flight device.
  • the device 500 further includes: a control module for controlling the imaging camera and the event camera to synchronously acquire the first video information and the event information.
  • the device 500 further includes: a determining module configured to determine the mapping relationship between the video information and the event information based on the parameters of the imaging camera and the parameters of the event camera.
  • the device 500 further includes: a second processing module, configured to perform at least one of cropping, stretching, splicing, and rotation on the first video information and/or the event information when the fields of view corresponding to the first video information and the event information are different.
  • the imaging device corresponds to a plurality of event cameras, and the field of view formed by the splicing of the plurality of event cameras matches the field of view of the imaging device.
  • the second video information includes slow motion video information.
  • the first video information includes color video or grayscale video.
  • the brightness information includes brightness change information.
  • the brightness change includes the brightness remains unchanged, the brightness becomes higher, or the brightness becomes lower.
  • the brightness change information corresponding to the brightness remaining unchanged is 0, the brightness change information corresponding to the brightness increasing is 1, and the brightness change information corresponding to the brightness decreasing is -1.
  • the apparatus 500 may, for example, execute the method described above with reference to FIG. 2, which will not be repeated here.
  • any number of the modules, sub-modules, units, and sub-units, or at least part of the functions of any number of them may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be split into multiple modules for implementation.
  • any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be at least partially implemented as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system-on-chip, a system-on-substrate, a system-in-package, or an application-specific integrated circuit (ASIC), or by hardware or firmware in any other reasonable way of integrating or packaging a circuit, or by any one of, or an appropriate combination of, the three implementation modes of software, hardware, and firmware.
  • one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be at least partially implemented as a computer program module, and when the computer program module is executed, the corresponding function may be performed.
  • any of the first obtaining module 510, the second obtaining module 520, the generating module 530, the first processing module 540, and the frame insertion module 550 can be combined into one module/unit/subunit for implementation, or any one module/unit/subunit can be split into multiple modules/units/subunits.
  • at least part of the functions of one or more modules/units/subunits of these modules/units/subunits can be combined with at least part of the functions of other modules/units/subunits, and integrated in one module/unit/subunit In the realization.
  • at least one of the first obtaining module 510, the second obtaining module 520, the generating module 530, the first processing module 540, and the frame insertion module 550 may be at least partially implemented as a hardware circuit, for example, a field programmable gate array (FPGA), a programmable logic array (PLA), a system-on-chip, a system-on-substrate, a system-in-package, or an application-specific integrated circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable way of integrating or packaging a circuit, or by any one of, or an appropriate combination of, the three implementation modes of software, hardware, and firmware.
  • at least one of the first obtaining module 510, the second obtaining module 520, the generating module 530, the first processing module 540, and the frame insertion module 550 may be at least partially implemented as a computer program module, and when the computer program module is run, the corresponding function can be performed.
  • FIG. 6 schematically shows a block diagram of an imaging system 600 according to an embodiment of the present disclosure.
  • the imaging system 600 may include an imaging device 610, an event camera 620, and a processor 630.
  • the imaging system 600 may be placed in a flight device, for example.
  • the imaging system 600 may be used, for example, to control the navigation system of the flight device.
  • for example, the imaging system 600 can control the navigation system of a drone to track a target object, and so on.
  • the imaging device 610 is used to obtain first video information.
  • the imaging device 610 may be, for example, a device that uses the principle of optical imaging to form an image and uses a film to record the image.
  • the imaging device 610 may be various imaging cameras equipped with an image sensor.
  • An event camera (Event Camera) 620 is used to obtain event information.
  • the event camera 620 may be, for example, a device that outputs an asynchronous signal by measuring the brightness change of each pixel.
  • the event camera 620 may be various event cameras equipped with dynamic vision sensors, including but not limited to DVS (Dynamic Vision Sensor), ATIS (Asynchronous Time Based Image Sensor), DAVIS (Dynamic and Active Pixel Vision Sensor), etc.
  • the processor 630 is configured to perform the following operations: obtain first video information, where the first video information includes multiple image frames; obtain event information, where the event information includes multiple event points, and each of the multiple event points includes coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information; generate multiple gradient information maps based on the multiple event points, the gradient information maps including the gradient information of each pixel; process the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images; and interpolate frames into the first video information based on the multiple interpolated frame images to obtain second video information, wherein the frame rate of the second video information is higher than the frame rate of the first video information.
  • the processor 630 is connected to the imaging device 610 and the event camera 620, and can receive video information and event information from the imaging device 610 and the event camera 620.
  • the processor 630 may be placed in the imaging device 610 or the event camera 620. Alternatively, the processor 630 may also be placed in other devices besides the imaging device 610 and the event camera 620.
  • processing the multiple gradient information maps to obtain multiple interpolated frame images includes: obtaining consecutive first and second image frames in the first video information; determining, from the plurality of gradient information maps, the to-be-processed gradient information map that matches the first image frame and the second image frame in time; and, based on the first image frame and the second image frame, processing the to-be-processed gradient information map to obtain an interpolated frame image to be inserted between the first image frame and the second image frame.
  • processing the gradient information graph to be processed includes: processing the gradient information graph to be processed through a Poisson equation to obtain a corresponding interpolated frame image.
  • one gradient information map to be processed corresponds to one interpolated frame image.
  • processing the gradient information graph to be processed through the Poisson equation includes: setting the initial conditions of the Poisson equation according to an interpolated image or image frame adjacent to the gradient information graph to be processed.
  • generating multiple gradient information maps based on the multiple event points includes: dividing the multiple event points into multiple event point sets in chronological order, where the multiple event point sets correspond to different time periods, and generating the multiple gradient information maps based on the multiple event point sets, where each gradient information map corresponds to one event point set.
  • generating the multiple gradient information maps based on the multiple event point sets includes: performing gradient processing on the multiple event point sets respectively to generate multiple initial gradient information maps; performing mapping processing on the multiple initial gradient information maps based on the mapping relationship between the first video information and the event information to obtain multiple mapped gradient information maps; and performing up-sampling processing on the multiple mapped gradient information maps to obtain the multiple gradient information maps.
  • performing up-sampling processing on the multiple mapped gradient information maps includes: performing up-sampling processing on the multiple mapped gradient information maps through a backward mapping method, or through an interpolation method.
  • generating multiple gradient information maps based on the multiple event points includes: sequentially performing first processing, second processing, and third processing on the multiple event points to obtain the multiple gradient information maps, wherein the first processing includes one of gradient processing, up-sampling processing, and mapping processing, the second processing includes another one of the three, and the third processing includes the remaining one.
  • the processor 630 is further configured to: perform filtering processing on at least one gradient information graph of the multiple gradient information graphs, where the filtering processing includes edge-preserving filtering processing.
  • obtaining the first video information includes obtaining the first video information through an imaging device.
  • obtaining event information includes obtaining the event information through an event camera.
  • the processor 630 is further configured to: control the imaging camera and the event camera to synchronously acquire the first video information and the event information.
  • the processor 630 is further configured to determine a mapping relationship between the video information and the event information based on the parameters of the imaging camera and the parameters of the event camera.
  • the processor 630 is further configured to: perform at least one of cropping, stretching, splicing, and rotation on the first video information and/or the event information when the fields of view corresponding to the first video information and the event information are different.
  • the imaging device corresponds to a plurality of event cameras, and the field of view formed by the splicing of the plurality of event cameras matches the field of view of the imaging device.
  • the second video information includes slow motion video information.
  • the first video information includes color video or grayscale video.
  • the brightness information includes brightness change information.
  • the brightness change includes that the brightness remains unchanged, the brightness becomes higher, or the brightness becomes lower.
  • the brightness change information corresponding to the brightness remaining unchanged is 0, the brightness change information corresponding to the brightness increasing is 1, and the brightness change information corresponding to the brightness decreasing is -1.
  • the event information collected by the event camera can be used to generate interpolated frame images for frame insertion, which are then inserted into the first video obtained by the imaging device, so that high-resolution video information with a higher frame rate can be obtained, truly reflecting the actual state of motion in high-speed scenes and avoiding blur and discontinuity in high-speed dynamic scenes.
  • FIG. 7 schematically shows a block diagram of a computer system 700 according to an embodiment of the present disclosure.
  • the computer system shown in FIG. 7 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • an electronic device 700 includes a processor 701, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage part 708 into a random access memory (RAM) 703.
  • the processor 701 may include, for example, a general-purpose microprocessor (for example, a CPU), an instruction set processor and/or a related chipset and/or a special-purpose microprocessor (for example, an application specific integrated circuit (ASIC)), and so on.
  • the processor 701 may also include on-board memory for caching purposes.
  • the processor 701 may include a single processing unit or multiple processing units for performing different actions of a method flow according to an embodiment of the present disclosure.
  • the processor 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704.
  • the processor 701 executes various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 702 and/or RAM 703. It should be noted that the program can also be stored in one or more memories other than ROM 702 and RAM 703.
  • the processor 701 may also execute various operations of the method flow according to the embodiments of the present disclosure by executing programs stored in the one or more memories.
  • the system 700 may further include an input/output (I/O) interface 705, and the input/output (I/O) interface 705 is also connected to the bus 704.
  • the system 700 may also include one or more of the following components connected to the I/O interface 705: an input part 706 including a keyboard, a mouse, etc.; an output part 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a speaker; a storage part 708 including a hard disk and the like; and a communication part 709 including a network interface card such as a LAN card, a modem, and the like.
  • the communication section 709 performs communication processing via a network such as the Internet.
  • the drive 710 is also connected to the I/O interface 705 as needed.
  • a removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is installed on the drive 710 as needed, so that the computer program read therefrom is installed into the storage portion 708 as needed.
  • the method flow according to the embodiment of the present disclosure may be implemented as a computer software program.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable storage medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication part 709, and/or installed from the removable medium 711.
  • the computer program executes the above-mentioned functions defined in the system of the embodiment of the present disclosure.
  • the systems, devices, devices, modules, units, etc. described above may be implemented by computer program modules.
  • the present disclosure also provides a computer-readable storage medium.
  • the computer-readable storage medium may be included in the device/apparatus/system described in the above embodiments, or it may exist alone without being assembled into the device/apparatus/system.
  • the aforementioned computer-readable storage medium carries one or more programs, and when the aforementioned one or more programs are executed, the method according to the embodiments of the present disclosure is implemented.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium.
  • it may include, but is not limited to: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable storage medium may include one or more memories other than ROM 702 and/or RAM 703 and/or ROM 702 and RAM 703 described above.
  • each block in the flowchart or block diagram may represent a module, program segment, or part of code, and the above-mentioned module, program segment, or part of code contains one or more executable instructions for realizing the specified logic function. It should also be noted that the functions marked in the blocks may occur in an order different from that marked in the drawings; for example, two blocks shown in succession can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
  • each block in the block diagram or flowchart, and any combination of blocks in the block diagram or flowchart, can be implemented by a dedicated hardware-based system that performs the specified function or operation, or by a combination of dedicated hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

An information processing method, including: obtaining first video information, where the first video information includes multiple image frames; obtaining event information, where the event information includes multiple event points, and each of the multiple event points includes coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information; generating multiple gradient information maps based on the multiple event points, where the gradient information maps include the gradient information of each pixel; processing the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images; and interpolating frames into the first video information based on the multiple interpolated frame images to obtain second video information, where the frame rate of the second video information is higher than the frame rate of the first video information.

Description

Information processing method, device, and imaging system

Technical Field

The present disclosure relates to the field of image processing, and in particular to an information processing method, device, and imaging system.

Background

With the vigorous development of digital and network technology, cameras have become an important tool and means of recording everyday life. However, the discrete sampling of a traditional imaging camera requires steps such as exposure-time integration, analog-to-digital conversion, and data readout, which limits the frame rate; blur and discontinuity appear in high-speed dynamic scenes, and it is difficult to meet the demand for high frame rates in high-speed motion scenes.

Summary
In view of this, embodiments of the present disclosure provide an information processing method, device, and imaging system, which can obtain video information through an imaging device, obtain event information through an event camera, and obtain multiple interpolated frame images based on the video information and the event information, so that frames can be interpolated into the video information based on the multiple interpolated frame images to generate video information with a high frame rate.

In a first aspect, embodiments of the present disclosure provide an information processing method, including: obtaining first video information, where the first video information includes multiple image frames; obtaining event information, where the event information includes multiple event points, and each of the multiple event points includes coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information; generating multiple gradient information maps based on the multiple event points, where the gradient information maps include the gradient information of each pixel; processing the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images; and interpolating frames into the first video information based on the multiple interpolated frame images to obtain second video information, where the frame rate of the second video information is higher than the frame rate of the first video information.

In a second aspect, embodiments of the present disclosure provide an information processing device, which includes a first obtaining module, a second obtaining module, a generating module, a first processing module, and a frame insertion module. The first obtaining module is used to obtain first video information, where the first video information includes multiple image frames. The second obtaining module is used to obtain event information, where the event information includes multiple event points, and each of the multiple event points includes coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information. The generating module is used to generate multiple gradient information maps based on the multiple event points, where the gradient information maps include the gradient information of each pixel. The first processing module is used to process the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images. The frame insertion module is used to interpolate frames into the first video information based on the multiple interpolated frame images to obtain second video information, where the frame rate of the second video information is higher than the frame rate of the first video information.

In a third aspect, embodiments of the present disclosure provide an imaging system, including an imaging device, an event camera, and a processor. The imaging device is used to obtain first video information. The event camera is used to obtain event information. The processor is configured to perform the following operations: obtain first video information, where the first video information includes multiple image frames; obtain event information, where the event information includes multiple event points, and each of the multiple event points includes coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information; generate multiple gradient information maps based on the multiple event points, where the gradient information maps include the gradient information of each pixel; process the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images; and interpolate frames into the first video information based on the multiple interpolated frame images to obtain second video information, where the frame rate of the second video information is higher than the frame rate of the first video information.

In a fourth aspect, embodiments of the present disclosure provide a computer system, including: one or more processors; and a computer-readable storage medium for storing one or more programs, where, when the one or more programs are executed by the one or more processors, the one or more processors implement the method described above.

In a fifth aspect, embodiments of the present disclosure provide a computer-readable storage medium having executable instructions stored thereon, where the instructions, when executed by a processor, cause the processor to implement the method described above.

In a sixth aspect, embodiments of the present disclosure provide a computer program product, including computer-readable instructions, where the computer-readable instructions, when executed, are used to perform the method described above.
Brief Description of the Drawings
The accompanying drawings are provided for a further understanding of the present disclosure and constitute a part of the specification; together with the following detailed description, they serve to explain the present disclosure but do not limit it. In the drawings:
Fig. 1 schematically shows an exemplary system architecture to which the information processing method of an embodiment of the present disclosure can be applied;
Fig. 2 schematically shows a flowchart of the information processing method of an embodiment of the present disclosure;
Fig. 3 schematically shows a schematic diagram of an image acquisition device of an embodiment of the present disclosure;
Fig. 4 schematically shows a schematic diagram of event point sets of an embodiment of the present disclosure;
Fig. 5 schematically shows a block diagram of an information processing device of an embodiment of the present disclosure;
Fig. 6 schematically shows a block diagram of an imaging system of an embodiment of the present disclosure; and
Fig. 7 schematically shows a block diagram of a computer system of an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to specific embodiments and the accompanying drawings.
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present disclosure.
The present disclosure provides an information processing method that includes obtaining first video information through an imaging device, where the first video information may include multiple image frames. Event information is obtained through an event camera; the event information may include multiple event points, and each event point may include coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information. Embodiments of the present disclosure may generate multiple gradient information maps based on the multiple event points of the event information, where the gradient information maps include gradient information of each pixel, and may process the generated gradient information maps based on the multiple image frames of the first video information to obtain multiple interpolated frame images. Frame interpolation may then be performed on the first video information according to the multiple interpolated frame images to obtain second video information, so that video information with a high frame rate can be obtained.
Fig. 1 schematically shows an exemplary system architecture to which the information processing method of an embodiment of the present disclosure can be applied.
It should be noted that Fig. 1 is merely an example of a system architecture to which embodiments of the present disclosure can be applied, intended to help those skilled in the art understand the technical content of the present disclosure; it does not mean that embodiments of the present disclosure cannot be used in other devices, systems, environments, or scenarios.
As shown in Fig. 1, the system architecture 100 according to this embodiment may include an imaging device 101, an event camera 102, a network 103, and a server 104. The network 103 is a medium used to provide a communication link between the imaging device 101 and the server 104, or between the event camera 102 and the server 104. The network 103 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
The imaging device 101 may be, for example, a device that forms an image using the principle of optical imaging and records the image on a light-sensitive medium. For example, the imaging device 101 may be any of various imaging cameras equipped with an image sensor.
The event camera 102 may be, for example, a device that outputs asynchronous signals by measuring the brightness change of each pixel. For example, the event camera 102 may be any of various event cameras equipped with a dynamic vision sensor, including but not limited to a DVS (Dynamic Vision Sensor), ATIS (Asynchronous Time Based Image Sensor), DAVIS (Dynamic and Active Pixel Vision Sensor), and the like.
The server 104 may be a server providing various services; for example, it may analyze and process the received video information and event information and feed the processing results back to the user.
It should be noted that the information processing method provided by embodiments of the present disclosure may generally be executed by the server 104. Accordingly, the information processing device provided by embodiments of the present disclosure may generally be arranged in the server 104. The information processing method provided by embodiments of the present disclosure may also be executed by a server or server cluster that is different from the server 104 and capable of communicating with the imaging device 101, the event camera 102, and/or the server 104. Accordingly, the information processing device provided by embodiments of the present disclosure may also be arranged in such a server or server cluster. Alternatively, the information processing method provided by embodiments of the present disclosure may be executed by the imaging device 101 or the event camera 102, or by another terminal device (for example, a user terminal) different from the imaging device 101 and the event camera 102.
For example, the imaging device 101 may be used to obtain video information, and the event camera 102 may be used to obtain event information. The server 104 may obtain the video information from the imaging device 101 through the network 103 and obtain the event information from the event camera 102 through the network 103. The server 104 may obtain multiple interpolated frame images based on the obtained video information and event information, and perform frame interpolation on the video information according to the obtained interpolated frame images, thereby obtaining video information with a higher frame rate.
Fig. 2 schematically shows a flowchart of the information processing method of an embodiment of the present disclosure.
As shown in Fig. 2, the method includes operations S201 to S205.
In operation S201, first video information is obtained, the first video information including multiple image frames.
According to an embodiment of the present disclosure, the first video information may be captured by an imaging device. The imaging device may, for example, capture and record the first video information based on the principle of optical imaging. For example, the first video information may be captured by a digital camera, a single-lens reflex camera, or any of various electronic devices with a camera. The first video information may also include video information stored in a device such as a non-volatile storage device, or virtually generated video information, for example, video information generated by artificial intelligence technology; no limitation is imposed here.
In an embodiment of the present disclosure, the first video information may be, for example, a video sequence composed of multiple consecutive image frames; the first video information may, for example, reflect the brightness information of a scene over a period of time.
According to an embodiment of the present disclosure, the first video information may include color video or grayscale video; embodiments of the present disclosure do not limit the format or color channels of the first video information, and those skilled in the art may set them according to actual needs.
In operation S202, event information is obtained, the event information including multiple event points, each of the multiple event points including coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information.
According to an embodiment of the present disclosure, the event information may be obtained by an event camera. The event camera may output a series of event sequences, for example, by independently collecting the brightness information of several points in the scene, or intensity information such as the brightness of each pixel. For example, the event information may be acquired using any of various event cameras with a dynamic vision sensor, such as a DVS, ATIS, or DAVIS.
In an embodiment of the present disclosure, the event information may be, for example, an event sequence composed of multiple event points; the event information may, for example, reflect the brightness change information of the current scene. Each event point may include coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information. The brightness information may be, for example, brightness change information, where a brightness change includes the brightness remaining unchanged, becoming higher, or becoming lower. For example, the brightness change information corresponding to unchanged brightness is 0, the brightness change information corresponding to increased brightness is 1, and the brightness change information corresponding to decreased brightness is -1. By normalizing the brightness information in this way, a limited amount of data can represent the brightness information of various scenes, which reduces the information-flow impact when the scene brightness changes drastically and also reduces the resource requirements for data processing, transmission, and storage.
For example, each event may contain the features (x, y, t, p), where x and y represent spatial position information (for example, the coordinate information of a pixel), t represents the timestamp at which the event was triggered, and p represents the data polarity (for example, 0 means the brightness of the pixel has not changed, 1 means the brightness of the pixel has increased, and -1 means the brightness of the pixel has decreased). This greatly shortens the pixel response time, facilitating the formation of a high-frame-rate video stream.
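For illustration only, the following minimal Python sketch (not part of the original disclosure; the NumPy structured-array layout and field names are assumptions) shows one way such (x, y, t, p) event points with polarity in {-1, 0, 1} can be stored and filtered:

```python
import numpy as np

# A structured dtype for event points: pixel coordinates (x, y),
# trigger timestamp t (e.g. in microseconds), and polarity p in {-1, 0, 1}.
event_dtype = np.dtype([("x", np.uint16),
                        ("y", np.uint16),
                        ("t", np.int64),
                        ("p", np.int8)])

def make_events(xs, ys, ts, ps):
    """Pack parallel lists of coordinates, timestamps, and polarities
    into a single event array sorted by timestamp."""
    events = np.zeros(len(xs), dtype=event_dtype)
    events["x"], events["y"], events["t"], events["p"] = xs, ys, ts, ps
    return np.sort(events, order="t")

# Example: three events; only the ones with an actual brightness change
# (p != 0) contribute to the gradient maps built later.
events = make_events([10, 11, 10], [5, 5, 6], [100, 150, 220], [1, -1, 1])
changed = events[events["p"] != 0]
print(changed)
```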
According to an embodiment of the present disclosure, if an imaging camera is used to acquire the first video information and an event camera is used to obtain the event information, the light-sensitive element of the imaging camera and that of the event camera may be placed as close together as possible, so as to avoid an excessive difference, caused by incident light, parallax, and other factors, between the brightness of the first video information acquired by the imaging camera and the brightness of the event information acquired by the event camera.
For example, Fig. 3 schematically shows a schematic diagram of an image acquisition device 300 of an embodiment of the present disclosure. As shown in Fig. 3, the image acquisition device 300 may include an imaging camera 310 and an event camera 320. The imaging camera 310 and the event camera 320 may be detachably fixed in the image acquisition device 300 by a fixing device, as close to each other as possible.
According to an embodiment of the present disclosure, the imaging camera 310 and the event camera 320 may be controlled to acquire the first video information and the event information substantially synchronously, so as to eliminate problems caused by a time difference, such as the brightness difference of the same scene at different times. Alternatively, the imaging camera 310 and the event camera 320 may be controlled separately to acquire the first video information and the event information, and the time difference and/or brightness difference may then be corrected by an algorithm.
According to an embodiment of the present disclosure, the mapping relationship between the first video information and the event information may also be determined based on the parameters of the imaging camera and the parameters of the event camera.
It can be understood that the imaging camera and the event camera themselves introduce a certain amount of distortion in the image acquisition process. For example, as shown in Fig. 3, there is a certain parallax and a field-of-view (FOV) deviation between the imaging camera 310 and the event camera 320, so that the video content acquired by the two cameras is not registered.
Embodiments of the present disclosure may perform intrinsic parameter estimation for the imaging camera and the event camera, as well as extrinsic parameter estimation between the two cameras. For example, a calibration method may be used to calibrate the imaging camera and the event camera to obtain the intrinsic and extrinsic parameters of both. A mapping relationship is then established between the data of the two cameras, for example, a homography matrix (H matrix) or an affine matrix. The parallax problem between the imaging camera and the event camera can be corrected through the mapping relationship. Taking the H matrix as an example, the mapping formula of the H matrix can be expressed as:
Figure PCTCN2020096194-appb-000001
P₂ = H·P₁
where K₁ denotes the intrinsic matrix of the event camera, K₂ denotes the intrinsic matrix of the imaging camera, R denotes the rotation matrix from the imaging camera coordinate system to the event camera coordinate system, H denotes the homography matrix from the event camera coordinate system to the imaging camera coordinate system, P₁ denotes the coordinates of a pixel in the image acquired by the event camera, and P₂ denotes the coordinates of the pixel corresponding to P₁ in the image acquired by the imaging camera.
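The formula image referenced above is not reproduced in this text. For a rotation-only camera pair, one composition consistent with the definitions given here is H = K₂·R⁻¹·K₁⁻¹; treating that as an assumption rather than the disclosed formula, a minimal NumPy sketch of applying P₂ = H·P₁ to event pixel coordinates is:

```python
import numpy as np

def warp_event_pixel(p1_xy, K1, K2, R):
    """Map a pixel from the event camera image to the imaging camera image
    using a rotation-only homography H = K2 @ R^-1 @ K1^-1.

    K1, K2 : 3x3 intrinsic matrices of the event / imaging camera.
    R      : 3x3 rotation from the imaging camera frame to the event camera
             frame (as defined in the text), hence inverted (transposed) here.
    """
    H = K2 @ R.T @ np.linalg.inv(K1)          # R.T == R^-1 for rotations
    p1 = np.array([p1_xy[0], p1_xy[1], 1.0])  # homogeneous coordinates
    p2 = H @ p1
    return p2[:2] / p2[2]                     # back to pixel coordinates

# Example with identity rotation and simple pinhole intrinsics.
K1 = np.array([[200.0, 0, 64], [0, 200.0, 64], [0, 0, 1]])
K2 = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
print(warp_event_pixel((10, 5), K1, K2, np.eye(3)))
```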
By acquiring video information with an imaging camera, embodiments of the present disclosure can obtain video information with a relatively high resolution; by acquiring event information with an event camera, an event sequence with a relatively high frame rate can be obtained.
Embodiments of the present disclosure can eliminate the problems caused by a time difference by controlling the imaging camera and the event camera to acquire the video information and the event information simultaneously.
Embodiments of the present disclosure can eliminate the parallax problem by establishing the mapping relationship between the imaging camera and the event camera.
Embodiments of the present disclosure can eliminate the problem of the first video information and the event information corresponding to different fields of view by performing at least one of cropping, stretching, stitching, and rotation on the first video information and/or the event information.
In an embodiment of the present disclosure, the imaging camera may correspond to multiple event cameras, and the field of view formed by stitching the multiple event cameras may match the field of view of the imaging camera. For example, the viewing range of the stitched images of the multiple event cameras is the same as the viewing range of the imaging camera, so as to eliminate the problem of the first video information and the event information corresponding to different fields of view.
In operation S203, multiple gradient information maps are generated based on the multiple event points, the gradient information maps including gradient information of each pixel.
According to an embodiment of the present disclosure, the multiple event points may be subjected to first processing, second processing, and third processing in sequence to obtain the multiple gradient information maps, where the first processing includes one of gradient processing, upsampling processing, and mapping processing, the second processing includes another of gradient processing, upsampling processing, and mapping processing, and the third processing includes yet another of gradient processing, upsampling processing, and mapping processing. Embodiments of the present disclosure do not limit the order of gradient processing, upsampling processing, and mapping processing; those skilled in the art may set it according to actual needs. For example, the multiple event points may first undergo gradient processing, then upsampling processing, and finally mapping processing to obtain the gradient information maps. As another example, the multiple event points may first undergo gradient processing, then mapping processing, and finally upsampling processing to obtain the gradient information maps. As yet another example, the multiple event points may undergo upsampling processing, then mapping processing, and finally gradient processing to obtain the gradient information maps, and so on.
Of course, further transformations can be applied to the gradient information after gradient processing, for example suppressing or attenuating the gradients so that they meet high-dynamic-range requirements. Gradient attenuation may use a Gaussian pyramid; it suffices to choose an attenuation function that attenuates large gradients strongly and small gradients very little. This ensures that the gradient information does not overflow. In addition, gradient suppression or attenuation can be performed at whichever step is required. For example, after the mapping processing, the data may be upsampled step by step while gradient attenuation is performed; although the amount of data processed this way is larger, more gradient data is retained with higher accuracy. Alternatively, gradient attenuation may be performed before or concurrently with the mapping processing, which further reduces the amount of data to be transmitted and processed, thereby increasing the image processing speed and meeting the demand for higher frame rates.
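The disclosure specifies only the qualitative property of the attenuation function (large gradients attenuated strongly, small ones barely). Purely as one function with that property, the sketch below uses a Fattal-style power law; the particular form and parameter values are assumptions, not the disclosed method:

```python
import numpy as np

def attenuate_gradients(grad, alpha=0.1, beta=0.8, eps=1e-6):
    """Scale a gradient field so large gradients shrink strongly while small
    gradients change very little. The power-law factor (|g|/alpha)^(beta-1)
    is an illustrative Fattal-style choice, not the disclosure's formula.

    grad : array of shape (H, W, 2) holding (gx, gy) per pixel.
    """
    mag = np.linalg.norm(grad, axis=-1, keepdims=True)
    scale = np.where(mag > eps, (mag / alpha) ** (beta - 1.0), 1.0)
    return grad * scale

# Gradients far above alpha are compressed (beta < 1); gradients near or
# below alpha are left almost unchanged (or boosted very slightly).
g = np.zeros((4, 4, 2)); g[2, 2] = (5.0, 0.0); g[1, 1] = (0.05, 0.0)
print(attenuate_gradients(g)[2, 2], attenuate_gradients(g)[1, 1])
```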
The embodiment of "first performing gradient processing on the multiple event points, then mapping processing, and finally upsampling processing to obtain the gradient information maps" is described in detail below. Performing the mapping processing after the gradient processing can improve the accuracy of image spatial registration, and performing the upsampling processing as the last of the three steps can reduce the amount of data in the pipeline, thereby further increasing the image processing speed.
According to an embodiment of the present disclosure, the multiple event points may first be divided in chronological order into multiple event point sets, the multiple event point sets corresponding to different time periods. Gradient processing is then performed on each of the multiple event point sets to generate multiple initial gradient information maps. Based on the mapping relationship between the first video information and the event information, mapping processing is performed on the multiple initial gradient information maps to obtain multiple mapped gradient information maps. Finally, upsampling processing is performed on the multiple mapped gradient information maps to obtain the multiple gradient information maps, where each gradient information map corresponds to one event point set.
For example, as shown in Fig. 4, the event points whose timestamps lie between t0 and t1 may be assigned to the event point set 401. Gradient processing is performed on the event points in the event point set 401 to generate the initial gradient information map corresponding to the time period t0 to t1. Mapping processing is then performed on this initial gradient information map to obtain a mapped gradient information map. Finally, upsampling processing is performed on the mapped gradient information map to obtain a gradient information map. Similarly, the gradient information maps corresponding to the time periods t1 to t2, t2 to t3, t3 to t4, t4 to t5, and so on can be generated in sequence.
In an embodiment of the present disclosure, the event point sets may be divided according to a preset time interval. For example, all event points within each time interval Δt form one event point set. The event point sets may also be divided according to a preset number of event points. For example, event points are collected in chronological order, and every 1000 event points form one set, as illustrated by the sketch below.
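A minimal sketch of both division rules (fixed time interval Δt and fixed event count), assuming timestamp-sorted events in a NumPy structured array with a "t" field such as the one in the earlier event-representation sketch:

```python
import numpy as np

def split_by_time(events, dt):
    """Divide timestamp-sorted events into sets covering consecutive windows
    of length dt: events with t in [k*dt, (k+1)*dt) share a set."""
    bins = (events["t"] - events["t"][0]) // dt
    return [events[bins == k] for k in range(int(bins[-1]) + 1)]

def split_by_count(events, n=1000):
    """Divide timestamp-sorted events into consecutive sets of n points."""
    return [events[i:i + n] for i in range(0, len(events), n)]

demo = np.zeros(5, dtype=[("t", np.int64)])
demo["t"] = [0, 40, 90, 130, 200]
print([len(s) for s in split_by_time(demo, 100)])  # -> [3, 1, 1]
```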
In an embodiment of the present disclosure, gradient processing may be performed on each event point set by Kalman filtering to obtain the initial gradient information map corresponding to that set.
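The disclosure names Kalman filtering but does not specify the state model. Purely as an illustration, the sketch below applies a per-pixel scalar Kalman update in which each event point set contributes a noisy gradient measurement; the random-walk state model and the noise values are assumptions:

```python
import numpy as np

def kalman_update_gradients(x, P, z, q=1e-3, r=1e-1):
    """One per-pixel scalar Kalman step for a gradient estimate.

    x : (H, W, 2) current gradient estimate, P : (H, W, 2) its variance,
    z : (H, W, 2) noisy gradient measurement derived from one event set,
    q : process noise, r : measurement noise. Returns updated (x, P).
    """
    P = P + q            # predict: gradients drift slowly (random walk)
    K = P / (P + r)      # Kalman gain
    x = x + K * (z - x)  # correct with the new measurement
    P = (1.0 - K) * P
    return x, P

# Usage: fold in the measurement from each chronological event point set;
# the estimate after each fold is that set's initial gradient information map.
x, P = np.zeros((4, 4, 2)), np.ones((4, 4, 2))
z = np.random.randn(4, 4, 2) * 0.1
x, P = kalman_update_gradients(x, P, z)
```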
In an embodiment of the present disclosure, mapping processing may be performed on the initial gradient information maps using the mapping relationship between the first video information and the event information obtained as described for operation S202, yielding the mapped gradient information maps and correcting the parallax between the gradient information maps and the video information.
In an embodiment of the present disclosure, the mapped gradient information maps may be upsampled by a backward mapping method or an interpolation method (for example, bicubic interpolation), so that gradient information maps with a higher spatial resolution can be obtained. For example, they may be upsampled to the same resolution as the first video information.
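A sketch of the bicubic variant using OpenCV (the two-channel gradient layout and the target size are assumptions for illustration):

```python
import cv2
import numpy as np

def upsample_gradient_map(grad, target_hw):
    """Bicubically upsample an (H, W, 2) gradient map to the resolution of
    the first video information. target_hw is (height, width); OpenCV's
    resize takes (width, height), hence the reversed tuple."""
    h, w = target_hw
    return cv2.resize(grad, (w, h), interpolation=cv2.INTER_CUBIC)

grad = np.random.randn(64, 64, 2).astype(np.float32)  # event-resolution map
print(upsample_gradient_map(grad, (480, 640)).shape)  # -> (480, 640, 2)
```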
In an embodiment of the present disclosure, at least one of cropping, stretching, stitching, and rotation may also be applied to the gradient information maps so that the FOV of the gradient information maps is consistent with that of the first video information. Alternatively, at least one of cropping, stretching, stitching, and rotation may be applied to the first video information so that the FOV of the video information is consistent with that of the gradient information maps.
In an embodiment of the present disclosure, filtering processing may also be performed on at least one of the multiple gradient information maps. For example, edge-preserving filtering may be applied to a gradient information map to fine-tune it. The edge-preserving filtering may be, for example, bilateral filtering or guided filtering (guide filter).
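A sketch of the bilateral-filtering variant using OpenCV; since cv2.bilateralFilter accepts 1- or 3-channel images, each gradient component is filtered separately here (an implementation choice, not prescribed by the disclosure; parameter values are illustrative):

```python
import cv2
import numpy as np

def smooth_gradient_map(grad, d=5, sigma_color=25, sigma_space=5):
    """Edge-preserving (bilateral) filtering of an (H, W, 2) gradient map,
    one component at a time."""
    gx = cv2.bilateralFilter(np.ascontiguousarray(grad[..., 0]),
                             d, sigma_color, sigma_space)
    gy = cv2.bilateralFilter(np.ascontiguousarray(grad[..., 1]),
                             d, sigma_color, sigma_space)
    return np.stack([gx, gy], axis=-1)

grad = np.random.randn(64, 64, 2).astype(np.float32)
print(smooth_gradient_map(grad).shape)  # -> (64, 64, 2)
```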
In operation S204, the multiple gradient information maps are processed based on the multiple image frames to obtain multiple interpolated frame images.
According to an embodiment of the present disclosure, consecutive first and second image frames in the first video information may be acquired, the to-be-processed gradient information maps that temporally match the first image frame and the second image frame may be determined from the multiple gradient information maps, and the to-be-processed gradient information maps may be processed based on the first image frame and the second image frame to obtain the interpolated frame images to be inserted between the first image frame and the second image frame.
For example, as shown in Fig. 4, if the time corresponding to the first image frame in the first video information is t0 and the time corresponding to the second image frame is t5, then the gradient information maps corresponding to the event point sets 401, 402, 403, 404, and 405 are the to-be-processed gradient information maps that temporally match the first image frame and the second image frame. These to-be-processed gradient information maps can be processed through the Poisson equation to obtain the interpolated frame images to be inserted between the first image frame and the second image frame, where one to-be-processed gradient information map corresponds to one interpolated frame image.
Processing a to-be-processed gradient information map with the Poisson equation to obtain the corresponding interpolated frame image can be expressed as solving:
Figure PCTCN2020096194-appb-000002
where I is a video frame of the imaging camera and G is the to-be-processed gradient information map,
Figure PCTCN2020096194-appb-000003
Figure PCTCN2020096194-appb-000004
The initial conditions of this partial differential equation are as follows:
I_high_frame_rate(0) = I_A
I_high_frame_rate(N) = I_B
According to an embodiment of the present disclosure, the initial conditions of the Poisson equation may be set according to the interpolated frame image or image frame adjacent to the to-be-processed gradient information map.
For example, 0 may denote the first of two adjacent frames of the imaging camera, and N may denote the second of the two adjacent frames. That is, the first image frame and the second image frame of the first video that temporally correspond to the to-be-processed gradient information map are used as the initial conditions of the Poisson equation for processing that map. For example, for the to-be-processed gradient information maps corresponding to the event point sets 401, 402, 403, 404, and 405, the initial conditions may all be the first image frame and the second image frame of the first video.
As another example, 0 may denote the adjacent interpolated frame image generated at the previous moment, and N may denote the second of two adjacent frames of the imaging camera. That is, the interpolated frame image of the preceding adjacent frame and the second image frame of the first video at the following frame are used as the initial conditions of the Poisson equation for processing that map. For example, for the to-be-processed gradient information map corresponding to the event point set 401, the initial conditions may be the first image frame and the second image frame of the first video; for the to-be-processed gradient information map corresponding to the event point set 402, the initial conditions may be the interpolated frame image corresponding to the event point set 401 and the second image frame of the first video.
Embodiments of the present disclosure do not limit the way the gradient information maps are processed; it is only necessary to process the gradient information maps with the first video information such that the processed gradient information maps satisfy the brightness information of the scene, so that qualifying interpolated frame images can be generated. Those skilled in the art may choose the processing method according to actual needs. For example, the above partial differential equation is a sparse linear system that can be solved in many ways, including but not limited to solving the system directly, or transforming the equation and using iterative optimization to obtain an approximately optimal solution, as in the sketch below.
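As one concrete instance of such an iterative solution, the sketch below solves the standard gradient-domain form ∇²I = div(G) with Jacobi iterations, holding the image border fixed as a Dirichlet condition. Since the equation images above are not reproduced, this exact form, and the time-weighted initialization from the two anchor frames, are assumptions:

```python
import numpy as np

def poisson_reconstruct(G, I_init, iters=500):
    """Iteratively solve laplacian(I) = div(G) by Jacobi iteration.

    G      : (H, W, 2) target gradient field (gx, gy).
    I_init : (H, W) initialization, e.g. a blend of the two anchor frames
             I_A and I_B that serve as the equation's initial conditions.
    """
    gx, gy = G[..., 0], G[..., 1]
    # Divergence of G via backward differences.
    div = np.zeros_like(gx)
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    I = I_init.astype(np.float64).copy()
    for _ in range(iters):
        # Jacobi update on interior pixels; the border stays fixed to
        # I_init, acting as a Dirichlet boundary condition.
        I[1:-1, 1:-1] = 0.25 * (I[:-2, 1:-1] + I[2:, 1:-1] +
                                I[1:-1, :-2] + I[1:-1, 2:] -
                                div[1:-1, 1:-1])
    return I

# Usage: for the k-th map between frames I_A and I_B, a time-weighted blend
# is one reasonable initialization: I_init = (1 - k/N) * I_A + (k/N) * I_B.
```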
In operation S205, frame interpolation is performed on the first video information based on the multiple interpolated frame images to obtain second video information, where the frame rate of the second video information is higher than the frame rate of the first video information.
According to an embodiment of the present disclosure, each interpolated frame image may be inserted into the first video based on the time period information corresponding to that image. For example, the interpolated frame images corresponding to the event point sets 401, 402, 403, 404, and 405 may be inserted in sequence between the first image frame and the second image frame of the first video, as sketched below.
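A minimal sketch of this time-ordered insertion step (the frame objects and timestamps are placeholders):

```python
def interleave_frames(key_frames, key_times, interp_frames, interp_times):
    """Merge original frames and interpolated frames into one sequence
    ordered by timestamp, yielding the higher-frame-rate second video."""
    merged = list(zip(key_times, key_frames)) + \
             list(zip(interp_times, interp_frames))
    merged.sort(key=lambda pair: pair[0])
    return [frame for _, frame in merged]

# Example: frames at t0 and t5 with interpolated frames at t1..t4 between.
video = interleave_frames(["I_A", "I_B"], [0, 5],
                          ["F1", "F2", "F3", "F4"], [1, 2, 3, 4])
print(video)  # -> ['I_A', 'F1', 'F2', 'F3', 'F4', 'I_B']
```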
In an embodiment of the present disclosure, the second video obtained after frame interpolation has a relatively high frame rate and is suitable for high-speed motion scenes. For example, the second video information may include slow-motion video information used to play back the slow-motion detail of a high-speed motion scene.
The event camera in embodiments of the present disclosure only compares the brightness changes of pixels, which greatly shortens the pixel response time (for example, the pixel latency can be shortened to 1 µs); its output signal consists of temporally dense event points, making it suitable for recording fast motion and forming a high-frame-rate video stream.
Embodiments of the present disclosure can generate, from event information that is collected by an event camera, pre-stored, or virtually generated, interpolated frame images usable for frame interpolation, and then insert them into the first video, so that video information with both a relatively high resolution and a relatively high frame rate can be obtained, truly reflecting the actual motion state of high-speed movement and avoiding the blurring and discontinuity that occur in high-speed dynamic scenes.
Fig. 5 schematically shows a block diagram of an information processing device 500 of an embodiment of the present disclosure.
As shown in Fig. 5, the information processing device 500 may include a first obtaining module 510, a second obtaining module 520, a generating module 530, a first processing module 540, and a frame interpolation module 550.
According to an embodiment of the present disclosure, the information processing device 500 may, for example, be placed in a flying device. The information processing device 500 may, for example, be used to control the navigation system of the flying device. For example, the information processing device 500 may control the navigation system of an unmanned aerial vehicle to track a target object, and so on.
The first obtaining module 510 is configured to obtain first video information, the first video information including multiple image frames.
The second obtaining module 520 is configured to obtain event information, the event information including multiple event points, each of the multiple event points including coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information.
The generating module 530 is configured to generate multiple gradient information maps based on the multiple event points, the gradient information maps including gradient information of each pixel.
The first processing module 540 is configured to process the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images.
The frame interpolation module 550 is configured to perform frame interpolation on the first video information based on the multiple interpolated frame images to obtain second video information, where the frame rate of the second video information is higher than the frame rate of the first video information.
According to an embodiment of the present disclosure, the first processing module 540 is further configured to: acquire consecutive first and second image frames in the first video information, determine from the multiple gradient information maps the to-be-processed gradient information maps that temporally match the first image frame and the second image frame, and process the to-be-processed gradient information maps based on the first image frame and the second image frame to obtain the interpolated frame images to be inserted between the first image frame and the second image frame.
According to an embodiment of the present disclosure, processing the to-be-processed gradient information maps includes: processing the to-be-processed gradient information maps through the Poisson equation to obtain the corresponding interpolated frame images.
According to an embodiment of the present disclosure, one to-be-processed gradient information map corresponds to one interpolated frame image.
According to an embodiment of the present disclosure, processing the to-be-processed gradient information maps through the Poisson equation includes: setting the initial conditions of the Poisson equation according to the interpolated frame image or image frame adjacent to the to-be-processed gradient information map.
According to an embodiment of the present disclosure, the generating module 530 is further configured to: divide the multiple event points in chronological order into multiple event point sets, the multiple event point sets corresponding to different time periods, and generate the multiple gradient information maps based on the multiple event point sets, where each gradient information map corresponds to one event point set.
According to an embodiment of the present disclosure, generating the multiple gradient information maps based on the multiple event point sets includes: performing gradient processing on each of the multiple event point sets to generate multiple initial gradient information maps; performing mapping processing on the multiple initial gradient information maps based on the mapping relationship between the first video information and the event information to obtain multiple mapped gradient information maps; and performing upsampling processing on the multiple mapped gradient information maps to obtain the multiple gradient information maps.
According to an embodiment of the present disclosure, performing upsampling processing on the multiple mapped gradient information maps includes: performing upsampling processing on the multiple mapped gradient information maps by a backward mapping method, or performing upsampling processing on the multiple mapped gradient information maps by an interpolation method.
According to an embodiment of the present disclosure, generating multiple gradient information maps based on the multiple event points includes: performing first processing, second processing, and third processing on the multiple event points in sequence to obtain the multiple gradient information maps, where the first processing includes one of gradient processing, upsampling processing, and mapping processing, the second processing includes another of gradient processing, upsampling processing, and mapping processing, and the third processing includes yet another of gradient processing, upsampling processing, and mapping processing.
According to an embodiment of the present disclosure, the device 500 further includes: a filtering module configured to perform filtering processing on at least one of the multiple gradient information maps, where the filtering processing includes edge-preserving filtering.
According to an embodiment of the present disclosure, obtaining the first video information includes obtaining the first video information through an imaging device.
According to an embodiment of the present disclosure, obtaining the event information includes obtaining the event information through an event camera.
According to an embodiment of the present disclosure, the event camera can be used to control the navigation system of a flying device, and the imaging camera and the event camera are placed in the flying device.
According to an embodiment of the present disclosure, the device 500 further includes: a control module configured to control the imaging camera and the event camera to synchronously acquire the first video information and the event information.
According to an embodiment of the present disclosure, the device 500 further includes: a determining module configured to determine the mapping relationship between the video information and the event information based on the parameters of the imaging camera and the parameters of the event camera.
According to an embodiment of the present disclosure, the device 500 further includes: a second processing module configured to perform at least one of cropping, stretching, stitching, and rotation on the first video information and/or the event information when the first video information and the event information correspond to different fields of view.
According to an embodiment of the present disclosure, the imaging device corresponds to multiple event cameras, and the field of view formed by stitching the multiple event cameras matches the field of view of the imaging device.
According to an embodiment of the present disclosure, the second video information includes slow-motion video information.
According to an embodiment of the present disclosure, the first video information includes color video or grayscale video.
According to an embodiment of the present disclosure, the brightness information includes brightness change information.
According to an embodiment of the present disclosure, a brightness change includes the brightness remaining unchanged, becoming higher, or becoming lower.
According to an embodiment of the present disclosure, the brightness change information corresponding to unchanged brightness is 0, the brightness change information corresponding to increased brightness is 1, and the brightness change information corresponding to decreased brightness is -1.
According to an embodiment of the present disclosure, the device 500 may, for example, execute the method described above with reference to Fig. 2, which is not repeated here.
Any multiple of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure, or at least part of the functions of any number of them, may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be split into multiple modules for implementation. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be at least partially implemented as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on substrate, a system on package, or an application-specific integrated circuit (ASIC), or may be implemented in hardware or firmware by any other reasonable way of integrating or packaging circuits, or implemented in any one of, or an appropriate combination of, the three implementation manners of software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be at least partially implemented as a computer program module that, when run, can perform the corresponding function.
For example, any number of the first obtaining module 510, the second obtaining module 520, the generating module 530, the first processing module 540, and the frame interpolation module 550 may be combined and implemented in one module/unit/sub-unit, or any one of them may be split into multiple modules/units/sub-units. Alternatively, at least part of the functions of one or more of these modules/units/sub-units may be combined with at least part of the functions of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the first obtaining module 510, the second obtaining module 520, the generating module 530, the first processing module 540, and the frame interpolation module 550 may be at least partially implemented as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on substrate, a system on package, or an application-specific integrated circuit (ASIC), or may be implemented in hardware or firmware by any other reasonable way of integrating or packaging circuits, or implemented in any one of, or an appropriate combination of, the three implementation manners of software, hardware, and firmware. Alternatively, at least one of the first obtaining module 510, the second obtaining module 520, the generating module 530, the first processing module 540, and the frame interpolation module 550 may be at least partially implemented as a computer program module that, when run, can perform the corresponding function.
Fig. 6 schematically shows a block diagram of an imaging system 600 of an embodiment of the present disclosure.
As shown in Fig. 6, the imaging system 600 may include an imaging device 610, an event camera 620, and a processor 630.
According to an embodiment of the present disclosure, the imaging system 600 may, for example, be placed in a flying device. The imaging system 600 may, for example, be used to control the navigation system of the flying device. For example, the imaging system 600 may control the navigation system of an unmanned aerial vehicle to track a target object, and so on.
The imaging device 610 is configured to obtain first video information. The imaging device 610 may be, for example, a device that forms an image using the principle of optical imaging and records the image on a light-sensitive medium. For example, the imaging device 610 may be any of various imaging cameras equipped with an image sensor.
The event camera 620 is configured to obtain event information. The event camera 620 may be, for example, a device that outputs asynchronous signals by measuring the brightness change of each pixel. For example, the event camera 620 may be any of various event cameras equipped with a dynamic vision sensor, including but not limited to a DVS (Dynamic Vision Sensor), ATIS (Asynchronous Time Based Image Sensor), DAVIS (Dynamic and Active Pixel Vision Sensor), and the like.
The processor 630 is configured to: obtain first video information, the first video information including multiple image frames; obtain event information, the event information including multiple event points, each of the multiple event points including coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information; generate multiple gradient information maps based on the multiple event points, the gradient information maps including gradient information of each pixel; process the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images; and perform frame interpolation on the first video information based on the multiple interpolated frame images to obtain second video information, where the frame rate of the second video information is higher than the frame rate of the first video information.
The processor 630 is connected to the imaging device 610 and the event camera 620 and can receive the video information and event information from the imaging device 610 and the event camera 620. The processor 630 may be placed in the imaging device 610 or the event camera 620. Alternatively, the processor 630 may be placed in a device other than the imaging device 610 and the event camera 620.
According to an embodiment of the present disclosure, processing the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images includes: acquiring consecutive first and second image frames in the first video information, determining from the multiple gradient information maps the to-be-processed gradient information maps that temporally match the first image frame and the second image frame, and processing the to-be-processed gradient information maps based on the first image frame and the second image frame to obtain the interpolated frame images to be inserted between the first image frame and the second image frame.
According to an embodiment of the present disclosure, processing the to-be-processed gradient information maps includes: processing the to-be-processed gradient information maps through the Poisson equation to obtain the corresponding interpolated frame images.
According to an embodiment of the present disclosure, one to-be-processed gradient information map corresponds to one interpolated frame image.
According to an embodiment of the present disclosure, processing the to-be-processed gradient information maps through the Poisson equation includes: setting the initial conditions of the Poisson equation according to the interpolated frame image or image frame adjacent to the to-be-processed gradient information map.
According to an embodiment of the present disclosure, generating multiple gradient information maps based on the multiple event points includes: dividing the multiple event points in chronological order into multiple event point sets, the multiple event point sets corresponding to different time periods, and generating the multiple gradient information maps based on the multiple event point sets, where each gradient information map corresponds to one event point set.
According to an embodiment of the present disclosure, generating the multiple gradient information maps based on the multiple event point sets includes: performing gradient processing on each of the multiple event point sets to generate multiple initial gradient information maps; performing mapping processing on the multiple initial gradient information maps based on the mapping relationship between the first video information and the event information to obtain multiple mapped gradient information maps; and performing upsampling processing on the multiple mapped gradient information maps to obtain the multiple gradient information maps.
According to an embodiment of the present disclosure, performing upsampling processing on the multiple mapped gradient information maps includes: performing upsampling processing on the multiple mapped gradient information maps by a backward mapping method, or performing upsampling processing on the multiple mapped gradient information maps by an interpolation method.
According to an embodiment of the present disclosure, generating multiple gradient information maps based on the multiple event points includes: performing first processing, second processing, and third processing on the multiple event points in sequence to obtain the multiple gradient information maps, where the first processing includes one of gradient processing, upsampling processing, and mapping processing, the second processing includes another of gradient processing, upsampling processing, and mapping processing, and the third processing includes yet another of gradient processing, upsampling processing, and mapping processing.
According to an embodiment of the present disclosure, the processor 630 is further configured to: perform filtering processing on at least one of the multiple gradient information maps, where the filtering processing includes edge-preserving filtering.
According to an embodiment of the present disclosure, obtaining the first video information includes obtaining the first video information through an imaging device.
According to an embodiment of the present disclosure, obtaining the event information includes obtaining the event information through an event camera.
According to an embodiment of the present disclosure, the processor 630 is further configured to: control the imaging camera and the event camera to synchronously acquire the first video information and the event information.
According to an embodiment of the present disclosure, the processor 630 is further configured to: determine the mapping relationship between the video information and the event information based on the parameters of the imaging camera and the parameters of the event camera.
According to an embodiment of the present disclosure, the processor 630 is further configured to: perform at least one of cropping, stretching, stitching, and rotation on the first video information and/or the event information when the first video information and the event information correspond to different fields of view.
According to an embodiment of the present disclosure, the imaging device corresponds to multiple event cameras, and the field of view formed by stitching the multiple event cameras matches the field of view of the imaging device.
According to an embodiment of the present disclosure, the second video information includes slow-motion video information.
According to an embodiment of the present disclosure, the first video information includes color video or grayscale video.
According to an embodiment of the present disclosure, the brightness information includes brightness change information. A brightness change includes the brightness remaining unchanged, becoming higher, or becoming lower. The brightness change information corresponding to unchanged brightness is 0, the brightness change information corresponding to increased brightness is 1, and the brightness change information corresponding to decreased brightness is -1.
Embodiments of the present disclosure can generate, from event information collected by an event camera, interpolated frame images usable for frame interpolation, and then insert them into the first video acquired by the imaging device, so that video information with both a relatively high resolution and a relatively high frame rate can be obtained, truly reflecting the actual motion state of high-speed movement and avoiding the blurring and discontinuity that occur in high-speed dynamic scenes.
Fig. 7 schematically shows a block diagram of a computer system 700 of an embodiment of the present disclosure. The computer system shown in Fig. 7 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present disclosure.
As shown in Fig. 7, the electronic device 700 according to an embodiment of the present disclosure includes a processor 701, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage part 708 into a random access memory (RAM) 703. The processor 701 may include, for example, a general-purpose microprocessor (e.g., a CPU), an instruction set processor and/or a related chipset, and/or a special-purpose microprocessor (e.g., an application-specific integrated circuit (ASIC)), and so on. The processor 701 may also include onboard memory for caching purposes. The processor 701 may include a single processing unit, or multiple processing units, for performing the different actions of the method flow according to embodiments of the present disclosure.
In the RAM 703, various programs and data required for the operation of the system 700 are stored. The processor 701, the ROM 702, and the RAM 703 are connected to one another through a bus 704. The processor 701 performs the various operations of the method flow according to embodiments of the present disclosure by executing the programs in the ROM 702 and/or the RAM 703. It should be noted that the programs may also be stored in one or more memories other than the ROM 702 and the RAM 703. The processor 701 may also perform the various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the system 700 may further include an input/output (I/O) interface 705, which is also connected to the bus 704. The system 700 may further include one or more of the following components connected to the I/O interface 705: an input part 706 including a keyboard, a mouse, and the like; an output part 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker and the like; a storage part 708 including a hard disk and the like; and a communication part 709 including a network interface card such as a LAN card or a modem. The communication part 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as needed, so that a computer program read therefrom is installed into the storage part 708 as needed.
According to an embodiment of the present disclosure, the method flow according to embodiments of the present disclosure may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product that includes a computer program carried on a computer-readable storage medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from the network through the communication part 709, and/or installed from the removable medium 711. When the computer program is executed by the processor 701, the above-described functions defined in the system of the embodiments of the present disclosure are performed. According to embodiments of the present disclosure, the systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules.
The present disclosure also provides a computer-readable storage medium, which may be included in the device/apparatus/system described in the above embodiments, or may exist alone without being assembled into that device/apparatus/system. The above computer-readable storage medium carries one or more programs, and when the one or more programs are executed, the method according to embodiments of the present disclosure is implemented.
According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, including but not limited to: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, the computer-readable storage medium may include one or more memories other than the ROM 702 and/or RAM 703 described above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architecture, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logic function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in a block diagram or flowchart, and any combination of blocks in a block diagram or flowchart, can be implemented by a dedicated hardware-based system that performs the specified function or operation, or by a combination of dedicated hardware and computer instructions.
Those skilled in the art will understand that the features described in the various embodiments and/or claims of the present disclosure can be combined in many ways, even if such combinations are not explicitly described in the present disclosure. In particular, the features described in the various embodiments and/or claims of the present disclosure can be combined in many ways without departing from the spirit and teaching of the present disclosure. All such combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these embodiments are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments have been described separately above, this does not mean that the measures in the various embodiments cannot be advantageously used in combination. The scope of the present disclosure is defined by the appended claims and their equivalents. Without departing from the scope of the present disclosure, those skilled in the art can make various substitutions and modifications, all of which shall fall within the scope of the present disclosure.

Claims (67)

  1. An information processing method, comprising:
    obtaining first video information, the first video information comprising multiple image frames;
    obtaining event information, the event information comprising multiple event points, each of the multiple event points comprising coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information;
    generating multiple gradient information maps based on the multiple event points, the gradient information maps comprising gradient information of each pixel;
    processing the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images; and
    performing frame interpolation on the first video information based on the multiple interpolated frame images to obtain second video information, wherein the frame rate of the second video information is higher than the frame rate of the first video information.
  2. The method according to claim 1, wherein processing the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images comprises:
    acquiring consecutive first and second image frames in the first video information;
    determining, from the multiple gradient information maps, the to-be-processed gradient information maps that temporally match the first image frame and the second image frame; and
    processing the to-be-processed gradient information maps based on the first image frame and the second image frame to obtain the interpolated frame images to be inserted between the first image frame and the second image frame.
  3. The method according to claim 2, wherein processing the to-be-processed gradient information maps comprises:
    processing the to-be-processed gradient information maps through the Poisson equation to obtain the corresponding interpolated frame images.
  4. The method according to claim 2 or 3, wherein one to-be-processed gradient information map corresponds to one interpolated frame image.
  5. The method according to claim 3, wherein processing the to-be-processed gradient information maps through the Poisson equation comprises:
    setting the initial conditions of the Poisson equation according to the interpolated frame image or image frame adjacent to the to-be-processed gradient information map.
  6. The method according to claim 1, wherein generating multiple gradient information maps based on the multiple event points comprises:
    dividing the multiple event points in chronological order into multiple event point sets, the multiple event point sets corresponding to different time periods; and
    generating the multiple gradient information maps based on the multiple event point sets, wherein each gradient information map corresponds to one event point set.
  7. The method according to claim 6, wherein generating the multiple gradient information maps based on the multiple event point sets comprises:
    performing gradient processing on each of the multiple event point sets to generate multiple initial gradient information maps;
    performing mapping processing on the multiple initial gradient information maps based on the mapping relationship between the first video information and the event information to obtain multiple mapped gradient information maps; and
    performing upsampling processing on the multiple mapped gradient information maps to obtain the multiple gradient information maps.
  8. The method according to claim 7, wherein performing upsampling processing on the multiple mapped gradient information maps comprises:
    performing upsampling processing on the multiple mapped gradient information maps by a backward mapping method; or
    performing upsampling processing on the multiple mapped gradient information maps by an interpolation method.
  9. The method according to claim 1, wherein generating multiple gradient information maps based on the multiple event points comprises:
    performing first processing, second processing, and third processing on the multiple event points in sequence to obtain the multiple gradient information maps, wherein the first processing comprises one of gradient processing, upsampling processing, and mapping processing, the second processing comprises another of gradient processing, upsampling processing, and mapping processing, and the third processing comprises yet another of gradient processing, upsampling processing, and mapping processing.
  10. The method according to claim 1, further comprising:
    performing filtering processing on at least one of the multiple gradient information maps, wherein the filtering processing comprises edge-preserving filtering.
  11. The method according to claim 1, wherein obtaining the first video information comprises obtaining the first video information through an imaging device.
  12. The method according to claim 11, wherein obtaining the event information comprises obtaining the event information through an event camera.
  13. The method according to claim 12, further comprising:
    controlling the imaging device and the event camera to synchronously acquire the first video information and the event information.
  14. The method according to claim 12, further comprising:
    determining the mapping relationship between the video information and the event information based on the parameters of the imaging device and the parameters of the event camera.
  15. The method according to claim 12, wherein the imaging device corresponds to multiple event cameras, and the field of view formed by stitching the multiple event cameras matches the field of view of the imaging device.
  16. The method according to claim 1, further comprising:
    performing at least one of cropping, stretching, stitching, and rotation on the first video information and/or the event information when the first video information and the event information correspond to different fields of view.
  17. The method according to claim 1, wherein the second video information comprises slow-motion video information.
  18. The method according to claim 1, wherein the first video information comprises color video or grayscale video.
  19. The method according to claim 1, wherein the brightness information comprises brightness change information.
  20. The method according to claim 19, wherein the brightness change comprises the brightness remaining unchanged, becoming higher, or becoming lower.
  21. The method according to claim 20, wherein the brightness change information corresponding to unchanged brightness is 0, the brightness change information corresponding to increased brightness is 1, and the brightness change information corresponding to decreased brightness is -1.
  22. An information processing device, comprising:
    a first obtaining module configured to obtain first video information, the first video information comprising multiple image frames;
    a second obtaining module configured to obtain event information, the event information comprising multiple event points, each of the multiple event points comprising coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information;
    a generating module configured to generate multiple gradient information maps based on the multiple event points, the gradient information maps comprising gradient information of each pixel;
    a first processing module configured to process the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images; and
    a frame interpolation module configured to perform frame interpolation on the first video information based on the multiple interpolated frame images to obtain second video information, wherein the frame rate of the second video information is higher than the frame rate of the first video information.
  23. The device according to claim 22, wherein the first processing module is further configured to:
    acquire consecutive first and second image frames in the first video information;
    determine, from the multiple gradient information maps, the to-be-processed gradient information maps that temporally match the first image frame and the second image frame; and
    process the to-be-processed gradient information maps based on the first image frame and the second image frame to obtain the interpolated frame images to be inserted between the first image frame and the second image frame.
  24. The device according to claim 22, wherein processing the to-be-processed gradient information maps comprises:
    processing the to-be-processed gradient information maps through the Poisson equation to obtain the corresponding interpolated frame images.
  25. The device according to claim 22 or 24, wherein one to-be-processed gradient information map corresponds to one interpolated frame image.
  26. The device according to claim 24, wherein processing the to-be-processed gradient information maps through the Poisson equation comprises:
    setting the initial conditions of the Poisson equation according to the interpolated frame image or image frame adjacent to the to-be-processed gradient information map.
  27. The device according to claim 23, wherein the generating module is further configured to:
    divide the multiple event points in chronological order into multiple event point sets, the multiple event point sets corresponding to different time periods; and
    generate the multiple gradient information maps based on the multiple event point sets, wherein each gradient information map corresponds to one event point set.
  28. The device according to claim 27, wherein generating the multiple gradient information maps based on the multiple event point sets comprises:
    performing gradient processing on each of the multiple event point sets to generate multiple initial gradient information maps;
    performing mapping processing on the multiple initial gradient information maps based on the mapping relationship between the first video information and the event information to obtain multiple mapped gradient information maps; and
    performing upsampling processing on the multiple mapped gradient information maps to obtain the multiple gradient information maps.
  29. The device according to claim 28, wherein performing upsampling processing on the multiple mapped gradient information maps comprises:
    performing upsampling processing on the multiple mapped gradient information maps by a backward mapping method; or
    performing upsampling processing on the multiple mapped gradient information maps by an interpolation method.
  30. The device according to claim 22, wherein generating multiple gradient information maps based on the multiple event points comprises:
    performing first processing, second processing, and third processing on the multiple event points in sequence to obtain the multiple gradient information maps, wherein the first processing comprises one of gradient processing, upsampling processing, and mapping processing, the second processing comprises another of gradient processing, upsampling processing, and mapping processing, and the third processing comprises yet another of gradient processing, upsampling processing, and mapping processing.
  31. The device according to claim 22, further comprising:
    a filtering module configured to perform filtering processing on at least one of the multiple gradient information maps, wherein the filtering processing comprises edge-preserving filtering.
  32. The device according to claim 22, wherein obtaining the first video information comprises obtaining the first video information through an imaging device.
  33. The device according to claim 32, wherein obtaining the event information comprises obtaining the event information through an event camera.
  34. The device according to claim 33, wherein the event camera can be used to control the navigation system of a flying device, and the imaging device and the event camera are placed in the flying device.
  35. The device according to claim 33, further comprising:
    a control module configured to control the imaging device and the event camera to synchronously acquire the first video information and the event information.
  36. The device according to claim 33, further comprising:
    a determining module configured to determine the mapping relationship between the video information and the event information based on the parameters of the imaging device and the parameters of the event camera.
  37. The device according to claim 33, wherein the imaging device corresponds to multiple event cameras, and the field of view formed by stitching the multiple event cameras matches the field of view of the imaging device.
  38. The device according to claim 22, further comprising:
    a second processing module configured to perform at least one of cropping, stretching, stitching, and rotation on the first video information and/or the event information when the first video information and the event information correspond to different fields of view.
  39. The device according to claim 22, wherein the second video information comprises slow-motion video information.
  40. The device according to claim 22, wherein the first video information comprises color video or grayscale video.
  41. The device according to claim 22, wherein the brightness information comprises brightness change information.
  42. The device according to claim 41, wherein the brightness change comprises the brightness remaining unchanged, becoming higher, or becoming lower.
  43. The device according to claim 42, wherein the brightness change information corresponding to unchanged brightness is 0, the brightness change information corresponding to increased brightness is 1, and the brightness change information corresponding to decreased brightness is -1.
  44. An imaging system, comprising:
    an imaging device configured to obtain first video information;
    an event camera configured to obtain event information; and
    a processor configured to:
    obtain first video information, the first video information comprising multiple image frames;
    obtain event information, the event information comprising multiple event points, each of the multiple event points comprising coordinate information, time information, and brightness information of the pixel corresponding to the coordinate information;
    generate multiple gradient information maps based on the multiple event points, the gradient information maps comprising gradient information of each pixel;
    process the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images; and
    perform frame interpolation on the first video information based on the multiple interpolated frame images to obtain second video information, wherein the frame rate of the second video information is higher than the frame rate of the first video information.
  45. The system according to claim 44, wherein processing the multiple gradient information maps based on the multiple image frames to obtain multiple interpolated frame images comprises:
    acquiring consecutive first and second image frames in the first video information;
    determining, from the multiple gradient information maps, the to-be-processed gradient information maps that temporally match the first image frame and the second image frame; and
    processing the to-be-processed gradient information maps based on the first image frame and the second image frame to obtain the interpolated frame images to be inserted between the first image frame and the second image frame.
  46. The system according to claim 45, wherein processing the to-be-processed gradient information maps comprises:
    processing the to-be-processed gradient information maps through the Poisson equation to obtain the corresponding interpolated frame images.
  47. The system according to claim 45 or 46, wherein one to-be-processed gradient information map corresponds to one interpolated frame image.
  48. The system according to claim 46, wherein processing the to-be-processed gradient information maps through the Poisson equation comprises:
    setting the initial conditions of the Poisson equation according to the interpolated frame image or image frame adjacent to the to-be-processed gradient information map.
  49. The system according to claim 44, wherein generating multiple gradient information maps based on the multiple event points comprises:
    dividing the multiple event points in chronological order into multiple event point sets, the multiple event point sets corresponding to different time periods; and
    generating the multiple gradient information maps based on the multiple event point sets, wherein each gradient information map corresponds to one event point set.
  50. The system according to claim 49, wherein generating the multiple gradient information maps based on the multiple event point sets comprises:
    performing gradient processing on each of the multiple event point sets to generate multiple initial gradient information maps;
    performing mapping processing on the multiple initial gradient information maps based on the mapping relationship between the first video information and the event information to obtain multiple mapped gradient information maps; and
    performing upsampling processing on the multiple mapped gradient information maps to obtain the multiple gradient information maps.
  51. The system according to claim 50, wherein performing upsampling processing on the multiple mapped gradient information maps comprises:
    performing upsampling processing on the multiple mapped gradient information maps by a backward mapping method; or
    performing upsampling processing on the multiple mapped gradient information maps by an interpolation method.
  52. The system according to claim 44, wherein generating multiple gradient information maps based on the multiple event points comprises:
    performing first processing, second processing, and third processing on the multiple event points in sequence to obtain the multiple gradient information maps, wherein the first processing comprises one of gradient processing, upsampling processing, and mapping processing, the second processing comprises another of gradient processing, upsampling processing, and mapping processing, and the third processing comprises yet another of gradient processing, upsampling processing, and mapping processing.
  53. The system according to claim 44, wherein the processor is further configured to:
    perform filtering processing on at least one of the multiple gradient information maps, wherein the filtering processing comprises edge-preserving filtering.
  54. The system according to claim 44, wherein obtaining the first video information comprises obtaining the first video information through an imaging device.
  55. The system according to claim 54, wherein obtaining the event information comprises obtaining the event information through an event camera.
  56. The system according to claim 55, wherein the processor is further configured to:
    control the imaging device and the event camera to synchronously acquire the first video information and the event information.
  57. The system according to claim 55, wherein the processor is further configured to:
    determine the mapping relationship between the video information and the event information based on the parameters of the imaging device and the parameters of the event camera.
  58. The system according to claim 55, wherein the imaging device corresponds to multiple event cameras, and the field of view formed by stitching the multiple event cameras matches the field of view of the imaging device.
  59. The system according to claim 44, wherein the processor is further configured to:
    perform at least one of cropping, stretching, stitching, and rotation on the first video information and/or the event information when the first video information and the event information correspond to different fields of view.
  60. The system according to claim 44, wherein the second video information comprises slow-motion video information.
  61. The system according to claim 44, wherein the first video information comprises color video or grayscale video.
  62. The system according to claim 44, wherein the brightness information comprises brightness change information.
  63. The system according to claim 62, wherein the brightness change comprises the brightness remaining unchanged, becoming higher, or becoming lower.
  64. The system according to claim 63, wherein the brightness change information corresponding to unchanged brightness is 0, the brightness change information corresponding to increased brightness is 1, and the brightness change information corresponding to decreased brightness is -1.
  65. A computer system, comprising:
    one or more processors; and
    a computer-readable storage medium for storing one or more programs,
    wherein when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1 to 21.
  66. A computer-readable storage medium on which executable instructions are stored, the instructions, when executed by a processor, causing the processor to implement the method according to any one of claims 1 to 21.
  67. A computer program product comprising computer-readable instructions, wherein the computer-readable instructions, when executed, are used to perform the method according to any one of claims 1 to 21.
PCT/CN2020/096194 2020-06-15 2020-06-15 Information processing method, device, and imaging system WO2021253186A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/096194 WO2021253186A1 (zh) 2020-06-15 2020-06-15 Information processing method, device, and imaging system
CN202080005340.7A CN112771843A (zh) 2020-06-15 2020-06-15 Information processing method, device, and imaging system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/096194 WO2021253186A1 (zh) 2020-06-15 2020-06-15 Information processing method, device, and imaging system

Publications (1)

Publication Number Publication Date
WO2021253186A1 true WO2021253186A1 (zh) 2021-12-23

Family

ID=75699556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/096194 WO2021253186A1 (zh) 2020-06-15 2020-06-15 Information processing method, device, and imaging system

Country Status (2)

Country Link
CN (1) CN112771843A (zh)
WO (1) WO2021253186A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114885112A (zh) * 2022-03-23 2022-08-09 清华大学 High-frame-rate video generation method and device based on data fusion

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113837136B (zh) * 2021-09-29 2022-12-23 深圳市慧鲤科技有限公司 Video frame interpolation method and apparatus, electronic device, and storage medium
CN114245007B (zh) * 2021-12-06 2023-09-05 西北工业大学 High-frame-rate video synthesis method, apparatus, device, and storage medium
CN116916149A (zh) * 2022-04-19 2023-10-20 荣耀终端有限公司 Video processing method, electronic device, and readable medium
CN115617039B (zh) * 2022-09-15 2023-06-13 哈尔滨工程大学 Event-triggered distributed affine unmanned-vessel formation controller construction method and unmanned-vessel formation control method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170302826A1 (en) * 2016-04-15 2017-10-19 General Electric Company Synchronous sampling methods for infrared cameras
CN108090935A (zh) * 2017-12-19 2018-05-29 清华大学 Hybrid camera system and time calibration method and device therefor
CN108961318A (zh) * 2018-05-04 2018-12-07 上海芯仑光电科技有限公司 Data processing method and computing device
CN110660088A (zh) * 2018-06-30 2020-01-07 华为技术有限公司 Image processing method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8274552B2 (en) * 2010-12-27 2012-09-25 3Dmedia Corporation Primary and auxiliary image capture devices for image processing and related methods
CN109922231A (zh) * 2019-02-01 2019-06-21 重庆爱奇艺智能科技有限公司 Method and apparatus for generating interpolated frame images of a video
CN110120011B (zh) * 2019-05-07 2022-05-31 电子科技大学 Video super-resolution method based on convolutional neural networks and mixed resolution

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170302826A1 (en) * 2016-04-15 2017-10-19 General Electric Company Synchronous sampling methods for infrared cameras
CN108090935A (zh) * 2017-12-19 2018-05-29 清华大学 Hybrid camera system and time calibration method and device therefor
CN108961318A (zh) * 2018-05-04 2018-12-07 上海芯仑光电科技有限公司 Data processing method and computing device
CN110660088A (zh) * 2018-06-30 2020-01-07 华为技术有限公司 Image processing method and device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114885112A (zh) * 2022-03-23 2022-08-09 清华大学 High-frame-rate video generation method and device based on data fusion

Also Published As

Publication number Publication date
CN112771843A (zh) 2021-05-07

Similar Documents

Publication Publication Date Title
WO2021253186A1 (zh) Information processing method, device, and imaging system
WO2021115136A1 (zh) Anti-shake method and apparatus for video images, electronic device, and storage medium
CN108833785B (zh) Multi-view image fusion method and apparatus, computer device, and storage medium
US11222409B2 (en) Image/video deblurring using convolutional neural networks with applications to SFM/SLAM with blurred images/videos
US8280194B2 (en) Reduced hardware implementation for a two-picture depth map algorithm
US20170026592A1 (en) Automatic lens flare detection and correction for light-field images
EP3798975B1 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
WO2020253618A1 (zh) Video jitter detection method and apparatus
JP2014150443A (ja) Imaging apparatus, control method therefor, and program
KR20110056098A (ko) Apparatus and method for estimating a PSF
JP4958806B2 (ja) Blur detection device, blur correction device, and imaging device
CN109040525B (zh) Image processing method and apparatus, computer-readable medium, and electronic device
KR20200011000A (ko) Apparatus and method for augmented reality preview and position tracking
WO2020092051A1 (en) Rolling shutter rectification in images/videos using convolutional neural networks with applications to SFM/SLAM with rolling shutter images/videos
CN107993253B (zh) Target tracking method and apparatus
US10721419B2 (en) Ortho-selfie distortion correction using multiple image sensors to synthesize a virtual image
Sindelar et al. Space-variant image deblurring on smartphones using inertial sensors
WO2019104453A1 (zh) Image processing method and apparatus
CN112752086B (zh) Image signal processor, method, and system for environment mapping
WO2018072308A1 (zh) Image output method and electronic device
CN112308809B (zh) Image synthesis method and apparatus, computer device, and storage medium
CN114255177A (zh) Exposure control method, apparatus, device, and storage medium in imaging
JP2012085205A (ja) Image processing apparatus, imaging apparatus, image processing method, and image processing program
Šindelář et al. A smartphone application for removing handshake blur and compensating rolling shutter
TWI755250B (zh) Plant growth curve determination method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20940663

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20940663

Country of ref document: EP

Kind code of ref document: A1