WO2018058476A1 - Image correction method and apparatus - Google Patents

Image correction method and apparatus

Info

Publication number
WO2018058476A1
WO2018058476A1 (PCT/CN2016/100953, CN2016100953W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
frame image
ith frame
ith
quadrilateral
Prior art date
Application number
PCT/CN2016/100953
Other languages
English (en)
French (fr)
Inventor
张运超
郜文美
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to CN201680089219.0A (patent CN109690611B)
Priority to US16/338,364 (published as US20190355104A1)
Priority to PCT/CN2016/100953 (WO2018058476A1)
Publication of WO2018058476A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/97 Determining parameters from multiple pictures
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Definitions

  • the present invention relates to the field of image processing, and in particular, to an image correction method and apparatus.
  • with the development and spread of the mobile Internet and smart terminals, a smart terminal with a built-in camera is convenient and fast and easy to share from anywhere, and is gradually replacing the traditional scanner as the preferred way to obtain electronic data.
  • a smart terminal replacing the scanner can record not only conventional still image information but also moving image information comprising an image sequence, such as slides, handouts, and television pictures that cannot be placed in a scanner.
  • the current conventional processing scheme is to correct the captured image by using algorithms such as quadrilateral detection and trapezoidal correction.
  • quadrilateral detection algorithm uses the edge extraction algorithm in computer vision to detect the rectangular edge of the target image, and is used to eliminate the non-target area outside the rectangular frame.
  • the trapezoidal correction algorithm performs projection correction on the rectangular region obtained by the quadrilateral detection algorithm, corrects the projection distortion caused by the photographing angle of view, and obtains a target image with higher quality.
  • at present, the correction solution for moving image information comprising an image sequence generally performs quadrilateral detection and trapezoidal correction on every frame image of the moving image information.
  • when the number of image frames is large, the correction process takes too long, the system burden is heavy, and real-time performance is poor.
  • the embodiment of the invention provides an image correction method and device that correct images quickly and with a light processing burden, improving the real-time performance of image sequence correction.
  • an image correction method is provided. The method can be applied to a terminal that captures images.
  • the method specifically includes: step 1, capturing an i-th frame image, where i is a positive integer greater than or equal to 1; step 2, using the optical flow constraint equation, tracking the quadrilateral region of an initial frame image in the i-th frame image to obtain the quadrilateral region of the i-th frame image; step 3, correcting the i-th frame image according to the quadrilateral region of the i-th frame image.
  • in the image correction method provided by this application, images in the image sequence are corrected after being tracked with the optical flow constraint equation. Because tracking with the optical flow constraint equation takes about one third less time than quadrilateral detection, the time needed to correct the images in the image sequence is greatly reduced; the real-time performance of image correction is improved, the processing efficiency of the device is improved, and the burden on the device is reduced.
  • the quadrilateral region of the initial frame image may be a predefined fixed region, or may be a quadrilateral region obtained by quadrilateral detection of the initial frame.
  • an implementation for correcting the i-th frame image according to its quadrilateral region specifically includes: calculating, according to the quadrilateral region of the i-th frame image, the pose transformation matrix H_{i,i-1} between the i-th frame image and the (i-1)-th frame image in the image sequence in which the i-th frame image is located; calculating the estimated pose transformation matrix H_i^{pre} from the i-th frame image to the real rectangle, where H_{i-1} is the pose transformation matrix from the (i-1)-th frame image to the real rectangle; and correcting the i-th frame image using H_i^{pre}.
  • when correcting an image, the pose transformation matrix from the current image to the real rectangle is estimated from the pose transformation matrix of the previous frame image to the real rectangle, which avoids jitter between different frames caused by hand shake or lighting changes and improves the stability of image sequence correction.
  • another implementation for correcting the i-th frame image according to its quadrilateral region specifically includes: calculating, according to the geometric relationship of the quadrilateral's side lengths, the real pose transformation matrix H_i^{real} from the quadrilateral region of the i-th frame image to the real rectangular region, and correcting the i-th frame image using H_i^{real}.
  • in this implementation, the pose transformation matrix from the current image to the real rectangle is estimated directly, which is simple to implement and does not require saving intermediate quantities from the correction of other frames, avoiding the storage they would otherwise occupy.
  • the initial frame image may be determined according to actual needs.
  • the initial frame image may be the first frame image of the image sequence in which the ith frame image is located.
  • after the i-th frame image is corrected according to its quadrilateral region, the image correction method provided by this application may further include: updating the initial frame image to the (i+1)-th frame of the image sequence if the i-th frame satisfies a reinitialization condition. The reinitialization condition corrects the accumulated error of the optical flow tracking method and improves the robustness of the image correction process.
  • the re-initialization condition is defined by the difference between the frame number of the current frame image and the initial frame image, and whether re-initialization is performed is determined from the time dimension.
  • the reinitialization condition may include: the difference in the number of frames from the initial frame is greater than or equal to a first predetermined threshold.
  • further optionally, still determining from the time dimension whether to reinitialize, the reinitialization condition may instead include: the time elapsed since the initial frame was corrected is greater than or equal to a preset threshold.
  • alternatively, the reinitialization condition is defined by the number of tracked points in the current frame image, so that whether to reinitialize is determined from the dimension of tracking quality; the timing of reinitialization then better matches the accuracy requirements of the correction.
  • the reinitialization condition may include: using the optical flow constraint equation, tracking the number of tracking points of the quadrilateral region of the initial frame is less than or equal to a second predetermined threshold.
  • preset thresholds may be set according to actual requirements, and the present application does not specifically limit this.
  • if the image correction method reinitializes when the reinitialization condition is satisfied, then after capturing the i-th frame image the method may further include: determining whether the i-th frame image is the initial frame image; if the i-th frame image is not the initial frame image, performing step 2 and step 3 to correct the i-th frame image. This allows different correction processing for the initial frame image and non-initial frame images.
  • if the i-th frame image is the initial frame image, correcting it may specifically include: performing quadrilateral detection on the i-th frame image to obtain its quadrilateral region, calculating the real pose transformation matrix H_i^{real} from the quadrilateral region of the i-th frame image to the real rectangular region, and correcting the i-th frame image using H_i^{real}.
  • alternatively, correcting the i-th frame image when it is the initial frame image may include: first performing step 2 and step 3 to correct the i-th frame image, and then performing quadrilateral detection on the i-th frame image, taking the detected quadrilateral region as the quadrilateral region of the initial frame.
  • H_{i-1} may be the estimated pose transformation matrix H_{i-1}^{pre} from the (i-1)-th frame image to the real rectangle.
  • alternatively, H_{i-1} may be the real pose transformation matrix H_{i-1}^{real} from the (i-1)-th frame image to the real rectangle.
  • tracking the quadrilateral region of the initial frame image in the i-th frame image using the optical flow constraint equation to obtain the quadrilateral region of the i-th frame image may be implemented as: using the optical flow constraint equation, tracking the position of each stable corner point of a stable point set in the i-th frame image to obtain the quadrilateral region of the i-th frame image, where the stable point set includes at least four stable corner points on the quadrilateral region of the initial frame image.
  • after correcting the i-th frame image, the image correction method provided by this application may further include: presenting the corrected i-th frame image to the user, achieving real-time correction and output.
  • alternatively, the method may further include: when i equals N, continuously presenting the corrected first frame image through N-th frame image of the image sequence to the user, where N is greater than or equal to 2 and the image sequence includes N frame images. In this way the image sequence is corrected frame by frame and then output to the user as a whole.
  • an embodiment of the present invention provides an image correcting apparatus, which can implement the functions in the foregoing method examples, and the functions can be implemented by hardware or by executing corresponding software by hardware.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • the image correcting apparatus includes a processor and a transceiver configured to support the image correcting apparatus to perform a corresponding function in the above method.
  • the transceiver is used to support communication between the image correction device and other devices.
  • the image correction device can also include a memory for coupling with the processor that holds the program instructions and data necessary for the image correction device.
  • an embodiment of the present invention provides a computer storage medium for storing computer software instructions for use in the image correcting apparatus, including a program designed to execute the above aspects.
  • FIG. 1 is a schematic diagram of an application scenario of an image correction method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic structural diagram of an image correction apparatus according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart diagram of an image correction method according to an embodiment of the present disclosure.
  • FIG. 3A is a schematic diagram of tracking results of an optical flow constraint equation according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of a method for correcting an image of an i-th frame according to a quadrilateral region of an image of an i-th frame according to an embodiment of the present disclosure
  • FIG. 4A is a schematic diagram of an image correction process according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart diagram of another image correction method according to an embodiment of the present invention.
  • FIG. 5A is a schematic diagram of an image correction result according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of another image correction apparatus according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of still another image correction apparatus according to an embodiment of the present invention.
  • the application environment includes a playback device 1 for playing a dynamic picture, and a terminal 2 for capturing a dynamic picture played by the playback device 1 to acquire an image sequence.
  • the terminal 2 captures the dynamic picture played by the playback device 1 by calling the built-in camera device, and the picture captured by the terminal 2 is generally larger than the size of the source dynamic picture, and there is a certain tilt angle.
  • the terminal 2 calls the built-in image correcting device to correct the captured picture in real time, corrects the captured source dynamic picture, and outputs the presentation to the user in the form of a short video or a dynamic picture.
  • the playing device 1 may be a device for playing a dynamic picture such as a television or a projector.
  • the embodiment of the present invention does not specifically limit the type of the playback device 1.
  • the terminal 2 can be user equipment (UE), a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), an e-book reader, a mobile TV, a wearable device, or the like.
  • the type of the terminal 2 is not specifically limited in the embodiment of the present invention.
  • the basic principle of the present invention is: an image correction device built into the terminal performs quadrilateral detection on the initial frame of the captured image sequence to obtain a quadrilateral region for correction, and in frames other than the initial frame uses the optical flow constraint to track the quadrilateral region of the initial frame and corrects each frame once its quadrilateral region is obtained. Because optical flow tracking is fast, the real-time performance of the whole correction process is improved and the burden on the terminal is reduced.
  • FIG. 2 is a schematic structural diagram of an image correcting apparatus 20 related to various embodiments of the present invention.
  • the image correcting apparatus 20 is built into the terminal 2 in the application scenario shown in FIG. 1 and may be part or all of the terminal 2.
  • the image correcting device 20 may include a processor 201, a memory 202, a camera 203, and a display 204.
  • the memory 202 may be a volatile memory, such as a random-access memory (RAM); or a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the foregoing types of memory, and is used to store the related application programs and configuration files that implement the method of the present invention.
  • the processor 201 is the control center of the image correction device 20 and may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention, such as one or more digital signal processors (DSP) or one or more field-programmable gate arrays (FPGA).
  • the processor 201 can perform various functions of the image correction device 20 by running or executing software programs and/or modules stored in the memory 202, as well as invoking data stored in the memory 202.
  • the camera 203 can be a camera or otherwise for capturing a sequence of images comprising at least one frame of image.
  • Display 204 can be a user interaction interface for presenting a corrected image to a user.
  • Quadrilateral region: the position in the captured image of the document, video picture, or slide/handout, that is, the region enclosed by its outer edge. Because of the shooting angle, this region is generally an irregular quadrilateral.
  • the quadrilateral region is generally detected by using an edge detection algorithm in computer vision.
  • Rectangular region: the real-world length and width of the document, video picture, or slide/handout in the captured image; this region is generally a regular rectangle. Its actual length and width usually cannot be measured directly, so an algorithm is needed to estimate the true aspect ratio of the rectangular region.
  • Pose: the different forms that the document, video picture, or slide/handout takes in captured images; it is a relative concept. A pose includes the transformation from one form to another, which can be characterized mathematically by a homography matrix, called the pose transformation matrix.
  • when the quadrilateral regions of two images have been obtained, the pose transformation matrix between the two images can be calculated.
  • applying, to the quadrilateral of an image, the transformation represented by the pose transformation matrix from that image to the real rectangle corrects the image.
  • for example, capturing an image is a pose change from the rectangular region to a quadrilateral region at a given moment; the homography matrix from the rectangle to the quadrilateral is called the quadrilateral transformation pose. Similarly, the positions of the same content in the first frame image and in the second frame image are related by another pose change, which can also be represented by a pose transformation matrix, called the pose transformation matrix between the first frame image and the second frame image.
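  • For background, a pose transformation (homography) matrix acts on homogeneous pixel coordinates as a 3×3 matrix; this is standard projective geometry, not notation taken from the patent's unrendered figures:

```latex
\begin{pmatrix} x' \\ y' \\ w' \end{pmatrix} \sim H \begin{pmatrix} x \\ y \\ 1 \end{pmatrix},
\qquad H \in \mathbb{R}^{3 \times 3},\ \det H \neq 0,
\qquad \left(\tfrac{x'}{w'},\ \tfrac{y'}{w'}\right) \text{ is the transformed point.}
```

  • Composing two such transformations (for example, frame-to-frame and frame-to-rectangle) amounts to multiplying their matrices, which is what the iterative correction scheme described later relies on.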
  • an embodiment of the present invention provides an image correction method, which is applied to the image correction device 20 shown in FIG. 2 and the application scenario shown in FIG. 1.
  • the image correction method provided by the embodiment of the present invention has the same correction process for each frame in the image sequence.
  • the following describes the process of correcting the image of the ith frame in the image sequence, which will not be described one by one.
  • the method may include:
  • specifically, S301 is performed by the camera 203 included in the image correcting device 20 shown in FIG. 2.
  • i is a positive integer greater than or equal to 1.
  • the processor 201 included in the image correcting device 20 shown in FIG. 2 executes S302.
  • the quadrilateral region of the initial frame image may be a predefined fixed quadrilateral region.
  • the image correction device 20 may associate a fixed quadrilateral region with a still mode; when the user selects the still mode of the device 20, the predefined fixed quadrilateral region used in the image correction process is determined as the fixed quadrilateral region corresponding to the still mode.
  • different modes may be preset to correspond to different quadrilateral regions, and the user selects different modes to determine a fixed quadrilateral region.
  • This embodiment of the present invention does not specifically limit this.
  • the quadrilateral region of the initial frame image may be obtained by quadrilateral detection of the initial frame image.
  • correspondingly, before S302, quadrilateral detection has already been performed on the initial frame image and its quadrilateral region has been obtained.
  • the initial frame image may be a frame image of the debugging stage before the image sequence is captured, or may be the first frame image of the image sequence.
  • the initial frame image can also be set according to actual needs.
  • the embodiment of the present invention does not specifically limit the initial frame image.
  • the process of quadrilateral detection may include: performing Gaussian downsampling on the image; converting the image to a grayscale image if the input image is a color image; reducing image noise with a filtering algorithm; performing edge detection with an operator; screening the detected edges for straight lines using a Hough transform; and constructing a reasonable quadrilateral from the selected lines. An illustrative sketch is given after the note on operators below.
  • the filtering algorithm may include, but is not limited to, Gaussian filtering, median filtering, and bilateral filtering.
  • Operators performing edge detection may include, but are not limited to, Canny operators, Sobel operators.
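  • As a rough illustration of the detection pipeline just described (downsampling, grayscale conversion, denoising, edge detection, Hough line screening, quadrilateral construction), the following Python/OpenCV sketch is one possible implementation; it is not the patent's code, and the parameter values and the convex-hull quadrilateral construction are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_quadrilateral(frame):
    """Rough sketch of the quadrilateral-detection step (illustrative only)."""
    small = cv2.pyrDown(frame)                      # Gaussian downsampling
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)  # color -> grayscale
    gray = cv2.GaussianBlur(gray, (5, 5), 0)        # noise reduction (Gaussian filter)
    edges = cv2.Canny(gray, 50, 150)                # edge detection (Canny operator)

    # Screen the detected edges for straight lines with a probabilistic Hough transform.
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is None:
        return None

    # Crude "reasonable quadrilateral" construction: convex hull of the line
    # endpoints, approximated down to four corners.
    pts = lines.reshape(-1, 2).astype(np.float32)
    hull = cv2.convexHull(pts)
    approx = cv2.approxPolyDP(hull, 0.02 * cv2.arcLength(hull, True), True)
    if len(approx) != 4:
        return None
    return approx.reshape(4, 2) * 2.0               # undo the single pyrDown (x2)
```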
  • specifically, in S302, tracking the quadrilateral region of the initial frame image in the i-th frame image using the optical flow constraint equation to obtain the quadrilateral region of the i-th frame image may be implemented as: using the optical flow constraint equation, tracking the position of each stable corner point of a stable point set in the i-th frame image to obtain the quadrilateral region of the i-th frame image.
  • the set of stable points includes at least four stable corner points on the quadrilateral region of the initial frame image.
  • the set of stable points includes, but is not limited to, four vertices of a quadrilateral region of the initial frame image.
  • the optical flow constraint equation relates the motion of a pixel in three-dimensional space to its motion vector in the two-dimensional imaging plane; according to the conservation law underlying the optical flow equation, the position of the pixel in the next frame can be solved. The specific process is not described in detail in the embodiments of the present invention.
  • for example, as shown in FIG. 3A(a), quadrilateral detection is performed on the initial frame image to obtain its quadrilateral region, shown as the shaded area in the figure; the four vertices of this region are A, B, C and D.
  • in S302, for the i-th frame image, the optical flow constraint equation is used to track the quadrilateral region of the initial frame image shown in FIG. 3A, with the tracked stable point set being the vertices A, B, C, D of that quadrilateral region. Assuming the positions of A, B, C, D tracked by the optical flow constraint equation in the i-th frame image are A', B', C', D', the quadrilateral region of the i-th frame is the shaded area shown in FIG. 3A(b). A tracking sketch follows below.
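  • A minimal sketch of tracking the four vertices with pyramidal Lucas–Kanade optical flow, which is one common way of applying the optical flow constraint equation; the window size, pyramid depth, and the choice to track only the four vertices are assumptions for illustration, not details taken from the patent.

```python
import cv2
import numpy as np

def track_quadrilateral(prev_gray, curr_gray, prev_corners):
    """Track corners A, B, C, D of the initial/previous frame into the current frame.

    prev_corners: (4, 2) float32 array of corner positions.
    Returns the tracked corners A', B', C', D' and a boolean mask of points
    that were tracked successfully.
    """
    p0 = prev_corners.reshape(-1, 1, 2).astype(np.float32)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, p0, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    return p1.reshape(-1, 2), status.reshape(-1) == 1

# Hypothetical usage: corners_i, ok = track_quadrilateral(gray_init, gray_i, corners_init)
# A drop in ok.sum() is one of the signals used later for reinitialization.
```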
  • the processor 201 included in the image correcting device 20 shown in FIG. 2 executes S303.
  • the image of the ith frame is corrected according to the quadrilateral region of the image of the ith frame in S303, which may be implemented by any one of the following two solutions:
  • the process of correcting the image of the i-th frame may specifically include S401 to S403:
  • S401: according to the quadrilateral region of the i-th frame image, calculate the pose transformation matrix H_{i,i-1} between the i-th frame image and the (i-1)-th frame image in the image sequence of the i-th frame image.
  • here, the homography from the quadrilateral region of the (i-1)-th frame image to the quadrilateral region of the i-th frame image is computed; expressed as a mathematical homography matrix, it is the pose transformation matrix between the i-th frame image and the (i-1)-th frame image in the image sequence.
  • S402: calculate the estimated pose transformation matrix H_i^{pre} from the i-th frame image to the real rectangle, where H_{i-1} is the pose transformation matrix from the (i-1)-th frame image to the real rectangle.
  • optionally, H_{i-1} may be the estimated pose transformation matrix H_{i-1}^{pre} from the (i-1)-th frame image to the real rectangle, or the real pose transformation matrix H_{i-1}^{real} from the (i-1)-th frame image to the real rectangle; which one is used may be set according to actual needs and is not specifically limited in this embodiment of the present invention.
  • S403: correct the i-th frame image using H_i^{pre}, that is, apply to the quadrilateral region of the i-th frame image the transformation represented by H_i^{pre}.
  • further, if H_{i-1} in S402 is H_{i-1}^{real}, then after S403 the method may further include: calculating, according to the quadrilateral region of the i-th frame image and the corrected i-th frame image, the real pose transformation matrix H_i^{real} from the i-th frame image to the real rectangle, for use in computing H_{i+1}^{pre} when S402 is performed to correct the (i+1)-th frame image.
  • FIG. 4A illustrates the process of correcting an image sequence containing multiple frame images by the first solution described above, where H_{i-1} is taken as H_{i-1}^{real}.
  • specifically, in the process shown in FIG. 4A, when the i-th frame image is corrected, the pose transformation matrix H_{i,i-1} between it and the previous frame image and the real pose transformation matrix H_{i-1}^{real} from the previous frame image to the real rectangle are used to obtain the estimated pose transformation matrix H_i^{pre} from the i-th frame image to the real rectangle, which is used to correct the i-th frame image; the real pose transformation matrix H_i^{real} from the i-th frame image to the real rectangle is then calculated and used when correcting the (i+1)-th frame image.
  • further, when the (i+1)-th frame image is corrected, the pose transformation matrix H_{i+1,i} between it and the previous frame image and H_i^{real} are used to obtain the estimated pose transformation matrix H_{i+1}^{pre}, which is used to correct the (i+1)-th frame image; the real pose transformation matrix H_{i+1}^{real} is then calculated and used when correcting the (i+2)-th frame image.
  • likewise, when the (i+2)-th frame image is corrected, H_{i+2,i+1} and H_{i+1}^{real} are used to obtain the estimated pose transformation matrix H_{i+2}^{pre}, which is used to correct the (i+2)-th frame image; the real pose transformation matrix H_{i+2}^{real} is then calculated and used when correcting the (i+3)-th frame image. Subsequent iterations proceed in the same way and are not repeated. A sketch of this iterative scheme follows below.
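  • A minimal sketch of this first solution under stated assumptions: the corner correspondences come from the optical-flow step above, the inter-frame homography is computed here as frame i to frame i-1 so that the matrices compose by simple multiplication (the patent's own definition of H_{i,i-1} and its exact composition formula are behind unrendered figures and may differ), and OpenCV's homography routines stand in for the unspecified computation.

```python
import cv2
import numpy as np

def correct_frame_scheme1(frame_i, corners_prev, corners_i, H_prev_to_rect, out_size):
    """First solution: estimate frame i's rectifying homography from the previous frame's.

    corners_prev, corners_i : (4, 2) float32 corners in frames i-1 and i, in the same
                              order (top-left, top-right, bottom-right, bottom-left assumed).
    H_prev_to_rect          : 3x3 homography mapping frame i-1 onto the output rectangle
                              (the estimated or the real one, as discussed in the text).
    out_size                : (width, height) of the output rectangle.
    """
    # Inter-frame pose transformation, computed here as frame i -> frame i-1.
    H_i_to_prev, _ = cv2.findHomography(corners_i, corners_prev)
    H_i_pre = H_prev_to_rect @ H_i_to_prev            # estimated H_i^{pre}: frame i -> rectangle
    corrected = cv2.warpPerspective(frame_i, H_i_pre, out_size)

    # "Real" pose transformation H_i^{real}: map frame i's tracked quadrilateral
    # directly onto the output rectangle, for use when correcting frame i+1.
    w, h = out_size
    rect = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H_i_real = cv2.getPerspectiveTransform(corners_i.astype(np.float32), rect)
    return corrected, H_i_real
```

  • Chaining the estimate through the previous frame, as in FIG. 4A, keeps successive corrected frames consistent with each other, which is the stability benefit described above.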
  • in the second solution, the process of correcting the i-th frame image may specifically include: first calculating the aspect ratio of the original rectangular region according to the geometric relationship of the quadrilateral's side lengths and the quadrilateral region of the i-th frame image; then calculating the pose transformation matrix from the quadrilateral region of the i-th frame image to the original rectangle; and finally correcting the i-th frame image by applying that pose transformation matrix to its quadrilateral region. A sketch of this solution follows below.
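  • A rough sketch of the second solution; the patent only states that the aspect ratio is derived from the side-length geometry of the quadrilateral, so the simple opposite-side averaging used here, the corner-ordering convention, and the fixed output height are assumptions.

```python
import cv2
import numpy as np

def correct_frame_scheme2(frame_i, corners_i, out_height=480):
    """Second solution: rectify frame i directly from its own quadrilateral.

    corners_i: (4, 2) corners ordered top-left, top-right, bottom-right, bottom-left
               (ordering convention assumed for this sketch).
    """
    tl, tr, br, bl = corners_i.astype(np.float32)
    # Estimate the real rectangle's aspect ratio from the quadrilateral's side lengths
    # (average of opposite sides; the patent does not spell out the exact formula).
    width = 0.5 * (np.linalg.norm(tr - tl) + np.linalg.norm(br - bl))
    height = 0.5 * (np.linalg.norm(bl - tl) + np.linalg.norm(br - tr))
    out_width = max(1, int(round(out_height * width / height)))

    rect = np.float32([[0, 0], [out_width, 0],
                       [out_width, out_height], [0, out_height]])
    H_i_real = cv2.getPerspectiveTransform(np.float32([tl, tr, br, bl]), rect)
    corrected = cv2.warpPerspective(frame_i, H_i_real, (out_width, out_height))
    return corrected, H_i_real
```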
  • the image correction method provided by this embodiment of the present invention corrects images in an image sequence after tracking them with the optical flow constraint equation. Because such tracking takes about one third less time than quadrilateral detection, the time needed to correct the images in the sequence is greatly reduced; the real-time performance of image correction is improved, the processing efficiency of the device is improved, and the burden on the device is reduced.
  • the method may further include: S304:
  • the processor 201 included in the image correcting device 20 shown in FIG. 2 executes S304 through the display 204.
  • the corrected ith frame image may be presented to the user immediately after S303.
  • the S304 may be specifically implemented as: continuously presenting the first frame image through the N-th frame image of the corrected image sequence to the user.
  • the first frame image to the Nth frame image of the corrected image sequence may be continuously presented to the user in a video or dynamic image manner.
  • the method may further include S305:
  • the processor 201 included in the image correcting device 20 shown in Fig. 2 executes S305.
  • the reinitialization condition may include: the frame-number difference from the initial frame is greater than or equal to a first preset threshold;
  • or, the number of points obtained by tracking the quadrilateral region of the initial frame using the optical flow constraint equation is less than or equal to a second preset threshold;
  • or, the time elapsed since the initial frame was corrected is greater than or equal to a third preset threshold (a check combining these conditions is sketched after the notes below).
  • the value of the first preset threshold or the second preset threshold or the third preset threshold may be configured according to actual requirements, which is not specifically limited in this embodiment of the present invention.
  • re-initialization condition may be set according to actual requirements, which is not specifically limited in this embodiment of the present invention.
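  • The reinitialization check could combine these conditions as in the following sketch; the threshold values are illustrative placeholders, not values prescribed by the patent.

```python
def needs_reinitialization(frame_gap, n_tracked_points, seconds_since_init,
                           max_frame_gap=30, min_tracked=3, max_seconds=2.0):
    """Return True when optical-flow tracking should be re-seeded from a fresh detection.

    frame_gap          : frames elapsed since the current initial frame.
    n_tracked_points   : corner points still tracked successfully.
    seconds_since_init : time elapsed since the initial frame was corrected.
    The three defaults stand in for the first/second/third preset thresholds.
    """
    return (frame_gap >= max_frame_gap
            or n_tracked_points <= min_tracked
            or seconds_since_init >= max_seconds)
```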
  • the method may further include:
  • the processor 201 included in the image correcting device 20 shown in Fig. 2 executes S301a.
  • specifically, if the i-th frame image is not the initial frame image, S302 and S303 are performed to correct the i-th frame image.
  • the method may further include:
  • the processor 201 included in the image correcting device 20 shown in FIG. 2 executes S306.
  • S306 may be implemented by using either of the following two solutions:
  • S302 and S303 are executed to correct the image of the ith frame, and then quadrilateral detection is performed on the ith frame image, and the quadrilateral region of the ith frame image is obtained as a quadrilateral region of the initial frame for optical flow tracking of the subsequent frame image.
  • performing S302 and S303 to correct the i-th frame image, and performing quadrilateral detection on the i-th frame image to obtain its quadrilateral region as the quadrilateral region of the initial frame, may be done at the same time or sequentially; this is not specifically limited in the embodiment of the present invention.
  • FIG. 5A shows a before-and-after comparison of a captured video sequence containing multiple frame images corrected by the image correction method provided by the embodiment of the present invention.
  • in FIG. 5A, the first row shows consecutive frame images of the video sequence before correction, and the second row shows each frame image of the first row after correction by the image correction method provided by the embodiment of the present invention.
  • the image correction device includes hardware structures and/or software modules corresponding to the execution of the respective functions in order to implement the above functions.
  • the present invention can be implemented in a combination of hardware or hardware and computer software in combination with the elements and algorithm steps of the various examples described in the embodiments disclosed herein. Whether a function is implemented in hardware or computer software to drive hardware depends on the specific application and design constraints of the solution. A person skilled in the art can use different methods for implementing the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present invention.
  • in the embodiment of the present invention, the image correction device may be divided into functional modules according to the foregoing method examples.
  • each function module may be divided according to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of the module in the embodiment of the present invention is schematic, and is only a logical function division, and the actual implementation may have another division manner.
  • FIG. 6 shows a possible structural diagram of the image correcting device 60 involved in the above embodiment.
  • the image correcting device 60 includes a capturing unit 601, an acquiring unit 602, and a correcting unit 603.
  • the capturing unit 601 is configured to support the image correcting device 60 to perform the process S301 in FIG. 3 or FIG. 5;
  • the obtaining unit 602 is configured to support the image correcting device 60 to perform the process S302 in FIG. 3 or FIG. 5;
  • the correcting unit 603 is configured to support the image correcting device 60 in performing the process S303 in FIG. 3 or FIG. 5. All related content of the steps in the foregoing method embodiments may be cited in the functional descriptions of the corresponding functional modules, and details are not described herein again.
  • FIG. 7 shows a possible structural diagram of the image correcting device 60 involved in the above embodiment.
  • the image correction device 60 may include a processing module 701, a communication module 702, and a capture module 703.
  • the processing module 701 is configured to control and manage the actions of the image correcting device 60.
  • for example, the processing module 701 is configured to support the image correcting device 60, through the capturing module 703, in performing the process S301 in FIG. 3 or FIG. 5; the processing module 701 is further configured to support the image correcting device 60 in performing the processes S302 and S303 in FIG. 3 or FIG. 5, and/or other processes of the techniques described herein.
  • Communication module 702 is used to support communication of image correction device 60 with other network entities.
  • the image correction device 60 may further include a storage module 704 for storing program codes and data of the image correction device 60.
  • the processing module 701 may be the processor 201 in the physical structure of the image correcting device 20 shown in FIG. 2, and may be a processor or a controller.
  • it can be a CPU, a general purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It is possible to implement or carry out the various illustrative logical blocks, modules and circuits described in connection with the present disclosure.
  • the processor 201 can also be a combination of computing functions, such as one or more microprocessor combinations, a combination of a DSP and a microprocessor, and the like.
  • the communication module 702 can be a communication port or can be a transceiver, a transceiver circuit, a communication interface, or the like.
  • the capture module 703 may be the camera 203 in the physical structure of the image correction device 20 shown in FIG. 2, and may be a camera or a camera module.
  • the storage module 704 may be the memory 202 in the physical structure of the image correction device 20 shown in FIG. 2.
  • the image correcting device 60 may be the image correcting device 20 shown in FIG.
  • the steps of a method or algorithm described in connection with the present disclosure may be implemented in hardware, or by a processor executing software instructions.
  • the software instructions may consist of corresponding software modules, which may be stored in a RAM, a flash memory, a ROM, an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC. Additionally, the ASIC can be located in a core network interface device.
  • the processor and the storage medium may also exist as discrete components in the core network interface device.
  • the functions described herein can be implemented in hardware, software, firmware, or any combination thereof.
  • the functions may be stored in a computer readable medium or transmitted as one or more instructions or code on a computer readable medium.
  • Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another.
  • a storage medium may be any available media that can be accessed by a general purpose or special purpose computer.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division; in actual implementation there may be another division manner, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may be physically included separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the above-described integrated unit implemented in the form of a software functional unit can be stored in a computer readable storage medium.
  • the software functional units described above are stored in a storage medium and include instructions for causing a computer device (which may be a personal computer, server, or network device, etc.) to perform portions of the steps of the methods described in various embodiments of the present invention.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present invention provide an image correction method and apparatus, relating to the field of image processing, which correct images quickly and with a light processing burden and improve the real-time performance of image sequence correction. The solution provided by the embodiments of the present invention includes: capturing an i-th frame image, where i is a positive integer greater than or equal to 1; using the optical flow constraint equation, tracking the quadrilateral region of an initial frame image in the i-th frame image to obtain the quadrilateral region of the i-th frame image; and correcting the i-th frame image according to its quadrilateral region. The present invention is used for image correction.

Description

Image correction method and apparatus
Technical Field
The present invention relates to the field of image processing, and in particular to an image correction method and apparatus.
Background
A traditional scanner uses photoelectric and digital processing technology to convert static image information (for example, paper documents and drawings) into digital signals by scanning, for display, editing and storage on a computer.
With the development and popularization of the mobile Internet and smart terminals, smart terminals with built-in cameras are convenient and fast and easy to share anytime and anywhere, and are gradually replacing the traditional scanner as the preferred way of obtaining electronic data. A smart terminal replacing the scanner can record not only conventional static image information but also, at any time, dynamic image information comprising an image sequence, such as slides, handouts and television pictures that cannot be placed in a scanner.
However, when capturing images, the shooting angle and lighting conditions inevitably impose limitations, so the captured image suffers projection distortion and contains non-target regions. To solve this problem, the current conventional processing solution is to correct the captured image using algorithms such as quadrilateral detection and trapezoidal (keystone) correction. The quadrilateral detection algorithm uses an edge extraction algorithm from computer vision to detect the rectangular edges of the target image and is used to remove the non-target region outside the rectangular border. The trapezoidal correction algorithm performs projection correction on the rectangular region obtained by the quadrilateral detection algorithm, correcting the projection distortion caused by the shooting angle and producing a higher-quality target image.
At present, the correction solution for dynamic image information containing an image sequence usually performs quadrilateral detection and trapezoidal correction on every frame image contained in the dynamic image information. When the number of image frames is large, the correction process takes too long, the system burden is heavy, and real-time performance is poor.
Summary of the Invention
Embodiments of the present invention provide an image correction method and apparatus that correct images quickly and with a light burden, improving the real-time performance of image sequence correction.
To achieve the foregoing objective, the embodiments of the present invention adopt the following technical solutions:
According to a first aspect of this application, an image correction method is provided. The method can be applied to a terminal that captures images. The method specifically includes: step 1, capturing an i-th frame image, where i is a positive integer greater than or equal to 1; step 2, using the optical flow constraint equation, tracking the quadrilateral region of an initial frame image in the i-th frame image to obtain the quadrilateral region of the i-th frame image; step 3, correcting the i-th frame image according to the quadrilateral region of the i-th frame image.
In the image correction method provided by this application, images in the image sequence are corrected after being tracked with the optical flow constraint equation. Because tracking with the optical flow constraint equation takes about one third less time than quadrilateral detection, the time needed to correct the images in the image sequence is greatly reduced; the real-time performance of image correction is improved, the processing efficiency of the device is improved, and the burden on the device is reduced.
The quadrilateral region of the initial frame image may be a predefined fixed region, or a quadrilateral region obtained by performing quadrilateral detection on the initial frame.
With reference to the first aspect, in a possible implementation, a solution for correcting the i-th frame image according to its quadrilateral region is provided, which specifically includes: calculating, according to the quadrilateral region of the i-th frame image, the pose transformation matrix H_{i,i-1} between the i-th frame image and the (i-1)-th frame image in the image sequence in which the i-th frame image is located; calculating the estimated pose transformation matrix H_i^{pre} from the i-th frame image to the real rectangle, where H_{i-1} is the pose transformation matrix from the (i-1)-th frame image to the real rectangle; and correcting the i-th frame image using H_i^{pre}. When correcting an image, the pose transformation matrix from the current image to the real rectangle is estimated from the pose transformation matrix of the previous frame image to the real rectangle, which avoids jitter between different frames caused by hand shake or lighting changes and improves the stability of image sequence correction.
With reference to the first aspect or any one of the foregoing possible implementations, in another possible implementation, another solution for correcting the i-th frame image according to its quadrilateral region is provided, which specifically includes: calculating, according to the geometric relationship of the quadrilateral's side lengths, the real pose transformation matrix H_i^{real} from the quadrilateral region of the i-th frame image to the real rectangular region, and correcting the i-th frame image using H_i^{real}. When correcting an image, the pose transformation matrix from the current image to the real rectangle is estimated directly; this is simple to implement, does not require saving intermediate quantities from the correction of other frames, and avoids the storage they would occupy.
With reference to the first aspect or any one of the foregoing possible implementations, in another possible implementation, to improve the flexibility of the solution, the initial frame image may be determined according to actual needs. Optionally, the initial frame image may be the first frame image of the image sequence in which the i-th frame image is located.
With reference to the first aspect or any one of the foregoing possible implementations, in another possible implementation, after the i-th frame image is corrected according to its quadrilateral region, the image correction method provided by this application may further include: if the i-th frame satisfies a reinitialization condition, updating the initial frame image to the (i+1)-th frame of the image sequence. The reinitialization condition corrects the accumulated error of the optical flow tracking method and improves the robustness of the image correction process.
It should be noted that the reinitialization condition may be defined according to actual needs; this application does not specifically limit it.
With reference to the first aspect or any one of the foregoing possible implementations, in another possible implementation, the reinitialization condition is defined by the frame-number difference between the current frame image and the initial frame image, determining from the time dimension whether to reinitialize. The reinitialization condition may include: the frame-number difference from the initial frame is greater than or equal to a first preset threshold.
Further optionally, still determining from the time dimension whether to reinitialize, the reinitialization condition may instead include: the time difference between the current moment and the moment at which the initial frame was corrected is greater than or equal to a preset threshold.
With reference to the first aspect or any one of the foregoing possible implementations, in another possible implementation, the reinitialization condition is defined by the number of tracked points in the current frame image, determining from the dimension of tracking quality whether to reinitialize, so that the timing of reinitialization better matches the correction accuracy requirements. The reinitialization condition may include: the number of points obtained by tracking the quadrilateral region of the initial frame using the optical flow constraint equation is less than or equal to a second preset threshold.
It should be noted that each of the foregoing preset thresholds may be set according to actual needs; this application does not specifically limit them.
With reference to the first aspect or any one of the foregoing possible implementations, in another possible implementation, if the image correction method is configured to reinitialize when the reinitialization condition is satisfied, then after capturing the i-th frame image, the image correction method provided by this application may further include: determining whether the i-th frame image is the initial frame image; if the i-th frame image is not the initial frame image, performing step 2 and step 3 to correct the i-th frame image. This allows different correction processing for the initial frame image and non-initial frame images.
With reference to the first aspect or any one of the foregoing possible implementations, in another possible implementation, after determining whether the i-th frame image is the initial frame image, if the i-th frame image is the initial frame image, correcting the i-th frame image according to the correction method for the initial frame image may specifically include: performing quadrilateral detection on the i-th frame image to obtain its quadrilateral region, calculating the real pose transformation matrix H_i^{real} from the quadrilateral region of the i-th frame image to the real rectangular region, and correcting the i-th frame image using H_i^{real}.
With reference to the first aspect or any one of the foregoing possible implementations, in another possible implementation, after determining whether the i-th frame image is the initial frame image, if the i-th frame image is the initial frame image, correcting the i-th frame image according to the correction method for the initial frame image may specifically include: first performing step 2 and step 3 to correct the i-th frame image, and then performing quadrilateral detection on the i-th frame image to obtain its quadrilateral region as the quadrilateral region of the initial frame.
With reference to the first aspect or any one of the foregoing possible implementations, in another possible implementation, to keep the image correction process simple, H_{i-1} may be the estimated pose transformation matrix H_{i-1}^{pre} from the (i-1)-th frame image to the real rectangle.
With reference to the first aspect or any one of the foregoing possible implementations, in another possible implementation, to make the image correction result more accurate, H_{i-1} may be the real pose transformation matrix H_{i-1}^{real} from the (i-1)-th frame image to the real rectangle.
With reference to the first aspect or any one of the foregoing possible implementations, in another possible implementation, using the optical flow constraint equation to track the quadrilateral region of the initial frame image in the i-th frame image and obtain the quadrilateral region of the i-th frame image may specifically be implemented as: using the optical flow constraint equation, tracking the position of each stable corner point of a stable point set in the i-th frame image to obtain the quadrilateral region of the i-th frame image, where the stable point set includes at least four stable corner points on the quadrilateral region of the initial frame image.
With reference to the first aspect or any one of the foregoing possible implementations, in another possible implementation, after correcting the i-th frame image according to its quadrilateral region, the image correction method provided by this application may further include: presenting the corrected i-th frame image to the user, achieving real-time correction and output to the user.
With reference to the first aspect or any one of the foregoing possible implementations, in another possible implementation, after correcting the i-th frame image according to its quadrilateral region, the image correction method provided by this application may further include: when i equals N, continuously presenting the corrected first frame image through N-th frame image of the image sequence to the user, where N is greater than or equal to 2 and the image sequence includes N frame images. In this way the image sequence is corrected frame by frame and then output to the user as a whole.
According to a second aspect, an embodiment of the present invention provides an image correction apparatus that can implement the functions in the foregoing method examples; the functions may be implemented by hardware or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the foregoing functions.
With reference to the second aspect, in a possible implementation, the structure of the image correction apparatus includes a processor and a transceiver, and the processor is configured to support the image correction apparatus in performing the corresponding functions of the foregoing method. The transceiver is used to support communication between the image correction apparatus and other devices. The image correction apparatus may further include a memory coupled to the processor, which stores the program instructions and data necessary for the image correction apparatus.
According to a third aspect, an embodiment of the present invention provides a computer storage medium for storing computer software instructions used by the foregoing image correction apparatus, which includes a program designed to perform the foregoing aspects.
The solutions provided in the second and third aspects are used to implement the image correction method provided in the first aspect and therefore achieve the same beneficial effects, which are not repeated here.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings needed for describing the embodiments or the prior art. Apparently, the drawings described below are merely some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application scenario of an image correction method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an image correction apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of an image correction method according to an embodiment of the present invention;
FIG. 3A is a schematic diagram of tracking results of the optical flow constraint equation according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of a method for correcting an i-th frame image according to the quadrilateral region of the i-th frame image according to an embodiment of the present invention;
FIG. 4A is a schematic diagram of an image correction process according to an embodiment of the present invention;
FIG. 5 is a schematic flowchart of another image correction method according to an embodiment of the present invention;
FIG. 5A is a schematic diagram of an image correction result according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of another image correction apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of yet another image correction apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
In addition, the term "and/or" in this specification merely describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may indicate three cases: only A exists, both A and B exist, and only B exists. The character "/" in this specification generally indicates an "or" relationship between the associated objects.
Before the embodiments of the present invention are described, the application environment of image correction is described.
As shown in FIG. 1, an application environment of image correction is illustrated. The application environment includes a playback device 1 that plays a dynamic picture, and a terminal 2 that captures the dynamic picture played by the playback device 1 to obtain an image sequence.
Specifically, the terminal 2 shoots the dynamic picture played by the playback device 1 by invoking a built-in camera; the picture shot by the terminal 2 is generally larger than the source dynamic picture and has a certain tilt angle. The terminal 2 invokes a built-in image correction apparatus to correct the shot picture in real time, and after correcting the captured source dynamic picture, outputs and presents it to the user in the form of a short video or an animated image.
The playback device 1 may be a device for playing dynamic pictures, such as a television or a projector. The embodiment of the present invention does not specifically limit the type of the playback device 1.
The terminal 2 may be user equipment (UE), a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), an e-book reader, a mobile TV, a wearable device, or the like. The embodiment of the present invention does not specifically limit the type of the terminal 2 either.
Based on this, the basic principle of the present invention is: an image correction apparatus built into the terminal performs quadrilateral detection on the initial frame of the captured image sequence to obtain a quadrilateral region for correction, and in frames other than the initial frame uses the optical flow constraint to track the quadrilateral region of the initial frame and performs correction once the quadrilateral region is obtained. Because the optical flow tracking method is fast, the real-time performance of the whole correction process is greatly improved and the burden on the terminal is reduced.
FIG. 2 is a schematic structural diagram of an image correction apparatus 20 related to the embodiments of the present invention. The image correction apparatus 20 is built into the terminal 2 in the application scenario shown in FIG. 1 and may be part or all of the terminal 2.
As shown in FIG. 2, the image correction apparatus 20 may include a processor 201, a memory 202, a camera 203 and a display 204.
The components of the image correction apparatus 20 are described below with reference to FIG. 2.
The memory 202 may be a volatile memory, such as a random-access memory (RAM); or a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); or a combination of the foregoing types of memory, and is used to store the related application programs and configuration files that implement the method of the present invention.
The processor 201 is the control center of the image correction apparatus 20 and may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention, for example one or more digital signal processors (DSP) or one or more field-programmable gate arrays (FPGA). The processor 201 may perform various functions of the image correction apparatus 20 by running or executing software programs and/or modules stored in the memory 202 and invoking data stored in the memory 202.
The camera 203 may be a camera or the like and is used to capture an image sequence including at least one frame image.
The display 204 may be a user interaction interface and is used to present the corrected image to the user.
The embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The terms used in the embodiments of the present invention are first explained as follows:
Quadrilateral region: the position in the captured image of the document, video picture or slide/handout, that is, the region enclosed by its outer edge; because of the shooting angle, this region is generally an irregular quadrilateral. The quadrilateral region is generally obtained by detecting the shot image with an edge detection algorithm from computer vision.
Rectangular region: the real-world length and width of the document, video picture or slide/handout in the captured image; this region is generally a regular rectangle. Its actual length and width usually cannot be measured directly, so an algorithm is needed to estimate the true aspect ratio of the rectangular region.
Pose: the different forms that the document, video picture or slide/handout takes in captured images; it is a relative concept. A pose includes the transformation from one form to another, which can be characterized mathematically by a homography matrix, called the pose transformation matrix.
When the quadrilateral regions of two images have been obtained, the pose transformation matrix between the two images can be calculated.
Applying, to the quadrilateral of an image, the transformation represented by the pose transformation matrix from that image to the real rectangle corrects the image.
For example, capturing an image is a pose change from the rectangular region to a quadrilateral region at a given moment, and the homography matrix from the rectangle to the quadrilateral is called the quadrilateral transformation pose. Similarly, the positions of the same content in the first frame image and in the second frame image are related by another pose change, which can also be represented by a pose transformation matrix, called the pose transformation matrix between the first frame image and the second frame image.
In one aspect, an embodiment of the present invention provides an image correction method, which is applied to the image correction apparatus 20 shown in FIG. 2 and the application scenario shown in FIG. 1.
It should be noted that the image correction method provided by the embodiment of the present invention uses the same correction process for every frame in the image sequence. The following describes only the process of correcting the i-th frame image of the image sequence; the other frames are not described one by one.
As shown in FIG. 3, the method may include:
S301. Capture an i-th frame image.
Specifically, S301 is performed by the camera 203 included in the image correction apparatus 20 shown in FIG. 2.
Here, i is a positive integer greater than or equal to 1.
S302. Using the optical flow constraint equation, track the quadrilateral region of the initial frame image in the i-th frame image to obtain the quadrilateral region of the i-th frame image.
Specifically, S302 is performed by the processor 201 included in the image correction apparatus 20 shown in FIG. 2.
Optionally, in a possible implementation, the quadrilateral region of the initial frame image may be a predefined fixed quadrilateral region. In this implementation, the image correction apparatus 20 may associate a fixed quadrilateral region with a still mode; when the user selects the still mode of the apparatus 20, the predefined fixed quadrilateral region used in the image correction process is determined as the fixed quadrilateral region corresponding to the still mode.
Different modes may also be preset in the apparatus 20 to correspond to different quadrilateral regions, and the user selects a mode to determine the fixed quadrilateral region. The embodiment of the present invention does not specifically limit this.
Optionally, the quadrilateral region of the initial frame image may be obtained by performing quadrilateral detection on the initial frame image. Correspondingly, before S302, quadrilateral detection has already been performed on the initial frame image and its quadrilateral region has been obtained.
Optionally, the initial frame image may be a frame image from a debugging stage before the image sequence is captured, or may be the first frame image of the image sequence. Of course, the initial frame image may also be set according to actual needs. The embodiment of the present invention does not specifically limit the initial frame image.
For example, the quadrilateral detection process may include: performing Gaussian downsampling on the image; converting the image to a grayscale image if the input image is a color image; reducing image noise with a filtering algorithm; performing edge detection with an operator; screening the detected edges for straight lines using a Hough transform; and constructing a reasonable quadrilateral from the selected lines.
The filtering algorithm may include but is not limited to Gaussian filtering, median filtering and bilateral filtering. The operator for edge detection may include but is not limited to the Canny operator and the Sobel operator.
It should be noted that the foregoing example does not specifically limit the quadrilateral detection process.
Specifically, in S302, using the optical flow constraint equation to track the quadrilateral region of the initial frame image in the i-th frame image and obtain the quadrilateral region of the i-th frame image may be implemented as: using the optical flow constraint equation, tracking the position of each stable corner point of a stable point set in the i-th frame image to obtain the quadrilateral region of the i-th frame image.
The stable point set includes at least four stable corner points on the quadrilateral region of the initial frame image, including but not limited to the four vertices of the quadrilateral region of the initial frame image.
The optical flow constraint equation relates the motion of a pixel in three-dimensional space to its motion vector in the two-dimensional imaging plane; according to the conservation law of the optical flow equation, the position of the pixel in the next frame can be solved. The specific process is not described in detail in the embodiments of the present invention.
For example, as shown in FIG. 3A(a), quadrilateral detection is performed on the initial frame image to obtain its quadrilateral region, shown as the shaded area in the figure; the four vertices of this region are A, B, C and D.
In S302, for the i-th frame image, the optical flow constraint equation is used to track the quadrilateral region of the initial frame image shown in FIG. 3A, with the tracked stable point set being A, B, C, D of the quadrilateral region of the initial frame image. Assuming the positions of A, B, C, D tracked by the optical flow constraint equation in the i-th frame image are A', B', C', D', the quadrilateral region of the i-th frame is the shaded area shown in FIG. 3A(b).
S303. Correct the i-th frame image according to the quadrilateral region of the i-th frame image.
Specifically, S303 is performed by the processor 201 included in the image correction apparatus 20 shown in FIG. 2.
Optionally, in S303, correcting the i-th frame image according to its quadrilateral region may be implemented by either of the following two solutions:
First solution:
In the first solution, as shown in FIG. 4, the process of correcting the i-th frame image according to its quadrilateral region may specifically include S401 to S403:
S401. According to the quadrilateral region of the i-th frame image, calculate the pose transformation matrix H_{i,i-1} between the i-th frame image and the (i-1)-th frame image in the image sequence of the i-th frame image.
Here, the homography from the quadrilateral region of the (i-1)-th frame image to the quadrilateral region of the i-th frame image is computed; expressed as a mathematical homography matrix, it is the pose transformation matrix between the i-th frame image and the (i-1)-th frame image in the image sequence of the i-th frame image.
S402. Calculate the estimated pose transformation matrix H_i^{pre} from the i-th frame image to the real rectangle.
Here, H_{i-1} is the pose transformation matrix from the (i-1)-th frame image to the real rectangle.
Optionally, H_{i-1} may be the estimated pose transformation matrix H_{i-1}^{pre} from the (i-1)-th frame image to the real rectangle, or the real pose transformation matrix H_{i-1}^{real} from the (i-1)-th frame image to the real rectangle.
It should be noted that whether the specific content of H_{i-1} is H_{i-1}^{pre} or H_{i-1}^{real} may be set according to actual needs, and this is not specifically limited in the embodiment of the present invention.
S403. Correct the i-th frame image using H_i^{pre}.
Specifically, the transformation represented by H_i^{pre} is applied to the quadrilateral region of the i-th frame image to complete the correction of the i-th frame image.
Further, if H_{i-1} in S402 is H_{i-1}^{real}, then after S403 the method may further include: calculating, according to the quadrilateral region of the i-th frame image and the corrected i-th frame image, the real pose transformation matrix H_i^{real} from the i-th frame image to the real rectangle, for use in computing H_{i+1}^{pre} when S402 is performed to correct the (i+1)-th frame image.
For example, FIG. 4A illustrates the process of correcting an image sequence containing multiple frame images by the first solution described above, where H_{i-1} is H_{i-1}^{real}.
Specifically, in the process shown in FIG. 4A, when the i-th frame image is corrected, the pose transformation matrix H_{i,i-1} between it and the previous frame image and the real pose transformation matrix H_{i-1}^{real} from the previous frame image to the real rectangle are used to obtain the estimated pose transformation matrix H_i^{pre} from the i-th frame image to the real rectangle, which is used to correct the i-th frame image; the real pose transformation matrix H_i^{real} from the i-th frame image to the real rectangle is then calculated and used to correct the (i+1)-th frame image.
Further, when the (i+1)-th frame image is corrected, the pose transformation matrix H_{i+1,i} between it and the previous frame image and the real pose transformation matrix H_i^{real} from the previous frame image to the real rectangle are used to obtain the estimated pose transformation matrix H_{i+1}^{pre} from the (i+1)-th frame image to the real rectangle, which is used to correct the (i+1)-th frame image; the real pose transformation matrix H_{i+1}^{real} from the (i+1)-th frame image to the real rectangle is then calculated and used to correct the (i+2)-th frame image.
Further, when the (i+2)-th frame image is corrected, the pose transformation matrix H_{i+2,i+1} between it and the previous frame image and the real pose transformation matrix H_{i+1}^{real} from the previous frame image to the real rectangle are used to obtain the estimated pose transformation matrix H_{i+2}^{pre} from the (i+2)-th frame image to the real rectangle, which is used to correct the (i+2)-th frame image; the real pose transformation matrix H_{i+2}^{real} from the (i+2)-th frame image to the real rectangle is then calculated and used to correct the (i+3)-th frame image. Subsequent iterations proceed in the same way and are not repeated.
Second solution:
In the second solution, the process of correcting the i-th frame image according to its quadrilateral region may specifically include: first calculating the aspect ratio of the original rectangular region according to the geometric relationship of the quadrilateral's side lengths and the quadrilateral region of the i-th frame image; then calculating the pose transformation matrix from the quadrilateral region of the i-th frame image to the original rectangle; and finally correcting the i-th frame image by applying, to its quadrilateral region, the pose transformation matrix from the quadrilateral region of the i-th frame image to the original rectangle.
It should be noted that S301 to S303 above describe only the process of correcting the i-th frame image; whenever a frame image is obtained, the process of S301 to S303 is performed to correct it, and the embodiment of the present invention does not repeat this for each frame.
The image correction method provided by the embodiment of the present invention corrects images in an image sequence after tracking them with the optical flow constraint equation. Because tracking with the optical flow constraint equation takes about one third less time than quadrilateral detection, the time needed to correct the images in the image sequence is greatly reduced; the real-time performance of image correction is improved, the processing efficiency of the device is improved, and the burden on the device is reduced.
Optionally, as shown in FIG. 5, after S303 the method may further include S304:
S304. Present the corrected i-th frame image to the user.
Specifically, S304 is performed by the processor 201 included in the image correction apparatus 20 shown in FIG. 2 through the display 204.
Optionally, the corrected i-th frame image may be presented to the user immediately after S303.
Optionally, after S303, if i equals N, where N is greater than or equal to 2 and the image sequence includes N frame images, S304 may specifically be implemented as: continuously presenting the corrected first frame image through N-th frame image of the image sequence to the user.
Optionally, when S304 is performed, the corrected first frame image through N-th frame image of the image sequence may be presented to the user continuously as a video or an animated image.
Further, the initial frame image may be updated during the correction process. As shown in FIG. 5, after S303 the method may further include S305:
S305. If the i-th frame satisfies the reinitialization condition, update the initial frame image to the (i+1)-th frame of the image sequence.
Specifically, S305 is performed by the processor 201 included in the image correction apparatus 20 shown in FIG. 2.
Optionally, the reinitialization condition may include: the frame-number difference from the initial frame is greater than or equal to a first preset threshold; or the number of points obtained by tracking the quadrilateral region of the initial frame using the optical flow constraint equation is less than or equal to a second preset threshold; or the time elapsed since the initial frame was corrected is greater than or equal to a third preset threshold.
The values of the first, second and third preset thresholds may be configured according to actual needs and are not specifically limited in the embodiment of the present invention. The smaller the first, second or third preset threshold is set, the higher the accuracy of image correction but the lower the real-time performance; the larger the threshold is set, the better the real-time performance of image correction but the lower the accuracy.
It should be noted that the reinitialization condition may be set according to actual needs and is not specifically limited in the embodiment of the present invention.
Further, as shown in FIG. 5, after S301 the method may further include:
S301a. Determine whether the i-th frame image is the initial frame image.
Specifically, S301a is performed by the processor 201 included in the image correction apparatus 20 shown in FIG. 2.
Specifically, if the i-th frame image is not the initial frame image, S302 and S303 are performed to correct the i-th frame image.
Further, after S301a, if the i-th frame image is the initial frame image, the method may further include:
S306. Correct the i-th frame image and perform initialization.
Specifically, S306 is performed by the processor 201 included in the image correction apparatus 20 shown in FIG. 2.
Optionally, S306 may be implemented by either of the following two solutions:
Solution A:
Perform quadrilateral detection on the i-th frame image to obtain its quadrilateral region, calculate the real pose transformation matrix H_i^{real} from the quadrilateral region of the i-th frame image to the real rectangular region, and correct the i-th frame image using H_i^{real}.
It should be noted that the specific execution of Solution A is the same as the quadrilateral detection described in S302 and the second solution in S303, and is not repeated here.
Solution B:
First perform S302 and S303 to correct the i-th frame image, and then perform quadrilateral detection on the i-th frame image to obtain its quadrilateral region as the quadrilateral region of the initial frame, for optical flow tracking of subsequent frame images.
It should be noted that, in Solution B, performing S302 and S303 to correct the i-th frame image, and performing quadrilateral detection on the i-th frame image to obtain its quadrilateral region as the quadrilateral region of the initial frame, may be done at the same time or sequentially; the embodiment of the present invention does not specifically limit this.
It should also be noted that the embodiment of the present invention does not specifically limit the execution order of the steps included in FIG. 5; FIG. 5 merely shows one possible execution order as an example. A sketch of a per-frame driver loop combining these steps follows below.
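As an illustration of how S301, S301a, S302, S303, S305 and S306 could fit together for each captured frame, the following sketch reuses the hypothetical helper functions from the earlier sketches (detect_quadrilateral, track_quadrilateral, correct_frame_scheme2, needs_reinitialization); it is an assumed arrangement for illustration, not the patent's reference implementation.

```python
import cv2

def correct_sequence(frames):
    """Hypothetical per-frame driver combining the earlier sketches (Solution A style init).

    Assumes detect_quadrilateral returns corners in the TL, TR, BR, BL order
    expected by correct_frame_scheme2.
    """
    corrected = []
    init_gray, init_corners, frames_since_init = None, None, 0

    for frame in frames:                                    # S301: capture frame i
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if init_corners is None:                            # S301a: is this an initial frame?
            init_corners, init_gray = detect_quadrilateral(frame), gray   # S306: detect + init
            frames_since_init = 0
            corners = init_corners
        else:                                               # S302: optical-flow tracking
            corners, ok = track_quadrilateral(init_gray, gray, init_corners)
            frames_since_init += 1
            if needs_reinitialization(frames_since_init, int(ok.sum()), 0.0):
                init_corners = None                         # S305: reinitialize on next frame
        if corners is not None:
            out, _ = correct_frame_scheme2(frame, corners)  # S303: correct frame i
            corrected.append(out)                           # S304: collect / present output
    return corrected
```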
For example, FIG. 5A shows a before-and-after comparison of a captured video sequence containing multiple frame images corrected by the image correction method provided by the embodiment of the present invention.
In FIG. 5A, the first row shows consecutive frame images of the video sequence before correction, and the second row shows each frame image of the first row after correction by the image correction method provided by the embodiment of the present invention.
The foregoing mainly describes the solutions provided by the embodiments of the present invention from the perspective of the working process of the image correction apparatus. It can be understood that, to implement the foregoing functions, the image correction apparatus includes hardware structures and/or software modules corresponding to each function. A person skilled in the art should easily be aware that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed in this specification, the present invention can be implemented by hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present invention.
In the embodiments of the present invention, the image correction apparatus may be divided into functional modules according to the foregoing method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present invention is schematic and is merely a logical function division; there may be another division manner in actual implementation.
When each functional module is divided by corresponding function, FIG. 6 shows a possible schematic structural diagram of the image correction apparatus 60 involved in the foregoing embodiments. The image correction apparatus 60 includes a capturing unit 601, an obtaining unit 602 and a correcting unit 603. The capturing unit 601 is configured to support the image correction apparatus 60 in performing the process S301 in FIG. 3 or FIG. 5; the obtaining unit 602 is configured to support the image correction apparatus 60 in performing the process S302 in FIG. 3 or FIG. 5; the correcting unit 603 is configured to support the image correction apparatus 60 in performing the process S303 in FIG. 3 or FIG. 5. All related content of the steps in the foregoing method embodiments may be cited in the functional descriptions of the corresponding functional modules, and details are not described herein again.
When an integrated unit is used, FIG. 7 shows a possible schematic structural diagram of the image correction apparatus 60 involved in the foregoing embodiments. The image correction apparatus 60 may include a processing module 701, a communication module 702 and a capturing module 703. The processing module 701 is configured to control and manage the actions of the image correction apparatus 60. For example, the processing module 701 is configured to support the image correction apparatus 60, through the capturing module 703, in performing the process S301 in FIG. 3 or FIG. 5; the processing module 701 is further configured to support the image correction apparatus 60 in performing the processes S302 and S303 in FIG. 3 or FIG. 5, and/or other processes of the techniques described herein. The communication module 702 is configured to support communication between the image correction apparatus 60 and other network entities. The image correction apparatus 60 may further include a storage module 704, configured to store program code and data of the image correction apparatus 60.
The processing module 701 may be the processor 201 in the physical structure of the image correction apparatus 20 shown in FIG. 2, and may be a processor or a controller, for example a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules and circuits described in connection with the disclosure of the present invention. The processor 201 may also be a combination implementing a computing function, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 702 may be a communication port, or may be a transceiver, a transceiver circuit, a communication interface, or the like. The capturing module 703 may be the camera 203 in the physical structure of the image correction apparatus 20 shown in FIG. 2, and may be a camera or a camera module. The storage module 704 may be the memory 202 in the physical structure of the image correction apparatus 20 shown in FIG. 2.
When the processing module 701 is a processor, the capturing module 703 is a camera, and the storage module 704 is a memory, the image correction apparatus 60 involved in FIG. 7 of the embodiment of the present invention may be the image correction apparatus 20 shown in FIG. 2.
The steps of the method or algorithm described in connection with the disclosure of the present invention may be implemented in hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in a RAM, a flash memory, a ROM, an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from and write information to the storage medium. Of course, the storage medium may also be a component of the processor. The processor and the storage medium may be located in an ASIC. In addition, the ASIC may be located in a core network interface device. Of course, the processor and the storage medium may also exist as discrete components in the core network interface device.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
A person skilled in the art should be aware that, in one or more of the foregoing examples, the functions described in the present invention may be implemented by hardware, software, firmware or any combination thereof. When implemented by software, these functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium accessible to a general-purpose or special-purpose computer.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of the units is merely a logical function division, and there may be another division manner in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of the present invention rather than to limit them. Although the present invention is described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some technical features thereof, without departing from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (20)

  1. 一种图像校正方法,其特征在于,包括:
    步骤1、捕获第i帧图像;所述i为大于或等于1的正整数;
    步骤2、利用光流约束方程,在所述第i帧图像中跟踪初始帧图像的四边形区域,获取所述第i帧图像的四边形区域;
    步骤3、根据所述第i帧图像的四边形区域,校正所述第i帧图像。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述第i帧图像的四边形区域,校正所述第i帧图像,包括:
    根据所述第i帧图像的四边形区域,计算所述第i帧图像与所述第i帧图像所在的图像序列中的第i-1帧图像之间的姿态变换矩阵
    Figure PCTCN2016100953-appb-100001
    计算所述第i帧图像到真实矩形的预估姿态变换矩阵
    Figure PCTCN2016100953-appb-100002
    其中,所述Hi-1为所述第i-1帧图像到真实矩形的姿态变换矩阵;
    采用所述
    Figure PCTCN2016100953-appb-100003
    校正所述第i帧图像。
  3. 根据权利要求1或2所述的方法,其特征在于,所述初始帧图像为所述第i帧图像所在的图像序列的第一帧图像。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,在所述根据所述第i帧图像的四边形区域,校正所述第i帧图像之后,所述方法还包括:
    若所述第i帧满足重新初始化条件,将所述初始帧图像更新为所述图像序列的第i+1帧。
  5. 根据权利要求4所述的方法,其特征在于,所述重新初始化条件包括:
    与所述初始帧的帧数差大于或等于第一预设阈值;
    或者,
    利用光流约束方程,跟踪所述初始帧的四边形区域的跟踪点数小于或等于第二预设阈值。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,在所述捕获第i帧图像之后,所述方法还包括:
    判断所述第i帧图像是否为所述初始帧图像;
    若所述第i帧图像不是所述初始帧图像,则执行所述步骤2和所述步骤3,对所述第i帧图像校正。
  7. The method according to claim 6, wherein after the determining whether the ith frame image is the initial frame image, if the ith frame image is the initial frame image, the method further comprises:
    performing quadrilateral detection on the ith frame image to obtain the quadrilateral region of the ith frame image, calculating a real pose transformation matrix Hi^r from the quadrilateral region of the ith frame image to a real rectangular region, and correcting the ith frame image by using Hi^r;
    or,
    first performing step 2 and step 3 to correct the ith frame image, and then performing quadrilateral detection on the ith frame image to obtain the quadrilateral region of the ith frame image as the quadrilateral region of the initial frame.
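For the initial-frame branch of claim 7, quadrilateral detection could, for instance, use a standard edge-and-contour pipeline; this is only one possible realization, and every parameter below (blur kernel, Canny thresholds, polygon-approximation tolerance, output size, corner ordering) is an assumption.

    import cv2
    import numpy as np

    def detect_quadrilateral(frame):
        # Return the four corners of the largest convex quadrilateral contour, or None.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        best, best_area = None, 0.0
        for c in contours:
            approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
            area = cv2.contourArea(approx)
            if len(approx) == 4 and cv2.isContourConvex(approx) and area > best_area:
                best, best_area = approx.reshape(4, 2), area
        return best

    def real_pose_matrix(quad, out_size=(800, 600)):
        # Real pose transformation matrix from the detected quadrilateral to a real
        # rectangle (corner ordering is assumed to match the target rectangle).
        w, h = out_size
        rect = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        return cv2.getPerspectiveTransform(quad.astype(np.float32), rect)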
  8. The method according to any one of claims 2 to 7, wherein Hi-1 comprises:
    the estimated pose transformation matrix Hi-1^e from the (i-1)th frame image to the real rectangle;
    or,
    the real pose transformation matrix Hi-1^r from the (i-1)th frame image to the real rectangle.
  9. The method according to any one of claims 1 to 8, wherein the tracking, by using an optical flow constraint equation, a quadrilateral region of an initial frame image in the ith frame image, to obtain a quadrilateral region of the ith frame image comprises:
    tracking, by using the optical flow constraint equation, the position of each stable corner point of a stable point set in the ith frame image, to obtain the quadrilateral region of the ith frame image, wherein the stable point set comprises at least four stable corner points on the quadrilateral region of the initial frame image.
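A hedged sketch of one way to build and track the stable point set of claim 9: the four quadrilateral corners plus additional strong corners sampled along the quadrilateral border, tracked with Lucas-Kanade optical flow and refit to a quadrilateral through a RANSAC homography. The masking width, feature-detection parameters, and helper names are assumptions.

    import cv2
    import numpy as np

    def build_stable_point_set(init_gray, init_quad, extra_points=20, border=15):
        # Collect at least four stable corner points on the initial quadrilateral region.
        mask = np.zeros_like(init_gray)
        cv2.polylines(mask, [init_quad.astype(np.int32)], True, 255, thickness=border)
        extra = cv2.goodFeaturesToTrack(init_gray, extra_points, 0.01, 10, mask=mask)
        pts = [init_quad.astype(np.float32)]
        if extra is not None:
            pts.append(extra.reshape(-1, 2))
        return np.concatenate(pts, axis=0)

    def track_stable_points(init_gray, cur_gray, stable_pts, init_quad):
        # Track the stable point set into the ith frame and map the initial quadrilateral along.
        prev = stable_pts.reshape(-1, 1, 2)
        cur, status, _ = cv2.calcOpticalFlowPyrLK(init_gray, cur_gray, prev, None)
        good = status.reshape(-1) == 1
        if good.sum() < 4:
            return None                        # too few points survived; see claim 5
        H, _ = cv2.findHomography(stable_pts[good], cur.reshape(-1, 2)[good], cv2.RANSAC)
        if H is None:
            return None
        quad = cv2.perspectiveTransform(init_quad.reshape(-1, 1, 2).astype(np.float32), H)
        return quad.reshape(-1, 2)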
  10. The method according to any one of claims 1 to 9, wherein after the correcting the ith frame image according to the quadrilateral region of the ith frame image, the method further comprises:
    presenting the corrected ith frame image to a user;
    or,
    when i is equal to N, successively presenting the corrected first frame image to the corrected Nth frame image of the image sequence to the user, wherein N is greater than or equal to 2, and the image sequence comprises N frame images.
  11. An image correction apparatus, comprising a processor, wherein the processor is configured to perform the following steps:
    step 1: capturing an ith frame image, wherein i is a positive integer greater than or equal to 1;
    step 2: tracking, by using an optical flow constraint equation, a quadrilateral region of an initial frame image in the ith frame image, to obtain a quadrilateral region of the ith frame image; and
    step 3: correcting the ith frame image according to the quadrilateral region of the ith frame image.
  12. The apparatus according to claim 11, wherein the processor is specifically configured to:
    calculate, according to the quadrilateral region of the ith frame image, a pose transformation matrix ΔHi between the ith frame image and an (i-1)th frame image in the image sequence to which the ith frame image belongs;
    calculate, according to ΔHi and Hi-1, an estimated pose transformation matrix Hi^e from the ith frame image to a real rectangle, wherein Hi-1 is a pose transformation matrix from the (i-1)th frame image to the real rectangle; and
    correct the ith frame image by using Hi^e.
  13. The apparatus according to claim 11 or 12, wherein the initial frame image is the first frame image of the image sequence to which the ith frame image belongs.
  14. The apparatus according to any one of claims 11 to 13, wherein the processor is further configured to:
    after the correcting the ith frame image according to the quadrilateral region of the ith frame image, if the ith frame satisfies a re-initialization condition, update the initial frame image to the (i+1)th frame of the image sequence.
  15. The apparatus according to claim 14, wherein the re-initialization condition comprises:
    a frame number difference from the initial frame is greater than or equal to a first preset threshold;
    or,
    the number of tracking points obtained by tracking the quadrilateral region of the initial frame by using the optical flow constraint equation is less than or equal to a second preset threshold.
  16. The apparatus according to any one of claims 11 to 15, wherein the processor is further configured to:
    after the capturing an ith frame image, determine whether the ith frame image is the initial frame image; and
    if the ith frame image is not the initial frame image, perform step 2 and step 3 to correct the ith frame image.
  17. The apparatus according to claim 16, wherein after the determining whether the ith frame image is the initial frame image, if the ith frame image is the initial frame image, the processor is further configured to:
    perform quadrilateral detection on the ith frame image to obtain the quadrilateral region of the ith frame image, calculate a real pose transformation matrix Hi^r from the quadrilateral region of the ith frame image to a real rectangular region, and correct the ith frame image by using Hi^r;
    or,
    first perform step 2 and step 3 to correct the ith frame image, and then perform quadrilateral detection on the ith frame image to obtain the quadrilateral region of the ith frame image as the quadrilateral region of the initial frame.
  18. The apparatus according to any one of claims 12 to 17, wherein Hi-1 comprises:
    the estimated pose transformation matrix Hi-1^e from the (i-1)th frame image to the real rectangle;
    or,
    the real pose transformation matrix Hi-1^r from the (i-1)th frame image to the real rectangle.
  19. The apparatus according to any one of claims 11 to 18, wherein the processor is specifically configured to:
    track, by using the optical flow constraint equation, the position of each stable corner point of a stable point set in the ith frame image, to obtain the quadrilateral region of the ith frame image, wherein the stable point set comprises at least four stable corner points on the quadrilateral region of the initial frame image.
  20. The apparatus according to any one of claims 11 to 19, wherein after the correcting the ith frame image according to the quadrilateral region of the ith frame image, the processor is further configured to:
    present the corrected ith frame image to a user;
    or,
    when i is equal to N, successively present the corrected first frame image to the corrected Nth frame image of the image sequence to the user, wherein N is greater than or equal to 2, and the image sequence comprises N frame images.
PCT/CN2016/100953 2016-09-29 2016-09-29 一种图像校正方法及装置 WO2018058476A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201680089219.0A CN109690611B (zh) 2016-09-29 2016-09-29 一种图像校正方法及装置
US16/338,364 US20190355104A1 (en) 2016-09-29 2016-09-29 Image Correction Method and Apparatus
PCT/CN2016/100953 WO2018058476A1 (zh) 2016-09-29 2016-09-29 一种图像校正方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/100953 WO2018058476A1 (zh) 2016-09-29 2016-09-29 一种图像校正方法及装置

Publications (1)

Publication Number Publication Date
WO2018058476A1 true WO2018058476A1 (zh) 2018-04-05

Family

ID=61763011

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/100953 WO2018058476A1 (zh) 2016-09-29 2016-09-29 一种图像校正方法及装置

Country Status (3)

Country Link
US (1) US20190355104A1 (zh)
CN (1) CN109690611B (zh)
WO (1) WO2018058476A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018112790A1 (zh) * 2016-12-21 2018-06-28 华为技术有限公司 图象处理方法及装置
CN112738491B (zh) * 2020-12-29 2022-12-02 视田科技(天津)有限公司 一种投影反射画面的校正方法
CN113240587B (zh) * 2021-07-12 2021-09-24 深圳华声医疗技术股份有限公司 超分辨率扫描变换方法、装置、超声设备及存储介质
CN113949830B (zh) * 2021-09-30 2023-11-24 国家能源集团广西电力有限公司 一种图像处理方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4006601B2 (ja) * 2004-03-29 2007-11-14 セイコーエプソン株式会社 画像処理システム、プロジェクタ、プログラム、情報記憶媒体および画像処理方法
CN101859384B (zh) * 2010-06-12 2012-05-23 北京航空航天大学 目标图像序列度量方法
US9294676B2 (en) * 2012-03-06 2016-03-22 Apple Inc. Choosing optimal correction in video stabilization
CN104168444B (zh) * 2013-05-17 2018-05-01 浙江大华技术股份有限公司 一种跟踪球机的目标跟踪方法及跟踪球机

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011103031A (ja) * 2009-11-10 2011-05-26 Victor Co Of Japan Ltd 透視変換パラメータ生成装置、画像補正装置、透視変換パラメータ生成方法、画像補正方法、及びプログラム
CN103052961A (zh) * 2010-08-05 2013-04-17 高通股份有限公司 识别具有相机功能的移动设备捕获的可视媒体内容
CN104067605A (zh) * 2012-01-17 2014-09-24 夏普株式会社 拍摄装置、拍摄图像处理***、程序以及记录介质
CN105096261A (zh) * 2014-05-13 2015-11-25 北京大学 图像处理装置和图像处理方法
CN105809184A (zh) * 2015-10-30 2016-07-27 哈尔滨工程大学 一种适用于加油站的车辆实时识别跟踪与车位占用判断的方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG, SHENGLAN: "The Design and Research of the Auxiliary Navigation System Based on Augmented Reality", no. 3, 15 March 2016 (2016-03-15), pages 54 and 55, ISSN: 1674-0246 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598744A (zh) * 2018-11-29 2019-04-09 广州市百果园信息技术有限公司 一种视频跟踪的方法、装置、设备和存储介质

Also Published As

Publication number Publication date
US20190355104A1 (en) 2019-11-21
CN109690611A (zh) 2019-04-26
CN109690611B (zh) 2021-06-22

Similar Documents

Publication Publication Date Title
WO2018058476A1 (zh) 一种图像校正方法及装置
WO2018214365A1 (zh) 图像校正方法、装置、设备、***及摄像设备和显示设备
US10217200B2 (en) Joint video stabilization and rolling shutter correction on a generic platform
TWI517705B (zh) 同屬平台視訊影像穩定
US10915998B2 (en) Image processing method and device
WO2021115136A1 (zh) 视频图像的防抖方法、装置、电子设备和存储介质
US9373187B2 (en) Method and apparatus for producing a cinemagraph
US20050265453A1 (en) Image processing apparatus and method, recording medium, and program
WO2020253618A1 (zh) 一种视频抖动的检测方法及装置
US9307148B1 (en) Video enhancement techniques
US11282232B2 (en) Camera calibration using depth data
JP2017130929A (ja) 撮像装置により取得された文書画像の補正方法及び補正装置
US9398217B2 (en) Video stabilization using padded margin pixels
WO2018045596A1 (zh) 一种处理方法及移动设备
TW201137791A (en) A method to measure local image similarity based on the L1 distance measure
CN109194878B (zh) 视频图像防抖方法、装置、设备和存储介质
WO2020001222A1 (zh) 图像处理方法、装置、计算机可读介质及电子设备
JP2013106352A (ja) 画像中の投影領域の決定方法、及び装置
CN110493512B (zh) 摄影构图方法、装置、摄影设备、电子装置及存储介质
US8340469B2 (en) Information processing apparatus, information processing method, program, and image processing apparatus
TW201707438A (zh) 電子裝置及影像處理方法
CN108780572B (zh) 图像校正的方法及装置
US10282633B2 (en) Cross-asset media analysis and processing
WO2018006669A1 (zh) 视差融合方法和装置
CN111754411B (zh) 图像降噪方法、图像降噪装置及终端设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16917230

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16917230

Country of ref document: EP

Kind code of ref document: A1