US20140056354A1 - Video processing apparatus and method - Google Patents
- Publication number
- US20140056354A1 (application US 13/590,504)
- Authority
- US
- United States
- Prior art keywords
- frames
- frame
- video
- interpolated
- discontinuity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Images
Classifications
- H04N7/0147—Standards conversion processed at pixel level, involving interpolation using an indication of film mode or of a specific pattern, e.g. 3:2 pull-down pattern
- H04N7/014—Standards conversion processed at pixel level, involving interpolation using motion vectors
- H04N19/102—Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/139—Adaptive coding controlled by analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
- H04N19/172—Adaptive coding in which the coding unit is a picture, frame or field
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
- H04N19/521—Processing of motion vectors for estimating their reliability, e.g. for smoothing the motion vector field or for correcting motion vectors
- H04N19/577—Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
- H04N19/895—Detection of transmission errors at the decoder in combination with error concealment
- H04N21/440281—Reformatting of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
Definitions
- the invention relates to video processing, and more particularly to motion estimation and motion compensation.
- Video data comprises a series of frames. Because each frame is a picture containing a plurality of color pixels, the amount of video data is large. To facilitate storage or transmission, video data is usually compressed according to a video compression standard to reduce its size. The video data is therefore stored or transmitted in a compressed format. Examples of video compression standards comprise MPEG-1 for video CD, H.262/MPEG-2 for DVD video, H.263 for video conferencing, and H.264/MPEG-4 for Blu-ray Disc and HD DVD. Before the compressed video data is displayed, it must be decoded to recover the series of video frames. If decoding errors occur while the compressed video data is decoded, some frames cannot be recovered from the compressed data and are therefore dropped; such frames are referred to as “dropped frames”.
- the dropped frames lead to discontinuity of a series of frames.
- a video processor may generate reproduced frames to fill in for the vacancies of the dropped frames.
- An ordinary video processor may simply duplicate the video frame prior to a dropped frame to generate the reproduced frame.
- the reproduced frame is then inserted at the vacant position of the dropped frame between the frames.
- the series of frames comprising the reproduced frames are then displayed according to the time stamps thereof. Because a reproduced frame is a duplicate of a prior frame, when the frames are displayed, the motion of images corresponding to the reproduced frames on a screen will seem to be suspended, which is referred to as the “judder artifact problem”.
- a segment of a 24 Hz film comprises three frames 101 , 102 , and 103 .
- two reproduced frames 101 ′ and 101 ′′ are generated by duplicating the frame 101 and a reproduced frame 102 ′ is generated by duplicating the frame 102 .
- the reproduced frames 101 ′ and 101 ′′ are then inserted between the frames 101 and 102 , and the reproduced frame 102 ′ is inserted between the frames 102 and 103 .
- the frames 101 , 101 ′, 101 ′′, 102 , 102 ′, and 103 are displayed in sequence.
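The repetition scheme above can be sketched as follows; the frame labels are illustrative stand-ins for real image data, and the copy counts mirror FIG. 1 (frame 101 shown three times, frame 102 twice, frame 103 once):

```python
def repeat_frames(frames, copies):
    """Frame repetition as in FIG. 1: copies[i] is the total number of
    times frames[i] appears in the displayed sequence (the original
    plus its duplicated 'reproduced' frames)."""
    out = []
    for frame, n in zip(frames, copies):
        out.extend([frame] * n)  # the frame itself plus n-1 duplicates
    return out

seq = repeat_frames(["101", "102", "103"], [3, 2, 1])
print(seq)  # ['101', '101', '101', '102', '102', '103']
```

Because consecutive entries are identical, motion appears frozen while a frame and its duplicates are displayed, which is exactly the judder artifact the MEMC method is meant to avoid.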
- the invention provides a video processing apparatus.
- the video processing apparatus comprises a decoder, a detector, and a motion estimation and motion compensation (MEMC) module.
- the decoder decodes video data to generate a series of video frames with time stamps.
- the detector detects discontinuity of the video frames to generate discontinuity information.
- the MEMC module selects a previous frame before the discontinuity and a subsequent frame after the discontinuity from the video frames according to the discontinuity information, performs a motion estimation process to determine at least one motion vector between the previous frame and the subsequent frame, performs a motion compensation process according to the motion vector to synthesize an interpolated frame from the previous frame and the subsequent frame, and inserts the interpolated frame into the video frames to obtain a series of compensated frames.
- a video processing apparatus comprises a decoder, a detector, and a motion estimation and motion compensation (MEMC) module.
- video data is decoded by the decoder to generate a series of video frames with time stamps. Discontinuity of the video frames is then detected by the detector to generate discontinuity information.
- a previous frame and a subsequent frame are then selected by the MEMC module from the video frames according to the discontinuity information.
- a motion estimation process is then performed by the MEMC module to determine at least one motion vector.
- a motion compensation process is then performed by the MEMC module according to the motion vector to synthesize an interpolated frame from the previous frame and the subsequent frame. The interpolated frame is then inserted by the MEMC module into the video frames to obtain a series of compensated frames.
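The decode, detect, and interpolate steps above can be sketched end to end. This is an illustrative stand-in, not the claimed implementation: frame "content" is a single number rather than pixel data, the detector is a simple gap check against the nominal frame period, and the MEMC synthesis is replaced by a plain average of the previous and subsequent frames:

```python
def detect_discontinuities(timestamps, period):
    """Detector stand-in: report indices i where the gap after frame i
    exceeds the nominal frame period (with 50% tolerance)."""
    return [i for i in range(len(timestamps) - 1)
            if timestamps[i + 1] - timestamps[i] > 1.5 * period]

def compensate(frames, timestamps, period):
    """MEMC stand-in: for each detected gap, synthesize a frame midway
    between the previous frame and the subsequent frame."""
    gaps = set(detect_discontinuities(timestamps, period))
    out_frames, out_times = [], []
    for i, (f, t) in enumerate(zip(frames, timestamps)):
        out_frames.append(f)
        out_times.append(t)
        if i in gaps:
            out_frames.append((f + frames[i + 1]) / 2)       # interpolated content
            out_times.append((t + timestamps[i + 1]) / 2)    # midpoint time stamp
    return out_frames, out_times

# A frame is missing at t = 2/30 s in a nominally 30 FPS stream:
f, t = compensate([0.0, 1.0, 3.0], [0.0, 1 / 30, 3 / 30], 1 / 30)
print(f)  # [0.0, 1.0, 2.0, 3.0]
```

In the real apparatus the averaging step is replaced by the motion-compensated synthesis described in the following sections.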
- FIG. 1 is a schematic diagram of an example of frame repetition
- FIG. 2 is a block diagram of a video processing apparatus according to the invention.
- FIG. 3 is a block diagram of a motion estimation and motion compensation (MEMC) module according to the invention.
- FIG. 4A is a block diagram of an embodiment of a motion estimation module according to the invention.
- FIG. 4B is a diagram of a motion estimation process according to the invention.
- FIG. 5A is a block diagram of a motion compensation module according to the invention.
- FIG. 5B is a schematic diagram of an example of a pixel interpolation process according to the invention.
- FIG. 6A is a schematic diagram of generation of an interpolated frame according to an MEMC process to eliminate discontinuities
- FIG. 6B is a schematic diagram of generation of multiple interpolated frames according to an MEMC process to increase a frame rate
- FIG. 7 is a diagram of an embodiment of an MEMC interpolation process applied to frames with a random sampling time
- FIG. 8 is a diagram of an embodiment of an MEMC interpolation process applied to frames with a non-constant frame rate
- FIG. 9 is a diagram of comparisons between applications of a conventional frame repetition method and an MEMC method of the invention.
- the video processing apparatus 200 comprises a decoder 202 , a detector 204 , and a motion estimation and motion compensation (MEMC) module 206 .
- the decoder 202 decodes compressed video data to obtain a series of video frames.
- the video frames are then sent to the detector 204 and the MEMC module 206 .
- the detector 204 detects discontinuities of the video frames to generate discontinuity information.
- the MEMC module 206 uses motion estimation and motion compensation methodology to generate interpolated frames, and inserts the interpolated frames between the video frames according to the discontinuity information so as to obtain a series of compensated frames.
- the video processing apparatus 200 can then sequentially display the compensated frames on a screen.
- the MEMC module 300 comprises a motion estimation module 302 and a motion compensation module 304 .
- the MEMC module 300 selects a previous frame prior to the discontinuity of the video frames and a subsequent frame after the discontinuity of the video frames.
- the motion estimation module 302 then performs a motion estimation process to determine at least one motion vector among the previous frame and the subsequent frame.
- the motion compensation module 304 then performs a motion compensation process according to the motion vector to synthesize an interpolated frame from the previous frame and the subsequent frame.
- the MEMC module 300 then inserts the interpolated frame into the video frames to obtain a series of compensated frames without discontinuities.
- the motion estimation module 400 comprises a data flow controller 402 , a previous frame memory 404 , a subsequent frame memory 406 , a block matching module 408 , and a motion vector decision module 410 .
- a previous frame is stored in the previous frame memory 404
- a subsequent frame is stored in the subsequent frame memory 406 .
- the data flow controller 402 selects a first candidate block from a plurality of blocks of the previous frame, and sends the memory address of the first candidate block to the previous frame memory 404 .
- the previous frame memory 404 according to the memory address of the first candidate block, outputs the first candidate block.
- the first candidate block then is sent to the block matching module 408 .
- the data flow controller 402 selects a second candidate block from a plurality of blocks of the subsequent frame, and sends the memory address of the second candidate block to the subsequent frame memory 406 .
- the subsequent frame memory 406 according to the memory address of the second candidate block, outputs the second candidate block.
- the second candidate block then is sent to the block matching module 408 .
- the disclosed use of two memories for storing the previous frame and the subsequent frame respectively is not intended to limit the scope of the invention, since many other configurations may be employed by the skilled artisan according to the present embodiment.
- the block matching module 408 calculates a sum of absolute differences (SAD) between the pixels of the first candidate block and the second candidate block.
- the SAD value indicates a difference level between the first candidate block and the second candidate block and is sent to the motion vector decision module 410 .
- the motion vector decision module 410 can determine the matched blocks with a minimum SAD, and can calculate a motion vector between the two matched blocks. Referring to FIG. 4B , a diagram of a motion estimation process according to the invention is shown. The pixels of a block 420 in the subsequent frame are determined to match the pixels of a block 430 in the previous frame, and a motion vector 440 between the matched blocks 420 and 430 is calculated.
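The SAD-based matching performed by modules 408 and 410 can be sketched as a toy full search over a small displacement range. The frames here are plain lists of pixel rows and all names are illustrative; a real implementation would operate on image buffers and a constrained search window:

```python
def sad(a, b):
    """Sum of absolute differences between two equally sized pixel blocks."""
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def block(frame, top, left, size):
    """Extract a size x size block whose top-left corner is (top, left)."""
    return [row[left:left + size] for row in frame[top:top + size]]

def best_motion_vector(prev, nxt, top, left, size, search=2):
    """For the block at (top, left) in the subsequent frame, find the
    displacement (dy, dx) into the previous frame that minimizes the SAD,
    mimicking the block matching and motion vector decision steps."""
    target = block(nxt, top, left, size)
    height, width = len(prev), len(prev[0])
    best, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y and 0 <= x and y + size <= height and x + size <= width:
                s = sad(block(prev, y, x, size), target)
                if best_sad is None or s < best_sad:
                    best_sad, best = s, (dy, dx)
    return best, best_sad

# A bright 2x2 patch moves one pixel to the right between the frames:
prev = [[0] * 6 for _ in range(6)]
nxt = [[0] * 6 for _ in range(6)]
for r in (2, 3):
    prev[r][1] = prev[r][2] = 200
    nxt[r][2] = nxt[r][3] = 200

mv, s = best_motion_vector(prev, nxt, top=2, left=2, size=2)
print(mv, s)  # (0, -1) 0
```

A zero SAD means the blocks match exactly; the displacement (0, -1) says the matching block in the previous frame sits one pixel to the left, i.e. the object moved one pixel to the right.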
- the motion compensation module 500 comprises an MV post processing module 502 , an MV reliability analysis module 504 , and an interpolation kernel 506 .
- a motion estimation module sends a block motion vector and block information to the MV post processing module 502 .
- the MV post processing module 502 then derives a pixel motion vector and pixel information from the block motion vector and the block information.
- the MV reliability analysis module 504 receives the pixel motion vector and the pixel information, and generates a motion vector reliability indicator.
- the MV reliability analysis module 504 then sends the received pixel motion vector and the motion vector reliability indicator to the interpolation kernel 506 .
- the interpolation kernel 506 then accesses the original pixels according to the pixel motion vector, and performs an interpolation operation on those original pixels to generate interpolated pixels according to the motion vector reliability indicator.
- Referring to FIG. 5B , a schematic diagram of an example of a pixel interpolation process according to the invention is shown.
- a stationary average value STA corresponding to a non-motion-compensated interpolation process is then calculated by averaging the pixels B and C, and a motion compensation average value MCA corresponding to a motion-compensated interpolation process is then calculated by averaging the pixels A and D.
- an unreliability indication value URI is then used to derive a blending factor α for mixing the value MCA with the value STA to obtain the interpolated pixel value according to the following algorithm:
- MIX=α×STA+(1−α)×MCA, wherein
- MIX is the interpolated pixel value, and
- the blending factor α is proportional to the URI value
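A minimal numeric sketch of the blend described above, assuming a linear mix in which α weights the stationary average (so that a fully unreliable motion vector, α = 1, falls back entirely to the non-motion-compensated result). The argument names are illustrative:

```python
def interpolate_pixel(a, b, c, d, alpha):
    """Blend a motion-compensated average (pixels A and D, taken along the
    motion trajectory) with a stationary average (co-located pixels B and C)
    using a blending factor alpha in [0, 1] proportional to the
    unreliability value URI."""
    sta = (b + c) / 2          # stationary (non-motion-compensated) average
    mca = (a + d) / 2          # motion-compensated average
    return alpha * sta + (1 - alpha) * mca

print(interpolate_pixel(a=100, b=60, c=80, d=110, alpha=0.25))  # 96.25
```

With a reliable motion vector (alpha near 0) the result follows the motion-compensated average; as reliability drops, the output drifts toward the safer stationary average, which suppresses visible artifacts from wrong motion vectors.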
- Referring to FIG. 6A , a schematic diagram of generation of an interpolated frame according to an MEMC process to eliminate discontinuities is shown.
- a discontinuity 612 is detected in a series of video frames. The discontinuity may be induced by decoding errors.
- a previous frame 611 prior to the discontinuity 612 and a subsequent frame 613 subsequent to the discontinuity 612 are then selected from the video frames.
- An interpolated frame 612 ″ is then generated according to an MEMC process. In comparison with the frame 612 ′ generated by duplicating the previous frame 611 , the interpolated frame 612 ″ indicates a more precise location of the ball.
- the interpolated frame 612 ″ is inserted at the position of the discontinuity to obtain compensated frames without discontinuities, such that the judder artifact problem does not occur in the compensated frames.
- Referring to FIG. 6B , a schematic diagram of generation of multiple interpolated frames according to an MEMC process to increase a frame rate is shown.
- a video film with a frame rate of 20 Hz comprises two frames 601 and 606 .
- an MEMC process is then used to generate four interpolated frames 602 , 603 , 604 , and 605 according to the frames 601 and 606 .
- the interpolated frames 602 , 603 , 604 , and 605 are then inserted between the frames 601 and 606 to generate a film with a frame rate of 60 Hz.
- the film compensated according to the MEMC process has no judder artifact problems.
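The time stamps of the interpolated frames in this up-conversion can be computed by splitting each source interval evenly. This is a generic sketch (the function name is illustrative); FIG. 6B's example inserts n = 4 frames between two source frames:

```python
def interpolation_times(t0, t1, n):
    """Evenly spaced time stamps for n interpolated frames inserted
    between source frames at times t0 and t1; the interval is split
    into n + 1 equal sub-intervals."""
    step = (t1 - t0) / (n + 1)
    return [t0 + k * step for k in range(1, n + 1)]

# Four interpolated frames between two 20 Hz source frames (0.05 s apart)
# split the interval into five equal 0.01 s sub-intervals:
print(interpolation_times(0.0, 0.05, 4))
```

The MEMC process then synthesizes one frame for each of these time stamps, with content weighted toward whichever source frame is nearer in time.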
- Referring to FIG. 7 , a diagram illustrates an embodiment of an MEMC interpolation process applied to video frames with a random sampling time.
- a series of video frames F N−3 , F N−2 , F N−1 , F N+1 , F N+2 , F N+3 , and F N+4 have random time stamps.
- the intervals between the time stamps of the video frames F N−3 ~F N+4 are random.
- the video frames F N−3 ~F N+4 are, for example, a series of pictures photographed with a camera.
- the detector 204 calculates time stamps of the compensated frames according to the required frame rate, and compares the time stamps of the compensated frames with the time stamps of the video frames F N−3 ~F N+4 to generate the discontinuity information.
- the MEMC module 206 then synthesizes a series of interpolated frames according to the discontinuity information and the video frames, and then inserts the interpolated frames into the video frames F N−3 ~F N+4 to obtain the compensated frames with the required frame rate. For example, the MEMC module 206 generates an interpolated frame F N according to the frames F N−1 and F N+1 , and then inserts the interpolated frame F N between the frames F N−1 and F N+1 .
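The detector's comparison of required-rate time stamps against the irregular source time stamps can be sketched as follows. The function name, the grid construction, and the half-period tolerance are illustrative assumptions, not the patented detector:

```python
def discontinuity_info(src_times, frame_rate, tol=None):
    """Lay a regular grid at the required frame rate over the span of the
    source time stamps, and report each grid time with no source frame
    close enough (within tol, default half a frame period). These are the
    slots where an interpolated frame must be synthesized."""
    period = 1.0 / frame_rate
    if tol is None:
        tol = period / 2
    missing = []
    t = src_times[0]
    while t <= src_times[-1] + 1e-9:
        if min(abs(t - s) for s in src_times) > tol:
            missing.append(round(t, 6))
        t += period
    return missing

# Irregular source frames; a 10 Hz output grid needs a frame near
# t = 0.2 s, which is missing:
print(discontinuity_info([0.0, 0.11, 0.31, 0.4], frame_rate=10))  # [0.2]
```

Each reported time stamp is then handed to the MEMC module, which selects the nearest earlier and later source frames as the previous and subsequent frames for interpolation.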
- Referring to FIG. 8 , a diagram depicts an embodiment of an MEMC interpolation process applied to video frames with a non-constant frame rate.
- the frame rate of a series of video frames is not precisely constant.
- the video frames have a rough frame rate, e.g. 10 Hz
- the interval between the frames F P and F S is not exactly 1/10 second.
- the time stamp of the frame F P may not be exactly at the time “b”, which is a time stamp corresponding to a constant frame rate; the time stamp of the frame F P may instead be at the time “a” or “c”, with a drift error from the time “b”.
- the actual time stamp of the frame F S may not be exactly at the time “g”, which is a time stamp corresponding to the constant frame rate, and the actual time stamp of frame F S may be at the time “f” or “h” with a drift error from time “g”.
- When the MEMC module 206 generates an interpolated frame F 1 according to the video frames F P and F S with drift errors, the content of the interpolated frame F 1 may be affected by the drift errors of the video frames F P and F S .
- an interpolated frame F 1 derived from the frame 801 with a time stamp at time “a” and the frame 803 with a time stamp at time “f” has the content of frame 805 .
- an interpolated frame F 1 derived from the frame 802 with a time stamp at time “c” and the frame 804 with a time stamp at time “h” has the content of frame 806 .
- the content of frame 805 is therefore different from the content of frame 806 .
- the MEMC module 206 may synthesize the interpolated frame F 1 from the previous frame F P and the subsequent frame F S according to the following algorithm:
- Z=X ×( Ts − Td )/( Ts − Tp )+ Y ×( Td − Tp )/( Ts − Tp ), wherein
- Z is the content of the interpolated frame F 1
- X is the content of the previous frame F P
- Y is the content of the subsequent frame F S
- Tp represents the time stamp of the previous frame F P
- Td represents the time stamp of the interpolated frame F 1
- Ts represents the time stamp of the subsequent frame F S .
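A minimal numeric sketch of this time-weighted synthesis, assuming the standard linear weighting of X and Y by the listed time stamps (so Z coincides with X when Td = Tp and with Y when Td = Ts). The content values here are single numbers standing in for pixel data:

```python
def interpolate_content(x, y, tp, td, ts):
    """Time-weighted blend of previous-frame content X and subsequent-frame
    content Y at the interpolated time Td: the closer Td is to Tp, the
    more Z resembles X."""
    w = (td - tp) / (ts - tp)   # fraction of the interval already elapsed
    return (1 - w) * x + w * y

print(interpolate_content(x=10.0, y=20.0, tp=0.0, td=0.25, ts=1.0))  # 12.5
```

Because the weights depend only on the actual time stamps, drift errors in Tp and Ts are absorbed into the weighting rather than producing a frame interpolated at the wrong temporal position.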
- the MEMC module 206 may determine a first motion vector V 1 indicating displacement from the previous frame F P to the interpolated frame F 1 and a second motion vector V 2 indicating displacement from the interpolated frame F 1 to the subsequent frame F S according to the following algorithm:
- V 1 =V ×( Td − Tp )/( Ts − Tp );
- V 2 =V ×( Ts − Td )/( Ts − Tp ), wherein
- V is the motion vector from the previous frame F P to the subsequent frame F S
- Tp is the time stamp of the previous frame F P
- Td is the time stamp of the interpolated frame F 1
- Ts is the time stamp of the subsequent frame F S .
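The motion vector split can be checked numerically with a short sketch (treating V as a scalar displacement for simplicity; in practice each component is scaled the same way). Note that V1 + V2 equals V by construction:

```python
def split_motion_vector(v, tp, td, ts):
    """Scale the full motion vector V (previous frame -> subsequent frame)
    into the two partial displacements: V1 from the previous frame to the
    interpolated time Td, and V2 from Td to the subsequent frame."""
    v1 = v * (td - tp) / (ts - tp)
    v2 = v * (ts - td) / (ts - tp)
    return v1, v2

print(split_motion_vector(v=8.0, tp=0.0, td=0.25, ts=1.0))  # (2.0, 6.0)
```

Pixels of the previous frame are shifted by V1 and pixels of the subsequent frame are shifted back by V2, so both projections land on the same interpolated position even when the frame intervals are uneven.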
- the MEMC method can also be applied to compensation of on-screen-display (OSD) images of a Flash user interface.
- a Flash user interface is executed in a computer
- a series of OSD images of the Flash user interface are sequentially displayed on a screen according to user inputs. If the number of OSD image frames is not large enough, a judder artifact problem also occurs when the Flash user interface is executed.
- a plurality of interpolated images are generated by the MEMC method according to the invention, and are then inserted between the OSD images of the Flash user interface when the Flash user interface is executed.
- the MEMC method can also be applied to compensation of video conference frames of Internet phones such as Skype.
- a plurality of interpolated images are generated by the MEMC method according to the invention, and are then inserted between the video frames of the video conference for smooth display when the video conference proceeds.
- the MEMC method can also be applied to compensation of playback frames of a digital television system.
- Referring to FIG. 9 , a diagram shows comparisons between applications of a conventional frame repetition method and an MEMC method of the invention.
- a decoder generates video frames at a frame rate of 30 frames per second (FPS). Due to decoding errors, the decoder may generate video frames at less than 30 FPS, for example at a frame rate of 27 FPS with discontinuities.
- in the conventional method, a repeater duplicates the frames prior to the discontinuities of the video frames so as to compensate the video frames. The discontinuities are therefore removed, and the frame rate of the compensated frames is increased to 30 FPS.
- the repeater further duplicates frames to increase the frame rate of the video frames from 30 FPS to 60 FPS. The judder artifact problem, however, occurs when the duplicated frames are displayed.
- an MEMC module generates interpolated frames according to the MEMC method and then inserts the interpolated frames between the video frames. The discontinuity is therefore removed, and the frame rate of the compensated frames is increased from 27 FPS to 30 FPS.
- the MEMC module further generates interpolated frames to increase the frame rate of the video frames from 30 FPS to 60 FPS. No judder artifact problems occur when the video frames are displayed.
- the MEMC module according to the invention can also generate interpolated frames according to the MEMC method to increase the frame rate of the video frames to 60 FPS.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Television Systems (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention provides a video processing apparatus. In one embodiment, the video processing apparatus includes a decoder, a detector, and a motion estimation and motion compensation (MEMC) module. The decoder decodes video data to generate a series of video frames with time stamps. The detector detects discontinuity of the video frames to generate discontinuity information. The MEMC module selects a previous frame prior to the discontinuity and a subsequent frame after the discontinuity from the video frames according to the discontinuity information, performs a motion estimation process to determine at least one motion vector between the previous frame and the subsequent frame, performs a motion compensation process according to the motion vector to synthesize an interpolated frame from the previous frame and the subsequent frame, and inserts the interpolated frame into the video frames to obtain a series of compensated frames.
Description
- 1. Field of the Invention
- The invention relates to video processing, and more particularly to motion estimation and motion compensation.
- 2. Description of the Related Art
- Video data comprises a series of frames. Because each frame is a picture containing a plurality of color pixels, the amount of video data is large. To facilitate storage or transmission, video data is usually compressed according to a video compression standard to reduce its size. The video data is therefore stored or transmitted in a compressed format. Examples of video compression standards comprise MPEG-1 for video CD, H.262/MPEG-2 for DVD video, H.263 for video conferencing, and H.264/MPEG-4 for Blu-ray Disc and HD DVD. Before the compressed video data is displayed, it must be decoded to recover the series of video frames. If decoding errors occur while the compressed video data is decoded, some frames cannot be recovered from the compressed data and are therefore dropped; such frames are referred to as “dropped frames”.
- The dropped frames lead to discontinuity of a series of frames. To remove the discontinuity from the frames, a video processor may generate reproduced frames to fill in the vacancies of the dropped frames. An ordinary video processor may simply duplicate the video frame prior to a dropped frame to generate the reproduced frame. The reproduced frame is then inserted at the vacant position of the dropped frame between the frames. The series of frames comprising the reproduced frames are then displayed according to the time stamps thereof. Because a reproduced frame is a duplicate of a prior frame, when the frames are displayed, the motion of images corresponding to the reproduced frames on a screen will seem to be suspended, which is referred to as the “judder artifact problem”.
- Referring to FIG. 1 , a schematic diagram of an example of frame repetition is shown. A segment of a 24 Hz film comprises three frames 101 , 102 , and 103 . Two reproduced frames 101 ′ and 101 ″ are generated by duplicating the frame 101 , and a reproduced frame 102 ′ is generated by duplicating the frame 102 . The reproduced frames 101 ′ and 101 ″ are then inserted between the frames 101 and 102 , and the reproduced frame 102 ′ is inserted between the frames 102 and 103 . The frames 101 , 101 ′, 101 ″, 102 , 102 ′, and 103 are then displayed in sequence. When the frames 101 , 101 ′, and 101 ″ are displayed, the motion of images on the screen seems to be suspended until the frame 102 is displayed; which is referred to as the “judder artifact problem”. Similarly, when the frames 102 and 102 ′ are displayed, the motion seems to be suspended until the frame 103 is displayed. To prevent a video film from the judder artifact problem, a motion estimation and motion compensation (MEMC) method is therefore applied.
- The invention provides a video processing apparatus. In one embodiment, the video processing apparatus comprises a decoder, a detector, and a motion estimation and motion compensation (MEMC) module. The decoder decodes video data to generate a series of video frames with time stamps. The detector detects discontinuity of the video frames to generate discontinuity information. The MEMC module selects a previous frame before the discontinuity and a subsequent frame after the discontinuity from the video frames according to the discontinuity information, performs a motion estimation process to determine at least one motion vector between the previous frame and the subsequent frame, performs a motion compensation process according to the motion vector to synthesize an interpolated frame from the previous frame and the subsequent frame, and inserts the interpolated frame into the video frames to obtain a series of compensated frames.
- The invention provides a video processing method. In one embodiment, a video processing apparatus comprises a decoder, a detector, and a motion estimation and motion compensation (MEMC) module. First, video data is decoded by the decoder to generate a series of video frames with time stamps. Discontinuity of the video frames is then detected by the detector to generate discontinuity information. A previous frame and a subsequent frame are then selected by the MEMC module from the video frames according to the discontinuity information. A motion estimation process is then performed by the MEMC module to determine at least one motion vector. A motion compensation process is then performed by the MEMC module according to the motion vector to synthesize an interpolated frame from the previous frame and the subsequent frame. The interpolated frame is then inserted by the MEMC module into the video frames to obtain a series of compensated frames.
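As an illustration of the motion estimation step summarized above, block matching under a sum-of-absolute-differences (SAD) criterion (described later with reference to FIGS. 4A and 4B) may be sketched as follows. The function names, block size, and search radius are illustrative assumptions, not part of the disclosure:

```python
def sad(block_a, block_b):
    # Sum of absolute differences between two equally sized pixel blocks.
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def block(frame, top, left, size):
    # Extract a size x size sub-block of a frame given as a list of rows.
    return [row[left:left + size] for row in frame[top:top + size]]

def best_motion_vector(prev_frame, next_frame, top, left, size=4, radius=2):
    # Full search over displacements within +/-radius pixels: return the
    # (dy, dx) that minimizes the SAD between the reference block of the
    # previous frame and a candidate block of the subsequent frame.
    ref = block(prev_frame, top, left, size)
    h, w = len(next_frame), len(next_frame[0])
    best, best_cost = (0, 0), None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > h or x + size > w:
                continue  # candidate block falls outside the frame
            cost = sad(ref, block(next_frame, y, x, size))
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best, best_cost
```

A full search such as this is the simplest matching strategy; practical implementations restrict the candidate area and reuse partial sums to reduce cost.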
- A detailed description is given in the following embodiments with reference to the accompanying drawings.
- The invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:
-
FIG. 1 is a schematic diagram of an example of frame repetition; -
FIG. 2 is a block diagram of a video processing apparatus according to the invention; -
FIG. 3 is a block diagram of a motion estimation and motion compensation (MEMC) module according to the invention; -
FIG. 4A is a block diagram of an embodiment of a motion estimation module according to the invention; -
FIG. 4B is a diagram of a motion estimation process according to the invention; -
FIG. 5A is a block diagram of a motion compensation module according to the invention; -
FIG. 5B is a schematic diagram of an example of a pixel interpolation process according to the invention; -
FIG. 6A is a schematic diagram of generation of an interpolated frame according to an MEMC process to eliminate discontinuities; -
FIG. 6B is a schematic diagram of generation of multiple interpolated frames according to an MEMC process to increase a frame rate; -
FIG. 7 is a diagram of an embodiment of an MEMC interpolation process applied to frames with a random sampling time; -
FIG. 8 is a diagram of an embodiment of an MEMC interpolation process applied to frames with a non-constant frame rate; and -
FIG. 9 is a diagram of comparisons between applications of a conventional frame repetition method and an MEMC method of the invention. - The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
- Referring to
FIG. 2 , a block diagram of a video processing apparatus 200 according to the invention is shown. In one embodiment, the video processing apparatus 200 comprises a decoder 202, a detector 204, and a motion estimation and motion compensation (MEMC) module 206. The decoder 202 decodes compressed video data to obtain a series of video frames. The video frames are then sent to the detector 204 and the MEMC module 206. The detector 204 detects discontinuities of the video frames to generate discontinuity information. The MEMC module 206 then uses motion estimation and motion compensation methodology to generate interpolated frames, and inserts the interpolated frames between the video frames according to the discontinuity information so as to obtain a series of compensated frames. The video processing apparatus 200 can then sequentially display the compensated frames on a screen. - Referring to
FIG. 3 , a block diagram of a motion estimation and motion compensation (MEMC) module 300 according to the invention is shown. In one embodiment, the MEMC module 300 comprises a motion estimation module 302 and a motion compensation module 304. When the discontinuity information indicates that a discontinuity has occurred in the video frames, the MEMC module 300 selects a previous frame prior to the discontinuity of the video frames and a subsequent frame after the discontinuity of the video frames. The motion estimation module 302 then performs a motion estimation process to determine at least one motion vector between the previous frame and the subsequent frame. The motion compensation module 304 then performs a motion compensation process according to the motion vector to synthesize an interpolated frame from the previous frame and the subsequent frame. The MEMC module 300 then inserts the interpolated frame into the video frames to obtain a series of compensated frames without discontinuities. - Referring to
FIG. 4A , a block diagram of an embodiment of a motion estimation module 400 according to the invention is shown. In one embodiment, the motion estimation module 400 comprises a data flow controller 402, a previous frame memory 404, a subsequent frame memory 406, a block matching module 408, and a motion vector decision module 410. A previous frame is stored in the previous frame memory 404, and a subsequent frame is stored in the subsequent frame memory 406. The data flow controller 402 selects a first candidate block from a plurality of blocks of the previous frame, and sends the memory address of the first candidate block to the previous frame memory 404. The previous frame memory 404, according to the memory address of the first candidate block, outputs the first candidate block. The first candidate block is then sent to the block matching module 408. Similarly, the data flow controller 402 selects a second candidate block from a plurality of blocks of the subsequent frame, and sends the memory address of the second candidate block to the subsequent frame memory 406. The subsequent frame memory 406, according to the memory address of the second candidate block, outputs the second candidate block. The second candidate block is then sent to the block matching module 408. In this embodiment, the use of two memories for storing the previous frame and the subsequent frame respectively is not intended to limit the scope of the invention, since many other configurations may be employed by the skilled artisan according to the present embodiment. - The
block matching module 408 calculates a sum of absolute differences (SAD) between the pixels of the first candidate block and the second candidate block. The SAD value indicates a difference level between the first candidate block and the second candidate block and is sent to the motion vector decision module 410. After all blocks of a candidate area of the previous frame are compared with all blocks of a candidate area of the subsequent frame, the motion vector decision module 410 can determine the matched blocks with a minimum SAD, and can calculate a motion vector between the two matched blocks. Referring to FIG. 4B , a diagram of a motion estimation process according to the invention is shown. The pixels of a block 420 in the subsequent frame are determined to match the pixels of a block 430 in the previous frame, and a motion vector 440 between the matched blocks 420 and 430 is determined accordingly. - Referring to
FIG. 5A , a block diagram of a motion compensation module 500 according to the invention is shown. In one embodiment, the motion compensation module 500 comprises an MV post processing module 502, an MV reliability analysis module 504, and an interpolation kernel 506. A motion estimation module sends a block motion vector and block information to the MV post processing module 502. The MV post processing module 502 then derives a pixel motion vector and pixel information from the block motion vector and the block information. The MV reliability analysis module 504 receives the pixel motion vector and the pixel information, and generates a motion vector reliability indicator. The MV reliability analysis module 504 then sends the received pixel motion vector and the motion vector reliability indicator to the interpolation kernel 506. The interpolation kernel 506 then accesses the original pixels according to the pixel motion vector, and performs an interpolation operation on those original pixels to generate interpolated pixels according to the motion vector reliability indicator. - Referring to
FIG. 5B , a schematic diagram of an example of a pixel interpolation process according to the invention is shown. To generate an interpolated pixel P of an interpolated frame, two pixels A and B are fetched from a previous frame, and two pixels C and D are fetched from a subsequent frame. A stationary average value STA corresponding to a non-motion-compensated interpolation process is then calculated by averaging the pixels B and C, and a motion compensation average value MCA corresponding to a motion-compensated interpolation process is then calculated by averaging the pixels A and D. An unreliability indication value URI is then used as a blending factor to mix the value MCA with the value STA to obtain the interpolated pixel value according to the following algorithm: -
MIX=(1−α)×MCA+α×STA, - wherein MIX is the interpolated pixel value, and the blending factor α is proportional to the URI value.
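For illustration only, the mixing rule above may be sketched per pixel as below, assuming the blending factor α has been normalized to the range [0, 1] from the URI value (the disclosure states only that α is proportional to URI):

```python
def interpolate_pixel(a, b, c, d, alpha):
    """Blend a motion-compensated average of pixels A and D with a
    stationary average of pixels B and C, using the blending factor
    alpha in [0, 1], which is proportional to the unreliability
    indication value URI."""
    sta = (b + c) / 2.0   # non-motion-compensated (stationary) average
    mca = (a + d) / 2.0   # motion-compensated average
    return (1.0 - alpha) * mca + alpha * sta
```

When the motion vector is reliable (α near 0), the output follows the motion-compensated average; when it is unreliable (α near 1), the output falls back to the stationary average.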
- Referring to
FIG. 6A , a schematic diagram of generation of an interpolated frame according to an MEMC process to eliminate discontinuities is shown. A discontinuity 612 is detected in a series of video frames. The discontinuity may be induced by decoding errors. A previous frame 611 prior to the discontinuity 612 and a subsequent frame 613 subsequent to the discontinuity 612 are then selected from the video frames. An interpolated frame 612″ is then generated according to an MEMC process. In comparison with the frame 612′ generated by duplicating the previous frame 611, the interpolated frame 612″ indicates a precise location of a ball. The interpolated frame 612″ is inserted into the frames having the discontinuity to obtain compensated frames without discontinuities, such that the judder artifact problem does not occur in the compensated frames. - Referring to
FIG. 6B , a schematic diagram of generation of multiple interpolated frames according to an MEMC process to increase a frame rate is shown. A video film with a frame rate of 20 Hz comprises two frames. To increase the frame rate, interpolated frames are generated according to the MEMC process and inserted between the two frames. In comparison with the frame repetition of FIG. 1 , the film compensated according to the MEMC process has no judder artifact problems. - Referring to
FIG. 7 , a diagram illustrates an embodiment of an MEMC interpolation process applied to video frames with a random sampling time. A series of video frames FN−3, FN−2, FN−1, FN+1, FN+2, FN+3, and FN+4 have random time stamps. In other words, the intervals between the time stamps of the video frames FN−3˜FN+4 are random. In one embodiment, the video frames FN−3˜FN+4 are a series of pictures photographed with a camera. To generate a series of compensated frames with a required frame rate, the detector 204 calculates time stamps of the compensated frames according to the required frame rate, and compares the time stamps of the compensated frames with the time stamps of the video frames FN−3˜FN+4 to generate the discontinuity information. The MEMC module 206 then synthesizes a series of interpolated frames according to the discontinuity information and the video frames, and then inserts the interpolated frames into the video frames FN−3˜FN+4 to obtain the compensated frames with the required frame rate. For example, the MEMC module 206 generates an interpolated frame FN according to the frames FN−1 and FN+1, and then inserts the interpolated frame FN between the frames FN−1 and FN+1. - Referring to
FIG. 8 , a diagram depicts an embodiment of an MEMC interpolation process applied to video frames with a non-constant frame rate. The frame rate of a series of video frames is not precisely constant. Although the video frames have a rough frame rate, e.g. 10 Hz, the interval between the frames FP and FS is not exactly 1/10 second. For example, the time stamp of the frame FP may not be exactly at the time “b”, which is a time stamp corresponding to a constant frame rate, and the time stamp of the frame FP may be at the time “a” or “c” with a drift error from the time “b”. Similarly, the actual time stamp of the frame FS may not be exactly at the time “g”, which is a time stamp corresponding to the constant frame rate, and the actual time stamp of the frame FS may be at the time “f” or “h” with a drift error from the time “g”. When the MEMC module 206 generates an interpolated frame F1 according to the video frames FP and FS with drift error, the content of the interpolated frame F1 may be affected by the drift error of the video frames FP and FS. For example, an interpolated frame F1 derived from the frame 801 with a time stamp at the time “a” and the frame 803 with a time stamp at the time “f” has the content of frame 805, while an interpolated frame F1 derived from the frame 802 with a time stamp at the time “c” and the frame 804 with a time stamp at the time “h” has the content of frame 806, and the content of frame 805 should be different from the content of frame 806. - To deal with the effect introduced by a drift error of reference frames in generation of an interpolated frame, the
MEMC module 206 may synthesize the interpolated frame F1 from the previous frame FP and the subsequent frame FS according to the following algorithm: -
Z=[X×(Ts−Td)/(Ts−Tp)]+[Y×(Td−Tp)/(Ts−Tp)], - wherein Z is the content of the interpolated frame F1, X is the content of the previous frame FP, Y is the content of the subsequent frame FS, Tp represents the time stamp of the previous frame FP, Td represents the time stamp of the interpolated frame F1, and Ts represents the time stamp of the subsequent frame FS.
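Treating each frame as a flat list of pixel values (an illustrative simplification, not the patent's representation), the timestamp-weighted blend above may be sketched as:

```python
def interpolate_frame(x, y, tp, td, ts):
    """Timestamp-weighted blend of previous-frame content X (at Tp) and
    subsequent-frame content Y (at Ts) to form the interpolated frame
    at Td, per Z = X*(Ts-Td)/(Ts-Tp) + Y*(Td-Tp)/(Ts-Tp)."""
    wp = (ts - td) / (ts - tp)   # weight of the previous frame
    ws = (td - tp) / (ts - tp)   # weight of the subsequent frame
    return [px * wp + py * ws for px, py in zip(x, y)]
```

Note that as Td approaches Tp the previous-frame weight approaches 1, so the interpolated frame resembles the previous frame, as expected.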
- In addition, to deal with the effect introduced by a drift error of reference frames in generation of an interpolated frame, the
MEMC module 206 may determine a first motion vector V1 indicating displacement from the previous frame FP to the interpolated frame F1 and a second motion vector V2 indicating displacement from the interpolated frame F1 to the subsequent frame FS according to the following algorithm: -
V1=V×(Td−Tp)/(Ts−Tp); and -
V2=V×(Ts−Td)/(Ts−Tp), - wherein V is the motion vector from the previous frame FP to the subsequent frame FS, Tp is the time stamp of the previous frame FP, Td is the time stamp of the interpolated frame F1, and Ts is the time stamp of the subsequent frame FS.
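The splitting of the motion vector V according to the time stamps may be sketched as follows; representing vectors as coordinate tuples is an illustrative choice, not part of the disclosure:

```python
def split_motion_vector(v, tp, td, ts):
    """Split the motion vector V (from FP at Tp to FS at Ts) into
    V1 (FP -> interpolated frame at Td) and
    V2 (interpolated frame -> FS), per the algorithm above."""
    v1 = tuple(c * (td - tp) / (ts - tp) for c in v)
    v2 = tuple(c * (ts - td) / (ts - tp) for c in v)
    return v1, v2
```

By construction V1 + V2 = V componentwise, so the two partial displacements always compose back into the full motion between FP and FS.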
- The MEMC method can also be applied to compensation of on-screen-display (OSD) images of a Flash user interface. When a Flash user interface is executed in a computer, a series of OSD images of the Flash user interface are sequentially displayed on a screen according to user inputs. If the number of OSD images is not large enough, a judder artifact problem also occurs when the Flash user interface is executed. To solve the judder artifact problem of the Flash user interface, a plurality of interpolated images are generated by the MEMC method according to the invention, and are then inserted between the OSD images of the Flash user interface when the Flash user interface is executed.
- The MEMC method can also be applied to compensation of video conference frames of Internet phones such as Skype. When a Voice over Internet Protocol (VoIP) phone is used to hold a video conference, the judder artifact problem is induced when the network bandwidth of the VoIP phone is not high enough or the frame rate of the video conference is low. To solve the judder artifact problem of a video conference, a plurality of interpolated images are generated by the MEMC method according to the invention, and are then inserted between the video frames of the video conference for smooth display while the video conference proceeds. Similarly, the MEMC method can also be applied to compensation of playback frames of a digital television system.
- Referring to
FIG. 9 , a diagram shows comparisons between applications of a conventional frame repetition method and an MEMC method of the invention. A decoder generates video frames at a frame rate of 30 frames per second (FPS). Due to decoding errors, the decoder may generate video frames at a frame rate lower than 30 FPS, for example at a frame rate of 27 FPS with discontinuities. According to a conventional frame repetition method, a repeater duplicates the frames prior to the discontinuities of the video frames so as to compensate the video frames. The discontinuities are therefore removed, and the frame rate of the compensated frames is increased to 30 FPS. The repeater further duplicates frames to increase the frame rate of the video frames from 30 FPS to 60 FPS. The judder artifact problem, however, occurs when the duplicated frames are displayed. - According to an MEMC method of the invention, an MEMC module generates interpolated frames according to the MEMC method and then inserts the interpolated frames between the video frames. The discontinuity is therefore removed, and the frame rate of the compensated frames is increased from 27 FPS to 30 FPS. The MEMC module further generates interpolated frames to increase the frame rate of the video frames from 30 FPS to 60 FPS. No judder artifact problems occur when the video frames are displayed. Similarly, when the decoder generates images at a lower frame rate such as 10 FPS or 15 FPS, the MEMC module according to the invention can also generate interpolated frames according to the MEMC method to increase the frame rate of the video frames to 60 FPS.
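The time-stamp comparison performed by the detector and the resulting rate conversion may be sketched as follows; the function name, tolerance, and scheduling policy are illustrative assumptions rather than the patented procedure:

```python
def plan_output_frames(input_ts, required_fps, duration, tol=1e-6):
    """For each output time stamp at the required frame rate, decide
    whether an existing decoded frame can be shown ("original") or an
    interpolated frame must be synthesized between its two nearest
    neighbours ("interpolated")."""
    period = 1.0 / required_fps
    plan = []
    n = 0
    t = 0.0
    while t < duration - tol:
        if any(abs(t - s) < tol for s in input_ts):
            plan.append(("original", t))
        else:
            plan.append(("interpolated", t))
        n += 1
        t = n * period
    return plan
```

For instance, with decoded frames at 0.0 s, 0.1 s, and 0.3 s and a required rate of 10 FPS, the output slot at 0.2 s has no decoded frame within tolerance and is marked for synthesis by the MEMC module.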
- While the invention has been described by way of example and in terms of preferred embodiment, it is to be understood that the invention is not limited thereto. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
Claims (15)
1. A video processing apparatus, comprising:
a decoder, decoding video data to generate a series of video frames with time stamps;
a detector, detecting discontinuity of the video frames to generate discontinuity information; and
a motion estimation and motion compensation (MEMC) module, selecting a previous frame prior to the discontinuity and a subsequent frame after the discontinuity from the video frames according to the discontinuity information, performing a motion estimation process to determine at least one motion vector, performing a motion compensation process according to the motion vector to synthesize an interpolated frame from the previous frame and the subsequent frame, and inserting the interpolated frame into the video frames to obtain a series of compensated frames.
2. The video processing apparatus as claimed in claim 1 , wherein the MEMC module comprises:
a motion estimation module, performing the motion estimation process to determine the motion vector between the previous frame and the subsequent frame; and
a motion compensation module, performing the motion compensation process according to the motion vector to synthesize the interpolated frame from the previous frame and the subsequent frame.
3. The video processing apparatus as claimed in claim 1 , wherein the detector calculates time stamps of the compensated frames according to a required frame rate and compares the time stamps of the compensated frames and the time stamps of the video frames to generate the discontinuity information, the MEMC module synthesizes the interpolated frames according to the discontinuity information and the video frames, and the MEMC module inserts the interpolated frames into the video frames to obtain the compensated frames with the required frame rate.
4. The video processing apparatus as claimed in claim 1 , wherein the interpolated frame is synthesized from the previous frame and the subsequent frame in accordance with the time stamp of the previous frame, the time stamp of the subsequent frame and the time stamp of the interpolated frame.
5. The video processing apparatus as claimed in claim 1 , wherein the MEMC module determines a first motion vector indicating displacement from the previous frame to the interpolated frame and a second motion vector indicating displacement from the interpolated frame to the subsequent frame as references for the motion compensation process according to the time stamp of the previous frame, the time stamp of the subsequent frame and the time stamp of the interpolated frame.
6. The video processing apparatus as claimed in claim 1 , wherein the video frames are a series of on-screen-display (OSD) images of a Flash user interface, and the video processing apparatus generates the compensated frames according to the OSD images for smooth display.
7. The video processing apparatus as claimed in claim 1 , wherein the video frames are frames of a video conference or video call, and the video processing apparatus generates the compensated frames with an increased frame rate for smooth display.
8. The video processing apparatus as claimed in claim 1 , wherein the video frames are playback frames of a digital television system, and the video processing apparatus generates the compensated frames with an increased frame rate for smooth display.
9. A video processing method, wherein a video processing apparatus comprises a decoder, a detector, and a motion estimation and motion compensation (MEMC) module, comprising:
decoding video data by the decoder to generate a series of video frames with time stamps;
detecting discontinuity of the video frames by the detector to generate discontinuity information;
selecting a previous frame prior to the discontinuity and a subsequent frame after the discontinuity from the video frames by the MEMC module according to the discontinuity information;
performing a motion estimation process by the MEMC module to determine at least one motion vector;
performing a motion compensation process by the MEMC module according to the motion vector to synthesize an interpolated frame from the previous frame and the subsequent frame; and
inserting the interpolated frame by the MEMC module into the video frames to obtain a series of compensated frames.
10. The video processing method as claimed in claim 9 , wherein the intervals between the time stamps of the video frames are random, and detecting of the discontinuity comprises:
calculating time stamps of the compensated frames by the detector according to a required frame rate; and
comparing the time stamps of the compensated frames and the time stamps of the video frames to generate the discontinuity information,
wherein the interpolated frames are synthesized according to the discontinuity information, and the interpolated frames are inserted into the video frames to obtain the compensated frames with the required frame rate.
11. The video processing method as claimed in claim 9 , wherein the interpolated frame is synthesized from the previous frame and the subsequent frame in accordance with the time stamp of the previous frame, the time stamp of the subsequent frame and the time stamp of the interpolated frame.
12. The video processing method as claimed in claim 9 , wherein the step of performing a motion estimation process further comprises:
determining a first motion vector indicating displacement from the previous frame to the interpolated frame and a second motion vector indicating displacement from the interpolated frame to the subsequent frame as references for the motion compensation process according to the time stamp of the previous frame, the time stamp of the subsequent frame and the time stamp of the interpolated frame.
13. The video processing method as claimed in claim 9 , wherein the video frames are a series of on-screen-display (OSD) images of a Flash user interface, and the compensated frames are generated according to the OSD images for smooth display.
14. The video processing method as claimed in claim 9 , wherein the video frames are frames of a video conference or video call, and the compensated frames are generated with an increased frame rate for smooth display.
15. The video processing method as claimed in claim 9 , wherein the video frames are playback frames of a digital television system, and the compensated frames are generated with an increased frame rate for smooth display.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/590,504 US20140056354A1 (en) | 2012-08-21 | 2012-08-21 | Video processing apparatus and method |
EP12182160.7A EP2701386A1 (en) | 2012-08-21 | 2012-08-29 | Video processing apparatus and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/590,504 US20140056354A1 (en) | 2012-08-21 | 2012-08-21 | Video processing apparatus and method |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140056354A1 true US20140056354A1 (en) | 2014-02-27 |
Family
ID=46754337
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/590,504 Abandoned US20140056354A1 (en) | 2012-08-21 | 2012-08-21 | Video processing apparatus and method |
Country Status (2)
Country | Link |
---|---|
US (1) | US20140056354A1 (en) |
EP (1) | EP2701386A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105828106B (en) * | 2016-04-15 | 2019-01-04 | 山东大学苏州研究院 | A kind of non-integral multiple frame per second method for improving based on motion information |
TW201834455A (en) * | 2016-12-05 | 2018-09-16 | 晨星半導體股份有限公司 | Stereoscopic image stream processing device and stereoscopic image stream processing method |
CN107360424B (en) * | 2017-07-28 | 2019-10-25 | 深圳岚锋创视网络科技有限公司 | A kind of bit rate control method based on video encoder, device and video server |
CN111277779B (en) * | 2020-03-05 | 2022-05-06 | Oppo广东移动通信有限公司 | Video processing method and related device |
CN116260928B (en) * | 2023-05-15 | 2023-07-11 | 湖南马栏山视频先进技术研究院有限公司 | Visual optimization method based on intelligent frame insertion |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050129124A1 (en) * | 2003-12-10 | 2005-06-16 | Tae-Hyeun Ha | Adaptive motion compensated interpolating method and apparatus |
US20080007653A1 (en) * | 2006-07-07 | 2008-01-10 | Kabushiki Kaisha Toshiba | Packet stream receiving apparatus |
US20100231800A1 (en) * | 2009-03-12 | 2010-09-16 | White Christopher J | Display of video with motion |
US20120176536A1 (en) * | 2011-01-12 | 2012-07-12 | Avi Levy | Adaptive Frame Rate Conversion |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7242850B2 (en) * | 2001-02-23 | 2007-07-10 | Eastman Kodak Company | Frame-interpolated variable-rate motion imaging system |
JP4198550B2 (en) * | 2002-09-10 | 2008-12-17 | 株式会社東芝 | Frame interpolation method and apparatus using the frame interpolation method |
KR100541953B1 (en) * | 2003-06-16 | 2006-01-10 | 삼성전자주식회사 | Pixel-data selection device for motion compensation, and method of the same |
EP1843587A1 (en) * | 2006-04-05 | 2007-10-10 | STMicroelectronics S.r.l. | Method for the frame-rate conversion of a digital video signal and related apparatus |
JP2008135980A (en) * | 2006-11-28 | 2008-06-12 | Toshiba Corp | Interpolation frame generating method and interpolation frame generating apparatus |
WO2009109936A1 (en) * | 2008-03-05 | 2009-09-11 | Nxp B.V. | Arrangement and approach for video data up-conversion |
KR20100016741A (en) * | 2008-08-05 | 2010-02-16 | 삼성전자주식회사 | Image processing apparatus and control method thereof |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170324974A1 (en) * | 2016-05-06 | 2017-11-09 | Himax Technologies Limited | Image processing apparatus and image processing method thereof |
US10015513B2 (en) * | 2016-05-06 | 2018-07-03 | Himax Technologies Limited | Image processing apparatus and image processing method thereof |
CN112437241A (en) * | 2017-12-27 | 2021-03-02 | 安纳帕斯股份有限公司 | Frame rate detection method and frame rate conversion method |
US11303847B2 (en) * | 2019-07-17 | 2022-04-12 | Home Box Office, Inc. | Video frame pulldown based on frame analysis |
US11711490B2 (en) | 2019-07-17 | 2023-07-25 | Home Box Office, Inc. | Video frame pulldown based on frame analysis |
WO2022143078A1 (en) * | 2020-12-28 | 2022-07-07 | 深圳创维-Rgb电子有限公司 | Video automatic motion compensation method, apparatus, and device, and storage medium |
CN113141537A (en) * | 2021-04-02 | 2021-07-20 | Oppo广东移动通信有限公司 | Video frame insertion method, device, storage medium and terminal |
CN113473184A (en) * | 2021-07-27 | 2021-10-01 | 咪咕音乐有限公司 | Video color ring tone blocking processing method, terminal equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
EP2701386A1 (en) | 2014-02-26 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MEDIATEK INC., TAIWAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHEN, WEI-JEN;CHUANG, HUNG-CHANG;WU, CHUNGYI;AND OTHERS;REEL/FRAME:028819/0976 Effective date: 20120810 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |