CN111836055B - Image processing device and image block matching method based on image content for MEMC - Google Patents


Info

Publication number: CN111836055B
Authority: CN (China)
Prior art keywords: image, partition, image partition, consistency degree, motion estimation
Legal status: Active
Application number: CN202010691931.6A
Other languages: Chinese (zh)
Other versions: CN111836055A
Inventors: 徐赛杰, 李锋, 余横, 汪佳丽
Current Assignee: Shanghai Shunjiu Electronic Technology Co ltd
Original Assignee: Shanghai Shunjiu Electronic Technology Co ltd
Application filed by Shanghai Shunjiu Electronic Technology Co ltd
Priority to CN202010691931.6A
Publication of CN111836055A
Application granted
Publication of CN111836055B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: using adaptive coding
    • H04N19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/176: the region being a block, e.g. a macroblock
    • H04N19/50: using predictive coding
    • H04N19/503: involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/513: Processing of motion vectors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image processing device and an image-content-based block matching method for MEMC. The image processing device comprises an MEMC module, which in turn comprises a preprocessing unit, a motion estimation unit, and a motion compensation unit. The preprocessing unit is used for receiving an image frame and judging the scene consistency degree of each image partition in the image frame; the motion estimation unit is used for dynamically controlling, according to the scene consistency degree of each image partition, the number of candidate vectors of the image blocks in that partition used by the search algorithm, and for screening an optimal vector out of the candidate vectors through a matching algorithm as the motion estimation result; the motion compensation unit is used for generating a motion compensation frame according to the motion estimation result. The device and method reduce the complexity of the search algorithm, improve the speed and accuracy of the MEMC module's estimation, make playback on the display device smoother, and bring users a better visual effect.

Description

Image processing device and image block matching method based on image content for MEMC
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing apparatus and an image block matching method for MEMC based on image content.
Background
The frame rate of current mainstream movies is 24 fps, the frame rate of television programs is typically 25 fps, and the refresh rate of current mainstream display devices is 60 Hz. When people watch television programs or movies, the picture reaches the display device's refresh rate simply by repeating frames. This method adds no new picture content to the video and does not genuinely improve the quality of moving pictures, so high-speed motion scenes such as ball games exhibit smearing or blurring.
Motion Estimation and Motion Compensation (MEMC) is a technology widely used for frame rate conversion. It estimates the motion trajectory of an object across consecutive images and then interpolates an intermediate image from the image data and the obtained motion vectors, raising the video frame rate and alleviating judder and trailing during video playback.
Motion estimation based on image block matching is a key technology of MEMC, and the problem to be solved is how to improve the estimation speed and accuracy of the MEMC.
Disclosure of Invention
The embodiments of the application provide an image processing device and an image block matching method for MEMC (Motion Estimation and Motion Compensation). The scene consistency degree of each image partition in an image frame is judged; the number of candidate vectors of the image blocks in each partition used by the search algorithm is dynamically controlled according to that degree; an optimal vector is screened out of the candidate vectors by a matching algorithm as the motion estimation result; and a motion compensation frame is generated from the motion estimation result. Dynamically controlling the number of candidate vectors reduces the complexity of the search algorithm and improves the speed and accuracy of motion estimation and motion compensation, so that a display device plays video containing high-speed motion scenes more smoothly and clearly, bringing users a high-quality visual experience.
In a first aspect, the present application provides an image processing apparatus comprising a MEMC module comprising a pre-processing unit, a motion estimation unit, and a motion compensation unit,
the preprocessing unit is configured to receive an image frame and judge the scene consistency degree of each image partition in the image frame;
the motion estimation unit is configured to dynamically control the number of candidate vectors of image blocks in each image partition in a search algorithm according to the scene consistency degree result of each image partition, and screen out an optimal vector from the candidate vectors through a matching algorithm to serve as a motion estimation result;
the motion compensation unit is configured to generate a motion compensated frame from the motion estimation result.
In a feasible manner, after receiving the image frame and before determining the scene consistency degree of each image partition in the image frame, the preprocessing unit is further configured to partition the image frame according to a first preset parameter to obtain a plurality of image partitions, and partition the image frame according to a second preset parameter to obtain a plurality of image blocks, where the plurality of image partitions are not overlapped with each other, and the plurality of image blocks are not overlapped with each other.
In a feasible mode, the first preset parameters comprise the number of transverse pixels and the number of longitudinal pixels of the image partition, the second preset parameters comprise the number of transverse pixels and the number of longitudinal pixels of the image block, the number of transverse pixels of the image block is smaller than the number of transverse pixels of the image partition, and the number of longitudinal pixels of the image block is smaller than the number of longitudinal pixels of the image partition.
In a feasible manner, the determining, by the preprocessing unit, a scene consistency degree of each image partition in the image frame specifically includes:
if the image detail value of the image partition is not smaller than a first threshold value, judging that the scene consistency degree of the image partition is low;
if the image detail value of the image partition is not larger than a second threshold value, judging that the scene consistency degree of the image partition is high;
if the image detail value of the image partition is between the first threshold and the second threshold, the scene consistency degree of the image partition is judged to be middle,
wherein the first threshold is greater than the second threshold.
In a possible manner, calculating the image detail value of the image partition specifically includes:
calculating the sum of absolute differences of row pixel attribute information obtained by subtracting adjacent rows in the image partition;
calculating the sum of absolute differences of column pixel attribute information obtained by subtracting adjacent columns in the image partition;
and adding the sum of the absolute differences of the row pixel attribute information and the sum of the absolute differences of the column pixel attribute information to obtain an image detail value of the image partition.
In a feasible manner, the dynamically controlling, by the motion estimation unit, the number of candidate vectors of image blocks in each image partition in a search algorithm according to the scene consistency degree of each image partition in the image frame specifically includes:
if the scene consistency degree of the image partition is low, increasing the number of candidate vectors of the image blocks in the image partition;
if the scene consistency degree of the image partition is middle, keeping the number of the candidate vectors of the image block in the image partition as the number of the initial candidate vectors unchanged;
if the scene consistency degree of the image partition is high, reducing the number of candidate vectors of the image blocks in the image partition;
wherein the number of initial candidate vectors is preset.
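The control rule above can be sketched as a small function. The function name, the initial candidate count, and the adjustment step below are illustrative assumptions, not values prescribed by the patent:

```python
# Sketch of the candidate-vector count control described above.
# INITIAL_CANDIDATES and STEP are illustrative values, not from the patent.
INITIAL_CANDIDATES = 8
STEP = 4

def candidate_vector_count(consistency: str) -> int:
    """Return the number of candidate vectors for blocks in a partition,
    given the partition's scene consistency degree."""
    if consistency == "low":        # rich detail: search more candidates
        return INITIAL_CANDIDATES + STEP
    if consistency == "high":       # flat scene: fewer candidates suffice
        return max(1, INITIAL_CANDIDATES - STEP)
    return INITIAL_CANDIDATES       # "middle": keep the preset count
```

In this way, partitions with low scene consistency (more image detail) get a larger share of the search budget, while flat partitions give some of theirs up.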
In a second aspect, the present application provides an image processing apparatus comprising a pre-processing unit, a motion estimation unit, and a motion compensation unit,
the preprocessing unit is configured to receive an image frame and judge the scene consistency degree of each image partition in the image frame;
the motion estimation unit is configured to dynamically control the number of candidate vectors of image blocks in each image partition in a search algorithm according to the scene consistency degree of each image partition, and screen out an optimal vector from a plurality of candidate vectors through a matching algorithm to serve as a motion estimation result;
the motion compensation unit is configured to generate a motion compensated frame from the motion estimation result.
In a third aspect, the present application provides an MEMC image block matching method based on image content, where the method includes:
receiving an image frame;
judging the scene consistency degree of each image partition in the image frame;
according to the scene consistency degree of each image partition, dynamically controlling the number of candidate vectors of image blocks in each image partition in a search algorithm;
screening an optimal vector from the candidate vectors through a matching algorithm to serve as a motion estimation result;
and outputting the motion estimation result of the image frame.
In a fourth aspect, the present application provides a display device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of the third aspect when executing the program stored in the memory.
In a fifth aspect, the present application provides a computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and when being executed by a processor, the computer program implements the method steps of the third aspect.
Compared with the prior art, the image processing device and the image-content-based block matching method provided by the embodiments of the application judge the scene consistency degree of each image partition and dynamically adjust the number of candidate vectors of the image blocks in the partition according to the result. This reduces the complexity of the search algorithm, improves the estimation speed and accuracy of the MEMC module, and devotes the limited hardware resources to the search positions that need them most, improving the search efficiency of motion estimation. Problems such as judder and trailing during video playback are alleviated, so the display device plays video containing high-speed motion scenes more smoothly and clearly, bringing users a better visual effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an image frame, an image partition and an image block according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an image partition H in FIG. 2 according to an embodiment of the present disclosure;
fig. 4 is a schematic position diagram of a candidate vector according to an embodiment of the present application;
fig. 5 is a schematic diagram of motion vectors of adjacent image blocks according to an embodiment of the present application;
fig. 6 is a second schematic diagram illustrating motion vectors of adjacent image blocks according to an embodiment of the present application;
fig. 7 is a third schematic diagram of motion vectors of adjacent image blocks according to an embodiment of the present application;
fig. 8 is a fourth schematic view of motion vectors of adjacent image blocks according to an embodiment of the present application;
fig. 9 is a fifth schematic view of motion vectors of adjacent image blocks according to an embodiment of the present application;
fig. 10 is a schematic flowchart of an image block matching method based on image content for MEMC according to an embodiment of the present application;
fig. 11 is a second flowchart of an MEMC image block matching method based on image content according to the second embodiment of the present application;
fig. 12 is a schematic diagram of a hardware structure of a display device according to an embodiment of the present application;
fig. 13 is a schematic view of an application scenario of a display device 1100 according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
The method flow diagrams of the embodiments of the invention described below are merely exemplary and do not necessarily include all of the contents and steps, nor do they necessarily have to be performed in the order described. For example, some steps may be broken down and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
In the current market, in order to bring the frame rate of a video source up to the refresh rate of the display device, many display devices are equipped with an MEMC function, and image-block-based matching is the most common method in motion estimation. The principle of image block matching is as follows:
assuming that a front frame image and a rear frame image are respectively represented by a K-1 frame image and a K frame image, the K-1 frame image is positioned before the K frame image, the K-1 frame image and the K frame image are equally divided into a plurality of non-overlapping image blocks by taking an image block M x N as a unit (M represents the number of transverse pixels and N represents the number of longitudinal pixels of the image block), and then in the K-1 frame image, an image block G estimated for the current motion is in the image block G K-1 If the image block G is K-1 The image block which is found in the K frame image and is most matched with the image block is the image block G K Then we consider the image block G in the Kth frame image K Is composed of image blocks G in the K-1 frame image K-1 Obtained by shifting, thus the image block G K-1 To image block G K The position change of (2) is recorded as an image block G K-1 The motion vector of (2). For example, image block G K-1 The position in the K-1 frame image is (1, 1), and the image block G K The position in the K-th frame image is (1, 2), then the image block G K-1 To image block G K Is (0, 1)) Then image block G K-1 The motion vector of (2) is (0, 1).
The performance of image-block-based matching is mainly determined by three factors: the matching criterion, the search algorithm, and the search area; its complexity is determined by the same three factors. The conventional matching criterion is SAD (Sum of Absolute Differences); the search area defines how far a block's motion can be searched; and the search algorithm can be chosen flexibly, e.g. a full search algorithm or a three-step search algorithm.
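A minimal sketch of SAD-based full-search block matching, in pure Python. The function names, block size, and search radius are assumptions for illustration; the patent does not prescribe this exact implementation:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks
    (each a list of rows of pixel values)."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def get_block(frame, y, x, h, w):
    """Extract the h x w block whose top-left corner is (y, x)."""
    return [row[x:x + w] for row in frame[y:y + h]]

def full_search(prev_frame, cur_frame, y, x, h, w, radius):
    """Find the motion vector (dy, dx) of the h x w block at (y, x) in
    prev_frame by exhaustively matching against cur_frame within radius."""
    ref = get_block(prev_frame, y, x, h, w)
    rows, cols = len(cur_frame), len(cur_frame[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ny, nx = y + dy, x + dx
            # Only consider candidate positions fully inside the frame
            if 0 <= ny <= rows - h and 0 <= nx <= cols - w:
                cost = sad(ref, get_block(cur_frame, ny, nx, h, w))
                if cost < best_cost:
                    best_cost, best = cost, (dy, dx)
    return best
```

On a pair of toy frames in which a bright block shifts one column to the right between frames, `full_search` returns the motion vector (0, 1), consistent with the worked example in the preceding paragraph.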
Based on this, the application provides an image processing apparatus, which reduces the complexity of motion estimation by improving the search algorithm to improve the estimation speed and accuracy of image block matching.
Fig. 1 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of an image frame, an image partition and an image block according to an embodiment of the present disclosure.
As shown in fig. 1, the present embodiment provides an image processing apparatus 100 for image-quality processing of displayed video. It may be disposed in a standalone image processing chip, in an SoC chip with image processing capability, or in a TCON chip with image processing capability; the present disclosure is not limited in this respect.
The image processing apparatus 100 provided in the embodiment of the present application includes an MEMC module 101 for performing motion estimation and motion compensation on moving video images. MEMC is a motion picture quality compensation technology: using a dynamic mapping system, it inserts a motion compensation frame between two conventional frames, making moving pictures clearer and smoother than the display's normal response allows. This removes the residual image of the previous frame, improves dynamic definition, and reduces image trailing to a level the human eye can hardly perceive.
The MEMC module 101 provided in the embodiment of the present application includes at least three parts, which are a preprocessing unit 1011, a motion estimation unit 1012, and a motion compensation unit 1013. The preprocessing unit 1011 is configured to receive an image frame and determine a scene consistency degree of each image partition in the image frame; the motion estimation unit 1012 is configured to dynamically control the number of candidate vectors of image blocks in each image partition in the search algorithm according to the degree of scene consistency of each image partition, and to screen out an optimal vector from the plurality of candidate vectors as a motion estimation result by a matching algorithm; the motion compensation unit 1013 is configured to generate motion compensated frames from the motion estimation results.
For example, if the frame rate of a television program is 30 fps and the refresh rate of the display device is 60 Hz, the MEMC module performs motion estimation on the current image frame of the program to obtain a motion estimation result, and then generates a motion compensation frame from that result, so that the moving pictures of the program, processed with the MEMC technology, play more smoothly and clearly.
Referring again to fig. 1, the preprocessing unit 1011, the motion estimation unit 1012, and the motion compensation unit 1013 of the MEMC module 101 are further described below.
The preprocessing unit 1011 is used for receiving the image frame and judging the scene consistency degree of each image partition in the image frame. Wherein the image frames are from a video stream received by the display device. The preprocessing unit 1011 is further configured to partition the image frame according to a first preset parameter after receiving the image frame and before determining the scene consistency degree of each image partition in the image frame, that is, divide the whole image into a plurality of non-overlapping image partitions; and partitioning the image frame according to a second preset parameter, namely dividing the whole image into a plurality of non-overlapping image blocks.
The first preset parameter comprises the number of transverse pixels and the number of longitudinal pixels of the image partition, and the second preset parameter comprises the number of transverse pixels and the number of longitudinal pixels of the image block, wherein the number of transverse pixels of the image block is smaller than the number of transverse pixels of the image partition, and the number of longitudinal pixels of the image block is smaller than the number of longitudinal pixels of the image partition.
It should be understood that the two processes of partitioning the whole image according to the first preset parameter and partitioning the whole image according to the second preset parameter by the preprocessing unit 1011 are not conflicting, and may be executed simultaneously or sequentially, and when executed sequentially, the order between them is not limited herein.
According to the first preset parameter and the second preset parameter, one image partition includes a plurality of image blocks, and for a clearer description of the relationship among the image frames, the image partitions, and the image blocks, please refer to fig. 2.
As shown in fig. 2, the current image frame includes 3840 × 2160 pixels, that is, 3840 horizontal pixels and 2160 vertical pixels; in the first preset parameter, the number of horizontal pixel points representing the image partition is 480, and the number of longitudinal pixel points is 540; in the second preset parameter, the number of horizontal pixel points representing the image block is 8, and the number of vertical pixel points is 4.
According to the first preset parameter, the image frame has 32 (8 × 4) non-overlapping image partitions, each image partition contains 480 × 540 pixels, and according to the second preset parameter, the image frame has 259200 (480 × 540) non-overlapping image blocks, each image block contains 8 × 4 pixels. According to the first preset parameter and the second preset parameter, 8100 (60 × 135) image blocks are included in one image partition.
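The counts in the example above follow directly from the two preset parameters. A short script checks the arithmetic (the sizes are those of the example, and it assumes the frame dimensions divide evenly by the partition and block sizes):

```python
FRAME_W, FRAME_H = 3840, 2160   # pixels in the example image frame
PART_W, PART_H = 480, 540       # first preset parameter: partition size
BLOCK_W, BLOCK_H = 8, 4         # second preset parameter: block size

# Non-overlapping partitions and blocks tile the frame exactly
partitions = (FRAME_W // PART_W) * (FRAME_H // PART_H)            # 8 * 4
blocks = (FRAME_W // BLOCK_W) * (FRAME_H // BLOCK_H)              # 480 * 540
blocks_per_partition = (PART_W // BLOCK_W) * (PART_H // BLOCK_H)  # 60 * 135

print(partitions, blocks, blocks_per_partition)
```

This reproduces the figures in the text: 32 partitions, 259200 blocks, and 8100 blocks per partition.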
It should be noted that, in practical application, the first preset parameter and the second preset parameter may be flexibly set according to the number of horizontal pixels and the number of longitudinal pixels of the image frame, as long as the number of horizontal pixels of the image block is less than the number of horizontal pixels of the image partition, and the number of longitudinal pixels of the image block is less than the number of longitudinal pixels of the image partition, in other words, as long as one image partition includes a plurality of image blocks.
After the current image frame is divided into image partitions and image blocks, the preprocessing unit 1011 needs to determine the scene consistency degree of each image partition in the current image frame. In the embodiment of the present application, the process of determining the scene consistency degree of each image partition by the preprocessing unit 1011 is the same, and the following takes the image partition H in fig. 2 as an example to specifically describe how the preprocessing unit 1011 determines the scene consistency degree of the image partition H, and the process of determining the scene consistency degree of other image partitions can refer to the process of determining the scene consistency degree of the image partition H.
In the embodiment of the application, the scene consistency degree of the image partition H is judged from the image detail value of the image partition H. Specifically, the image detail value of the partition is compared with two register-controlled thresholds, reg_high and reg_low, and the scene consistency degree of the image partition H is classified as high, middle, or low according to the comparison result. The specific judgment process is as follows:
if the image detail value of the image partition H is not less than the register-controlled threshold reg_high, the scene consistency degree of the image partition H is judged to be low;
if the image detail value of the image partition H is not greater than the register-controlled threshold reg_low, the scene consistency degree of the image partition H is judged to be high;
if the image detail value of the image partition H is between reg_low and reg_high, the scene consistency degree of the image partition H is judged to be middle.
For example, suppose reg_low is 100 and reg_high is 200. If the image detail value calculated for the image partition H is 600, it is not less than reg_high, and the scene consistency degree of the partition is judged to be low; if the calculated image detail value is 80, it is not greater than reg_low, and the scene consistency degree is judged to be high; if the calculated image detail value is 150, it lies between reg_low and reg_high, and the scene consistency degree is judged to be middle.
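The comparison can be expressed as a small function. The default threshold values below are the example's reg_low = 100 and reg_high = 200, and the boundary handling follows the "not less than" / "not greater than" wording above:

```python
def scene_consistency(detail_value, reg_low=100, reg_high=200):
    """Classify a partition's scene consistency degree from its detail value."""
    if detail_value >= reg_high:   # "not less than reg_high" -> low consistency
        return "low"
    if detail_value <= reg_low:    # "not greater than reg_low" -> high consistency
        return "high"
    return "middle"                # strictly between the two thresholds
```

With these thresholds, a detail value of 600 classifies as low, 80 as high, and 150 as middle, matching the example above.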
It should be noted that a plurality of image blocks are included in the image partition H, and when the scene consistency degree of the image partition H is determined, the scene consistency degree of the plurality of image blocks included in the image partition H is also determined. For example, referring to the example shown in fig. 2, when it is determined that the degree of scene consistency of the image partition H is high, it is considered that the degree of scene consistency of all the image blocks included in the image partition H is also high, that is, the degree of scene consistency of 8100 image blocks in the image partition H is high.
The following describes how the embodiment of the present application calculates the image detail value of the image partition H.
First, the pixel attribute information of a pixel point is explained. The basic units forming an image are pixel points, and each pixel point carries its own pixel attribute information, which determines the picture content it presents. What color a pixel point shows depends on its color components, and these color components constitute the pixel attribute information of the pixel point.
Common color coding methods include RGB format, YUV format, and YCbCr format. In the RGB format, "R" represents a red component, "G" represents a green component, and "B" represents a blue component. In the YUV format, "Y" represents a luminance component, i.e., a gray value, and "U" and "V" each represent a chrominance component. In the YCbCr format, "Y" represents a luminance component, i.e., a gradation value, "Cb" represents a blue density shift amount, and "Cr" represents a red density shift amount.
In the embodiment of the present application, any color coding format may be used as long as it can represent the pixel attribute information of a pixel point; the application places no restriction on the color coding format adopted.
For example, with the YUV color coding format, the Y, U, and V components can be mixed into a single blending value according to a chosen rule, and this blending value then represents the pixel attribute information of the pixel point. How the pixel attribute information of a pixel point is calculated is not specifically limited in this application.
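For illustration only, one possible blending rule might be a weighted mix of the three components; the weights below are an assumption, since the text deliberately leaves the rule unspecified:

```python
def pixel_attribute(y, u, v, weights=(0.5, 0.25, 0.25)):
    """Blend YUV components into a single attribute value.

    The weight choice is hypothetical; the application does not fix
    the blending rule.
    """
    wy, wu, wv = weights
    return wy * y + wu * u + wv * v
```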
Fig. 3 is a schematic diagram of an image partition H in fig. 2 according to an embodiment of the present disclosure.
As shown in fig. 3, the following describes how to calculate the image detail value of image partition H. From the description above, image partition H has 480 horizontal pixel points and 540 vertical pixel points, i.e., 540 rows and 480 columns. B(i,j) denotes the pixel attribute information of a pixel point, where subscript i is the row number of the current pixel point and subscript j is its column number; for example, B(1,2) denotes the pixel attribute information of the pixel point in the first row and second column.
If an image partition has L rows and V columns in total, and the pixel attribute information of a pixel point is denoted C(i,j), the calculation proceeds as follows:

The first step: calculate Sum_L, the sum of absolute differences of pixel attribute information between adjacent rows in the image partition, i.e.

Sum_L = Σ_{i=1}^{L-1} Σ_{j=1}^{V} |C(i+1,j) - C(i,j)|

The second step: calculate Sum_V, the sum of absolute differences of pixel attribute information between adjacent columns in the image partition, i.e.

Sum_V = Σ_{i=1}^{L} Σ_{j=1}^{V-1} |C(i,j+1) - C(i,j)|

The third step: add Sum_L and Sum_V to obtain the image detail value of the image partition.
Referring to fig. 3 again, image partition H has 540 rows and 480 columns in total, and the pixel attribute information of a pixel point is denoted B(i,j). The image detail value of image partition H is calculated as follows:

The first step: calculate Sum_L_H, the sum of absolute differences of pixel attribute information between adjacent rows in image partition H, i.e.

Sum_L_H = Σ_{i=1}^{539} Σ_{j=1}^{480} |B(i+1,j) - B(i,j)|

The second step: calculate Sum_V_H, the sum of absolute differences of pixel attribute information between adjacent columns in image partition H, i.e.

Sum_V_H = Σ_{i=1}^{540} Σ_{j=1}^{479} |B(i,j+1) - B(i,j)|

The third step: add Sum_L_H and Sum_V_H to obtain the image detail value of image partition H.
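The three-step calculation can be sketched as follows (a plain-Python illustration over a 2-D list of attribute values, not the hardware implementation):

```python
def image_detail_value(partition):
    """Image detail value of one partition: sum of absolute differences
    between adjacent rows (Sum_L) plus adjacent columns (Sum_V)."""
    rows, cols = len(partition), len(partition[0])
    sum_l = sum(abs(partition[i + 1][j] - partition[i][j])
                for i in range(rows - 1) for j in range(cols))
    sum_v = sum(abs(partition[i][j + 1] - partition[i][j])
                for i in range(rows) for j in range(cols - 1))
    return sum_l + sum_v
```

A perfectly flat partition yields 0, matching the intuition that low detail means high scene consistency.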
In this way, the image detail value of image partition H is obtained, and the scene consistency degree of image partition H is judged from the relationship between this detail value and the register-controlled thresholds reg_high and reg_low; this in turn gives the scene consistency degree of all image blocks in image partition H.
It should be noted that, in the embodiment of the present application, judging the scene consistency degree of an image partition from its image detail value is only one feasible approach. In practical applications, any method in the prior art may be used to judge the scene consistency degree of an image partition; details can be found in the prior art and are not repeated here.
Please refer to the image processing apparatus 100 shown in fig. 1: the MEMC module 101 further includes a motion estimation unit 1012. In this application, the motion estimation unit 1012 is configured to dynamically control, in the search algorithm, the number of candidate vectors of the image blocks in each image partition according to the scene consistency degree of each image partition judged by the preprocessing unit 1011, and to screen an optimal vector from the multiple candidate vectors through a matching algorithm as the motion estimation result.
The candidate vectors of an image block include: the zero vector, random vectors, and motion vectors of adjacent image blocks. Dynamically controlling the number of candidate vectors specifically means controlling the number of random vectors and of motion vectors of adjacent image blocks; the number of zero vectors does not change, because there is only one zero vector.
Each candidate vector corresponds to an image block used for matching against the current image block. Accordingly, if there are 10 candidate vectors, there are also 10 corresponding image blocks, and these 10 image blocks are matched against the current image block.
Fig. 4 is a schematic position diagram of a candidate vector according to an embodiment of the present application.
As shown in fig. 4, taking the image block P_(k-1) in image partition W of the (K-1)-th frame as an example, the positions of the image blocks corresponding to its candidate vectors in image partition W' of the K-th frame are described; image partitions W and W' occupy the same position in the two successive frames. The candidate vectors include the zero vector, random vectors, and motion vectors of adjacent image blocks.
The zero vector indicates that the position offset of the image block is zero. For example, as shown in fig. 4, in image partition W of the (K-1)-th frame, the image block P_(k-1) currently undergoing motion estimation is at position (3, 4), i.e., in the third row and fourth column of image partition W. The image block P_k corresponding to the zero vector of P_(k-1) is then located in the third row and fourth column of image partition W' as well, i.e., the position of P_k is (3, 4).
A random vector indicates that the position offset of the image block is a random number, but the random number has a definite value range: the search area of the image block. For example, as shown in fig. 4, take the current motion estimation image block P_(k-1) at position (3, 4). If the maximum displacement of P_(k-1) is 2 image blocks horizontally and 2 image blocks vertically, then with the coordinates (3, 4) of P_(k-1) as the center, the image blocks at the four corners of the search area are at positions (5, 6), (5, 2), (1, 6), and (1, 2); that is, the search area covers 5 × 5 image blocks.
Illustratively, as shown in fig. 4, the value range of a random vector is (±2, ±2). Assuming a random vector of (-2, 0), it corresponds to the image block P_(k_r) at position (1, 4). Generally, a plurality of random vectors may be provided, for example 4; their function is to converge promptly in regions of motion change, i.e., to capture the motion direction.
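A sketch of random-vector generation within such a search range (the ±2 bounds follow the fig. 4 example; the helper name is illustrative):

```python
import random

def random_candidate_vectors(n, max_dx=2, max_dy=2):
    """Generate n random candidate vectors, each component drawn
    uniformly from the search range (±max_dx, ±max_dy)."""
    return [(random.randint(-max_dx, max_dx),
             random.randint(-max_dy, max_dy))
            for _ in range(n)]
```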
The motion vector of an adjacent image block is the optimal vector obtained for the image block at the same position in the (K-2)-th frame; an optimal vector represents the offset to the image block at the next moment that is most similar to the current motion estimation image block.
For example, in the (K-1)-th frame, the motion vector of the adjacent image block P_(k-1_s1) at position (2, 4) is determined as follows: first find the image block P_(k-2_s1) at the same position in the (K-2)-th frame; the most similar image block found for it in the (K-1)-th frame is at position (4, 6), so the optimal vector of P_(k-2_s1) is (2, 2). Thus the motion vector of the adjacent image block P_(k-1_s1) at position (2, 4) in the (K-1)-th frame is (2, 2), which corresponds to the image block P_(k_s1) at position (4, 6) in the K-th frame. Typically, a number of motion vectors of adjacent image blocks may be provided, for example 8.
In the embodiment of the present application, the motion vector (x, y) indicates the moving direction of the target object. Here x represents lateral movement: a positive number moves right, a negative number moves left, and 0 means no lateral movement; y represents longitudinal movement: a positive number moves down, a negative number moves up, and 0 means no longitudinal movement. For example, the motion vector (2, 0) indicates moving two positions to the right, and the motion vector (0, -3) indicates moving 3 positions up.
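Following this sign convention, applying a motion vector to a block position is simple component-wise addition, as in the (2, 4) + (2, 2) = (4, 6) example above:

```python
def apply_motion_vector(position, vector):
    """Offset a block position by a motion vector, component-wise."""
    return (position[0] + vector[0], position[1] + vector[1])
```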
Considering that the motion of adjacent image blocks has strong correlation, in this application, the motion estimation unit 1012 dynamically controls the number of candidate vectors of image blocks in each image partition in the search algorithm according to the scene consistency degree of each image partition judged by the preprocessing unit 1011, which is specifically expressed as:
if the scene consistency degree of the image partition is high, reducing the number of candidate vectors of each image block in the image partition;
if the scene consistency degree of the image partition is middle, keeping the number of the candidate vectors of each image block in the image partition as the number of the initial candidate vectors unchanged;
and if the scene consistency degree of the image partition is low, increasing the number of candidate vectors of each image block in the image partition.
Wherein the number of initial candidate vectors is preset.
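The control rule above can be sketched as a simple lookup; the concrete counts (13 initial, 7 when high, 19 when low) come from the worked example in the text and are adjustable in practice:

```python
# Counts per consistency degree: (zero, random, neighbor) candidate vectors.
CANDIDATE_COUNTS = {
    "high":   (1, 2, 4),    # reduced from the initial setting
    "medium": (1, 4, 8),    # initial candidate vectors kept
    "low":    (1, 6, 12),   # increased from the initial setting
}

def candidate_budget(consistency):
    """Total candidate vectors for an image block in a partition with
    the given scene consistency degree."""
    return sum(CANDIDATE_COUNTS[consistency])
```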
The higher the scene consistency degree of the image partition in which the current motion estimation image block is located, the more likely the partition depicts the same object, and the more similar the motion vector of the current motion estimation image block is to those of the adjacent image blocks around it. Therefore, when the scene consistency degree of the current image partition is judged high, the number of candidate vectors of the current motion estimation image block can be adjusted downward, i.e., the numbers of random vectors and of motion vectors of adjacent image blocks are both reduced.
Conversely, the lower the scene consistency degree of the image partition in which the current motion estimation image block is located, the less likely the partition depicts the same object, and the less similar the motion vector of the current motion estimation image block is to those of the adjacent image blocks around it. Therefore, when the scene consistency degree of the current image partition is judged low, the number of candidate vectors of the current motion estimation image block can be adjusted upward, i.e., the numbers of random vectors and of motion vectors of adjacent image blocks are both increased.
If the scene consistency degree of the image partition in which the current motion estimation image block is located is medium, the number of initial candidate vectors can be kept unchanged.
Fig. 5 is a schematic diagram of motion vectors of adjacent image blocks according to an embodiment of the present disclosure.
Fig. 6 is a second schematic diagram illustrating motion vectors of adjacent image blocks according to an embodiment of the present application.
Fig. 7 is a third schematic diagram of motion vectors of adjacent image blocks according to an embodiment of the present application.
Fig. 8 is a fourth schematic view of motion vectors of adjacent image blocks according to an embodiment of the present application.
Fig. 9 is a fifth schematic view of motion vectors of adjacent image blocks according to an embodiment of the present application.
In the following, the number of initial candidate vectors is taken as 13: the 13 candidate vectors comprise 1 zero vector, 4 random vectors, and the motion vectors of 8 adjacent image blocks. The position of the current motion estimation image block is marked with a star in fig. 5, and the positions of the 8 adjacent image blocks are marked with "o".
When the scene consistency degree of the image partition is medium, that is, the scene consistency degree of each image block located in the image partition is medium, the number of candidate vectors of each image block in the image partition may be kept to 13, that is, the candidate vectors include initially set 1 zero vector, 4 random vectors, and motion vectors of 8 adjacent image blocks, the position of the image block currently being motion-estimated refers to "star" in fig. 5, and the positions of 8 adjacent image blocks refer to "o" in fig. 5. Thereby, the number of candidate vectors for the current motion estimated image block is kept as a whole.
When the scene consistency degree of the image partition is high, that is, the scene consistency degree of each image block in the partition is high, the number of candidate vectors for each image block in the partition can be reduced from 13 to 7: 1 zero vector, 2 random vectors, and the motion vectors of 4 adjacent image blocks. For example, in fig. 6 the position of the current motion estimation image block is marked with a star, the positions of the 4 adjacent image blocks with "o", and the positions of the 4 adjacent image blocks removed relative to the initial setting with "-". Meanwhile, a high scene consistency degree indicates that motion variation within the partition is unlikely, so the number of random vectors can also be reduced, for example to 2. The number of candidate vectors of the current motion estimation image block is thereby reduced overall.
When the scene consistency degree of the image partition is low, that is, the scene consistency degree of each image block in the partition is low, the number of candidate vectors for each image block in the partition can be increased from 13 to 19: 1 zero vector, 6 random vectors, and the motion vectors of 12 adjacent image blocks. For example, in fig. 7 the position of the current motion estimation image block is marked with a star, the positions of the 12 adjacent image blocks with "o" and "+", and the positions of the 4 adjacent image blocks added relative to the initial setting with "+". Meanwhile, a low scene consistency degree indicates that motion variation within the partition is likely, so the number of random vectors can also be increased, for example to 6. The number of candidate vectors of the current motion estimation image block is thereby increased overall.
It should be understood that no specific limitation is placed on how the number of motion vectors of adjacent image blocks is reduced or increased, nor on how the number of random vectors is reduced or increased. In practical applications, the number and arrangement of the motion vectors of adjacent image blocks, and the number of random vectors, may be set flexibly.
For example, taking the example that when the scene consistency degree of the image partition is high, the number of the motion vectors of the adjacent image blocks is reduced from 8 to 4, in addition to the arrangement manner of the 4 adjacent image blocks shown in fig. 6, the arrangement manner of the 4 adjacent image blocks shown in fig. 8 (the definitions of the symbols and signs are the same as those described above) may also be referred to, and other arrangement manners meeting the requirements may also be referred to.
For example, taking the case that when the scene consistency degree of the image partition is low, the number of the motion vectors of the adjacent image blocks is increased from 8 to 12, in addition to the arrangement manner of the 12 adjacent image blocks shown in fig. 7, reference may also be made to the arrangement manner of the 12 adjacent image blocks shown in fig. 9 (the definitions of the symbols and signs are the same as those described above), and other arrangement manners meeting the requirements.
With continued reference to fig. 1, the motion estimation unit 1012 is further configured to screen an optimal vector from the multiple candidate vectors through a matching algorithm as the motion estimation result. In general, when the multiple candidate vectors are screened, a Cost obtained from the matching algorithm serves as the evaluation index: the candidate vector with the smallest Cost is taken as the optimal vector, i.e., as the motion estimation result, which is output to the motion compensation unit 1013.
The motion compensation unit 1013 is configured to generate, according to the motion estimation result of the motion estimation unit 1012, a motion compensation frame to be inserted between the (K-1)-th frame and the K-th frame, so that the frame rate of the video matches the refresh rate of the display device, making moving images smoother and the motion process clearer.
For example, in the embodiment of the present application, Cost may be the evaluation index obtained from the SAD matching algorithm mentioned above, where SAD is the sum of absolute differences of the pixel attribute information between the image block corresponding to a candidate vector and the current motion estimation image block. The evaluation index here is not limited to the SAD matching algorithm; any index that evaluates the quality of a candidate vector may be used.
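A minimal sketch of SAD-based screening (plain Python over 2-D blocks; the real unit evaluates the candidate blocks produced by the search algorithm in hardware):

```python
def sad(block_a, block_b):
    """Sum of absolute differences of pixel attribute information
    between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_candidate(current_block, candidate_blocks):
    """Return the candidate vector whose block has the smallest SAD cost
    against the current motion estimation image block;
    `candidate_blocks` maps each candidate vector to its image block."""
    return min(candidate_blocks,
               key=lambda vec: sad(current_block, candidate_blocks[vec]))
```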
Compared with the prior art, the image processing apparatus provided by the embodiment of the present application judges the scene consistency degree of each image partition and dynamically adjusts the number of candidate vectors of the image blocks in that partition accordingly: when the scene consistency degree of the partition is high, the number of candidate vectors of each image block in the partition is reduced; when it is medium, the number is kept at the initial number of candidate vectors; and when it is low, the number is increased. Dynamically adjusting the number of candidate vectors in this way reduces the complexity of the search algorithm, improves the estimation speed and accuracy of the MEMC module, concentrates limited hardware resources on the search positions that need them most, and improves motion estimation search efficiency. Problems such as judder and trailing during video playback are thereby alleviated, playback on the display device becomes smoother, and the user enjoys a better visual effect.
Fig. 10 is a schematic flowchart of an image block matching method based on image content for MEMC according to an embodiment of the present application.
Fig. 11 is a second flowchart illustrating an MEMC image block matching method based on image content according to a second embodiment of the present disclosure.
The method judges the scene consistency degree of image partitions based on real-time image content and dynamically controls the number of candidate vectors in the search algorithm according to that degree, thereby improving the search efficiency of motion estimation and the speed and accuracy of image block matching. The method includes the following steps:
s101: an image frame is received.
S102: and judging the scene consistency degree of each image partition in the image frame.
S103: and dynamically controlling the number of candidate vectors of the image blocks in each image partition in the search algorithm according to the scene consistency degree of each image partition.
S104: and screening the optimal vector from the candidate vectors through a matching algorithm to serve as a motion estimation result.
S105: and outputting the motion estimation result of the image frame.
In the embodiment of the present application, the scene consistency degrees of the image partitions are three types, namely, high, medium, and low, and the process of determining the scene consistency degrees of the image partitions can be referred to in the foregoing description.
As shown in fig. 11, step S103 dynamically controls the number of candidate vectors of image blocks in each image partition in the search algorithm according to the scene consistency degree of each image partition, and specifically includes:
s1031: if the scene consistency degree of the image partition is high, reducing the number of candidate vectors of the image blocks in the image partition;
s1032: if the scene consistency degree of the image partition is middle, keeping the number of the candidate vectors of the image block in the image partition as the number of the initial candidate vectors unchanged;
s1033: and if the scene consistency degree of the image partition is low, increasing the number of candidate vectors of the image block in the image partition.
Fig. 12 is a schematic diagram of a hardware structure of a display device according to an embodiment of the present disclosure, where the display device has a video playing function and may be a smart television, a smart phone, a notebook computer, a desktop computer, or the like.
As shown in fig. 12, the display device 1100 is configured to implement the operations corresponding to the display device in the foregoing method embodiments. The display device 1100 of this embodiment may include: a memory 1101 and a processor 1102;
a memory 1101 for storing a computer program;
a processor 1102 for executing a computer program stored in a memory to implement the MEMC image block matching method based on image content in the above embodiments. Reference may be made in particular to the description relating to the method embodiments described above.
Alternatively, the memory 1101 may be separate or integrated with the processor 1102.
When the memory 1101 is a device separate from the processor 1102, the display device 1100 may further include:
a bus 1103 is used to connect the memory 1101 and the processor 1102.
Optionally, the embodiment of the present application may further include: a communication interface 1104, the communication interface 1104 being connectable to the processor 1102 via the bus 1103. The processor 1102 may control the communication interface 1104 to implement the above-described receiving and transmitting functions of the display device 1100.
Fig. 13 is a schematic view of an application scenario of a display device 1100 according to an embodiment of the present application. As shown in fig. 13, a user may operate the display apparatus 1100 through the mobile terminal 1120 and the control device 1130.
The control device 1130 may be a remote controller, which controls the display apparatus 1100 wirelessly or through other wired means, including infrared protocol communication, bluetooth protocol communication, and other short-distance communication methods. A software application may be installed on the mobile terminal 1120 to connect and communicate with the display device 1100 through a network communication protocol, for one-to-one control operation and data communication.
As also shown in fig. 13, the display device 1100 is in data communication with a server 1140 through various communication means. The display device 1100 may be communicatively coupled via a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 1140 may provide various content and interactions to the display device 1100.
An embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium includes a computer program, and the computer program is used to implement the MEMC image block matching method based on image content in the above embodiment.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An image processing apparatus characterized in that it comprises an MEMC module comprising a pre-processing unit, a motion estimation unit and a motion compensation unit,
the preprocessing unit is configured to receive an image frame and judge the scene consistency degree of each image partition in the image frame;
the motion estimation unit is configured to dynamically control the number of candidate vectors of image blocks in each image partition in a search algorithm according to the scene consistency degree of each image partition, and screen out an optimal vector from a plurality of candidate vectors through a matching algorithm as a motion estimation result;
the motion compensation unit is configured to generate motion compensated frames from the motion estimation results.
2. The apparatus according to claim 1, wherein after receiving the image frame and before determining a scene consistency degree of each image partition in the image frame, the preprocessing unit is further configured to partition the image frame according to a first preset parameter to obtain a plurality of image partitions, and partition the image frame according to a second preset parameter to obtain a plurality of image blocks, the image partitions are not overlapped with each other, and the image blocks are not overlapped with each other.
3. The apparatus of claim 2, wherein the first preset parameter comprises a number of horizontal pixels and a number of vertical pixels of the image partition, and the second preset parameter comprises a number of horizontal pixels and a number of vertical pixels of the image block, wherein the number of horizontal pixels of the image block is smaller than the number of horizontal pixels of the image partition, and the number of vertical pixels of the image block is smaller than the number of vertical pixels of the image partition.
4. The apparatus according to claim 1, wherein the preprocessing unit determines a scene consistency degree of each image partition in the image frame, and specifically includes:
if the image detail value of the image partition is not smaller than a first threshold value, judging that the scene consistency degree of the image partition is low;
if the image detail value of the image partition is not larger than a second threshold value, judging that the scene consistency degree of the image partition is high;
if the image detail value of the image partition is between the first threshold and the second threshold, the scene consistency degree of the image partition is judged to be middle,
wherein the first threshold is greater than the second threshold.
5. The apparatus according to claim 4, wherein calculating the image detail value of the image partition specifically comprises:
calculating the sum of absolute differences of row pixel attribute information obtained by subtracting adjacent rows in the image partition;
calculating the sum of absolute differences of column pixel attribute information obtained by subtracting adjacent columns in the image partition;
and adding the sum of the absolute differences of the row pixel attribute information and the sum of the absolute differences of the column pixel attribute information to obtain an image detail value of the image partition.
6. The apparatus according to claim 4, wherein the motion estimation unit dynamically controls the number of candidate vectors of image blocks in each image partition in a search algorithm according to a scene consistency degree of each image partition in the image frame, and specifically includes:
if the scene consistency degree of the image partition is low, increasing the number of candidate vectors of the image blocks in the image partition;
if the scene consistency degree of the image partition is middle, keeping the number of the candidate vectors of the image blocks in the image partition as the number of the initial candidate vectors unchanged;
if the scene consistency degree of the image partition is high, reducing the number of candidate vectors of the image blocks in the image partition;
wherein the number of initial candidate vectors is preset.
7. An image processing apparatus characterized by comprising a preprocessing unit, a motion estimation unit, and a motion compensation unit,
the preprocessing unit is configured to receive an image frame and judge the scene consistency degree of each image partition in the image frame;
the motion estimation unit is configured to dynamically control the number of candidate vectors of image blocks in each image partition in a search algorithm according to the scene consistency degree of each image partition, and screen out an optimal vector from a plurality of candidate vectors through a matching algorithm as a motion estimation result;
the motion compensation unit is configured to generate a motion compensated frame from the motion estimation result.
8. An image block matching method for MEMC based on image content, the method comprises:
receiving an image frame;
judging the scene consistency degree of each image partition in the image frame;
according to the scene consistency degree of each image partition, dynamically controlling the number of candidate vectors of image blocks in each image partition in a search algorithm;
screening an optimal vector from the candidate vectors through a matching algorithm to serve as a motion estimation result;
and outputting the motion estimation result of the image frame.
9. A display device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of claim 8 when executing a program stored in the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of claim 8.
CN202010691931.6A 2020-07-17 2020-07-17 Image processing device and image block matching method based on image content for MEMC Active CN111836055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010691931.6A CN111836055B (en) 2020-07-17 2020-07-17 Image processing device and image block matching method based on image content for MEMC


Publications (2)

Publication Number Publication Date
CN111836055A CN111836055A (en) 2020-10-27
CN111836055B true CN111836055B (en) 2023-01-10

Family

ID=72923466


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1972421A (en) * 2005-11-25 2007-05-30 三星电子株式会社 Frame interpolator, frame interpolation method and motion credibility evaluator
KR20070079411A (en) * 2006-02-02 2007-08-07 삼성전자주식회사 Method and apparatus for estimating motion vector based on block
CN102905124A (en) * 2011-07-29 2013-01-30 联咏科技股份有限公司 Motion estimation method
KR20170095047A (en) * 2016-02-12 2017-08-22 엔쓰리엔 주식회사 Dynamic frame deletion apparatus and method




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant