CN111784734B - Image processing method and device, storage medium and electronic equipment - Google Patents

Image processing method and device, storage medium and electronic equipment

Info

Publication number
CN111784734B
Authority
CN
China
Prior art keywords
current
block
matching
blocks
image
Prior art date
Legal status
Active
Application number
CN202010694132.4A
Other languages
Chinese (zh)
Other versions
CN111784734A
Inventor
张弓
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010694132.4A
Publication of CN111784734A
Application granted
Publication of CN111784734B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/223 Analysis of motion using block-matching
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4023 Scaling based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
    • G06T 3/4053 Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Television Systems (AREA)

Abstract

The disclosure provides an image processing method, an image processing device, a storage medium and an electronic device, and relates to the technical field of image processing. The image processing method comprises the following steps: determining the position, in an input image, of an adjacent pixel of a current pixel to be interpolated contained in an output image; determining a current image block in the input image through the position of the adjacent pixel, and determining all matching blocks corresponding to the current image block in a plurality of reference frames corresponding to the current frame; performing image block fusion on target matching blocks among all the matching blocks and the current image block to obtain a plurality of fusion blocks, and determining the target direction of the current image block according to the fusion blocks; and performing directional interpolation on the current image block along the target direction to obtain the pixel value of the current pixel to be interpolated. Embodiments of the disclosure can improve image quality.

Description

Image processing method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technology, and in particular, to an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device.
Background
To increase image resolution, a low-resolution image may be converted into a high-resolution image. In the related art, the interpolation performed during this conversion may introduce errors due to the limitations of the input image, so that the quality of the output image is poor.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device, which overcome, at least to some extent, the problem of poor image quality.
According to an aspect of the present disclosure, there is provided an image processing method including: determining the position, in an input image, of an adjacent pixel of a current pixel to be interpolated contained in an output image; determining a current image block in the input image through the position of the adjacent pixel, and determining all matching blocks corresponding to the current image block in a plurality of reference frames corresponding to the current frame; performing image block fusion on target matching blocks among all the matching blocks and the current image block to obtain a plurality of fusion blocks, and determining the target direction of the current image block according to the fusion blocks; and performing directional interpolation on the current image block along the target direction to obtain the pixel value of the current pixel to be interpolated.
According to one aspect of the present disclosure, there is provided an image processing apparatus including: a matching block determining module, configured to determine the position, in an input image, of an adjacent pixel of a current pixel to be interpolated contained in an output image, determine a current image block in the input image through the position of the adjacent pixel, and determine all matching blocks corresponding to the current image block in a plurality of reference frames corresponding to a current frame; a direction determining module, configured to perform image block fusion on target matching blocks among all the matching blocks and the current image block to obtain a plurality of fusion blocks, and determine the target direction of the current image block according to the fusion blocks; and an image interpolation module, configured to perform directional interpolation on the current image block along the target direction to obtain the pixel value of the current pixel to be interpolated.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method as set forth in any one of the above.
According to one aspect of the present disclosure, there is provided an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the image processing method of any one of the above via execution of the executable instructions.
In the technical solutions provided in some embodiments of the present disclosure, on one hand, all the matching blocks corresponding to the current image block of the current frame may be determined in the multiple reference frames through motion estimation, the target matching blocks and the current image block are fused to obtain the target direction of the current image block, and directional interpolation is performed in the current image block based on the target direction; the pixel value of the current pixel to be interpolated is thus determined through a combination of multi-frame motion estimation and directional interpolation. By means of this multi-frame matching strategy, directional interpolation with multi-frame region fusion can be carried out, the problems of direction misjudgment and the like caused by single-frame directional interpolation in the related art can be avoided, that limitation is removed, and the target direction can be determined comprehensively and accurately based on the texture direction. On the other hand, through motion estimation and directional interpolation with multi-frame region fusion, the pixel value of the current pixel to be interpolated can be accurately calculated, which improves the image quality and the image super-resolution effect while reducing operation complexity and improving operability.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort. In the drawings:
FIG. 1 shows a schematic diagram of an exemplary system architecture to which an image processing method or image processing apparatus of embodiments of the present disclosure may be applied;
FIG. 2 illustrates a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure;
Fig. 3 schematically illustrates a flowchart of an image processing method according to an exemplary embodiment of the present disclosure;
FIG. 4 illustrates a flow diagram of determining a matching block in an embodiment of the present disclosure;
FIG. 5 illustrates a schematic diagram of determining a matching block for a current image block in an embodiment of the present disclosure;
FIG. 6 illustrates a flow diagram for determining a target direction in an embodiment of the present disclosure;
FIG. 7 illustrates a flow diagram for directional interpolation in an embodiment of the present disclosure;
Fig. 8 schematically shows a block diagram of an image processing apparatus in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only and not necessarily all steps are included. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations. In addition, all of the following terms "first," "second," are used for distinguishing purposes only and should not be taken as a limitation of the present disclosure.
In order to solve the technical problems in the related art, an image processing method is provided in an embodiment of the present disclosure. Fig. 1 shows a schematic diagram of an exemplary system architecture to which an image processing method or an image processing apparatus of an embodiment of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include a first end 101, a network 102, and a second end 103. The first end 101 may be a client, for example, a handheld device (smart phone), a tablet computer, a desktop computer, a vehicle-mounted device, a wearable device, or the like, which can be used to collect images and display (play) images. Network 102 is the medium used to provide a communication link between the first end 101 and the second end 103. Network 102 may include various connection types, such as a wired communication link, a wireless communication link, etc.; in embodiments of the present disclosure, the network 102 between the first end 101 and the second end 103 may be a wired communication link, for example a communication link provided over a serial connection, or a wireless communication link provided over a wireless network. The second end 103 may be a client, such as a portable computer, a desktop computer, a smart phone, or the like, having an image processing function, for performing image processing. When the first end and the second end are both terminals, they may be the same terminal. The second end may also be a server, such as a local server or a cloud server, which is not limited herein.
In the embodiment of the present disclosure, first, the first end 101 may collect an image or take an image to be processed as an input image, and take an image to be displayed as an output image. Next, the second end 103 may take a certain pixel to be interpolated in the output image as the current pixel to be interpolated and determine its position in the adjacent pixel in the input image. Further, the blocking may be performed in the input image based on the positions of the neighboring pixels and one image block may be determined as the current image block, and all the matching blocks may be determined by means of motion estimation in a plurality of reference frames corresponding to the current frame in which the current image block is located. And the second end can fuse part or all of all the matching blocks with each other and fuse part or all of all the matching blocks with the current image block to obtain a plurality of fusion blocks, so that the target direction of the current image block can be determined according to the fusion blocks. Finally, directional interpolation is carried out on the current image block through the target direction, so that the pixel value of the current pixel to be interpolated is obtained, the whole interpolation operation is completed, and the final image is obtained. The second end can also output the final image to the first end for display or play.
It should be understood that the number of first ends, networks, and second ends in fig. 1 are merely illustrative. There may be any number of clients, networks, and servers, as desired for implementation.
It should be noted that, the image processing method provided in the embodiments of the present disclosure may be performed entirely by the second end or may be performed by the first end, which is not limited herein. Accordingly, an image processing device may be disposed in the second end 103.
Fig. 2 shows a schematic diagram of an electronic device suitable for use in implementing exemplary embodiments of the present disclosure. It should be noted that the electronic device shown in fig. 2 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present disclosure.
The electronic device of the present disclosure includes at least a processor and a memory for storing one or more programs, which when executed by the processor, enable the processor to implement the image processing method of the exemplary embodiments of the present disclosure.
Specifically, as shown in fig. 2, the electronic device 200 may include: processor 210, internal memory 221, external memory interface 222, universal serial bus (Universal Serial Bus, USB) interface 230, charge management module 240, power management module 241, battery 242, antenna 1, antenna 2, mobile communication module 250, wireless communication module 260, sensor module 280, display 290, indicator 292, motor 293, keys 294, and subscriber identity module (Subscriber Identification Module, SIM) card interface 295, among others. The sensor module 280 may include a depth sensor, a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 200. In other embodiments of the application, electronic device 200 may include more or fewer components than shown, or certain components may be combined, or certain components may be separated, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units. For example, the processor 210 may include an application processor (Application Processor, AP), a modem processor, a graphics processor (Graphics Processing Unit, GPU), an image signal processor (Image Signal Processor, ISP), a controller, a video codec, a digital signal processor (Digital Signal Processor, DSP), a baseband processor and/or a neural network processor (Neural-Network Processing Unit, NPU), and the like. The different processing units may be separate devices or may be integrated in one or more processors. In addition, a memory may be provided in the processor 210 for storing instructions and data.
The USB interface 230 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 230 may be used to connect a charger to charge the electronic device 200, or may be used to transfer data between the electronic device 200 and a peripheral device. It can also be used to connect a headset and play audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices.
The charge management module 240 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. The power management module 241 is used for connecting the battery 242, the charge management module 240 and the processor 210. The power management module 241 receives input from the battery 242 and/or the charge management module 240 and provides power to the processor 210, the internal memory 221, the display screen 290, the wireless communication module 260, and the like.
The wireless communication function of the electronic device 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like.
The mobile communication module 250 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied on the electronic device 200.
The wireless communication module 260 may provide solutions for wireless communication applied to the electronic device 200, including wireless local area network (Wireless Local Area Networks, WLAN) (e.g., wireless fidelity (Wireless Fidelity, Wi-Fi) network), Bluetooth (BT), global navigation satellite system (Global Navigation Satellite System, GNSS), frequency modulation (Frequency Modulation, FM), near field communication (Near Field Communication, NFC), infrared (IR), and the like.
The electronic device 200 implements display functions through a GPU, a display screen 290, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 290 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
Internal memory 221 may be used to store computer executable program code that includes instructions. The internal memory 221 may include a storage program area and a storage data area. The external memory interface 222 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 200.
The keys 294 include a power on key, a volume key, etc. The keys 294 may be mechanical keys. Or may be a touch key. The motor 293 may generate a vibratory alert. The motor 293 may be used for incoming call vibration alerting as well as for touch vibration feedback. The indicator 292 may be an indicator light, which may be used to indicate a state of charge, a change in power, a message indicating a missed call, a notification, etc. The SIM card interface 295 is for interfacing with a SIM card. The electronic device 200 interacts with the network through the SIM card to realize functions such as communication and data communication.
The present application also provides a computer-readable storage medium that may be included in the electronic device described in the above embodiments; or may exist alone without being incorporated into the electronic device.
The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable storage medium may transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The computer-readable storage medium carries one or more programs which, when executed by one of the electronic devices, cause the electronic device to implement the methods described in the embodiments below.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
Fig. 3 schematically shows a flowchart of an image processing method of an exemplary embodiment of the present disclosure, which may be applied, for example, in the course of capturing, transmitting, or processing an image, or in a super-resolution reconstruction scene. The method of the embodiment of the disclosure is described below taking the super-resolution reconstruction scene as an example. In the super-resolution reconstruction scene, texture detail information lost during imaging, transmission, and other processing of an image or video can be recovered. Referring to fig. 3, with the terminal as the execution subject, the image processing method may include steps S310 to S330, which are described in detail as follows:
in step S310, the positions of neighboring pixels of the current pixel to be interpolated included in the output image in the input image are determined, a current image block is determined in the input image by the positions of the neighboring pixels, and all matching blocks corresponding to the current image block are determined in a plurality of reference frames corresponding to the current frame.
In the embodiment of the disclosure, the input image may be an image captured by using a terminal, or a frame of image in a video, or may be an image obtained from other devices, for example, an image downloaded from the internet or a frame of image in a video. The output image may be an image of the same content as the input image but of a different resolution, i.e. the resolution of the input image may be scaled to obtain the output image. The resolution of the output image may be greater than or less than the input image, and may be determined according to the type of the usage scene.
After determining the input image and the output image, the degree of scaling of the output image relative to the input image may be determined, where the degree of scaling may include a horizontal degree of scaling and a vertical degree of scaling, and may be expressed in particular as scaling factors. The output image is obtained by interpolating the input image, so there may be a plurality of pixels to be interpolated in the output image; their number may be determined according to the resolution of the output image. Specifically, the current pixel to be interpolated refers to any one of all the pixels to be interpolated in the output image, and its position in the output image can be represented by (i, j). The adjacent pixel refers to one of the neighborhood pixels of the current pixel to be interpolated that is located in the input image. The neighborhood pixels here may be the four neighboring pixels of the D neighborhood, and the adjacent pixel refers to the pixel located at the upper-left corner of the current pixel to be interpolated; its position may be represented by (x, y).
On this basis, the step of determining the adjacent pixel position of the current pixel to be interpolated includes: first, determining the horizontal coordinate of the adjacent pixel according to the ratio of the horizontal coordinate of the current pixel to be interpolated to the horizontal scaling degree. The horizontal coordinate of the current pixel to be interpolated refers to its coordinate in the output image. The horizontal scaling factor may be a horizontal magnification, specifically denoted by ratio_w. The horizontal coordinate of the adjacent pixel may be the ratio of the horizontal coordinate of the current pixel to be interpolated to the horizontal magnification. Further, the vertical coordinate of the adjacent pixel may be determined according to the ratio of the vertical coordinate of the current pixel to be interpolated to the vertical scaling degree. The vertical coordinate of the current pixel to be interpolated refers to its coordinate in the output image. The vertical scaling factor may be a vertical magnification, specifically denoted by ratio_h. The vertical coordinate may be the ratio of the vertical coordinate of the current pixel to be interpolated to the vertical magnification. The coordinates of the current pixel to be interpolated and its adjacent pixel can be accurately determined through the degree of scaling of the resolution. The horizontal coordinate and the vertical coordinate may be determined simultaneously or sequentially, and are not particularly limited herein. After obtaining the horizontal coordinate and the vertical coordinate, the position of the adjacent pixel in the input image can be obtained from both, which can be expressed specifically by formula (1), where the ratios are rounded down to the upper-left integer pixel position:

(x, y) = (floor(i/ratio_w), floor(j/ratio_h))  Formula (1)
After obtaining the adjacent pixels located in the input image, determining a current image block in the input image through the positions of the adjacent pixels, and determining all matching blocks corresponding to the current image block in a plurality of reference frames corresponding to the current frame. In the embodiment of the disclosure, after determining the positions of the adjacent pixels in the input image, for the current pixel to be interpolated, the input image may be subjected to a blocking operation according to the positions of the adjacent pixels and a preset image block size in the input image, so as to determine an image block. Specifically, the image block corresponding to the current pixel to be interpolated can be determined in the input image according to the preset image block size with the position of the adjacent pixel as the center. The preset image block size may be 4 x 4 or other suitable size. The size of the image block may be set according to actual requirements, and is not particularly limited herein. The number of image blocks may be plural, and the size of the image block is inversely related to the number of image blocks.
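As an illustration of the coordinate mapping in formula (1) and the blocking operation described above, a minimal sketch follows. It assumes grayscale numpy arrays, floor rounding to the upper-left neighbor, and the preset 4×4 block size; the function names are illustrative and not part of the disclosed method.

```python
import numpy as np

def neighbor_position(i, j, ratio_w, ratio_h):
    # Map the output pixel (i, j) to its upper-left D-neighborhood pixel (x, y)
    # in the input image, per formula (1); i is treated as the horizontal
    # coordinate and j as the vertical one, an assumption of this sketch.
    return int(i // ratio_w), int(j // ratio_h)

def current_block(frame, x, y, size=4):
    # Take the image block around the adjacent pixel position (x, y);
    # border clamping is omitted for brevity.
    half = size // 2
    return frame[y - half:y + half, x - half:x + half]
```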
After determining the image blocks, any image block being processed may be taken as the current image block, and all matching blocks corresponding to the current image block may be determined in a plurality of reference frames. The plurality of reference frames may be reference frames corresponding to the current frame in which the current image block is located, for example, a plurality of frames adjacent to the current frame. Any one of the plurality of reference frames may be used as the current reference frame. First, the procedure for one current reference frame is described as an example. Specifically, a motion vector of the current image block of the current frame relative to the current reference frame may be determined in the current reference frame by means of multi-frame motion estimation, and a matching block similar to the current image block is determined in the current reference frame based on the motion vector. The basic idea of motion estimation is to divide each frame of an image sequence into a plurality of non-overlapping macro blocks, assume that all pixels within a macro block share the same displacement, and then, for each macro block, find the most similar block in the reference frame within a given search range according to a certain matching criterion; this block is the matching block, and the relative displacement between the matching block and the current image block is the motion vector. When video is compressed, the current image block can be completely recovered by storing only the motion vector and residual data. In the embodiment of the disclosure, matching blocks similar to the current image block are searched for in multiple other frames by means of multi-frame motion estimation, which improves the comprehensiveness and accuracy of the search, avoids the direction errors caused by interpolating from a single frame, and improves the accuracy of the interpolation direction.
A flow chart for determining a matching block in a current reference frame is schematically shown in fig. 4, and with reference to fig. 4, mainly comprises the following steps:
in step S410, a searchable location of the current reference frame is determined according to a search window centered around the location of the current image block.
In this step, a search window may be used to define the extent of the area in which the search is performed. In the current reference frame, the search window of each image block may differ, and its size may be set according to a search window rule. The search window rule may be based on the gradient magnitude or on the number of image blocks traversed. For example, the search window may be based on the gradient of the current image block, with the gradient positively correlated with the window size, i.e., the larger the gradient, the larger the search window, and vice versa. The window size may also become progressively larger or smaller as the number of traversed image blocks increases.
The searchable location refers to a searchable range of the current reference frame, i.e., a region range where there may be matching blocks, and may be specifically determined according to the size of the search window of each image block. The searchable location of each reference frame may be the same or different.
In step S420, all reference blocks of the same size as the current image block are obtained in the current reference frame by traversing the searchable locations in fixed steps.
In this step, the searchable locations may be traversed with a fixed step size (e.g., a few pixels) to obtain, from the current reference frame, all reference blocks of the same size as the current image block.
In step S430, all the reference blocks are matched with the current image block to obtain a matching degree, and the matching block is determined from all the reference blocks of the current reference frame according to the matching degree.
In this step, all the reference blocks may be matched with the current image block to obtain the matching degree between each pair of blocks. Specifically, feature extraction may be performed on the reference block to obtain a reference feature and on the current image block to obtain a current feature, and the matching degree may then be calculated from the extracted reference feature and current feature. The measure of matching degree may include, but is not limited to, one or more of SAD (Sum of Absolute Differences), Euclidean distance, texture gradient, and the like. For the sum-of-absolute-differences criterion, the smaller the value, the more similar the blocks.
After the matching degree is calculated, the reference block with the highest matching degree may be used as at least one matching block of the current image block in the current reference frame. Also, the number of matching blocks present in each reference frame may be the same or different, in particular determined according to the value of the degree of matching. The accuracy of the matching blocks can be improved by selecting at least one matching block through the matching degree.
While the matching blocks are obtained, the confidence of each matching block may be recorded. The confidence here indicates how well the matching block matches for interpolation purposes. It may be expressed in terms of the SAD value and determines the weights in the subsequent interpolation process, thereby affecting the interpolation effect.
It should be noted that, the manner of determining the matching block for all the image blocks in all the reference frames is the same as the steps in fig. 4, so that the description thereof is omitted here.
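A minimal sketch of the search in steps S410 to S430 follows, assuming a square search window, a fixed integer step, and SAD as the matching criterion; the gradient-based and traversal-based window sizing rules described above are left out, and all names are illustrative.

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equally sized blocks.
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def best_match(ref_frame, cur_block, top_x, top_y, radius=8, step=1):
    # Traverse the searchable positions of the reference frame in fixed steps
    # around the current block's top-left corner (top_x, top_y), keeping the
    # reference block with the smallest SAD; the SAD value is returned so it
    # can be recorded as the match confidence.
    size = cur_block.shape[0]
    h, w = ref_frame.shape[:2]
    best, best_cost = None, None
    for dy in range(-radius, radius + 1, step):
        for dx in range(-radius, radius + 1, step):
            x, y = top_x + dx, top_y + dy
            if x < 0 or y < 0 or x + size > w or y + size > h:
                continue  # candidate would fall outside the reference frame
            cand = ref_frame[y:y + size, x:x + size]
            cost = sad(cand, cur_block)
            if best_cost is None or cost < best_cost:
                best, best_cost = cand, cost
    return best, best_cost
```

In practice the search could return several top-ranked candidates per reference frame rather than a single block, since each reference frame may contribute more than one matching block.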
A schematic diagram of determining a matching block similar to the current image block using multi-frame motion estimation is shown in fig. 5. Specifically, the previous frame F_{CUR-1} and the next frame F_{CUR+1} adjacent to the current frame F_{CUR} may be used as reference frames, or the second-previous frame F_{CUR-2} and the second-next frame F_{CUR+2} may be used as reference frames, and so on up to the M-th previous frame F_{CUR-M} and the N-th next frame F_{CUR+N}, and the matching block most similar to the current image block is determined therein according to the motion vector between the frames. Determining the matching block most similar to the current image block through multi-frame motion estimation can improve the accuracy of the matching block.
With continued reference to fig. 3, in step S320, image block fusion is performed according to the target matching block and the current image block in all the matching blocks to obtain a plurality of fusion blocks, and the target direction of the current image block is determined according to the fusion blocks.
In the embodiment of the disclosure, when the image block fusion is performed, the target matching blocks corresponding to all the matching blocks of the current image block can be selected for fusion. The target matching blocks may be all matching blocks or partial matching blocks, and may be specifically determined according to the number of matching blocks. For example, when the number of matching blocks of the current image block is greater than the number threshold, then a partial matching block may be selected as the target matching block; when the number of matching blocks of the current image block is not greater than the number threshold, then all matching blocks may be selected as target matching blocks. The partial matching block may be K matching blocks with fixed frame positions for all matching blocks. The fixed frame may be several frames nearest to the current frame.
On this basis, the K matching blocks at the fixed frame positions, together with the 1 current image block, can be fused, both between matching blocks and between matching blocks and the current image block, obtaining t fusion blocks in total. The number of matching blocks required for each fusion block may differ. Specifically, the attribute parameter of each fusion block may be determined according to the ratio of the sum of the products of the weight of each matching block and the attribute parameter of that matching block to the sum of the weights of all matching blocks. The attribute parameters here may be the pixel values of each fusion block at the pixel level. Each matching block may be 4×4; based on this, the K 4×4 matching blocks and the 1 current image block may be respectively fused: the pixel values at the same position in the blocks are weighted and averaged according to the weights corresponding to the blocks, and the weighted average is used as the pixel value at the same position in the fusion block, thereby determining the attribute parameter of each fusion block. The specific way of fusing the matching blocks and the current image block to obtain a fusion block may be as shown in formula (2):
BLK = (w_0*BLK_0 + w_1*BLK_1 + ... + w_k*BLK_k)/(w_0 + w_1 + ... + w_k)  Formula (2)
where w_0, w_1, ..., w_k are the weights of the corresponding blocks. The weight of each matching block may be determined by one or more of temporal distance, matching confidence, manual setting, and the like; the weights of different matching blocks may differ, i.e., the type of weight may be determined according to the type of the matching block.
In the embodiment of the present disclosure, the weight of the matching block is taken as the time distance as an example. Specifically, the weight of each matching block may be determined according to the time distance; and fusing the two target matching blocks according to the weight of each matching block, and fusing the target matching block with the current image block according to the weight of each matching block to obtain a plurality of fusion blocks. For example, if there are 12 matching blocks, the target matching blocks of two frames closest in time may be formed into one fusion block, thereby obtaining 6 fusion blocks. By forming the fusion block from the matching block with the closest time distance, the accuracy of the fusion block can be improved.
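A minimal sketch of the weighted fusion in formula (2) follows. It assumes equally sized numpy blocks and leaves the choice of weights (temporal distance, matching confidence, or manual setting) to the caller; names and defaults are illustrative.

```python
import numpy as np

def fuse_blocks(blocks, weights):
    # Weighted average of co-located pixels across the blocks, per formula (2);
    # 'blocks' may hold K matching blocks plus the current image block.
    acc = np.zeros_like(blocks[0], dtype=np.float64)
    for blk, w in zip(blocks, weights):
        acc += w * blk.astype(np.float64)
    return acc / float(sum(weights))
```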
Further, after obtaining the plurality of fusion blocks, a target direction of the current image block may be determined according to the plurality of fusion blocks. The target direction refers to the texture direction of the current image block. The direction fusion can be performed according to the fusion blocks, and the target direction of the current image block is obtained.
A flow chart for determining the direction of a target is schematically shown in fig. 6, and with reference to fig. 6, mainly comprises the following steps:
In step S610, calculating gradients of each fusion block in a plurality of directions, and determining a texture direction of each fusion block according to a direction in which the gradients are minimum;
in step S620, the texture directions are fused according to the weight of each fusion block, so as to obtain the target direction.
In the disclosed embodiments, the plurality of directions may include, but is not limited to, the horizontal direction, the vertical direction, the 45-degree direction, and the 135-degree direction. Specifically, the gradient of each fusion block in each of the plurality of directions may be calculated, the gradients of each fusion block may be compared to find the smallest one, and the direction with the smallest gradient may be determined as the texture direction of that fusion block. The texture direction of each fusion block may be the same or different, determined by the gradient values, and may lie anywhere between 0 and 359 degrees. For example, the texture direction of fusion block 1 may be horizontal while that of fusion block 2 is vertical.
After the texture direction of each fusion block is obtained, the texture directions of all fusion blocks can be fused based on the weight corresponding to each fusion block. The weight of each fusion block represents its importance or proportion. The weights may be the same or different, and may be determined by the number of original blocks used to form the fusion block, where the original blocks comprise the matching blocks and the current image block. The weight of a fusion block is positively correlated with the number of original blocks forming it, i.e., the larger the number of original blocks, the larger the weight.
After the weights of the fusion blocks are obtained, weighted average processing can be performed on the texture direction of each fusion block according to the weights of each fusion block, so as to obtain the target direction. Specifically, the target direction may be determined according to a ratio of a sum of products of weights of each fusion block and a texture direction of each fusion block to a sum of weights of all fusion blocks. The specific calculation mode can be as shown in the formula (3):
dir = (dir_0*u_0 + dir_1*u_1 + ... + dir_t*u_t)/(u_0 + u_1 + ... + u_t)  Formula (3)
where dir_t is the texture direction of the t-th fusion block, and u_t is the weight of the t-th fusion block.
In the embodiment of the disclosure, the texture direction of each fusion block is fused by the weight of the fusion block, so that an accurate target direction can be obtained, errors possibly caused when the direction is determined by only one frame of image are avoided, and the accuracy is improved.
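A minimal sketch of steps S610 and S620 follows, using mean absolute pixel differences as the gradient measure over the four candidate directions named above; which diagonal difference corresponds to 45 versus 135 degrees is a convention of this sketch, not of the disclosure.

```python
import numpy as np

def texture_direction(block):
    # Gradient of a fusion block along each candidate direction; the direction
    # with the smallest gradient is taken as the block's texture direction.
    b = block.astype(np.float64)
    grads = {
        0:   np.abs(np.diff(b, axis=1)).mean(),       # horizontal
        90:  np.abs(np.diff(b, axis=0)).mean(),       # vertical
        45:  np.abs(b[1:, :-1] - b[:-1, 1:]).mean(),  # one diagonal
        135: np.abs(b[1:, 1:] - b[:-1, :-1]).mean(),  # the other diagonal
    }
    return min(grads, key=grads.get)

def fuse_directions(directions, weights):
    # Weighted average of per-fusion-block texture directions, per formula (3).
    return sum(d * u for d, u in zip(directions, weights)) / float(sum(weights))
```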
In step S330, directional interpolation is performed on the current image block through the target direction, so as to obtain a pixel value of the current pixel to be interpolated.
In the embodiment of the present disclosure, directional interpolation refers to interpolation along one fixed direction. After the target direction is determined, directional interpolation can be performed according to the reference pixels corresponding to the target direction, so as to calculate the pixel value of the current pixel to be interpolated. Reference pixels refer to all the pixels that already exist along the target direction. The directional interpolation may be mean filtering or another method. Based on this, mean filtering may be applied to the pixel values of the pixels existing along the target direction, and the pixel value of the current pixel to be interpolated may be calculated from the result. Mean filtering refers to giving the target pixel a template on the image, where the template consists of the neighboring pixels around it (the 8 pixels surrounding the target pixel, forming a filtering template that excludes the target pixel itself), and replacing the original pixel value with the average of all pixels in the template. Based on this, the pixel value of the current pixel to be interpolated can be determined from the average of the pixel values of the reference pixels along the target direction.
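As one simple realization of the mean filtering just described, the sketch below averages the reference pixels lying on the line through the block center along the target direction; the direction conventions of the previous sketch are assumed, and a production implementation would use a proper directional filter with sub-pixel phase.

```python
import numpy as np

def directional_interpolate(block, direction):
    # Average the reference pixels along the target direction through the
    # center of the (square) block.
    b = block.astype(np.float64)
    c = b.shape[0] // 2
    if direction == 0:        # horizontal: center row
        line = b[c, :]
    elif direction == 90:     # vertical: center column
        line = b[:, c]
    elif direction == 45:     # one diagonal of the block
        line = np.diag(np.fliplr(b))
    else:                     # the 135-degree diagonal
        line = np.diag(b)
    return float(line.mean())
```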
For all the pixels to be interpolated included in the output image, the pixel value of each pixel to be interpolated may be calculated by the methods in steps S310 to S330 until all the pixels to be interpolated in the output image obtain the corresponding pixel value. After the pixel values of the pixels to be interpolated are determined, an output image can be obtained according to the pixel values of the pixels to be interpolated, so that super-resolution reconstruction processing of the image is realized by the method, and the super-resolution effect and the image quality are improved.
A flow chart of directional interpolation is schematically shown in fig. 7, and referring to fig. 7, the method mainly comprises the following steps:
in step S710, a degree of scaling of the output image relative to the input image is calculated.
In step S720, it is determined whether interpolation of the current output image is completed; if yes, ending; if not, go to step S730.
In step S730, the nearest neighbor of the pixel to be interpolated in the input image is determined, which may be specifically the upper left neighbor.
In step S740, the size of the image block of the input image is determined.
In step S750, matching blocks that match all the image blocks are found in the reference frame.
In step S760, the matching blocks are fused to obtain a fused block.
In step S770, the texture direction of the fusion block is determined.
In step S780, the texture directions of the fusion blocks are fused.
In step S790, directional interpolation is performed, and the flow returns to step S720.
In the technical scheme of fig. 7, by exploiting the motion characteristics of the foreground and background between adjacent frames, a motion estimation method is adopted to search the plurality of reference frames for the matching blocks corresponding to the current image block; the matching blocks are then fused to obtain fusion blocks, a directionality judgment is performed on the fusion blocks to determine the texture direction of each fusion block, and after the texture direction at the pixel position to be interpolated of the current image block is obtained, a directional interpolation filter is applied to obtain the current pixel to be interpolated. This avoids the problems in the related art in which neural-network algorithms depend heavily on the training data set, generalize poorly in practice, and are overly complex, thereby reducing complexity and widening the range of application. It also avoids the texture misjudgment that arises when the input is a single frame, removes that limitation, avoids the subjective image problems caused by erroneous directional interpolation, and improves image quality.
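Tying the steps of fig. 7 together, a compact end-to-end sketch follows, built on the helper sketches above. For brevity it forms a single fusion block per output pixel instead of several, uses only the two nearest reference frames, and derives weights from the SAD confidence; all of these are simplifying assumptions of the sketch rather than features of the disclosed method.

```python
import numpy as np

def super_resolve(frames, cur_idx, ratio_w, ratio_h, out_h, out_w, size=4):
    # For every output pixel: locate the neighbor, extract the block, match it
    # in the reference frames, fuse, estimate the direction, and interpolate.
    cur = frames[cur_idx]
    refs = [frames[k] for k in (cur_idx - 1, cur_idx + 1)
            if 0 <= k < len(frames)]
    out = np.zeros((out_h, out_w), dtype=np.float64)
    for j in range(out_h):            # vertical output coordinate
        for i in range(out_w):        # horizontal output coordinate
            x, y = neighbor_position(i, j, ratio_w, ratio_h)
            blk = current_block(cur, x, y, size)
            if blk.shape != (size, size):  # border: nearest-pixel fallback
                out[j, i] = cur[min(y, cur.shape[0] - 1),
                                min(x, cur.shape[1] - 1)]
                continue
            blocks, weights = [blk], [1.0]
            for ref in refs:
                m, cost = best_match(ref, blk, x - size // 2, y - size // 2)
                if m is not None:
                    blocks.append(m)
                    weights.append(1.0 / (1.0 + cost))  # SAD-based confidence
            fused = fuse_blocks(blocks, weights)
            out[j, i] = directional_interpolate(fused, texture_direction(fused))
    return out
```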
It should be added that the super-resolution scheme in the embodiments of the present disclosure may also be used in denoising. For example, in the video path, multiple interpolation passes may be used to reach the final output resolution by setting multiple intermediate resolutions, i.e., each pass takes the previous output as its input, so as to achieve a better denoising effect. The super-resolution scheme can also be used in image restoration; for example, for details lost in a single frame of a video, other frames can be used to perform filling, correction, and other operations.
It should be noted that although the steps of the methods in the present disclosure are depicted in the accompanying drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all illustrated steps be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
Fig. 8 schematically shows a block diagram of an image processing apparatus of an exemplary embodiment of the present disclosure. Referring to fig. 8, an image processing apparatus 800 according to an exemplary embodiment of the present disclosure may include the following modules:
A matching block determining module 801, configured to determine a position of a neighboring pixel of a current pixel to be interpolated included in an output image in an input image, determine a current image block in the input image through the position of the neighboring pixel, and determine all matching blocks corresponding to the current image block in a plurality of reference frames corresponding to a current frame;
the direction determining module 802 is configured to perform image block fusion according to target matching blocks in all the matching blocks and a current image block to obtain a plurality of fusion blocks, and determine a target direction of the current image block according to the fusion blocks;
and the image interpolation module 803 is configured to perform directional interpolation on the current image block according to the target direction, so as to obtain a pixel value of the current pixel to be interpolated.
In one exemplary embodiment of the present disclosure, the matching block determination module includes: the first coordinate determining module is used for determining the horizontal coordinates of the adjacent pixels according to the ratio of the horizontal coordinates of the current pixel to be interpolated to the horizontal scaling degree; and the second coordinate determining module is used for determining the vertical coordinates of the adjacent pixels according to the ratio of the vertical coordinates of the current pixel to be interpolated to the vertical scaling degree.
In one exemplary embodiment of the present disclosure, the matching block determination module includes: and the motion estimation matching module is used for determining a motion vector of a current image block of the current frame relative to the current reference frame in the current reference frame and determining a matching block similar to the current image block in the current reference frame based on the motion vector.
In one exemplary embodiment of the present disclosure, the motion estimation matching module includes: the position determining module is used for determining the searchable position of the current reference frame by taking the position of the current image block as the center according to a search window; the reference block determining module is used for traversing the searchable position through a fixed step length, and obtaining all reference blocks with the same size as the current image block in the current reference frame; and the matching block selection module is used for matching all the reference blocks with the current image block to obtain matching degree, and determining the matching block from all the reference blocks of the current reference frame according to the matching degree.
In one exemplary embodiment of the present disclosure, the matching block selection module is configured to: and taking at least one reference block with the highest matching degree in all the reference blocks as the matching block of the current image block in the current reference frame.
In one exemplary embodiment of the present disclosure, the direction determination module includes: the weight determining module is used for determining the weight of each matching block according to the time distance; and the matching block fusion module is used for fusing target matching blocks in all the matching blocks according to the weight of each matching block, and fusing the target matching block with the current image block according to the weight of each matching block so as to obtain a plurality of fusion blocks.
In one exemplary embodiment of the present disclosure, the direction determination module includes: the texture direction determining module is used for calculating gradients of each fusion block in a plurality of directions and determining the texture direction of each fusion block according to the direction with the minimum gradient; and the direction fusion module is used for fusing the texture directions according to the weight of each fusion block to obtain the target direction.
In one exemplary embodiment of the present disclosure, the direction fusion module includes: and the target direction determining module is used for carrying out weighted average processing on the texture direction of each fusion block according to the weight of each fusion block to obtain the target direction.
In one exemplary embodiment of the present disclosure, the image interpolation module is configured to: perform directional interpolation on the current image block according to the pixel value of the reference pixel corresponding to the target direction, so as to determine the pixel value of the current pixel to be interpolated.
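The directional step itself might be sketched as follows, assuming two reference pixels sampled bilinearly on either side of the interpolation site along the target direction; the number and placement of the reference pixels are assumptions of this sketch.

    import numpy as np

    # A sketch of directional interpolation: the pixel value at (y, x) is
    # averaged from reference pixels sampled along the target direction.
    def directional_interpolate(img, y, x, angle_deg):
        dy, dx = np.sin(np.deg2rad(angle_deg)), np.cos(np.deg2rad(angle_deg))
        samples = []
        for t in (-0.5, 0.5):  # two assumed reference points straddling (y, x)
            sy = min(max(y + t * dy, 0.0), img.shape[0] - 1.0)
            sx = min(max(x + t * dx, 0.0), img.shape[1] - 1.0)
            y0, x0 = int(sy), int(sx)
            y1 = min(y0 + 1, img.shape[0] - 1)
            x1 = min(x0 + 1, img.shape[1] - 1)
            fy, fx = sy - y0, sx - x0
            v = ((1 - fy) * (1 - fx) * img[y0, x0] + (1 - fy) * fx * img[y0, x1]
                 + fy * (1 - fx) * img[y1, x0] + fy * fx * img[y1, x1])
            samples.append(float(v))
        return 0.5 * (samples[0] + samples[1])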
Since each functional module of the image processing apparatus according to the embodiment of the present disclosure is the same as that of the embodiment of the image processing method described above, a detailed description thereof will be omitted.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software combined with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a portable hard drive) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, a network device, or the like) to perform the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in these figures do not indicate or limit their temporal order. It is likewise readily understood that these processes may be performed synchronously or asynchronously, for example, across a plurality of modules.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided and embodied in a plurality of modules or units.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. An image processing method, comprising:
determining the position, in an input image, of a neighboring pixel of a current pixel to be interpolated contained in an output image, determining a current image block in the input image through the position of the neighboring pixel, and determining all matching blocks corresponding to the current image block in a plurality of reference frames corresponding to the current frame based on a motion estimation mode;
performing image block fusion according to target matching blocks among all the matching blocks and the current image block to obtain a plurality of fusion blocks, and determining the target direction of the current image block according to the fusion blocks;
and performing directional interpolation on the current image block through the target direction to obtain the pixel value of the current pixel to be interpolated.
2. The image processing method according to claim 1, wherein determining the position, in the input image, of the adjacent pixel of the current pixel to be interpolated included in the output image includes:
determining the horizontal coordinate of the adjacent pixel according to the ratio of the horizontal coordinate of the current pixel to be interpolated to the horizontal scaling degree;
and determining the vertical coordinate of the adjacent pixel according to the ratio of the vertical coordinate of the current pixel to be interpolated to the vertical scaling degree.
3. The image processing method according to claim 1, wherein said determining all matching blocks corresponding to the current image block among a plurality of reference frames corresponding to a current frame includes:
determining, in the current reference frame, a motion vector of a current image block of the current frame relative to the current reference frame, and determining, in the current reference frame, a matching block similar to the current image block based on the motion vector.
4. The image processing method according to claim 3, wherein said determining a motion vector of a current image block of the current frame with respect to the current reference frame in the current reference frame, and determining a matching block similar to the current image block in the current reference frame based on the motion vector, comprises:
determining the searchable positions of the current reference frame according to a search window centered on the position of the current image block;
traversing the searchable positions with a fixed step size to obtain all reference blocks of the same size as the current image block in the current reference frame;
and matching all the reference blocks with the current image block to obtain a matching degree, and determining the matching block from all the reference blocks of the current reference frame according to the matching degree.
5. The image processing method according to claim 4, wherein said determining the matching block from all reference blocks of the current reference frame according to the matching degree comprises:
taking at least one reference block with the highest matching degree among all the reference blocks as the matching block of the current image block in the current reference frame.
6. The image processing method according to claim 1, wherein the performing image block fusion according to the target matching blocks among all the matching blocks and the current image block to obtain a plurality of fusion blocks includes:
determining the weight of each matching block according to its temporal distance;
fusing the target matching blocks among all the matching blocks according to the weight of each matching block, and fusing each target matching block with the current image block according to its weight, to obtain a plurality of fusion blocks.
7. The image processing method according to claim 1, wherein the determining the target direction of the current image block from the plurality of fusion blocks includes:
calculating gradients of each fusion block in multiple directions, and determining the texture direction of each fusion block according to the direction with the smallest gradient;
and fusing the texture directions according to the weight of each fusion block to obtain the target direction.
8. The image processing method according to claim 7, wherein the fusing the texture directions according to the weight of each of the fusion blocks to obtain the target direction includes:
carrying out weighted average processing on the texture direction of each fusion block according to the weight of each fusion block to obtain the target direction.
9. The image processing method according to claim 1, wherein the performing directional interpolation on the current image block by the target direction to obtain a pixel value of the current pixel to be interpolated includes:
carrying out directional interpolation on the current image block according to the pixel value of the reference pixel corresponding to the target direction, so as to determine the pixel value of the current pixel to be interpolated.
10. An image processing apparatus, comprising:
a matching block determining module, configured to determine a position of a neighboring pixel of a current pixel to be interpolated included in an output image in an input image, determine a current image block in the input image by using the position of the neighboring pixel, and determine all matching blocks corresponding to the current image block in a plurality of reference frames corresponding to the current frame based on a motion estimation manner;
the direction determining module is used for carrying out image block fusion according to target matching blocks in all the matching blocks and the current image block to obtain a plurality of fusion blocks, and determining the target direction of the current image block according to the fusion blocks;
and the image interpolation module is used for carrying out directional interpolation on the current image block through the target direction to obtain the pixel value of the current pixel to be interpolated.
11. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the image processing method according to any one of claims 1-9.
12. An electronic device, comprising:
A processor; and
A memory for storing executable instructions of the processor;
wherein the processor is configured to perform the image processing method of any of claims 1-9 via execution of the executable instructions.
CN202010694132.4A 2020-07-17 2020-07-17 Image processing method and device, storage medium and electronic equipment Active CN111784734B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010694132.4A CN111784734B (en) 2020-07-17 2020-07-17 Image processing method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010694132.4A CN111784734B (en) 2020-07-17 2020-07-17 Image processing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111784734A CN111784734A (en) 2020-10-16
CN111784734B (en) 2024-07-02

Family

ID=72763085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010694132.4A Active CN111784734B (en) 2020-07-17 2020-07-17 Image processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111784734B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508783B (en) * 2020-11-19 2024-01-30 西安全志科技有限公司 Image processing method based on direction interpolation, computer device and computer readable storage medium
CN112804526B (en) * 2020-12-31 2022-11-11 紫光展锐(重庆)科技有限公司 Image data storage method and equipment, storage medium, chip and module equipment
CN113240609A (en) * 2021-05-26 2021-08-10 Oppo广东移动通信有限公司 Image denoising method and device and storage medium
CN113838095B (en) * 2021-08-30 2023-12-29 天津港集装箱码头有限公司 Personnel tracking ball machine control method based on speed control
CN114007134B (en) * 2021-10-25 2024-06-11 Oppo广东移动通信有限公司 Video processing method, device, electronic equipment and storage medium
CN113689362B (en) * 2021-10-27 2022-02-22 深圳市慧鲤科技有限公司 Image processing method and device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7609765B2 (en) * 2004-12-02 2009-10-27 Intel Corporation Fast multi-frame motion estimation with adaptive search strategies
CN110505479B (en) * 2019-08-09 2023-06-16 东华大学 Video compressed sensing reconstruction method with same measurement rate frame by frame under time delay constraint

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102075760A (en) * 2010-10-27 2011-05-25 无锡中星微电子有限公司 Quick movement estimation method and device
CN110730344A (en) * 2019-09-18 2020-01-24 浙江大华技术股份有限公司 Video coding method and device and computer storage medium

Also Published As

Publication number Publication date
CN111784734A (en) 2020-10-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant