CN113284077A - Image processing method, image processing device, communication equipment and readable storage medium

Info

Publication number
CN113284077A
CN113284077A (application CN202010104217.2A)
Authority
CN
China
Prior art keywords: image, processed, reference frame, frame image, registration
Legal status: Pending
Application number: CN202010104217.2A
Other languages: Chinese (zh)
Inventors: Hu Jing (呼静), Chen Gang (陈刚), Li Zheng (李政), Cao Zhipeng (曹志鹏), Zhao Cong (赵聪)
Current Assignee: Huawei Technologies Co., Ltd.
Original Assignee: Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.
Priority application: CN202010104217.2A
Priority application: PCT/CN2020/127154 (WO2021164329A1)
Publication of CN113284077A

Classifications

    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06N 3/04 - Neural networks; architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 3/08 - Neural networks; learning methods
    • G06T 7/33 - Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 7/337 - Image registration using feature-based methods involving reference images or patches
    • G06T 2207/10004 - Still image; photographic image
    • G06T 2207/20221 - Image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application belongs to the technical field of image processing, and provides an image processing method, an image processing device, communication equipment and a readable storage medium. The method includes: obtaining a first reference frame image of an image to be processed, where the first reference frame image is obtained by performing registration processing on the image to be processed and at least one reference image, the similarity between the at least one reference image and the image to be processed meets a preset threshold, and the quality of the first reference frame image is greater than that of the image to be processed; and performing registration and fusion processing on the first reference frame image and the image to be processed, so as to fuse image information in the first reference frame image that matches the image to be processed into the image to be processed, obtaining a processed image. Because registration is performed twice, the registration effect between the image to be processed and the reference image can be ensured, the details of the image to be processed are enhanced using the high-frequency information in the first reference frame image, and vivid and clear details can be effectively restored in the image to be processed.

Description

Image processing method, image processing device, communication equipment and readable storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a communication device, and a readable storage medium.
Background
In daily life, taking pictures with a terminal device is very common, and users expect to capture richly detailed, high-quality photos with a mobile phone. However, because the imaging components of a mobile device are constrained by its size, the device can often capture only low-quality images, which cannot meet users' requirements.
In the prior art, a high-quality image similar to a low-quality image is obtained, the high-quality image is registered with the low-quality image by block matching, and the low-quality image is reconstructed from the registered high-quality image, thereby improving its visual effect.
However, because the obtained high-quality image differs from the low-quality image in viewing angle, color detail and resolution, the accuracy of block matching is reduced and the registration between the low-quality image and the high-quality image is not ideal, so reconstructing the low-quality image can produce disordered image textures and an unsatisfactory detail enhancement effect.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a communication device and a readable storage medium, so as to solve the problems that the registration effect of a low-quality image and a high-quality image is not ideal, and the texture of the image is disordered and the detail improvement effect is not ideal when the low-quality image is reconstructed.
In a first aspect, an embodiment of the present application provides an image processing method, including: obtaining a first reference frame image of an image to be processed, where the first reference frame image is obtained by performing registration processing on the image to be processed and at least one reference image, the similarity between the at least one reference image and the image to be processed meets a preset threshold, and the quality of the first reference frame image is greater than that of the image to be processed; and performing registration and fusion processing on the first reference frame image and the image to be processed, so as to fuse image information in the first reference frame image that matches the image to be processed into the image to be processed, obtaining a processed image.
The image processing method may be implemented by a cloud device alone, or by a cloud device and a terminal device together. The cloud device, also called a cloud server, has computing capability and can process received data. The terminal device may be a smart phone, a tablet computer, a laptop computer, a wearable device, an Augmented Reality (AR)/Virtual Reality (VR) device, a vehicle-mounted terminal, a server, or any other device capable of communicating with the cloud device.
In a possible implementation manner of the first aspect, when the image processing method is implemented in the cloud device, the image to be processed sent by the terminal device in communication connection with the cloud device may be received first, the first reference frame image is obtained by the cloud device, and the image to be processed is subjected to registration and fusion processing, so as to obtain the processed image. Or, when the image processing method is implemented together by the cloud device and the terminal device, the image to be processed sent by the terminal device in communication connection with the cloud device may be received first, and the first reference frame image is obtained by the cloud device. And then the cloud device sends the first reference frame image to the terminal device, and the terminal device performs registration and fusion processing on the image to be processed according to the received first reference frame image to obtain a processed image.
It should be understood that the image to be processed may be an image acquired by the terminal device in real time through the image acquisition means. Of course, the image to be processed may also be an image stored in the terminal device.
Because a video is obtained by continuously playing a plurality of images, the method provided by the application can also be applied to the processing of video. By way of example only and not limitation, each frame in a segment of video may be used as an image to be processed, and after the frames are processed one by one, the processed images may be synthesized into a video, as in the sketch below. Alternatively, a subset of the frame images in the video may be selected as images to be processed, yielding a plurality of processed images; the detail features in the processed images are then applied to the unprocessed frames according to their feature points, and a video is finally generated in sequence. Those skilled in the art should understand that schemes for video processing based on the image processing method provided in the present application fall within the scope of the present application.
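By way of example only and not limitation, the following Python sketch illustrates the per-frame variant described above. The function process_image is a hypothetical placeholder standing for the whole two-pass registration and fusion method of the present application; it is an assumption of this sketch, not part of the patent.

```python
# Illustrative sketch only: apply the image processing method frame by frame
# and reassemble the outputs into a video. `process_image` is a hypothetical
# placeholder for the two-pass registration and fusion described herein.
import cv2

def process_video(src_path, dst_path, process_image):
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unknown
    writer = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out = process_image(frame)  # each frame is an image to be processed
        if writer is None:
            h, w = out.shape[:2]
            fourcc = cv2.VideoWriter_fourcc(*"mp4v")
            writer = cv2.VideoWriter(dst_path, fourcc, fps, (w, h))
        writer.write(out)
    cap.release()
    if writer is not None:
        writer.release()
```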
In this embodiment, the image to be processed and at least one reference image are registered to obtain a first reference frame image. Registration and fusion processing is then performed on the first reference frame image and the image to be processed, fusing image information in the first reference frame image that matches the image to be processed into the image to be processed to obtain a processed image. Because the first reference frame image has been preliminarily registered, the viewing-angle difference between it and the image to be processed is reduced, and since its quality is higher than that of the image to be processed, it contains more detail information. When the image to be processed is then processed according to the first reference frame image, a second registration is performed, and the registration result is finer and more accurate. After the two registrations, the registration effect between the image to be processed and the reference image can be ensured, so that the details of the image to be processed are enhanced using the high-frequency information in the first reference frame image, and vivid and clear details can be effectively restored in the image to be processed.
In another possible implementation manner of the first aspect, an embodiment is provided in which the cloud device separately implements the image processing method provided by the present application. The method for acquiring the first reference frame image of the image to be processed comprises the following steps: and receiving the image to be processed sent by the terminal equipment. Determining at least one reference image with the similarity meeting a preset threshold value with the image to be processed from a preset reference image database, wherein the quality of the reference image in the reference image database is greater than that of the image to be processed. And carrying out registration processing on the image to be processed and at least one reference image to obtain a first reference frame image.
In a possible implementation manner of the first aspect, an embodiment is provided in which the cloud device and the terminal device jointly implement the image processing method provided by the present application. The method for acquiring the first reference frame image of the image to be processed comprises the following steps: sending the image to be processed to a cloud end, determining at least one reference image with the similarity meeting a preset threshold value from a preset reference image database by the cloud end, and carrying out registration processing on the image to be processed and the at least one reference image to obtain a first reference frame image, wherein the quality of the reference image in the reference image database is greater than that of the image to be processed. And receiving a first reference frame image sent by the cloud.
In some embodiments, registering the image to be processed and the at least one reference image to obtain at least one first reference frame image, includes: and respectively carrying out registration processing on the at least one reference image and the image to be processed to obtain at least one registered reference image. And synthesizing at least one registered reference image to obtain a first reference frame image.
By way of example and not limitation, two methods for performing registration fusion processing on a first reference frame image and an image to be processed are provided in the present application. It should be noted that, when the cloud device is implemented separately, the following steps are performed by the cloud device. When the cloud device and the terminal device are jointly implemented, the following steps are executed by the terminal device.
In one implementation manner, the registering and fusing the first reference frame image and the image to be processed to fuse image information in the first reference frame image, which is matched with the image to be processed, into the image to be processed, so as to obtain a processed image, includes: and acquiring a matching degree map of the image to be processed and the first reference frame image, wherein the matching degree map is used for indicating the matching degree of pixels at corresponding positions in the image to be processed and the first reference frame image. And acquiring a first image and a second image according to the matching degree map, the image to be processed and the first reference frame image, wherein the first image comprises a region of the image to be processed, which is different from pixels in the first reference frame image, and the second image comprises a region of the first reference frame image, which is the same as pixels in the image to be processed. And synthesizing the first image and the second image to obtain a second reference frame image. And carrying out registration and fusion processing on the second reference frame image and the image to be processed to obtain a processed image.
Based on the above implementation, the registering and fusing the second reference frame image and the image to be processed to obtain a processed image, including:
and registering each pixel in the image to be processed and the second reference frame image one by one to obtain pixel position deviation data of the image to be processed and the second reference frame image. And according to the pixel position deviation data, fusing the high-frequency information of each pixel in the second reference frame image to a corresponding position in the image to be processed to obtain a processed image.
In another implementation, the registering and fusing the first reference frame image and the image to be processed to fuse the image information in the first reference frame image, which is matched with the image to be processed, into the image to be processed, so as to obtain a processed image, includes: and acquiring a matching degree map of the image to be processed and the first reference frame image, wherein the matching degree map is used for indicating the matching degree of pixels at corresponding positions in the image to be processed and the reference frame image. And carrying out registration and fusion processing on the first reference frame image and the image to be processed to obtain a third reference frame image. And acquiring a third image and a fourth image according to the matching degree map, the image to be processed and the third reference frame image, wherein the third image comprises a region of the image to be processed, which is different from the pixels in the third reference frame image, and the fourth image comprises a region of the third reference frame image, which is the same as the pixels in the image to be processed. And synthesizing the third image and the fourth image to obtain a processed image.
Based on the above implementation, the registering and fusing the first reference frame image and the image to be processed to obtain a third reference frame image, including: and registering each pixel in the image to be processed and the first reference frame image one by one to obtain pixel position deviation data of the image to be processed and the first reference frame image. And according to the pixel position deviation data, fusing the high-frequency information of each pixel in the first reference frame image to a corresponding position in the image to be processed to obtain a third reference frame image.
Based on the above embodiment, before performing the registration fusion process on the first reference frame image and the image to be processed, the method further includes: a plurality of image areas in the image to be processed are determined. And acquiring N areas matched with the first reference frame image in the plurality of image areas, wherein N is an integer greater than or equal to 1. And determining the region to be processed in the N regions.
Correspondingly, the registration and fusion processing is performed on the first reference frame image and the image to be processed, so as to fuse the image information matched with the image to be processed in the first reference frame image into the image to be processed, and obtain a processed image, and the registration and fusion processing includes: and performing registration and fusion processing on the first reference frame image and the to-be-processed area in the to-be-processed image so as to fuse image information matched with the to-be-processed area in the to-be-processed image in the first reference frame image into the to-be-processed image to obtain a processed image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a first reference frame image of an image to be processed, the first reference frame image is obtained by registering the image to be processed and at least one reference image, the similarity between the at least one reference image and the image to be processed meets a preset threshold, and the quality of the first reference frame image is greater than that of the image to be processed. And the registration fusion module is used for performing registration fusion processing on the first reference frame image and the image to be processed so as to fuse the image information matched with the image to be processed in the first reference frame image into the image to be processed to obtain a processed image.
In another possible implementation manner of the second aspect, the obtaining module is configured to receive an image to be processed sent by a terminal device. Determining at least one reference image with the similarity meeting a preset threshold value with the image to be processed from a preset reference image database, wherein the quality of the reference image in the reference image database is greater than that of the image to be processed. And carrying out registration processing on the image to be processed and at least one reference image to obtain a first reference frame image.
In yet another possible implementation manner of the second aspect, the obtaining module is configured to send the image to be processed to the cloud, determine, by the cloud, at least one reference image whose similarity to the image to be processed satisfies a preset threshold from a preset reference image database, and perform registration processing on the image to be processed and the at least one reference image to obtain a first reference frame image. And the quality of the reference image in the reference image database is greater than that of the image to be processed. And receiving a first reference frame image sent by the cloud.
In the present application, by way of example and not limitation, the following two methods are given to illustrate the processing procedure of the registration fusion module for performing registration fusion on the images to be processed. It should be noted that, when the cloud device is implemented separately, the registration fusion module is located in the cloud device. When the cloud device and the terminal device are jointly implemented, the registration and fusion module is located in the terminal device.
In one implementation, the registration fusion module is specifically configured to obtain a matching degree map of the image to be processed and the first reference frame image, where the matching degree map is used to indicate matching degrees of pixels at corresponding positions in the image to be processed and the first reference frame image. And acquiring a first image and a second image according to the matching degree map, the image to be processed and the first reference frame image, wherein the first image comprises a region of the image to be processed, which is different from pixels in the first reference frame image, and the second image comprises a region of the first reference frame image, which is the same as pixels in the image to be processed. And synthesizing the first image and the second image to obtain a second reference frame image. And carrying out registration and fusion processing on the second reference frame image and the image to be processed to obtain a processed image.
Based on the above implementation, the registration fusion module is further configured to register each pixel in the image to be processed and the second reference frame image one by one, so as to obtain pixel position deviation data of the image to be processed and the second reference frame image. And according to the pixel position deviation data, fusing the high-frequency information of each pixel in the second reference frame image to a corresponding position in the image to be processed to obtain a processed image.
In another implementation, the registration fusion module is specifically configured to perform registration fusion processing on the first reference frame image and the image to be processed to obtain a fusion image. And synthesizing the image to be processed and the fusion image according to the matching degree map to obtain a processed image.
Based on the above implementation, the registration fusion module is specifically configured to obtain a matching degree map of the image to be processed and the first reference frame image, where the matching degree map is used to indicate matching degrees of pixels at corresponding positions in the image to be processed and the first reference frame image. And carrying out registration and fusion processing on the first reference frame image and the image to be processed to obtain a third reference frame image. And acquiring a third image and a fourth image according to the matching degree map, the image to be processed and the third reference frame image, wherein the third image comprises a region of the image to be processed, which is different from the pixels in the third reference frame image, and the fourth image comprises a region of the third reference frame image, which is the same as the pixels in the image to be processed. And synthesizing the third image and the fourth image to obtain a processed image.
Based on the above implementation, the registration fusion module is further configured to register each pixel in the image to be processed and the first reference frame image one by one, so as to obtain pixel position deviation data of the image to be processed and the first reference frame image. And according to the pixel position deviation data, fusing the high-frequency information of each pixel in the first reference frame image to a corresponding position in the image to be processed to obtain a third reference frame image.
Based on the above embodiment, the image processing apparatus further includes a determination module configured to determine a plurality of image areas in the image to be processed. And acquiring N areas matched with the first reference frame image in the plurality of image areas, wherein N is an integer greater than or equal to 1. And determining the region to be processed in the N regions.
Correspondingly, the registration fusion module is configured to perform registration fusion processing on the first reference frame image and the to-be-processed region in the to-be-processed image, so as to fuse image information in the first reference frame image, which is matched with the to-be-processed region in the to-be-processed image, into the to-be-processed image, and obtain a processed image.
In a third aspect, an embodiment of the present application provides a communication device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the communication device implements the image processing method provided in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the image processing method provided in the first aspect is implemented.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the image processing method provided in the first aspect.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings based on these drawings without inventive effort.
Fig. 1 is a schematic view of an application scenario of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an image to be processed in an embodiment of the present application;
FIG. 4 is a schematic diagram of a reference image in an embodiment of the present application;
FIG. 5 is a flowchart illustrating an image processing method according to another embodiment of the present application;
FIG. 6 is a diagram illustrating a first reference frame image according to an embodiment of the present application;
FIG. 7 is a flowchart illustrating an image processing method according to another embodiment of the present application;
FIG. 8 is a schematic diagram of a first image and a second image in an embodiment of the present application;
FIG. 9 is a schematic illustration of an image after processing in an embodiment of the present application;
FIG. 10 is a flowchart illustrating an image processing method according to another embodiment of the present application;
FIG. 11 is a flowchart illustrating an image processing method according to another embodiment of the present application;
FIG. 12 is a flowchart illustrating an image processing method according to another embodiment of the present application;
FIG. 13 is a schematic diagram of acquiring regions of an image to be processed that match a first reference frame image in the present application;
fig. 14 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of an image processing apparatus according to another embodiment of the present application;
fig. 16 is a schematic structural diagram of a communication device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when" or "upon" or "in response to a determination" or "in response to a detection".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one implementation" or "some implementations" or the like means that a particular feature, structure, or characteristic described in connection with the implementation is included in one or more implementations of the present application. Thus, appearances of the phrases "in one implementation," "in some implementations," "in other implementations," and the like, in various places throughout this specification are not necessarily all referring to the same implementation, but rather mean "one or more, but not all implementations" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
First, an application scenario to which the image processing method provided in the present application is applied will be exemplarily described with reference to fig. 1 as an example. Referring to fig. 1, an application scenario may include a smart phone 11, a scene 12 to be shot, and a cloud device 13.
The cloud device 13 may be a cloud server, which has computing capability and can process received data. The terminal device 11 may be a smart phone, a tablet computer, a notebook computer, a wearable device, an Augmented Reality (AR)/Virtual Reality (VR) device, a vehicle-mounted terminal, a server, or any other device capable of communicating with the cloud device 13. Fig. 1 illustrates the terminal device 11 as a smartphone. The terminal device 11 and the cloud device 13 are connected by wireless communication.
The image processing method provided by the application can be implemented in the cloud device 13. For example, after the scene 12 is shot by the terminal device 11 and the low-quality image 111 to be processed is obtained, the image 111 to be processed may be sent to the cloud device 13. The cloud device 13 performs registration twice on the image to be processed 111 through the image processing method provided by the application, so as to obtain a high-quality processed image. The processed image is then transmitted to the terminal device 11.
Or, the image processing method provided by the present application may also be implemented by the cloud device 13 and the terminal device 11 together. For example, after the cloud device 13 receives the low-quality image to be processed 111 sent by the terminal device 11, first registration is performed at the cloud to obtain a first reference frame image, then the first reference frame image is sent to the terminal device 11, and the terminal device 11 performs second registration and fusion processing on the image to be processed 111 according to the first reference frame image to obtain a high-quality processed image.
Because registration is performed twice, the registration effect between the image to be processed and the reference image can be ensured, so that the details of the image to be processed are enhanced using the high-frequency information in the first reference frame image, and vivid and clear details can be effectively restored in the image to be processed.
How the steps of the image processing method are allocated between the cloud device and the terminal device may be determined according to the actual situation during application, and is not limited herein.
It should be noted that any communication standard or protocol may be used for the wireless communication, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), 5G New Radio (5G NR), etc.
Fig. 2 shows a schematic flow chart of the image processing method provided in the present application, which describes a flow of performing two-time registration by the cloud device 13. As shown in fig. 2, the method includes:
and S21, the cloud device receives the to-be-processed image acquired and sent by the terminal device.
Referring to the example in fig. 1, in some embodiments, the terminal device may capture the image to be processed through a camera provided in the terminal device. However, due to limitations of the hardware at the time of photographing, the quality of the captured image to be processed tends to be low. Image quality here refers to the resolution and resolving power of an image: a low-quality image has low resolution and resolving power and blurred details, while a high-quality image has high resolution and resolving power and contains clear image details. For example, when the user shoots a distant object, the terminal device may capture the image to be processed in a digital zoom magnification mode or the like according to the user's instruction, but the captured image then loses much detail and is visually blurred. Note that the image to be processed may be directly captured by the smartphone 11 shown in fig. 1, may be an enlarged region of a captured picture, or may be a frame in a piece of video. The acquisition path of the image to be processed is not limited in the present application.
Referring to fig. 3, fig. 3 shows an example of an image to be processed, in which blurred plants, buildings, clouds are included.
And S22, the cloud device determines at least one reference image with the similarity meeting a preset threshold value with the image to be processed from a preset reference image database.
In some embodiments, the cloud device may obtain the reference image from a preset reference image database. The reference image database includes a plurality of reference images acquired in advance, for example, high-definition images captured by a single-lens reflex camera, high-definition images collected from a network, model images obtained by modeling real objects, or frame images in high-definition videos. The reference image has higher quality than the image to be processed and more detail features, so that the details in the reference image can be fused into the image to be processed. There are various types of reference images. For example, referring to fig. 4, a in fig. 4 shows a plant that matches the blurred plant in the image to be processed shown in fig. 3, so a in fig. 4 can be used as a reference image of the image to be processed. With continued reference to fig. 4, b in fig. 4 illustrates a building and a cloud; the building does not match the building in the image to be processed shown in fig. 3, but the cloud may partially match the cloud in the image to be processed, so b in fig. 4 may also serve as a reference image of the image to be processed. Finally, c in fig. 4 illustrates a partial feature of a building that matches a corresponding feature in the image to be processed shown in fig. 3, so c in fig. 4 can also be used as a reference image of the image to be processed.
It should be noted that if the similarity between a reference image and the image to be processed satisfies the preset threshold, the reference image contains a region matching part of the image to be processed. Because the quality of the reference image is greater than that of the image to be processed, the corresponding region in the image to be processed can be reconstructed according to the matching region in the reference image, so that an unclear region in the image to be processed becomes clear and accurate.
In one example, the cloud device may match a reference image for the image to be processed from the reference image database using an image matching model. The image matching model may be trained based on gray-scale matching or feature matching, and is used to retrieve, for an input image to be processed, reference images whose similarity with it meets the preset threshold. After the image to be processed is input into the image matching model, at least one reference image can be matched.
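By way of example only and not limitation, such retrieval might be realized as feature-embedding similarity search, as in the following Python sketch. The pretrained ResNet-18 embedding, the cosine-similarity score and the threshold value of 0.8 are illustrative assumptions, not the patent's specified model.

```python
# Illustrative sketch of reference-image retrieval by similarity. The patent
# does not fix the matching model; a pretrained ResNet-18 embedding and cosine
# similarity are assumptions made here for illustration.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

_model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
_model.fc = torch.nn.Identity()  # keep the 512-d embedding, drop the classifier
_model.eval()
_prep = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
                   T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

@torch.no_grad()
def embed(img_rgb):
    """Map an HxWx3 uint8 RGB image to an L2-normalized feature vector."""
    v = _model(_prep(img_rgb).unsqueeze(0)).squeeze(0).numpy()
    return v / (np.linalg.norm(v) + 1e-8)

def find_reference_images(to_process, reference_database, threshold=0.8):
    """Return reference images whose similarity to the image to be processed
    meets the preset threshold (0.8 is an illustrative value)."""
    q = embed(to_process)
    return [r for r in reference_database if float(embed(r) @ q) >= threshold]
```

In practice the database embeddings would be precomputed and indexed rather than recomputed per query; the loop above is kept simple for clarity.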
S23, the cloud device carries out registration processing on the image to be processed and the at least one reference image to obtain a first reference frame image.
Referring to fig. 5, registering the image to be processed and the at least one reference image to obtain at least one first reference frame image, including:
s231, registering the at least one reference image with the image to be processed respectively to obtain at least one registered reference image.
It should be noted that because the reference image may have been acquired by a different image acquisition device under different shooting conditions from the image to be processed, there are influencing factors such as viewing-angle and illumination differences, so the reference image needs to undergo registration processing to reduce its difference from the image to be processed. In the present application, registration refers to image registration: for two images, a mapping relationship between them is first obtained, and one image is then mapped onto the other according to that relationship, so that points in the two images indicating the same position in space correspond to each other; image fusion can then be carried out according to the obtained mapping relationship.
In one example, the cloud device may employ a registration algorithm to obtain the registered reference image. The registration algorithm may be a grayscale registration or feature registration based algorithm for registering the reference image and the image to be processed. The point in the registered reference image and the point in the image to be processed, which indicates the same position in space, have the same coordinate.
By way of example only, and not limitation, the registration algorithm may employ a gray-scale matching algorithm, which may include: Mean Absolute Differences (MAD), Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD), Mean Squared Differences (MSD), Normalized Cross-Correlation (NCC), the Sequential Similarity Detection Algorithm (SSDA), the Hadamard-transform-based Sum of Absolute Transformed Differences (SATD), local gray value coding, and the like. For example, grayscale images of the image to be processed and of the at least one reference image may be obtained first, and a region in the grayscale image of the image to be processed determined as the template image. A sub-image whose similarity with the template image meets the preset requirement is then searched for in the grayscale image of the reference image by the gray-scale matching algorithm, determining the sub-image matched with the template image. Finally, registration is realized according to the matched sub-image and the grayscale image of the image to be processed.
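By way of example only and not limitation, a minimal sketch of such gray-scale template matching follows, using OpenCV's matchTemplate with normalized cross-correlation; the choice of NCC among the measures listed above is an assumption made for illustration.

```python
# Illustrative gray-scale template matching: locate a template region of the
# image to be processed inside the reference image. NCC is one of the
# gray-scale measures listed above; the choice here is for illustration only.
import cv2

def match_template_region(to_process_gray, reference_gray, rect):
    """rect = (x, y, w, h): the template region in the image to be processed.
    Returns the best-match top-left corner in the reference and its score."""
    x, y, w, h = rect
    template = to_process_gray[y:y + h, x:x + w]
    scores = cv2.matchTemplate(reference_gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    return max_loc, max_val
```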
In another example, the registration algorithm may employ a feature algorithm, including: FAST, Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Local Binary Patterns (LBP), optical flow, and so on. For example, a feature-based registration method may extract features such as points, lines and regions from the image to be processed and the image to be matched. In some embodiments, the above feature algorithms may extract the point features of the image to be processed and of the at least one reference image, and generate the feature descriptors of the image to be processed and of the reference image respectively. Finally, the reference image is registered with the image to be processed according to the feature descriptors.
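By way of example only and not limitation, the following sketch performs feature-based registration with SIFT descriptors and a RANSAC homography, one common realization of the feature algorithms listed above; the ratio-test and RANSAC parameters are illustrative assumptions.

```python
# Illustrative feature-based registration: SIFT keypoints, ratio-test matching,
# and a RANSAC homography that warps the reference into the coordinate frame of
# the image to be processed. One possible realization, not the patent's own.
import cv2
import numpy as np

def register_reference(reference, to_process):
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(reference, None)
    kp_t, des_t = sift.detectAndCompute(to_process, None)
    good = [m for m, n in cv2.BFMatcher().knnMatch(des_r, des_t, k=2)
            if m.distance < 0.75 * n.distance]  # Lowe's ratio test
    src = np.float32([kp_r[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_t[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = to_process.shape[:2]
    # The registered reference shares coordinates with the image to be
    # processed: points indicating the same spatial position now coincide.
    return cv2.warpPerspective(reference, H, (w, h))
```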
And S232, synthesizing the at least one registered reference image to obtain a first reference frame image.
In one possible embodiment, if there is only one reference image, that reference image can be used directly as the first reference frame image. For example, if the only reference image of the image to be processed in fig. 3 is a in fig. 4, then a in fig. 4 is the first reference frame image of the image in fig. 3.
If there are two or more reference images, after the corresponding registered reference images are obtained, they need to be merged into one first reference frame image.
By way of example only, and not limitation, one possible synthesized first reference frame image is shown in fig. 6. The first reference frame image includes the areas shown as a, b and c in fig. 4 that match the image to be processed, and each matched area occupies the same pixel coordinates in the first reference frame image as the corresponding area does in the image to be processed.
S24, the cloud device carries out registration and fusion processing on the first reference frame image and the image to be processed so as to fuse image information matched with the image to be processed in the first reference frame image into the image to be processed and obtain a processed image.
By way of example only and not limitation, two implementations are given for registration fusion of the image to be processed and the first reference frame image based on the matching degree map, and are separately described below.
In one implementation, as shown in fig. 7, the steps include:
s2401, acquiring a matching degree map of the image to be processed and the first reference frame image.
The matching degree map is used for indicating the matching degree of pixels at corresponding positions in the image to be processed and the first reference frame image.
For example only, a trained matching degree detection network is preset in the cloud device. The cloud device can input the image to be processed and the first reference frame image into the matching degree detection network, which marks pixels of the image to be processed corresponding to completely matched areas of the first reference frame image as 0 and pixels corresponding to completely unmatched areas as 1, and outputs a matrix indicating the matching degree of each pixel in the image to be processed with the corresponding pixel in the first reference frame image. This matrix can be used as the matching degree map, denoted C_map.
The matching degree detection network may be implemented by a deep neural network, for example a Convolutional Neural Network (CNN), a Region-based Convolutional Neural Network (R-CNN), Fast R-CNN, Faster R-CNN, and the like, which is not limited herein.
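By way of example only and not limitation, a minimal PyTorch sketch of such a matching degree detection network follows. The layer sizes and the channel-concatenation input are illustrative assumptions; the patent only requires some deep neural network.

```python
# Minimal sketch of a matching degree detection network. It takes the image to
# be processed and the first reference frame image (concatenated on the channel
# axis) and outputs a per-pixel value in [0, 1]: 0 for matched, 1 for unmatched,
# following the convention above. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MatchingDegreeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, to_process, first_reference_frame):
        # Inputs: (B, 3, H, W) each; output C_map: (B, 1, H, W)
        return self.body(torch.cat([to_process, first_reference_frame], dim=1))
```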
S2402, acquiring a first image and a second image according to the matching degree map, the image to be processed and the first reference frame image.
The first image comprises a region of the image to be processed, which is different from pixels in the first reference frame image, and the second image comprises a region of the first reference frame image, which is the same as pixels in the image to be processed.
Referring to fig. 8, a in fig. 8 shows a first image, and a non-shaded area in a represents an area in the image to be processed, which is different from pixels in the first reference frame image. B in fig. 8 shows a second image, and the unshaded area in b represents the same area in the first reference frame image as the pixel in the image to be processed.
S2403, synthesizing the first image and the second image to obtain a second reference frame image.
The second reference frame image is obtained by combining the region of the image to be processed that does not match the first reference frame image (the first image) with the region of the first reference frame image that matches the image to be processed (the second image). The matching degree map indicates the matching degree of pixels at corresponding positions in the image to be processed and the first reference frame image (0 for matching, 1 for not matching), so the second reference frame image can be obtained from the matching degree map. For example, if I denotes the image to be processed, C_map the matching degree map, Rf1 the first reference frame image, and Rf2 the second reference frame image, the second reference frame image may be obtained by the following formula:

Rf2 = I * C_map + Rf1 * (1.0 - C_map)
That is, the matching degree map is first multiplied element-wise by the image to be processed, which keeps the regions of the image to be processed that do not match the first reference frame image. Meanwhile, subtracting the matching degree map from 1 inverts it, and multiplying the inverted map by the first reference frame image keeps the regions of the first reference frame image that match the image to be processed. Finally, the two parts are added to obtain the second reference frame image.
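By way of example only and not limitation, the compositing above reduces to a few array operations, as the following Python sketch shows; names follow the formula above, and all arrays are assumed to be floats in [0, 1] with matching shapes.

```python
# The second-reference-frame compositing as a NumPy sketch. c_map follows the
# convention above: 0 at matched pixels, 1 at unmatched pixels.
import numpy as np

def second_reference_frame(I, Rf1, c_map):
    # I * c_map keeps the unmatched regions of the image to be processed;
    # Rf1 * (1 - c_map) keeps the matched regions of the first reference frame.
    return I * c_map + Rf1 * (1.0 - c_map)
```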
S2404, registering each pixel in the image to be processed and the second reference frame image one by one to obtain pixel position deviation data of the image to be processed and the second reference frame image.
In some embodiments, pixel position deviation data may be acquired by a registration network. The registration network may be implemented by a deep neural network, and the type of the neural network is the same as the type of the matching degree detection network in S2401, which is not described herein again. The registration accuracy of the registration network is greater than that of the registration algorithm, and the registration network can register each pixel in the image to be processed with a pixel in the second reference frame image to obtain pixel position deviation data of the image to be processed and the second reference frame image.
S2405, according to the pixel position deviation data, fusing the high-frequency information of each pixel in the second reference frame image to a corresponding position in the image to be processed to obtain a processed image.
In some embodiments, the second reference frame image, the pixel position deviation data, and the image to be processed may be input to a fusion network and fused to obtain a processed image. And the fusion network is used for fusing the high-frequency information of the pixels in the second reference frame image to the corresponding positions in the image to be processed according to the pixel position deviation data.
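By way of example only and not limitation, one way to realize this fusion is to warp the reference frame by the per-pixel position offsets and add its high-frequency component to the image to be processed, as in the following sketch. The grid_sample-based warping and the average-pool high-pass filter are illustrative assumptions; the patent does not fix the fusion network's architecture.

```python
# Illustrative fusion step: warp the reference frame with the pixel position
# deviation data (a per-pixel offset field), then transfer its high-frequency
# component to the image to be processed. Assumptions for illustration only.
import torch
import torch.nn.functional as F

def fuse(to_process, ref_frame, flow, blur_kernel=5):
    """to_process, ref_frame: (B, 3, H, W); flow: (B, 2, H, W) pixel offsets."""
    B, _, H, W = ref_frame.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(flow.device)  # (2, H, W)
    coords = base.unsqueeze(0) + flow
    grid = torch.stack((2.0 * coords[:, 0] / (W - 1) - 1.0,      # x in [-1, 1]
                        2.0 * coords[:, 1] / (H - 1) - 1.0),     # y in [-1, 1]
                       dim=-1)                                   # (B, H, W, 2)
    warped = F.grid_sample(ref_frame, grid, align_corners=True)
    # High-frequency information = warped reference minus its low-pass version
    low = F.avg_pool2d(warped, blur_kernel, stride=1, padding=blur_kernel // 2)
    return to_process + (warped - low)
```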
In this embodiment, the image to be processed and the first reference frame image are synthesized into the second reference frame image according to the matching degree map, so the obtained second reference frame image is consistent with the image to be processed. Processing the image to be processed according to the second reference frame image can then produce a more accurate and more vivid image.
It should be noted that, in the data input into the fusion network, the pixel position deviation data is obtained according to the registration network, so that the same loss function can be used for training when training the registration network and the fusion network. Of course, this is not necessary, and the two may be trained separately using different loss functions, which is not limited herein.
Another implementation of S24 can be shown in fig. 10, and the steps include:
s2406, acquiring a matching degree map of the image to be processed and the first reference frame image.
The matching degree map is used for indicating the matching degree of pixels at corresponding positions in the image to be processed and the first reference frame image.
S2407, registering each pixel in the image to be processed and the first reference frame image one by one to obtain pixel position deviation data of the image to be processed and the first reference frame image.
In some embodiments, the pixel position deviation data may be acquired by a registration network. The registration network is the same as the registration network in S2404 and is not described here again.
S2408, according to the pixel position deviation data, fusing the high-frequency information of each pixel in the first reference frame image to a corresponding position in the image to be processed to obtain a third reference frame image.
In some embodiments, the first reference frame image, the pixel position deviation data, and the image to be processed may be input to a fusion network for fusion to obtain a third reference frame image. The converged network is the same as the converged network in S2405, and is not described herein.
S2409, acquiring a third image and a fourth image according to the matching degree map, the image to be processed and the third reference frame image.
The third image comprises a region of the image to be processed, which is different from pixels in the third reference frame image, and the fourth image comprises a region of the third reference frame image, which is the same as pixels in the image to be processed.
And S2410, synthesizing the third image and the fourth image to obtain a processed image.
In some embodiments, referring to the example of S2403, if DMF denotes the third reference frame image and FinalOutput denotes the processed image, the processed image may be obtained through the following formula:
FinalOutput = I * C_map + DMF * (1.0 - C_map)
the difference between the methods in S2406 to S2410S2401 to S2405 is that, in this embodiment, the image to be processed is fused with the first reference frame image to obtain a third reference frame image. And synthesizing the third reference frame image and the image to be processed according to the matching degree map to obtain a processed image. When the third reference frame image is generated, the matching degree map does not need to be considered, so that the dependency of the fusion network on the matching degree map can be reduced, and the robustness is improved.
And S25, the cloud device sends the processed image to the terminal device.
In some embodiments, fig. 9 shows a possible processed image. Referring to figs. 3, 4 and 9, the processed image shown in fig. 9 is obtained by fusing the image to be processed in fig. 3 with the reference images in fig. 4. Because registration is performed twice, the features in each reference image in fig. 4 can be accurately fused into the image to be processed to obtain the processed image. Compared with the image to be processed, the processed image has higher quality and a clearer visual effect because details from the reference images have been fused in.
After the image to be processed is processed by the image processing method provided by the application, the image information matched with the image to be processed in the first reference frame image can be fused into the image to be processed, so that the processed image contains more high-definition details, the quality of the image to be processed is improved, and the visual effect of the image to be processed is enhanced.
Fig. 11 shows another schematic flowchart of the image processing method provided in the present application, which describes a process in which the first registration is performed by the cloud and the second registration is performed by the terminal. As shown in fig. 11, the method includes:
and S31, the terminal equipment acquires the image to be processed and sends the image to the cloud equipment.
And S32, the cloud device determines at least one reference image with the similarity meeting a preset threshold value with the image to be processed from a preset reference image database.
S33, the cloud device carries out registration processing on the image to be processed and the at least one reference image to obtain a first reference frame image.
S34, the cloud device sends the first reference frame image to the terminal device.
S35, the terminal device carries out registration and fusion processing on the first reference frame image and the image to be processed so as to fuse image information matched with the image to be processed in the first reference frame image into the image to be processed and obtain a processed image.
S31 to S33 are the same as S21 to S23 in the method above and are implemented in the cloud device, and S35 is implemented in the same manner as S24; these are not described again here. The difference is that S35 is executed in the terminal device, whereas S24 is executed in the cloud device.
In this embodiment, the registration and fusion processing of the first reference frame image and the image to be processed is performed in the terminal device, which reduces the data processing pressure on the cloud device and improves its data processing efficiency.
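As a schematic illustration of this division of labor, the sketch below wires up the fig. 11 flow in runnable form; the function bodies are deliberately trivial stand-ins (global correlation for retrieval, averaging for registration, a fixed blend for fusion) and are our own, not the application's implementations.

```python
import numpy as np

def cloud_device(image, reference_db, threshold=0.9):
    """S32-S33 on the cloud: retrieve similar references, then run the
    first registration. Both steps are reduced to trivial stand-ins."""
    def similarity(a, b):
        a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    refs = [r for r in reference_db if similarity(r, image) >= threshold]
    # Stand-in "registration": average the retained references.
    return np.mean(refs, axis=0) if refs else image.copy()

def terminal_device(image, reference_db):
    """S31/S34: exchange data with the cloud; S35: register and fuse
    locally, keeping the per-pixel work off the cloud device."""
    first_reference_frame = cloud_device(image, reference_db)  # network hop
    # Stand-in "registration and fusion": a fixed blend.
    return 0.5 * image + 0.5 * first_reference_frame
```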
Fig. 12 is a schematic flow chart of an image processing method according to another embodiment of the present application. As shown in fig. 12, the image processing method further includes:
and S41, determining a plurality of image areas in the image to be processed.
In some embodiments, referring to fig. 13, a plurality of image areas in the image to be processed 111, such as a cloud area 14, a building area 15, and a plant area 16, may be obtained through an image recognition algorithm.
The image recognition algorithm may be a semantic segmentation algorithm, which may be implemented with a CNN, R-CNN, Fast R-CNN, or the like; this is not limited here.
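A hedged sketch of S41: extracting per-class regions from a semantic segmentation output. The segmenter itself is abstracted behind `predict_labels`, a hypothetical callable standing in for whichever network is used.

```python
import numpy as np

def extract_regions(image, predict_labels):
    """Split the image to be processed into per-class regions (S41).

    predict_labels : callable returning an (H, W) integer label map,
                     a stand-in for the semantic segmentation network.
    """
    labels = predict_labels(image)
    regions = {}
    for class_id in np.unique(labels):
        mask = labels == class_id
        # Keep only this class's pixels; everything else is zeroed.
        regions[int(class_id)] = (mask, np.where(mask[..., None], image, 0))
    return regions
```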
S42, acquiring N areas among the plurality of image areas that match the first reference frame image.
Wherein N is an integer greater than or equal to 1.
In some embodiments, some image regions in the image to be processed may not match the first reference frame image. These unmatched regions are discarded, and only the regions that match the first reference frame image are retained.
S43, determining the region to be processed among the N regions.
In some embodiments, if N is equal to 1, the region is directly determined to be a region to be processed.
For example only, and not by way of limitation, if N is greater than 1, the determination may be made according to a preset determination policy. For example, the similarity between each image region and the corresponding region in the first reference frame image may be obtained, and a region whose similarity is smaller than a preset threshold is determined to be a region to be processed. Alternatively, the determination may be made according to a user instruction. For example, the image to be processed 111 and each image area may be displayed on the terminal device, and in response to a region selection instruction issued by the user, the region indicated by the instruction is determined to be the region to be processed.
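A sketch of that similarity-based policy, under stated assumptions: SSIM serves as one possible similarity measure (the application does not fix one), images are float arrays in [0, 1], and each region is a boolean mask at least 7x7 pixels in extent.

```python
import numpy as np
from skimage.metrics import structural_similarity

def pick_regions_to_process(masks, image, reference_frame, threshold=0.6):
    """Return the masks whose region differs enough from the first
    reference frame image to need processing (S43)."""
    to_process = []
    for mask in masks:
        if not mask.any():
            continue
        # Compare the bounding box of the region in both images; SSIM
        # here is just one plausible similarity measure.
        ys, xs = np.where(mask)
        y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
        score = structural_similarity(
            image[y0:y1, x0:x1], reference_frame[y0:y1, x0:x1],
            channel_axis=-1, data_range=1.0)
        if score < threshold:
            to_process.append(mask)
    return to_process
```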
Correspondingly, performing registration and fusion processing on the first reference frame image and the image to be processed, so as to fuse the image information in the first reference frame image that matches the image to be processed into the image to be processed and obtain a processed image, includes:
and S44, performing registration and fusion processing on the first reference frame image and the to-be-processed area in the to-be-processed image, so as to fuse the image information matched with the to-be-processed area in the to-be-processed image in the first reference frame image into the to-be-processed image, and obtain a processed image.
In this embodiment, by determining the region to be processed and processing only that region, the image to be processed can be handled in a more targeted manner, so the method adapts to more complex scenes and the application range of the image processing method is enlarged.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Corresponding to the image processing method provided in the above embodiments, fig. 14 shows a schematic structural diagram of an image processing apparatus provided in an embodiment of the present application; for convenience of description, only the parts related to this embodiment are shown. Referring to fig. 14, the apparatus includes:
the acquiring module 51, configured to acquire a first reference frame image of the image to be processed, where the first reference frame image is obtained by performing registration processing on the image to be processed and at least one reference image, the similarity between the at least one reference image and the image to be processed satisfies a preset threshold, and the quality of the first reference frame image is greater than that of the image to be processed;

the registration fusion module 52, configured to perform registration fusion processing on the first reference frame image and the image to be processed, so as to fuse image information in the first reference frame image that matches the image to be processed into the image to be processed, to obtain a processed image.
In another possible implementation manner, the obtaining module 51 is configured to receive an image to be processed sent by a terminal device. Determining at least one reference image with the similarity meeting a preset threshold value with the image to be processed from a preset reference image database, wherein the quality of the reference image in the reference image database is greater than that of the image to be processed. And carrying out registration processing on the image to be processed and at least one reference image to obtain a first reference frame image.
In another possible implementation manner, the obtaining module 51 is configured to send the image to be processed to the cloud, determine, by the cloud, at least one reference image whose similarity to the image to be processed satisfies a preset threshold from a preset reference image database, and perform registration processing on the image to be processed and the at least one reference image to obtain a first reference frame image. And the quality of the reference image in the reference image database is greater than that of the image to be processed. And receiving a first reference frame image sent by the cloud.
In the present application, by way of example and not limitation, the following two methods illustrate how the registration fusion module performs registration fusion on the image to be processed. It should be noted that when the method is implemented by the cloud device alone, the registration fusion module is located in the cloud device; when it is implemented jointly by the cloud device and the terminal device, the registration fusion module is located in the terminal device.
In one implementation, the registration fusion module 52 is specifically configured to obtain a matching degree map of the image to be processed and the first reference frame image, where the matching degree map is used to indicate matching degrees of pixels at corresponding positions in the image to be processed and the first reference frame image. And acquiring a first image and a second image according to the matching degree map, the image to be processed and the first reference frame image, wherein the first image comprises a region of the image to be processed, which is different from pixels in the first reference frame image, and the second image comprises a region of the first reference frame image, which is the same as pixels in the image to be processed. And synthesizing the first image and the second image to obtain a second reference frame image. And carrying out registration and fusion processing on the second reference frame image and the image to be processed to obtain a processed image.
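The matching degree map in this implementation is produced by a network; as a rough hand-written stand-in (our own, not the application's), a smoothed and inverted per-pixel difference conveys what the map encodes:

```python
import cv2
import numpy as np

def matching_degree_map(image, reference_frame, sigma=3.0):
    """Crude stand-in for the learned matching degree map: a per-pixel
    score in [0, 1] that is high where the two images agree."""
    diff = np.abs(image.astype(np.float32) -
                  reference_frame.astype(np.float32)).mean(axis=-1)
    # Smooth so that small residual misalignments are tolerated.
    diff = cv2.GaussianBlur(diff, (0, 0), sigma)
    return 1.0 - diff / (diff.max() + 1e-8)  # 1.0 means a perfect match
```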
Based on the above implementation, the registration fusion module 52 is further configured to register each pixel in the image to be processed and the second reference frame image one by one, so as to obtain pixel position deviation data of the image to be processed and the second reference frame image. And according to the pixel position deviation data, fusing the high-frequency information of each pixel in the second reference frame image to a corresponding position in the image to be processed to obtain a processed image.
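Registering each pixel one by one to obtain pixel position deviation data closely resembles dense optical flow; the sketch below uses OpenCV's Farneback flow as a classical stand-in for the registration network (the function name and parameter values are our own choices).

```python
import cv2

def pixel_position_deviation(image, reference_frame):
    """Classical stand-in for the registration network: dense optical
    flow from the image to be processed (uint8 BGR) to the second
    reference frame, returned as an (H, W, 2) array of offsets."""
    prev = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(reference_frame, cv2.COLOR_BGR2GRAY)
    return cv2.calcOpticalFlowFarneback(
        prev, curr, None, pyr_scale=0.5, levels=3, winsize=21,
        iterations=3, poly_n=5, poly_sigma=1.1, flags=0)
```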
In another implementation, the registration fusion module 52 is specifically configured to perform registration fusion processing on the first reference frame image and the image to be processed to obtain a fused image, and to synthesize the image to be processed and the fused image according to the matching degree map to obtain the processed image.
Based on the foregoing implementation, the registration fusion module 52 is specifically configured to obtain a matching degree map of the image to be processed and the first reference frame image, where the matching degree map is used to indicate matching degrees of pixels at corresponding positions in the image to be processed and the first reference frame image. And carrying out registration and fusion processing on the first reference frame image and the image to be processed to obtain a third reference frame image. And acquiring a third image and a fourth image according to the matching degree map, the image to be processed and the third reference frame image, wherein the third image comprises a region of the image to be processed, which is different from the pixels in the third reference frame image, and the fourth image comprises a region of the third reference frame image, which is the same as the pixels in the image to be processed. And synthesizing the third image and the fourth image to obtain a processed image.
Based on the above implementation, the registration fusion module 52 is further configured to register each pixel in the image to be processed and the first reference frame image one by one, so as to obtain pixel position deviation data of the image to be processed and the first reference frame image. And according to the pixel position deviation data, fusing the high-frequency information of each pixel in the first reference frame image to a corresponding position in the image to be processed to obtain a third reference frame image.
Based on the above embodiments, referring to fig. 15, the image processing apparatus further includes a determining module 53 configured to determine a plurality of image areas in the image to be processed, acquire N areas among the plurality of image areas that match the first reference frame image, where N is an integer greater than or equal to 1, and determine the region to be processed among the N regions.
Correspondingly, the registration fusion module 52 is configured to perform registration fusion processing on the first reference frame image and the to-be-processed region in the to-be-processed image, so as to fuse image information in the first reference frame image, which is matched with the to-be-processed region in the to-be-processed image, into the to-be-processed image, so as to obtain a processed image.
It should be noted that the information interaction between the modules in the apparatus and their execution processes are based on the same concept as the method embodiments above; for their specific functions and technical effects, reference may be made to the method embodiment section, and details are not described here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 16 is a schematic structural diagram of a communication device according to an embodiment of the present application. As shown in fig. 16, the communication device 6 includes: at least one processor 61 (only one is shown in fig. 16), a memory 62, and a computer program 63 stored in the memory 62 and executable on the at least one processor 61. The processor 61 implements the steps in the embodiments of the image processing method described above when executing the computer program 63.
Those skilled in the art will appreciate that fig. 16 is merely an example of the communication device 6 and does not constitute a limitation on it; the device may include more or fewer components than shown, a combination of certain components, or different components, such as input/output devices, an image acquisition device, or a network access device.
The communication device 6 may be the above cloud device or the above terminal device, and the number of communication devices is not limited. For example, when the communication device 6 is a cloud device, it may implement the steps performed by the cloud device, such as S21, S22, S23, S24, S31, S32, and S33. When the communication device 6 is a terminal device, it implements the steps performed by the terminal device, such as S35. It should be noted that when the communication device 6 is a terminal device, another communication device 6 (for example, a cloud device) is required to execute the image processing method provided in the present application together with the terminal device, so as to completely implement each step of the image processing method in the present application.
The processor 61 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 62 may, in some embodiments, be an internal storage unit of the communication device 6, such as a hard disk or memory of the communication device 6. In other embodiments, the memory 62 may be an external storage device of the communication device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the communication device 6. Further, the memory 62 may include both an internal storage unit and an external storage device of the communication device 6. The memory 62 is used to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 62 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps that can be implemented in the above method embodiments.
The embodiments of the present application also provide a computer program product which, when run on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments.
The above-described image processing method, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow in the methods of the above embodiments can be realized by a computer program, whose code may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to a terminal device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random-Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only one kind of logical function division, and there may be other division manners in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (12)

1. An image processing method, comprising:
acquiring a first reference frame image of an image to be processed, wherein the first reference frame image is obtained by registering the image to be processed and at least one reference image, the similarity between the at least one reference image and the image to be processed meets a preset threshold value, and the quality of the first reference frame image is greater than that of the image to be processed;
and performing registration and fusion processing on the first reference frame image and the image to be processed to fuse image information matched with the image to be processed in the first reference frame image into the image to be processed to obtain a processed image.
2. The method of claim 1, wherein the obtaining a first reference frame image of the image to be processed comprises:
receiving the image to be processed sent by the terminal equipment;
determining the at least one reference image with the similarity to the image to be processed meeting the preset threshold from a preset reference image database, wherein the quality of the reference image in the reference image database is greater than that of the image to be processed;
and carrying out registration processing on the image to be processed and the at least one reference image to obtain the first reference frame image.
3. The method of claim 1, wherein the obtaining a first reference frame image of the image to be processed comprises:
sending the image to be processed to a cloud end, determining the at least one reference image with the similarity to the image to be processed meeting the preset threshold from a preset reference image database by the cloud end, and performing registration processing on the image to be processed and the at least one reference image to obtain a first reference frame image, wherein the quality of the reference image in the reference image database is greater than that of the image to be processed;
and receiving the first reference frame image sent by the cloud.
4. The method according to claim 2 or 3, wherein the registering the image to be processed and the at least one reference image to obtain the first reference frame image comprises:
respectively carrying out registration processing on the at least one reference image and the image to be processed to obtain at least one registered reference image;
and synthesizing the at least one registered reference image to obtain the first reference frame image.
5. The method according to any one of claims 1 to 4, wherein the registering and fusing the first reference frame image and the image to be processed to fuse image information in the first reference frame image, which is matched with the image to be processed, into the image to be processed to obtain a processed image, includes:
acquiring a matching degree map of the image to be processed and the first reference frame image, wherein the matching degree map is used for indicating the matching degree of pixels at corresponding positions in the image to be processed and the first reference frame image;
acquiring a first image and a second image according to the matching degree map, the image to be processed and the first reference frame image, wherein the first image comprises a region of the image to be processed, which is different from pixels in the first reference frame image, and the second image comprises a region of the first reference frame image, which is the same as pixels in the image to be processed;
synthesizing the first image and the second image to obtain a second reference frame image;
and carrying out registration and fusion processing on the second reference frame image and the image to be processed to obtain the processed image.
6. The method of claim 5, wherein the registering and fusing the second reference frame image and the image to be processed to obtain the processed image comprises:
registering each pixel in the image to be processed and the second reference frame image one by one to obtain pixel position deviation data of the image to be processed and the second reference frame image;
and according to the pixel position deviation data, fusing the high-frequency information of each pixel in the second reference frame image to a corresponding position in the image to be processed to obtain the processed image.
7. The method according to any one of claims 1 to 4, wherein the registering and fusing the first reference frame image and the image to be processed to fuse image information in the first reference frame image, which is matched with the image to be processed, into the image to be processed to obtain a processed image, includes:
acquiring a matching degree map of the image to be processed and the first reference frame image, wherein the matching degree map is used for indicating the matching degree of pixels at corresponding positions in the image to be processed and the first reference frame image;
registering and fusing the first reference frame image and the image to be processed to obtain a third reference frame image;
acquiring a third image and a fourth image according to the matching degree map, the image to be processed and the third reference frame image, wherein the third image comprises a region of the image to be processed, which is different from pixels in the third reference frame image, and the fourth image comprises a region of the third reference frame image, which is the same as pixels in the image to be processed;
and synthesizing the third image and the fourth image to obtain the processed image.
8. The method of claim 7, wherein the registering and fusing the first reference frame image and the image to be processed to obtain a third reference frame image comprises:
registering each pixel in the image to be processed and the first reference frame image one by one to obtain pixel position deviation data of the image to be processed and the first reference frame image;
and according to the pixel position deviation data, fusing the high-frequency information of each pixel in the first reference frame image to a corresponding position in the image to be processed to obtain a third reference frame image.
9. The method of any one of claims 1-8, wherein prior to the registration fusion process of the first reference frame image and the image to be processed, the method further comprises:
determining a plurality of image areas in the image to be processed;
acquiring N areas matched with the first reference frame image in the plurality of image areas, wherein N is an integer greater than or equal to 1;
determining a region to be processed in the N regions;
correspondingly, the registering and fusing the first reference frame image and the image to be processed to fuse the image information matched with the image to be processed in the first reference frame image into the image to be processed to obtain a processed image, including:
and performing registration and fusion processing on the first reference frame image and the to-be-processed area in the to-be-processed image so as to fuse image information matched with the to-be-processed area in the to-be-processed image in the first reference frame image into the to-be-processed image to obtain a processed image.
10. An image processing apparatus characterized by comprising:
an obtaining module, configured to obtain a first reference frame image of an image to be processed, where the first reference frame image is obtained by performing registration processing on the image to be processed and at least one reference image, a similarity between the at least one reference image and the image to be processed satisfies a preset threshold, and a quality of the first reference frame image is greater than that of the image to be processed;
and the registration fusion module is used for performing registration fusion processing on the first reference frame image and the image to be processed so as to fuse image information matched with the image to be processed in the first reference frame image into the image to be processed to obtain a processed image.
11. A communication device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the communication device implements the image processing method according to any one of claims 1 to 9 when the processor executes the computer program.
12. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the image processing method of any one of claims 1 to 9.
CN202010104217.2A 2020-02-19 2020-02-19 Image processing method, image processing device, communication equipment and readable storage medium Pending CN113284077A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010104217.2A CN113284077A (en) 2020-02-19 2020-02-19 Image processing method, image processing device, communication equipment and readable storage medium
PCT/CN2020/127154 WO2021164329A1 (en) 2020-02-19 2020-11-06 Image processing method and apparatus, and communication device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010104217.2A CN113284077A (en) 2020-02-19 2020-02-19 Image processing method, image processing device, communication equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN113284077A true CN113284077A (en) 2021-08-20

Family

ID=77275014

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010104217.2A Pending CN113284077A (en) 2020-02-19 2020-02-19 Image processing method, image processing device, communication equipment and readable storage medium

Country Status (2)

Country Link
CN (1) CN113284077A (en)
WO (1) WO2021164329A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113689362B (en) * 2021-10-27 2022-02-22 深圳市慧鲤科技有限公司 Image processing method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120075482A1 (en) * 2010-09-28 2012-03-29 Voss Shane D Image blending based on image reference information
CN105931210A (en) * 2016-04-15 2016-09-07 中国航空工业集团公司洛阳电光设备研究所 High-resolution image reconstruction method
US20190286875A1 (en) * 2018-03-19 2019-09-19 Rosemount Aerospace Limited Cloud detection in aerial imagery

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5451621B2 (en) * 2007-10-01 2014-03-26 コーニンクレッカ フィリップス エヌ ヴェ Detection and tracking of interventional instruments
CN105046676A (en) * 2015-08-27 2015-11-11 上海斐讯数据通信技术有限公司 Image fusion method and equipment based on intelligent terminal
CN105913409A (en) * 2016-07-12 2016-08-31 常俊苹 Image processing method based on fusion of multiple frames of images
CN110660088B (en) * 2018-06-30 2023-08-22 华为技术有限公司 Image processing method and device
CN109785233B (en) * 2018-12-25 2020-12-04 合肥埃科光电科技有限公司 Image super-resolution reconstruction method

Also Published As

Publication number Publication date
WO2021164329A1 (en) 2021-08-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination