CN115546071A - Data processing method and equipment suitable for image restoration

Info

Publication number
CN115546071A
CN115546071A
Authority
CN
China
Prior art keywords
image
block
geometric
current
matching
Prior art date
Legal status (the status listed is an assumption and is not a legal conclusion)
Granted
Application number
CN202211495957.9A
Other languages
Chinese (zh)
Other versions
CN115546071B (en)
Inventor
Guo Peng (郭鹏)
Current Assignee
Nanjing Shiyun Information Technology Co., Ltd.
Original Assignee
Nanjing Shiyun Information Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Nanjing Shiyun Information Technology Co., Ltd.
Priority to CN202211495957.9A
Publication of CN115546071A
Application granted
Publication of CN115546071B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a data processing method and device suitable for image restoration. An image to be restored and three-dimensional point cloud data are acquired; image texture features and image structure features formed by the projection of the three-dimensional point cloud data onto a plane area are extracted with a feature extraction model; the image to be restored is segmented into a plurality of feature regions according to these features; target matching image blocks matching the texture and/or structure of each geometric image block in the image to be restored are searched with an iterative optimization matching model, and all target matching image blocks are combined into a to-be-selected restoration image block set; a region to be restored is determined in the image to be restored according to the image structure features, and the to-be-selected restoration image blocks in the set are called through a preset restoration model to restore the region. By extracting the features in the three-dimensional point cloud data and constructing restoration image blocks from them to fill the region to be restored, the method improves the efficiency and accuracy of image restoration.

Description

Data processing method and equipment suitable for image recovery
Technical Field
The present application relates to the field of image recovery, and in particular, to a data processing method and device suitable for image recovery.
Background
With the continuous progress of society and computer technology, digital image processing technology has been widely applied in fields such as scanners, digital cameras, digital television, mobile intelligent terminals, intelligent driver assistance or automatic driving, monitoring, and target detection and identification. However, image quality is often degraded by various uncontrollable factors during image acquisition, transmission and storage, such as relative motion between the image pickup apparatus and the object, diffraction phenomena, turbulence effects, noise of electronic circuits, and bad weather such as rain, snow and fog. The resulting defects, such as blurring, local content loss, and degradation of brightness or sharpness, are collectively called image degradation, and image restoration is the inverse problem of image degradation.
At present, to improve the universality of an image restoration model, mainstream image restoration relies on training the model with a large amount of data, or on deepening the convolution depth and increasing the complexity of the neural network model. This sharply raises the training cost, yet the resulting model still cannot deliver a universal restoration effect across the various kinds of image degradation, so the input-output ratio is low.
Therefore, the existing image restoration technology has the technical problems of low efficiency, high cost and poor universality.
Disclosure of Invention
The embodiment of the application provides a data processing method and equipment suitable for image restoration, and aims to solve the technical problems of low efficiency, high cost and poor universality in the existing image restoration technology.
In a first aspect of the embodiments of the present application, a data processing method suitable for image recovery is provided, including:
acquiring an image to be recovered and three-dimensional point cloud data, wherein a plane area corresponding to the image to be recovered is contained in a space area corresponding to the three-dimensional point cloud data;
extracting image texture features and image structure features formed by projection of three-dimensional point cloud data in a plane area by using a feature extraction model;
according to the image texture features and the image structure features, dividing an image to be restored into a plurality of feature regions, wherein each feature region corresponds to one image texture feature or one image structure feature, and each feature region comprises a plurality of geometric image blocks;
searching target matching image blocks matched with the textures and/or structures of the geometric image blocks in each characteristic region by using an iterative optimization matching model, and combining all the target matching image blocks into a to-be-selected restoration image block set;
and determining a region to be restored in the image to be restored according to the image structure characteristics, and calling the restoration blocks to be selected in the restoration block set to be selected to restore the image of the region to be restored through a preset restoration model to obtain a restored target image.
In one possible design, searching for a target matching block matching the texture and/or structure of each geometric block in each feature region using an iterative optimization matching model, includes:
randomly distributing an initial offset value for each geometric figure block in each characteristic region, and determining a matched figure block to be selected according to the initial offset value and the coordinates of the geometric figure blocks, wherein the matched figure block to be selected and the geometric figure block are in the same characteristic region;
selecting a geometric figure block as a current figure block, and determining the texture propagation direction of a characteristic region in which the current figure block is located according to the image texture characteristics corresponding to the characteristic region in which the current figure block is located;
judging whether the difference degree between the to-be-selected matching image block corresponding to the current image block and the current image block is greater than a preset difference threshold value by using a similarity discrimination model;
if yes, searching a target matching image block corresponding to the current image block in the characteristic area where the current image block is located according to the texture propagation direction, wherein the difference degree between the target matching image block corresponding to the current image block and the current image block is smaller than or equal to a preset difference threshold value.
Optionally, after the similarity judging model is used to judge whether the difference between the to-be-selected matching image block and the geometric image block is greater than a preset difference threshold, the method further includes:
if not, determining that the image blocks to be selected are target matching image blocks of the geometric image blocks, and transmitting the relative position relation between the geometric image blocks and the target matching image blocks to each adjacent image block adjacent to the geometric image blocks;
and taking the adjacent image blocks along the texture propagation direction as the current geometric image block, and verifying whether the difference degree of the image blocks to be matched corresponding to the current geometric image block is greater than a preset difference threshold value or not by circularly utilizing the similarity discrimination model until the edge of the characteristic region is reached.
In one possible design, determining a texture propagation direction of a feature region where a current image block is located according to an image texture feature corresponding to the feature region where the current image block is located includes:
calculating the maximum curvature point on the boundary of the feature region where the current image block is located by using a preset curvature model:

Q = \mathrm{div}\left( \frac{\nabla I}{|\nabla I|} \right)

wherein Q is the curvature, \nabla I is the gradient of the feature region, |\nabla I| is the modulus of the gradient, and \mathrm{div}(\cdot) is the divergence;

taking the direction on the isophote corresponding to the maximum curvature point that points toward the interior of the feature region where the current image block is located as the texture propagation direction:

D = \nabla^{\perp} I

wherein D is the direction of the isophote at the point of maximum curvature, i.e., the direction perpendicular to \nabla I, the gradient at the point of maximum curvature.
In one possible design, searching a target matching block corresponding to a current block in a feature region where the current block is located according to a texture propagation direction includes:
taking each geometric image block in the texture propagation direction as a first to-be-selected matching image block of the current image block;
and repeating the steps of judging whether the first difference degree between each first to-be-selected matching image block and the current image block is greater than a preset difference threshold value by using the similarity discrimination model for multiple times until a preset ending condition is met, and determining a target matching image block corresponding to the current image block.
Optionally, the preset end condition includes:
calculating first difference degrees of all first to-be-selected matching image blocks in the characteristic region where the current image block is located in the texture propagation direction, and taking the first to-be-selected matching image block corresponding to the minimum first difference degree as a target matching image block corresponding to the current image block;
alternatively, the first and second electrodes may be,
and stopping judging as long as the first difference degree is smaller than or equal to the preset difference threshold, and taking the corresponding first to-be-selected matching image block as a target matching image block corresponding to the current image block.
In one possible design, after determining the target matching tile corresponding to the current tile, the method further includes:
determining a second offset value of the first geometric picture block adjacent to the current picture block according to the first offset value between the current picture block and a target matching picture block corresponding to the current picture block, wherein the second offset value is used for representing the relative position of a to-be-selected matching picture block redistributed to the first geometric picture block and the first geometric picture block;
and taking the first geometric figure block as a new current figure block, and continuously searching a target matching figure block corresponding to the current figure block in the characteristic region where the current figure block is located according to the texture propagation direction until the optimization of the target matching figure blocks corresponding to all geometric figure blocks is completed.
In a possible design, when the feature region corresponds to an image texture feature, the geometric blocks in the feature region are texture blocks, and the similarity determination model includes: a texture similarity model;
the specific steps of judging whether the difference degree between the matching image blocks to be selected and the geometric image blocks is greater than a preset difference threshold value by using the similarity discrimination model comprise:
judging whether the difference between the first average gray value of the to-be-selected matching image block and the second average gray value of the geometric image block is larger than a preset difference threshold:

\left| \bar{g}\big(T(x, y)\big) - \bar{g}\big(T(x + \Delta x,\, y + \Delta y)\big) \right| > \varepsilon

wherein T(x, y) is the geometric image block centered at (x, y), T(x + \Delta x, y + \Delta y) is the to-be-selected matching image block centered at (x + \Delta x, y + \Delta y), \varepsilon is the preset difference threshold, \Delta x and \Delta y are the abscissa offset and the ordinate offset, and \bar{g}(\cdot) is the average gray value.
In a possible design, when the feature region corresponds to an image structural feature, the geometric blocks in the feature region are structural blocks, and the similarity determination model includes: a structural similarity model;
the specific steps of judging whether the difference degree between the matching image blocks to be selected and the geometric image blocks is greater than a preset difference threshold value by using the similarity discrimination model comprise the following steps:
judging whether the included angle between the first structure vector of the to-be-selected matching image block and the second structure vector of the geometric image block is larger than a preset difference threshold:

\arccos\left( \frac{\mathbf{v}(x, y) \cdot \mathbf{v}(x + \Delta x,\, y + \Delta y)}{|\mathbf{v}(x, y)| \, |\mathbf{v}(x + \Delta x,\, y + \Delta y)|} \right) > \varepsilon

wherein \mathbf{v}(x, y) is the second structure vector of the geometric image block centered at (x, y), \mathbf{v}(x + \Delta x, y + \Delta y) is the first structure vector of the to-be-selected matching image block centered at (x + \Delta x, y + \Delta y), \varepsilon is the preset difference threshold, and \Delta x and \Delta y are the abscissa offset and the ordinate offset.
In a second aspect of the embodiments of the present application, there is provided a data processing apparatus suitable for image restoration, including:
the acquisition module is used for acquiring an image to be recovered and three-dimensional point cloud data, and a plane area corresponding to the image to be recovered is contained in a space area corresponding to the three-dimensional point cloud data;
a processing module to:
extracting image texture features and image structure features formed by projection of three-dimensional point cloud data in a plane area by using a feature extraction model;
according to the image texture features and the image structure features, dividing an image to be restored into a plurality of feature regions, wherein each feature region corresponds to one image texture feature or one image structure feature, and each feature region comprises a plurality of geometric image blocks;
searching target matching image blocks matched with the textures and/or structures of the geometric image blocks in each characteristic region by using an iterative optimization matching model, and combining all the target matching image blocks into a to-be-selected restoration image block set;
and the restoration module is used for determining a to-be-restored area in the to-be-restored image according to the image structure characteristics, calling the to-be-selected restoration image blocks in the to-be-selected restoration image block set to restore the to-be-restored area through a preset restoration model, and obtaining a restored target image.
In a third aspect of the embodiments of the present application, there is provided an apparatus for data processing suitable for image restoration, including:
a memory for storing program instructions;
and the processor is used for calling and executing the program instructions in the memory to execute any one of the possible methods provided by the first aspect.
In a fourth aspect of the embodiments of the present application, a storage medium is provided, where a computer program is stored in the storage medium, and the computer program is configured to execute any one of the possible methods provided in the first aspect.
A fifth aspect of the embodiments of the present application provides a computer program product, which includes a computer program that, when executed by a processor, implements any one of the possible methods provided by the first aspect.
According to the data processing method and device suitable for image recovery provided by the application, an image to be restored and three-dimensional point cloud data are acquired, and the plane area corresponding to the image to be restored is contained in the space area corresponding to the three-dimensional point cloud data; image texture features and image structure features formed by the projection of the three-dimensional point cloud data in the plane area are extracted with a feature extraction model; according to the image texture features and the image structure features, the image to be restored is segmented into a plurality of feature regions, wherein each feature region corresponds to one image texture feature or one image structure feature and comprises a plurality of geometric image blocks; target matching image blocks matching the texture and/or structure of each geometric image block are searched in the image to be restored with an iterative optimization matching model, and all target matching image blocks are combined into a to-be-selected restoration image block set; a region to be restored is determined in the image to be restored according to the image structure features, and the to-be-selected restoration image blocks in the set are called through a preset restoration model to restore the region, obtaining a restored target image. By using the three-dimensional point cloud data of the three-dimensional space to which the image belongs and extracting the features in that data, accurate supervised learning data is provided for image restoration and the training requirements on the image restoration model are reduced; the three-dimensional point cloud data can also provide restoration image blocks for image restoration, thereby achieving the technical effect of improving the efficiency and accuracy of image restoration.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flowchart of a data processing method suitable for image restoration according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating an image to be restored is segmented into a plurality of feature regions according to an embodiment of the present application;
fig. 3 is a schematic flowchart illustrating a process of searching for a target matching block in step S140 according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a data processing apparatus suitable for image restoration according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a data processing apparatus suitable for image restoration according to an embodiment of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be implemented in sequences other than those illustrated or described herein.
It should be understood that, in the various embodiments of the present application, the size of the serial number of each process does not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It should be understood that, in this application, "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "plurality" means two or more. "And/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "Comprising A, B and C" or "comprising A, B, C" means that all three of A, B and C are comprised; "comprising A, B or C" means comprising one of A, B and C; "comprising A, B and/or C" means comprising any one, any two, or all three of A, B and C.
It should be understood that in the present application, "B corresponding to A", "A corresponds to B", or "B corresponds to A" means that B is associated with A, and B can be determined from A. Determining B from A does not mean determining B from A alone; B may be determined from A and/or other information. The matching of A and B means that the similarity of A and B is greater than or equal to a preset threshold.
As used herein, the term "if" may be interpreted as "when" or "upon" or "in response to a determination" or "in response to a detection", depending on the context.
The technical means of the present application will be described in detail with specific examples. These several specific embodiments may be combined with each other below, and details of the same or similar concepts or processes may not be repeated in some embodiments.
With the continuous progress of society and computer technology, digital image processing technology has been widely applied in fields such as scanners, digital cameras, digital television, mobile intelligent terminals, intelligent driver assistance or automatic driving, monitoring, and target detection and identification. However, image quality is often degraded by various uncontrollable factors during image acquisition, transmission and storage, such as relative motion between the image pickup apparatus and the object, diffraction phenomena, turbulence effects, noise of electronic circuits, and bad weather such as rain, snow and fog. The resulting defects, such as blurring, local content loss, and degradation of brightness or sharpness, are collectively called image degradation, and image restoration is the inverse problem of image degradation.
At present, to improve the universality of an image restoration model, mainstream image restoration relies on training the model with a large amount of data, or on deepening the convolution depth and increasing the complexity of the neural network model. This sharply raises the training cost, yet the resulting model still cannot deliver a universal restoration effect across the various kinds of image degradation, so the input-output ratio is low.
Therefore, the existing image restoration technology has the technical problems of low efficiency, high cost and poor universality.
In order to solve the technical problems, the invention idea of the application is as follows:
When an image is collected, three-dimensional point cloud data of the corresponding spatial area is collected by a radar at the same time. The radar is not easily disturbed by rain, snow or fog, and its sampling period is short, so it is far less susceptible to the motion blur caused by relative motion. By projecting the three-dimensional point cloud data onto the plane area where the image is located and performing feature extraction on the projection, the features inside the image can be restored. Thus, for a degraded image, there is no need to rely on an image restoration model to simulate or guess its features, which reduces the training requirements of the model. Secondly, based on the restored features inside the image, image blocks similar to the region to be restored are found in the image to be restored, i.e., the degraded image, so that the region can be restored more accurately and the restoration of the image completed. Compared with the prior art, because the three-dimensional point cloud data and the image to be restored are acquired simultaneously, the feature information missing from the image to be restored can be extracted as completely as possible and accurately matched with the image. The image restoration model therefore does not need to predict the relevant feature information of the image from its own structure; it only needs to focus on how to fill the region to be restored with image blocks so that the filled content fuses well with the surrounding image areas. In this way, the three-dimensional point cloud data breaks the conventional assumption that a large amount of data is needed to train an image restoration model, while achieving universality of the image restoration processing: the universality of the processing method is decoupled from the image restoration model and is no longer limited by it.
Fig. 1 is a schematic flowchart of a data processing method suitable for image restoration according to an embodiment of the present disclosure. As shown in fig. 1, the data processing method includes the following specific steps:
and S110, acquiring the image to be recovered and the three-dimensional point cloud data.
In this step, the image to be restored is acquired by a two-dimensional image acquisition device, such as a camera, and three-dimensional point cloud data is acquired by a three-dimensional point cloud acquisition device, such as a radar. It is noted that a planar area corresponding to the image to be restored is included in a spatial area corresponding to the three-dimensional point cloud data.
It should be noted that the installation positions and the collection directions of the two-dimensional image collector and the three-dimensional point cloud collector can be calibrated through experiments, so that the planar area corresponding to the image to be recovered is contained in the spatial area corresponding to the three-dimensional point cloud data.
For example, the two-dimensional image collector and the three-dimensional point cloud collector may be installed at the same position, and the collection directions of the two are the same. Of course, if the two are installed at different positions, the acquisition directions of the two need to be adjusted so that the planar area corresponding to the image to be restored is included in the spatial area corresponding to the three-dimensional point cloud data.
And S120, extracting image texture features and image structure features formed by projection of the three-dimensional point cloud data in a plane area by using the feature extraction model.
It should be noted that an image can be decomposed into two parts, namely, a structure part and a texture part, wherein the structure information represents an overall frame of the image and includes important description information such as edges of the image, and the texture information represents a detailed part in the frame of the image. The texture in the image is often used to describe the surface of various objects in nature, such as sand, skin, hair, plants, etc., i.e., the texture can be understood as a stably distributed visual pattern in some areas in an infinite two-dimensional space, and has certain regularity, randomness and directionality, and has the following characteristics:
1. a constant repetition of certain local sequences over a large area;
2. textures have regional characteristics, consisting primarily of texture elements that cause visual perception;
3. the various portions of the textured area are substantially uniform masses of similar size.
In this step, the projection of the three-dimensional point cloud data in the plane area is determined according to a preset coordinate transformation formula, which may be expressed as formula (1):

u = u_0 + \frac{f}{dx} \cdot \frac{X}{Z - L}, \quad v = v_0 + \frac{f}{dy} \cdot \frac{Y + H_r - H_c}{Z - L} \quad (1)

wherein (X, Y, Z) are the coordinates of the planar area in the three-dimensional space area, f is the focal length of the two-dimensional image collector (such as a camera), (u, v) are coordinates in the pixel coordinate system, (u_0, v_0) is the origin of the image coordinate system of the image to be restored expressed in the pixel coordinate system, dx and dy are the size of each pixel point of the two-dimensional image collector, H_r is the ground clearance of the three-dimensional point cloud collector (such as a radar), H_c is the ground clearance of the two-dimensional image collector, and L is the longitudinal distance between the three-dimensional point cloud collector and the two-dimensional image collector.
The projection of the plane area where the image to be restored is located can be extracted from the three-dimensional point cloud data through the formula (1), and the projection can also be understood as a two-dimensional image with the same size as the image to be restored.
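For illustration, the following Python sketch shows how such a projection could be computed. The function names are hypothetical, and the exact form of formula (1), including the height and longitudinal-offset compensation, is reconstructed from the variable definitions above rather than taken from the patent:

```python
import numpy as np

def project_point_cloud(points, f, dx, dy, u0, v0, H_r, H_c, L_dist):
    """Project 3-D point cloud coordinates (X, Y, Z) onto the pixel plane
    following the pinhole model of formula (1). H_r/H_c are the ground
    clearances of the radar and camera, L_dist their longitudinal distance."""
    X, Y, Z = points[:, 0], points[:, 1], points[:, 2]
    depth = Z - L_dist                 # compensate the longitudinal mounting offset
    valid = depth > 0                  # keep only points in front of the camera
    u = u0 + (f / dx) * X[valid] / depth[valid]
    v = v0 + (f / dy) * (Y[valid] + H_r - H_c) / depth[valid]
    return np.stack([u, v], axis=1)

def rasterize(uv, width, height):
    """Turn the projected points into a two-dimensional image of the same
    size as the image to be restored, as described above."""
    img = np.zeros((height, width), dtype=np.uint8)
    cols = np.clip(uv[:, 0].astype(int), 0, width - 1)
    rows = np.clip(uv[:, 1].astype(int), 0, height - 1)
    img[rows, cols] = 255
    return img
```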
Then, the feature extraction model is used for extracting image texture features and image structure features corresponding to the projection, and specifically, the feature extraction model can realize feature extraction through one or more convolution layers in the neural network model.
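The text leaves the internals of the feature extraction model open. As a minimal sketch of what a single convolution layer does to the projection, assuming generic hand-picked kernels rather than the model's trained weights:

```python
import numpy as np

def conv2d(image, kernel):
    """One 2-D convolution with valid padding: slides the kernel over the
    projection and produces a feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# A Sobel-style kernel responds to vertical edges (structure features);
# texture features would come from further layers or other kernels.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
structure_map = conv2d(np.zeros((32, 32)), sobel_x)  # placeholder input
```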
And S130, segmenting the image to be restored into a plurality of characteristic areas according to the image texture characteristics and the image structure characteristics.
In this step, the image to be restored is segmented according to identical or similar image texture features or image structure features to obtain a plurality of feature regions. Each feature region corresponds to one image texture feature or one image structure feature, so the feature regions fall into two types: texture feature regions and structural feature regions. Each feature region comprises a plurality of geometric image blocks, which likewise fall into two types: texture image blocks and structure image blocks.
It should be noted that a geometric image block is a region with a certain geometric shape composed of a plurality of contiguous pixels, for example a 3×3 or 5×5 quadrilateral region.
In one possible implementation, for each feature region, when the proportion of geometric blocks having the same image texture feature or image structure feature in the feature region is greater than or equal to a preset proportion threshold (e.g., 80%), it is determined that the feature region corresponds to the image texture feature or image structure feature.
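A minimal sketch of this proportion rule, assuming each geometric image block has already been assigned a feature label; the function name and the 80% default are illustrative:

```python
import numpy as np

def label_region(tile_features, ratio_threshold=0.8):
    """Assign a feature region the single texture or structure feature shared
    by at least `ratio_threshold` of its geometric image blocks."""
    labels, counts = np.unique(np.asarray(tile_features), return_counts=True)
    best = counts.argmax()
    if counts[best] / counts.sum() >= ratio_threshold:
        return labels[best]
    return None  # no single feature dominates this region
```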
Fig. 2 is a schematic diagram illustrating an image to be restored segmented into a plurality of feature regions according to an embodiment of the present application. As shown in fig. 2, different feature regions are represented by different gray levels or filling patterns, and the hexagons in the feature regions are geometric image blocks. It should be noted that fig. 2 only schematically illustrates some geometric image blocks and omits the others; in fact the geometric image blocks are distributed throughout the image to be restored, i.e., in terms of granularity: image to be restored > feature region > geometric image block > pixel.
It should be noted that the effect of this step is to use the image texture features and image structure features extracted from the three-dimensional point cloud data as prior information to constrain the search range in step S140 for the to-be-selected repair image blocks that will later fill the region to be restored, thereby improving the search efficiency and accuracy.
And S140, searching target matching image blocks matched with the textures and/or structures of the geometric image blocks in each characteristic region by using an iterative optimization matching model, and combining all the target matching image blocks into a to-be-selected restoration image block set.
In this step, the texture similarity or the structural similarity between the geometric image block and the target matching image block meets the requirement of the preset similarity, that is, the difference between the two is smaller than the preset difference threshold, so that the target matching image block can replace the geometric image block to reconstruct or restore the image to be restored.
Since the textures or structures within each feature region are similar or identical, for an image to be restored with rich texture information the geometric image blocks and their corresponding target matching image blocks should lie in the same feature region. In this embodiment, the boundary of the feature region is set as a search constraint, which effectively improves the search efficiency, precision and accuracy compared with random search over the whole image to be restored.
S150, determining a to-be-restored area in the to-be-restored image according to the image structure characteristics, and calling the to-be-restored blocks in the to-be-restored block set to restore the to-be-restored area through a preset restoration model to obtain a restored target image.
In this step, a target matching block corresponding to each geometric block in the region to be restored or a repair block to be selected that is most similar to the geometric block is screened out from a plurality of repair blocks to be selected in the repair block set to be selected, and then the geometric block in the region to be restored is replaced by the repair block to be selected.
Further, unlike the prior art, which considers only the color information of the image and ignores the image structure information, the geometric structure of the image is often restored first in the image restoration process to maintain overall consistency. If the importance of the geometric structure information is ignored and texture block filling is performed first, the details may look similar while the overall structure deviates. In the image restoration process of this embodiment, a gradient factor is therefore introduced to constrain the restoration, which highlights the importance of the geometric structure information of the image and effectively improves the restoration quality of the algorithm.
Specifically, the image restoration may be performed according to formula (2):

f(L) = \sum_{x \in \Omega} \sum_{x' \in N(x)} \left( \alpha \left\| R(L(x)) - R(L(x')) \right\|^2 + \beta \left\| \nabla R(L(x)) - \nabla R(L(x')) \right\|^2 \right) \quad (2)

wherein f(L) denotes the loss function, \Omega denotes the region to be restored, N(x) denotes the 4-neighborhood of pixel x, and L denotes the mapping between pixels and to-be-selected repair image blocks, i.e., L(x) = x + i means that the to-be-selected repair image block centered at (x + i) is filled into the geometric image block centered at pixel x; if the repair by the to-be-selected repair image block is completely effective, the value of the data item is 0. R(x) denotes the RGB color value of the geometric image block centered at pixel x, R(L(x)) denotes the RGB color value of the first to-be-selected repair image block allocated to the geometric image block centered at pixel x, R(L(x')) denotes the RGB color value of the second to-be-selected repair image block, i.e., the one allocated to the adjacent geometric image block centered at x', \nabla R(L(x)) and \nabla R(L(x')) denote the gradient values of the first and second to-be-selected repair image blocks, \alpha is the color weight, and \beta is the structure weight.
The image restoration process is the process of solving for the minimum of the loss function, \min f(L).
In conclusion, this embodiment considers both the difference between the color values of adjacent geometric image blocks and the difference between their gradient directions, so that the target image remains consistent in overall structure, improving the quality of image restoration.
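To make formula (2) concrete, the sketch below evaluates the loss for a candidate mapping on a grayscale image. Tile-level blending and sub-pixel handling are omitted, and the data structures (a dict-based mapping, a list of pixel coordinates for the region to be restored) are assumptions for illustration:

```python
import numpy as np

def restoration_loss(mapping, image, omega, alpha=0.5, beta=0.5):
    """Evaluate formula (2): for every pixel x in the region to be restored
    (omega) and each 4-neighbor x', compare the color and gradient of the
    repair tiles assigned by the mapping L. `mapping` maps pixel coordinates
    to the center of the assigned to-be-selected repair image block."""
    gy, gx = np.gradient(image.astype(float))
    def color(p): return float(image[p[0], p[1]])
    def grad(p): return np.array([gy[p[0], p[1]], gx[p[0], p[1]]])
    loss = 0.0
    for x in omega:
        for n in ((0, 1), (0, -1), (1, 0), (-1, 0)):    # 4-neighborhood N(x)
            xn = (x[0] + n[0], x[1] + n[1])
            if xn not in mapping:
                continue
            lx, lxn = mapping[x], mapping[xn]
            loss += alpha * (color(lx) - color(lxn)) ** 2          # color term
            loss += beta * np.sum((grad(lx) - grad(lxn)) ** 2)     # gradient term
    return loss
```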
For ease of understanding, consider applying this embodiment to the field of automatic driving, where a radar and a camera are installed on a vehicle at the same time. Since the vehicle moves relative to other objects on the road, such as other vehicles, pedestrians and obstacles, the image shot by the camera suffers from motion blur; if the vehicle runs in rainy, snowy or heavy-fog weather, the image is further blurred by the weather. In such cases, the image processing device on the vehicle can acquire data from the camera and the radar simultaneously and restore the image shot by the camera using the three-dimensional point cloud data collected by the radar in real time, so that more accurate target identification is achieved on the restored high-definition image, for example identifying the license plate of a vehicle ahead or the information on a roadside warning sign.
This embodiment provides a data processing method suitable for image recovery. An image to be restored and three-dimensional point cloud data are acquired, with the plane area corresponding to the image to be restored contained in the space area corresponding to the three-dimensional point cloud data; image texture features and image structure features formed by the projection of the three-dimensional point cloud data in the plane area are extracted with a feature extraction model; according to the image texture features and the image structure features, the image to be restored is segmented into a plurality of feature regions, wherein each feature region corresponds to one image texture feature or one image structure feature and comprises a plurality of geometric image blocks; target matching image blocks matching the texture and/or structure of each geometric image block are searched in the image to be restored with an iterative optimization matching model, and all target matching image blocks are combined into a to-be-selected restoration image block set; a region to be restored is determined in the image to be restored according to the image structure features, and the to-be-selected restoration image blocks in the set are called through a preset restoration model to restore the region, obtaining a restored target image. By using the three-dimensional point cloud data of the three-dimensional space to which the image belongs and extracting the features in that data, accurate supervised learning data is provided for image restoration and the training requirements on the image restoration model are reduced; the three-dimensional point cloud data can also provide restoration image blocks for image restoration, thereby achieving the technical effect of improving the efficiency and accuracy of image restoration.
For ease of understanding, a possible implementation of step S140 is described in further detail below.
Fig. 3 is a flowchart illustrating a process of searching for a target matching block in step S140 according to an embodiment of the present disclosure. As shown in fig. 3, the specific searching steps include:
and S141, randomly distributing initial offset values to the geometric image blocks in each characteristic region, and determining the matched image blocks to be selected according to the initial offset values and the coordinates of the geometric image blocks.
In this step, the to-be-selected matching image block and the geometric image block are in the same feature region. The offset value refers to the horizontal and vertical displacement of the central pixel point of the geometric image block, and the to-be-selected matching image block is the region of the same size as the geometric image block obtained after the central pixel point is moved by the offset value. The initial offset value is randomly allocated but must be constrained to the feature region to which the geometric image block belongs: because the screening result for the final target matching image block is relatively sensitive to the initial offset value, this embodiment uses the image texture features and image structure features extracted from the three-dimensional point cloud data as prior information, i.e., constrains the value of the initial offset value based on the feature region, to improve the screening precision and accuracy of the target matching image block.
And S142, selecting any geometric image block as the current image block, and determining the texture propagation direction of the feature region where the current image block is located according to the image texture feature corresponding to that feature region.
In this step, it specifically includes:
calculating the maximum curvature point on the boundary of the feature region where the current image block is located by using a preset curvature model, which can be expressed as formula (3):

Q = \mathrm{div}\left( \frac{\nabla I}{|\nabla I|} \right) \quad (3)

wherein Q is the curvature, \nabla I is the gradient of the feature region, |\nabla I| is the modulus of the gradient, and \mathrm{div}(\cdot) is the divergence;

taking the direction on the isophote corresponding to the maximum curvature point that points toward the interior of the feature region where the current image block is located as the texture propagation direction, which can be expressed as formula (4):

D = \nabla^{\perp} I \quad (4)

wherein D is the direction of the isophote at the point of maximum curvature, i.e., the direction perpendicular to \nabla I, the gradient at the point of maximum curvature.
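A minimal numerical sketch of formulas (3) and (4) with NumPy; taking the maximum of |Q| over the whole region instead of only its boundary is a simplification made here for brevity:

```python
import numpy as np

def texture_propagation_direction(region, eps=1e-8):
    """Formula (3): Q = div(grad I / |grad I|), the isophote curvature.
    Formula (4): the texture propagation direction is the isophote direction
    (the gradient rotated by 90 degrees) at the point of maximum curvature."""
    gy, gx = np.gradient(region.astype(float))       # image gradient
    mag = np.sqrt(gx ** 2 + gy ** 2) + eps           # |grad I|
    nx, ny = gx / mag, gy / mag                      # normalized gradient field
    Q = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)   # divergence
    r, c = np.unravel_index(np.abs(Q).argmax(), Q.shape)    # max-curvature point
    d = np.array([-gy[r, c], gx[r, c]])              # perpendicular to the gradient
    norm = np.linalg.norm(d)
    return (r, c), d / norm if norm > 0 else d
```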
And S143, judging whether the difference degree between the to-be-selected matching image block corresponding to the current image block and the current image block is larger than a preset difference threshold value by using the similarity discrimination model.
In this step, when the feature region corresponds to an image structure feature, the geometric image blocks in the feature region are structure image blocks, and the similarity discrimination model comprises a structural similarity model. The specific step of judging, with the similarity discrimination model, whether the difference degree between the to-be-selected matching image block and the geometric image block is greater than the preset difference threshold comprises:
judging whether the included angle between the first structure vector of the to-be-selected matching image block and the second structure vector of the geometric image block is larger than a preset difference threshold, calculated as formula (5):

\arccos\left( \frac{\mathbf{v}(x, y) \cdot \mathbf{v}(x + \Delta x,\, y + \Delta y)}{|\mathbf{v}(x, y)| \, |\mathbf{v}(x + \Delta x,\, y + \Delta y)|} \right) > \varepsilon \quad (5)

wherein \mathbf{v}(x, y) is the second structure vector of the geometric image block centered at (x, y), \mathbf{v}(x + \Delta x, y + \Delta y) is the first structure vector of the to-be-selected matching image block centered at (x + \Delta x, y + \Delta y), \varepsilon is the preset difference threshold, and \Delta x and \Delta y are the abscissa offset and the ordinate offset.
When the feature region corresponds to an image texture feature, the geometric image blocks in the feature region are texture image blocks, and the similarity discrimination model comprises a texture similarity model. The specific step comprises:
judging whether the difference between the first average gray value of the to-be-selected matching image block and the second average gray value of the geometric image block is larger than a preset difference threshold, calculated as formula (6):

\left| \bar{g}\big(T(x, y)\big) - \bar{g}\big(T(x + \Delta x,\, y + \Delta y)\big) \right| > \varepsilon \quad (6)

wherein T(x, y) is the geometric image block centered at (x, y), T(x + \Delta x, y + \Delta y) is the to-be-selected matching image block centered at (x + \Delta x, y + \Delta y), \varepsilon is the preset difference threshold, \Delta x and \Delta y are the abscissa offset and the ordinate offset, and \bar{g}(\cdot) is the average gray value.
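Both similarity discrimination models reduce to a few lines. The sketch below implements formulas (5) and (6); the threshold values are borrowed from the example quoted later in the text (angle < 0.5, gray difference < 0.04) and should be treated as assumptions:

```python
import numpy as np

def texture_difference(tile_a, tile_b):
    """Formula (6): absolute difference of the average gray values of the
    geometric image block and the to-be-selected matching image block."""
    return abs(float(tile_a.mean()) - float(tile_b.mean()))

def structure_angle(vec_a, vec_b, eps=1e-8):
    """Formula (5): included angle (radians) between the structure vectors."""
    cos = np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b) + eps)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def is_good_match(tile_a, tile_b, vec_a=None, vec_b=None,
                  texture_eps=0.04, structure_eps=0.5):
    """Apply the similarity discrimination model of S143: structure image
    blocks compare structure vectors, texture image blocks compare average
    gray values (assumed normalized to [0, 1])."""
    if vec_a is not None and vec_b is not None:
        return structure_angle(vec_a, vec_b) < structure_eps
    return texture_difference(tile_a, tile_b) < texture_eps
```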
In this embodiment, regarding the check result of S143: if the difference degree is greater than the preset difference threshold, the to-be-selected matching image block is verified not to be the target matching image block, and the search needs to proceed in a preset manner:
searching a target matching image block corresponding to the current image block in the characteristic region where the current image block is located according to the texture propagation direction, wherein the searching comprises the following steps:
taking each geometric image block in the texture propagation direction as a first to-be-selected matching image block of the current image block, and repeatedly using the similarity discrimination model to judge whether the first difference degree between each first to-be-selected matching image block and the current image block is greater than the preset difference threshold, until a preset ending condition is met and the target matching image block corresponding to the current image block is determined.
Further, after the target matching image block corresponding to the current image block is determined, according to the first offset value between the current image block and the target matching image block corresponding to the current image block, determining a second offset value of the first geometric image block adjacent to the current image block, wherein the second offset value is used for representing the relative position of the image block to be selected and the first geometric image block which are redistributed to the first geometric image block;
and taking the first geometric figure block as a new current figure block, and continuously searching a target matching figure block corresponding to the current figure block in the characteristic region where the current figure block is located according to the texture propagation direction until the optimization of the target matching figure blocks corresponding to all the geometric figure blocks is completed.
Optionally, the preset end condition includes:
calculating first difference degrees of all the first to-be-selected matching image blocks in the characteristic region where the current image block is located in the texture propagation direction, and taking the first to-be-selected matching image block corresponding to the minimum first difference degree as a target matching image block corresponding to the current image block; or, as long as the first difference degree is smaller than or equal to the preset difference threshold, the judgment is stopped, and the corresponding first to-be-selected matching image block is taken as the target matching image block corresponding to the current image block.
If the difference is smaller than or equal to the preset difference threshold value, determining that the image blocks to be matched are target matched image blocks of the geometric image blocks, and transmitting the relative position relation between the geometric image blocks and the target matched image blocks to each adjacent image block adjacent to the geometric image blocks; and taking the adjacent image blocks along the texture propagation direction as the current geometric image block, and verifying whether the difference degree of the image blocks to be matched corresponding to the current geometric image block is greater than a preset difference threshold value or not by circularly utilizing the similarity discrimination model until the edge of the characteristic region is reached.
To facilitate understanding of the above guided search for subsequent target matching image blocks based on the check result of S143, the following description refers to fig. 3. In short, if the judgment at S143 is yes, S144 is performed; if no, S147 is performed.
And S144, taking each geometric picture block in the texture propagation direction as each first to-be-selected matching picture block of the current picture block.
In this embodiment, the geometric tiles adjacent to the current tile are sequentially used as the first candidate matching tiles along the texture propagation direction determined by equation (4).
The offset value between the center pixel point of the first to-be-selected matching image block and the center pixel point of the current image block can be expressed by formula (7):
V_n(i) = V(i) + n \cdot D, \quad n = 1, 2, 3, \ldots \quad (7)

wherein T_i denotes the geometric image block centered at i = (x, y), V(i) denotes the offset value of that geometric image block, D is the texture propagation direction of the feature region where the geometric image block is located, and n = 1, 2, 3, … increases until the offset value reaches the edge position of the feature region.
S145, judging whether the first difference degree between each first to-be-selected matching image block and the current image block is larger than a preset difference threshold value by using the similarity discrimination model.
In this step, if yes, S146 is executed; if not, the next step depends on the preset ending condition. This embodiment assumes the preset ending condition is: stop the judgment as soon as a first difference degree is smaller than or equal to the preset difference threshold, and take the corresponding first to-be-selected matching image block as the target matching image block corresponding to the current image block, i.e., execute S148.
And S146, judging whether the current first to-be-selected matching image block is the last one.
In this step, if yes, S147 is executed, otherwise, S145 is returned to.
And S147, taking the first to-be-selected matching image block with the minimum first difference as a target matching image block corresponding to the current image block.
And S148, taking the current first to-be-selected matching image block as a target matching image block corresponding to the current image block, and determining the offset value of the current image block according to the position of the target matching image block.
And S149, determining the offset value of each adjacent image block around the current image block according to the offset value of the current image block.
And S1410, taking the adjacent image block as a current image block.
After S1410 is executed, the process returns to S144 until all geometric image blocks have been traversed; this process may be understood as an optimization of the initial offset values assigned in S141.
For ease of understanding, the above process is briefly described below:
after the initialization of the offset values of the geometric image blocks is completed in S141, each geometric image block is assigned an offset value, i.e., a starting offset value, and the relative position of the to-be-selected matching image block can be found from this starting offset value. Then, similarity judgment is performed on each to-be-selected matching image block using formula (5) or formula (6). If the judgment result is a good matching block (for example, a matching block may be defined as good when the included angle of the structure blocks is less than 0.5 or the difference value of the texture blocks is less than 0.04), the matching information, i.e., the offset value, is propagated to the neighborhood blocks (that is, 1 is added to the offset value in the direction of the adjacent block); after obtaining the matching information, each neighborhood block searches along the texture propagation direction for a better to-be-selected matching block. If a better matching block is found, the offset value of the neighborhood block is updated and the updated matching information is propagated to the other neighborhood blocks adjacent to it; if no better matching block is found, the received matching information is directly propagated to the other adjacent geometric image blocks. If the judgment result is a bad matching block, a search is performed directly in the texture propagation direction to find a good matching block, after which the matching information is propagated to the other adjacent geometric image blocks. The above steps are repeated until convergence, yielding the optimized image offset mapping, i.e., the relative position relationship between each to-be-selected restoration image block in the to-be-selected restoration image block set and each geometric image block.
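For ease of understanding only, the whole iteration can be condensed into the following sketch, assuming one stored offset per image block and treating the similarity judgment (formula (5) or (6)) and the guided search of S144-S148 as injected callables; all names are illustrative:

```python
def optimize_offsets(tiles, offsets, threshold, direction, neighbors,
                     dissimilarity, search_along_direction, max_iters=10):
    """PatchMatch-style refinement of the randomly initialized offsets:
    good matches propagate their offset to neighbor blocks, bad matches
    trigger a guided search along the texture propagation direction.

    offsets: dict mapping each image block to its (dx, dy) offset.
    neighbors(t): yields (neighbor_block, (sx, sy)) pairs, (sx, sy) being
                  the unit step from t toward that neighbor.
    """
    for _ in range(max_iters):
        changed = False
        for t in tiles:
            if dissimilarity(t, offsets[t]) <= threshold:
                # Good match: propagate matching info to neighborhood
                # blocks, adding 1 in the direction of the adjacent block.
                for nb, (sx, sy) in neighbors(t):
                    dx, dy = offsets[t]
                    inherited = (dx + sx, dy + sy)
                    if dissimilarity(nb, inherited) < dissimilarity(nb, offsets[nb]):
                        offsets[nb] = inherited   # better candidate found
                        changed = True
            else:
                # Bad match: search directly along the texture direction.
                better = search_along_direction(t, direction, threshold)
                if better is not None:
                    offsets[t] = better
                    changed = True
        if not changed:   # convergence: optimized offset mapping obtained
            break
    return offsets
```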
In this way, the image texture features and image structure features extracted from the three-dimensional point cloud data serve as image prior knowledge: with the feature regions and the texture propagation direction as constraint conditions, the offset mapping of each geometric image block is constrained, which improves the matching precision of the target matching image blocks, provides more accurate repair information for subsequent image restoration, and thus improves the quality of image restoration.
Fig. 4 is a schematic structural diagram of a data processing apparatus suitable for image restoration according to an embodiment of the present application. The data processing apparatus 400 suitable for image restoration may be implemented by software, hardware, or a combination of both.
As shown in fig. 4, the data processing apparatus 400 adapted to image restoration includes:
an obtaining module 401, configured to obtain an image to be restored and three-dimensional point cloud data, where a planar area corresponding to the image to be restored is included in a spatial area corresponding to the three-dimensional point cloud data;
a processing module 402 for:
extracting image texture features and image structure features formed by projection of three-dimensional point cloud data in a plane area by using a feature extraction model;
according to the image texture features and the image structure features, dividing an image to be restored into a plurality of feature regions, wherein each feature region corresponds to one image texture feature or one image structure feature, and each feature region comprises a plurality of geometric image blocks;
searching target matching image blocks matched with the textures and/or structures of the geometric image blocks in each characteristic region by using an iterative optimization matching model, and combining all the target matching image blocks into a to-be-selected restoration image block set;
the restoring module 403 is configured to determine a to-be-restored area in the to-be-restored image according to the image structure characteristics, and call, through a preset restoring model, a to-be-selected restoration block in the to-be-selected restoration block set to perform image restoration on the to-be-restored area, so as to obtain a restored target image.
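For illustration, the cooperation of the three modules can be sketched as follows; class and method names are illustrative stand-ins, and the feature extraction model, iterative optimization matching model, and preset restoration model are injected, since this application does not fix their implementations:

```python
class ImageRestorationPipeline:
    """Sketch of the apparatus of fig. 4: obtain -> process -> restore."""

    def __init__(self, feature_model, matching_model, restoration_model):
        self.feature_model = feature_model          # feature extraction model
        self.matching_model = matching_model        # iterative optimization matching model
        self.restoration_model = restoration_model  # preset restoration model

    def run(self, image, point_cloud):
        # Obtaining module 401: the image plane is assumed to lie inside
        # the spatial area covered by the point cloud.
        texture, structure = self.feature_model.extract(point_cloud, image)
        # Processing module 402: segment into feature regions, then find
        # target matching blocks; their union is the candidate set.
        regions = self.feature_model.segment(image, texture, structure)
        candidates = [blk for r in regions
                      for blk in self.matching_model.match(r)]
        # Restoring module 403: locate the to-be-restored area from the
        # structure features and repair it from the candidate set.
        area = self.restoration_model.locate(image, structure)
        return self.restoration_model.restore(image, area, candidates)
```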
In one possible design, the processing module 402 is configured to:
randomly distributing initial offset values for each geometric figure block in each characteristic region, and determining a to-be-selected matching figure block according to the initial offset values and the coordinates of the geometric figure blocks, wherein the to-be-selected matching figure block and the geometric figure blocks are in the same characteristic region;
selecting a geometric figure block as a current figure block, and determining the texture propagation direction of a characteristic region in which the current figure block is located according to the image texture characteristics corresponding to the characteristic region in which the current figure block is located;
judging whether the difference degree between the to-be-selected matching image block corresponding to the current image block and the current image block is greater than a preset difference threshold value by using a similarity discrimination model;
if so, searching a target matching image block corresponding to the current image block in the characteristic region where the current image block is located according to the texture propagation direction, wherein the difference degree between the target matching image block corresponding to the current image block and the current image block is smaller than or equal to a preset difference threshold value.
Optionally, the processing module 402 is further configured to:
if not, determining that the image blocks to be selected are target matching image blocks of the geometric image blocks, and transmitting the relative position relation between the geometric image blocks and the target matching image blocks to each adjacent image block adjacent to the geometric image blocks;
and taking the adjacent image blocks along the texture propagation direction as the current geometric image block, and verifying whether the difference degree of the image blocks to be matched corresponding to the current geometric image block is greater than a preset difference threshold value or not by circularly utilizing the similarity discrimination model until the edge of the characteristic region is reached.
In one possible design, the processing module 402 is configured to:
calculating the maximum curvature point on the boundary of the characteristic region where the current image block is located by using a preset curvature model:
$$Q = \operatorname{div}\!\left(\frac{\nabla I}{\lvert \nabla I \rvert}\right)$$

wherein $Q$ is the curvature, $\nabla I$ is the gradient of the feature region, $\lvert \nabla I \rvert$ is the modulus of the gradient, and $\operatorname{div}$ is the divergence;
taking the direction pointing to the inside of the characteristic region where the current image block is located on the isolux line corresponding to the maximum curvature point as the texture propagation direction:
$$\vec{D} = \nabla^{\perp} I \big|_{p_{\max}}$$

wherein $\nabla^{\perp} I \big|_{p_{\max}}$ is the isolux-line (isophote) direction at the point of maximum curvature, taken perpendicular to $\nabla I \big|_{p_{\max}}$, the gradient at the point of maximum curvature.
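A numerical sketch of this step, assuming the feature region is given as a 2-D gray-level numpy array; for simplicity the maximum-curvature point is taken over the whole region rather than strictly on its boundary, and the sign choice that makes the direction point into the region is omitted:

```python
import numpy as np

def texture_propagation_direction(region):
    """Compute Q = div(grad I / |grad I|) by finite differences, locate the
    maximum-curvature point, and return the isolux (isophote) direction
    there, i.e. the unit vector perpendicular to the gradient."""
    gy, gx = np.gradient(region.astype(float))   # d/dy (rows), d/dx (cols)
    norm = np.hypot(gx, gy) + 1e-8               # |grad I|, regularized
    d_fy_dy, _ = np.gradient(gy / norm)
    _, d_fx_dx = np.gradient(gx / norm)
    q = d_fx_dx + d_fy_dy                        # divergence = curvature Q
    iy, ix = np.unravel_index(np.argmax(np.abs(q)), q.shape)
    d = np.array([-gy[iy, ix], gx[iy, ix]])      # perpendicular to gradient
    return d / (np.linalg.norm(d) + 1e-8)        # unit (dx, dy) direction
```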
In one possible design, the processing module 402 is configured to:
taking each geometric block in the texture propagation direction as each first to-be-selected matching block of the current block:
and repeating the steps of judging whether the first difference degree between each first to-be-selected matching image block and the current image block is greater than a preset difference threshold value by using the similarity discrimination model for multiple times until a preset ending condition is met, and determining a target matching image block corresponding to the current image block.
Optionally, the preset ending condition includes:
calculating first difference degrees of all first to-be-selected matching image blocks in the characteristic region where the current image block is located in the texture propagation direction, and taking the first to-be-selected matching image block corresponding to the minimum first difference degree as a target matching image block corresponding to the current image block;
or,
and stopping judging as long as the first difference degree is smaller than or equal to the preset difference threshold, and taking the corresponding first to-be-selected matching image block as a target matching image block corresponding to the current image block.
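The two ending conditions can be sketched as follows (a minimal illustration; the fallback in the second variant follows S146-S147, where the minimum-difference candidate is taken if no candidate passes the threshold):

```python
def best_match_exhaustive(candidates, dissimilarity):
    """Ending condition 1: evaluate every first to-be-selected matching
    block along the texture direction and keep the smallest difference."""
    return min(candidates, key=dissimilarity)

def best_match_first_hit(candidates, dissimilarity, threshold):
    """Ending condition 2: stop as soon as a candidate's first difference
    degree is at or below the preset threshold."""
    best, best_d = None, float("inf")
    for c in candidates:
        d = dissimilarity(c)
        if d <= threshold:
            return c                  # early stop (S148)
        if d < best_d:
            best, best_d = c, d       # track the minimum seen so far
    return best                       # no candidate passed (S147 fallback)
```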
In one possible design, the processing module 402 is further configured to:
determining a second offset value of the first geometric picture block adjacent to the current picture block according to the first offset value between the current picture block and a target matching picture block corresponding to the current picture block, wherein the second offset value is used for representing the relative position of a to-be-selected matching picture block redistributed to the first geometric picture block and the first geometric picture block;
and taking the first geometric figure block as a new current figure block, and continuously searching a target matching figure block corresponding to the current figure block in the characteristic region where the current figure block is located according to the texture propagation direction until the optimization of the target matching figure blocks corresponding to all the geometric figure blocks is completed.
In one possible design, when the feature region corresponds to an image texture feature, the geometric blocks in the feature region are texture blocks, and the similarity discrimination model includes: a texture similarity model;
a processing module 402 configured to:
judging whether the difference value between the first average gray value of the to-be-selected matching image block and the second average gray value of the geometric image block is larger than a preset difference threshold value:
$$\left\lvert\, \overline{g}\big(\phi_{(x,y)}\big) - \overline{g}\big(\phi_{(x+\Delta x,\, y+\Delta y)}\big) \,\right\rvert > \tau$$

wherein $\phi_{(x,y)}$ is the geometric image block centered at $(x, y)$, $\phi_{(x+\Delta x,\, y+\Delta y)}$ is the to-be-selected matching image block centered at $(x+\Delta x,\, y+\Delta y)$, $\tau$ is the preset difference threshold, $\Delta x$ and $\Delta y$ are the abscissa offset and the ordinate offset, and $\overline{g}(\cdot)$ is the average gray value.
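An illustrative sketch of the texture similarity model, assuming square blocks of side 2*half+1 in a single-channel image given as a 2-D numpy array; function and parameter names are not from this application:

```python
def texture_match_ok(image, center, offset, half, tau):
    """Return True when the absolute difference between the average gray
    value of the geometric block centered at (x, y) and that of the
    candidate block centered at (x + dx, y + dy) does not exceed the
    preset difference threshold tau."""
    (x, y), (dx, dy) = center, offset

    def mean_gray(cx, cy):
        # image is indexed [row, col] = [y, x]
        return float(image[cy - half:cy + half + 1,
                           cx - half:cx + half + 1].mean())

    return abs(mean_gray(x, y) - mean_gray(x + dx, y + dy)) <= tau
```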
In one possible design, when the feature region corresponds to an image structure feature, the geometric blocks in the feature region are structure blocks, and the similarity judging model includes: a structural similarity model;
a processing module 402 for:
judging whether an included angle between the first structure vector of the to-be-selected matching image block and the second structure vector of the geometric image block is larger than a preset difference threshold value:
$$\angle\big(\vec{v}_{(x,y)},\ \vec{v}_{(x+\Delta x,\, y+\Delta y)}\big) > \tau$$

wherein $\vec{v}_{(x,y)}$ is the second structure vector of the geometric image block centered at $(x, y)$, $\vec{v}_{(x+\Delta x,\, y+\Delta y)}$ is the first structure vector of the to-be-selected matching image block centered at $(x+\Delta x,\, y+\Delta y)$, $\tau$ is the preset difference threshold, and $\Delta x$ and $\Delta y$ are the abscissa offset and the ordinate offset.
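And a corresponding sketch of the structure similarity model; the angle is computed in radians here, which is an assumption, since the unit of the threshold is not fixed in this application:

```python
import numpy as np

def structure_match_ok(v_geometric, v_candidate, tau):
    """Return True when the included angle between the geometric block's
    structure vector and the candidate block's structure vector does not
    exceed the preset difference threshold tau (radians assumed)."""
    v1 = np.asarray(v_geometric, dtype=float)
    v2 = np.asarray(v_candidate, dtype=float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
    return np.arccos(np.clip(cos_a, -1.0, 1.0)) <= tau
```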
It should be noted that the apparatus provided in the embodiment shown in fig. 4 may execute the method provided in any of the above method embodiments, and the specific implementation principle, technical features, term explanations and technical effects thereof are similar and will not be described herein again.
Fig. 5 is a schematic structural diagram of a data processing device suitable for image restoration according to an embodiment of the present application. As shown in fig. 5, the electronic device 500 may include: at least one processor 501 and a memory 502. Fig. 5 takes one processor as an example.
The memory 502 is used for storing programs. In particular, the program may include program code including computer operating instructions.
The memory 502 may comprise a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk memory.
The processor 501 is used to execute computer-executable instructions stored in the memory 502 to implement the methods in the method embodiments in the above embodiments.
The processor 501 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
Alternatively, the memory 502 may be separate or integrated with the processor 501. When the memory 502 is a device independent from the processor 501, the electronic device 500 may further include:
a bus 503 for connecting the processor 501 and the memory 502. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be classified into an address bus, a data bus, a control bus, and the like; although described as a single bus, this does not mean there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 502 and the processor 501 are integrated into a chip, the memory 502 and the processor 501 may complete communication through an internal interface.
An embodiment of the present application further provides a computer-readable storage medium, which may include various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk. In particular, the computer-readable storage medium stores program instructions for implementing the methods in the above method embodiments.
An embodiment of the present application further provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the method in the foregoing method embodiments.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A data processing method adapted for image restoration, comprising:
acquiring an image to be recovered and three-dimensional point cloud data, wherein a plane area corresponding to the image to be recovered is contained in a space area corresponding to the three-dimensional point cloud data;
extracting image texture features and image structure features formed by projection of the three-dimensional point cloud data in the plane area by using a feature extraction model;
according to the image texture features and the image structure features, segmenting the image to be restored into a plurality of feature regions, wherein each feature region corresponds to one image texture feature or one image structure feature, and each feature region comprises a plurality of geometric image blocks;
searching target matching image blocks matched with the textures and/or structures of the geometric image blocks in each characteristic region by using an iterative optimization matching model, and combining all the target matching image blocks into a to-be-selected restoration image block set;
and determining a region to be restored in the image to be restored according to the image structure characteristics, and calling the restoration blocks to be selected in the restoration block set to restore the image of the region to be restored through a preset restoration model to obtain a restored target image.
2. The data processing method suitable for image restoration according to claim 1, wherein searching for a target matching tile matching the texture and/or structure of each geometric tile in each feature region by using an iterative optimization matching model comprises:
randomly distributing an initial offset value to each geometric figure block in each characteristic region, and determining a matched figure block to be selected according to the initial offset value and the coordinates of the geometric figure block, wherein the matched figure block to be selected and the geometric figure block are in the same characteristic region;
selecting any one geometric image block as a current image block, and determining the texture propagation direction of the feature region in which the current image block is located according to the image texture feature corresponding to the feature region in which the current image block is located;
judging whether the difference degree between the to-be-selected matching image block corresponding to the current image block and the current image block is greater than a preset difference threshold value or not by using a similarity discrimination model;
if so, searching a target matching image block corresponding to the current image block in a characteristic region where the current image block is located according to the texture propagation direction, wherein the difference degree between the target matching image block corresponding to the current image block and the current image block is smaller than or equal to the preset difference threshold value.
3. The data processing method suitable for image restoration according to claim 2, wherein after determining whether the degree of difference between the to-be-selected matching tile block and the geometric tile block is greater than a preset difference threshold value by using a similarity discrimination model, the method further comprises:
if not, determining the image blocks to be selected as target matching image blocks of the geometric image blocks, and transmitting the relative position relation between the geometric image blocks and the target matching image blocks to each adjacent image block adjacent to the geometric image blocks;
and taking the adjacent image blocks along the texture propagation direction as a current geometric image block, and verifying whether the difference degree of the image blocks to be matched corresponding to the current geometric image block is larger than a preset difference threshold value or not by circularly utilizing the similarity discrimination model until the edge of the characteristic region is reached.
4. The data processing method suitable for image restoration according to claim 2, wherein the determining the texture propagation direction of the feature area where the current image block is located according to the image texture feature corresponding to the feature area where the current image block is located includes:
calculating the maximum curvature point on the boundary of the characteristic region where the current image block is located by using a preset curvature model:
$$Q = \operatorname{div}\!\left(\frac{\nabla I}{\lvert \nabla I \rvert}\right)$$

wherein $Q$ is the curvature, $\nabla I$ is the gradient of the feature region, $\lvert \nabla I \rvert$ is the modulus of the gradient, and $\operatorname{div}$ is the divergence;
taking the direction pointing to the inside of the characteristic region where the current image block is located on the isolux line corresponding to the maximum curvature point as the texture propagation direction:
$$\vec{D} = \nabla^{\perp} I \big|_{p_{\max}}$$

wherein $\nabla^{\perp} I \big|_{p_{\max}}$ is the isolux-line (isophote) direction at the point of maximum curvature, taken perpendicular to $\nabla I \big|_{p_{\max}}$, the gradient at the point of maximum curvature.
5. The data processing method suitable for image restoration according to claim 2, wherein the searching for the target matching tile corresponding to the current tile in the feature region where the current tile is located according to the texture propagation direction comprises:
taking each geometric block in the texture propagation direction as each first to-be-selected matching block of the current block:
and repeatedly judging whether the first difference degree of each first to-be-selected matching image block and the current image block is greater than the preset difference threshold value by using the similarity discrimination model for multiple times until a preset finishing condition is met, and determining a target matching image block corresponding to the current image block.
6. The data processing method suitable for image restoration according to claim 5, wherein the preset end condition comprises:
calculating the first difference degree of all the first to-be-selected matching image blocks in the characteristic region where the current image block is located in the texture propagation direction, and taking the first to-be-selected matching image block corresponding to the minimum first difference degree as a target matching image block corresponding to the current image block;
or,
and stopping judging as long as the first difference degree is smaller than or equal to the preset difference threshold value, and taking the corresponding first to-be-selected matching image block as a target matching image block corresponding to the current image block.
7. The data processing method suitable for image restoration according to any one of claims 2-6, wherein after determining the target matching tile corresponding to the current tile, further comprising:
determining a second offset value of a first geometric block adjacent to the current block according to a first offset value between the current block and a target matching block corresponding to the current block, wherein the second offset value is used for representing the relative position of a to-be-selected matching block reassigned to the first geometric block and the first geometric block;
and taking the first geometric picture block as a new current picture block, and continuously searching a target matching picture block corresponding to the current picture block in a characteristic region where the current picture block is located according to the texture propagation direction until the optimization of the target matching picture blocks corresponding to all the geometric picture blocks is completed.
8. The data processing method suitable for image restoration according to any one of claims 2 to 6, wherein when the feature region corresponds to one of the image texture features, the geometric patches in the feature region are texture patches, and the similarity discrimination model includes: a texture similarity model;
the specific steps of judging whether the difference degree between the matching image blocks to be selected and the geometric image blocks is greater than a preset difference threshold value by using the similarity discrimination model comprise:
judging whether the difference value between the first average gray value of the to-be-selected matching image block and the second average gray value of the geometric image block is larger than the preset difference threshold value:
$$\left\lvert\, \overline{g}\big(\phi_{(x,y)}\big) - \overline{g}\big(\phi_{(x+\Delta x,\, y+\Delta y)}\big) \,\right\rvert > \tau$$

wherein $\phi_{(x,y)}$ is the geometric image block centered at $(x, y)$, $\phi_{(x+\Delta x,\, y+\Delta y)}$ is the to-be-selected matching image block centered at $(x+\Delta x,\, y+\Delta y)$, $\tau$ is the preset difference threshold, $\Delta x$ and $\Delta y$ are the abscissa offset and the ordinate offset, and $\overline{g}(\cdot)$ is the average gray value.
9. The data processing method suitable for image restoration according to any one of claims 2-6, wherein when the feature region corresponds to one of the image structural features, the geometric patches in the feature region are structural patches, and the similarity discrimination model includes: a structural similarity model;
the specific steps of judging whether the difference degree between the matching image blocks to be selected and the geometric image blocks is greater than a preset difference threshold value by using the similarity discrimination model comprise the following steps:
judging whether an included angle between the first structure vector of the to-be-selected matching image block and the second structure vector of the geometric image block is larger than the preset difference threshold value:
$$\angle\big(\vec{v}_{(x,y)},\ \vec{v}_{(x+\Delta x,\, y+\Delta y)}\big) > \tau$$

wherein $\vec{v}_{(x,y)}$ is the second structure vector of the geometric image block centered at $(x, y)$, $\vec{v}_{(x+\Delta x,\, y+\Delta y)}$ is the first structure vector of the to-be-selected matching image block centered at $(x+\Delta x,\, y+\Delta y)$, $\tau$ is the preset difference threshold, and $\Delta x$ and $\Delta y$ are the abscissa offset and the ordinate offset.
10. A data processing apparatus adapted for image restoration, comprising: a processor and a memory;
the memory for storing a computer program for the processor;
the processor is configured to perform the data processing method adapted for image restoration of any one of claims 1 to 9 via execution of the computer program.
CN202211495957.9A 2022-11-28 2022-11-28 Data processing method and equipment suitable for image recovery Active CN115546071B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211495957.9A CN115546071B (en) 2022-11-28 2022-11-28 Data processing method and equipment suitable for image recovery

Publications (2)

Publication Number Publication Date
CN115546071A true CN115546071A (en) 2022-12-30
CN115546071B CN115546071B (en) 2023-03-31

Family

ID=84721597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211495957.9A Active CN115546071B (en) 2022-11-28 2022-11-28 Data processing method and equipment suitable for image recovery

Country Status (1)

Country Link
CN (1) CN115546071B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118260846A (en) * 2024-05-29 2024-06-28 江西方堂设计工程有限公司 Digital decoration design method and system based on artificial intelligence

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108090960A (en) * 2017-12-25 2018-05-29 北京航空航天大学 A kind of Object reconstruction method based on geometrical constraint
CN110246100A (en) * 2019-06-11 2019-09-17 山东师范大学 A kind of image repair method and system based on angle perception Block- matching
CN110349251A (en) * 2019-06-28 2019-10-18 深圳数位传媒科技有限公司 A kind of three-dimensional rebuilding method and device based on binocular camera
CN112070696A (en) * 2020-09-07 2020-12-11 上海大学 Image restoration method and system based on texture and structure separation, and terminal
CN113379815A (en) * 2021-06-25 2021-09-10 中德(珠海)人工智能研究院有限公司 Three-dimensional reconstruction method and device based on RGB camera and laser sensor and server
CN115265407A (en) * 2022-07-27 2022-11-01 中国石油大学(华东) Metal material three-dimensional shape measuring method based on stereoscopic vision and model recovery

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHI Jinlong et al., "Three-dimensional reconstruction of dynamically deforming surfaces based on spatio-temporally correlated block matching", Journal of Jiangsu University of Science and Technology (Natural Science Edition) *

Also Published As

Publication number Publication date
CN115546071B (en) 2023-03-31

Similar Documents

Publication Publication Date Title
Shim et al. Road damage detection using super-resolution and semi-supervised learning with generative adversarial network
CN113468967B (en) Attention mechanism-based lane line detection method, attention mechanism-based lane line detection device, attention mechanism-based lane line detection equipment and attention mechanism-based lane line detection medium
CN111209770B (en) Lane line identification method and device
CN111709416B (en) License plate positioning method, device, system and storage medium
CN112232349A (en) Model training method, image segmentation method and device
CN110210451B (en) Zebra crossing detection method
CN110491132B (en) Vehicle illegal parking detection method and device based on video frame picture analysis
Azimi et al. Eagle: Large-scale vehicle detection dataset in real-world scenarios using aerial imagery
CN108399424B (en) Point cloud classification method, intelligent terminal and storage medium
CN110705342A (en) Lane line segmentation detection method and device
CN110544211A (en) method, system, terminal and storage medium for detecting lens attachment
CN110619674B (en) Three-dimensional augmented reality equipment and method for accident and alarm scene restoration
CN105321189A (en) Complex environment target tracking method based on continuous adaptive mean shift multi-feature fusion
CN115546071B (en) Data processing method and equipment suitable for image recovery
CN111652033A (en) Lane line detection method based on OpenCV
CN107220632B (en) Road surface image segmentation method based on normal characteristic
FAN et al. Robust lane detection and tracking based on machine vision
Zhu et al. Super-resolving commercial satellite imagery using realistic training data
CN114663352A (en) High-precision detection method and system for defects of power transmission line and storage medium
Zhang et al. Lateral distance detection model based on convolutional neural network
CN110544232A (en) detection system, terminal and storage medium for lens attached object
CN116092035A (en) Lane line detection method, lane line detection device, computer equipment and storage medium
CN115880322A (en) Point cloud road landmark line extraction method based on gradient complexity
CN115965531A (en) Model training method, image generation method, device, equipment and storage medium
CN115631108A (en) RGBD-based image defogging method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant