CN117152398A - Three-dimensional image blurring method, device, equipment and storage medium - Google Patents

Three-dimensional image blurring method, device, equipment and storage medium

Info

Publication number
CN117152398A
Authority
CN
China
Prior art keywords
image data
target
blurring
image
piece
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311415265.3A
Other languages
Chinese (zh)
Other versions
CN117152398B (en)
Inventor
张雪兵 (Zhang Xuebing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Euclideon Technology Co ltd
Original Assignee
Shenzhen Euclideon Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Euclideon Technology Co ltd filed Critical Shenzhen Euclideon Technology Co ltd
Priority to CN202311415265.3A priority Critical patent/CN117152398B/en
Publication of CN117152398A publication Critical patent/CN117152398A/en
Application granted granted Critical
Publication of CN117152398B publication Critical patent/CN117152398B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 10/00: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A 10/40: Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of image processing and discloses a three-dimensional image blurring method, device, equipment, and storage medium for improving the accuracy of three-dimensional image blurring. The method comprises the following steps: acquiring target three-dimensional image data and performing space-based segmentation on the three-dimensional image data to obtain a plurality of pieces of segmented image data; performing spectral feature extraction on each piece of segmented image data to obtain a spectral feature set for each piece; performing morphological feature extraction on each piece of segmented image data to obtain a morphological feature set for each piece; performing initial blurring on the target three-dimensional image data to obtain an initial blurred image; performing salient target detection on the initial blurred image through the morphological feature sets to obtain a target detection result; and performing image target distribution analysis on the initial blurred image to obtain a target distribution result, then performing secondary blurring on the initial blurred image according to the target distribution result to obtain the target blurred image.

Description

Three-dimensional image blurring method, device, equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for three-dimensional image blurring.
Background
In recent years, the acquisition and processing of three-dimensional image data has become an important research direction in computer vision and image processing. Such data contain not only the information of conventional two-dimensional images but also the three-dimensional geometric information of objects in the scene, which makes them potentially useful in a wide variety of applications, such as remote sensing, medical image analysis, autonomous driving, and security monitoring.
In the prior art, spatial segmentation of image data is challenging when processing complex three-dimensional scenes. In particular, when multiple objects overlap or touch, segmentation errors occur and lead to inaccurate blurring effects. A second difficulty is the accuracy of target detection and distribution analysis: although salient target detection and distribution analysis have been proposed, in practice these techniques struggle to capture all important targets accurately, especially when targets share similar features or appear against complex backgrounds. As a result, the blurring effect cannot accurately highlight or suppress the intended targets.
Disclosure of Invention
The invention provides a three-dimensional image blurring method, a device, equipment and a storage medium, which are used for improving the accuracy of three-dimensional image blurring.
The first aspect of the present invention provides a three-dimensional image blurring method, which includes: acquiring target three-dimensional image data, and performing space-based segmentation on the three-dimensional image data to obtain a plurality of pieces of segmented image data;
performing spectral feature extraction on each piece of segmented image data to obtain a spectral feature set of each piece;
performing morphological feature extraction on each piece of segmented image data to obtain a morphological feature set of each piece;
performing initial blurring on the target three-dimensional image data through the spectral feature sets to obtain an initial blurred image;
performing salient target detection on the initial blurred image through the morphological feature sets to obtain a target detection result;
and performing image target distribution analysis on the initial blurred image based on the target detection result to obtain a target distribution result, and performing secondary blurring on the initial blurred image according to the target distribution result to obtain a target blurred image.
With reference to the first aspect, in a first implementation manner of the first aspect of the present invention, the acquiring target three-dimensional image data and performing space-based segmentation on the three-dimensional image data to obtain a plurality of pieces of segmented image data includes:
collecting the three-dimensional image data and performing grey-level conversion on the three-dimensional image data to obtain grey-level image data;
extracting brightness extreme points from the grey-level image data to obtain a brightness extreme point set;
constructing seed points from the brightness extreme point set through a preset watershed algorithm to obtain a plurality of target seed points;
performing flood-fill processing on the three-dimensional image data based on the plurality of target seed points to obtain a plurality of image segmentation regions;
and performing space-based segmentation of the three-dimensional image data based on the plurality of image segmentation regions to obtain the plurality of pieces of segmented image data.
With reference to the first aspect, in a second implementation manner of the first aspect of the present invention, the performing spectral feature extraction on each piece of segmented image data to obtain a spectral feature set of each piece of segmented image data includes:
performing thermal infrared band extraction on each piece of segmented image data to obtain thermal infrared band data of each piece;
performing pixel feature extraction on each piece of segmented image data to obtain pixel feature data of each piece;
performing regional feature fusion on the thermal infrared band data and the pixel feature data of each piece of segmented image data to obtain corresponding fused feature data;
and inputting the fused feature data into a preset spectral feature matching model for spectral feature matching to obtain the spectral feature set corresponding to each piece of segmented image data.
With reference to the first aspect, in a third implementation manner of the first aspect of the present invention, the performing pixel feature extraction on each piece of segmented image data to obtain pixel feature data of each piece of segmented image data includes:
performing color channel extraction on each piece of segmented image data to obtain color channel data of each piece;
performing pixel value traversal on each piece of segmented image data based on its color channel data to obtain pixel value data of each piece;
performing color space conversion on each piece of segmented image data based on its pixel value data to obtain a plurality of pieces of converted image data;
performing spectrum analysis on each piece of converted image data to obtain spectrum feature data of each piece;
and performing pixel feature extraction on each piece of segmented image data based on the spectrum feature data of the converted image data to obtain the pixel feature data of each piece.
With reference to the first aspect, in a fourth implementation manner of the first aspect of the present invention, the performing initial blurring on the target three-dimensional image data through the spectral feature set to obtain an initial blurred image includes:
performing semantic information filling on the spectral feature set to obtain a semantic feature set corresponding to the spectral feature set;
acquiring depth information of the target three-dimensional image data to obtain pixel depth information of the target three-dimensional image data;
performing blurring direction analysis on the semantic feature set and the pixel depth information to obtain a target blurring direction;
constructing a blur kernel for the target three-dimensional image data to obtain a target blur kernel;
and performing initial blurring on the target three-dimensional image data through the target blur kernel based on the target blurring direction to obtain the initial blurred image.
With reference to the first aspect, in a fifth implementation manner of the first aspect of the present invention, the performing salient target detection on the initial blurred image through the morphological feature set to obtain a target detection result includes:
performing target edge detection on the initial blurred image through the morphological feature set to obtain edge position information;
thresholding the initial blurred image based on the edge position information to obtain corresponding processed image data;
performing connected-region analysis on the processed image data to obtain a plurality of target connected regions corresponding to the processed image data;
filtering the plurality of target connected regions to obtain a plurality of filtered regions;
and performing salient target detection on the plurality of filtered regions to obtain the target detection result.
With reference to the first aspect, in a sixth implementation manner of the first aspect of the present invention, the performing image target distribution analysis on the initial blurred image based on the target detection result to obtain a target distribution result, and performing secondary blurring on the initial blurred image through the target distribution result to obtain a target blurred image, includes:
calibrating target positions in the initial blurred image according to the target detection result to obtain a calibrated position set;
performing image target distribution analysis on the initial blurred image through the calibrated position set to obtain the target distribution result;
calculating blurring parameters from the target distribution result to obtain target blurring parameters;
and performing secondary blurring on the initial blurred image through the target blurring parameters to obtain the target blurred image.
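As one hedged reading of this sixth implementation, the target distribution could be turned into per-pixel blurring parameters by measuring each pixel's distance from the detected targets and re-blurring the background accordingly. The distance-based parameter map and the sigma values below are illustrative assumptions, not the patented formula:

```python
import numpy as np
from scipy import ndimage

def secondary_blur(initial_image, target_mask, base_sigma=1.0, max_sigma=3.0):
    """Secondary-blurring sketch: blur strength grows with distance from
    the detected targets, and the targets themselves stay sharp."""
    # Target distribution -> blurring parameter: distance from any target.
    dist = ndimage.distance_transform_edt(~target_mask)
    weight = dist / max(dist.max(), 1.0)            # 0 at targets, 1 far away
    # Blend a lightly and a heavily blurred copy by that weight.
    light = ndimage.gaussian_filter(initial_image, base_sigma)
    heavy = ndimage.gaussian_filter(initial_image, max_sigma)
    out = (1.0 - weight) * light + weight * heavy
    out[target_mask] = initial_image[target_mask]   # keep targets unblurred
    return out
```

The sketch keeps the calibrated target regions untouched, which matches the stated goal of highlighting targets while suppressing background detail.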
The second aspect of the present invention provides a three-dimensional image blurring apparatus, including:
an acquisition module, configured to acquire target three-dimensional image data and perform space-based segmentation on the three-dimensional image data to obtain a plurality of pieces of segmented image data;
a first extraction module, configured to perform spectral feature extraction on each piece of segmented image data to obtain a spectral feature set of each piece;
a second extraction module, configured to perform morphological feature extraction on each piece of segmented image data to obtain a morphological feature set of each piece;
a processing module, configured to perform initial blurring on the target three-dimensional image data through the spectral feature sets to obtain an initial blurred image;
a detection module, configured to perform salient target detection on the initial blurred image through the morphological feature sets to obtain a target detection result;
and an analysis module, configured to perform image target distribution analysis on the initial blurred image based on the target detection result to obtain a target distribution result, and to perform secondary blurring on the initial blurred image through the target distribution result to obtain a target blurred image.
A third aspect of the present application provides a three-dimensional image blurring apparatus, comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the three-dimensional image blurring apparatus to perform the three-dimensional image blurring method described above.
A fourth aspect of the present application provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the three-dimensional image blurring method described above.
In the technical scheme provided by the application, target three-dimensional image data are collected, and space-based segmentation is performed on the three-dimensional image data to obtain a plurality of pieces of segmented image data; spectral feature extraction is performed on each piece of segmented image data to obtain a spectral feature set of each piece; morphological feature extraction is performed on each piece of segmented image data to obtain a morphological feature set of each piece; initial blurring is performed on the target three-dimensional image data through the spectral feature sets to obtain an initial blurred image; salient target detection is performed on the initial blurred image through the morphological feature sets to obtain a target detection result; and image target distribution analysis is performed on the initial blurred image based on the target detection result to obtain a target distribution result, and secondary blurring is performed on the initial blurred image through the target distribution result to obtain a target blurred image. In the scheme of the application, the three-dimensional image data is segmented into a plurality of sub-images by a spatial segmentation technique, so that subsequent processing can concentrate on a single target or region, which improves the accuracy and efficiency of processing. Spectral feature extraction on each piece of segmented image data yields color and texture information, which supports individualized blurring and target detection. Morphological feature extraction, covering edge and region information, helps to better understand the structure and content of the image during the blurring and detection stages.
Initial blurring is performed on the target three-dimensional image data through the spectral feature set to blur details in the image; this may be used to reduce noise, protect privacy, or create special effects. Salient target detection is performed on the initial blurred image using the morphological feature set, so that salient targets in the image can be identified; this has potential applications in target recognition and highlighting, such as target tracking or object detection. By analysing the distribution of targets in the image, their position, density, and spatial distribution can be better understood, which allows blurring and further image processing to be controlled more finely. Finally, secondary blurring is performed on the initial blurred image according to the target distribution result, optimizing the blurring effect according to how the targets are distributed in the image.
Drawings
FIG. 1 is a schematic diagram of a three-dimensional image blurring method according to an embodiment of the present invention;
FIG. 2 is a flowchart of spectral feature extraction for each piece of segmented image data according to an embodiment of the present invention;
FIG. 3 is a flowchart of pixel feature extraction for each piece of segmented image data according to an embodiment of the present invention;
FIG. 4 is a flowchart of initial blurring processing of the target three-dimensional image data through the spectral feature set according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a three-dimensional image blurring apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an embodiment of a three-dimensional image blurring device according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a three-dimensional image blurring method, a device, equipment and a storage medium, which are used for improving the accuracy of three-dimensional image blurring.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, the following describes a specific flow of an embodiment of the present invention, referring to fig. 1, and one embodiment of a three-dimensional image blurring method in an embodiment of the present invention includes:
s101, acquiring target three-dimensional image data, and performing space-based image data segmentation on the three-dimensional image data to obtain a plurality of segmented image data;
It is to be understood that the execution subject of the present invention may be a three-dimensional image blurring apparatus, and may also be a terminal or a server, which is not limited herein. The embodiment of the invention is described taking a server as the execution subject as an example.
Specifically, three-dimensional image data of the target is acquired, including three-dimensional scene data collected from various sensors (e.g., lidar, stereo cameras, or remote-sensing satellites). These data contain rich geometric and color information. Grey-level conversion is performed to convert the multi-channel three-dimensional image data into a single-channel grey-level image; this simplifies processing and reduces the data dimension. Brightness extreme points are then extracted to find the brightness extrema in the image. These extreme points may represent high points of objects (e.g., buildings) or low points (e.g., lakes or valleys), which helps to initially identify potential target areas. The server constructs seed points from the brightness extreme point set using a preset watershed algorithm to generate a plurality of target seed points; these seed points are candidates for potential targets or objects. Based on these target seed points, the server performs flood-fill processing. This step expands the seed points into regions in three-dimensional space, thereby segmenting the boundaries of targets and objects more accurately. Based on these image segmentation regions, the server performs space-based segmentation of the image data, dividing the three-dimensional image data more finely into a plurality of sub-regions, each representing a different geographic feature or object. For example, assume a set of satellite images contains various features of the earth's surface, including forests, waters, urban buildings, and farms. The server first converts the image data into grey-level images, and then finds obvious geographic feature points, such as lakes or mountain peaks, through brightness extreme point extraction. Through the watershed algorithm, the server generates seed points representing potential geographic features.
Next, using flood fill, the server expands these seed points into complete geographic areas. Based on the segmented areas, the server can accurately distinguish geographic features such as forests, water areas, and urban buildings, providing a reliable basis for further geographic information extraction and analysis.
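The segmentation pipeline described above (grey-level input, brightness extrema, watershed seeds, flood fill) can be sketched with SciPy's `ndimage` tools. This is an illustrative reconstruction rather than the patented implementation; in particular, restricting the extrema to above-average brightness is an added assumption:

```python
import numpy as np
from scipy import ndimage

def segment_3d_image(volume):
    """Space-based segmentation sketch: brightness extrema -> watershed seeds
    -> flood fill. `volume` is assumed to already be single-channel,
    i.e. the grey-level conversion step has been applied."""
    # 1. Brightness extreme points: local maxima of the grey-level data,
    #    restricted to above-average brightness to ignore flat background
    #    (an assumption not stated in the patent).
    footprint = np.ones((3,) * volume.ndim, dtype=bool)
    maxima = (volume == ndimage.maximum_filter(volume, footprint=footprint))
    maxima &= volume > volume.mean()
    # 2. Seed-point construction: each isolated extremum becomes one seed.
    seeds, n_seeds = ndimage.label(maxima)
    # 3. Flood-fill processing via the watershed transform; watershed_ift
    #    grows each seed over an unsigned-integer "elevation" map.
    elevation = (volume.max() - volume).astype(np.uint16)
    regions = ndimage.watershed_ift(elevation, seeds)
    return regions, n_seeds
```

Each label in `regions` then corresponds to one piece of segmented image data for the downstream feature-extraction steps.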
S102, performing spectral feature extraction on each piece of segmented image data to obtain a spectral feature set of each piece of segmented image data;
Specifically, thermal infrared band extraction is performed on each piece of segmented image data to obtain its thermal infrared band data. Thermal infrared information for each piece is captured using a thermal infrared sensor or device. These data provide information about the temperature distribution of different areas, helping to identify hot or cold spots in the image and thus better understand the thermal properties of objects. Pixel feature extraction is then performed on each piece of segmented image data to obtain its pixel feature data, which describes the pixel attributes of each piece. Pixel features include color, brightness, texture, shape, and the like; analysing them helps distinguish different types of objects and regions and captures detailed information in the image. The server performs regional feature fusion on the thermal infrared band data and the pixel feature data, integrating the different types of feature information into a comprehensive feature description of each piece of segmented image data. This can be achieved by statistically analysing the thermal infrared data and the pixel feature data. The fused feature data is then input into a preset spectral feature matching model for spectral feature matching. This model compares the fused features with a database of known spectral features to determine the spectral feature set of each piece of segmented image data. This step helps to identify and blur objects or features in the image.
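The "preset spectral feature matching model" is not specified in the patent. As a minimal stand-in, a nearest-neighbour match of fused feature vectors against a library of known spectral signatures could look like this (the Euclidean matching rule and all class names are illustrative assumptions):

```python
import numpy as np

def match_spectral_features(fused_vectors, library):
    """Assign each fused feature vector the label of its nearest reference
    signature. `library` maps label -> reference vector; this simple
    nearest-neighbour rule stands in for the patented matching model."""
    labels = []
    for vec in fused_vectors:
        best = min(library,
                   key=lambda name: np.linalg.norm(np.asarray(vec) -
                                                   np.asarray(library[name])))
        labels.append(best)
    return labels
```

In practice the reference library would hold one signature per material or land-cover class, as in the satellite-image example above.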
Color channel extraction is performed on each piece of segmented image data in order to separate its color channels. Typically an image contains three primary color channels, red, green, and blue, and sometimes other channels, such as an alpha channel for transparency. By extracting these color channels, the server obtains the image's color information. Pixel value traversal is then performed: the server traverses the pixel values in each color channel, which allows it to analyse the brightness and color information of each pixel in the image. Pixel value traversal typically covers the entire image to obtain detailed pixel-level data. Color space conversion converts the color channel data from the RGB color space to other color spaces, such as HSV (hue, saturation, value) or Lab (lightness, green-red, blue-yellow). This conversion helps to describe color characteristics better, especially under different illumination conditions. Spectrum analysis is then performed: the server analyses the converted color channel data to obtain spectrum feature data. Spectrum analysis can reveal periodic or frequency information in the image, which is very useful for capturing image textures and patterns. Finally, pixel feature extraction is performed: the server jointly considers the color channel data, the pixel value data, and the spectrum feature data to generate the final pixel feature data of each piece of segmented image data. These features may include color information, brightness, color-space-converted values, and spectrum features. For example, for a medical image, the server first extracts the color channels and then traverses the value of each pixel. The server converts the color channel data from RGB to the HSV color space, which helps to better describe the color characteristics of different tissues in the image.
Through spectrum analysis, the server detects texture information in the image, which is important for identifying tumors or abnormal tissue. The final pixel feature data can then be used for more accurate medical diagnosis.
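The per-pixel steps above (channel extraction, RGB-to-HSV conversion, spectrum analysis) can be sketched in NumPy. Reading "spectrum analysis" as the 2-D Fourier transform of the value channel is one plausible interpretation, not a claim about the patented method:

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorized RGB -> HSV conversion (channels assumed in [0, 1])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)                                  # value
    c = v - rgb.min(axis=-1)                              # chroma
    s = np.where(v > 0, c / np.where(v > 0, v, 1), 0.0)   # saturation
    h = np.zeros_like(v)                                  # hue in [0, 1)
    m = c > 0
    idx = m & (v == r)
    h[idx] = ((g - b)[idx] / c[idx]) % 6
    idx = m & (v == g) & (v != r)
    h[idx] = (b - r)[idx] / c[idx] + 2
    idx = m & (v == b) & (v != r) & (v != g)
    h[idx] = (r - g)[idx] / c[idx] + 4
    return np.stack([h / 6, s, v], axis=-1)

def pixel_features(rgb):
    """Channel split, HSV conversion, and a simple spectral texture score
    (mean log-magnitude of the 2-D FFT of the value channel)."""
    hsv = rgb_to_hsv(rgb)
    spectrum = np.abs(np.fft.fft2(hsv[..., 2]))
    return {"hsv": hsv, "texture_energy": float(np.mean(np.log1p(spectrum)))}
```

A highly textured region yields a larger `texture_energy` than a flat one, giving a crude frequency-domain feature for the fusion step.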
S103, performing morphological feature extraction on each piece of segmented image data to obtain a morphological feature set of each piece of segmented image data;
Specifically, morphological feature extraction is performed on each piece of segmented image data. Morphological feature extraction is an image-analysis technique that relies on mathematical-morphology operations such as dilation, erosion, opening, and closing. These operations can be used to change and extract structural and shape information in the image. The same principles apply in three-dimensional image processing, but the specific character of three-dimensional data must be considered. One common morphological feature is the size and shape of an object: by applying erosion the server shrinks an object, and by dilation it enlarges it, which helps capture the contours and shape features of objects in the image. For example, in medical imaging, morphological feature extraction allows tumor size and shape to be analysed, which assists diagnosis and therapy planning. Another important class of morphological features concerns connectivity and holes. The server removes small spurious regions and thin connections with an opening operation, and fills small holes with a closing operation. This is useful for segmenting and analysing relationships between structures or objects in an image. In autonomous driving, this may be used to identify connectivity between road signs and obstacles on the road to improve driving decisions. Morphological features can also be used to detect texture and edges. By applying specific morphological operations, the server emphasizes texture features in the image, which is very useful in security monitoring for detecting suspicious activity or abnormal textures. In addition, morphological edge detection can highlight boundaries and contours in the image, facilitating target detection and blurring.
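A minimal sketch of these morphological measurements for one segmented region, using SciPy on a binary mask (the same calls accept 3-D arrays, matching the patent's three-dimensional setting):

```python
import numpy as np
from scipy import ndimage

def morphological_features(mask):
    """Erosion/dilation-based shape features for a binary region mask."""
    eroded = ndimage.binary_erosion(mask)
    opened = ndimage.binary_opening(mask)   # removes small specks / thin links
    closed = ndimage.binary_closing(mask)   # fills small holes
    return {
        "area": int(mask.sum()),
        "edge_voxels": int((mask & ~eroded).sum()),   # morphological contour
        "holes_filled": int(closed.sum() - mask.sum()),
        "specks_removed": int(mask.sum() - opened.sum()),
    }
```

The contour (`mask & ~eroded`) is the classic morphological edge used later in the salient-target-detection stage.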
S104, performing initial blurring on the target three-dimensional image data through the spectral feature set to obtain an initial blurred image;
Specifically, semantic information filling is performed on the spectral feature set to obtain a corresponding semantic feature set, combining the spectral features of the image with semantic information. For example, in a remote-sensing image the server associates different spectral features with Geographic Information System (GIS) data, thereby providing semantic tags for different areas in the image. The server then acquires depth information to obtain pixel depth information for the target three-dimensional image data. This typically requires a depth sensor or camera to capture distance information about the target. Depth information is critical for blurring, because it tells the server where an object or target sits in three-dimensional space. Blurring direction analysis determines the direction of blurring from the semantic feature set and the pixel depth information, which helps ensure the accuracy of the blurring. For example, blurring direction analysis may determine in which direction blurring should be applied in an image to highlight or suppress a particular structure or tissue. Next, a blur kernel is constructed for the target three-dimensional image data to obtain the target blur kernel. This is a mathematical function used to simulate the blurring effect; the kernel is built from the blurring direction and depth information so that the blurring matches the actual scene. Initial blurring is then applied to the target three-dimensional image data through the target blur kernel, based on the target blurring direction, to obtain the initial blurred image. This step applies the blurring effect, highlighting or blurring objects or features according to the characteristics of the blur kernel. For example, assume an autonomous-driving system uses three-dimensional image data to detect obstacles on a road.
Through the spectral feature set, the server identifies the shape and material of an obstacle. Through depth-information acquisition, the server knows the obstacle's distance. In the blurring direction analysis, the server determines that blurring should be applied in front of the obstacle to ensure accurate driving decisions. By constructing the blur kernel, the server generates an initial blurred image, applying blurring so that the obstacle becomes easier to detect and analyse.
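As an illustration of the kernel-construction and initial-blurring steps, the sketch below builds a directional (motion-style) blur kernel from an analysed blurring direction and blends sharp and blurred copies according to depth distance from an assumed focal plane. The blending rule and parameter names are assumptions, not the patented formula:

```python
import numpy as np
from scipy import ndimage

def directional_blur_kernel(size, angle_deg):
    """Normalized line kernel oriented along the target blurring direction."""
    kernel = np.zeros((size, size))
    c = size // 2
    dx, dy = np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))
    for t in np.linspace(-c, c, 4 * size):   # rasterize a line through center
        x, y = int(round(c + t * dx)), int(round(c + t * dy))
        if 0 <= x < size and 0 <= y < size:
            kernel[y, x] = 1.0
    return kernel / kernel.sum()

def initial_blur(image, depth, focus_depth, kernel):
    """Depth-aware initial blurring: pixels far from the focal plane get
    more of the kernel-blurred copy."""
    blurred = ndimage.convolve(image, kernel, mode="reflect")
    weight = np.clip(np.abs(depth - focus_depth), 0.0, 1.0)
    return (1.0 - weight) * image + weight * blurred
```

With `angle_deg = 0` the kernel smears horizontally; other angles model other analysed blurring directions.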
S105, performing salient target detection on the initial blurring image through a morphological feature set to obtain a target detection result;
specifically, through the morphological feature set, the server can accurately detect the outline and edge position information of the target, and is helpful for determining the exact position and shape of the target. The server performs thresholding processing on the initial virtual image by relying on the edge position information, so that the target area is more remarkable, and the background is restrained. The purpose of this process is to separate the target from the background for subsequent analysis. In connected region analysis, the server performs in-depth analysis on thresholded image data to help the server distinguish between different targets or objects while determining their location and size. This step provides key information for separating different objects from the whole image. The filtering process is used to remove noise or irrelevant areas and may be screened according to the size, shape or other characteristics of the object. And performing remarkable target detection through the morphological feature set. This step ultimately determines an important target in the three-dimensional image that is made more pronounced for further analysis or blurring. For example, assume that the goal of the server is to detect and highlight suspicious activity in the crowd captured by the surveillance camera. Through the morphological feature set, the server implements salient object detection to ensure that potential problems can be perceived in time. Through target edge detection, the server can accurately obtain the outline of the person, and then highlight the outline through thresholding, so that the outline is easier to identify. Connected region analysis may be used to distinguish between different people, while filtering may remove irrelevant regions, such as noise. The salient object detection highlights suspicious behavior so that the monitoring personnel can take necessary measures.
S106, performing image target distribution analysis on the initial blurring image based on the target detection result to obtain a target distribution result, and performing secondary blurring processing on the initial blurring image through the target distribution result to obtain a target blurring image.
It should be noted that the target position of the initial blurring image is calibrated according to the target detection result, so as to obtain a calibration position set. This set includes the location information of each object found in the initial blurring image. Target distribution analysis is then performed on the initial blurring image by using the calibration position set. This step helps to understand the distribution of objects in the image, e.g. whether they are densely distributed in a specific area or dispersed throughout the image. The target distribution analysis provides important information about the object distribution, helping to get a better understanding of the image content. Blurring parameters are then calculated according to the target distribution result. The blurring parameter determines the degree of blurring applied in the secondary blurring process. If the distribution indicates that the objects are densely concentrated in a certain area, the degree of blurring may be increased to highlight other areas. Conversely, if the distribution indicates that the objects are scattered throughout the image, the degree of blurring may be moderately reduced to ensure that the objects are still visible but not excessively prominent. Finally, secondary blurring processing is performed on the initial blurring image by using the calculated target blurring parameters, thereby generating a target blurring image. This process blurs or preserves the objects in the image depending on their distribution, so as to highlight or obscure the objects and achieve the desired effect. For example, in remote sensing applications, locations of interest may be located and identified by target detection, and then the distribution of those locations determined by target distribution analysis.
According to the distribution, the blurring parameters can be calculated, and finally a target blurring image is generated, so that specific places are highlighted or blurred to meet the requirements of the specific application.
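One possible way to turn a target distribution result into a blurring parameter, sketched here with hypothetical names and a simple spread-based heuristic (the 0.25 spread threshold and the sigma range are illustrative assumptions, not values from the method):

```python
import numpy as np

def blurring_parameter(centers, image_shape, base_sigma=1.0, max_sigma=5.0):
    """Map target spread to a blur strength: dense clusters -> stronger blur elsewhere."""
    pts = np.asarray(centers, dtype=float)
    diag = np.hypot(*image_shape)              # image diagonal, for normalization
    # normalized spread: mean per-axis standard deviation of the target centres
    spread = pts.std(axis=0).mean() / diag if len(pts) > 1 else 0.0
    # small spread (dense cluster) -> sigma near max; large spread -> sigma near base
    return max_sigma - (max_sigma - base_sigma) * min(spread / 0.25, 1.0)

dense = [(10, 10), (11, 12), (9, 11)]      # targets clustered in one corner
scattered = [(5, 5), (90, 10), (50, 95)]   # targets spread over the image
s1 = blurring_parameter(dense, (100, 100))
s2 = blurring_parameter(scattered, (100, 100))
```

As the text describes, the clustered case yields a stronger blurring parameter than the scattered case.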
In the embodiment of the application, target three-dimensional image data is collected, and space-based image data segmentation is carried out on the three-dimensional image data to obtain a plurality of segmented image data; spectral feature extraction is carried out on each piece of segmented image data to obtain a spectral feature set of each piece of segmented image data; morphological feature extraction is respectively carried out on each piece of segmented image data to obtain a morphological feature set of each piece of segmented image data; initial blurring processing is carried out on the target three-dimensional image data through the spectral feature set to obtain an initial blurring image; salient target detection is performed on the initial blurring image through the morphological feature set to obtain a target detection result; and image target distribution analysis is carried out on the initial blurring image based on the target detection result to obtain a target distribution result, and secondary blurring processing is carried out on the initial blurring image through the target distribution result to obtain a target blurring image. In the scheme of the application, the three-dimensional image data is segmented into a plurality of sub-images by utilizing a space segmentation technology, so that subsequent processing can be concentrated on a single target or region. This helps to improve the accuracy and efficiency of the processing. By performing spectral feature extraction on each piece of segmented image data, information on color and texture can be obtained, which helps to individualize the blurring processing and supports object detection. Extracting morphological features, such as edge and region information, helps to better understand the structure and content of the image during the blurring and detection phases. Initial blurring processing is carried out on the target three-dimensional image data through the spectral feature set so as to blur details in the image.
This may be used to reduce noise, protect privacy, or create special effects. Salient target detection is performed on the initial blurring image by using the morphological feature set, so that salient targets in the image can be identified. This has potential applications in target recognition and highlighting, such as target tracking or object detection. By analyzing the distribution of the objects in the image, the position, density and spatial distribution of the objects can be better understood. This helps to control blurring and further image processing more finely. Secondary blurring processing is carried out on the initial blurring image according to the target distribution result, optimizing the blurring effect according to the distribution of the targets in the image.
In a specific embodiment, the process of executing step S101 may specifically include the following steps:
(1) Collecting three-dimensional image data, and performing gray level conversion processing on the three-dimensional image data to obtain gray level image data;
(2) Extracting brightness extreme points from the gray image data to obtain a brightness extreme point set;
(3) Seed point construction is carried out on the brightness extreme point set through a preset watershed algorithm, so that a plurality of target seed points are obtained;
(4) Based on a plurality of target seed points, flood filling processing is carried out on the three-dimensional image data, and a plurality of image segmentation areas are obtained;
(5) Based on the plurality of image segmentation areas, the three-dimensional image data is subjected to image data segmentation based on space, so that a plurality of segmented image data are obtained.
In particular, the server collects three-dimensional image data derived from various sensors and devices, such as lidar, cameras, and the like. The three-dimensional image data is complex and contains three-dimensional geometric information of the object, and in order to better understand and process the data, the server performs gray-scale conversion processing on the data, and converts the original color or multi-channel data into single-channel gray-scale image data. This helps to simplify the data and makes subsequent processing steps easier. And extracting brightness extreme points from the gray image data to obtain a brightness extreme point set. Luminance extrema are pixels in an image that have the highest and lowest gray values, which typically represent significant features or object edges in the image. Extraction of these luminance extrema aids in determining potential targets or regions of interest. The server will process these luminance extremum points using a preset watershed algorithm, converting them into target seed points. Watershed algorithms are an image segmentation technique that determines different segmented regions by detecting local minima in an image. Here, the luminance extreme points are considered as seed points of the potential target, which will be used for further processing. And (3) carrying out flood filling treatment on the three-dimensional image data based on a plurality of target seed points to obtain a plurality of image segmentation areas, marking adjacent pixels as belonging to the same target area, wherein each area contains one or more targets. Based on these image division areas, the server performs space-based image data division to divide the original three-dimensional image data into a plurality of divided image data. Each segmented image data represents a specific region or object in the original data. This provides a better basis for further analysis and processing.
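The seed-based flood filling described above can be sketched as simple region growing from each seed point (a stand-in for a full watershed implementation such as OpenCV's; the tolerance value and the toy two-region image are hypothetical):

```python
import numpy as np
from collections import deque

def flood_fill_segments(gray, seeds, tol=0.2):
    """Grow a region from each seed, claiming 4-neighbours within `tol` of the seed value."""
    labels = np.zeros(gray.shape, dtype=int)
    for label, (si, sj) in enumerate(seeds, start=1):
        if labels[si, sj]:
            continue                      # seed already swallowed by an earlier region
        ref = gray[si, sj]
        q = deque([(si, sj)])
        labels[si, sj] = label
        while q:
            i, j = q.popleft()
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < gray.shape[0] and 0 <= nj < gray.shape[1]
                        and labels[ni, nj] == 0 and abs(gray[ni, nj] - ref) <= tol):
                    labels[ni, nj] = label
                    q.append((ni, nj))
    return labels

gray = np.zeros((10, 10))
gray[:, 5:] = 1.0             # two flat regions split down the middle
seeds = [(0, 0), (0, 9)]      # one seed (e.g. a brightness extremum) per region
labels = flood_fill_segments(gray, seeds)
```

Each labelled region would then be cut out of the original data to form one piece of segmented image data.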
In a specific embodiment, as shown in fig. 2, the process of executing step S102 may specifically include the following steps:
S201, respectively carrying out thermal infrared band extraction on each piece of divided image data to obtain thermal infrared band data of each piece of divided image data;
S202, respectively extracting pixel characteristics of each piece of divided image data to obtain pixel characteristic data of each piece of divided image data;
S203, carrying out region feature fusion on the thermal infrared band data of each piece of divided image data and the pixel feature data of each piece of divided image data to obtain corresponding fusion feature data;
S204, inputting the fusion characteristic data into a preset spectral characteristic matching model to perform spectral characteristic matching, and obtaining a spectral characteristic set corresponding to each piece of segmented image data.
The server extracts the thermal infrared band for each piece of divided image data. The thermal infrared band is a portion of the spectral range that is commonly used to detect thermal distribution and temperature information of an object. Extracting data for this band helps the server understand thermal features in the segmented image, such as thermal distribution or temperature variation of the target. The server performs pixel feature extraction for each of the divided image data. Pixel characteristics refer to the properties of each pixel in the segmented image, such as color, brightness, texture, etc. These features help the server describe local characteristics and structural information in the image. And carrying out regional characteristic fusion on the thermal infrared band data and the pixel characteristic data. The aim is to integrate different types of feature information into one overall feature vector in order to better describe each segmented image data. The process of fusing includes combining the thermal infrared data with the pixel features to form a comprehensive feature set. These data are input into a preset spectral feature matching model. Spectral feature matching models are a tool for comparing and matching different images or data, typically for identifying objects or specific features. By inputting the fusion feature data into the model, the server obtains a set of spectral features corresponding to each of the segmented image data. For example, suppose an autonomous car is traveling in a city, encountering a traffic sign and a vehicle that is parked at the roadside. The vehicle can extract color and shape features from the camera images, extract temperature distribution features from the thermal infrared images, and fuse them together to form a comprehensive feature vector. 
This feature vector may be compared against the spectral feature matching model to help the vehicle identify traffic signs and determine whether a vehicle is stopped at the roadside. This helps the vehicle make corresponding decisions, such as slowing down or bypassing obstacles, to ensure safe driving.
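The region feature fusion of step S203 amounts to combining the two feature sources into one vector. A minimal sketch, assuming (hypothetically) that the thermal band is summarized by four statistics and the pixel features are already a short vector:

```python
import numpy as np

def fuse_region_features(thermal_band, pixel_features):
    """Concatenate summary statistics of the thermal band with per-region pixel features."""
    thermal_stats = np.array([thermal_band.mean(), thermal_band.std(),
                              thermal_band.max(), thermal_band.min()])
    return np.concatenate([thermal_stats, pixel_features])

thermal = np.random.rand(16, 16)          # hypothetical thermal-infrared band of one segment
pixel_feats = np.array([0.4, 0.7, 0.1])   # e.g. mean colour / brightness / texture scores
fused = fuse_region_features(thermal, pixel_feats)
```

The fused vector is what would be fed into the preset spectral feature matching model.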
In a specific embodiment, as shown in fig. 3, the process of executing step S202 may specifically include the following steps:
S301, respectively extracting color channels of each piece of divided image data to obtain color channel data of each piece of divided image data;
S302, respectively performing pixel value traversal on each piece of divided image data based on the color channel data of each piece of divided image data to obtain pixel value data of each piece of divided image data;
S303, respectively performing color space conversion on each piece of divided image data based on pixel value data of each piece of divided image data to obtain a plurality of pieces of converted image data;
S304, respectively carrying out spectrum analysis on each piece of converted image data to obtain spectrum characteristic data of each piece of converted image data;
S305, respectively extracting pixel characteristics of each piece of divided image data based on the frequency spectrum characteristic data of each piece of converted image data to obtain the pixel characteristic data of each piece of divided image data.
It should be noted that, for each divided image data, the server performs a color channel extraction operation. Different color channels, such as red, green, and blue, are separated from each image. This may be done by an image processing library (such as OpenCV in Python). The server will perform pixel value traversal based on the data for each color channel. The server checks the value of each pixel one by one. For example, on the red channel, the server traverses each pixel, obtaining the value of its red component. This will provide the server with color information about each pixel. The server performs color space conversion, converting the image from one color space to another as needed. For example, the server converts an image of an RGB color space into a grayscale color space. This is typically done using a mathematical transformation function, for example, for RGB to gray conversion, the following formula can be used: Gray = 0.299×R + 0.587×G + 0.114×B. This will generate a grey scale version of the image. The server performs a spectral analysis on each of the converted image data. This may include applying a Fourier transform to convert the image to the frequency domain. Spectral analysis can help the server understand the frequency characteristics of the image, such as texture and pattern. Taking the Fourier transform as an example, it converts the image into a frequency domain representation, where the high frequency parts represent details in the image and the low frequency parts represent global features in the image. Based on the spectral feature data of each converted image data, the server performs pixel feature extraction. This involves analyzing the spectral data to extract useful information about the image. For example, the server detects peaks in a particular frequency range to identify a particular pattern or structure in the image.
The result of these feature extraction will be to provide the server with pixel feature data for each segmented image data.
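The grayscale conversion formula and the Fourier-based spectral analysis above can be sketched together (the low/high-frequency split radius is an illustrative assumption, not a value from the method):

```python
import numpy as np

def rgb_to_gray(rgb):
    """Luma conversion from the text: Gray = 0.299 R + 0.587 G + 0.114 B."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def spectral_features(gray):
    """Split 2-D Fourier energy into low- and high-frequency shares."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    low = np.hypot(yy - cy, xx - cx) < min(h, w) / 8   # disk around the DC component
    total = spectrum.sum()
    return spectrum[low].sum() / total, spectrum[~low].sum() / total

rgb = np.random.rand(32, 32, 3)       # stand-in for one segmented RGB image
gray = rgb_to_gray(rgb)               # color space conversion (S303)
low_ratio, high_ratio = spectral_features(gray)   # spectrum analysis (S304)
```

A high `high_ratio` would indicate fine texture or detail in the segment; a high `low_ratio` indicates smooth, global structure.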
In a specific embodiment, as shown in fig. 4, the process of executing step S104 may specifically include the following steps:
S401, carrying out semantic information filling on the spectrum feature set to obtain a semantic feature set corresponding to the spectrum feature set;
S402, acquiring depth information of the target three-dimensional image data to obtain pixel depth information of the target three-dimensional image data;
S403, carrying out blurring direction analysis on the semantic feature set and the pixel depth information to obtain a target blurring direction;
S404, performing blur kernel construction on the target three-dimensional image data to obtain a target blur kernel;
S405, performing initial blurring processing on the target three-dimensional image data through the target blur kernel based on the target blurring direction to obtain an initial blurring image.
In particular, the server processes a set of spectral features, which are spectral data from remote sensing satellites or sensors. The goal of the server is to add semantic information to the data in order to better understand the image content. This may be accomplished by machine learning or deep learning models that map spectral features to semantic features, such as ground object types (e.g., water, vegetation, buildings, etc.). This will provide a server with a deeper understanding of the spectral data, thus better supporting subsequent processing. The server acquires pixel depth information of the target three-dimensional image data. This can be achieved by using a 3D sensor (such as a lidar or binocular camera) or a deep learning model. The depth information indicates the distance of each pixel from the camera or sensor. The server performs a blurring direction analysis on the set of semantic features and the pixel depth information to determine how to apply blurring in the image. The blurring direction is determined from the image content and the features. For example, if the server processes scenic photographs, the direction of blurring is selected based on the scene features in the horizontal and vertical directions. And constructing a fuzzy core of the target three-dimensional image data to obtain a target fuzzy core. The blur kernel is a mathematical matrix that describes how the image is blurred. It is usually a two-dimensional kernel, designed according to the direction of blurring and the requirements of the blurring degree. The construction of the blur kernel involves signal processing techniques such as convolution. The design of the blurring kernel can be optimized according to blurring requirements so as to ensure that the blurring image meets a specific visual effect. And based on the target blurring direction and the blurring kernel, the server performs initial blurring processing on the target three-dimensional image data. 
This applies the blurring effect to the image. The result of the initial blurring is a blurred three-dimensional image in which the sharpness of the image is controlled by the blur kernel and the blurring direction. This process can be implemented by image processing libraries and mathematical convolution operations. For example, assume that a server performs semantic information filling on the spectral feature set to map different terrain types (e.g., waters, forests, cities) into an image. Through laser radar scanning, the server acquires three-dimensional data of the map, including pixel depth information. According to the feature types and characteristics of the map, the server analyzes the blurring direction and performs blurring along the main direction of the map. The server builds a blur kernel to simulate the blurring effect under the viewing angle, for example, according to a wide range of terrain contours. The server applies the blur kernel to the three-dimensional image data to obtain an initial blurring image, so that the ground features and geographic features in the image become blurred. This may be used to enhance the aesthetic effect of the map or to protect sensitive information.
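The blur kernel construction of step S404 can be sketched with a standard Gaussian kernel whose strength depends on depth (the depth-to-sigma mapping and its constants are hypothetical illustrations of how pixel depth information might drive the kernel, not values from the method):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """2-D Gaussian blur kernel, normalized so blurring preserves overall brightness."""
    ax = np.arange(size) - (size - 1) / 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def depth_to_sigma(depth, focus_depth, scale=0.5, base=0.5):
    """Hypothetical mapping: the farther from the focus plane, the stronger the blur."""
    return base + scale * abs(depth - focus_depth)

sigma = depth_to_sigma(depth=8.0, focus_depth=5.0)   # a pixel 3 units behind the focus plane
k = gaussian_kernel(5, sigma)                        # target blur kernel for that depth
```

Convolving each region with a kernel sized by its depth is one way the "blurring processing accords with actual conditions" requirement could be met.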
In a specific embodiment, the process of executing step S105 may specifically include the following steps:
(1) Performing target edge detection on the initial blurring image through the morphological feature set to obtain edge position information;
(2) Thresholding the initial blurring image based on the edge position information to obtain corresponding processed image data;
(3) Carrying out connected region analysis on the processed image data to obtain a plurality of target connected regions corresponding to the processed image data;
(4) Filtering the plurality of target connected regions to obtain a plurality of filtered regions;
(5) Performing salient target detection on the plurality of filtered regions to obtain a target detection result.
It should be noted that the server uses the morphological feature set to perform object edge detection on the initial blurring image, so as to find the edges or contours of different objects in the image and better understand their shape and position. After edge detection, the server obtains information about the location of the target edges in the image. Based on the edge position information, the server performs thresholding processing on the initial blurring image. This step helps the server separate the image into a target region and a background region. By selecting an appropriate threshold, the server clearly separates the object from the background, thereby generating corresponding processed image data. The server performs connected region analysis to divide the processed image data into different connected regions, wherein each region represents an independent target or object. This segmentation helps the server separate the different elements in the image and better understand them. After the connected region analysis, the server obtains a plurality of target connected regions. The server performs filtering processing on the plurality of target connected regions. The purpose of filtering is to remove noise in the image, smooth the target area, or enhance features of interest. The filtering may be adjusted according to the specific application requirements. For example, the server uses Gaussian filtering to smooth the target area, making it more continuous, or median filtering to remove small noise. The server performs salient object detection, identifying salient objects or regions of interest in the image. This may be achieved by computer vision algorithms such as feature-based object detection or deep learning methods. The salient object detection result provides the server with information about the most important areas or objects in the image. For example, assume that there is a surveillance camera image of a city street.
The server performs edge detection on the image to find the outlines of streets, buildings, and pedestrians. By thresholding, the server separates the image into different regions, e.g., separating pedestrians and vehicles from the background. Then, the server performs a connected region analysis to divide different vehicles and pedestrians into respective regions. The server performs a filtering process on these regions to remove noise in the image or to smooth the target area. Through salient target detection, the server identifies the most important target in the image, e.g., traffic violations or anomalies are detected.
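The edge detection step above can be sketched with a Sobel gradient magnitude, written here in plain NumPy with a toy step-edge image (a stand-in for the morphological-feature-driven edge detector the method actually uses):

```python
import numpy as np

def sobel_edges(gray):
    """Gradient magnitude with Sobel operators (a simple edge detector)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(gray, 1, mode='edge')
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            win = p[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()   # horizontal gradient
            gy[i, j] = (win * ky).sum()   # vertical gradient
    return np.hypot(gx, gy)

img = np.zeros((12, 12))
img[:, 6:] = 1.0            # vertical step edge down the middle
edges = sobel_edges(img)    # strong response along the step, zero in flat areas
```

The edge position information, i.e. where `edges` is large, is then used to drive the thresholding of the initial blurring image.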
In a specific embodiment, the process of executing step S106 may specifically include the following steps:
(1) Calibrating the target position of the initial blurring image according to the target detection result to obtain a calibration position set;
(2) Carrying out image target distribution analysis on the initial blurring image through the calibration position set to obtain a target distribution result;
(3) Calculating the blurring parameters of the target distribution result to obtain target blurring parameters;
(4) And performing secondary blurring processing on the initial blurring image through the target blurring parameters to obtain a target blurring image.
Specifically, the server obtains the position information of the targets in the image through the target detection result, namely target position calibration. The server determines specific location coordinates of each target or object in the image, which may be pixel coordinates or locations in other coordinate systems. Through the calibration position set, the server performs image target distribution analysis on the initial blurring image to learn the distribution of objects in the image, such as whether they are concentrated in a certain area or scattered across the whole image. Such distribution analysis may provide important information about the scene. According to the distribution of the targets, the server calculates blurring parameters. The blurring parameter is typically a set of parameters for adjusting the blurring degree of the image. These parameters are related to factors such as the size, distance, and density of the targets. The calculation of the blurring parameters is determined according to the specific characteristics and distribution of the targets. Secondary blurring processing is then performed on the initial blurring image by using the calculated blurring parameters. This processing step may be implemented using blurring filters or other image processing techniques. The degree of blurring is adjusted according to the value of the blurring parameter in order to better highlight or blur the objects in the image. This facilitates further analysis of the target distribution and features in the image. For example, assume that a server uses an on-board camera to identify other vehicles on a road. The target detection algorithm detects the positions of other vehicles and marks them on the image. Through the calibration positions, the server performs image target distribution analysis and finds that other vehicles are uniformly distributed on the road.
From this distribution, the server calculates blurring parameters to determine how much image blurring is required in order to highlight the distribution of the vehicle. The server performs secondary blurring processing on the initial blurring image, so that other vehicles are more obvious, and the automatic driving system is supported to better understand the surrounding road conditions.
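A minimal sketch of the secondary blurring step, assuming (hypothetically) a mask of calibrated target positions and a strength parameter derived from the distribution analysis; a box filter stands in for whatever blurring filter is actually used:

```python
import numpy as np

def box_blur(img, k=3):
    """Cheap stand-in for a blur filter: k x k mean filter with edge padding."""
    p = np.pad(img, k // 2, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def secondary_blur(image, target_mask, strength):
    """Keep target pixels sharp; blend the background toward its blurred version."""
    blurred = box_blur(image)
    return np.where(target_mask, image, (1 - strength) * image + strength * blurred)

img = np.random.rand(16, 16)                  # initial blurring image (stand-in)
mask = np.zeros((16, 16), dtype=bool)
mask[4:8, 4:8] = True                         # calibrated target region
result = secondary_blur(img, mask, strength=0.8)   # target blurring image
```

Raising `strength` suppresses the background further, which matches the text's rule of blurring more when targets are densely concentrated.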
The three-dimensional image blurring method according to the embodiment of the present invention is described above, and the three-dimensional image blurring apparatus according to the embodiment of the present invention is described below, referring to fig. 5, where an embodiment of the three-dimensional image blurring apparatus according to the embodiment of the present invention includes:
the acquisition module 501 is configured to acquire target three-dimensional image data, and perform space-based image data segmentation on the three-dimensional image data to obtain a plurality of segmented image data;
the first extraction module 502 is configured to perform spectral feature extraction on each piece of the segmented image data to obtain a spectral feature set of each piece of the segmented image data;
a second extraction module 503, configured to extract morphological features of each of the segmented image data, so as to obtain a morphological feature set of each of the segmented image data;
the processing module 504 is configured to perform an initial blurring process on the target three-dimensional image data through the spectral feature set, so as to obtain an initial blurring image;
the detection module 505 is configured to perform salient target detection on the initial blurring image through the morphological feature set, so as to obtain a target detection result;
the analysis module 506 is configured to perform image target distribution analysis on the initial blurring image based on the target detection result to obtain a target distribution result, and perform secondary blurring processing on the initial blurring image through the target distribution result, so as to obtain a target blurring image.
Through the cooperation of the above components, target three-dimensional image data is collected, and space-based image data segmentation is performed on the three-dimensional image data to obtain a plurality of segmented image data; spectral feature extraction is carried out on each piece of segmented image data to obtain a spectral feature set of each piece of segmented image data; morphological feature extraction is respectively carried out on each piece of segmented image data to obtain a morphological feature set of each piece of segmented image data; initial blurring processing is carried out on the target three-dimensional image data through the spectral feature set to obtain an initial blurring image; salient target detection is performed on the initial blurring image through the morphological feature set to obtain a target detection result; and image target distribution analysis is carried out on the initial blurring image based on the target detection result to obtain a target distribution result, and secondary blurring processing is carried out on the initial blurring image through the target distribution result to obtain a target blurring image. In the scheme of the application, the three-dimensional image data is segmented into a plurality of sub-images by utilizing a space segmentation technology, so that subsequent processing can be concentrated on a single target or region. This helps to improve the accuracy and efficiency of the processing. By performing spectral feature extraction on each piece of segmented image data, information on color and texture can be obtained, which helps to individualize the blurring processing and supports object detection. Extracting morphological features, such as edge and region information, helps to better understand the structure and content of the image during the blurring and detection phases. Initial blurring processing is carried out on the target three-dimensional image data through the spectral feature set so as to blur details in the image.
This may be used to reduce noise, protect privacy, or create special effects. Salient target detection is performed on the initial blurring image by using the morphological feature set, so that salient targets in the image can be identified. This has potential applications in target recognition and highlighting, such as target tracking or object detection. By analyzing the distribution of the objects in the image, the position, density and spatial distribution of the objects can be better understood. This helps to control blurring and further image processing more finely. Secondary blurring processing is carried out on the initial blurring image according to the target distribution result, optimizing the blurring effect according to the distribution of the targets in the image.
The three-dimensional image blurring apparatus according to the embodiment of the present invention is described in detail above in terms of modularized functional entities in fig. 5, and the three-dimensional image blurring device according to the embodiment of the present invention is described in detail below in terms of hardware processing.
Fig. 6 is a schematic structural diagram of a three-dimensional image blurring apparatus according to an embodiment of the present invention. The three-dimensional image blurring apparatus 600 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 610 (e.g., one or more processors), a memory 620, and one or more storage media 630 (e.g., one or more mass storage devices) storing application programs 633 or data 632. The memory 620 and the storage medium 630 may be transitory or persistent storage. The program stored in the storage medium 630 may include one or more modules (not shown), each of which may include a series of instruction operations on the three-dimensional image blurring apparatus 600. Still further, the processor 610 may be configured to communicate with the storage medium 630 to execute a series of instruction operations in the storage medium 630 on the three-dimensional image blurring apparatus 600.
The three-dimensional image blurring apparatus 600 may also include one or more power supplies 640, one or more wired or wireless network interfaces 650, one or more input/output interfaces 660, and/or one or more operating systems 631, such as WindowsServe, macOSX, unix, linux, freeBSD, etc. It will be appreciated by those skilled in the art that the three-dimensional image blurring apparatus structure shown in fig. 6 does not constitute a limitation of the three-dimensional image blurring apparatus, and may include more or less components than illustrated, or may combine certain components, or may be a different arrangement of components.
The invention also provides a three-dimensional image blurring apparatus, which comprises a memory and a processor, wherein the memory stores computer readable instructions, and the computer readable instructions, when executed by the processor, cause the processor to execute the steps of the three-dimensional image blurring method in the above embodiments.
The present invention also provides a computer-readable storage medium, which may be a non-volatile computer-readable storage medium or a volatile computer-readable storage medium, in which instructions are stored; when the instructions are run on a computer, they cause the computer to perform the steps of the three-dimensional image blurring method.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A three-dimensional image blurring method, characterized in that the three-dimensional image blurring method comprises:
acquiring target three-dimensional image data, and performing space-based image data segmentation on the target three-dimensional image data to obtain a plurality of pieces of segmented image data;
performing spectral feature extraction on each piece of segmented image data to obtain a spectral feature set of each piece of segmented image data;
performing morphological feature extraction on each piece of segmented image data respectively to obtain a morphological feature set of each piece of segmented image data;
performing initial blurring processing on the target three-dimensional image data through the spectral feature set to obtain an initial blurring image;
performing salient target detection on the initial blurring image through the morphological feature set to obtain a target detection result;
and performing image target distribution analysis on the initial blurring image based on the target detection result to obtain a target distribution result, and performing secondary blurring processing on the initial blurring image through the target distribution result to obtain a target blurring image.
2. The three-dimensional image blurring method of claim 1, wherein the acquiring target three-dimensional image data and performing space-based image data segmentation on the target three-dimensional image data to obtain a plurality of pieces of segmented image data comprises:
collecting the target three-dimensional image data, and performing gray-level conversion processing on the target three-dimensional image data to obtain gray-level image data;
extracting brightness extreme points from the gray-level image data to obtain a brightness extreme point set;
performing seed point construction on the brightness extreme point set through a preset watershed algorithm to obtain a plurality of target seed points;
performing flood filling processing on the target three-dimensional image data based on the plurality of target seed points to obtain a plurality of image segmentation regions;
and performing space-based image data segmentation on the target three-dimensional image data based on the plurality of image segmentation regions to obtain the plurality of pieces of segmented image data.
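The seed-and-fill portion of claim 2 can be sketched in code. The following Python fragment (Python, the intensity tolerance `tol`, and 4-connectivity are illustrative assumptions, not part of the claim) grows a labeled region from each seed point by flood filling over a gray-level image:

```python
from collections import deque
import numpy as np

def flood_fill_segment(gray, seeds, tol=30):
    """Grow one labeled region per seed point: a pixel joins a region
    when its intensity is within `tol` of the seed intensity.
    A simplified stand-in for the watershed/flood-fill steps."""
    labels = np.zeros(gray.shape, dtype=int)
    for label, (sy, sx) in enumerate(seeds, start=1):
        seed_val = int(gray[sy, sx])
        q = deque([(sy, sx)])
        while q:
            y, x = q.popleft()
            if labels[y, x] != 0:
                continue
            labels[y, x] = label
            # Visit the 4-connected neighbours still close to the seed value.
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < gray.shape[0] and 0 <= nx < gray.shape[1]
                        and labels[ny, nx] == 0
                        and abs(int(gray[ny, nx]) - seed_val) <= tol):
                    q.append((ny, nx))
    return labels

# A 4x4 test image with a bright half and a dark half.
img = np.array([[200, 200, 10, 10],
                [200, 200, 10, 10],
                [200, 200, 10, 10],
                [200, 200, 10, 10]])
labels = flood_fill_segment(img, seeds=[(0, 0), (0, 3)])
```

In a fuller implementation the seeds would come from the brightness extreme point set and a marker-based watershed would arbitrate region boundaries; here each segmented region simply corresponds to one seed label.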
3. The three-dimensional image blurring method of claim 1, wherein the performing spectral feature extraction on each piece of segmented image data to obtain a spectral feature set of each piece of segmented image data comprises:
performing thermal infrared band extraction on each piece of segmented image data to obtain thermal infrared band data of each piece of segmented image data;
performing pixel feature extraction on each piece of segmented image data respectively to obtain pixel feature data of each piece of segmented image data;
performing regional feature fusion on the thermal infrared band data and the pixel feature data of each piece of segmented image data to obtain corresponding fusion feature data;
and inputting the fusion feature data into a preset spectral feature matching model for spectral feature matching to obtain the spectral feature set corresponding to each piece of segmented image data.
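A minimal sketch of the regional feature fusion step in claim 3 (Python is used purely for illustration; the mean/standard-deviation band summary and the feature layout are assumptions, since the claim does not fix a fusion formula): the thermal band of a region is summarized into statistics and concatenated with the region's pixel feature vector before spectral-feature matching.

```python
import numpy as np

def fuse_region_features(thermal_band, pixel_features):
    """Summarise the thermal-infrared band of one region (mean/std,
    an illustrative choice) and concatenate it with the region's
    pixel feature vector to form the fusion feature data."""
    band_stats = np.array([thermal_band.mean(), thermal_band.std()])
    return np.concatenate([band_stats, np.asarray(pixel_features, dtype=float)])

# A 2x2 thermal patch (values in kelvin) and three hypothetical pixel features.
fused = fuse_region_features(np.array([[300.0, 302.0], [298.0, 300.0]]),
                             pixel_features=[0.4, 0.9, 0.1])
```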
4. The three-dimensional image blurring method of claim 1, wherein the performing pixel feature extraction on each piece of segmented image data to obtain pixel feature data of each piece of segmented image data comprises:
performing color channel extraction on each piece of segmented image data respectively to obtain color channel data of each piece of segmented image data;
performing pixel value traversal on each piece of segmented image data based on its color channel data to obtain pixel value data of each piece of segmented image data;
performing color space conversion on each piece of segmented image data based on its pixel value data to obtain a plurality of pieces of converted image data;
performing spectrum analysis on each piece of converted image data respectively to obtain spectrum feature data of each piece of converted image data;
and performing pixel feature extraction on each piece of segmented image data based on the spectrum feature data of each piece of converted image data to obtain the pixel feature data of each piece of segmented image data.
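The channel-split / color-conversion / spectrum-analysis chain of claim 4 can be sketched as follows (Python for illustration only; the RGB-to-luma conversion and the DC/AC feature summary are assumptions, since the claim does not name a specific color space or spectral statistic):

```python
import numpy as np

def pixel_spectral_features(rgb):
    """Split colour channels, convert the colour space (here a simple
    RGB -> luma weighting), run a 2-D FFT, and summarise the magnitude
    spectrum as two example pixel features."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]   # colour channel extraction
    luma = 0.299 * r + 0.587 * g + 0.114 * b          # colour space conversion
    spectrum = np.abs(np.fft.fft2(luma))              # spectrum analysis
    dc = spectrum[0, 0]                               # overall brightness term
    ac = spectrum.copy()
    ac[0, 0] = 0.0                                    # drop DC, keep texture energy
    return {"dc": float(dc), "mean_ac": float(ac.mean())}

# A flat mid-grey 8x8 patch: all spectral energy sits in the DC term.
feats = pixel_spectral_features(np.full((8, 8, 3), 100.0))
```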
5. The three-dimensional image blurring method of claim 1, wherein the performing initial blurring processing on the target three-dimensional image data through the spectral feature set to obtain an initial blurring image comprises:
performing semantic information filling on the spectral feature set to obtain a semantic feature set corresponding to the spectral feature set;
acquiring depth information of the target three-dimensional image data to obtain pixel depth information of the target three-dimensional image data;
performing blurring direction analysis on the semantic feature set and the pixel depth information to obtain a target blurring direction;
constructing a blur kernel for the target three-dimensional image data to obtain a target blur kernel;
and performing initial blurring processing on the target three-dimensional image data through the target blur kernel based on the target blurring direction to obtain the initial blurring image.
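The directional blur-kernel step of claim 5 can be sketched like this (Python for illustration; a linear motion-blur kernel realized as averaged shifted copies is an assumed stand-in for the claimed target blur kernel, and the angle/length parameters stand in for the analyzed blurring direction):

```python
import numpy as np

def directional_blur(image, direction_deg=0.0, length=5):
    """Blur along one direction by averaging `length` copies of the
    image shifted along that direction (equivalent to convolving with
    a 1-D line kernel, with wrap-around borders via np.roll)."""
    dy = np.sin(np.deg2rad(direction_deg))
    dx = np.cos(np.deg2rad(direction_deg))
    acc = np.zeros_like(image, dtype=float)
    for i in range(length):
        sy = int(round(dy * (i - length // 2)))
        sx = int(round(dx * (i - length // 2)))
        acc += np.roll(np.roll(image, sy, axis=0), sx, axis=1)
    return acc / length

# A single bright pixel smeared horizontally over 5 positions.
img = np.zeros((1, 9))
img[0, 4] = 5.0
blurred = directional_blur(img, direction_deg=0.0, length=5)
```

A production implementation would typically build an explicit kernel and use a convolution routine with proper border handling; the averaging loop keeps the sketch dependency-free.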
6. The three-dimensional image blurring method of claim 1, wherein the performing salient target detection on the initial blurring image through the morphological feature set to obtain a target detection result comprises:
performing target edge detection on the initial blurring image through the morphological feature set to obtain edge position information;
thresholding the initial blurring image based on the edge position information to obtain corresponding processed image data;
performing connected region analysis on the processed image data to obtain a plurality of target connected regions corresponding to the processed image data;
filtering the plurality of target connected regions to obtain a plurality of filtered regions;
and performing salient target detection on the plurality of filtered regions to obtain the target detection result.
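The threshold / connected-region / filter chain of claim 6 can be sketched as follows (Python for illustration; the threshold value, 4-connectivity, and area-based filtering criterion are assumptions, since the claim leaves these choices open):

```python
from collections import deque
import numpy as np

def salient_regions(image, thresh=128, min_area=3):
    """Threshold the image, label 4-connected regions with a BFS,
    filter out regions smaller than `min_area`, and report survivors
    as (label, area) candidate salient targets."""
    mask = image >= thresh                        # thresholding step
    labels = np.zeros(mask.shape, dtype=int)
    regions = []
    next_label = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue                              # pixel already labelled
        next_label += 1
        labels[y, x] = next_label
        q, area = deque([(y, x)]), 0
        while q:                                  # connected region analysis
            cy, cx = q.popleft()
            area += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = cy + dy, cx + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = next_label
                    q.append((ny, nx))
        if area >= min_area:                      # region filtering step
            regions.append((next_label, area))
    return regions

img = np.zeros((5, 5))
img[1:4, 1:3] = 255   # one 6-pixel blob (kept)
img[0, 4] = 255       # one isolated pixel (filtered out)
found = salient_regions(img)
```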
7. The three-dimensional image blurring method of claim 1, wherein the performing image target distribution analysis on the initial blurring image based on the target detection result to obtain a target distribution result, and performing secondary blurring processing on the initial blurring image through the target distribution result to obtain a target blurring image comprises:
performing target position calibration on the initial blurring image according to the target detection result to obtain a calibrated position set;
performing image target distribution analysis on the initial blurring image through the calibrated position set to obtain the target distribution result;
performing blurring parameter calculation on the target distribution result to obtain target blurring parameters;
and performing secondary blurring processing on the initial blurring image through the target blurring parameters to obtain the target blurring image.
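One way to picture the secondary blurring of claim 7 is a spatially varying blend driven by the target distribution (Python for illustration; the linear distance weighting, the single target position, and the mean-valued "fully blurred" image are all assumed simplifications of the claimed blurring parameters):

```python
import numpy as np

def secondary_blur(image, target_yx, radius=2.0):
    """Keep the initial image near the detected target and push pixels
    far from it toward a fully blurred (here: mean-valued) image.
    `radius` plays the role of a derived target blurring parameter."""
    ys, xs = np.indices(image.shape)
    dist = np.hypot(ys - target_yx[0], xs - target_yx[1])
    weight = np.clip(dist / radius, 0.0, 1.0)   # 0 at the target, 1 far away
    flat = np.full_like(image, image.mean(), dtype=float)  # stand-in blur
    return (1 - weight) * image + weight * flat

img = np.arange(25, dtype=float).reshape(5, 5)
out = secondary_blur(img, target_yx=(0, 0))
```

In the claimed method the blur strength would come from the full target distribution result rather than a single point, but the principle of modulating blur per pixel is the same.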
8. A three-dimensional image blurring apparatus, characterized in that the three-dimensional image blurring apparatus comprises:
the acquisition module is used for acquiring target three-dimensional image data and performing space-based image data segmentation on the target three-dimensional image data to obtain a plurality of pieces of segmented image data;
the first extraction module is used for performing spectral feature extraction on each piece of segmented image data to obtain a spectral feature set of each piece of segmented image data;
the second extraction module is used for performing morphological feature extraction on each piece of segmented image data to obtain a morphological feature set of each piece of segmented image data;
the processing module is used for performing initial blurring processing on the target three-dimensional image data through the spectral feature set to obtain an initial blurring image;
the detection module is used for performing salient target detection on the initial blurring image through the morphological feature set to obtain a target detection result;
and the analysis module is used for performing image target distribution analysis on the initial blurring image based on the target detection result to obtain a target distribution result, and performing secondary blurring processing on the initial blurring image through the target distribution result to obtain a target blurring image.
9. A three-dimensional image blurring apparatus, characterized in that the three-dimensional image blurring apparatus comprises: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invokes the instructions in the memory to cause the three-dimensional image blurring apparatus to perform the three-dimensional image blurring method of any of claims 1-7.
10. A computer readable storage medium having instructions stored thereon, which when executed by a processor, implement the three-dimensional image blurring method of any of claims 1-7.
CN202311415265.3A 2023-10-30 2023-10-30 Three-dimensional image blurring method, device, equipment and storage medium Active CN117152398B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311415265.3A CN117152398B (en) 2023-10-30 2023-10-30 Three-dimensional image blurring method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311415265.3A CN117152398B (en) 2023-10-30 2023-10-30 Three-dimensional image blurring method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117152398A true CN117152398A (en) 2023-12-01
CN117152398B CN117152398B (en) 2024-02-13

Family

ID=88901109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311415265.3A Active CN117152398B (en) 2023-10-30 2023-10-30 Three-dimensional image blurring method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117152398B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130071028A1 (en) * 2011-09-16 2013-03-21 Stepen N. Schiller System and Method for Estimating Spatially Varying Defocus Blur in a Digital Image
CN112217992A (en) * 2020-09-29 2021-01-12 Oppo(重庆)智能科技有限公司 Image blurring method, image blurring device, mobile terminal, and storage medium
WO2021136078A1 (en) * 2019-12-31 2021-07-08 RealMe重庆移动通信有限公司 Image processing method, image processing system, computer readable medium, and electronic apparatus
CN113554676A (en) * 2021-07-08 2021-10-26 Oppo广东移动通信有限公司 Image processing method, device, handheld terminal and computer readable storage medium
CN114493988A (en) * 2020-11-11 2022-05-13 武汉Tcl集团工业研究院有限公司 Image blurring method, image blurring device and terminal equipment
CN116703995A (en) * 2022-10-31 2023-09-05 荣耀终端有限公司 Video blurring processing method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JUFENG ZHAO et al.: "Fast image enhancement using multi-scale saliency extraction in infrared imagery", Optik, http://dx.doi.org/10.1016/j.ijleo.2014.01.117, pages 4039 - 4042 *
LI Xiaoying; ZHOU Weixing; WU Sunjin; LI Dan; HU Xiaohui: "Image layered blurring technique based on a monocular depth estimation method", Journal of South China Normal University (Natural Science Edition), no. 01, pages 124 - 128 *

Also Published As

Publication number Publication date
CN117152398B (en) 2024-02-13

Similar Documents

Publication Publication Date Title
Son et al. Real-time illumination invariant lane detection for lane departure warning system
Jin et al. Vehicle detection from high-resolution satellite imagery using morphological shared-weight neural networks
Aytekın et al. Unsupervised building detection in complex urban environments from multispectral satellite imagery
CN105761266A (en) Method of extracting rectangular building from remote sensing image
CN104778721A (en) Distance measuring method of significant target in binocular image
KR20210111052A (en) Apparatus and method for classficating point cloud using semantic image
CN103996198A (en) Method for detecting region of interest in complicated natural environment
Yang et al. Fully constrained linear spectral unmixing based global shadow compensation for high resolution satellite imagery of urban areas
Tarsha Kurdi et al. Automatic filtering and 2D modeling of airborne laser scanning building point cloud
Fonseca et al. Digital image processing in remote sensing
Saeedi et al. Automatic building detection in aerial and satellite images
Kühnl et al. Visual ego-vehicle lane assignment using spatial ray features
Singh et al. A hybrid approach for information extraction from high resolution satellite imagery
CN117315210B (en) Image blurring method based on stereoscopic imaging and related device
CN114332644A (en) Large-view-field traffic density acquisition method based on video satellite data
CN107835998A (en) For identifying the layering Tiling methods of the surface type in digital picture
Kalantar et al. Uav and lidar image registration: A surf-based approach for ground control points selection
Raikar et al. Automatic building detection from satellite images using internal gray variance and digital surface model
CN117152398B (en) Three-dimensional image blurring method, device, equipment and storage medium
Yao et al. 3D object-based classification for vehicle extraction from airborne LiDAR data by combining point shape information with spatial edge
Schiewe Integration of multi-sensor data for landscape modeling using a region-based approach
CN114742955A (en) Flood early warning method and device, electronic equipment and storage medium
Rizvi et al. Wavelet based marker-controlled watershed segmentation technique for high resolution satellite images
Prince et al. Multifeature fusion for automatic building change detection in wide-area imagery
Sun et al. An automated approach for constructing road network graph from multispectral images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant