CN109255287B - Multi-dimensional data processing method driven by domain knowledge level set model - Google Patents

Multi-dimensional data processing method driven by domain knowledge level set model Download PDF

Info

Publication number
CN109255287B
Authority
CN
China
Prior art keywords
level set
scene
collimation
domain knowledge
point
Prior art date
Legal status
Active
Application number
CN201810809695.6A
Other languages
Chinese (zh)
Other versions
CN109255287A (en
Inventor
陈哲
徐立中
黄晶
李臣明
王峰
张丽丽
石爱业
高红民
Current Assignee
Shanghai Sifang Wuxi Boiler Engineering Co ltd
Original Assignee
Hohai University HHU
Priority date
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN201810809695.6A priority Critical patent/CN109255287B/en
Publication of CN109255287A publication Critical patent/CN109255287A/en
Application granted granted Critical
Publication of CN109255287B publication Critical patent/CN109255287B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V 20/00: Scenes; Scene-specific elements
    • G06F 18/211: Selection of the most significant subset of features
    • G06F 18/24: Classification techniques
    • G06V 10/56: Extraction of image or video features relating to colour
    • G06V 10/60: Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G06V 2201/07: Target detection


Abstract

The invention discloses a multidimensional data processing method driven by a domain knowledge level set model for complex scenes. From data describing the apparent state of a complex scene, the method constructs a multidimensional data space composed of two kinds of primitives: original scene attribute information and scene feature information, each dimension corresponding to a different attribute or feature of the scene. On this basis, data in different dimensions are coupled based on domain knowledge, and guidance information useful for the target detection task is extracted. The guidance information is then fused with a level set model, realizing a multidimensional data processing method driven by the domain knowledge level set model. Applied to target detection in complex scenes, the method overcomes the essential difficulties of target-information attenuation and weakened target-background contrast caused by high signal attenuation and strong noise interference, and accurately represents two kinds of target attribute information: the target position and the spatial structure.

Description

Multi-dimensional data processing method driven by domain knowledge level set model
Technical Field
The invention relates to a multidimensional data processing method driven by a domain knowledge level set model, in particular to a method that uses a level set model to process multidimensional data composed of scene multi-attribute primitives and multi-feature primitives in scenes with high signal attenuation and strong noise interference, where the peak signal-to-noise ratio is below 40 dB, so as to realize target detection. The invention belongs to the technical field of pattern recognition.
Background
In complex scenes, an additional light source must be aimed at the target area because the light propagation medium is highly scattering and strongly attenuating. The superposition of multi-source optical components, including natural parallel background light, skylight, scene scattered light and the additional light, strongly interferes with the target detection task. In addition, complex background noise and weak target information push the peak signal-to-noise ratio of the scene below 40 dB, further aggravating the difficulty of target detection. Target detection in such complex scenes is a bottleneck problem in pattern recognition, especially in machine vision. Under these conditions the acquired information exhibits weak target signals and low target-background contrast; prior-art methods have difficulty accurately detecting target attributes such as the target region and the target contour structure under these circumstances, and even newer techniques remain unsuited to target detection in complex scenes with a peak signal-to-noise ratio below 40 dB.
Current detection techniques for complex scenes fall mainly into two types: those aimed at artificial targets, whose specific optical and apparent characteristics are designed by people, and those aimed at natural targets. For artificially designed targets, existing methods rely on prior template information and compute similarity through template matching to realize detection. The main advantage of this approach is its high accuracy; however, since it only suits specific targets, its general applicability is severely limited, and it cannot detect natural targets that blend heavily into the background. For natural targets, the prior art typically applies various preprocessing methods that attempt to recover the original target information and stretch the contrast between target and background. Such techniques generalize well and apply to diverse natural targets, but their detection accuracy is low: preprocessing struggles to recover the target characteristics accurately and may even distort them, so errors produced in the preprocessing stage propagate to the post-processing stage and finally cause serious deviations in the detection of natural targets.
In view of this situation, and considering the application requirements of target detection, the invention discloses a multidimensional data processing method driven by a domain knowledge level set model for target detection in complex scenes. In complex scenes, an additional light source is usually aimed at the target area to improve the visibility of the target. The aiming area of the additional light source therefore overlaps the target area, and because of reflectivity differences across the target, the distribution of the additional light can reflect the spatial structure of the target. According to this principle, multidimensional data composed of the scene's multi-attribute primitives and multi-feature primitives is constructed in a complex scene with a peak signal-to-noise ratio below 40 dB and processed to identify the collimation of the additional light source, forming guidance information for target detection. This guidance information is combined with a level set model to guide its initialization and evolution processes, so that the level set finally converges to the target area, forming the target detection result and accurately representing two kinds of target attribute information: the target position and the spatial structure.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problems in the prior art, the invention provides a multidimensional data processing method driven by a domain knowledge level set model. It constructs multidimensional data composed of a scene's multi-attribute primitives and multi-feature primitives in a complex scene whose optical-information peak signal-to-noise ratio is below 40 dB, and identifies the collimation characteristics of the additional light source, namely the collimation area and the collimation distribution, through coupling calculations between data of different dimensions. The level set model is guided by the collimation characteristics: the initialization of the level set is guided by the collimation area, and the evolution of the level set is guided by the collimation distribution.
The technical scheme is as follows: a domain knowledge level set model-driven multidimensional data processing method comprises the following steps:
firstly, in a complex scene with an optical-information peak signal-to-noise ratio below 40 dB, forming multidimensional data from the multi-attribute primitives and multi-feature primitives of the scene;
then, performing coupling analysis on the multidimensional data based on domain knowledge, establishing correlation relationships among the different dimensions of data, and extracting the collimation features;
then, analyzing the collimation characteristics of the scene according to the domain knowledge and the collimation features, forming the collimation area and the collimation distribution as guidance information;
finally, combining the guidance information with the level set model, the collimation area guiding the initialization process of the level set and the collimation distribution guiding its evolution process. The domain knowledge level set model finally converges to the target area and outputs two kinds of target attribute information, the target position and the spatial structure, as the target detection result.
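As an illustrative sketch only, the four steps can be assembled into a multidimensional data space in numpy. The function name `build_multidimensional_data` and the simplified feature maps inside it are placeholders introduced here, not the patent's exact definitions, which are given pointwise over all image pairs in the detailed description:

```python
import numpy as np

def build_multidimensional_data(img):
    """Stack the three attribute primitives (R, G, B channels) and four
    feature primitives into an H x W x 7 multidimensional data space.
    The feature maps below are simplified approximations, not the
    patent's exact definitions."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    intensity = (r + g + b) / 3.0
    ci = np.abs(intensity - intensity.mean())        # global contrast (simplified)
    m = np.unravel_index(np.argmax(intensity), intensity.shape)
    yy, xx = np.indices(intensity.shape)
    dd = np.hypot(yy - m[0], xx - m[1])              # intensity-position relation
    cr = np.abs(r - r.mean())                        # red channel contrast (simplified)
    vc = np.var(img, axis=2)                         # channel difference
    return np.dstack([r, g, b, ci, dd, cr, vc])

img = np.random.default_rng(0).random((16, 16, 3))
data = build_multidimensional_data(img)
print(data.shape)  # (16, 16, 7)
```

Each of the seven planes corresponds to one dimension of the multidimensional data space described above.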
By adopting the technical scheme, the invention has the following beneficial effects:
1. Strong anti-interference and noise-suppression capability. The method effectively reduces the influence of the multi-source, mutually interfering scene information in complex scenes on target detection, suppresses the high background noise of target detection in complex scenes, solves the problem of strong attenuation of target information, and improves the accuracy of target detection in complex scenes.
2. Significantly reduced system complexity. The method of the invention requires no intervention by any preprocessing method, which notably simplifies the structure and complexity of the overall system.
Drawings
FIG. 1 is a flow chart of the proposed method of the present invention;
FIG. 2 shows the collimation identification results, wherein: (a) the collimation area, (b) the collimation distribution;
FIG. 3 shows the target detection results of the domain knowledge level set model, wherein: (a) the initialization contour under guidance, (b) the evolution stopping function under guidance, (c) the target region, and (d) the target structure contour.
Detailed Description
The present invention is further illustrated by the following examples, which are intended to be purely exemplary and are not intended to limit the scope of the invention, as various equivalent modifications of the invention will occur to those skilled in the art upon reading the present disclosure and fall within the scope of the appended claims.
The target detection task in an underwater complex scene with an optical-information peak signal-to-noise ratio below 40 dB is taken as an embodiment. As shown in fig. 1, a multidimensional data space is formed from multi-attribute primitives and multi-feature primitives through decomposition and calculation of the underwater scene information. The attribute primitives are the attribute information obtained by decomposing the apparent optical information of the underwater complex scene (peak signal-to-noise ratio below 40 dB) into the three channels of red, green and blue, forming the first three dimensions of data; the feature primitives are the global contrast, intensity-position relation, red channel contrast and channel difference features extracted from the same apparent optical information, forming four further dimensions of data. Together they form the multidimensional data of the complex scene and completely represent the scene information.
Wherein, the attribute information on the red, green and blue channels in the attribute primitives is represented as:

red channel at point x: I_x^r
green channel at point x: I_x^g
blue channel at point x: I_x^b

wherein point x is a point in the scene space; the three channels correspond respectively to the first, second and third dimensions of the multidimensional data space, I_x is the original scene information at point x, and r, g and b denote the three channels.

The extraction process of the global contrast information in the feature primitives is:

    C_i(x) = Σ_{y∈N} d(I_x, I_y)    (1)

wherein d(I_x, I_y) = |I_x − I_y| is the absolute value of the intensity difference between points x and y, N is the set of all points in the image, and the intensity values I_x and I_y at x and y are each calculated as the mean of the first, second and third dimensional data:

    I_x = (I_x^r + I_x^g + I_x^b) / 3    (2)

Wherein, the extraction process of the intensity-position relation in the feature primitives is:

    D_d(x) = D(x, m)    (3)

wherein D(x, m) is the Euclidean distance from point x to the point m of maximum global intensity, and x = [ξ1, η1] and m = [ξ2, η2] are the spatial coordinates of points x and m, respectively.

The extraction process of the red channel contrast in the feature primitives is:

    C_r(x) = Σ_{y∈N} d(I_x^r, I_y^r)    (4)

wherein d(I_x^r, I_y^r) = |I_x^r − I_y^r| is the absolute value of the red-channel difference between points x and y, N is the set of all points in the image, and I_x^r and I_y^r are the red-channel values at points x and y, respectively, corresponding to the first dimension of data in the multidimensional data space.

The extraction process of the channel difference in the feature primitives is:

    V_c(x) = var(I_x^r, I_x^g, I_x^b)    (5)

wherein V_c(x) is the variance of the three channels r, g and b about their common mean at point x, obtained by calculation from the first, second and third dimensional data of the multidimensional data space.

Finally, the multidimensional data of the scene can be characterized as:

    M = {I_x^r, I_x^g, I_x^b, C_i, D_d, C_r, V_c}    (6)

wherein I_x^r, I_x^g and I_x^b are the multi-attribute primitives of the scene, and C_i, D_d, C_r and V_c are the multi-feature primitives of the scene.
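A rough numpy sketch of the four feature primitives defined above (global contrast, intensity-position relation, red channel contrast, channel difference). The O(N²) pairwise sums are evaluated by brute force, so this is only practical for small images, and the function name is introduced here for illustration:

```python
import numpy as np

def feature_primitives(img):
    """Compute the four feature primitives for a small float RGB image
    (H x W x 3, values in [0, 1])."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    intensity = (r + g + b) / 3.0                    # per-point mean of the three channels
    flat = intensity.ravel()
    # Global contrast: sum over all points y of |I_x - I_y| (brute force, O(N^2))
    ci = np.abs(flat[:, None] - flat[None, :]).sum(axis=1).reshape(intensity.shape)
    # Intensity-position relation: Euclidean distance to the point m of max global intensity
    m = np.unravel_index(np.argmax(intensity), intensity.shape)
    yy, xx = np.indices(intensity.shape)
    dd = np.hypot(yy - m[0], xx - m[1])
    # Red channel contrast: sum over all points y of |I_x^r - I_y^r|
    rf = r.ravel()
    cr = np.abs(rf[:, None] - rf[None, :]).sum(axis=1).reshape(r.shape)
    # Channel difference: variance of (r, g, b) about their mean at each point
    vc = np.var(np.stack([r, g, b]), axis=0)
    return ci, dd, cr, vc
```

With one bright red pixel in an otherwise dark image, the distance map is zero at that pixel and the channel-difference map is positive only there, matching the definitions above.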
As shown in fig. 1, coupling analysis of the multidimensional data based on domain knowledge establishes correlation relationships between the different dimensions of data in the multidimensional data space to form the collimation characteristics. The formal representations of the domain knowledge and the collimation characteristics are respectively:

Domain knowledge 1: the global contrast is inversely proportional to the intensity-position relation. In the collimation area, a point with larger global contrast is necessarily closer to the point of maximum global intensity; this is obtained by mutual calculation of the fourth and fifth dimensional data C_i and D_d of the multidimensional data space:

    corr2(C_i, (1 − D_d))    (7)

wherein corr2() is the cross-correlation calculation of two-dimensional matrices, and C_i and D_d are the matrices formed by C_i(x) and D_d(x), respectively.

Domain knowledge 2: the global contrast is directly proportional to the red channel contrast. In the collimation area, the global contrast is consistent with the contrast of the red signal; this is obtained by cross-correlation of the fourth and sixth dimensional data C_i and C_r of the multidimensional data space:

    corr2(C_i, C_r)    (8)

wherein C_r is the matrix formed by C_r(x).

Domain knowledge 3: the global contrast is inversely proportional to the channel difference. In the collimation area, the larger the global contrast at a point, the smaller the channel difference there; this is obtained by cross-correlation of the fourth and seventh dimensional data C_i and V_c of the multidimensional data space:

    corr2(C_i, (1 − V_c))    (9)

wherein V_c is the matrix formed by V_c(x).

Domain knowledge 4: the intensity-position relation is directly proportional to the channel difference. In the collimation area, a point with smaller channel difference is necessarily closer to the point of maximum global intensity; this is obtained by mutual calculation of the fifth and seventh dimensional data D_d and V_c of the multidimensional data space:

    corr2(D_d, V_c)    (10)

Finally, the collimation features formed by multidimensional data processing are:

    corr2(C_i, (1 − D_d)), corr2(C_i, C_r), corr2(C_i, (1 − V_c)), corr2(D_d, V_c)
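The four collimation features corr2(Ci, (1−Dd)), corr2(Ci, Cr), corr2(Ci, (1−Vc)) and corr2(Dd, Vc) can be sketched as follows. Here corr2 is taken to be the MATLAB-style 2-D correlation coefficient, and the feature maps are assumed normalized to [0, 1] so that terms like (1 − Dd) stay meaningful; both are readings of the text, not explicit in it:

```python
import numpy as np

def corr2(a, b):
    """2-D correlation coefficient of two equal-size matrices
    (same definition as MATLAB's corr2)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom

def norm01(m):
    """Scale a feature map to [0, 1] (assumption: maps are non-constant)."""
    return (m - m.min()) / (m.max() - m.min())

def collimation_features(ci, dd, cr, vc):
    """The four collimation features from domain knowledge 1-4."""
    ci, dd, cr, vc = map(norm01, (ci, dd, cr, vc))
    return (corr2(ci, 1 - dd), corr2(ci, cr),
            corr2(ci, 1 - vc), corr2(dd, vc))
```

Each returned value lies in [−1, 1]; identical matrices correlate to exactly 1.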
as shown in fig. 1, the collimation characteristics of the scene are analyzed according to the domain knowledge and the collimation characteristics in the collimation characteristic analysis link based on the collimation characteristics, and mainly include a collimation area and collimation distribution;
according to domain knowledge, the decision function for the sighting region is constructed as:
S=corr2(Ci,(1-Dd))corr2(Ci,Cr)corr2(Ci,(1-Vc))corr2(Dd,Vc) (11)
according to domain knowledge, the decision for the sighting region is:
Figure BDA0001738796100000061
wherein, T is a threshold, T is 0.8, L is 1, and L is 0, so as to determine an collimation area, as shown in (a) of fig. 2;
according to domain knowledge, the estimation function for the collimation distribution is constructed as:
W=L(Ci+Cr-Dd-Vc) (13)
the calculation result of the collimation distribution is shown in (b) in fig. 2;
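Since corr2 of two whole matrices yields a single scalar while the decision of Eq. (12) selects a region, the correlations are presumably evaluated over local windows; under that assumption (window size and border handling are choices made here, not stated in the text), Eqs. (11)-(13) can be sketched as:

```python
import numpy as np

def local_corr2(a, b, k=5):
    """Per-pixel 2-D correlation coefficient over k x k neighbourhoods
    (edge-padded borders; an assumption, not specified by the text)."""
    h, w = a.shape
    out = np.zeros((h, w))
    p = k // 2
    ap = np.pad(a, p, mode="edge")
    bp = np.pad(b, p, mode="edge")
    for i in range(h):
        for j in range(w):
            wa = ap[i:i + k, j:j + k] - ap[i:i + k, j:j + k].mean()
            wb = bp[i:i + k, j:j + k] - bp[i:i + k, j:j + k].mean()
            den = np.sqrt((wa * wa).sum() * (wb * wb).sum())
            out[i, j] = (wa * wb).sum() / den if den > 0 else 0.0
    return out

def collimation_region_and_distribution(ci, dd, cr, vc, T=0.8):
    """Decision function S, region mask L and distribution W of
    Eqs. (11)-(13); feature maps assumed normalized to [0, 1]."""
    S = (local_corr2(ci, 1 - dd) * local_corr2(ci, cr)
         * local_corr2(ci, 1 - vc) * local_corr2(dd, vc))
    L = (S >= T).astype(float)          # Eq. (12), threshold T = 0.8
    W = L * (ci + cr - dd - vc)         # Eq. (13)
    return S, L, W
```

By construction W is exactly zero outside the collimation area, since it is gated by the mask L.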
as shown in fig. 1, the collimation characteristics in combination with the level set model form two guidelines, respectively: the method comprises the steps of firstly, aiming at an area for guiding the initialization process of a level set, secondly, aiming at distribution for guiding the evolution process of the level set, and finally forming a domain knowledge level set model to realize multi-dimensional data processing in a complex scene with the optical information peak signal-to-noise ratio smaller than 40 db;
wherein the collimation area is used for guiding an initialization process of the level set, and formalized and characterized as follows:
φ0(x,y)=-4ω(0.5-L) (14)
wherein phi is0(x, y) is horizontalThe set initialization profile, ω, is a unit pulse function, formalized as:
Figure BDA0001738796100000062
wherein x is a variable of the unit pulse function;
wherein, the collimation distribution is used for constructing an evolution stopping function and guiding the evolution process of a level set, and the formalization representation is as follows:
g′=gρ (16)
wherein g' is a stop function under the guidance of collimation distribution, g is the gradient of an original image, rho is a collimation distribution guidance parameter, and the formalized representation is as follows:
ρ(W)=β(2(W-0.5))2 (17)
wherein beta is a modulation coefficient parameter;
finally, the profile φ is initialized with the level set0(x, y) and an evolution stopping function g' construct a level set, establish a multi-dimensional data processing method driven by a domain knowledge level set model, control the level set to converge to a target area, and apply the level set to target detection in a complex scene with an optical information peak signal-to-noise ratio smaller than 40db, as shown in fig. 3.
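A minimal sketch of the guidance equations (14)-(17), taking ω as the unit step function (one reading of the "unit pulse function") and g as a common inverse-gradient edge indicator; the text only says g is the gradient of the original image, so the exact form of g is an assumption here:

```python
import numpy as np

def initialize_level_set(L):
    """Eq. (14): phi0 = -4 * omega(0.5 - L), with omega taken as the unit
    step (Eq. (15), assumption). Then phi0 is 0 inside the collimation
    area (L = 1) and -4 outside, so the zero level set sits on its edge."""
    omega = (0.5 - L >= 0).astype(float)
    return -4.0 * omega

def guided_stop_function(img_gray, W, beta=1.0):
    """Eqs. (16)-(17): g' = g * rho(W), rho(W) = beta * (2*(W - 0.5))**2."""
    gy, gx = np.gradient(img_gray)
    g = 1.0 / (1.0 + gx ** 2 + gy ** 2)   # common edge-indicator form (assumption)
    rho = beta * (2.0 * (W - 0.5)) ** 2
    return g * rho
```

The resulting φ0 and g′ can then be fed to any edge-based level set evolution (e.g. a geodesic active contour scheme) as its initial surface and stopping function.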

Claims (5)

1. A multidimensional data processing method driven by a domain knowledge level set model, characterized in that, considering the high signal attenuation (peak signal-to-noise ratio below 40 dB) and the multi-source, mutually interfering scene information of complex scenes disturbed by strong noise, and aiming at the problems of high background noise and strong target attenuation in target detection in complex scenes, data related to the apparent state of the scene obtained in the complex scene are utilized, and the method comprises the following steps:
firstly, decomposing and calculating scene information to form a multidimensional data space by using scene multi-attribute primitives and multi-feature primitives;
then, carrying out coupling analysis on data in different dimensions in a multi-dimensional data space by using a domain knowledge model to obtain the collimation characteristics of the additional light source;
then, analyzing the collimation characteristics of the scene according to the additional light source collimation characteristics and a judgment criterion, wherein the collimation characteristics mainly comprise a collimation area and collimation distribution;
finally, the initialization of the level set is guided by the aiming area of the additional light source: taking the edge of the collimation area as an initialization contour of the level set; the convergence process of the level set is guided by the aiming distribution of the additional light sources: smoothing texture features of the non-target region, highlighting a target edge, and constructing a level set evolution stopping function so as to guide the evolution of the level set to finally converge to the target edge region; the initialized contour and the evolution stopping function are applied to target detection based on the level set, and two target attribute information of a target position and a space structure are output.
2. The domain knowledge level set model-driven multidimensional data processing method of claim 1, wherein: a multidimensional data space is formed from multi-attribute primitives and multi-feature primitives; the attribute primitives mainly comprise the attribute information obtained by decomposing the original scene information into the three channels of red, green and blue, forming three dimensions of data; the feature primitives mainly comprise the global contrast, intensity-position relation, red channel contrast and channel difference features extracted from the original scene information, forming four further dimensions of data; finally the multidimensional data is formed, completely representing the scene information in the complex scene.
3. The domain knowledge level set model-driven multidimensional data processing method of claim 1, wherein: the attribute information on the three channels of red, green and blue in the attribute primitives is represented as:

red channel at point x: I_x^r
green channel at point x: I_x^g
blue channel at point x: I_x^b

wherein point x is a point in the scene space; the three channels correspond respectively to the first, second and third dimensions of the multidimensional data space, I_x is the original scene information at point x, and r, g and b denote the three channels;

the extraction process of the global contrast information in the feature primitives is:

    C_i(x) = Σ_{y∈N} d(I_x, I_y)    (1)

wherein d(I_x, I_y) = |I_x − I_y| is the absolute value of the intensity difference between points x and y of the original scene, N is the set of all points in the image, and the intensity values I_x and I_y at x and y are each calculated as the mean of the first, second and third dimensional data of the multidimensional data space:

    I_x = (I_x^r + I_x^g + I_x^b) / 3    (2)

the extraction process of the intensity-position relation in the feature primitives is:

    D_d(x) = D(x, m)    (3)

wherein D(x, m) is the Euclidean distance from point x to the point m of maximum global intensity, and x = [ξ1, η1] and m = [ξ2, η2] are the spatial coordinates of points x and m, respectively;

the extraction process of the red channel contrast in the feature primitives is:

    C_r(x) = Σ_{y∈N} d(I_x^r, I_y^r)    (4)

wherein d(I_x^r, I_y^r) = |I_x^r − I_y^r| is the absolute value of the red-channel difference between points x and y, N is the set of all points in the image, and I_x^r and I_y^r are the red-channel values at points x and y, respectively, corresponding to the first dimension of the multidimensional data space;

the extraction process of the channel difference in the feature primitives is:

    V_c(x) = var(I_x^r, I_x^g, I_x^b)    (5)

wherein V_c(x) is the variance of the three channels r, g and b about their common mean at point x, obtained by calculation from the first, second and third dimensional data of the multidimensional data space;

finally, the multidimensional data of the scene can be characterized as:

    M = {I_x^r, I_x^g, I_x^b, C_i, D_d, C_r, V_c}    (6)
4. The domain knowledge level set model-driven multidimensional data processing method of claim 1, wherein: coupling analysis is performed on the multidimensional data based on domain knowledge, correlation relationships are established among the different dimensions of data, and the collimation features are formed; the formal representations of the domain knowledge and the collimation features are respectively:

domain knowledge 1: the global contrast is inversely proportional to the intensity-position relation; in the collimation area, a point with larger global contrast is necessarily closer to the point of maximum global intensity, obtained by mutual calculation of the fourth and fifth dimensional data C_i and D_d of the multidimensional data space:

    corr2(C_i, (1 − D_d))    (7)

wherein corr2() is the cross-correlation calculation of two-dimensional matrices, and C_i and D_d are the matrices formed by C_i(x) and D_d(x), respectively;

domain knowledge 2: the global contrast is directly proportional to the red channel contrast; in the collimation area, the global contrast is consistent with the contrast of the red signal, obtained by cross-correlation of the fourth and sixth dimensional data C_i and C_r of the multidimensional data space:

    corr2(C_i, C_r)    (8)

wherein C_r is the matrix formed by C_r(x);

domain knowledge 3: the global contrast is inversely proportional to the channel difference; in the collimation area, the larger the global contrast at a point, the smaller the channel difference there, obtained by cross-correlation of the fourth and seventh dimensional data C_i and V_c of the multidimensional data space:

    corr2(C_i, (1 − V_c))    (9)

wherein V_c is the matrix formed by V_c(x);

domain knowledge 4: the intensity-position relation is directly proportional to the channel difference; in the collimation area, a point with smaller channel difference is necessarily closer to the point of maximum global intensity, obtained by mutual calculation of the fifth and seventh dimensional data D_d and V_c of the multidimensional data space:

    corr2(D_d, V_c)    (10)

finally, the collimation features formed by multidimensional data processing are:

    corr2(C_i, (1 − D_d)), corr2(C_i, C_r), corr2(C_i, (1 − V_c)), corr2(D_d, V_c).
5. the domain knowledge level set model-driven multidimensional data processing method of claim 1, wherein: according to the field knowledge and the collimation characteristics of the scene, collimation guidance information is formed by combining a level set model: the first and second sighting areas are used for guiding the initialization process of the level set, and the second and third sighting distribution are used for guiding the evolution process of the level set:
according to domain knowledge, the decision function for the sighting region is constructed as:
S=corr2(Ci,(1-Dd))corr2(Ci,Cr)corr2(Ci,(1-Vc))corr2(Dd,Vc) (11)
according to domain knowledge, the decision for the sighting region is:
Figure FDA0001738796090000041
wherein, T is a threshold, T is 0.8, L is 1, and L is 0, so as to determine the collimation area;
according to domain knowledge, the estimation function for the collimation distribution is constructed as:
W=L(Ci+Cr-Dd-Vc) (13)
and then two kinds of guidance are respectively formed by combining the level set model: the method comprises the following steps of firstly, aiming at an area for guiding the initialization process of a level set, secondly, aiming at distribution for guiding the evolution process of the level set, and finally forming a domain knowledge level set model to realize scene multi-dimensional data processing;
wherein the collimation area is used for guiding an initialization process of the level set, and formalized and characterized as follows:
φ0(x,y)=-4ω(0.5-L) (14)
wherein phi is0(x, y) is the level set initialization profile, ω is the unit pulse function, and the formalization is characterized as:
ω(x) = 1 for x < 0; ω(x) = -1 for x ≥ 0 (15)
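A minimal sketch of the initialization of eq. (14), under the assumption that ω acts as a signed unit step, so that φ0 = -4 inside the collimation region (L = 1) and +4 outside, matching the usual binary level set initialization:

```python
import numpy as np

def init_level_set(L_mask, c0=4.0):
    """Eq. (14): phi0 = -c0 inside the collimation region (L = 1) and +c0
    outside, i.e. phi0 = -4*omega(0.5 - L) with omega read as a signed unit
    step (our assumed interpretation of the 'unit pulse function')."""
    return np.where(np.asarray(L_mask) > 0.5, -c0, c0)
```

The sign convention (negative inside the initial contour) is the standard one for level set evolution toward the target region.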
wherein the collimation distribution is used to construct the evolution stopping function and guide the evolution process of the level set, formalized as:
g′=gρ (16)
wherein g' is the stopping function under the guidance of the collimation distribution, g is the gradient of the original image, and ρ is the collimation distribution guidance function, formalized as:
ρ(W)=β(2(W-0.5))2 (17)
wherein beta is a modulation coefficient parameter;
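A sketch of the guided stopping function of eqs. (16)-(17). The claim only states that g is derived from the image gradient, so the standard edge indicator 1/(1 + |∇I|²) is assumed here:

```python
import numpy as np

def guided_stop_function(image, W, beta=1.0):
    """Eqs. (16)-(17): g' = g * rho(W), with rho(W) = beta * (2*(W - 0.5))**2.
    g is taken as the common edge indicator 1/(1 + |grad I|^2) -- an assumed
    form, since the claim only says g comes from the image gradient."""
    gy, gx = np.gradient(np.asarray(image, dtype=float))
    g = 1.0 / (1.0 + gx**2 + gy**2)
    rho = beta * (2.0 * (W - 0.5))**2
    return g * rho
```

Note that ρ vanishes where W = 0.5 and grows quadratically away from it, so the collimation distribution W modulates where the evolving contour is allowed to move.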
finally, a level set model is constructed from the level set initialization profile φ0(x, y) and the evolution stopping function g', establishing the multidimensional data processing method driven by the domain knowledge level set model; the level set is controlled to converge to the target area, and the method is applied to target detection in complex scenes with a peak signal-to-noise ratio below 40 dB.
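The convergence step itself is not spelled out in the claim; as a hedged illustration only, one explicit geodesic-active-contour-style update driven by a stopping function g' might look like the following (step size, curvature scheme, and the balloon term ν are all assumptions, not the patent's scheme):

```python
import numpy as np

def evolve_step(phi, g_stop, dt=0.1, nu=1.0):
    """One explicit update of phi_t = g' * |grad phi| * (kappa + nu),
    with curvature kappa computed as div(grad phi / |grad phi|).
    A minimal sketch; the patent's exact evolution equation is not given here."""
    gy, gx = np.gradient(phi)                    # spatial gradient of the level set
    mag = np.sqrt(gx**2 + gy**2) + 1e-8          # |grad phi|, regularized
    nx, ny = gx / mag, gy / mag                  # unit normal components
    div = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)  # curvature kappa
    return phi + dt * g_stop * mag * (div + nu)
```

Iterating this step shrinks (or grows, depending on the sign of ν) the zero level set of φ until g' is small, i.e. until the contour reaches the target boundary indicated by the guided stopping function.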
CN201810809695.6A 2018-07-23 2018-07-23 Multi-dimensional data processing method driven by domain knowledge level set model Active CN109255287B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810809695.6A CN109255287B (en) 2018-07-23 2018-07-23 Multi-dimensional data processing method driven by domain knowledge level set model


Publications (2)

Publication Number Publication Date
CN109255287A CN109255287A (en) 2019-01-22
CN109255287B true CN109255287B (en) 2021-08-10

Family

ID=65049013

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810809695.6A Active CN109255287B (en) 2018-07-23 2018-07-23 Multi-dimensional data processing method driven by domain knowledge level set model

Country Status (1)

Country Link
CN (1) CN109255287B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930531A (en) * 2016-06-08 2016-09-07 安徽农业大学 Method for optimizing cloud dimensions of agricultural domain ontological knowledge on basis of hybrid models
CN107818586A (en) * 2017-10-10 2018-03-20 河海大学 A kind of object detection method based on multiple features coupling model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI441096B * 2011-08-10 2014-06-11 Univ Nat Taipei Technology Motion detection method for complex scenes


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Underwater target detection based on intensity-spectrum-polarization information fusion; Chen Zhe et al.; Journal on Communications; 2013-03-31; pp. 192-198 *


Similar Documents

Publication Publication Date Title
WO2021114508A1 (en) Visual navigation inspection and obstacle avoidance method for line inspection robot
Kim et al. Adaptive smoothness constraints for efficient stereo matching using texture and edge information
WO2015010451A1 (en) Method for road detection from one image
CN111681197B (en) Remote sensing image unsupervised change detection method based on Siamese network structure
CN110837768B (en) Online detection and identification method for rare animal protection
Ansar et al. Enhanced real-time stereo using bilateral filtering
Hane et al. Direction matters: Depth estimation with a surface normal classifier
Li et al. Saliency based image segmentation
CN108010075B (en) Local stereo matching method based on multi-feature combination
Lin et al. Construction of fisheye lens inverse perspective mapping model and its applications of obstacle detection
US11403491B2 (en) Object recognition from images using cad models as prior
CN109255287B (en) Multi-dimensional data processing method driven by domain knowledge level set model
CN102298780B (en) Method for detecting shadow of color image
Rekik et al. Review of satellite image segmentation for an optimal fusion system based on the edge and region approaches
Komati et al. Kss: Using region and edge maps to detect image boundaries
CN111161219B (en) Robust monocular vision SLAM method suitable for shadow environment
CN109961413B (en) Image defogging iterative algorithm for optimized estimation of atmospheric light direction
CN101799929B (en) Designated color layer extracting device and method
Chen et al. Study of the lane recognition in haze based on kalman filter
CN111435532B (en) Method for detecting tree-like structure end point in digital image
Cai et al. A stereo matching algorithm based on color segments
Liu et al. A new segment-based algorithm for stereo matching
Lorenti et al. Unsupervised TOF Image Segmentation through Spectral Clustering and Region Merging
Jidong et al. Research on the recognition method for obscured apple in natural environment
Yu et al. Foreground target extraction in bounding box based on sub-block region growing and Grab Cut

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221013

Address after: No.12, Changkang Road, Binhu District, Wuxi City, Jiangsu Province, 214000

Patentee after: SHANGHAI SIFANG WUXI BOILER ENGINEERING Co.,Ltd.

Address before: 211100 No. 8 West Buddha Road, Jiangning District, Jiangsu, Nanjing

Patentee before: HOHAI University