CN105389806B - Collapse detection method and device - Google Patents
Collapse detection method and device
- Publication number
- CN105389806B CN105389806B CN201510701041.8A CN201510701041A CN105389806B CN 105389806 B CN105389806 B CN 105389806B CN 201510701041 A CN201510701041 A CN 201510701041A CN 105389806 B CN105389806 B CN 105389806B
- Authority
- CN
- China
- Prior art keywords
- point
- virtual marker point
- pixel
- marker point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/759—Region-based matching
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present invention provide a collapse detection method and device, relating to the technical field of video surveillance. The method includes: when a preset collapse-detection condition is met, acquiring a target image of an object to be detected; determining image corner points in the target image according to a preset corner detection algorithm, and taking the determined corner points as virtual marker points of the object based on the target image; calculating the positional deviation between each determined virtual marker point and the corresponding pre-stored virtual marker point of the object; and determining, according to the calculated positional deviations, whether the object has collapsed. With the embodiments of the present invention, staff can be promptly alerted to a collapse of the object from the image-processing results without arranging multiple sensors at each physical marker point, which reduces the workload of collapse detection.
Description
Technical field
The present invention relates to the technical field of video surveillance, and in particular to a collapse detection method and device.
Background art
Under the influence of the natural environment, the walls of a building may collapse to some degree. For example, some older ancient buildings are in disrepair and crumbling, and buildings built of rammed-earth (cob) walls are even more vulnerable to erosion by the outdoor environment. Maintaining a building generally incurs considerable financial and labor costs. It is therefore desirable to detect the collapse of a building's walls, discover any collapse in time, and thereby reduce the economic and manpower consumption of maintenance.
In the prior art, a building collapse detection method is provided in which multiple physical marker points are set on the wall of a target building to be detected, and pressure, tension, vibration and similar sensors are arranged at the location of each physical marker point. The arranged sensors dynamically detect the internal stress of the target building, from which the collapse state of its walls is obtained.
Although the above method can detect the collapse of the walls of a target building, multiple physical marker points must be set, and each physical marker point in turn requires multiple sensors, all of which must be embedded and maintained, resulting in a heavy workload.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a collapse detection method and device, so as to discover collapses in time and reduce the workload of building collapse detection.
To achieve the above purpose, an embodiment of the present invention discloses a collapse detection method, the method including:
acquiring a target image of an object to be detected when a preset collapse-detection condition is met;
determining image corner points in the target image according to a preset corner detection algorithm, and taking the determined corner points as virtual marker points of the object based on the target image;
calculating the positional deviation between each determined virtual marker point and the corresponding pre-stored virtual marker point of the object;
determining, according to the calculated positional deviations of the virtual marker points, whether the object has collapsed.
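As a minimal sketch of the deviation step of the method above: the patent does not fix a particular point-matching scheme, so the example below simply assumes the detected virtual marker points are already matched by index to the pre-stored ones, and computes the per-point Euclidean deviation.

```python
import math

def positional_deviations(current_pts, reference_pts):
    """Euclidean deviation between each detected virtual marker point
    and its pre-stored counterpart (points assumed matched by index;
    the patent leaves the matching scheme to the implementation)."""
    return [math.dist(p, q) for p, q in zip(current_pts, reference_pts)]

ref = [(10.0, 10.0), (50.0, 12.0), (30.0, 40.0)]   # pre-stored marker points
cur = [(10.0, 10.5), (50.0, 12.0), (33.0, 44.0)]   # points from the new target image
devs = positional_deviations(cur, ref)              # [0.5, 0.0, 5.0]
```

The resulting deviation list is what the collapse-determination step consumes.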
Preferably, the acquiring a target image of an object to be detected when a preset collapse-detection condition is met includes:
acquiring the target image of the object to be detected when the current time is a preset detection time and/or the current light intensity is a preset illumination intensity.
Preferably, the determining image corner points in the target image according to a preset corner detection algorithm, and taking the determined corner points as virtual marker points of the object based on the target image, includes:
acquiring the grayscale image of the target image;
determining candidate virtual marker points of the grayscale image according to the gray value of each pixel in the grayscale image and the gray values of a first preset number of pixels surrounding that pixel;
obtaining the virtual marker points of the target image from the determined candidate virtual marker points according to the gray-value contrast degree of each candidate, where the gray-value contrast degree represents the difference between the gray value of a candidate virtual marker point and the gray values of a second preset number of pixels surrounding it.
Preferably, the determining candidate virtual marker points of the grayscale image according to the gray value of each pixel in the grayscale image and the gray values of the first preset number of surrounding pixels includes:
determining whether any pixel P0 in the grayscale image is a candidate virtual marker point of the grayscale image by the following steps:
classifying the first preset number of pixels surrounding the pixel P0 according to the following formula:
    C(PX) = 1,  if I(PX) - I(P0) >= t
    C(PX) = 0,  if |I(PX) - I(P0)| < t
    C(PX) = -1, if I(P0) - I(PX) >= t
where PX denotes any pixel among the first preset number of pixels surrounding P0, I(PX) denotes the gray value of PX, I(P0) denotes the gray value of P0, t denotes a preset gray-value difference threshold, and C(PX) denotes the class indication of the pixel PX;
judging whether the first preset number of pixels contain a run of a third preset number of consecutive pixels with the same class indication;
if so, determining the pixel P0 to be a candidate virtual marker point of the grayscale image.
Preferably, the obtaining the virtual marker points of the target image from the determined candidate virtual marker points according to the gray-value contrast degree of each candidate includes:
determining an initial candidate virtual marker point P1 from the determined candidate virtual marker points according to a preset virtual-marker-point determination order;
judging whether the first preset range containing the candidate virtual marker point P1 contains candidate virtual marker points other than the initial candidate P1;
if so, determining the candidate with the largest gray-value contrast degree among the candidates contained in the first preset range to be a virtual marker point of the target image, and updating the other candidates within the range to be non-candidates, where the gray-value contrast degree of any candidate virtual marker point P2 is obtained according to the following expression:
    V = sum over PY in S of |I(PY) - I(P2)|
where PY is any pixel among the second preset number of pixels surrounding the candidate P2, S is the set of the second preset number of pixels surrounding P2, and V denotes the gray-value contrast degree of P2;
updating the initial candidate virtual marker point from the remaining candidates that have not yet been judged, according to the preset determination order, and returning to the step of judging whether the first preset range containing the candidate P1 contains candidates other than P1, until no unjudged candidate virtual marker point remains.
Preferably, the determining, according to the calculated positional deviations of the virtual marker points, whether the object has collapsed includes:
determining that the object has collapsed when the following formula is satisfied, and otherwise determining that it has not:
    m / M > Th
where m denotes the number of calculated positional deviations of virtual marker points that exceed a preset first deviation threshold, M denotes the total number of calculated positional deviations, and Th denotes a preset first collapse threshold;
or,
determining that the object has collapsed when all of the determined positional deviations of the virtual marker points exceed a preset second deviation threshold, and otherwise determining that it has not.
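The two alternative decision rules above can be sketched as follows; the threshold values used in the example are illustrative assumptions, not values fixed by the patent:

```python
def collapsed_by_ratio(deviations, dev_threshold, ratio_threshold):
    """First rule: collapse if the fraction of marker points whose
    deviation exceeds the first deviation threshold is greater than Th."""
    m = sum(1 for d in deviations if d > dev_threshold)
    return m / len(deviations) > ratio_threshold

def collapsed_by_all(deviations, dev_threshold):
    """Second rule: collapse only if every deviation exceeds the
    second deviation threshold."""
    return all(d > dev_threshold for d in deviations)

devs = [0.5, 0.2, 6.0, 7.5]   # example deviations, in pixels
r1 = collapsed_by_ratio(devs, dev_threshold=5.0, ratio_threshold=0.4)  # 2/4 > 0.4
r2 = collapsed_by_all(devs, dev_threshold=5.0)
```

Here two of four deviations exceed the first threshold, so the ratio rule reports a collapse while the all-points rule does not.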
Preferably, after the acquiring of the target image of the object to be detected, the method further includes:
obtaining the physical marker points of the object to be detected in the target image;
calculating the positional deviation between each obtained physical marker point and the corresponding pre-stored physical marker point of the object;
and the determining, according to the calculated positional deviations of the virtual marker points, whether the object has collapsed includes:
determining whether the object has collapsed according to both the calculated positional deviations of the virtual marker points and the positional deviations of the physical marker points.
Preferably, the obtaining the physical marker points of the object to be detected in the target image includes:
acquiring the binary image of the target image;
scanning any pixel Q1 in the binary image by the following steps according to a preset first pixel scanning order, thereby obtaining the physical marker points of the object in the target image:
judging whether the pixel Q1 is a white point;
if so, judging whether a fourth preset number of pixels surrounding Q1 are all white points, and if they are, obtaining the coordinates of one physical marker point of the object in the target image according to the coordinates of the fourth preset number of pixels.
Preferably, the obtaining the physical marker points of the object in the target image includes:
acquiring the binary image of the target image;
determining the physical marker points obtained in the collapse detection closest in time to the current time;
for the pixel Q2 in the target image corresponding to any determined physical marker point Q2', obtaining a physical marker point of the object in the target image by the following steps:
starting from the pixel Q2, scanning according to a preset second pixel scanning order, until, for some value of n, all pixels satisfying the condition
    n - 1 < L <= n
are black points, and recording the coordinates of the white points encountered during scanning, where the value of n is a positive integer and L denotes the distance between any pixel in the scanning process and the pixel Q2;
determining one physical marker point of the object in the target image according to the coordinates of the recorded white points.
To achieve the above purpose, an embodiment of the present invention discloses a collapse detection device, the device including:
a target-image acquisition module, configured to acquire a target image of an object to be detected when a preset collapse-detection condition is met;
a virtual-marker-point determination module, configured to determine image corner points in the target image according to a preset corner detection algorithm, and to take the determined corner points as virtual marker points of the object based on the target image;
a first positional-deviation calculation module, configured to calculate the positional deviation between each determined virtual marker point and the corresponding pre-stored virtual marker point of the object;
a collapse determination module, configured to determine, according to the calculated positional deviations of the virtual marker points, whether the object has collapsed.
With the collapse detection method and device provided by the embodiments of the present invention, a target image of an object to be detected is acquired when a preset collapse-detection condition is met; image corner points in the target image are then determined according to a preset corner detection algorithm and taken as virtual marker points of the object based on the target image; the positional deviation between each determined virtual marker point and the corresponding pre-stored virtual marker point of the object is then calculated; finally, whether the object has collapsed is determined from the calculated positional deviations. It can thus be seen that, with the scheme provided by the embodiments of the present invention, staff can still be promptly alerted to a collapse of the object from the image-processing results without arranging multiple sensors at each physical marker point, thereby reducing the workload of collapse detection.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1a is a schematic flowchart of a collapse detection method provided by an embodiment of the present invention;
Fig. 1b is a schematic diagram of the positions of a first preset number of pixels provided by an embodiment of the present invention;
Fig. 1c is a schematic diagram of the positions of a second preset number of pixels provided by an embodiment of the present invention;
Fig. 1d is a schematic diagram of the first preset range of candidate virtual marker points provided by an embodiment of the present invention;
Fig. 2a is a schematic flowchart of another collapse detection method provided by an embodiment of the present invention;
Fig. 2b is a schematic diagram of a preset second pixel scanning order provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a collapse detection device provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of another collapse detection device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
As is well known, under the long-term influence of the natural environment the walls of a building may collapse to some degree. At first there may be no particularly obvious signs; by the time the collapse is visible to the naked eye, however, it is often already quite serious, and the maintenance of the building will cost considerable manpower and material, all the more so for ancient buildings and the like.
Of course, the collapse detection method mentioned in the embodiments of the present invention is applicable not only to buildings but to any other object subject to displacement change, for example, detecting the displacement of a bridge.
Fig. 1a shows a collapse detection method provided by an embodiment of the present invention; the method may include the following steps:
Step S101: when a preset collapse-detection condition is met, acquire a target image of the object to be detected.
Specifically, the target image of the object may be acquired when the current time is a preset detection time and/or the current light intensity is a preset illumination intensity.
The preset detection time may be the time at which the object was detected for the first time, and the preset illumination intensity may be the illumination intensity at that first detection. Furthermore, when the first collapse detection of the object is performed, the time and the illumination intensity of that detection may be recorded, taken as the preset collapse-detection condition, and stored in a database.
Processing of the acquired target image of the object is easily affected by ambient light, so an embodiment of the present invention sets a unified illumination intensity for collapse detection, to exclude the interference of ambient light with the detection result. Moreover, since sunlight differs at different times of day, the shadowed parts of the captured target image also differ. To avoid the interference of illumination intensity and shadow position, an embodiment of the present invention sets a unified collapse-detection condition and detects the object only when that condition is judged to be met.
It should be noted that the present invention does not limit how early or late the preset detection time is, nor the magnitude of the preset illumination intensity; the purpose of the preset collapse-detection condition is to exclude the interference of the external environment on the detection result, so as to improve the accuracy of collapse detection of the object. In practice, besides the above-mentioned time or illumination intensity of the first detection, the preset detection time or preset illumination intensity can also be set reasonably by those skilled in the art according to the specific situation; for example, the preset detection time may be set to 12:00 am, and the preset illumination intensity may be set to 5x10^4 lux.
Step S102: according to a preset corner detection algorithm, determine the image corner points in the target image, and take the determined corner points as virtual marker points of the object based on the target image.
In practice, when performing collapse detection on an object, especially on a heritage building, one usually does not want to put marks on the building itself; in that case virtual marker points must be used for the collapse detection of the object.
A so-called virtual marker point is a marker point constructed virtually on the image by applying a relevant image-processing algorithm to the acquired target image of the object. In the embodiments of the present invention, virtual marker points are mainly obtained by using a preset corner detection algorithm.
In a specific embodiment of the present invention, step S102 may include the following steps:
A: acquire the grayscale image of the target image.
B: determine the candidate virtual marker points of the grayscale image according to the gray value of each pixel in the grayscale image and the gray values of the first preset number of pixels surrounding that pixel.
It should be noted that the present invention does not limit the specific value of the "first preset number". The first preset number of pixels may be regularly distributed around each pixel of the grayscale image, or irregularly distributed around each pixel as specified manually; the present invention limits neither the placement rule nor the number of the first preset number of pixels, which those skilled in the art should set reasonably according to the specific application.
Specifically, determining the candidate virtual marker points of the grayscale image according to the gray value of each pixel and the gray values of the first preset number of surrounding pixels may include the following steps, which determine whether any pixel P0 in the grayscale image is a candidate virtual marker point of the grayscale image:
(1) Classify the first preset number of pixels surrounding the pixel P0 according to the following formula:
    C(PX) = 1,  if I(PX) - I(P0) >= t
    C(PX) = 0,  if |I(PX) - I(P0)| < t        (1)
    C(PX) = -1, if I(P0) - I(PX) >= t
where PX denotes any pixel among the first preset number of pixels surrounding P0, I(PX) denotes the gray value of PX, I(P0) denotes the gray value of P0, t denotes a preset gray-value difference threshold, and C(PX) denotes the class indication of the pixel PX.
It is easy to see that when C(PX) is 1 the pixel PX is brighter than P0, when C(PX) is -1 the pixel PX is darker than P0, and when C(PX) is 0 the gray values of PX and P0 are similar.
It should be noted that the preset gray-value difference threshold t is a positive number; the present invention does not limit its specific range, and those skilled in the art should set it reasonably in practice according to the specific situation.
(2) Judge whether the first preset number of pixels contain a run of a third preset number of consecutive pixels with the same class indication.
(3) If so, determine the pixel P0 to be a candidate virtual marker point of the grayscale image; otherwise, determine that P0 is not a candidate virtual marker point of the grayscale image.
With reference to Fig. 1b, the process of determining the candidate virtual marker points of the grayscale image from the gray value of each pixel and the gray values of the first preset number of surrounding pixels is explained below with a concrete example.
In Fig. 1b, the pixel in the middle position is any pixel P0 of the grayscale image; around P0 there are 16 preset pixels (the positions marked by the 16 numbered black dots in Fig. 1b).
Assume that the gray value of the pixel P0 is 115, and that the gray values of the 16 pixels around P0 are as follows:
Label | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
Gray value | 30 | 48 | 76 | 120 | 145 | 168 | 201 | 225 |
Label | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
Gray value | 240 | 213 | 108 | 110 | 102 | 65 | 10 | 25 |
Classifying the first preset number of pixels around P0 according to expression (1) with t = 20 gives the following result:
Label | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
Class indication | -1 | -1 | -1 | 0 | 1 | 1 | 1 | 1 |
Label | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
Class indication | 1 | 1 | 0 | 0 | 0 | -1 | -1 | -1 |
Assume that the pixel P0 is determined to be a candidate virtual marker point of the grayscale image when the first preset number of pixels contain 4 consecutive pixels with the same class indication.
Obviously, the 6 pixels labelled 5 to 10 around P0 satisfy the condition of 4 consecutive identical class indications, so the pixel P0 is determined to be a candidate virtual marker point of the grayscale image.
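The classification and run test of the example above can be sketched as follows. The 16-pixel ring of Fig. 1b is assumed and passed in as a plain list of gray values; runs of class 0 and wrap-around runs are not counted, which suffices for the example data:

```python
def classify_ring(center_gray, ring_grays, t):
    """Class indication per expression (1): 1 = ring pixel brighter than
    the center by at least t, -1 = darker by at least t, 0 = similar."""
    labels = []
    for g in ring_grays:
        if g - center_gray >= t:
            labels.append(1)
        elif center_gray - g >= t:
            labels.append(-1)
        else:
            labels.append(0)
    return labels

def is_candidate(labels, run_length):
    """True if the ring contains run_length consecutive equal, non-zero
    class indications (wrap-around omitted for brevity)."""
    run = 1
    for prev, cur in zip(labels, labels[1:]):
        run = run + 1 if cur == prev and cur != 0 else 1
        if run >= run_length:
            return True
    return False

ring = [30, 48, 76, 120, 145, 168, 201, 225, 240, 213, 108, 110, 102, 65, 10, 25]
labels = classify_ring(115, ring, t=20)        # matches the table above
candidate = is_candidate(labels, run_length=4)  # labels 5-10 form a run of six 1s
```

On the example data the run of six consecutive 1s (labels 5 to 10) makes P0 a candidate, as in the text.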
C: obtain the virtual marker points of the target image from the determined candidate virtual marker points according to the gray-value contrast degree of each candidate.
Here, the gray-value contrast degree represents the difference between the gray value of a candidate virtual marker point and the gray values of the second preset number of pixels surrounding it.
It should be noted that the present invention does not limit the specific value of the "second preset number". The second preset number of pixels may be regularly distributed around each pixel of the grayscale image (for example, when the candidate virtual marker point is the pixel corresponding to the center cell of a nine-cell grid, the second preset number of pixels may be the pixels corresponding to the 8 cells other than the center cell), or irregularly distributed around each pixel as specified manually; the present invention limits neither the placement rule nor the number of the second preset number of pixels, which those skilled in the art should set reasonably according to the specific application.
Specifically, obtaining the virtual marker points of the target image from the determined candidate virtual marker points according to the gray-value contrast degree of each candidate may include the following steps:
(1) Determine an initial candidate virtual marker point P1 from the determined candidates according to the preset virtual-marker-point determination order.
(2) Judge whether the first preset range containing the candidate virtual marker point P1 contains candidate virtual marker points other than the initial candidate P1.
(3) If so, determine the candidate with the largest gray-value contrast degree among the candidates contained in the first preset range to be a virtual marker point of the target image, and update the other candidates within the range to be non-candidates, where the gray-value contrast degree of any candidate virtual marker point P2 is obtained according to the following expression:
    V = sum over PY in S of |I(PY) - I(P2)|        (2)
where PY is any pixel among the second preset number of pixels surrounding the candidate P2, S is the set of the second preset number of pixels surrounding P2, and V denotes the gray-value contrast degree of P2.
Referring to Fig. 1c, in a specific embodiment of the present invention, the process of calculating the gray value contrast degree of an alternative virtual tag point is as follows.
Assume that:
the gray value of the alternative virtual tag point P2 is 120;
the positions labeled 0 to 7 shown in Fig. 1c are the second preset number of pixels around the alternative virtual tag point P2, and the gray values of these 8 pixels are as follows:
Label | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
Gray value | 120 | 133 | 108 | 166 | 200 | 201 | 220 | 185 |
According to expression (2), we obtain:

V = |120−120| + |133−120| + |108−120| + |166−120| + |200−120| + |201−120| + |220−120| + |185−120| = 0 + 13 + 12 + 46 + 80 + 81 + 100 + 65 = 397

Thus, the gray value contrast degree of the alternative virtual tag point P2 is 397.
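The worked example above can be sketched in a few lines of Python (the helper name and argument layout are illustrative, not part of the patent):

```python
def gray_value_contrast(center_value, neighbor_values):
    """Gray value contrast degree V of expression (2): the sum of absolute
    gray differences between the candidate point and each of the second
    preset number of surrounding pixels."""
    return sum(abs(v - center_value) for v in neighbor_values)

# The 8 neighbor gray values from the Fig. 1c example, P2 = 120
neighbors = [120, 133, 108, 166, 200, 201, 220, 185]
v = gray_value_contrast(120, neighbors)
print(v)  # 397
```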
It should be noted that, when the judging result is that the first preset range containing the virtual tag point P1 does not contain any alternative virtual tag point other than the initial alternative virtual tag point P1, the virtual tag point P1 is determined as a virtual tag point of the target image.
(4) Update the initial alternative virtual tag point, in the preset virtual-tag-point determination order, using the remaining alternative virtual tag points that have not been judged, and return to the step of judging whether the first preset range containing the virtual tag point P1 contains any alternative virtual tag point other than P1, until no alternative virtual tag point remains unjudged.
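Steps (1)–(4) amount to a greedy suppression of candidate points. A simplified sketch follows (all names are illustrative; unlike the patented procedure, this sketch does not re-examine a previously selected point when a later range contains a stronger candidate, as happens in the Fig. 1d walk-through):

```python
def select_virtual_tags(candidates, contrast, in_range):
    """Greedy selection: take the next unjudged candidate, gather all
    candidates in its first preset range, keep the one with the
    greatest gray value contrast degree, suppress the rest.
    `in_range(p, q)` is assumed to be True when q (including p itself)
    lies inside the first preset range around p."""
    active = list(candidates)        # candidates in determination order
    selected = []
    while active:
        p1 = active[0]               # current initial candidate
        group = [q for q in active if in_range(p1, q)]
        best = max(group, key=contrast)   # highest contrast wins
        selected.append(best)
        active = [q for q in active if q not in group]
    return selected
```

For instance, with points on a line, contrast values {1: 10, 2: 20, 5: 30, 6: 5} and a range of ±1, points 2 and 5 survive.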
With reference to Fig. 1d, the process of obtaining the virtual tag points of the target image from the determined alternative virtual tag points according to the gray value contrast degree of each determined alternative virtual tag point is described through a concrete example.
In Fig. 1d, each position shown as a black dot corresponds to one alternative virtual tag point; there are 4 alternative virtual tag points in Fig. 1d.
According to the preset virtual-tag-point determination order, for example a zigzag order from top to bottom over the gray image of the target image of the object to be detected, alternative virtual tag point 1 in Fig. 1d is determined as the initial alternative virtual tag point. It is judged that the first preset range containing alternative virtual tag point 1 (region a) contains no alternative virtual tag point other than point 1; accordingly, alternative virtual tag point 1 is determined as a virtual tag point of the target image.
Next, according to the determination order, alternative virtual tag point 2 in Fig. 1d is determined as the initial alternative virtual tag point. It is judged that the first preset range containing alternative virtual tag point 2 (region b) contains alternative virtual tag point 3 besides point 2. The gray value contrast degree of alternative virtual tag point 3, calculated according to expression (2), is greater than that of alternative virtual tag point 2; therefore, alternative virtual tag point 3 is determined as a virtual tag point of the target image, and alternative virtual tag point 2 is updated to a non-alternative virtual tag point.
Then, according to the determination order, alternative virtual tag point 3 in Fig. 1d is determined as the initial alternative virtual tag point. It is judged that the first preset range containing alternative virtual tag point 3 (region c) contains alternative virtual tag point 4 besides point 3. The gray value contrast degree of alternative virtual tag point 4, calculated according to expression (2), is greater than that of alternative virtual tag point 3; therefore, alternative virtual tag point 4 is determined as a virtual tag point of the target image, and alternative virtual tag point 3 is updated to a non-alternative virtual tag point.
As it can be seen that for 4 alternative virtual tag points in Fig. 1 d, only alternative virtual tag point 1 and alternative virtual mark
Note point 4 is the virtual tag point of the target image to be detected.
Step S103: Calculate the position deviation amount, for virtual tag points, between each determined virtual tag point and each corresponding virtual tag point of the object to be detected stored in advance.
It should be noted that the positions of the virtual tag points of the object to be detected stored in advance may be the virtual tag points obtained when the object is detected for the first time, stored into a database and subsequently compared with the virtual tag points determined in later collapse detections of the object. Of course, the positions of the pre-stored virtual tag points may also be specified or set manually by those skilled in the art; the embodiments of the present invention do not limit the specific way of setting the pre-stored virtual tag points, and those skilled in the art may set them reasonably according to the concrete conditions of the practical application.
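The text does not fix a concrete deviation metric for step S103; a plausible choice, shown here as an assumption, is the Euclidean distance between each current point and its pre-stored counterpart:

```python
import math

def position_deviation(current_points, stored_points):
    """One possible position deviation amount per virtual tag point:
    the Euclidean distance between the point determined in the current
    detection and the corresponding pre-stored point. Points are
    (x, y) tuples paired by index."""
    return [math.hypot(x1 - x0, y1 - y0)
            for (x0, y0), (x1, y1) in zip(stored_points, current_points)]
```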
Step S104: Determine, according to the calculated position deviation amounts for virtual tag points, whether the object to be detected has collapsed.
In a specific embodiment of the present invention, determining whether the object to be detected has collapsed according to the calculated position deviation amounts for virtual tag points may include:
determining that the object to be detected has collapsed when the following expression is satisfied, and otherwise determining that it has not collapsed:

m / M > Th    (3)

wherein m denotes the number of calculated position deviation amounts for virtual tag points that are greater than a preset first deviation threshold, M denotes the number of calculated position deviation amounts for virtual tag points, and Th denotes a preset first collapse threshold.
For example, assume that the first collapse threshold Th is 50%, the number M of calculated position deviation amounts for virtual tag points is 100, and the number m of virtual tag points whose position deviation amount exceeds the preset first deviation threshold is 60.
Obviously, according to expression (3), m/M = 60/100 = 60% > 50% = Th; therefore, it is determined that the object to be detected has collapsed.
In another specific implementation of the present invention, determining whether the object to be detected has collapsed according to the calculated position deviation amounts for virtual tag points may include:
determining that the object to be detected has collapsed when the position deviation amounts of all the determined virtual tag points are greater than a preset second deviation threshold, and otherwise determining that it has not collapsed.
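Both decision rules are small predicates over the list of deviation amounts; a minimal sketch (function and parameter names are illustrative):

```python
def has_collapsed_ratio(deviations, first_dev_threshold, th):
    """Rule of expression (3): collapse if the fraction m/M of
    virtual-tag-point deviations exceeding the first deviation
    threshold is greater than the first collapse threshold Th."""
    m = sum(1 for d in deviations if d > first_dev_threshold)
    return m / len(deviations) > th

def has_collapsed_all(deviations, second_dev_threshold):
    """Alternative rule: collapse only if every deviation exceeds
    the second deviation threshold."""
    return all(d > second_dev_threshold for d in deviations)
```

With the worked numbers above (M = 100, m = 60, Th = 50%), the first rule reports a collapse.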
It should be noted that the collapse detection method provided by the embodiments of the present invention is based on the target image of the object to be detected: the displacement changes of the virtual tag points of the object are detected through the camera and mapped to actual displacement changes of the surface of the object, thereby realizing detection of the collapse condition of the object to be detected.
In addition, in an implementation of the present invention, an alarm signal may also be produced when it is detected that the object to be detected has collapsed. In this way, a collapse of the object can be repaired in time, the labor and financial costs of maintaining the object can be reduced, and the object to be detected can be effectively protected.
It can be seen from the above that, when collapse detection is performed with the scheme provided by the embodiments of the present invention, the staff can still be assisted, according to the image processing results, to find the collapse condition of the object to be detected in time without arranging multiple sensors for each entity mark point; therefore, the workload of collapse detection of the object to be detected is reduced.
Fig. 2a is a schematic flowchart of another collapse detection method provided by an embodiment of the present invention. On the basis of the embodiment shown in Fig. 1a, after step S101, the method may further include:
Step S105: Obtain the entity mark points of the object to be detected in the target image.
A so-called entity mark point refers to a physically existing mark point fixed on the surface of the object to be detected; usually, the color of an entity mark point differs from that of the object.
It should be noted that, in order to improve the accuracy of collapse detection of the object to be detected, the entity mark points are preferably set as mark points having a large color contrast with the object. Of course, the present invention does not limit the specific color of the entity mark points; those skilled in the art should set it reasonably according to the concrete conditions of the practical application.
In a specific embodiment of the present invention, obtaining the entity mark points of the object to be detected in the target image may include the following steps:
(1) Obtain a binary image of the target image.
(2) Scan any pixel Q1 in the binary image through the following steps according to a preset first pixel scanning order, so as to obtain the entity mark points of the object to be detected in the target image:
judge whether the pixel Q1 is a white point;
if so, judge whether a fourth preset number of pixels around the pixel Q1 are all white points, and if they are, obtain the coordinates of one entity mark point of the object to be detected in the target image according to the coordinates of the fourth preset number of pixels.
Specifically, the preset first pixel scanning order may start from the origin of the target image and proceed from top to bottom and from left to right. It should be noted that the present invention does not limit the preset first pixel scanning order; those skilled in the art should set it reasonably according to the concrete application.
It should also be noted that the above "fourth preset number of pixels" may or may not include the pixel Q1.
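The first scan can be sketched as follows, under the assumptions (not fixed by the text) that the "fourth preset number of pixels" is the square window around Q1 and that the window center is taken as the mark coordinate:

```python
def find_entity_marks(binary, window=1):
    """Scan a 2-D binary image (0 = black, 1 = white) in the
    top-to-bottom, left-to-right order; whenever a white pixel Q1 is
    surrounded by an all-white (2*window+1)^2 square, report the
    square's center as one entity mark point."""
    h, w = len(binary), len(binary[0])
    marks = []
    for y in range(window, h - window):
        for x in range(window, w - window):
            if binary[y][x] != 1:        # Q1 must be a white point
                continue
            square = [binary[j][i]
                      for j in range(y - window, y + window + 1)
                      for i in range(x - window, x + window + 1)]
            if all(v == 1 for v in square):
                marks.append((x, y))     # assumed: center of window
    return marks
```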
In another specific implementation of the present invention, obtaining the entity mark points of the object to be detected in the target image may include the following steps:
(1) Obtain a binary image of the target image.
(2) Determine the entity mark points obtained in the collapse detection nearest to the current time point.
(3) For the corresponding pixel Q2 in the target image of any determined entity mark point Q2', obtain the entity mark points of the object to be detected in the target image through the following steps:
taking the pixel Q2 as the starting point, scan according to a preset second pixel scanning order (referring to Fig. 2b) until, for some value of n, all pixels satisfying the condition L = n are black points (for example, the 8 pixels of the eight-neighborhood centered on Q2, etc.), and record the coordinates of the white points scanned during the scanning, wherein the value of n is a positive integer and L denotes the distance between any pixel in the scanning process and the pixel Q2;
determine one entity mark point of the object to be detected in the target image according to the coordinates of the recorded white points.
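A sketch of this outward scan follows. Interpreting L as the Chebyshev (ring) distance and taking the centroid of the recorded white points as the mark are assumptions, since the text fixes neither:

```python
def grow_mark_from_seed(binary, seed):
    """Starting from pixel Q2 (= seed, the mark position from the
    previous detection), examine rings of increasing distance n around
    the seed, recording white points, and stop at the first ring that
    is entirely black. Returns the centroid of the recorded whites."""
    h, w = len(binary), len(binary[0])
    x0, y0 = seed
    whites = [(x0, y0)] if binary[y0][x0] == 1 else []
    n = 1
    while True:
        ring = [(x, y)
                for y in range(y0 - n, y0 + n + 1)
                for x in range(x0 - n, x0 + n + 1)
                if max(abs(x - x0), abs(y - y0)) == n
                and 0 <= x < w and 0 <= y < h]
        ring_whites = [(x, y) for (x, y) in ring if binary[y][x] == 1]
        if not ring_whites:              # whole ring black: stop
            break
        whites.extend(ring_whites)
        n += 1
    if not whites:                       # seed itself was black
        return seed
    cx = sum(x for x, _ in whites) / len(whites)
    cy = sum(y for _, y in whites) / len(whites)
    return (cx, cy)
```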
With reference to the above two specific implementations of obtaining the entity mark points of the object to be detected in the target image, the former is applicable whether or not the object is being detected for the first time, while the latter is more suitable when the object is not being detected for the first time, or when the coordinates of the entity mark points have been set in advance. Of course, the present invention does not limit the specific implementation of obtaining the entity mark points of the object to be detected in the target image during collapse detection; those skilled in the art should set it reasonably according to the concrete conditions of the practical application.
Step S106: Calculate the position deviation amount, for entity mark points, between each obtained entity mark point and each corresponding entity mark point of the object to be detected stored in advance.
Further, step S104, determining whether the object to be detected has collapsed according to the calculated position deviation amounts for virtual tag points, may include:
determining whether the object to be detected has collapsed according to the calculated position deviation amounts for virtual tag points and the position deviation amounts for entity mark points.
It should be noted that determining whether the object to be detected has collapsed according to the calculated position deviation amounts for virtual tag points and for entity mark points may be done in the following ways:
Mode one: for the virtual tag points,
determine that the object to be detected has collapsed when the following expression is satisfied, and otherwise determine that it has not collapsed:

m / M > Th

wherein m denotes the number of calculated position deviation amounts for virtual tag points that are greater than the preset first deviation threshold, M denotes the number of calculated position deviation amounts for virtual tag points, and Th denotes the preset first collapse threshold.
Mode two: for the virtual tag points,
determine that the object to be detected has collapsed when the position deviation amounts of all the determined virtual tag points are greater than the preset second deviation threshold, and otherwise determine that it has not collapsed.
Mode three: for the entity mark points,
when, among the calculated position deviation amounts for entity mark points, there is one entity mark point whose position deviation amount is greater than the preset second deviation threshold, it may be determined that the object to be detected has collapsed.
By applying the embodiments of the present invention, the collapse condition of the object to be detected can be found in time. In addition, although entity mark points are also arranged in the embodiments of the present invention, it is not necessary to arrange multiple sensors for each entity mark point; therefore, the workload of collapse detection of the object to be detected is reduced.
Fig. 3 is a schematic structural diagram of a collapse detection device provided by an embodiment of the present invention, corresponding to the method embodiment shown in Fig. 1a. The device may include: a target image obtaining module 201, a virtual tag point determining module 202, a first position deviation amount calculating module 203 and a collapse determining module 204.
Wherein, the target image obtaining module 201 is configured to obtain the target image of the object to be detected when the preset collapse detection condition is satisfied;
the virtual tag point determining module 202 is configured to determine the image corner points in the target image according to a preset corner detection algorithm, and determine the determined image corner points as the virtual tag points of the object to be detected based on the target image;
the first position deviation amount calculating module 203 is configured to calculate the position deviation amount, for virtual tag points, between each determined virtual tag point and each corresponding virtual tag point of the object to be detected stored in advance;
the collapse determining module 204 is configured to determine, according to the calculated position deviation amounts for virtual tag points, whether the object to be detected has collapsed.
Specifically, the target image obtaining module 201 may be configured to:
obtain the target image of the object to be detected when the current time point is a preset detection time point and/or the current light intensity is a preset illumination intensity.
In a specific embodiment of the present invention, the virtual tag point determining module 202 may include: a gray image obtaining unit, an alternative virtual tag point determining unit and a virtual tag point determining unit.
Wherein, the gray image obtaining unit is configured to obtain the gray image of the target image;
the alternative virtual tag point determining unit is configured to determine the alternative virtual tag points of the gray image according to the gray value of each pixel in the gray image and the gray values of the first preset number of pixels around the pixel;
the virtual tag point determining unit is configured to obtain the virtual tag points of the target image from the determined alternative virtual tag points according to the gray value contrast degree of each determined alternative virtual tag point, wherein the gray value contrast degree represents the gray difference between the gray value of an alternative virtual tag point and the gray values of the second preset number of pixels around it.
Specifically, the alternative virtual tag point determining unit is configured to:
determine, through the following steps, whether any pixel P0 in the gray image is an alternative virtual tag point of the gray image:
classify the first preset number of pixels around the pixel P0 according to the following expression:

C_P = 1,  if (I_PX − I_P0) ≥ t
C_P = 0,  if |I_PX − I_P0| < t
C_P = −1, if (I_P0 − I_PX) ≥ t

wherein P_X denotes any pixel among the first preset number of pixels around the pixel P0, I_PX denotes the gray value of the pixel P_X, I_P0 denotes the gray value of the pixel P0, t denotes a preset gray value difference threshold, and C_P denotes the class identifier corresponding to the pixel P_X;
judge whether there exist, among the first preset number of pixels, a third preset number of consecutive pixels with the same class identifier;
if so, determine that the pixel P0 is an alternative virtual tag point of the gray image.
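The classification and the consecutive-run test can be sketched as follows (illustrative names; treating the surrounding pixels as a non-circular sequence is a simplifying assumption):

```python
def classify_neighbors(center_gray, neighbor_grays, t):
    """Assign each surrounding pixel its class identifier C_P:
    +1 if brighter than P0 by at least t, -1 if darker by at least t,
    0 if within t of P0's gray value."""
    classes = []
    for g in neighbor_grays:
        if g - center_gray >= t:
            classes.append(1)
        elif center_gray - g >= t:
            classes.append(-1)
        else:
            classes.append(0)
    return classes

def has_consecutive_run(classes, run_len):
    """True if `run_len` consecutive identical class identifiers exist
    (the 'third preset number' test for an alternative point)."""
    count, prev = 0, None
    for c in classes:
        count = count + 1 if c == prev else 1
        prev = c
        if count >= run_len:
            return True
    return False
```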
Specifically, the virtual tag point determining unit is configured to:
determine an initial alternative virtual tag point P1 from the determined alternative virtual tag points according to a preset virtual-tag-point determination order;
judge whether the first preset range containing the alternative virtual tag point P1 contains any alternative virtual tag point other than the initial alternative virtual tag point P1;
if so, determine that the alternative virtual tag point with the greatest gray value contrast degree among the alternative virtual tag points contained in the first preset range is a virtual tag point of the target image, and update the other alternative virtual tag points within the range to non-alternative virtual tag points, wherein the gray value contrast degree of any alternative virtual tag point P2 is obtained according to the following expression:

V = Σ_(Y∈S) |I_PY − I_P2|

wherein P_Y is any pixel among the second preset number of pixels around the alternative virtual tag point P2, S is the set of the second preset number of pixels around the alternative virtual tag point P2, and V denotes the gray value contrast degree of the alternative virtual tag point P2;
update the initial alternative virtual tag point, in the preset virtual-tag-point determination order, using the remaining alternative virtual tag points that have not been judged, and return to the step of judging whether the first preset range containing the virtual tag point P1 contains any alternative virtual tag point other than P1, until no alternative virtual tag point remains unjudged.
In a specific embodiment of the present invention, the collapse determining module 204 may be configured to:
determine that the object to be detected has collapsed when the following expression is satisfied, and otherwise determine that it has not collapsed:

m / M > Th

wherein m denotes the number of calculated position deviation amounts for virtual tag points that are greater than the preset first deviation threshold, M denotes the number of calculated position deviation amounts for virtual tag points, and Th denotes the preset first collapse threshold.
In another specific embodiment of the present invention, the collapse determining module 204 may be configured to:
determine that the object to be detected has collapsed when the position deviation amounts of all the determined virtual tag points are greater than the preset second deviation threshold, and otherwise determine that it has not collapsed.
It can be seen from the above that, by applying the embodiments of the present invention, the collapse condition of the object to be detected can be found in time without arranging multiple sensors for each entity mark point; therefore, the workload of collapse detection of the object to be detected is reduced.
Fig. 4 is a schematic structural diagram of another collapse detection device provided by an embodiment of the present invention, corresponding to the method embodiment shown in Fig. 2a. On the basis of the device embodiment shown in Fig. 3, the device may further include: an entity mark point determining module 205 and a second position deviation amount calculating module 206.
Wherein, the entity mark point determining module 205 is configured to obtain the entity mark points of the object to be detected in the target image;
the second position deviation amount calculating module 206 is configured to calculate the position deviation amount, for entity mark points, between each obtained entity mark point and each corresponding entity mark point of the object to be detected stored in advance.
Further, the collapse determining module 204 is configured to determine whether the object to be detected has collapsed according to the calculated position deviation amounts for virtual tag points and the position deviation amounts for entity mark points.
Specifically, the entity mark point determining module 205 may include:
a first binary image obtaining unit, configured to obtain the binary image of the target image;
a first entity mark point obtaining unit, configured to scan any pixel Q1 in the binary image through the following steps according to the preset first pixel scanning order, so as to obtain the entity mark points of the object to be detected in the target image:
judge whether the pixel Q1 is a white point;
if so, judge whether the fourth preset number of pixels around the pixel Q1 are all white points, and if they are, obtain the coordinates of one entity mark point of the object to be detected in the target image according to the coordinates of the fourth preset number of pixels.
Specifically, the entity mark point determining module 205 may include:
a second binary image obtaining unit, configured to obtain the binary image of the target image;
a second entity mark point obtaining unit, configured to determine the entity mark points obtained in the collapse detection nearest to the current time point, and, for the corresponding pixel Q2 in the target image of any determined entity mark point Q2', obtain the entity mark points of the object to be detected in the target image through the following steps:
taking the pixel Q2 as the starting point, scan according to the preset second pixel scanning order until, for some value of n, all pixels satisfying the condition L = n are black points, and record the coordinates of the white points scanned during the scanning, wherein the value of n is a positive integer and L denotes the distance between any pixel in the scanning process and the pixel Q2;
determine one entity mark point of the object to be detected in the target image according to the coordinates of the recorded white points.
It can be seen from the above that, when collapse detection is performed with the scheme provided by the embodiments of the present invention, the staff can still be assisted, according to the image processing results, to find the collapse condition of the object to be detected in time without arranging multiple sensors for each entity mark point; therefore, the workload of collapse detection of the object to be detected is reduced.
As for the device embodiments, since they are basically similar to the method embodiments, the description is relatively simple; for relevant parts, refer to the corresponding parts of the method embodiments.
It should be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the sentence "including a ..." does not exclude the existence of other identical elements in the process, method, article or device including that element.
One of ordinary skill in the art will appreciate that all or part of the steps in the above method embodiments may be completed by a program instructing the relevant hardware, and that the program may be stored in a computer-readable storage medium, such as a ROM/RAM, magnetic disk, optical disk, etc.
The above are merely preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be contained within the protection scope of the present invention.
Claims (8)
- 1. A collapse detection method, characterized in that the method comprises:
when a preset collapse detection condition is satisfied, obtaining a target image of an object to be detected;
obtaining a gray image of the target image;
determining alternative virtual tag points of the gray image according to the gray value of each pixel in the gray image and the gray values of a first preset number of pixels around the pixel;
determining an initial alternative virtual tag point P1 from the determined alternative virtual tag points according to a preset virtual-tag-point determination order;
judging whether a first preset range containing the initial alternative virtual tag point P1 contains any alternative virtual tag point other than the initial alternative virtual tag point P1;
if so, determining that the alternative virtual tag point with the greatest gray value contrast degree among the alternative virtual tag points contained in the first preset range is a virtual tag point of the target image, and updating the other alternative virtual tag points within the range to non-alternative virtual tag points, wherein the gray value contrast degree of any alternative virtual tag point P2 is obtained according to the following expression:
V = Σ_(Y∈S) |I_PY − I_P2|
wherein P_Y is any pixel among a second preset number of pixels around the alternative virtual tag point P2, S is the set of the second preset number of pixels around the alternative virtual tag point P2, V denotes the gray value contrast degree of the alternative virtual tag point P2, and the gray value contrast degree represents the gray difference between the gray value of an alternative virtual tag point and the gray values of the second preset number of pixels around it;
updating the initial alternative virtual tag point, in the preset virtual-tag-point determination order, using the remaining alternative virtual tag points that have not been judged, and returning to the step of judging whether the first preset range containing the initial alternative virtual tag point P1 contains any alternative virtual tag point other than the initial alternative virtual tag point P1, until no alternative virtual tag point remains unjudged;
calculating the position deviation amount, for virtual tag points, between each determined virtual tag point and each corresponding virtual tag point of the object to be detected stored in advance;
determining, according to the calculated position deviation amounts for virtual tag points, whether the object to be detected has collapsed.
- 2. The method according to claim 1, characterized in that obtaining the target image of the object to be detected when the preset collapse detection condition is satisfied comprises:
obtaining the target image of the object to be detected when the current time point is a preset detection time point and/or the current light intensity is a preset illumination intensity.
- 3. The method according to claim 1, characterized in that determining the alternative virtual tag points of the gray image according to the gray value of each pixel in the gray image and the gray values of the first preset number of pixels around the pixel comprises:
determining, through the following steps, whether any pixel P0 in the gray image is an alternative virtual tag point of the gray image:
classifying the first preset number of pixels around the pixel P0 according to the following expression:
C_P = 1, if (I_PX − I_P0) ≥ t; C_P = 0, if |I_PX − I_P0| < t; C_P = −1, if (I_P0 − I_PX) ≥ t
wherein P_X denotes any pixel among the first preset number of pixels around the pixel P0, I_PX denotes the gray value of the pixel P_X, I_P0 denotes the gray value of the pixel P0, t denotes a preset gray value difference threshold, and C_P denotes the class identifier corresponding to the pixel P_X;
judging whether there exist, among the first preset number of pixels, a third preset number of consecutive pixels with the same class identifier;
if so, determining that the pixel P0 is an alternative virtual tag point of the gray image.
- 4. The method according to claim 1, characterized in that determining whether the object to be detected has collapsed according to the calculated position deviation amounts for virtual tag points comprises:
determining that the object to be detected has collapsed when the following expression is satisfied, and otherwise determining that it has not collapsed:
m / M > Th
wherein m denotes the number of calculated position deviation amounts for virtual tag points that are greater than a preset first deviation threshold, M denotes the number of calculated position deviation amounts for virtual tag points, and Th denotes a preset first collapse threshold;
or, determining that the object to be detected has collapsed when the position deviation amounts of all the determined virtual tag points are greater than a preset second deviation threshold, and otherwise determining that it has not collapsed.
- 5. The method according to claim 1, characterized in that, after acquiring the target image of the object to be detected, the method further comprises: acquiring the entity marker points of the object to be detected in the target image; and calculating, for each entity marker point obtained, the entity-marker-point position deviation between that point and the corresponding entity marker point of the object to be detected stored in advance; and in that determining whether the object to be detected has collapsed according to the calculated position deviations of the virtual marker points comprises: determining whether the object to be detected has collapsed according to both the calculated position deviations of the virtual marker points and the position deviations of the entity marker points.
- 6. The method according to claim 5, characterized in that acquiring the entity marker points of the object to be detected in the target image comprises: obtaining a binary image of the target image; and scanning any pixel $Q_1$ in the binary image, in a preset first pixel scanning order, by the following steps, so as to obtain the entity marker points of the object to be detected in the target image: judging whether the pixel $Q_1$ is a white point; if it is, judging whether a fourth preset number of pixels surrounding $Q_1$ are all white points, and, if they are all white points, obtaining the coordinates of one entity marker point of the object to be detected in the target image according to the coordinates of the fourth preset number of pixels.
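Claim 6's entity-marker search can be sketched as a raster scan over the binary image: a white pixel whose entire surrounding neighbourhood is also white yields one marker coordinate derived from those neighbours. In the sketch below, the 8-neighbour offsets, the centroid as the derived coordinate, and the visited-set de-duplication are assumptions; the claim only states that the coordinate is obtained from the surrounding pixels' coordinates:

```python
def find_entity_markers(binary, neighborhood):
    """Raster-scan a binary image (1 = white, 0 = black); when a white
    pixel's whole neighborhood is also white, report the centroid of
    that neighborhood as one entity marker coordinate."""
    h, w = len(binary), len(binary[0])
    markers = []
    visited = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if binary[y][x] != 1 or visited[y][x]:
                continue
            pts = [(y + dy, x + dx) for dy, dx in neighborhood]
            if all(0 <= py < h and 0 <= px < w and binary[py][px] == 1
                   for py, px in pts):
                cy = sum(py for py, _ in pts) / len(pts)
                cx = sum(px for _, px in pts) / len(pts)
                markers.append((cy, cx))
                for py, px in pts:  # avoid re-reporting the same white blob
                    visited[py][px] = True
    return markers
```

On a 5x5 image with a single 3x3 white block, only the block's centre pixel has an all-white neighbourhood, so exactly one marker is reported at the block centre.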
- 7. The method according to claim 5, characterized in that acquiring the entity marker points of the object to be detected in the target image comprises: obtaining a binary image of the target image; determining the entity marker points obtained in the most recent collapse detection before the current point in time; and, for the pixel $Q_2$ in the target image corresponding to any determined entity marker point $Q_2'$, obtaining an entity marker point of the object to be detected in the target image by the following steps: scanning, with the pixel $Q_2$ as the starting point, in a preset second pixel scanning order until, for some value of $n$, all pixels satisfying the condition are black points, and recording during the scan the coordinates of the white points scanned, where $n$ is a positive integer and $L$ denotes the distance between any pixel in the scanning process and the pixel $Q_2$; and determining one entity marker point of the object to be detected in the target image according to the coordinates of the recorded white points.
- 8. A collapse detection device, characterized in that the device comprises: a target image acquisition module, configured to acquire a target image of an object to be detected when a preset collapse detection condition is met; a virtual marker point determination module, configured to determine image corner points in the target image according to a preset corner detection algorithm, and to determine the determined image corner points as virtual marker points of the object to be detected based on the target image; a first position deviation calculation module, configured to calculate, for each determined virtual marker point, the virtual-marker-point position deviation between that point and the corresponding virtual marker point of the object to be detected stored in advance; and a collapse determination module, configured to determine whether the object to be detected has collapsed according to the calculated position deviations of the virtual marker points. Specifically, the virtual marker point determination module comprises a gray-scale image obtaining unit, a candidate virtual marker point determination unit, and a virtual marker point determination unit, wherein: the gray-scale image obtaining unit is configured to obtain a gray-scale image of the target image; the candidate virtual marker point determination unit is configured to determine candidate virtual marker points of the gray-scale image according to the gray value of each pixel in the gray-scale image and the gray values of a first preset number of pixels surrounding that pixel; and the virtual marker point determination unit is configured to: determine an initial candidate virtual marker point $P_1$ from the determined candidate virtual marker points according to a preset virtual marker point determination order; judge whether a first preset range containing the initial candidate virtual marker point $P_1$ contains candidate virtual marker points other than $P_1$; if it does, determine the candidate virtual marker point with the largest gray-value contrast degree among the candidate virtual marker points contained in the first preset range to be one virtual marker point of the target image, and update the other candidate virtual marker points within that range to non-candidate points, where the gray-value contrast degree of any candidate virtual marker point $P_2$ is obtained according to the following formula:

$$V = \sum_{Y \in S} |I_{P_Y} - I_{P_2}|$$

where $P_Y$ denotes any pixel among a second preset number of pixels surrounding the candidate virtual marker point $P_2$, $S$ denotes the set of the second preset number of pixels surrounding $P_2$, and $V$ denotes the gray-value contrast degree of $P_2$, the gray-value contrast degree representing the gray-scale difference between the gray value of a candidate virtual marker point and the gray values of the second preset number of pixels surrounding it; update the initial candidate virtual marker point, according to the preset virtual marker point determination order, from the remaining candidate virtual marker points that have not undergone the judging process; and return to the step of judging whether the first preset range containing the initial candidate virtual marker point $P_1$ contains candidate virtual marker points other than $P_1$, until no candidate virtual marker point remains unprocessed.
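The gray-value contrast degree $V$ in claim 8, combined with the keep-the-maximum rule over each first preset range, amounts to a non-maximum suppression over candidate corners. A sketch, where the 8-neighbour offsets, the Chebyshev window, and the radius value are assumptions not fixed by the claim:

```python
def contrast_score(gray, p, offsets):
    """V = sum over surrounding pixels P_Y of |I_PY - I_P2|,
    the gray-value contrast degree from the claim's formula."""
    y, x = p
    return sum(abs(gray[y + dy][x + dx] - gray[y][x])
               for dy, dx in offsets)

def suppress_candidates(gray, candidates, offsets, radius):
    """Within each window of Chebyshev radius `radius` around a candidate,
    keep only the candidate with the largest contrast score."""
    scores = {p: contrast_score(gray, p, offsets) for p in candidates}
    kept = []
    for p in candidates:
        rivals = [q for q in candidates
                  if max(abs(q[0] - p[0]), abs(q[1] - p[1])) <= radius]
        if all(scores[p] >= scores[q] for q in rivals):
            kept.append(p)
    return kept
```

For instance, a bright isolated pixel scores far higher than a pixel inside a flat region, so only the former survives suppression when both fall in the same window.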
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510701041.8A CN105389806B (en) | 2015-10-26 | 2015-10-26 | One kind is caved in detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105389806A CN105389806A (en) | 2016-03-09 |
CN105389806B true CN105389806B (en) | 2018-05-08 |
Family
ID=55422055
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510701041.8A Active CN105389806B (en) | 2015-10-26 | 2015-10-26 | One kind is caved in detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105389806B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10402652B2 (en) | 2017-06-02 | 2019-09-03 | International Business Machines Corporation | Building black box |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102831751A (en) * | 2012-09-04 | 2012-12-19 | 广东省公路管理局 | Road high-dangerous slope monitoring method based on double-camera imaging technology |
CN104134080A (en) * | 2014-08-01 | 2014-11-05 | 重庆大学 | Method and system for automatically detecting roadbed collapse and side slope collapse of road |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8081798B2 (en) * | 2007-11-20 | 2011-12-20 | Lawrence Livermore National Security, Llc | Method and system for detecting polygon boundaries of structures in images as particle tracks through fields of corners and pixel gradients |
US8209134B2 (en) * | 2008-12-04 | 2012-06-26 | Laura P. Solliday | Methods for modeling the structural health of a civil structure based on electronic distance measurements |
Non-Patent Citations (2)
Title |
---|
Displacement measurement algorithm based on Harris corner detection; Su Hengqiang et al.; Journal of Experimental Mechanics; 2012-02-29; Vol. 27, No. 1; abstract, Section 1.2 on pp. 48-49, Fig. 1 *
Specimen deformation image measurement method based on sub-pixel corner detection; Shao Longtan et al.; Rock and Soil Mechanics; 2008-05-31; Vol. 29, No. 5; pp. 1330-1331, Fig. 2 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107610124B (en) | Furnace mouth image preprocessing method | |
CN103994724A (en) | Method for monitoring two-dimensional displacement and strain of structure based on digital image processing technology | |
CN107119657B (en) | A kind of view-based access control model measurement pit retaining monitoring method | |
JP5591165B2 (en) | Method for detecting deformation area on concrete surface | |
JP5894013B2 (en) | Deterioration management method for concrete surface | |
CN115393727B (en) | Pavement linear crack identification method, electronic equipment and storage medium | |
CN108007374A (en) | A kind of building deformation laser point cloud data grid deviation analysis method | |
CN109472261A (en) | A kind of quantity of stored grains in granary variation automatic monitoring method based on computer vision | |
CN113724259B (en) | Well lid abnormity detection method and device and application thereof | |
CN106408583A (en) | Multi-edge defect detecting method and device | |
CN109410192A (en) | A kind of the fabric defect detection method and its device of multi-texturing level based adjustment | |
CN114241215B (en) | Non-contact detection method and system for apparent cracks of bridge | |
CN105389806B (en) | One kind is caved in detection method and device | |
JPH0765152A (en) | Device and method for monitoring | |
CN112308828A (en) | Artificial intelligence detection method and detection system for air tightness of sealing equipment | |
CN108876747A (en) | A kind of OLED draws the extracting method and device of element | |
JP7297634B2 (en) | Equipment construction inspection method and equipment construction inspection system | |
CN110031043B (en) | Civil engineering building structure real-time intelligent monitoring system | |
CN106646465A (en) | Cascaded constant false alarm rate (CFAR) detection method and cascaded CFAR detection device | |
CN116205884A (en) | Concrete dam crack identification method and device | |
CN115995043A (en) | Transmission line hidden danger target identification method and computer readable storage medium | |
CN207728381U (en) | Shield duct piece automatic assembling system | |
CN112907567A (en) | SAR image ordered artificial structure extraction method based on spatial reasoning method | |
JP2005227114A (en) | System for determining abnormality in water quality, method therefor, image monitor, program therefor, and storage medium therefor | |
CN116403165B (en) | Dangerous chemical leakage emergency treatment method, dangerous chemical leakage emergency treatment device and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||