CN108133460B - Color type state indicator identification method suitable for electric power robot


Info

Publication number
CN108133460B
Authority
CN
China
Prior art keywords
image
indicator
scale
value
ellipse
Prior art date
Legal status
Active
Application number
CN201711167244.9A
Other languages
Chinese (zh)
Other versions
CN108133460A (en)
Inventor
程雷鸣
马路
冯维纲
熊少华
冯维颖
田继辉
熊金梅
Current Assignee
Wuhan Zhongyuan Huadian Science & Technology Co ltd
Original Assignee
Wuhan Zhongyuan Huadian Science & Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Zhongyuan Huadian Science & Technology Co ltd
Priority to CN201711167244.9A
Publication of CN108133460A
Application granted
Publication of CN108133460B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G06T5/70 Denoising; Smoothing
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20028 Bilateral filtering
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/32 Normalisation of the pattern dimensions
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/56 Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a color type state indicator identification method suitable for an electric power robot, which mainly comprises the following steps: (1) image preprocessing, in which the collected original image is denoised and rescaled to accelerate subsequent computation; (2) target detection, in which the color type status indicator device is detected in the collected image; (3) state judgment, in which the state of the indicator is determined from the characteristics of the color type state indicator. The invention can quickly and accurately identify the color type state indicator of a transformer substation and judge its state, so that staff reliably know the true position of the switch and dispatch commands become more accurate and timely.

Description

Color type state indicator identification method suitable for electric power robot
Technical Field
The invention relates to a color type state indicator identification method suitable for an electric power robot, and belongs to the technical fields of digital image processing, pattern recognition and machine learning.
Background
In recent years, China's economy has developed rapidly, and electric power is the foundation stone of every industry. In power dispatching, the state and position of a switch are mainly judged from the auxiliary contacts of the circuit breaker. Owing to corrosion, abrasion, aging and other causes, the auxiliary switch may not actuate fully, so the true position of the switch sometimes cannot be judged correctly, and erroneous information is provided to dispatch command. However, the main switchgear is linked to an on-off indicator, and by observing the state of this indicator the on-off state and fault alarms of the power high-voltage circuit breaker switch can be identified.
Common indicators are largely divided into two categories: color type indicators and pointing type indicators.
At present the color type status indicator is observed mainly by substation operators on duty. Relying on human observation brings several problems:
firstly, high professional skill is demanded of observers, who need special training;
secondly, the working environment is unsafe: the high voltage and discharge hazards of a transformer substation may injure observation and inspection personnel;
thirdly, besides the observation task there are daily inspection tasks; the inspection cycle is long, so problems cannot be discovered in time.
With the development of technology, image processing has been widely applied in many fields, and image processing for power inspection has likewise achieved gratifying results. Robot technology has developed rapidly in recent years and is gradually replacing observation and inspection by operators on duty. Combining image processing with robot patrol saves labor cost, keeps people out of dangerous environments, and allows abnormalities to be found and handled in time.
The prior art still has some problems:
firstly, the robot stops at a slightly different position each time, so the viewing angle of each shot cannot be guaranteed to be completely consistent;
secondly, interference from the external environment, such as over-strong illumination, over-dark illumination, heavy fog and the like.
disclosure of Invention
The present invention is directed to solving the above problems, and provides a color type status indicator recognition method suitable for an electric power robot, which can quickly and accurately identify the color type status indicator and its status, so that the dispatcher reliably knows the true position of the switch and dispatch commands become more accurate and timely.
The invention adopts the following technical scheme:
a color type status indicator identifying method suitable for an electric power robot, comprising the steps of:
(1-1) acquiring and calibrating an indicator and storing related information, wherein the stored related information comprises an indicator position, an indicator image, indicator image characteristics, an indicator detection method and an indicator current state;
(1-2) acquiring an image of the indicator at the current moment as an image to be detected;
(1-3) preprocessing the image obtained in step (1-2) differently according to the detection method;
(1-4) detecting the indicator in the image preprocessed in step (1-3) according to the indicator detection method;
(1-5) determining the status of the indicator based on the color type status indicator characteristic.
The indicator detection method in the step (1-1) comprises 4 methods: a rectangular indicator detecting method, an elliptical indicator detecting method, a color map indicator detecting method, and a feature matching indicator detecting method,
the rectangular indicator detection method is suitable for the rectangular indicator without a frame;
the oval indicator detection method is suitable for the indicators which are round or oval and have no frame;
the color map indicator detection method places no requirement on the shape of the indicator, which may be rectangular, circular or irregular, provided the indicator color is not close to the background color;
the characteristic matching indicator detection method is suitable for indicators with frames.
The preprocessing in the step (1-3) is to eliminate the noise of the collected image to be detected and accelerate the subsequent calculation; when the indicator detection method is a rectangular indicator detection method or an elliptical indicator detection method, the preprocessing steps are as follows:
graying the collected image to be detected to obtain a gray image;
step (1-3-2) counting the gray-level average offset of the gray image from the reference brightness:
E = (1/(w*h)) * Σ_{i=0..h-1} Σ_{j=0..w-1} (g(i,j) - mean)
wherein: e represents an average offset value; mean represents the reference offset value, typically taken as 128; g (i, j) represents the gray value of the image at (i, j); w represents the image width; h represents the image height;
then counting the image weighted offset:
D = (1/(w*h)) * Σ_{k=0..255} |k - mean - E| * hist(k)
wherein D represents the weighted offset; k represents a gray value in the range 0-255; E represents the gray-level average offset value; mean represents the reference offset value, taken as 128; hist(k) represents the number of points in the image with gray value k; w represents the image width; h represents the image height; if |E| > D, the brightness of the image is abnormal: E > 0 indicates over-brightness and E < 0 over-darkness, and the gamma correction conversion parameter is adjusted according to the value of E to obtain a gamma-corrected image;
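By way of illustration, a minimal Python/OpenCV sketch of the brightness-anomaly test of step (1-3-2) follows; the specific gamma rule derived from E is an assumption, since the patent does not state the exact transformation-parameter adjustment.

```python
import cv2
import numpy as np

def gamma_correct_if_abnormal(gray, mean_ref=128):
    """Brightness-anomaly test of step (1-3-2); the gamma rule is illustrative."""
    E = float(np.mean(gray.astype(np.float64) - mean_ref))   # average offset
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    k = np.arange(256)
    D = float(np.sum(np.abs(k - mean_ref - E) * hist)) / gray.size
    if abs(E) <= D:                    # |E| <= D: brightness is normal
        return gray
    # Assumed gamma rule: E > 0 (too bright) -> gamma > 1 darkens,
    # E < 0 (too dark) -> gamma < 1 brightens.
    gamma = 1.0 + E / 255.0
    lut = np.clip(((k / 255.0) ** gamma) * 255.0, 0, 255).astype(np.uint8)
    return cv2.LUT(gray, lut)
```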
step (1-3-3) calculating a bilateral filtering denoising image of the gamma correction image;
step (1-3-4) self-adaptive scale transformation is carried out on the bilateral filtering image to obtain a pre-processed image to be detected;
the adaptive scale transformation specifically comprises: determining a suitable transformation factor scale, wherein the transformation factor scale is calculated according to the following formula:
scale=max(min(1,scale_X),min(1,scale_Y))
scale_X=sw/w,scale_Y=sh/h
wherein scale represents a transform factor; scale _ X represents an X-direction transformation factor; scale _ Y represents a Y-direction transform factor; w represents the width of the image to be detected, and h represents the height of the image to be detected; sw represents the reference picture width, 1920 is taken, sh represents the reference picture height, 1080 is taken.
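A small sketch of the adaptive scale transformation, assuming OpenCV; the INTER_AREA interpolation is an illustrative choice for downscaling.

```python
import cv2

def adaptive_scale(img, sw=1920, sh=1080):
    """Shrink images larger than the 1920x1080 reference; never enlarge."""
    h, w = img.shape[:2]
    scale = max(min(1.0, sw / w), min(1.0, sh / h))
    if scale < 1.0:
        img = cv2.resize(img, None, fx=scale, fy=scale,
                         interpolation=cv2.INTER_AREA)
    return img, scale
```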
And (1-3-5) carrying out self-adaptive canny transformation on the self-adaptive scale transformation image. The adaptive canny transformation automatically calculates the high and low thresholds of the canny transformation, and has better robustness than the common canny transformation. The method comprises the following specific steps:
the first derivatives in the x-direction and y-direction are first calculated according to the following formula.
Figure BDA0001476465520000023
Figure BDA0001476465520000024
Wherein: g denotes the original image, Gx denotes that G derives a first derivative in the x-direction, and Gy denotes that G derives a first derivative in the y-direction.
Then calculating a fusion gradient image;
magGrad(i,j) = |Gx(i,j)| + |Gy(i,j)|
mag = max_{(i,j)} magGrad(i,j)
wherein: magGrad is the fusion gradient map; mag is the peak image gradient.
Then compressing the gray level of the fusion gradient image to 0-mag, and counting a gray histogram;
pmagGrad(i,j)=magGrad(i,j)/bin_size
bin_size=mag/NUM_BINS
wherein: pmagGrad is the compressed gradient map, bin_size is the compression scale factor, and NUM_BINS is the number of compressed gray levels, typically 64.
Then the histogram is accumulated; the gray level at which the accumulated energy exceeds the threshold is mapped back to the original gray scale as the high threshold:
Σ_{k=0..high_thresh} hist(k) > w*h*not_edge
high_thresh = high_thresh * bin_size
wherein: hist represents the gray histogram; high_thresh represents the high threshold; w represents the image width; h represents the image height; not_edge represents the non-boundary ratio, generally 0.95;
low threshold is calculated by high threshold:
low_thresh=high_thresh*ratio
where low _ thresh represents a low threshold; ratio represents the ratio of high and low thresholds, and is generally 0.3;
and finally, after calculating the high and low threshold values, performing canny transformation to obtain a boundary image.
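The adaptive canny procedure of step (1-3-5) can be sketched as follows in Python/OpenCV; the Sobel derivative operator and the L1 fusion |Gx| + |Gy| are assumptions where the patent leaves the operator unspecified.

```python
import cv2
import numpy as np

def adaptive_canny(gray, not_edge=0.95, ratio=0.3, num_bins=64):
    """Derive Canny thresholds from the gradient-magnitude histogram."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag_grad = np.abs(gx) + np.abs(gy)           # fusion gradient map
    mag = float(mag_grad.max())                  # peak image gradient
    if mag == 0.0:
        return np.zeros_like(gray)
    bin_size = mag / num_bins
    hist, _ = np.histogram(mag_grad / bin_size, bins=num_bins,
                           range=(0, num_bins))
    # Accumulate until not_edge of all pixels lie below, then map back.
    high_bin = int(np.searchsorted(np.cumsum(hist), gray.size * not_edge)) + 1
    high_thresh = high_bin * bin_size
    return cv2.Canny(gray, high_thresh * ratio, high_thresh)
```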
When the detection method in step (1-3) is the color map method, the preprocessing comprises performing adaptive scale transformation on the image to be detected, converting the image from RGB space to HSI space, bilaterally filtering the I component, performing histogram equalization on the S component, and converting the image back from HSI space to RGB space. The specific steps are as follows:
(1-3-6) obtaining a scale-converted image by self-adaptive scale conversion: determining a suitable transformation factor scale, wherein the transformation factor scale is calculated according to the following formula:
scale=max(min(1,scale_X),min(1,scale_Y))
scale_X=sw/w,scale_Y=sh/h
wherein scale represents a transform factor; scale _ X represents an X-direction transformation factor; scale _ Y represents a Y-direction transform factor; w represents the width of the image to be detected, and h represents the height of the image to be detected; sw represents the width of the reference image, 1920 is taken, sh represents the height of the reference image, and 1080 is taken;
(1-3-7) converting the scale-transformed image to the HSI color space:
the conversion formula is as follows:
θ = arccos( ((R-G)+(R-B)) / (2*sqrt((R-G)^2 + (R-B)*(G-B))) )
H = θ, if B <= G; H = 360° - θ, if B > G
S = 1 - 3*min(R,G,B)/(R+G+B)
I = (R+G+B)/3
wherein θ is the angle value of the HSI color-space hue component; R, G, B are the red, green and blue color components respectively; H is the hue component, S is the saturation component, and I is the intensity component;
(1-3-8) performing bilateral filtering on the I component in the step (1-3-7) and performing histogram equalization enhancement on the S component in the step (1-3-7);
(1-3-9) converting the image of step (1-3-8) back to the RGB color space:
when 0° <= H < 120°:
B = I*(1-S)
R = I*(1 + S*cos(H)/cos(60°-H))
G = 3*I - B - R
when 120° <= H < 240°, with H' = H - 120°:
R = I*(1-S)
G = I*(1 + S*cos(H')/cos(60°-H'))
B = 3*I - G - R
when 240° <= H < 360°, with H' = H - 240°:
G = I*(1-S)
B = I*(1 + S*cos(H')/cos(60°-H'))
R = 3*I - G - B
wherein: r, G, B are red, green, and blue color components respectively; h is the hue component, S is the saturation component, and I is the luminance component.
When the detection method in the step (1-4) is rectangular, the indicator detection involves morphological dilation processing and rectangle detection. The method specifically comprises the following steps:
(1-4-1) expanding the boundary image of the image preprocessed in the step (1-3);
(1-4-2) extracting the contours of the dilated image and removing those that do not satisfy the conditions: removing contours that are too small or too large; removing contours whose aspect ratio does not meet the condition; removing contours whose area and perimeter do not meet the conditions; and removing contours whose straight-line distribution does not meet the requirement;
(1-4-3) calculating a qualified outline bounding rectangle;
(1-4-4) non-maximum suppression and merging of rectangles, specifically: calculating the overlap ratio of the rectangular areas, and recording the overlap ratio of the current region with each other region whenever the overlap exceeds a threshold; when the overlap of a rectangle is a local extremum, merging the rectangles into a new rectangle and adding it back into the computation.
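Steps (1-4-1) to (1-4-4) might be sketched as follows with the OpenCV 4 contour API; all size, aspect-ratio and overlap thresholds are illustrative assumptions, and the greedy merge is a simplified stand-in for the non-maximum suppression described above.

```python
import cv2
import numpy as np

def overlap_ratio(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    return inter / float(a[2] * a[3] + b[2] * b[3] - inter)

def union_box(a, b):
    x1, y1 = min(a[0], b[0]), min(a[1], b[1])
    x2 = max(a[0] + a[2], b[0] + b[2])
    y2 = max(a[1] + a[3], b[1] + b[3])
    return (x1, y1, x2 - x1, y2 - y1)

def detect_rectangles(edge_img, min_area=100, max_area=100_000, thresh=0.5):
    """Dilate edges, keep plausible contours, merge overlapping boxes."""
    dilated = cv2.dilate(edge_img, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if not (min_area <= w * h <= max_area):
            continue                         # under/over-sized contour
        if not (0.2 <= w / h <= 5.0):
            continue                         # implausible aspect ratio
        boxes.append((x, y, w, h))
    merged = []
    for b in sorted(boxes, key=lambda r: r[2] * r[3], reverse=True):
        for i, m in enumerate(merged):
            if overlap_ratio(b, m) > thresh:
                merged[i] = union_box(b, m)  # merge into a new rectangle
                break
        else:
            merged.append(b)
    return merged
```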
When the detection method in step (1-4) is elliptical, indicator detection involves ellipse detection based on the geometric characteristics of the ellipse; both its speed and its precision exceed those of traditional Hough circle detection. In particular, Hough detection is nearly powerless for ellipses of large eccentricity, whereas this method maintains high detection precision.
(1-4-5) detecting the concavity of the arc segments of the boundary image of the image preprocessed in step (1-3), using the signs of the first derivatives:
[equation image]
wherein: Gx is the first derivative of the image in the x direction; Gy is the first derivative of the image in the y direction; (i,j) denotes the coordinates of an image point;
(1-4-6) discarding arcs that are too short or whose bounding rectangle is too small;
(1-4-7) dividing the convex arcs into the first and third quadrants and the concave arcs into the second and fourth quadrants, by comparing the areas on either side of each arc:
[equation image]
wherein: area_b represents the area under the arc and area_p represents the area above the arc.
(1-4-8) respectively taking arcs a, b and c in three quadrants;
(1-4-9) connecting the starting point of a and the midpoint of b as the starting reference chord, solving a group of parallel chords by an approximation method, and connecting the midpoints of the parallel chords pairwise to obtain a straight line; performing the above operations for a, b and c respectively yields a series of straight lines l1, l2, l3, ..., ln;
(1-4-10) calculating the intersection points c1, c2, c3, ..., cm of the different groups of straight lines; when these points fall within a small neighbourhood, the group of arcs can be considered to form an ellipse; otherwise return to step (1-4-8);
(1-4-11) if an ellipse is formed, estimating the parameters of the ellipse equation according to the following formulas:
γ = q1*q2 - q3*q4
β = (q3*q4 + 1)*(q1 + q2) - (q1*q2 + 1)*(q3 + q4)
[equation images: Kp, Np and the major axis A are computed from γ, β and the chord slopes]
θ = arccos(Kp)
B = A*Np
The general equation is:
(x'*cos θ + y'*sin θ)^2 / A^2 + (y'*cos θ - x'*sin θ)^2 / B^2 = 1
x' = x - x0
y' = y - y0
wherein: q1 is the slope of the parallel chords of arcs a and b; q2 is the slope of the line connecting the endpoints of the parallel chords of arcs a and b; q3 is the slope of the parallel chords of arcs b and c; q4 is the slope of the line connecting the endpoints of the parallel chords of arcs b and c; γ, β, Kp, Np, Ax, x', y' are intermediate variables in the calculation; θ is the inclination angle of the ellipse; (x0, y0) is the center point of the ellipse; A is the major axis of the ellipse, B is the minor axis of the ellipse;
(1-4-12) a point (xi, yi) on arc segments a, b, c is considered to fall on the ellipse if it satisfies the following formula, and the proportion of such points is counted as score1 (see the sketch after these steps):
| (x'*cos θ + y'*sin θ)^2 / A^2 + (y'*cos θ - x'*sin θ)^2 / B^2 - 1 | < d
x' = xi - x0
y' = yi - y0
wherein: d is the tolerance on the distance between the point and the ellipse, generally taken as 1;
(1-4-13) calculating the similarity score2 of the arc segment and the 1/4 ellipse;
(1-4-14) calculating an ellipse score according to the following formula:
score=w1*score1+w2*score2
w1+w2=1
wherein: score is the ellipse score and w1, w2 are the weights, each generally taken as 0.5;
(1-4-15) sorting the ellipses by score from high to low;
(1-4-16) non-maximum suppression and merging of ellipses, specifically: computing the bounding rectangle of every ellipse; taking the bounding rectangle of the highest-scoring ellipse as the initial value and performing rectangular non-maximum suppression and merging; then repeating the rectangular non-maximum suppression and merging within the remaining set until the set is empty.
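To make the point-on-ellipse test of step (1-4-12) concrete, a small Python sketch follows; mapping the pixel tolerance d onto the algebraic residual is an assumption, since the original distance formula is an unrecoverable equation image.

```python
import math

def on_ellipse(x, y, x0, y0, A, B, theta, d=1.0):
    """True when (x, y) lies close to the rotated ellipse."""
    xp, yp = x - x0, y - y0
    u = xp * math.cos(theta) + yp * math.sin(theta)   # ellipse frame
    v = yp * math.cos(theta) - xp * math.sin(theta)
    residual = (u / A) ** 2 + (v / B) ** 2 - 1.0
    # Assumed tolerance: scale the pixel distance d by the smaller semi-axis.
    return abs(residual) <= d / min(A, B)

# score1 is then the fraction of arc points passing this test.
```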
When the detection method in step (1-4) is the color map method, indicator detection involves a color map search.
(1-4-17) using the color map of the target region established during calibration, first searching the whole image in the x direction to obtain n strip images bar1, bar2, ..., barn;
Σ_{i=0..r-1} F(G(i,j)) >= min_num
wherein: min_num is the minimum number of qualifying points, generally 45; G is the color image; (i,j) are image coordinates; r is the number of image rows; F is the relation function;
(1-4-18) for each strip image, searching in the y direction to obtain a series of small sub-region images;
Σ_{j=0..c-1} F(bar(i,j)) >= min_num
wherein: min_num is the minimum number of qualifying points, generally 45; bar is a cut strip image; (i,j) are image coordinates; c is the number of image columns; F is the relation function;
(1-4-19) the specific steps of searching the whole color map are: searching for consecutive pixels falling within the specified RGB interval; searching for consecutive pixels whose R, G and B component relations are consistent with the template image; and recording the start and end points of these pixel runs and the peak and trough filled regions;
(1-4-20) searching the whole image to be detected in the y direction first and then the x direction to obtain a series of sub-region images;
(1-4-21) confirming the indicator region using the coordinate relations of the sub-regions.
When the detection method in step (1-4) is feature matching, indicator detection involves scale-invariant feature transform (SIFT) feature matching.
SIFT features are local image features that remain invariant to rotation, scale change and brightness change, and retain a degree of stability under viewpoint change, affine transformation and noise. They are highly distinctive and information-rich, suitable for fast and accurate matching against massive feature databases; even a few objects generate large numbers of SIFT feature vectors; and an optimized SIFT matching algorithm can even meet real-time requirements.
The detection steps are as follows:
(1-4-22) extracting image features into Scale Invariant Feature Transform (SIFT) features;
(1-4-23) establishing a kd tree by utilizing SIFT characteristics of the template image;
(1-4-24) performing a binary search of the SIFT features of the image to be matched on the kd tree, using a minimum priority queue during the search to build the backtracking index, where the key of the minimum priority queue is the absolute value of the difference of the corresponding dimension's feature values; backtracking in minimum-priority order and stopping when the queue is empty or the upper limit of search count is reached; when a template image feature point corresponds to several candidate feature points, keeping only the optimal and suboptimal values found during the search; after the search ends, screening the final matching result by the Euclidean-distance relation between the matched feature point and the optimal and suboptimal values, the reference rule being min_Dis < max_Dis * 0.6, wherein min_Dis is the Euclidean distance between the feature point and the optimal value and max_Dis is the Euclidean distance between the feature point and the suboptimal value;
(1-4-25) calculating a corresponding transformation matrix H according to the coordinate relation between the image to be matched and the matching points of the matched image;
(1-4-26) mapping the image to be matched to the same viewing angle as the template image using the inverse transformation matrix H_inv of the transformation matrix H.
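Steps (1-4-22) to (1-4-26) map closely onto OpenCV's SIFT detector and FLANN kd-tree matcher; the sketch below is one possible realization. The FLANN parameters are illustrative, and estimating H directly from query to template stands in for the explicit inversion H_inv.

```python
import cv2
import numpy as np

def align_to_template(template_gray, query_gray):
    """SIFT + kd-tree matching with a 0.6 ratio test, then perspective warp."""
    sift = cv2.SIFT_create()
    kp_t, des_t = sift.detectAndCompute(template_gray, None)
    kp_q, des_q = sift.detectAndCompute(query_gray, None)
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5},  # kd-tree
                                  {"checks": 50})
    matches = flann.knnMatch(des_t, des_q, k=2)
    # Keep a match only when best distance < 0.6 * second-best distance.
    good = [m for m, n in matches if m.distance < 0.6 * n.distance]
    src = np.float32([kp_q[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # query -> template
    h, w = template_gray.shape[:2]
    return cv2.warpPerspective(query_gray, H, (w, h))
```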
The determination of the indicator state in step (1-5) is based on the color type state indicator characteristics, specifically: detecting the proportion of pixels falling within the specified RGB interval; detecting the proportion of pixels whose R, G and B component relations are consistent with the template image; and deriving the current indicator state from the two proportions.
The judgment rule is:
(1/(H*W)) * Σ_{i=0..H-1} Σ_{j=0..W-1} F(G(i,j)) >= s
which indicates that the state is consistent with that at calibration time; otherwise it is opposite;
wherein: G represents the color image, H represents the image height, W represents the image width, F is the relation function, and s represents the threshold, typically 0.6.
Drawings
FIG. 1 is a flow chart of the algorithm of the present invention;
Detailed Description
The invention is further explained below through embodiments with reference to the accompanying drawings.
As shown in fig. 1. The method comprises the following steps:
the first step is as follows: collecting and calibrating an indicator and storing related information, wherein the stored related information comprises an indicator position, an indicator image, indicator image characteristics, an indicator detection method and an indicator current state;
the second step is that: and acquiring the image of the indicator at the current moment as an image to be detected, and preprocessing the image. And different preprocessing modes are self-adapted according to different detection methods.
The rectangular indicator detection method is suitable for the rectangular indicator without a frame;
the oval indicator detection method is suitable for the indicators which are round or oval and have no frame;
the color map indicator detection method places no requirement on the shape of the indicator, which may be rectangular, circular or irregular, provided the indicator color is not close to the background color;
the characteristic matching indicator detection method is suitable for indicators with frames.
When the indicator detection method is the rectangular or the elliptical indicator detection method, the preprocessing steps are: (1) graying the image; (2) adaptive brightness correction; (3) bilateral filtering and denoising; (4) adaptive scale transformation; (5) boundary detection by adaptive canny filtering.
Graying the collected image to be detected to obtain a gray image;
the graying formula is as follows: and Gray is 0.299R + 0.587G + 0.114B +0.5, wherein Gray is a Gray value, and R, G, B are three color components of red, green and blue respectively.
Step (1-3-2) self-adaptive brightness correction: a gray-scale average offset value of the gray-scale image from the reference luminance,
Figure BDA0001476465520000081
counting image weighted offsets, wherein: e represents an average offset value; mean represents the reference offset value, typically taken as 128; g (i, j) represents the gray value of the image at (i, j); w represents the image width; h represents the image height;
Figure BDA0001476465520000082
wherein D represents a weighted offset; k represents a gray value, and the value range is 0-255; e represents a gray-scale average offset value; mean represents the reference offset value, taken 128; hist (k) represents the number of points with the gray value of k in the image; w represents the image width; h represents the image height; if E > D, the image brightness is abnormal, E>0 represents an excess, E<0 means too dark. And adjusting the gamma corrected transformation parameters according to the value of E.
Step (1-3-3) calculating a bilateral filtering denoising image of the gamma correction image; meanwhile, the relation between the gray value and the space position is considered, and the boundary position is not changed during denoising.
Step (1-3-4) is to carry out self-adaptive scale transformation on the bilateral filtering image to obtain the image to be detected and preprocessed: the core here is to determine a suitable transformation factor to ensure that the subsequent calculation is accelerated without affecting the accuracy. The calculation formula is as follows:
scale=max(min(1,scale_X),min(1,scale_Y))
scale_X=sw/w,scale_Y=sh/h
wherein scale represents a transform factor, scale _ X represents an X-direction transform factor, and scale _ Y represents a Y-direction transform factor; w represents the image width and h represents the image height; sw represents the width of the reference image, generally 1920 is taken, sh represents the height of the reference image, generally 1080 is taken;
and (1-3-5) carrying out self-adaptive canny transformation on the self-adaptive scale transformation image. The adaptive canny transformation automatically calculates the high and low thresholds of the canny transformation, and has better robustness than the common canny transformation. The method comprises the following specific steps:
the first derivatives in the x-direction and y-direction are first calculated according to the following formula.
Figure BDA0001476465520000091
Figure BDA0001476465520000092
Wherein: g denotes the original image, Gx denotes that G derives a first derivative in the x-direction, and Gy denotes that G derives a first derivative in the y-direction.
Then calculating a fusion gradient image;
magGrad(i,j) = |Gx(i,j)| + |Gy(i,j)|
mag = max_{(i,j)} magGrad(i,j)
wherein: magGrad is the fusion gradient map; mag is the peak image gradient.
Then compressing the gray level of the fusion gradient image to 0-mag, and counting a gray histogram;
pmagGrad(i,j)=magGrad(i,j)/bin_size
bin_size=mag/NUM_BINS
wherein: pmagGrad is the compressed gradient map, bin_size is the compression scale factor, and NUM_BINS is the number of compressed gray levels, typically 64.
Then the histogram is accumulated; the gray level at which the accumulated energy exceeds the threshold is mapped back to the original gray scale as the high threshold:
Σ_{k=0..high_thresh} hist(k) > w*h*not_edge
high_thresh = high_thresh * bin_size
wherein: hist represents the gray histogram; high_thresh represents the high threshold; w represents the image width; h represents the image height; not_edge represents the non-boundary ratio, generally 0.95;
low threshold is calculated by high threshold:
low_thresh=high_thresh*ratio
where low _ thresh represents a low threshold; ratio represents the ratio of high and low thresholds, and is generally 0.3;
and finally, after calculating the high and low threshold values, performing canny transformation to obtain a boundary image.
When the indicator detection method is a color map indicator detection method, the preprocessing steps are as follows: (1) self-adaptive scale transformation, (2) converting an image from an RGB space to an HSI space, (3) performing bilateral filtering on an I component, (4) performing histogram equalization on an S component, and (5) converting the image from the HSI space to the RGB space.
Step (1-3-6) self-adaptive scale transformation to obtain a scale transformation image: determining a suitable transformation factor scale, wherein the transformation factor scale is calculated according to the following formula:
scale=max(min(1,scale_X),min(1,scale_Y))
scale_X=sw/w,scale_Y=sh/h
wherein scale represents a transform factor; scale _ X represents an X-direction transformation factor; scale _ Y represents a Y-direction transform factor; w represents the width of the image to be detected, and h represents the height of the image to be detected; sw represents the width of the reference image, 1920 is taken, sh represents the height of the reference image, and 1080 is taken;
step (1-3-7) converting the scale-transformed image to the HSI color space:
the conversion formula is as follows:
θ = arccos( ((R-G)+(R-B)) / (2*sqrt((R-G)^2 + (R-B)*(G-B))) )
H = θ, if B <= G; H = 360° - θ, if B > G
S = 1 - 3*min(R,G,B)/(R+G+B)
I = (R+G+B)/3
wherein θ is the angle value of the HSI color-space hue component; R, G, B are the red, green and blue color components respectively; H is a hue component, S is a saturation component, and I is an intensity component;
step (1-3-8) is to perform bilateral filtering on the I component in step (1-3-7) and perform histogram equalization enhancement on the S component in step (1-3-7);
step (1-3-9) converting the image of step (1-3-8) back to the RGB color space:
when 0° <= H < 120°:
B = I*(1-S)
R = I*(1 + S*cos(H)/cos(60°-H))
G = 3*I - B - R
when 120° <= H < 240°, with H' = H - 120°:
R = I*(1-S)
G = I*(1 + S*cos(H')/cos(60°-H'))
B = 3*I - G - R
when 240° <= H < 360°, with H' = H - 240°:
G = I*(1-S)
B = I*(1 + S*cos(H')/cos(60°-H'))
R = 3*I - G - B
wherein: r, G, B are red, green, and blue color components respectively; h is the hue component, S is the saturation component, and I is the luminance component.
When the indicator detection method is the feature matching detection method, no preprocessing is performed.
The third step: detect the position of the indicator; the detection mode adapts to the detection method.
When the indicator detection method is a rectangular detection method, the indicator detection steps are as follows: (1) performing expansion processing on the boundary image in the step (1-3-5), and (2) calculating a rectangle.
Expanding the boundary image of the image preprocessed in the step (1-3) in the step (1-4-1);
step (1-4-2) extracting the contours of the dilated image and removing those that do not satisfy the conditions: removing contours that are too small or too large; removing contours whose aspect ratio does not meet the condition; removing contours whose area and perimeter do not meet the conditions; and removing contours whose straight-line distribution does not meet the requirement;
step (1-4-3) calculating a contour bounding rectangle meeting the conditions;
step (1-4-4) non-maximum suppression and merging of rectangles, specifically: calculating the overlap ratio of the rectangular areas, and recording the overlap ratio of the current region with each other region whenever the overlap exceeds a threshold; when the overlap of a rectangle is a local extremum, merging the rectangles into a new rectangle and adding it back into the computation.
When the indicator detection method is the ellipse detection method, the indicator detection step is: (1) performing ellipse detection on the boundary image of step (1-3-5). The detection is based on the geometric characteristics of the ellipse; both its speed and its precision exceed those of traditional Hough circle detection. In particular, Hough detection is nearly powerless for ellipses of large eccentricity, whereas this method maintains high detection precision.
Step (1-4-5) detecting the concavity and convexity of the boundary image arc segment of the image preprocessed in the step (1-3-5):
Figure BDA0001476465520000111
wherein: the first derivative Gx in the x direction of the image; the first derivative Gy in the y direction of the image, where (i, j) denotes the coordinates of the image point;
step (1-4-6) discarding arcs that are too short or whose bounding rectangle is too small;
step (1-4-7) dividing the convex arcs into the first and third quadrants and the concave arcs into the second and fourth quadrants, by comparing the areas on either side of each arc:
[equation image]
wherein: area_b represents the area under the arc and area_p represents the area above the arc.
Taking arcs a, b and c in three quadrants in the step (1-4-8);
step (1-4-9) connecting the starting point of a and the midpoint of b as the starting reference chord, solving a group of parallel chords by the clip-approximation method, and connecting the midpoints of the parallel chords pairwise to obtain a straight line; performing the above operations for a, b and c respectively yields a series of straight lines l1, l2, l3, ..., ln;
step (1-4-10) calculating the intersection points c1, c2, c3, ..., cm of the different groups of straight lines; when these points fall within a small neighbourhood, the group of arcs can be considered to form an ellipse; otherwise return to step (1-4-8);
step (1-4-11) if an ellipse is formed, estimating the parameters of the ellipse equation according to the following formulas:
γ = q1*q2 - q3*q4
β = (q3*q4 + 1)*(q1 + q2) - (q1*q2 + 1)*(q3 + q4)
[equation images: Kp, Np and the major axis A are computed from γ, β and the chord slopes]
θ = arccos(Kp)
B = A*Np
The general equation is:
(x'*cos θ + y'*sin θ)^2 / A^2 + (y'*cos θ - x'*sin θ)^2 / B^2 = 1
x' = x - x0
y' = y - y0
wherein: q1 is the slope of the parallel chords of arcs a and b; q2 is the slope of the line connecting the endpoints of the parallel chords of arcs a and b; q3 is the slope of the parallel chords of arcs b and c; q4 is the slope of the line connecting the endpoints of the parallel chords of arcs b and c; γ, β, Kp, Np, Ax, x', y' are intermediate variables in the calculation; θ is the inclination angle of the ellipse; (x0, y0) is the center point of the ellipse; A is the major axis of the ellipse and B is the minor axis of the ellipse;
point (x) on arc segment a, b, c of step (1-4-12)i,yi) If the following formula is satisfied, the ellipse is considered to fall on, the statistical proportion is socre1,
Figure BDA0001476465520000126
x′=xi-x0
y′=yi-y0
wherein: d is the distance between the point and the ellipse, and is generally 1;
step (1-4-13) calculating the similarity score2 between the arc segments and a 1/4 ellipse;
calculating the ellipse score according to the following formula in the steps (1-4-14):
score=w1*score1+w2*score2
w1+w2=1
wherein: score is the ellipse score and w1, w2 are the weights, generally 0.5 each;
step (1-4-15) sorting the ellipses from high to low according to the scores;
step (1-4-16) non-maximum suppression and merging of ellipses, specifically: computing the bounding rectangle of every ellipse; taking the bounding rectangle of the highest-scoring ellipse as the initial value and performing rectangular non-maximum suppression and merging; then repeating the rectangular non-maximum suppression and merging within the remaining set until the set is empty.
When the indicator detection method is the color map detection method, indicator detection involves a color map search.
step (1-4-17) using the color map of the target region established during calibration, first searching the whole image in the x direction to obtain n strip images bar1, bar2, ..., barn;
Σ_{i=0..r-1} F(G(i,j)) >= min_num
wherein: min_num is the minimum number of qualifying points, generally 45; G is the color image; (i,j) are image coordinates; r is the number of image rows; F is the relation function;
step (1-4-18) searching each strip image in the y direction to obtain a series of small sub-region images;
Σ_{j=0..c-1} F(bar(i,j)) >= min_num
wherein: min_num is the minimum number of qualifying points, generally 45; bar is a cut strip image; (i,j) are image coordinates; c is the number of image columns; F is the relation function;
the steps (1-4-19) are concrete steps of searching the whole digital map: searching continuous pixels falling in a specified RGB interval; searching continuous pixels with the component relation of R, G and B consistent with the template image; counting the starting points and the end points of the pixels, and the wave crest and wave trough filling areas;
step (1-4-20) searching the whole image to be detected according to the directions of first y and then x to obtain a series of subarea images;
and (1-4-21) confirming the indicator area by utilizing the coordinate relation of the sub-areas.
When the indicator detection method is the feature matching detection method, indicator detection involves scale-invariant feature transform (SIFT) feature matching.
SIFT features are local image features that remain invariant to rotation, scale change and brightness change, and retain a degree of stability under viewpoint change, affine transformation and noise. They are highly distinctive and information-rich, suitable for fast and accurate matching against massive feature databases; even a few objects generate large numbers of SIFT feature vectors; and an optimized SIFT matching algorithm can even meet real-time requirements.
The detection steps are as follows:
step (1-4-22) extracting image features as scale-invariant feature transform (SIFT) features;
step (1-4-23) establishing a kd tree by utilizing SIFT characteristics of a template image;
step (1-4-24) performing a binary search of the SIFT features of the image to be matched on the kd tree, using a minimum priority queue during the search to build the backtracking index, where the key of the minimum priority queue is the absolute value of the difference of the corresponding dimension's feature values; backtracking in minimum-priority order and stopping when the queue is empty or the upper limit of search count is reached; when a template image feature point corresponds to several candidate feature points, keeping only the optimal and suboptimal values found during the search; after the search ends, screening the final matching result by the Euclidean-distance relation between the matched feature point and the optimal and suboptimal values, the reference rule being min_Dis < max_Dis * 0.6, wherein min_Dis is the Euclidean distance between the feature point and the optimal value and max_Dis is the Euclidean distance between the feature point and the suboptimal value;
step (1-4-25) calculating a corresponding transformation matrix H according to the coordinate relation between the image to be matched and the matching point of the matched image;
step (1-4-26) mapping the image to be matched to the same viewing angle as the template image using the inverse transformation matrix H_inv of the transformation matrix H.
The fourth step: detect the proportion of pixels falling within the specified RGB interval; detect the proportion of pixels whose R, G and B component relations are consistent with the template image; derive the current indicator state from the two proportions.
The judgment rule is:
(1/(H*W)) * Σ_{i=0..H-1} Σ_{j=0..W-1} F(G(i,j)) >= s
which indicates that the state is consistent with that at calibration time; otherwise it is opposite;
wherein: G represents the color image, H represents the image height, W represents the image width, F is the relation function, and s represents the threshold, typically 0.6. The percentage r1 of pixels whose R, G and B components fall within the specified interval and the percentage r2 of pixels whose R, G and B components satisfy the relation function are counted, and the current indicator state is judged from these statistics.
The current status of the indicator is determined as follows:
if r1 >= s or r2 >= s, the current indicator state is considered consistent with that at calibration time;
if r1 < s and r2 < s, the current indicator state is considered opposite to that at calibration time.
although the embodiments of the present invention have been described with reference to the accompanying drawings, it should be understood by those skilled in the art that various changes and modifications can be made without inventive faculty, and the scope of the invention is not limited by the embodiments of the invention.

Claims (7)

1. A color type status indicator identifying method suitable for an electric power robot, characterized by comprising the steps of:
(1-1) acquiring and calibrating an indicator and storing related information, wherein the stored related information comprises an indicator position, an indicator image, indicator image characteristics, an indicator detection method and an indicator current state; the indicator detection method comprises the following steps: the detection method of the rectangular indicator is suitable for the indicator which is rectangular and has no frame; the oval indicator detection method is suitable for the indicators which are round or oval and have no frame; the characteristic matching indicator detection method is suitable for indicators with frames;
(1-2) acquiring an image of the indicator at the current moment as an image to be detected;
(1-3) carrying out different preprocessing on the image to be detected obtained in the step (1-2) according to different indicator detection methods;
(1-4) detecting the indicator in the image preprocessed in step (1-3) according to the indicator detection method;
(1-5) determining an indicator state based on the color type state indicator feature;
the preprocessing in the step (1-3) is to eliminate the noise of the collected image to be detected and accelerate the subsequent calculation; when the indicator detection method is a rectangular indicator detection method or an elliptical indicator detection method, the preprocessing steps are as follows:
(3-1) graying the collected image to be detected to obtain a grayscale image;
(3-2) adaptive brightness correction of the gray image: counting the gray-level average offset of the gray image from the reference brightness:
E = (1/(w*h)) * Σ_{i=0..h-1} Σ_{j=0..w-1} (g(i,j) - mean)
wherein: E represents the average offset value; mean represents the reference offset value, taken as 128; g(i,j) represents the gray value of the image at (i,j); w represents the image width; h represents the image height;
counting the image weighted offset:
D = (1/(w*h)) * Σ_{k=0..255} |k - mean - E| * hist(k)
wherein D represents the weighted offset; k represents a gray value in the range 0-255; E represents the gray-level average offset value; mean represents the reference offset value, taken as 128; hist(k) represents the number of points in the image with gray value k; w represents the image width; h represents the image height; if |E| > D, the brightness of the image is abnormal, E > 0 representing over-brightness and E < 0 over-darkness, and the gamma correction conversion parameter is adjusted according to the value of E to obtain a gamma-corrected image;
(3-3) calculating a bilateral filtering de-noising image of the gamma correction image;
(3-4) carrying out self-adaptive scale transformation on the bilateral filtering image to obtain a scale transformation image: determining a suitable transformation factor scale, wherein the transformation factor scale is calculated according to the following formula:
scale=max(min(1,scale_X),min(1,scale_Y))
scale_X=sw/w,scale_Y=sh/h
wherein scale represents a transform factor; scale _ X represents an X-direction transformation factor; scale _ Y represents a Y-direction transform factor; w represents the width of the image to be detected, and h represents the height of the image to be detected; sw represents the width of the reference image, 1920 is taken, sh represents the height of the reference image, and 1080 is taken;
(3-5) carrying out self-adaptive canny filtering on the scale transformation image to detect the boundary to obtain a boundary image:
(3-5-1) computing the first derivatives of the image in the x and y directions:
Gx = ∂G/∂x
Gy = ∂G/∂y
wherein: G represents the original image, Gx represents the first derivative of G in the x direction, and Gy represents the first derivative of G in the y direction;
(3-5-2) calculating the image gradient peak value mag and the fusion gradient image;
magGrad(i,j) = |Gx(i,j)| + |Gy(i,j)|
mag = max_{(i,j)} magGrad(i,j)
wherein: magGrad is the fusion gradient map and mag is the statistical peak;
(3-5-3) compressing the gray level of the fusion gradient image to 0-mag, and counting a gray histogram;
pmagGrad(i,j)=magGrad(i,j)/bin_size
bin_size=mag/NUM_BINS
wherein: pmagGrad is a gradient map after compression, bin _ size is a compression scale factor, NUM _ BINS is a compressed gray level, and 64 is taken;
(3-5-4) accumulating the histogram; the gray level at which the accumulated energy exceeds the threshold is mapped back to the original gray scale as the high threshold;
Σ_{k=0..high_thresh} hist(k) > w*h*not_edge
high_thresh = high_thresh * bin_size
wherein: hist represents the gray histogram; high_thresh represents the high threshold; w represents the image width; h represents the image height; not_edge represents the non-boundary ratio, taken as 0.95;
(3-5-5) solving a low threshold value through a high threshold value;
low_thresh=high_thresh*ratio
where low _ thresh represents a low threshold; ratio represents the ratio of high and low thresholds, and is taken as 0.3;
(3-5-6) utilizing two thresholds to solve canny transformation to obtain a boundary image.
2. A color type status indicator recognition method for an electric power robot as claimed in claim 1, wherein the preprocessing of step (1-3) eliminates noise in the image to be detected and accelerates subsequent calculation; when the indicator detection method is the color map indicator detection method, the preprocessing steps are as follows:
(4-1) carrying out self-adaptive scale transformation on the image to be detected to obtain a scale transformation image: determining a suitable transformation factor scale, wherein the transformation factor scale is calculated according to the following formula:
scale=max(min(1,scale_X),min(1,scale_Y))
scale_X=sw/w,scale_Y=sh/h
wherein scale represents a transform factor; scale _ X represents an X-direction transformation factor; scale _ Y represents a Y-direction transform factor; w represents the width of the image to be detected, and h represents the height of the image to be detected; sw represents the width of the reference image, 1920 is taken, sh represents the height of the reference image, 1080 is taken;
(4-2) converting the scaled image to the HSI color space:
the conversion formula is as follows:
θ = arccos( ((R-G)+(R-B)) / (2*sqrt((R-G)^2 + (R-B)*(G-B))) )
H = θ, if B <= G; H = 360° - θ, if B > G
S = 1 - 3*min(R,G,B)/(R+G+B)
I = (R+G+B)/3
wherein θ is the angle value of the hue component of the HSI color space; R, G, B are the red, green and blue color components respectively; H is a hue component, S is a saturation component, and I is an intensity component;
(4-3) carrying out bilateral filtering on the I component in the step (4-2), and carrying out histogram equalization on the S component in the step (4-2);
(4-4) converting the image processed in step (4-3) back to the RGB color space:
when 0° <= H < 120°: B = I*(1-S), R = I*(1 + S*cos(H)/cos(60°-H)), G = 3*I - B - R;
when 120° <= H < 240°, with H' = H - 120°: R = I*(1-S), G = I*(1 + S*cos(H')/cos(60°-H')), B = 3*I - G - R;
when 240° <= H < 360°, with H' = H - 240°: G = I*(1-S), B = I*(1 + S*cos(H')/cos(60°-H')), R = 3*I - G - B;
wherein: R, G, B are the red, green and blue color components respectively; H is the hue component, S is the saturation component, and I is the intensity component.
3. A color type status indicator identifying method for an electric power robot as claimed in claim 1, wherein the preprocessing of step (1-3) eliminates noise in the image to be detected and accelerates subsequent calculation; when the indicator detection method is the feature matching detection method, no preprocessing is performed.
4. A color type status indicator recognition method for an electric power robot as claimed in claim 1 wherein said step (1-4) detects the indicator in the image to be detected, and when the detection method is a rectangle detection method, the step of detecting the indicator is:
(6-1) expanding the boundary image of the image preprocessed in the step (1-3);
(6-2) extracting the contours of the dilated image and removing those that do not satisfy the conditions: removing contours that are too small or too large; removing contours whose aspect ratio does not meet the condition; removing contours whose area and perimeter do not meet the conditions; and removing contours whose straight-line distribution does not meet the requirement;
(6-3) calculating a qualified outline bounding rectangle;
(6-4) non-maximum suppression and merging of rectangles, specifically: calculating the overlap ratio of the rectangular areas, and recording the overlap ratio of the current region with each other region whenever the overlap exceeds a threshold; when the overlap of a rectangle is a local extremum, merging the rectangles into a new rectangle and adding it back into the computation.
5. A color type status indicator identifying method for an electric power robot as claimed in claim 1 wherein said step (1-4) detects the indicator in the image to be detected, and when the detecting method is an ellipse detecting method, the indicator detecting step is:
(7-1) detecting the concavity of the arc segments of the boundary image of the image preprocessed in step (1-3), using the signs of the first derivatives:
[equation image]
wherein: Gx is the first derivative of the image in the x direction; Gy is the first derivative of the image in the y direction; (i,j) denotes the coordinates of an image point;
(7-2) discarding arcs that are too short or whose bounding rectangle is too small;
(7-3) dividing the convex arcs into the first and third quadrants and the concave arcs into the second and fourth quadrants, by comparing the areas on either side of each arc:
[equation image]
wherein: area_b represents the area under the arc, and area_p represents the area above the arc;
(7-4) selecting arc segments of 3 quadrants, and estimating whether an ellipse is formed; the method comprises the following specific steps:
(7-4-1) respectively drawing arcs a, b and c in three quadrants;
(7-4-2) connecting the starting point of a and the midpoint of b as the starting reference chord, solving a group of parallel chords by the clip-approximation method, and connecting the midpoints of the parallel chords pairwise to obtain straight lines; performing these operations for arcs a, b and c respectively yields a series of straight lines l1, l2, l3, ..., ln;
(7-4-3) calculating the intersection points c1, c2, c3, ..., cm of the different groups of straight lines; when these points fall within a small neighbourhood, the group of arcs is considered to form an ellipse; otherwise return to step (7-4);
(7-5) if an ellipse is formed, calculating an ellipse equation;
estimating the parameters of the ellipse equation according to the following formulas:
γ = q1*q2 - q3*q4
β = (q3*q4 + 1)*(q1 + q2) - (q1*q2 + 1)*(q3 + q4)
[equation image: Kp, Np, θ and the major axis A are computed from γ, β and the chord slopes]
B = A*Np
the equation is:
(x'*cos θ + y'*sin θ)^2 / A^2 + (y'*cos θ - x'*sin θ)^2 / B^2 = 1
x' = x - x0
y' = y - y0
wherein: q1 is the slope of the parallel chords of arcs a and b; q2 is the slope of the line connecting the endpoints of the parallel chords of arcs a and b; q3 is the slope of the parallel chords of arcs b and c; q4 is the slope of the line connecting the endpoints of the parallel chords of arcs b and c; γ, β, Kp, Np, Ax, x', y' are respectively intermediate variables in the calculation; θ is the inclination angle of the ellipse; (x0, y0) is the center point of the ellipse; A is the major axis of the ellipse and B is the minor axis of the ellipse;
(7-6) evaluating the ellipse scores, and sorting all ellipses according to the scores; the method comprises the following specific steps:
(7-6-1) a point (xi, yi) on arc segments a, b, c is considered to fall on the ellipse if it satisfies the following formula, the proportion of such points being counted as score1:
| (x'*cos θ + y'*sin θ)^2 / A^2 + (y'*cos θ - x'*sin θ)^2 / B^2 - 1 | < d
x' = xi - x0
y' = yi - y0
wherein: d is the tolerance on the distance between the point and the ellipse, taken as 1;
(7-6-2) calculating the similarity score2 of the arc segment and the 1/4 ellipse;
(7-6-3) calculating an ellipse score according to the following formula:
score=w1*score1+w2*score2
w1+w2=1
wherein: score is the ellipse score and w1, w2 are the weights, 0.5 respectively;
(7-7) sorting the ellipses according to the scores from high to low;
(7-8) performing non-maximum suppression and merging on the ellipses, which comprises the following specific steps: calculating the bounding rectangles of all ellipses; taking the bounding rectangle of the ellipse with the highest score as the initial value, and performing rectangular non-maximum suppression and merging; and repeating the rectangular non-maximum suppression and merging on the remaining set until the set is empty.
6. The color type status indicator identification method for an electric power robot as claimed in claim 1, wherein said step (1-4) detects the indicator in the image to be detected, and when the detection method is a feature-matching indicator detection method, the detection steps are:
(9-1) extracting image features as Scale Invariant Feature Transform (SIFT) features;
(9-2) building a kd tree by using SIFT features of the template image;
(9-3) performing a binary search with the SIFT features of the image to be matched on the kd tree; during the search, establishing an index for the backtracking search with a minimum priority queue, wherein the key value of the minimum priority queue is the absolute value of the difference of the corresponding dimension feature values; backtracking and searching in minimum-priority-queue order, and stopping the search when the queue is empty or the upper limit of the number of searches is reached; when a template image feature point corresponds to a plurality of feature points to be matched, retaining only the best and second-best values found during the search; after the search is finished, screening out the final matching result according to the Euclidean distances between the matched feature point and the best and second-best values, the reference criterion being min_Dis < max_Dis × 0.6, wherein min_Dis is the Euclidean distance between the feature point and the best value, and max_Dis is the Euclidean distance between the feature point and the second-best value;
(9-4) calculating the corresponding transformation matrix H from the coordinate correspondence between the matching points of the image to be matched and the template image;
and (9-5) mapping the image to be matched to the same viewing angle as the template image by using the inverse transformation matrix H_inv of the transformation matrix H.
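A sketch of steps (9-1) through (9-5) using OpenCV, whose FLANN matcher provides a kd-tree with a bounded backtracking search; the file names and the index/search parameters (trees=4, checks=64, RANSAC threshold 5.0) are placeholders not fixed by the claim, while the 0.6 distance factor comes from step (9-3):

import cv2
import numpy as np

template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
query = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)

# (9-1) SIFT features for both images.
sift = cv2.SIFT_create()
kp_t, des_t = sift.detectAndCompute(template, None)
kp_q, des_q = sift.detectAndCompute(query, None)

# (9-2)/(9-3) kd-tree on the template descriptors; `checks` caps the
# number of backtracking steps, mirroring the search-count upper limit.
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=4),   # 1 = kd-tree index
                              dict(checks=64))
matches = flann.knnMatch(des_q, des_t, k=2)                 # best + second best

# Ratio screening with the claim's 0.6 factor: min_Dis < 0.6 * max_Dis.
good = [m[0] for m in matches
        if len(m) == 2 and m[0].distance < 0.6 * m[1].distance]

# (9-4) homography from the matched coordinates (query -> template here,
# so warping with H plays the role of applying H_inv in the claim).
src = np.float32([kp_q[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_t[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# (9-5) map the query into the template's viewing angle.
h, w = template.shape
aligned = cv2.warpPerspective(query, H, (w, h))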
7. The color type status indicator identification method for an electric power robot as claimed in claim 1, wherein said step (1-5) of determining the state of the indicator based on the color type status indicator characteristic comprises the following steps: detecting the proportion of pixels falling in a specified RGB interval; detecting the proportion of pixels whose R, G and B component relationship is consistent with the template image; and deriving the current indicator state from the two proportions,
the judgment rule being:
r1 = (number of pixels of g whose R, G and B components fall in the specified interval) / (H × W)
r2 = (number of pixels of g whose R, G and B components satisfy the relation function F) / (H × W)
when the rule is satisfied, the current state is consistent with that at calibration time; otherwise, it is opposite;
wherein: g represents the color image, H represents the image height, W represents the image width, F is the relation function, and s is a threshold value, taken as 0.6; r1 is the percentage of pixels of the target region whose R, G and B components fall in the specified interval; r2 is the percentage of pixels whose R, G and B components satisfy the relation function; the current indicator state is judged as follows: if r1 ≥ s or r2 ≥ s, the current indicator state is considered consistent with that at calibration time; if r1 < s and r2 < s, the current indicator state is considered opposite to that at calibration time.
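The two-proportion decision of claim 7 can be sketched directly; the RGB bounds and the component-relation predicate below are illustrative stand-ins for the values fixed at calibration time:

import numpy as np

def indicator_state(region, lo, hi, relation, s=0.6):
    """region: (H, W, 3) RGB array of the indicator area.
    r1: fraction of pixels inside the calibrated RGB interval [lo, hi].
    r2: fraction of pixels satisfying the component-relation predicate."""
    region = np.asarray(region, dtype=int)  # int: avoids uint8 overflow below
    n = region.shape[0] * region.shape[1]
    r1 = np.all((region >= lo) & (region <= hi), axis=2).sum() / n
    r, g, b = region[..., 0], region[..., 1], region[..., 2]
    r2 = relation(r, g, b).sum() / n
    # r1 >= s or r2 >= s: same state as at calibration; otherwise opposite.
    return "consistent with calibration" if (r1 >= s or r2 >= s) else "opposite"

# Example: an indicator calibrated as clearly red-dominant.
state = indicator_state(
    np.random.randint(0, 256, (32, 32, 3)),
    lo=(150, 0, 0), hi=(255, 90, 90),
    relation=lambda r, g, b: (r > g + 40) & (r > b + 40),
)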
CN201711167244.9A 2017-11-21 2017-11-21 Color type state indicator identification method suitable for electric power robot Active CN108133460B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711167244.9A CN108133460B (en) 2017-11-21 2017-11-21 Color type state indicator identification method suitable for electric power robot

Publications (2)

Publication Number Publication Date
CN108133460A CN108133460A (en) 2018-06-08
CN108133460B true CN108133460B (en) 2021-10-12

Family

ID=62388771

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110008881B (en) * 2019-03-28 2021-03-12 北京农业信息技术研究中心 Method and device for identifying cow behaviors of multiple moving targets
CN110222555B (en) * 2019-04-18 2022-12-20 灏图科技(上海)有限公司 Method and device for detecting skin color area
CN110082620B (en) * 2019-05-05 2021-09-24 北京云迹科技有限公司 Charging pile working state detection method and device
CN112990148B (en) * 2021-05-07 2021-08-03 武汉理工大学 Target identification method and system for intelligent transfer robot
CN113538568B (en) * 2021-08-04 2024-01-12 国网浙江省电力有限公司嘉兴供电公司 Robot switching operation image processing method and substation robot

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP6098784B2 (en) * 2012-09-06 2017-03-22 カシオ計算機株式会社 Image processing apparatus and program

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
JP2002056387A (en) * 2000-08-07 2002-02-20 Toko Electric Corp Recognition processing device and recognition processing method for state indicator
CN102129673A (en) * 2011-04-19 2011-07-20 大连理工大学 Color digital image enhancing and denoising method under random illumination
CN102663358A (en) * 2012-03-29 2012-09-12 山西省电力公司晋中供电分公司 Video image identification method of operating state of secondary protection equipment of substation
CN103345766A (en) * 2013-06-21 2013-10-09 东软集团股份有限公司 Method and device for identifying signal light
CN106250902A (en) * 2016-07-29 2016-12-21 武汉大学 Power system on off state detection method based on characteristics of image template matching
CN106570865A (en) * 2016-11-08 2017-04-19 国家电网公司 Digital-image-processing-based switch state detecting system of power equipment
CN106600569A (en) * 2016-11-28 2017-04-26 浙江宇视科技有限公司 Signal lamp color effect enhancement processing method and apparatus thereof
CN107103330A (en) * 2017-03-31 2017-08-29 深圳市浩远智能科技有限公司 A kind of LED status recognition methods and device

Non-Patent Citations (4)

Title
"A fast and effective ellipse detector for embedded vision applications";Michele Fornaciari等;《Pattern Recognition》;20141130;第47卷(第11期);全文 *
"Fast and robust optic disc detection using pyramidal decomposition and Hausdorff-based template matching";M. Lalonde等;《IEEE Transactions on Medical Imaging》;20011130;第20卷(第11期);全文 *
"基于KD树的海量图像匹配技术";张小莉;《计算机时代》;20140731;第2014年卷(第07期);第40-42、45页 *
"基于栅格地图的智能车辆运动目标检测";周俊静等;《***工程与电子技术》;20150228;第37卷(第2期);全文 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant