CN110728304A - Cutter image identification method for on-site investigation - Google Patents

Cutter image identification method for on-site investigation

Info

Publication number
CN110728304A
CN110728304A (application CN201910866132.5A)
Authority
CN
China
Prior art keywords
target object
object region
pixel
image
pixels
Prior art date
Legal status
Granted
Application number
CN201910866132.5A
Other languages
Chinese (zh)
Other versions
CN110728304B (en)
Inventor
刘颖
李钊
公衍超
林庆帆
王富平
王玲
薛刚
梁伟
卢津
王昊
李兴
Current Assignee
Xian University of Posts and Telecommunications
Original Assignee
Xian University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Xian University of Posts and Telecommunications filed Critical Xian University of Posts and Telecommunications
Priority to CN201910866132.5A
Publication of CN110728304A
Application granted
Publication of CN110728304B
Legal status: Expired - Fee Related

Classifications

    • G06F 18/23213: Pattern recognition; clustering techniques; non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions, with a fixed number of clusters, e.g. K-means clustering
    • G06F 18/2411: Pattern recognition; classification techniques relating to the classification model, based on the proximity to a decision surface, e.g. support vector machines
    • G06T 7/13: Image analysis; segmentation; edge detection
    • G06T 7/62: Image analysis; analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/20081: Indexing scheme for image analysis or image enhancement; special algorithmic details; training, learning


Abstract

A method for recognizing on-site investigation tool images comprises the steps of preparing a data set, preprocessing, locating the target object region, extracting the minimum circumscribed rectangle of the target object region, extracting the shape features of the target object region, fusing the features, training a support vector machine, and recognizing on-site investigation tool images. On the basis of analyzing on-site investigation tool images from actual cases, two typical characteristics of such images are obtained: the tool usually lies in an approximately central position in the image and is one of the larger objects in it; and the tool is imaged completely, so its shape information is comprehensive. A method for locating the target object region in the image is proposed and a group of shape feature descriptors is established; with these descriptors as input, a support vector machine is trained to recognize on-site investigation tool images. This solves the problems that manual entry of on-site investigation tool images is time-consuming and labor-intensive and that the accuracy of the entered information is easily affected by human factors, and it improves the working efficiency of front-line case-handling personnel.

Description

Cutter image identification method for on-site investigation
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a method for identifying tool images in on-site investigation.
Background
On-site investigation images play an important role in solving cases and presenting evidence in court. When a case occurs, investigators quickly arrive at the scene and collect case-related on-site investigation images according to strict regulations. After the images are acquired, the investigator enters them, together with other case-related information, into the "national public security organ on-site investigation information system" for storage and management, to support subsequent case linking and case solving. At present, case-handling personnel enter on-site investigation images mainly by hand; this is time-consuming and labor-intensive, and the accuracy of the entered information is easily affected by human factors. Automatic on-site investigation image entry technology based on image recognition can effectively solve the problems of the manual approach.
In China, guns are strictly controlled, so knives are the most easily obtained and most harmful weapons, and tool images are among the most important on-site investigation images. Internationally, research on tool image identification has addressed infrared tool images, X-ray tool images, and surveillance video. According to our search, neither the patent literature nor the non-patent literature reports an identification method for on-site investigation tool images.
On-site investigation tool images are natural optical images, completely different from the settings of tool identification methods for infrared and X-ray images. Tool identification methods designed for surveillance video do not fully consider the characteristics of on-site investigation tool images, so they cannot be directly and effectively applied to on-site investigation tool image identification. This is reflected in the following aspects:
(1) The background of the test video images used by existing surveillance-video tool identification methods is generally clean, which benefits locating and identifying the tool region; at an actual case scene, the tool is randomly discarded or deliberately hidden, so the background of on-site investigation tool images is generally complex and variable.
(2) Tools in surveillance video usually appear together with a human hand. Compared with an isolated tool, and particularly given that face and person recognition technologies are mature, the presence of a person or hand provides extra supporting information for target recognition; in on-site investigation tool images, the tool usually appears alone.
In the technical field of on-site investigation tool image identification, a technical problem to be solved at present is to provide an identification method for on-site investigation tool images that has high identification accuracy and high identification speed.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an on-site investigation tool image identification method with high identification accuracy and high identification speed.
The technical scheme adopted for solving the technical problems comprises the following steps:
(1) preparing a data set
1000 field investigation images are selected from a database to serve as a training set, wherein 500 field investigation cutter images serve as positive samples, and 500 other field investigation images including fingerprint images, shoe print images, bloodstain images, hammerhead images, axe images and firearm images serve as negative samples of the training set.
(2) Preprocessing
1) Gaussian filtering
Filter the training set images by convolving them with a Gaussian kernel of size 3 × 3; the standard deviation of the Gaussian kernel is 0.8.
2) Edge detection
Process the Gaussian-filtered image with an edge detection method based on structured forests to obtain the region contours of all objects in the image.
3) Morphological filtering
Perform morphological closing on the object region contours with a structuring element K, the structuring element K being 5 × 5 pixels in size.
4) Binarization
Apply adaptive-threshold binarization to the morphologically filtered image to obtain a binary image: the brightness value of pixels at object region edges is set to 255 and the brightness values of pixels at other positions to 0 (a code sketch of the full preprocessing pipeline follows).
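As an illustration, the preprocessing chain of step (2) can be sketched with OpenCV as below. The structured-forest detector lives in the opencv-contrib module cv2.ximgproc and needs a pretrained model file whose path is an assumption here, as are the adaptive-threshold block size and offset, which the text does not specify.

```python
# A minimal preprocessing sketch, assuming an opencv-contrib build and a
# pretrained structured-edge model file ("model.yml.gz" is a placeholder).
import cv2
import numpy as np

def preprocess(img_bgr, model_path="model.yml.gz"):
    # 1) Gaussian filtering: 3 x 3 kernel, standard deviation 0.8
    blurred = cv2.GaussianBlur(img_bgr, (3, 3), sigmaX=0.8)

    # 2) Structured-forest edge detection (expects float32 RGB in [0, 1])
    detector = cv2.ximgproc.createStructuredEdgeDetection(model_path)
    rgb = cv2.cvtColor(blurred, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    edges = detector.detectEdges(rgb)          # edge-probability map

    # 3) Morphological closing with a 5 x 5 structuring element K
    K = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    closed = cv2.morphologyEx((edges * 255).astype(np.uint8),
                              cv2.MORPH_CLOSE, K)

    # 4) Adaptive-threshold binarization: edge pixels 255, all others 0
    # (block size 11 and offset -2 are assumed values)
    return cv2.adaptiveThreshold(closed, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 11, -2)
```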
(3) Locating a target object region
Clustering is performed on the boundary contours of all object regions in the preprocessed image (a code sketch of the procedure follows the list below). The clustering operation comprises the following steps:
1) Extract the coordinates of the pixels with brightness value 255 into a set D_1, D_1 = {(x_1, y_1), (x_2, y_2), ..., (x_j, y_j)}, where j is the number of pixels with brightness value 255 and is a finite positive integer, and (x_1, y_1) is the coordinate of the 1st pixel point with brightness value 255. The image center pixel has coordinates (round(w/2), round(h/2)), where w and h respectively represent the width and height of the image and round() is a rounding function. Compute the Euclidean distance between every pixel in D_1 and the image center pixel, and rearrange the pixels of D_1 in descending order of this distance to obtain a set D_2, D_2 = {(x_i, y_i), (x_r, y_r), ..., (x_t, y_t)}, where i ≤ j, r ≤ j, t ≤ j, and i, r and t are positive integers.
2) Take the 1st pixel (x_i, y_i) of D_2 as the clustering kernel and check in turn whether D_2 contains a pixel satisfying the clustering condition with respect to (x_i, y_i). Any pixel that satisfies the condition forms, together with (x_i, y_i), a new subset D_{2,1}; the operation is then repeated with that pixel as the new clustering kernel, until D_2 contains no pixel satisfying the condition.
3) Remove from D_2 the pixels belonging to D_{2,1}, arrange the remaining pixels in descending order of Euclidean distance, take the 1st rearranged pixel as the clustering kernel, and repeat operation 2) of step (3) to obtain a subset D_{2,2}.
4) Repeat operation 3) of step (3) until
D_{2,1} ∪ D_{2,2} ∪ ... ∪ D_{2,S} = D_2,
i.e., every pixel of D_2 belongs to one of the subsets D_{2,1}, D_{2,2}, ..., D_{2,S}, where S ≤ j and S, the number of subsets, is a positive integer. Arrange all subsets in descending order of the number of pixels they contain.
① If condition 1 is satisfied, i.e. the number of pixels in the 1st subset is greater than or equal to μ% of the number of pixels in D_2, the object region corresponding to the edge pixels of the 1st subset is the target object region, where μ ∈ [50, 100].
② If condition 1 is not satisfied but condition 2 is, i.e. S = 2, take the 2 subsets and perform the operation of step 5).
③ If neither condition 1 nor condition 2 is satisfied, take the first 3 subsets and perform the operation of step 5).
5) Compute the mean Euclidean distance between all pixels of each subset and the image center pixel; the region corresponding to the edge pixels of the subset with the minimum mean is the target object region.
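The clustering above can be sketched as follows. The patent does not state the numeric threshold of the clustering condition, so the neighbour distance t below is an assumed parameter.

```python
# A minimal sketch of the target-object-region locating of step (3);
# `t` (clustering-condition distance) is an assumed parameter.
import numpy as np

def locate_target_region(binary, t=2.0, mu=75):
    h, w = binary.shape
    center = np.array([round(w / 2), round(h / 2)], dtype=float)

    # D1: coordinates (x, y) of all pixels with brightness 255
    ys, xs = np.nonzero(binary == 255)
    D = np.stack([xs, ys], axis=1).astype(float)

    # D2: D1 reordered by descending Euclidean distance to the image centre
    dist = np.linalg.norm(D - center, axis=1)
    order = np.argsort(-dist)
    D, dist = D[order], dist[order]

    unassigned = np.ones(len(D), dtype=bool)
    subsets = []
    while unassigned.any():
        seed = int(np.argmax(unassigned))   # farthest unassigned pixel
        members, frontier = [seed], [seed]
        unassigned[seed] = False
        while frontier:                     # chain-grow around each kernel
            k = frontier.pop()
            near = unassigned & (np.linalg.norm(D - D[k], axis=1) < t)
            for idx in np.nonzero(near)[0]:
                unassigned[idx] = False
                members.append(int(idx))
                frontier.append(int(idx))
        subsets.append(np.array(members))

    # order subsets by size; apply conditions 1 and 2 of sub-step 4)
    subsets.sort(key=len, reverse=True)
    if len(subsets[0]) >= mu / 100.0 * len(D):   # condition 1
        candidates = [subsets[0]]
    elif len(subsets) == 2:                      # condition 2
        candidates = subsets
    else:
        candidates = subsets[:3]

    # 5) subset whose pixels have the smallest mean distance to the centre
    best = min(candidates, key=lambda s: dist[s].mean())
    return D[best]   # edge pixels of the target object region
```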
(4) Extracting minimum circumscribed rectangle of target object region
In a two-dimensional coordinate system, determine the circumscribed rectangle of the target object region contour: one side has length cp_{x,max} - cp_{x,min} and the other has length cp_{y,max} - cp_{y,min}, where cp_{x,max} and cp_{x,min} are the maximum and minimum abscissa, and cp_{y,max} and cp_{y,min} the maximum and minimum ordinate, of the pixels lying on the target object region contour. The minimum circumscribed rectangle is the circumscribed rectangle of minimum area obtained by rotating the target object region contour; it is determined by the following steps (a code sketch follows the list):
1) Obtain the circumscribed rectangular frame R_1 of the object region contour and determine its area a_{R,1}.
2) Rotate the object region contour counterclockwise by θ to obtain the circumscribed rectangular frame R_2 of the object region and determine its area a_{R,2}, where θ ∈ (0°, 90°].
3) Repeat operation 2) of step (4); each rotation by θ yields the corresponding circumscribed rectangle and its area a_{R,n+1}. The maximum number of rotations is n_max = ⌊90°/θ⌋, where ⌊·⌋ is the round-down (floor) function.
4) Compare a_{R,1}, a_{R,2}, ..., a_{R,n_max+1}, rotate the circumscribed rectangle corresponding to the minimum area value clockwise by the corresponding accumulated angle, and finally obtain the minimum circumscribed rectangle of the target object region contour.
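A sketch of the rotation search, assuming the contour is given as an array of (x, y) points:

```python
# A minimal sketch of step (4): rotate the contour in theta increments,
# take the axis-aligned circumscribed rectangle each time, and keep the
# rotation whose rectangle has minimum area.
import numpy as np

def min_circumscribed_rect(contour_xy, theta_deg=5.0):
    pts = np.asarray(contour_xy, dtype=float)
    n_max = int(np.floor(90.0 / theta_deg))     # maximum number of rotations
    best_area, best_angle, best_w, best_h = np.inf, 0.0, 0.0, 0.0

    for n in range(n_max + 1):                  # n = 0 is the unrotated frame
        a = np.radians(n * theta_deg)
        c, s = np.cos(a), np.sin(a)
        rot = pts @ np.array([[c, s], [-s, c]])   # counter-clockwise rotation
        w = rot[:, 0].max() - rot[:, 0].min()     # cp_x,max - cp_x,min
        h = rot[:, 1].max() - rot[:, 1].min()     # cp_y,max - cp_y,min
        if w * h < best_area:
            best_area, best_angle, best_w, best_h = w * h, n * theta_deg, w, h

    # rotating the winning rectangle back clockwise by best_angle gives the
    # minimum circumscribed rectangle in the original frame
    return best_w, best_h, best_angle
```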
(5) Extracting target object region shape features
1) Determining aspect ratio
The aspect ratio lwr of the target object region is determined by equation (1):
lwr = w_R / h_R   (1)
where w_R is the length and h_R the width of the minimum circumscribed rectangle of the target object region.
2) Determining the degree of rectangularity
The rectangularity rec of the target object region is determined by equation (2):
rec = a_C / a_R   (2)
where a_C is the area of the region bounded by the edge contour of the target object region and a_R the area of its minimum circumscribed rectangle.
3) Determining circularity
The circularity cir of the target object region is determined by equation (3):
cir = 4π·a_C / p_C²   (3)
where p_C is the perimeter of the edge contour of the target object region.
4) Determining tip angle
① Collect the coordinates of the M pixels of the target object region edge contour into a set E, E = {(m_1, n_1), (m_2, n_2), ..., (m_M, n_M)}, where M is a finite positive integer and (m_1, n_1) is the coordinate of the 1st pixel point of the edge contour. The shortest distance l_c between the M pixels and the two wide sides of the minimum circumscribed rectangle of the target object region is determined by equation (4):
l_c = |A·m_c + B·n_c + C| / √(A² + B²)   (4)
where A, B, C are the coefficients of the general equation of the line containing a wide side of the minimum circumscribed rectangle, and m_c and n_c are the abscissa and ordinate of the c-th pixel of set E. The two pixel points at minimum distance from the two wide sides are denoted p_1 and p_2, respectively.
② Establish a two-dimensional coordinate system with p_1 as the origin and the x and y axes parallel to the long and wide sides of the minimum circumscribed rectangle of the target object region.
③ On the edge contour of the target object region, collect the coordinates of the N pixels whose Euclidean distance to p_1 is less than λ into a 2 × N matrix F_{2N}, λ ∈ {1, ..., 100}, N a finite positive integer. The abscissas of the N pixel points form the elements f_{1n} of row 1 of F_{2N} and their ordinates form the elements f_{2n} of row 2, with 1 ≤ n ≤ N and n a finite positive integer.
④ From the coordinate values of p_1 and the element values of F_{2N}, determine the opening orientation of the sharp corner at p_1; there are 6 cases:
f_{1n} ≥ 0 and f_{2n} takes both positive and negative values; f_{1n} ≥ 0 and f_{2n} ≤ 0; f_{1n} ≥ 0 and f_{2n} > 0; f_{1n} < 0 and f_{2n} takes both positive and negative values; f_{1n} < 0 and f_{2n} ≤ 0; f_{1n} < 0 and f_{2n} > 0.
⑤ Determine the two endpoint pixels ep_1 and ep_2 according to the orientation of the sharp-corner opening, determine the straight-line vectors from ep_1 and ep_2 to p_1, and add these vectors to obtain the angle-bisector vector of the sharp corner.
⑥ Using the angle-bisector vector, divide the N pixels whose Euclidean distance to p_1 is less than λ into two groups of pixel sets, and fit each group with the least-squares method to obtain two straight lines with direction vectors L_1 and L_2. The cosine of the angle α_1 between the two lines is determined by equation (5):
cos α_1 = L_1 · L_2 / (|L_1| |L_2|)   (5)
The sharp angle α_1 at point p_1 is then obtained with the inverse trigonometric function.
⑦ Following operations ② - ⑥, obtain the sharp angle α_2 at point p_2; the final tip angle α is determined by equation (6):
α = min{α_1, α_2}   (6)
A code sketch of these shape features follows.
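The four shape features can be sketched as below. The basic features follow equations (1)-(3) directly; the tip-angle helper simplifies sub-steps ④-⑤ by splitting the λ-neighbourhood with a 2-D cross product against a given bisector vector, and assumes the fitted lines are not vertical.

```python
# A minimal sketch of the shape features of step (5).
import cv2
import numpy as np

def basic_shape_features(contour, wR, hR):
    aC = cv2.contourArea(contour)        # area bounded by the edge contour
    pC = cv2.arcLength(contour, True)    # perimeter of the closed contour
    lwr = wR / hR                        # (1) aspect ratio
    rec = aC / (wR * hR)                 # (2) rectangularity
    cir = 4.0 * np.pi * aC / pC ** 2     # (3) circularity
    return lwr, rec, cir

def tip_angle(neigh_xy, bisector):
    """neigh_xy: Nx2 coords of contour pixels within distance lambda of a
    tip point p1, in the coordinate system centred at p1; bisector: the
    angle-bisector vector of the sharp corner (step 5)."""
    pts = np.asarray(neigh_xy, dtype=float)
    b = np.asarray(bisector, dtype=float)
    side = pts[:, 0] * b[1] - pts[:, 1] * b[0]    # which side of the bisector
    g1, g2 = pts[side >= 0], pts[side < 0]

    def direction(group):               # least-squares line fit -> direction
        slope = np.polyfit(group[:, 0], group[:, 1], 1)[0]
        return np.array([1.0, slope])

    L1, L2 = direction(g1), direction(g2)
    cos_a = L1 @ L2 / (np.linalg.norm(L1) * np.linalg.norm(L2))   # eq. (5)
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))       # alpha_1
```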
(6) Feature fusion
Each feature value of every training sample is normalized with the L2 norm according to equation (7):
g'_{u,v} = g_{u,v} / ||G_u||_2   (7)
where G_u is a feature vector, G_u = [g_{u,v} | u ∈ {1, 2, 3, 4}, 1 ≤ v ≤ 1000]^T, u is the feature index, and v, a positive integer, is the training sample index.
The 4 feature vectors are finally fused by equation (8) into a 1000 × 4-dimensional shape feature vector:
G = [G_1, G_2, G_3, G_4]   (8)
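A sketch of the fusion step, assuming the four features have already been collected into 1000-element arrays:

```python
# A minimal sketch of feature fusion, step (6): L2-normalise each feature
# over the training samples (equation (7)) and stack the four vectors into
# the 1000 x 4 matrix G of equation (8).
import numpy as np

def fuse_features(lwr, rec, cir, alpha):
    G = np.stack([lwr, rec, cir, alpha], axis=1).astype(float)   # 1000 x 4
    G = G / np.linalg.norm(G, axis=0, keepdims=True)  # g'_uv = g_uv/||G_u||_2
    return G
```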
(7) Training the support vector machine
Select a support vector machine whose kernel function is a radial basis function, and feed the 1000 × 4-dimensional shape feature vectors extracted from the training set into the support vector machine for training to obtain a support vector machine prediction model (a training sketch follows).
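A sketch of training and prediction; scikit-learn's SVC is an assumed stand-in, since the patent does not name an SVM implementation.

```python
# A minimal sketch of steps (7)-(8): an RBF-kernel SVM trained on the fused
# 1000 x 4 feature matrix, with label 1 for tool images and 0 otherwise.
import numpy as np
from sklearn.svm import SVC

def train_and_predict(G_train, labels, G_query):
    model = SVC(kernel="rbf")        # radial basis function kernel
    model.fit(G_train, labels)       # G_train: 1000 x 4, labels in {0, 1}
    return model, model.predict(np.atleast_2d(G_query))   # 1 = tool image
```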
(8) Identifying on-site investigation tool images
For each on-site investigation image to be identified, obtain its shape feature vector by the operations of steps (2) to (6), and feed the vector into the support vector machine prediction model obtained in step (7) for identification; an identification result label of 1 indicates an on-site investigation tool image, and a label of 0 indicates an on-site investigation image that is not a tool image.
On-site investigation tool image recognition is then complete.
In step ⑤ of the target object region shape feature extraction step (5) of the present invention, the two endpoint pixels ep_1 and ep_2 are determined from the orientation of the sharp-corner opening as follows:
Case 1, f_{1n} ≥ 0 and f_{2n} takes both positive and negative values: ep_1 is the pixel with the maximum f_{2n} and ep_2 the pixel with the minimum f_{2n}. Case 2, f_{1n} ≥ 0 and f_{2n} ≤ 0: ep_1 is the pixel with the maximum f_{1n} and ep_2 the pixel with the minimum f_{2n}. Case 3, f_{1n} ≥ 0 and f_{2n} > 0: ep_1 is the pixel with the maximum f_{2n} and ep_2 the pixel with the maximum f_{1n}. Case 4, f_{1n} < 0 and f_{2n} takes both positive and negative values: ep_1 is the pixel with the maximum f_{2n} and ep_2 the pixel with the minimum f_{2n}. Case 5, f_{1n} < 0 and f_{2n} ≤ 0: ep_1 is the pixel with the minimum f_{1n} and ep_2 the pixel with the minimum f_{2n}. Case 6, f_{1n} < 0 and f_{2n} > 0: ep_1 is the pixel with the maximum f_{2n} and ep_2 the pixel with the minimum f_{1n}.
In step ① of sub-step 4) of the target object region locating step (3) of the present invention, μ is preferably 75.
In sub-step 2) of the minimum circumscribed rectangle extraction step (4), θ is preferably 5°.
In step ③ of the tip-angle determination sub-step 4) of the shape feature extraction step (5), λ is preferably 50.
On the basis of analyzing on-site investigation tool images from actual cases, the invention summarizes two typical characteristics of such images: the tool usually lies in an approximately central position in the image and is one of the larger objects in it; and the tool is usually imaged frontally and completely, so its shape information is comprehensive. Based on the first characteristic, a method for locating the target object region in the image is proposed. Based on the second characteristic, a group of shape feature descriptors is constructed for the located target object region; with these descriptors as input, a support vector machine is trained to recognize on-site investigation tool images. This solves the problems that manual entry of on-site investigation tool images is time-consuming and labor-intensive and that the accuracy of the entered information is easily affected by human factors, and it greatly improves the working efficiency of front-line case-handling personnel.
Drawings
Fig. 1 is a flowchart of a tool image recognition method for field inspection according to embodiment 1 of the present invention.
Fig. 2 is a schematic diagram of a circumscribed rectangle corresponding to the target object region.
Fig. 3 is a schematic diagram of the positions of two sharp points of the tool in the binary image.
Fig. 4 shows the 6 cases of the sharp-corner opening orientation.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, but the present invention is not limited to the examples.
Example 1
Taking 1000 field survey images selected from the self-built database of the applicant as an example of a training set, the field survey tool image recognition method of the embodiment comprises the following steps (as shown in fig. 1):
(1) preparing a data set
1000 field investigation images are selected from a database to serve as a training set, wherein 500 field investigation cutter images serve as positive samples, and 500 other field investigation images including fingerprint images, shoe print images, bloodstain images, hammerhead images, axe images and firearm images serve as negative samples of the training set.
(2) Preprocessing
1) Gaussian filtering
Filtering is carried out by convolving the training set images with a Gaussian kernel of size 3 × 3, the standard deviation of the Gaussian kernel being 0.8; the convolution operation is common knowledge.
2) Edge detection
The Gaussian-filtered image is processed with the edge detection method based on structured forests, disclosed in "Fast edge detection using structured forests" [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(8): 1558-1570, to obtain the region contours of all objects in the image.
3) Morphological filtering
Morphological closing is performed on the object region contours with a structuring element K, the structuring element K being 5 × 5 pixels in size; morphological closing is common knowledge.
4) Binarization
Adaptive-threshold binarization is applied to the morphologically filtered image to obtain a binary image; the brightness value of pixels at object region edges is set to 255 and the brightness values of pixels at other positions to 0. Adaptive-threshold binarization is common knowledge.
(3) Locating a target object region
Clustering is performed on the boundary contours of all object regions in the preprocessed image. The clustering operation comprises the following steps:
1) Extract the coordinates of the pixels with brightness value 255 into a set D_1, D_1 = {(x_1, y_1), (x_2, y_2), ..., (x_j, y_j)}, where j is the number of pixels with brightness value 255 and is a finite positive integer, and (x_1, y_1) is the coordinate of the 1st pixel point with brightness value 255. The image center pixel has coordinates (round(w/2), round(h/2)), where w and h respectively represent the width and height of the image and round() is a rounding function. Compute the Euclidean distance between every pixel in D_1 and the image center pixel, and rearrange the pixels of D_1 in descending order of this distance to obtain a set D_2, D_2 = {(x_i, y_i), (x_r, y_r), ..., (x_t, y_t)}, where i ≤ j, r ≤ j, t ≤ j, and i, r and t are positive integers.
2) Take the 1st pixel (x_i, y_i) of D_2 as the clustering kernel and check in turn whether D_2 contains a pixel satisfying the clustering condition with respect to (x_i, y_i). Any pixel that satisfies the condition forms, together with (x_i, y_i), a new subset D_{2,1}; the operation is then repeated with that pixel as the new clustering kernel, until D_2 contains no pixel satisfying the condition.
3) Remove from D_2 the pixels belonging to D_{2,1}, arrange the remaining pixels in descending order of Euclidean distance, take the 1st rearranged pixel as the clustering kernel, and repeat operation 2) of step (3) to obtain a subset D_{2,2}.
4) Repeat operation 3) of step (3) until
D_{2,1} ∪ D_{2,2} ∪ ... ∪ D_{2,S} = D_2,
i.e., every pixel of D_2 belongs to one of the subsets D_{2,1}, D_{2,2}, ..., D_{2,S}, where S ≤ j and S, the number of subsets, is a positive integer. Arrange all subsets in descending order of the number of pixels they contain.
① If condition 1 is satisfied, i.e. the number of pixels in the 1st subset is greater than or equal to μ% of the number of pixels in D_2, the object region corresponding to the edge pixels of the 1st subset is the target object region, where μ ∈ [50, 100]; in this embodiment μ is selected to be 75.
② If condition 1 is not satisfied but condition 2 is, i.e. S = 2, take the 2 subsets and perform the operation of step 5).
③ If neither condition 1 nor condition 2 is satisfied, take the first 3 subsets and perform the operation of step 5).
5) Compute the mean Euclidean distance between all pixels of each subset and the image center pixel; the region corresponding to the edge pixels of the subset with the minimum mean is the target object region. The Euclidean distance is common knowledge.
(4) Extracting minimum circumscribed rectangle of target object region
In a two-dimensional coordinate system, determine the circumscribed rectangle of the target object region contour: one side has length cp_{x,max} - cp_{x,min} and the other has length cp_{y,max} - cp_{y,min}, where cp_{x,max} and cp_{x,min} are the maximum and minimum abscissa, and cp_{y,max} and cp_{y,min} the maximum and minimum ordinate, of the pixels lying on the target object region contour. The minimum circumscribed rectangle is the circumscribed rectangle of minimum area obtained by rotating the target object region contour; it is determined by the following steps:
1) Obtain the circumscribed rectangular frame R_1 of the object region contour, as shown in fig. 2, and determine its area a_{R,1}.
2) Rotate the object region contour counterclockwise by θ to obtain the circumscribed rectangular frame R_2 of the object region and determine its area a_{R,2}, where θ ∈ (0°, 90°]; in this embodiment θ is selected to be 5°.
3) Repeat operation 2) of step (4); each rotation by θ yields the corresponding circumscribed rectangle and its area a_{R,n+1}. The maximum number of rotations is n_max = ⌊90°/θ⌋, where ⌊·⌋ is the round-down (floor) function.
4) Compare a_{R,1}, a_{R,2}, ..., a_{R,n_max+1}, rotate the circumscribed rectangle corresponding to the minimum area value clockwise by the corresponding accumulated angle, and finally obtain the minimum circumscribed rectangle of the target object region contour.
(5) Extracting target object region shape features
1) Determining aspect ratio
The aspect ratio lwr of the target object region is determined by equation (1):
lwr = w_R / h_R   (1)
where w_R is the length and h_R the width of the minimum circumscribed rectangle of the target object region.
2) Determining the rectangularity
The rectangularity rec of the target object region is determined by equation (2):
rec = a_C / a_R   (2)
where a_C is the area of the region bounded by the edge contour of the target object region and a_R the area of its minimum circumscribed rectangle.
3) Determining the circularity
The circularity cir of the target object region is determined by equation (3):
cir = 4π·a_C / p_C²   (3)
where p_C is the perimeter of the edge contour of the target object region.
4) Determining tip angle
① Collect the coordinates of the M pixels of the target object region edge contour into a set E, E = {(m_1, n_1), (m_2, n_2), ..., (m_M, n_M)}, where M is a finite positive integer and (m_1, n_1) is the coordinate of the 1st pixel point of the edge contour. The shortest distance l_c between the M pixels and the two wide sides of the minimum circumscribed rectangle of the target object region is determined by equation (4):
l_c = |A·m_c + B·n_c + C| / √(A² + B²)   (4)
where A, B, C are the coefficients of the general equation of the line containing a wide side of the minimum circumscribed rectangle, and m_c and n_c are the abscissa and ordinate of the c-th pixel of set E. The two pixel points at minimum distance from the two wide sides are denoted p_1 and p_2, respectively, as shown in fig. 3.
② Establish a two-dimensional coordinate system with p_1 as the origin and the x and y axes parallel to the long and wide sides of the minimum circumscribed rectangle of the target object region.
③ On the edge contour of the target object region, collect the coordinates of the N pixels whose Euclidean distance to p_1 is less than λ into a 2 × N matrix F_{2N}, λ ∈ {1, ..., 100}; in this embodiment λ is 50. N is a finite positive integer; the abscissas of the N pixel points form the elements f_{1n} of row 1 of F_{2N} and their ordinates form the elements f_{2n} of row 2, with 1 ≤ n ≤ N and n a finite positive integer.
④ From the coordinate values of p_1 and the element values of F_{2N}, determine the opening orientation of the sharp corner at p_1; there are 6 cases in total, as shown in fig. 4:
f_{1n} ≥ 0 and f_{2n} takes both positive and negative values, as shown in fig. 4(a); f_{1n} ≥ 0 and f_{2n} ≤ 0, as shown in fig. 4(b); f_{1n} ≥ 0 and f_{2n} > 0, as shown in fig. 4(c); f_{1n} < 0 and f_{2n} takes both positive and negative values, as shown in fig. 4(d); f_{1n} < 0 and f_{2n} ≤ 0, as shown in fig. 4(e); f_{1n} < 0 and f_{2n} > 0, as shown in fig. 4(f).
⑤ Determine the two endpoint pixels ep_1 and ep_2 according to the orientation of the sharp-corner opening, as follows. Case 1, f_{1n} ≥ 0 and f_{2n} takes both positive and negative values: ep_1 is the pixel with the maximum f_{2n} and ep_2 the pixel with the minimum f_{2n}. Case 2, f_{1n} ≥ 0 and f_{2n} ≤ 0: ep_1 is the pixel with the maximum f_{1n} and ep_2 the pixel with the minimum f_{2n}. Case 3, f_{1n} ≥ 0 and f_{2n} > 0: ep_1 is the pixel with the maximum f_{2n} and ep_2 the pixel with the maximum f_{1n}. Case 4, f_{1n} < 0 and f_{2n} takes both positive and negative values: ep_1 is the pixel with the maximum f_{2n} and ep_2 the pixel with the minimum f_{2n}. Case 5, f_{1n} < 0 and f_{2n} ≤ 0: ep_1 is the pixel with the minimum f_{1n} and ep_2 the pixel with the minimum f_{2n}. Case 6, f_{1n} < 0 and f_{2n} > 0: ep_1 is the pixel with the maximum f_{2n} and ep_2 the pixel with the minimum f_{1n}. Then determine the straight-line vectors from ep_1 and ep_2 to p_1 and add these vectors to obtain the angle-bisector vector of the sharp corner.
⑥ Using the angle-bisector vector, divide the N pixels whose Euclidean distance to p_1 is less than λ into two groups of pixel sets, and fit each group with the least-squares method to obtain two straight lines with direction vectors L_1 and L_2. The cosine of the angle α_1 between the two lines is determined by equation (5):
cos α_1 = L_1 · L_2 / (|L_1| |L_2|)   (5)
The sharp angle α_1 at point p_1 is then obtained with the inverse trigonometric function.
⑦ Following operations ② - ⑥, obtain the sharp angle α_2 at point p_2; the final tip angle α is determined by equation (6):
α = min{α_1, α_2}   (6)
(6) Feature fusion
Each feature value of every training sample is normalized with the L2 norm according to equation (7):
g'_{u,v} = g_{u,v} / ||G_u||_2   (7)
where G_u is a feature vector, G_u = [g_{u,v} | u ∈ {1, 2, 3, 4}, 1 ≤ v ≤ 1000]^T, u is the feature index, and v, a positive integer, is the training sample index.
The 4 feature vectors are finally fused by equation (8) into a 1000 × 4-dimensional shape feature vector:
G = [G_1, G_2, G_3, G_4]   (8)
(7) Training the support vector machine
A support vector machine whose kernel function is a radial basis function is selected, and the 1000 × 4-dimensional shape feature vectors extracted from the training set are fed into the support vector machine for training to obtain a support vector machine prediction model.
(8) Identifying on-site investigation tool images
For each on-site investigation image to be identified, obtain its shape feature vector by the operations of steps (2) to (6), and feed the vector into the support vector machine prediction model obtained in step (7) for identification; an identification result label of 1 indicates an on-site investigation tool image, and a label of 0 indicates an on-site investigation image that is not a tool image.
On-site investigation tool image recognition is then complete.
Example 2
Taking 1000 field investigation images selected from a database self-built by the applicant as an example of a training set, the field investigation tool image recognition method of the embodiment comprises the following steps:
(1) preparing a data set
This procedure is the same as in example 1.
(2) Preprocessing
This procedure is the same as in example 1.
(3) Locating a target object region
Clustering is performed on the boundary contours of all object regions in the preprocessed image. The clustering operation comprises the following steps:
1) Extract the coordinates of the pixels with brightness value 255 into a set D_1, D_1 = {(x_1, y_1), (x_2, y_2), ..., (x_j, y_j)}, where j is the number of pixels with brightness value 255 and is a finite positive integer, and (x_1, y_1) is the coordinate of the 1st pixel point with brightness value 255. The image center pixel has coordinates (round(w/2), round(h/2)), where w and h respectively represent the width and height of the image and round() is a rounding function. Compute the Euclidean distance between every pixel in D_1 and the image center pixel, and rearrange the pixels of D_1 in descending order of this distance to obtain a set D_2, D_2 = {(x_i, y_i), (x_r, y_r), ..., (x_t, y_t)}, where i ≤ j, r ≤ j, t ≤ j, and i, r and t are positive integers.
2) Take the 1st pixel (x_i, y_i) of D_2 as the clustering kernel and check in turn whether D_2 contains a pixel satisfying the clustering condition with respect to (x_i, y_i). Any pixel that satisfies the condition forms, together with (x_i, y_i), a new subset D_{2,1}; the operation is then repeated with that pixel as the new clustering kernel, until D_2 contains no pixel satisfying the condition.
3) Remove from D_2 the pixels belonging to D_{2,1}, arrange the remaining pixels in descending order of Euclidean distance, take the 1st rearranged pixel as the clustering kernel, and repeat operation 2) of step (3) to obtain a subset D_{2,2}.
4) Repeat operation 3) of step (3) until
D_{2,1} ∪ D_{2,2} ∪ ... ∪ D_{2,S} = D_2,
i.e., every pixel of D_2 belongs to one of the subsets D_{2,1}, D_{2,2}, ..., D_{2,S}, where S ≤ j and S, the number of subsets, is a positive integer. Arrange all subsets in descending order of the number of pixels they contain.
① If condition 1 is satisfied, i.e. the number of pixels in the 1st subset is greater than or equal to μ% of the number of pixels in D_2, the object region corresponding to the edge pixels of the 1st subset is the target object region, where μ ∈ [50, 100]; in this embodiment μ is selected to be 50.
② If condition 1 is not satisfied but condition 2 is, i.e. S = 2, take the 2 subsets and perform the operation of step 5).
③ If neither condition 1 nor condition 2 is satisfied, take the first 3 subsets and perform the operation of step 5).
5) Compute the mean Euclidean distance between all pixels of each subset and the image center pixel; the region corresponding to the edge pixels of the subset with the minimum mean is the target object region.
(4) Extracting minimum circumscribed rectangle of target object region
In a two-dimensional coordinate system, determine the circumscribed rectangle of the target object region contour: one side has length cp_{x,max} - cp_{x,min} and the other has length cp_{y,max} - cp_{y,min}, where cp_{x,max} and cp_{x,min} are the maximum and minimum abscissa, and cp_{y,max} and cp_{y,min} the maximum and minimum ordinate, of the pixels lying on the target object region contour. The minimum circumscribed rectangle is the circumscribed rectangle of minimum area obtained by rotating the target object region contour; it is determined by the following steps:
1) Obtain the circumscribed rectangular frame R_1 of the object region contour, as shown in fig. 2, and determine its area a_{R,1}.
2) Rotate the object region contour counterclockwise by θ to obtain the circumscribed rectangular frame R_2 of the object region and determine its area a_{R,2}, where θ ∈ (0°, 90°]; in this embodiment θ is 1°.
3) Repeat operation 2) of step (4); each rotation by θ yields the corresponding circumscribed rectangle and its area a_{R,n+1}. The maximum number of rotations is n_max = ⌊90°/θ⌋, where ⌊·⌋ is the round-down (floor) function.
4) Compare a_{R,1}, a_{R,2}, ..., a_{R,n_max+1}, rotate the circumscribed rectangle corresponding to the minimum area value clockwise by the corresponding accumulated angle, and finally obtain the minimum circumscribed rectangle of the target object region contour.
(5) Extracting target object region shape features
Step 1) -step 3) were the same as in example 1.
4) Determining tip angle
① Collect the coordinates of the M pixels of the target object region edge contour into a set E, E = {(m_1, n_1), (m_2, n_2), ..., (m_M, n_M)}, where M is a finite positive integer and (m_1, n_1) is the coordinate of the 1st pixel point of the edge contour. The shortest distance l_c between the M pixels and the two wide sides of the minimum circumscribed rectangle of the target object region is determined by equation (4):
l_c = |A·m_c + B·n_c + C| / √(A² + B²)   (4)
where A, B, C are the coefficients of the general equation of the line containing a wide side of the minimum circumscribed rectangle, and m_c and n_c are the abscissa and ordinate of the c-th pixel of set E. The two pixel points at minimum distance from the two wide sides are denoted p_1 and p_2, respectively.
② Establish a two-dimensional coordinate system with p_1 as the origin and the x and y axes parallel to the long and wide sides of the minimum circumscribed rectangle of the target object region.
③ On the edge contour of the target object region, collect the coordinates of the N pixels whose Euclidean distance to p_1 is less than λ into a 2 × N matrix F_{2N}, λ ∈ {1, ..., 100}; in this embodiment λ is 1. N is a finite positive integer; the abscissas of the N pixel points form the elements f_{1n} of row 1 of F_{2N} and their ordinates form the elements f_{2n} of row 2, with 1 ≤ n ≤ N and n a finite positive integer.
④ From the coordinate values of p_1 and the element values of F_{2N}, determine the opening orientation of the sharp corner at p_1; there are 6 cases:
f_{1n} ≥ 0 and f_{2n} takes both positive and negative values; f_{1n} ≥ 0 and f_{2n} ≤ 0; f_{1n} ≥ 0 and f_{2n} > 0; f_{1n} < 0 and f_{2n} takes both positive and negative values; f_{1n} < 0 and f_{2n} ≤ 0; f_{1n} < 0 and f_{2n} > 0.
⑤ Determine the two endpoint pixels ep_1 and ep_2 according to the orientation of the sharp-corner opening; ep_1 and ep_2 are determined by the same method as in Example 1. Then determine the straight-line vectors from ep_1 and ep_2 to p_1 and add these vectors to obtain the angle-bisector vector of the sharp corner.
⑥ Using the angle-bisector vector, divide the N pixels whose Euclidean distance to p_1 is less than λ into two groups of pixel sets, and fit each group with the least-squares method to obtain two straight lines with direction vectors L_1 and L_2. The cosine of the angle α_1 between the two lines is determined by equation (5):
cos α_1 = L_1 · L_2 / (|L_1| |L_2|)   (5)
The sharp angle α_1 at point p_1 is then obtained with the inverse trigonometric function.
⑦ Following operations ② - ⑥, obtain the sharp angle α_2 at point p_2; the final tip angle α is determined by equation (6):
α = min{α_1, α_2}   (6)
steps (6) to (8) were the same as in example 1.
On-site investigation tool image recognition is then complete.
Example 3
Taking 1000 field investigation images selected from a database self-built by the applicant as an example of a training set, the field investigation tool image recognition method of the embodiment comprises the following steps:
(1) preparing a data set
This procedure is the same as in example 1.
(2) Preprocessing
This procedure is the same as in example 1.
(3) Locating a target object region
Clustering is performed on the boundary contours of all object regions in the preprocessed image. The clustering operation comprises the following steps:
1) Extract the coordinates of the pixels with brightness value 255 into a set D_1, D_1 = {(x_1, y_1), (x_2, y_2), ..., (x_j, y_j)}, where j is the number of pixels with brightness value 255 and is a finite positive integer, and (x_1, y_1) is the coordinate of the 1st pixel point with brightness value 255. The image center pixel has coordinates (round(w/2), round(h/2)), where w and h respectively represent the width and height of the image and round() is a rounding function. Compute the Euclidean distance between every pixel in D_1 and the image center pixel, and rearrange the pixels of D_1 in descending order of this distance to obtain a set D_2, D_2 = {(x_i, y_i), (x_r, y_r), ..., (x_t, y_t)}, where i ≤ j, r ≤ j, t ≤ j, and i, r and t are positive integers.
2) Take the 1st pixel (x_i, y_i) of D_2 as the clustering kernel and check in turn whether D_2 contains a pixel satisfying the clustering condition with respect to (x_i, y_i). Any pixel that satisfies the condition forms, together with (x_i, y_i), a new subset D_{2,1}; the operation is then repeated with that pixel as the new clustering kernel, until D_2 contains no pixel satisfying the condition.
3) Remove from D_2 the pixels belonging to D_{2,1}, arrange the remaining pixels in descending order of Euclidean distance, take the 1st rearranged pixel as the clustering kernel, and repeat operation 2) of step (3) to obtain a subset D_{2,2}.
4) Repeat operation 3) of step (3) until
D_{2,1} ∪ D_{2,2} ∪ ... ∪ D_{2,S} = D_2,
i.e., every pixel of D_2 belongs to one of the subsets D_{2,1}, D_{2,2}, ..., D_{2,S}, where S ≤ j and S, the number of subsets, is a positive integer. Arrange all subsets in descending order of the number of pixels they contain.
① If condition 1 is satisfied, i.e. the number of pixels in the 1st subset is greater than or equal to μ% of the number of pixels in D_2, the object region corresponding to the edge pixels of the 1st subset is the target object region, where μ ∈ [50, 100]; in this embodiment μ is selected to be 100.
② If condition 1 is not satisfied but condition 2 is, i.e. S = 2, take the 2 subsets and perform the operation of step 5).
③ If neither condition 1 nor condition 2 is satisfied, take the first 3 subsets and perform the operation of step 5).
5) Compute the mean Euclidean distance between all pixels of each subset and the image center pixel; the region corresponding to the edge pixels of the subset with the minimum mean is the target object region. The Euclidean distance is the straight-line distance between two points.
(4) Extracting minimum circumscribed rectangle of target object region
In a two-dimensional coordinate system, determine the circumscribed rectangle of the target object region contour: one side has length cp_{x,max} - cp_{x,min} and the other has length cp_{y,max} - cp_{y,min}, where cp_{x,max} and cp_{x,min} are the maximum and minimum abscissa, and cp_{y,max} and cp_{y,min} the maximum and minimum ordinate, of the pixels lying on the target object region contour. The minimum circumscribed rectangle is the circumscribed rectangle of minimum area obtained by rotating the target object region contour; it is determined by the following steps:
1) Obtain the circumscribed rectangular frame R_1 of the object region contour and determine its area a_{R,1}.
2) Rotate the object region contour counterclockwise by θ to obtain the circumscribed rectangular frame R_2 of the object region and determine its area a_{R,2}, where θ ∈ (0°, 90°]; in this embodiment θ is selected to be 90°.
3) Repeat operation 2) of step (4); each rotation by θ yields the corresponding circumscribed rectangle and its area a_{R,n+1}. The maximum number of rotations is n_max = ⌊90°/θ⌋, where ⌊·⌋ is the round-down (floor) function.
4) Compare a_{R,1}, a_{R,2}, ..., a_{R,n_max+1}, rotate the circumscribed rectangle corresponding to the minimum area value clockwise by the corresponding accumulated angle, and finally obtain the minimum circumscribed rectangle of the target object region contour.
(5) Extracting target object region shape features
Step 1) -step 3) were the same as in example 1.
4) Determining tip angle
① Collect the coordinates of the M pixels of the target object region edge contour into a set E, E = {(m_1, n_1), (m_2, n_2), ..., (m_M, n_M)}, where M is a finite positive integer and (m_1, n_1) is the coordinate of the 1st pixel point of the edge contour. The shortest distance l_c between the M pixels and the two wide sides of the minimum circumscribed rectangle of the target object region is determined by equation (4):
l_c = |A·m_c + B·n_c + C| / √(A² + B²)   (4)
where A, B, C are the coefficients of the general equation of the line containing a wide side of the minimum circumscribed rectangle, and m_c and n_c are the abscissa and ordinate of the c-th pixel of set E. The two pixel points at minimum distance from the two wide sides are denoted p_1 and p_2, respectively.
② Establish a two-dimensional coordinate system with p_1 as the origin and the x and y axes parallel to the long and wide sides of the minimum circumscribed rectangle of the target object region.
③ On the edge contour of the target object region, collect the coordinates of the N pixels whose Euclidean distance to p_1 is less than λ into a 2 × N matrix F_{2N}, λ ∈ {1, ..., 100}; in this embodiment λ is 100. N is a finite positive integer; the abscissas of the N pixel points form the elements f_{1n} of row 1 of F_{2N} and their ordinates form the elements f_{2n} of row 2, with 1 ≤ n ≤ N and n a finite positive integer.
④ From the coordinate values of p_1 and the element values of F_{2N}, determine the opening orientation of the sharp corner at p_1; there are 6 cases:
f_{1n} ≥ 0 and f_{2n} takes both positive and negative values; f_{1n} ≥ 0 and f_{2n} ≤ 0; f_{1n} ≥ 0 and f_{2n} > 0; f_{1n} < 0 and f_{2n} takes both positive and negative values; f_{1n} < 0 and f_{2n} ≤ 0; f_{1n} < 0 and f_{2n} > 0.
⑤ Determine the two endpoint pixels ep_1 and ep_2 according to the orientation of the sharp-corner opening; ep_1 and ep_2 are determined by the same method as in Example 1. Then determine the straight-line vectors from ep_1 and ep_2 to p_1 and add these vectors to obtain the angle-bisector vector of the sharp corner.
⑥ Using the angle-bisector vector, divide the N pixels whose Euclidean distance to p_1 is less than λ into two groups of pixel sets, and fit each group with the least-squares method to obtain two straight lines with direction vectors L_1 and L_2. The cosine of the angle α_1 between the two lines is determined by equation (5):
cos α_1 = L_1 · L_2 / (|L_1| |L_2|)   (5)
The sharp angle α_1 at point p_1 is then obtained with the inverse trigonometric function.
⑦ Following operations ② - ⑥, obtain the sharp angle α_2 at point p_2; the final tip angle α is determined by equation (6):
α = min{α_1, α_2}   (6)
steps (6) to (8) were the same as in example 1.
On-site investigation tool image recognition is then complete.
In order to verify the beneficial effects of the present invention, the inventor performed experiments on field survey images using the field survey tool image recognition method of embodiment 1 of the present invention.
1. Conditions of the experiment
The experimental test environment is a Lenovo computer running the Windows 10 (64-bit) operating system, configured with an Intel Core i7-9750H and 8 GB of memory; the experiments are run on the MATLAB 2014a platform.
2. Introduction to test data
All tested field survey images are derived from the field survey image database of Xi'an University of Posts and Telecommunications; 1000 field survey images are selected as test images, of which 500 field survey tool images serve as positive samples and the other 500 field survey images (fingerprint, shoe print, bloodstain, hammerhead, axe, and firearm images) serve as negative samples.
3. Evaluation index
Recall rec_r, precision pre_r, F1-score, and accuracy acc_r are used as evaluation indexes. The four indexes are computed by equations (9) to (12):
rec_r = t_p / (t_p + z_v)   (9)
pre_r = t_p / (t_p + z_p)   (10)
F1-score = 2·rec_r·pre_r / (rec_r + pre_r)   (11)
acc_r = (t_p + t_v) / (t_p + t_v + z_p + z_v)   (12)
where t_p is the number of positive samples the algorithm identifies correctly, z_v the number of samples it wrongly identifies as negative, z_p the number of samples it wrongly identifies as positive, and t_v the number of negative samples it identifies correctly.
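The four indexes can be computed directly from the counts:

```python
# A minimal sketch of evaluation indexes (9)-(12), using the counts defined
# above: t_p / t_v are correctly identified positive / negative samples,
# z_p / z_v are samples wrongly identified as positive / negative.
def evaluation_indexes(tp, tv, zp, zv):
    rec = tp / (tp + zv)                      # recall, eq. (9)
    pre = tp / (tp + zp)                      # precision, eq. (10)
    f1 = 2 * rec * pre / (rec + pre)          # F1-score, eq. (11)
    acc = (tp + tv) / (tp + tv + zp + zv)     # accuracy, eq. (12)
    return rec, pre, f1, acc
```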
The on-site investigation tool image identification method of the invention is compared with methods based on other features, including color histogram features (HSV), wavelet texture features (Gabor), histogram of oriented gradients features (HOG), pyramid histogram of oriented gradients features (PHOG), Zernike invariant moments (Zer), Hu invariant moments (Hu), and features extracted by a network model based on a transfer learning algorithm (VGG-16).
The test was carried out according to the method of example 1, the properties of the different characterization methods being compared in Table 1.
TABLE 1 comparison of Performance of different characterization methods
As can be seen from Table 1, methods based on the HSV color histogram feature and the Gabor wavelet texture feature reach identification accuracies of only 30.50% and 38.00%, respectively, and the highest identification accuracies of the HOG, PHOG, Zernike invariant moment, Hu invariant moment, and VGG-16 features are 56.50%, 73.00%, 58.50%, 54.00%, and 82.50%, respectively. The identification accuracy of the on-site investigation tool image identification method of the invention reaches 96.00%.

Claims (5)

1. A method for identifying an on-site inspection cutter image is characterized by comprising the following steps:
(1) preparing a data set
Selecting 1000 field investigation images from a database as a training set, wherein 500 field investigation cutter images are used as positive samples, and the other 500 field investigation images comprise fingerprint images, shoe print images, bloodstain images, hammerhead images, axe images and firearm images which are used as negative samples of the training set;
(2) preprocessing
1) Gaussian filtering
Filtering the training set images by convolution with a Gaussian kernel of size 3 × 3, the standard deviation of the Gaussian kernel being 0.8;
2) edge detection
Processing the Gaussian-filtered image with an edge detection method based on structured forests to obtain the region contours of all objects in the image;
3) morphological filtering
Carrying out morphological closing on the object region contours with a structuring element K of size 5 × 5 pixels;
4) binarization
Performing binarization processing of a self-adaptive threshold value on the morphologically filtered image to obtain a binary image, setting the brightness value of a pixel at the edge of an object region in the binary image to be 255, and setting the brightness values of pixels at other positions to be 0;
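By way of illustration only, the preprocessing chain of step (2) could be sketched in Python with OpenCV as follows; the structured-edge model file path is a placeholder, and Otsu's method stands in here for an adaptive-threshold choice the claim leaves open:

```python
import cv2
import numpy as np

def preprocess(img_bgr, model_path="structured_forest_model.yml.gz"):
    # 1) Gaussian filtering: 3 x 3 kernel, standard deviation 0.8
    blurred = cv2.GaussianBlur(img_bgr, (3, 3), 0.8)
    # 2) edge detection with a pretrained structured-forest model
    sed = cv2.ximgproc.createStructuredEdgeDetection(model_path)
    rgb = cv2.cvtColor(blurred, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    edges = (sed.detectEdges(rgb) * 255).astype(np.uint8)
    # 3) morphological closing with a 5 x 5 structuring element K
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    # 4) binarization: edge pixels 255, all other pixels 0
    _, binary = cv2.threshold(closed, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```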
(3) locating a target object region
Clustering the boundary contours of all object regions in the preprocessed image, wherein the clustering operation comprises the following steps:
1) extracting the coordinates of the pixels whose brightness value is 255 to form a set D1, D1 = {(x1, y1), (x2, y2), ..., (xj, yj)}, where j is the number of pixels with brightness value 255 and is a finite positive integer, and (x1, y1) represents the coordinate of the 1st pixel point with brightness value 255; the coordinate of the center pixel of the image is (round(w/2), round(h/2)), where w and h respectively represent the width and height of the image and round() is a rounding function; computing the Euclidean distances between all pixels of the set D1 and the center pixel of the image, and rearranging the pixels of D1 in descending order of Euclidean distance to obtain a set D2, D2 = {(xi, yi), (xr, yr), ..., (xt, yt)}, where i ≤ j, r ≤ j, t ≤ j, and i, r and t are positive integers;
2) taking the 1st pixel (xi, yi) of the set D2 as the clustering kernel, checking in turn whether D2 contains a pixel satisfying the condition with respect to (xi, yi); if a pixel satisfies the condition, it forms a new subset D2,1 together with (xi, yi), and the above operation is repeated with that pixel as the new clustering kernel until no pixel of D2 satisfies the condition;
3) removing D2,1 from D2, arranging the remaining pixels in descending order of Euclidean distance, taking the rearranged 1st pixel as the clustering kernel, and repeating the operation of step 2) of step (3) to obtain a subset D2,2;
4) repeating the operation of step 3) of step (3) until D2,1 ∪ D2,2 ∪ ... ∪ D2,S = D2, i.e. every pixel of D2 belongs to one of the subsets D2,1, D2,2, ..., D2,S, where S ≤ j, S being the number of subsets and a positive integer; all subsets are arranged in descending order of the number of pixels they contain:
① if condition 1 is satisfied, i.e. the 1st subset contains a number of pixels greater than or equal to μ% of the number of pixels in the set D2, the object region corresponding to the edge pixels of the 1st subset is the target object region, where μ ∈ [50, 100];
② if condition 1 is not satisfied but condition 2 is satisfied, i.e. the value of S is 2, the 2 subsets are taken for the operation of step 5);
③ if the condition 1 and the condition 2 are not satisfied, taking the first 3 subsets to perform the operation of the step 5);
5) determining the mean value of Euclidean distances between all pixels in the subset and the pixels of the central point of the image, wherein the region corresponding to the edge pixels of the subset with the minimum mean value is a target object region;
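A minimal Python sketch of the distance-ordered clustering of step (3) is given below; the adjacency threshold eps is an assumption standing in for the closeness condition referred to above:

```python
import numpy as np

def cluster_edge_pixels(binary, eps=2.0):
    # binary: 2-D array with edge pixels at 255 (output of step (2))
    h, w = binary.shape
    center = np.array([round(w / 2), round(h / 2)], dtype=float)
    ys, xs = np.nonzero(binary == 255)
    pts = np.stack([xs, ys], axis=1).astype(float)             # set D1
    order = np.argsort(-np.linalg.norm(pts - center, axis=1))  # far -> near
    pts = pts[order]                                           # set D2
    unused, subsets = list(range(len(pts))), []
    while unused:
        frontier = [unused.pop(0)]   # rearranged 1st pixel = clustering kernel
        members = list(frontier)
        while frontier:
            k = frontier.pop()
            near = [i for i in unused if np.linalg.norm(pts[i] - pts[k]) <= eps]
            for i in near:
                unused.remove(i)
            members.extend(near)
            frontier.extend(near)
        subsets.append(pts[members])
    return sorted(subsets, key=len, reverse=True)  # largest subset first
```

The μ% test of step 4) and the mean-distance rule of step 5) then operate on the returned subsets; the quadratic-time neighbor search is acceptable for a sketch but would use a spatial index in practice.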
(4) extracting minimum circumscribed rectangle of target object region
In a two-dimensional coordinate system, the circumscribed rectangle of the contour of the target object region is determined: one side of the circumscribed rectangle is cp_x,max − cp_x,min and the other side is cp_y,max − cp_y,min, where cp_x,max represents the maximum abscissa of the pixels lying on the contour of the target object region, cp_x,min the minimum abscissa, cp_y,max the maximum ordinate, and cp_y,min the minimum ordinate; the minimum circumscribed rectangle is the circumscribed rectangle of minimum area obtained by rotating the contour of the target object region, and is determined by the following steps:
1) obtaining the circumscribed rectangular frame R1 of the contour of the target object region and determining its area a_R,1;
2) rotating the contour of the target object region counterclockwise by θ to obtain the circumscribed rectangular frame R2 of the target object region and determining its area a_R,2, where θ ∈ (0°, 90°];
3) repeating the operation of step 2) of step (4), obtaining after each rotation by θ the corresponding circumscribed rectangle and its area, the maximum number of rotations being ⌊90°/θ⌋, where ⌊·⌋ is the round-down (floor) function;
4) comparing the areas of all the circumscribed rectangles and rotating the circumscribed rectangle corresponding to the minimum area value clockwise by its accumulated rotation angle to obtain the minimum circumscribed rectangle of the contour of the target object region;
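For illustration, the rotation search of step (4) might be sketched in Python as follows; an equivalent result is available directly from OpenCV's cv2.minAreaRect:

```python
import numpy as np

def min_area_rectangle(contour, theta_deg=5.0):
    # contour: M x 2 array of edge-pixel coordinates of the target object region
    pts = np.asarray(contour, dtype=float)
    best_area, best_angle = np.inf, 0.0
    steps = int(np.floor(90.0 / theta_deg))   # maximum number of rotations
    for k in range(steps + 1):                # k = 0 is the unrotated frame R1
        a = np.radians(k * theta_deg)
        R = np.array([[np.cos(a), -np.sin(a)],
                      [np.sin(a),  np.cos(a)]])
        rot = pts @ R.T                       # counterclockwise rotation by k*theta
        w_R = rot[:, 0].max() - rot[:, 0].min()
        h_R = rot[:, 1].max() - rot[:, 1].min()
        if w_R * h_R < best_area:
            best_area, best_angle = w_R * h_R, k * theta_deg
    # rotating that frame clockwise by best_angle maps it back onto the contour
    return best_area, best_angle
```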
(5) extracting target object region shape features
1) Determining aspect ratio
The aspect ratio lwr of the target object region is determined by equation (1):
lwr = w_R / h_R    (1)
where w_R represents the length of the minimum circumscribed rectangle corresponding to the target object region and h_R represents the width of the minimum circumscribed rectangle corresponding to the target object region;
2) determining the degree of rectangularity
The rectangularity rec of the target object region is determined according to formula (2):
rec = a_C / a_R    (2)
where a_C represents the area of the region enclosed by the edge contour of the target object region and a_R represents the area of the region enclosed by the minimum circumscribed rectangle corresponding to the target object region;
3) determining circularity
The circularity cir of the target object region is determined by formula (3):
cir = 4π · a_C / p_C²    (3)
where p_C is the perimeter of the edge contour of the target object region;
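The three descriptors of formulas (1) to (3) reduce to a few lines of Python; the circularity below uses the standard 4π·area/perimeter² form assumed for formula (3):

```python
import numpy as np

def shape_features(w_R, h_R, a_C, p_C):
    # w_R, h_R: length and width of the minimum circumscribed rectangle;
    # a_C, p_C: area and perimeter enclosed by the region's edge contour
    lwr = w_R / h_R                     # aspect ratio, formula (1)
    rec = a_C / (w_R * h_R)             # rectangularity, formula (2), a_R = w_R * h_R
    cir = 4.0 * np.pi * a_C / p_C ** 2  # circularity, formula (3)
    return lwr, rec, cir
```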
4) determining tip angle
① the coordinates of the M pixels of the edge contour of the target object region form a set E, E = {(m1, n1), (m2, n2), ..., (mM, nM)}, where M is a finite positive integer and (m1, n1) represents the coordinate of the 1st pixel point of the edge contour of the target object region; the shortest distance l_c between the M pixels and the two wide sides of the minimum circumscribed rectangle corresponding to the target object region is determined according to formula (4):
l_c = |A·m_c + B·n_c + C| / √(A² + B²)    (4)
where A, B and C are the coefficients of the general equation of the straight line on which a wide side of the minimum circumscribed rectangle lies, m_c denotes the abscissa of the c-th pixel of the set E, and n_c denotes the ordinate of the c-th pixel of the set E; the two pixel points corresponding to the minimum distances to the two wide sides are denoted p1 and p2 respectively;
② establishing a two-dimensional coordinate system with p1 as the coordinate origin, the x axis and y axis being parallel to the long side and the wide side, respectively, of the minimum circumscribed rectangle corresponding to the target object region;
③ taking the N pixels of the edge contour of the target object region whose Euclidean distance to p1 is less than λ and forming their coordinates into a 2 × N matrix F_2N, λ ∈ {1, ..., 100}, N being a finite positive integer; the abscissas of the N pixel points form the elements f_1n of row 1 of the matrix F_2N and the ordinates of the N pixel points form the elements f_2n of row 2 of the matrix F_2N, where 1 ≤ n ≤ N and n is a finite positive integer;
④ according to the coordinate values of p1 and the element values of F_2N, determining the opening orientation of the sharp corner at point p1 among 6 cases:
f_1n ≥ 0 and f_2n takes both positive and negative values; f_1n ≥ 0 and f_2n ≤ 0; f_1n ≥ 0 and f_2n > 0; f_1n < 0 and f_2n takes both positive and negative values; f_1n < 0 and f_2n ≤ 0; f_1n < 0 and f_2n > 0;
⑤ determining the two end-point pixels ep1 and ep2 according to the orientation of the sharp-corner opening, determining the straight-line vectors from ep1 and ep2 to p1 respectively, and adding the two vectors to obtain the angle-bisector vector of the sharp corner;
⑥ according to the angle-bisector vector, dividing the N pixels whose Euclidean distance to p1 is smaller than λ into two groups of pixel sets, and fitting each group with the least-squares method to obtain two straight lines with direction vectors L1 and L2; the cosine of the angle α1 between the two straight lines is determined according to formula (5):
cos α1 = L1·L2 / (|L1| |L2|)    (5)
the sharp angle α1 corresponding to point p1 being solved by the inverse trigonometric function;
⑦ following steps ② to ⑥, obtaining the sharp angle α2 corresponding to point p2, the final tip angle α being determined by formula (6):
α = min{α1, α2}    (6)
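A simplified Python sketch of steps ④ to ⑥: the end points are taken here as the extremes of the near-tip pixel set rather than through the six orientation cases (which claim 2 enumerates in full), and the resulting groups feed the angle computation of formula (5):

```python
import numpy as np

def bisector_and_groups(F):
    # F: 2 x N matrix of near-tip pixel coordinates relative to p1,
    # the origin of the local coordinate system of step 2
    F = np.asarray(F, dtype=float)
    ep1 = F[:, np.argmax(F[1])]   # simplified end-point choice: extreme
    ep2 = F[:, np.argmin(F[1])]   # ordinates stand in for the 6 cases
    # sum of unit vectors from the origin toward the end points; the sign
    # of the bisector direction does not affect the two-sided split below
    bis = ep1 / np.linalg.norm(ep1) + ep2 / np.linalg.norm(ep2)
    cross_z = bis[0] * F[1] - bis[1] * F[0]  # side of the bisector line
    group1 = F[:, cross_z >= 0].T            # one side, N1 x 2 points
    group2 = F[:, cross_z < 0].T             # other side, N2 x 2 points
    return bis, group1, group2
```

The two groups can then be passed to the least-squares angle sketch given after formula (6) in the description above.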
(6) feature fusion
The feature values of each training sample are normalized by the L2 norm according to formula (7):
g′_u,v = g_u,v / ||G_u||_2    (7)
where G_u is a feature vector, G_u = [g_u,v | u ∈ {1, 2, 3, 4}, 1 ≤ v ≤ 1000]^T, u is the feature index and v is the training sample index, both positive integers;
the 4 feature vectors are finally fused into the 1000 × 4 dimensional shape feature vector G formed by formula (8):
G = [G1, G2, G3, G4]    (8)
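A minimal Python sketch of the L2 normalization and fusion of formulas (7) and (8):

```python
import numpy as np

def fuse_features(G):
    # G: 4 x 1000 array, one row per feature u, one column per sample v
    G = np.asarray(G, dtype=float)
    G = G / np.linalg.norm(G, axis=1, keepdims=True)  # formula (7)
    return G.T                                        # formula (8): 1000 x 4
```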
(7) training support vector machine
Selecting a support vector machine whose kernel function is the radial basis function, and sending the 1000 × 4 dimensional shape feature vectors extracted from the training set into the support vector machine for training to obtain a support vector machine prediction model;
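For illustration, scikit-learn's RBF-kernel support vector machine mirrors steps (7) and (8); the argument names are assumptions standing for the outputs of steps (1) to (6):

```python
from sklearn.svm import SVC

def train_and_predict(train_features, train_labels, test_features):
    # train_features: 1000 x 4 fused matrix of step (6);
    # train_labels: 1 for cutter images, 0 for the other scene images
    clf = SVC(kernel="rbf")            # radial-basis-function kernel, step (7)
    clf.fit(train_features, train_labels)
    return clf.predict(test_features)  # step (8): 1 = cutter image, 0 = other
```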
(8) identifying on-site investigation tool images
The shape feature vector corresponding to each on-site investigation image to be identified is obtained by the operations of steps (2) to (6) and sent into the support vector machine prediction model obtained in step (7) for identification; an image whose identification result label is 1 is an on-site investigation cutter image, and an image whose identification result label is 0 is not an on-site investigation cutter image.
2. The on-site investigation cutter image recognition method according to claim 1, wherein in step ⑤ of step 4) of the step (5) of extracting the target object region shape features, the two end-point pixels ep1 and ep2 are determined according to the orientation of the sharp-corner opening as follows:
in case 1, where f_1n ≥ 0 and f_2n takes both positive and negative values, ep1 is the pixel point with the maximum f_2n and ep2 is the pixel point with the minimum f_2n; in case 2, where f_1n ≥ 0 and f_2n ≤ 0, ep1 is the pixel point with the maximum f_1n and ep2 is the pixel point with the minimum f_2n; in case 3, where f_1n ≥ 0 and f_2n > 0, ep1 is the pixel point with the maximum f_2n and ep2 is the pixel point with the maximum f_1n; in case 4, where f_1n < 0 and f_2n takes both positive and negative values, ep1 is the pixel point with the maximum f_2n and ep2 is the pixel point with the minimum f_2n; in case 5, where f_1n < 0 and f_2n ≤ 0, ep1 is the pixel point with the minimum f_1n and ep2 is the pixel point with the minimum f_2n; in case 6, where f_1n < 0 and f_2n > 0, ep1 is the pixel point with the maximum f_2n and ep2 is the pixel point with the minimum f_1n.
3. The on-site investigation cutter image recognition method according to claim 1, wherein μ is taken as 75 in step ① of step 4) of the step (3) of locating the target object region.
4. The on-site investigation cutter image recognition method according to claim 1, wherein θ is taken as 5° in step 2) of the step (4) of extracting the minimum circumscribed rectangle of the target object region.
5. The on-site investigation cutter image recognition method according to claim 1, wherein λ is taken as 50 in step ③ of step 4) of determining the tip angle in the step (5) of extracting the target object region shape features.
CN201910866132.5A 2019-09-12 2019-09-12 Cutter image identification method for on-site investigation Expired - Fee Related CN110728304B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910866132.5A CN110728304B (en) 2019-09-12 2019-09-12 Cutter image identification method for on-site investigation

Publications (2)

Publication Number Publication Date
CN110728304A true CN110728304A (en) 2020-01-24
CN110728304B CN110728304B (en) 2021-08-17

Family

ID=69218942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910866132.5A Expired - Fee Related CN110728304B (en) 2019-09-12 2019-09-12 Cutter image identification method for on-site investigation

Country Status (1)

Country Link
CN (1) CN110728304B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106312692A (en) * 2016-11-02 2017-01-11 哈尔滨理工大学 Tool wear detection method based on minimum enclosing rectangle
CN106845443A (en) * 2017-02-15 2017-06-13 福建船政交通职业学院 Video flame detecting method based on multi-feature fusion
CN107688830A (en) * 2017-08-30 2018-02-13 西安邮电大学 It is a kind of for case string and show survey visual information association figure layer generation method
US20190244346A1 (en) * 2018-02-07 2019-08-08 Analogic Corporation Visual augmentation of regions within images
CN109241948A (en) * 2018-10-18 2019-01-18 杜海朋 A kind of NC cutting tool visual identity method and device
CN109583482A (en) * 2018-11-13 2019-04-05 河海大学 A kind of infrared human body target image identification method based on multiple features fusion Yu multicore transfer learning
CN109740595A (en) * 2018-12-27 2019-05-10 武汉理工大学 A kind of oblique moving vehicles detection and tracking system and method based on machine vision

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YING LIU et al.: "Study on rotation-invariant texture feature extraction for tire pattern retrieval", Multidim Syst Sign Process *
LIU Ying et al.: "A survey of crime scene investigation image retrieval", Acta Electronica Sinica *
HAO Wenying et al.: "Automatic computer classification and recognition of cucumber leaf shapes", Journal of Hebei Normal University of Science & Technology *

Also Published As

Publication number Publication date
CN110728304B (en) 2021-08-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210817