CN109919228B - Target rapid detection method and device - Google Patents

Target rapid detection method and device

Info

Publication number
CN109919228B
CN109919228B (application CN201910174066.5A)
Authority
CN
China
Prior art keywords
feature
subset
weight
detection frame
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910174066.5A
Other languages
Chinese (zh)
Other versions
CN109919228A (en)
Inventor
刘若堃
肖立波
张涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wangwei Technology Zhejiang Co ltd
Original Assignee
Wangwei Technology Zhejiang Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wangwei Technology Zhejiang Co ltd filed Critical Wangwei Technology Zhejiang Co ltd
Priority to CN201910174066.5A priority Critical patent/CN109919228B/en
Publication of CN109919228A publication Critical patent/CN109919228A/en
Application granted granted Critical
Publication of CN109919228B publication Critical patent/CN109919228B/en

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for rapidly detecting a target, comprising the following steps: 1) acquiring an image to be detected and a detection frame size for the image, the detection frame being no larger than the image to be detected; 2) for each feature subset, merging the feature weight of the feature subset with the feature weights of the other feature subsets corresponding to the ratio between them; 3) calculating an integral value for each feature subset; 4) judging whether the integral value of the feature subset within the detection frame area is greater than a set threshold; 5) if so, sliding the detection frame by a first step length along its sliding direction and returning to step 3), until the target in the image to be detected is detected; 6) if not, sliding the detection frame by a second step length along its sliding direction and returning to step 3), until the target in the image to be detected is detected. The invention also discloses a corresponding device for rapid target detection. Applying the embodiments of the invention reduces the operational complexity.

Description

Target rapid detection method and device
Technical Field
The present invention relates to a method and an apparatus for detecting a target, and more particularly, to a method and an apparatus for rapidly detecting a target.
Background
Multi-target detection needs to detect the positions of multiple targets in an image and is widely applied in the field of visual detection and recognition. The process generally comprises the following steps: 1. Haar features are obtained through training; they generally comprise three types of feature values: sum features, sqsum features (used for calculating the variance), and tilted features. 2. An integral image of the complete image is calculated. Taking the matrix integral sum feature as an example, because sum is a matrix integral computed by stepwise accumulation, calculating the integral image of the complete image once has an operation complexity of H × W × 2 additions. 3. A detection window area is then defined, with fixed width detec_w and height detec_h, and is slid across the complete image with a certain step length; whether the detection area satisfies the Haar feature values, i.e. whether the Haar feature of the area is greater than a predefined threshold, is then judged as follows: a. The detection window variance value is calculated with the formula detec_nf = detec_w × detec_h × valsqsum − valsum × valsum, wherein valsum is the integral of each point in the detection window area and valsqsum is the integral of the square of each point in the detection window area. b. The Haar feature contains nstages sets, each set comprising ntrees[i] subsets, wherein i = 0, …, nstages − 1, and the total number of features is harr_num. By means of the formula,
featureValue = SUM(weight[0] × featureEvaluator[0], …, weight[n−1] × featureEvaluator[n−1]), the feature value in each tree subset of the ntrees[i] subsets is calculated respectively, wherein featureEvaluator is the integral of the feature window, weight is the weight value, and the weight and featureEvaluator of different subsets are mutually independent. Fig. 1 is a schematic diagram of the principle of calculating the sum feature among the Haar features; as shown in fig. 1, taking n = 3 as an example, P[m][0] to P[m][3] respectively correspond to the 4 integral-image end points of the sum feature, with m = 0, …, 2, giving:
featureValue[0]=P[0][0]+P[0][3]-P[0][1]-P[0][2];
featureValue[1]=P[1][0]+P[1][3]-P[1][1]-P[1][2];
featureValue[2]=P[2][0]+P[2][3]-P[2][1]-P[2][2];
If weight[0] = 3, weight[1] = 2, weight[2] = −1, the above parameters are substituted into the formula,
featureValue = SUM(weight[0] × featureEvaluator[0], …, weight[n−1] × featureEvaluator[n−1]), and the feature value in each tree subset is calculated.
c. Whether the Haar feature in each tree subset is greater than a predefined threshold is then judged, i.e. if (featureValue[node] / detec_nf > th_node[node]), and the positions greater than the threshold are screened for subsequent processing, wherein node = 0, …, ntrees[i] and the th_node threshold of each tree subset is independent. As can be seen from the above, the operation complexity of each detection window region is: additions: 12; multiplications: 3; floating-point divisions: 1 (featureValue[node] / detec_nf; to avoid excessive loss of precision, floating-point operation is usually adopted in implementation). 4. If the value is greater than the threshold, the location of the area in the image is recorded. Finally, each detection area of the image is traversed, all recorded areas are calculated, and the valid positions are screened through the position information. In summary, because of the sliding detection window (in this example the sliding step is 1), the total computation complexity of detection is: feature number × pixel number × (12 additions + 3 multiplications + 1 floating-point division), wherein the pixel number is H × W.
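As a non-authoritative illustration of the prior-art flow above (a summed-area table built by stepwise accumulation, then each rectangle sum read from its 4 end points as P[m][0] + P[m][3] − P[m][1] − P[m][2]), a minimal sketch; the function names are assumptions, not the patent's code:

```python
def integral_image(img):
    """sum[y][x] = sum of img[0..y][0..x], built by row-wise accumulation."""
    h, w = len(img), len(img[0])
    sat = [[0] * w for _ in range(h)]
    for y in range(h):
        row_acc = 0
        for x in range(w):
            row_acc += img[y][x]                     # accumulate along the row
            sat[y][x] = row_acc + (sat[y - 1][x] if y else 0)
    return sat

def rect_sum(sat, x0, y0, x1, y1):
    """Sum over the inclusive rectangle (x0,y0)-(x1,y1) from 4 corner look-ups."""
    a = sat[y1][x1]
    b = sat[y0 - 1][x1] if y0 else 0
    c = sat[y1][x0 - 1] if x0 else 0
    d = sat[y0 - 1][x0 - 1] if x0 and y0 else 0
    return a - b - c + d                             # the P0 + P3 - P1 - P2 pattern
```

Building the table once costs roughly 2 additions per pixel, matching the H × W × 2 additions stated above.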
The inventor finds that the prior art has the technical problem of high operation complexity.
Disclosure of Invention
The invention aims to provide a method and a device for quickly detecting a target, so as to solve the technical problem of high operation complexity in the prior art.
The invention solves the technical problems through the following technical scheme:
the embodiment of the invention provides a method for quickly detecting a target, which comprises the following steps:
1) Acquiring an image to be detected and a detection frame size for the image to be detected, the size of the detection frame being no larger than that of the image to be detected;
2) For each feature subset, according to the ratio between the feature weight of the feature subset and the feature weights of the other feature subsets, merging the feature weight of the feature subset with the feature weights of the other feature subsets corresponding to the ratio, and updating the feature weight of the feature subset and the feature weights of those other feature subsets to the merged feature weight;
3) Acquiring feature subsets in a corresponding size range in the image to be detected according to the size of the detection frame, and calculating an integral value of each feature subset in the detection frame area according to the feature weight of each feature subset;
4) Judging whether the integral value of the feature subset in the detection frame area is larger than a set threshold value or not;
5) If so, sliding the detection frame by a first step length according to the sliding direction of the detection frame, and returning to execute the step 3) until the target in the image to be detected is detected;
6) If not, sliding the detection frame by a second step length according to the sliding direction of the detection frame and returning to step 3), until the target in the image to be detected is detected.
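The sliding scheme of steps 3)-6) can be sketched as follows; this is an illustrative reconstruction, with `score_window` standing in for the integral-value computation of step 3), and all names, default step lengths, and the row-advance rule are assumptions:

```python
def slide_detect(img_w, img_h, det_w, det_h, score_window, threshold,
                 step_hit=1, step_miss=4):
    """Return window origins whose score exceeds `threshold`.

    step_hit  -- the 'first step length', used when the threshold test passes
    step_miss -- the 'second step length', used when it fails
    """
    hits = []
    y = 0
    while y + det_h <= img_h:
        x = 0
        while x + det_w <= img_w:
            if score_window(x, y) > threshold:
                hits.append((x, y))
                x += step_hit          # fine step near a likely target
            else:
                x += step_miss         # coarse step over background
        y += step_hit                  # row advance: an assumption
    return hits
```

The point of the two step lengths is that regions failing the test can be skipped with the coarser step, reducing the number of windows evaluated.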
Optionally, the step 2) includes:
a: for each feature subset, when the number of the types of the feature values of the feature subset is greater than a set number, judging whether the ratio of the feature weight of the feature subset to the feature weights of other feature subsets except the feature subset is greater than a first preset threshold;
B: if so, updating the feature weight of the feature subset and the feature weights of the other feature subsets to the average of the feature weight of the feature subset and the feature weights of those other feature subsets, and updating the number of kinds of feature values of the feature subset;
c: if not, judging whether the number of the types of the updated characteristic values of the characteristic subset is larger than the set number, if so, increasing the first preset threshold according to the set step length to obtain a second preset threshold, updating the first preset threshold to the second preset threshold, and returning to execute the step A until the number of the types of the updated characteristic values of the characteristic subset is not larger than the set number; if not, executing the step 3).
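A minimal sketch of the merging loop in steps A-C, assuming positive weights sorted in ascending order; the function name, the initial bound, and the widening step are assumptions:

```python
def merge_weights(weights, weight_num_max, a=1.1, a_step=0.1):
    """Greedily merge weight values whose ratio lies under a growing bound `a`
    until at most `weight_num_max` distinct values remain (positive weights)."""
    groups = sorted(set(weights))
    while len(groups) > weight_num_max:
        merged, i = [], 0
        while i < len(groups):
            j = i
            # grow the group while the ratio to the group's first value stays < a
            while j + 1 < len(groups) and groups[j + 1] / groups[i] < a:
                j += 1
            block = groups[i:j + 1]
            merged.append(sum(block) / len(block))   # merge by averaging (step B)
            i = j + 1
        if len(merged) == len(groups):
            a += a_step                              # widen the ratio range (step C)
        groups = merged
    return groups
```

With `weight_num_max` large enough, nothing is merged and the weights pass through unchanged, mirroring case (1) described below.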
Optionally, the calculating an integral value of each feature subset in the detection frame region according to the combined feature weight includes:
updating the feature value of each feature subset in the image to be detected by taking the product of the merged feature weight and the integral value of each pixel in the complete image as the new integral value; and calculating the integral value of each feature subset in the detection frame area from the product of the feature weight of each feature subset and the new integral values of the corresponding pixel points.
Optionally, the determining whether the integral value of the feature subset in the detection frame area is greater than a set threshold value includes:
judging whether the integral value of the feature subset in the detection frame area is greater than the set threshold using the formula if (featureValue[node] > th_node[node] × detec_nf), wherein
if () is a decision function; featureValue[node] is the integral value of the feature subset within the detection frame region; th_node[node] is the set threshold; detec_nf is the variance value of the detection frame.
Optionally, the determining whether the integral value of the feature subset in the detection frame area is greater than a set threshold value includes:
by means of the formula,
if (featureValue[node] > (a × detec_nf) >> b),
determining whether the integral value of the feature subset within the detection frame area is greater than the set threshold, wherein
if () is a decision function; featureValue[node] is the integral value of the feature subset within the detection frame region; a and b are integers; detec_nf is the variance value of the detection frame; node corresponds to the subscript of each feature subset.
The embodiment of the invention also provides a device for quickly detecting the target, which comprises:
the acquisition module is used for acquiring an image to be detected and the size of a detection frame aiming at the image to be detected, and the size of the detection frame is not larger than the size of the image to be detected;
a merging module, configured to merge, for each feature subset, the feature weight of the feature subset with the feature weights of the other feature subsets corresponding to the ratio according to the ratio between the feature weight of the feature subset and the feature weights of the other feature subsets, and to update the feature weight of the feature subset and the feature weights of those other feature subsets to the merged feature weight;
the calculation module is used for acquiring feature subsets in a corresponding size range in the image to be detected according to the size of the detection frame and calculating an integral value of each feature subset in the detection frame area according to the feature weight of each feature subset;
the judging module is used for judging whether the integral value of the feature subset in the detection frame area is larger than a set threshold value or not;
the first sliding module is used for sliding the detection frame by a first step length according to the sliding direction of the detection frame under the condition that the judgment result of the judging module is positive, and triggering the calculation module until the target in the image to be detected is detected;
and the second sliding module is used for sliding the detection frame by a second step length according to the sliding direction of the detection frame under the condition that the judgment result of the judgment module is negative, and triggering the calculation module until the target in the image to be detected is detected.
Optionally, the merging module is configured to:
a: for each feature subset, when the number of the types of the feature values of the feature subset is greater than a set number, judging whether the ratio of the feature weight of the feature subset to the feature weights of other feature subsets except the feature subset is greater than a first preset threshold;
B: if so, updating the feature weight of the feature subset and the feature weights of the other feature subsets to the average of the feature weight of the feature subset and the feature weights of those other feature subsets, and updating the number of kinds of feature values of the feature subset;
c: if not, judging whether the number of the types of the feature values of the updated feature subset is larger than the set number, if so, increasing the first preset threshold according to the set step length to obtain a second preset threshold, updating the first preset threshold to the second preset threshold, and returning to execute the step A until the number of the types of the feature values of the updated feature subset is not larger than the set number; if not, triggering the calculation module.
Optionally, the calculating module is configured to:
updating the characteristic value of each characteristic subset in the image to be detected into a new integral value which is the product of the combined characteristic weight and the integral value of each pixel in the complete image; and calculating the integral value of each characteristic subset in the detection frame area according to the product of the characteristic weight of each characteristic subset and the new integral value of the corresponding pixel point.
Optionally, the determining module is configured to:
judging whether the integral value of the feature subset in the detection frame area is greater than the set threshold using the formula if (featureValue[node] > th_node[node] × detec_nf), wherein
if () is a decision function; featureValue[node] is the integral value of the feature subset within the detection frame region; th_node[node] is the set threshold; detec_nf is the variance value of the detection frame.
Optionally, the determining module is configured to:
by means of the formula,
if (featureValue[node] > (a × detec_nf) >> b),
determining whether the integral value of the feature subset within the detection frame area is greater than the set threshold, wherein
if () is a decision function; featureValue[node] is the integral value of the feature subset in the detection frame area; a and b are integers; detec_nf is the variance value of the detection frame; node corresponds to the subscript of each feature subset.
Compared with the prior art, the invention has the following advantages:
by applying the embodiment of the invention, aiming at each pixel point, according to the ratio between the characteristic weight of the pixel point and the characteristic weights of other characteristic subsets except the pixel point, the characteristic weight of the pixel point and the characteristic weights of the other characteristic subsets corresponding to the ratio are combined, and the characteristic weight of the pixel point and the characteristic weights of the other characteristic subsets corresponding to the ratio are updated into the combined characteristic weight, so that the number of the characteristic weights for calculation is reduced, the number of the integral graphs is further reduced, and the operation complexity is further reduced.
Drawings
FIG. 1 is a schematic diagram illustrating the principle of eigenvalue calculation in the prior art;
fig. 2 is a schematic flowchart of a method for rapidly detecting a target according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a method for calculating an integral value according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating an integral map calculation according to an embodiment of the present invention;
fig. 5 is a diagram illustrating an effect of detecting the Haar feature according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a device for rapidly detecting a target according to an embodiment of the present invention.
Detailed Description
The following examples are given for the detailed implementation and specific operation of the present invention, but the scope of the present invention is not limited to the following examples.
The embodiment of the invention provides a method and a device for quickly detecting a target, and firstly, the method for quickly detecting the target provided by the embodiment of the invention is introduced below.
Fig. 2 is a schematic flow chart of a method for rapidly detecting a target according to an embodiment of the present invention, as shown in fig. 2, the method includes:
s101: the method comprises the steps of obtaining an image to be detected, aiming at the size of a detection frame of the image to be detected, and enabling the size of the detection frame to be not more than the size of the image to be detected.
Illustratively, image data of a complete image to be detected is obtained, wherein the width is W, the height is H, the value of H in this embodiment is 640, and the value of W is 480.
A sliding integral image is defined, wherein the width of the integral image area is fixed to the width W of the complete image and the height is the height of the detection window. The detection window area is defined with width detec_w and height detec_h.
S102: and aiming at each feature subset of the haar features, combining the feature weight of the feature subset with the feature weight of other feature subsets corresponding to the ratio according to the ratio between the feature weight of the feature subset and the feature weight of other feature subsets except the haar features, and updating the feature weight of the feature subset and the feature weight of other feature subsets corresponding to the ratio into the combined feature weight.
Specifically, the step S102 may include: A: for each feature subset, when the number of kinds of feature values of the feature subset is greater than a set number, judging whether the ratio between the feature weight of the feature subset and the feature weights of the other feature subsets is greater than a first preset threshold; B: if so, updating the feature weight of the feature subset and the feature weights of the other feature subsets to the average of the feature weight of the feature subset and the feature weights of those other feature subsets, and updating the number of kinds of feature values of the feature subset; C: if not, judging whether the updated number of kinds of feature values of the feature subset is greater than the set number; if so, increasing the first preset threshold by a set step length to obtain a second preset threshold, updating the first preset threshold to the second preset threshold, and returning to step A until the updated number of kinds of feature values of the feature subset is not greater than the set number; if not, executing step S103.
Illustratively, the number of kinds of weight (feature weight) values corresponding to all feature subsets of the Haar feature is counted. The number of kinds of feature weights is the number of mutually different values among the feature weight values: for example, if the feature weights can take the values 0.1, 0.2 and 0.3, the number of kinds of feature weights is 3; if the feature weights can take the values 0.1, 0.2, 0.3 and 0.1, the number of kinds is still 3. It is understood that the method for obtaining the feature subsets of Haar features is prior art and is not described herein.
In practical application, weight _ num values can be defined, merging is carried out according to a certain strategy, the number of the merged weights is limited not to exceed weight _ num _ max, and the merging principle is as follows:
(1) When weight _ num is less than weight _ num _ max, no combination is needed;
(2) When weight_num > weight_num_max, weights whose ratios are as close as possible are merged into one group, for example:
an initial ratio range a = 1.1 is set; when weight[m] / weight[n] < a, wherein m, n = 0, …, the feature value weights satisfying the inequality are merged into one group, the merged value being the average of weight[m] and weight[n];
when the number of groups after merging is not greater than weight_num_max, merging is finished; when the number of groups after merging is greater than weight_num_max, the ratio range is increased to a = 1.2, and merging continues according to the above principle until the number of merged feature value weights is not greater than weight_num_max.
In practical application, the number of merged weights may be com_weight_num, with values com_weight[0], …, com_weight[com_weight_num − 1], where com_weight[0] is the average of all weights in group 1, and so on.
For example, the value of com_weight_num is 3 if there are only the 3 values −1, 2 and 3 among the merged com_weight values.
S103: and acquiring feature subsets in a corresponding size range in the image to be detected according to the size of the detection frame, and calculating an integral value of each feature subset in the detection frame area according to the feature weight of each feature subset.
Illustratively, for the feature subsets within the detection frame area of size 25 × 35, an integral value is calculated for each feature subset. In general, the integral value of each feature subset may be as shown in fig. 3:
in fig. 3, sum [ x ] [ y ] represents the integral value of the pixel point corresponding to the coordinate, and the integral value of the newly added integral point is the integral value of a point one before the current row plus the accumulation of the numerical values of all points in front of the current column (including the current point), where x is the coordinate or the row number corresponding to the pixel point; y is the coordinate or the column number corresponding to the pixel point.
FIG. 3 is a schematic diagram illustrating a method for calculating an integral value according to an embodiment of the present invention; as shown in fig. 3, the feature value of each feature subset in the image to be detected may be updated in advance, and the product of the combined feature weight and the integral value of each pixel in the complete image is used as a new integral value; and calculating the integral value of each characteristic subset in the detection frame area according to the product of the characteristic weight of each characteristic subset and the new integral value of the corresponding pixel point.
The image in the middle of fig. 3 represents the complete image, from which 3 integral images are derived with com_weight = −1 (lower left), 2 (upper left) and 3 (right), i.e. one integral image corresponds to each feature weight.
Each point in each derived integral image is the com_weight value multiplied by the corresponding integral value of the complete image; the integral images are denoted weight_tab[0], weight_tab[1] and weight_tab[2].
The feature value of each Haar feature is then calculated from the product of the integral values of the pixel points within the detection frame area in each derived integral image and the feature weight, and the feature values of the Haar features in the feature subsets are accumulated to obtain the feature value of the feature subset in each detection frame area.
The operation complexity of the com_weight integral images is:
com_weight_num × W × H × 2 additions + com_weight_num × W × H multiplications.
If only the 3 values −1, 2 and 3 exist in com_weight after merging, then com_weight_num = 3.
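The derived weight integral images described above can be sketched as follows: one pre-multiplied table per merged weight, so that per-window feature evaluation needs only table look-ups and additions. All names are assumptions, and `sat` stands for the integral image of the complete picture:

```python
def build_weight_tabs(sat, com_weights):
    """weight_tab[k][y][x] = com_weights[k] * sat[y][x]: one derived integral
    image per merged weight, computed once for the whole image."""
    return [[[w * v for v in row] for row in sat] for w in com_weights]

def feature_value(weight_tabs, weight_idx, endpoints):
    """Accumulate P0 + P3 - P1 - P2 from the pre-weighted tables, so the
    per-window work is additions only (no per-window weight multiplications).
    endpoints: one (P0, P1, P2, P3) tuple of (x, y) points per sub-feature."""
    total = 0
    for idx, (p0, p1, p2, p3) in zip(weight_idx, endpoints):
        tab = weight_tabs[idx]
        total += (tab[p0[1]][p0[0]] + tab[p3[1]][p3[0]]
                  - tab[p1[1]][p1[0]] - tab[p2[1]][p2[0]])
    return total
```

The one-time cost of building the tables is the com_weight_num × W × H multiplications stated above; it is paid once per image instead of once per window.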
S104: judging whether the integral value of the feature subset in the detection frame area is larger than a set threshold value or not; if yes, go to step S105; if not, go to S106.
Fig. 4 is a schematic diagram of a principle of integral map calculation according to an embodiment of the present invention, as shown in fig. 4, a sum feature is taken as an example to describe a processing procedure of an integral map, and an integral map of each weight is calculated by taking a detection window as a unit, as follows:
Illustratively, the integral value of the feature subset within the detection frame area is calculated using the following formula:
featureValue = SUM(weight_tab[weight_idx[0]] -> featureEvaluator[0], …, weight_tab[weight_idx[n−1]] -> featureEvaluator[n−1]), wherein
weight_tab[] are the integral images obtained in step S103 for the different com_weight values;
weight_idx[i] is the index of the updated feature weight in weight_tab, with i = 0, 1, …, n−1; n is the number of subsets in the Haar feature;
weight_tab[weight_idx[0]] -> featureEvaluator[0] denotes taking the integral value of the feature subset from the integral image indexed by the feature weight in weight_tab.
For example, com_weight = −1 corresponds to weight_idx[i] = 0; com_weight = 2 corresponds to weight_idx[i] = 1; com_weight = 3 corresponds to weight_idx[i] = 2; wherein i = 0, …, n−1.
As shown in fig. 4, P[m][0] to P[m][3] correspond to the 4 integral-image end points of the m-th feature, with m = 0, …, n−1:
in the prior art, when a feature weight is calculated, each time a pixel area is selected by using a detection frame, integral values of all feature subsets in the detection frame area are calculated according to the product of the feature weight and the integral values of the feature subsets; the multiplication occupies a large proportion, and the calculation complexity corresponding to one detection frame is as follows:
feature number pixel number (12 additions +3 multiplications +1 multiplication +1 shift).
By applying the above embodiment of the present invention, a weight integral table is used: the integral values of all feature subsets in the image to be detected are calculated in advance from the product of the feature weight and the integral value of the feature subset, and then in step S104 the integral values of the feature subsets are called up directly according to the area selected by the detection frame, avoiding the repeated calculation incurred in the prior art by recalculating at every frame selection. Although the new weight integral image introduces an operation of com_weight_num × pixel number × 2 additions + com_weight_num × pixel number multiplications, it saves feature number × pixel number × 3 multiplications; the operation complexity of the com_weight integral image, com_weight_num × W × H × 2 additions + com_weight_num × W × H multiplications, is clearly less than that of the prior art, and thus the embodiments of the present invention reduce the overall computational complexity.
As can be seen from the above, to achieve the operation optimization effect, the maximum merged weight number weight_num_max should not exceed the feature number. By merging weights, com_weight_num is effectively limited, so that multiplications are substantially reduced in engineering implementation, the operation complexity of the detection window is effectively reduced, and finally the power consumption of a chip can be effectively reduced.
And then, judging whether the integral value of the feature subsets in the detection frame area is larger than a set threshold value or not, and if so, taking the set of the feature subsets with the integral value larger than the set threshold value as the area where the target is located.
Specifically, whether the integral value of the feature subset in the detection frame region is greater than the set threshold may be judged using the formula if (featureValue[node] > th_node[node] × detec_nf), wherein
if () is a decision function; featureValue[node] is the integral value of the feature subset in the detection frame area; th_node[node] is the set threshold; detec_nf is the variance value of the detection frame.
In practical application, the judgment of whether the Haar feature is greater than the predefined threshold by a floating-point division, i.e. if (featureValue[node] / detec_nf > th_node[node]), may be converted
to if (featureValue[node] > th_node[node] × detec_nf), with operation complexity:
feature number × H × W × (12 additions + 3 multiplications + 1 floating-point multiplication).
By applying the embodiment of the invention, the division floating-point operation is changed into the floating-point multiplication operation, so that the total operation expense can be reduced.
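The division-to-multiplication rewrite can be checked with a small sketch; the two tests are equivalent whenever detec_nf > 0, which holds for the variance value here, and the function names are illustrative:

```python
def passes_div(feature_value, detec_nf, th):
    # original per-window test: costs one floating-point division
    return feature_value / detec_nf > th

def passes_mul(feature_value, detec_nf, th):
    # rewritten test: costs one floating-point multiplication,
    # same outcome for any detec_nf > 0
    return feature_value > th * detec_nf
```

Multiplication is cheaper than division on most hardware, which is the whole saving claimed above.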
Further, in order to ensure precision while reducing floating-point operations, the value is converted from a floating-point to a fixed-point representation, and the formula,
if (featureValue[node] > (a × detec_nf) >> b),
may be used to determine whether the integral value of the feature subset within the detection frame area is greater than the set threshold, wherein
if () is a decision function; featureValue[node] is the integral value of the feature subset in the detection frame area; a and b are integers; detec_nf is the variance value of the detection frame; node corresponds to the subscript of each feature subset.
Illustratively, the threshold th_node[node] is converted into a fixed-point expression, namely:
th_node[node] ≈ a / 2^b,
where a and b are integers; then th_node[node] × detec_nf may be represented as
(a × detec_nf) >> b.
Since (a × detec_nf) >> b requires only an integer multiplication and a shift, the total computation complexity of steps S103 and S104 becomes:
feature number × H × W × (12 additions + 3 multiplications + 1 shift).
By applying the embodiment of the invention, the operation expense can be further saved.
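The fixed-point conversion above can be sketched as follows; th_node, a, b and detec_nf follow the text, while the helper names and sample values are assumptions made for the sketch:

```python
# Sketch of the fixed-point rewrite: the floating-point threshold
# th_node is approximated by a / 2**b with integers a and b, so that
# th_node * detec_nf becomes the integer expression (a * detec_nf) >> b.
# Helper names and the sample values are illustrative.

def to_fixed_point(th_node, b):
    # a = round(th_node * 2**b), so th_node ~= a / 2**b
    return round(th_node * (1 << b))

def passes_fixed_point(featureValue, a, b, detec_nf):
    # Integer-only comparison: one multiplication and one shift.
    return featureValue > (a * detec_nf) >> b

th_node = 1.5
b = 8
a = to_fixed_point(th_node, b)   # 1.5 * 256 = 384, representable exactly
# For these inputs the fixed-point test agrees with the floating-point one.
assert passes_fixed_point(40, a, b, 20) == (40 > th_node * 20)
assert passes_fixed_point(10, a, b, 20) == (10 > th_node * 20)
```

For thresholds that are not exactly representable as a / 2^b, a larger b trades a wider multiplier for a smaller rounding error; the choice of b = 8 here is arbitrary.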
S105: sliding the detection frame by a first step length according to the sliding direction of the detection frame, and returning to execute the step S103 until the target in the image to be detected is detected;
Illustratively, the calculation principle in step S106 is the same as that in step S105; the only difference is the sliding step length.
S106: and sliding the detection frame by a second step length according to the sliding direction of the detection frame and returning to execute the step S103 until the target in the image to be detected is detected.
By applying the embodiment of the invention, the total computation complexity of steps S101 to S104 is:
com_weight_num × W × H × 2 additions + com_weight_num × W × H multiplications + feature number × H × W × (12 additions + 1 multiplication + 1 shift), which is clearly lower than that of the prior art:
feature number × W × H × (12 additions + 3 multiplications + 1 floating-point division).
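The two-step sliding strategy of steps S105/S106 can be sketched as a simple row scan. The names scan_row and score, the toy score function, and the step sizes are invented for the sketch and are not part of the patent:

```python
# Sketch of the adaptive sliding step: when a window looks promising
# (its score exceeds the threshold) the detector advances by a small
# first step; otherwise it skips ahead by a larger second step.
# score stands in for the real integral-value evaluation.

def scan_row(width, win, score, threshold, step_hit=1, step_miss=4):
    hits = []
    x = 0
    while x + win <= width:
        if score(x) > threshold:
            hits.append(x)
            x += step_hit     # first, finer step around candidates
        else:
            x += step_miss    # second, coarser step over background
    return hits

# Toy score: only windows starting at x in [10, 13] exceed the threshold.
hits = scan_row(32, win=8,
                score=lambda x: 1.0 if 10 <= x <= 13 else 0.0,
                threshold=0.5)
```

With these toy values the scan lands on x = 12 and x = 13: the coarse step jumps 0 → 4 → 8 → 12, and the fine step then walks through the candidate region, so most background positions are never evaluated.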
By applying the embodiment shown in fig. 1 of the present invention, for each feature subset, the feature weight of the feature subset is merged with the feature weights of the other feature subsets according to the ratio between them, and both are updated to the merged feature weight. This reduces the number of feature weights used in the calculation, and hence the number of integral images, thereby lowering the operation complexity.
In addition, by reducing the operation complexity, the embodiment of the invention also reduces the memory occupied by the target search process.
Corresponding to the embodiment of the invention shown in FIG. 1, an embodiment of the invention also provides an apparatus for rapidly detecting a target.
Fig. 6 is a schematic structural diagram of an apparatus for rapidly detecting a target according to an embodiment of the present invention; as shown in fig. 6, the apparatus includes:
the acquisition module 601 is configured to acquire an image to be detected and a detection frame size for the image to be detected, where the detection frame size is not larger than the size of the image to be detected;
a merging module 602, configured to, for each feature subset, merge the feature weight of the feature subset with the feature weight of another feature subset corresponding to a ratio according to the ratio between the feature weight of the feature subset and the feature weight of another feature subset except for the feature subset, and update the feature weight of the feature subset and the feature weight of another feature subset corresponding to the ratio into a merged feature weight;
a calculating module 603, configured to obtain feature subsets in a corresponding size range in the image to be detected according to the size of the detection frame, and calculate an integral value of each feature subset in the detection frame area according to a feature weight of each feature subset;
a determining module 604, configured to determine whether an integral value of the feature subset in the detection frame area is greater than a set threshold value;
a first sliding module 605, configured to, in a case that the judgment result of the determining module 604 is yes, slide the detection frame by a first step length according to the sliding direction of the detection frame and trigger the calculating module 603, until the target in the image to be detected is detected;
a second sliding module 606, configured to, in a case that the judgment result of the determining module 604 is no, slide the detection frame by a second step length according to the sliding direction of the detection frame and trigger the calculating module 603, until the target in the image to be detected is detected.
By applying the embodiment shown in fig. 6 of the present invention, for each feature subset, according to the ratio between the feature weight of the feature subset and the feature weights of other feature subsets except for the feature subset, the feature weights of the feature subset and the feature weights of the other feature subsets corresponding to the ratio are combined, and the feature weights of the feature subset and the feature weights of the other feature subsets corresponding to the ratio are updated to the combined feature weights, so that the number of the feature weights used for calculation is reduced, the number of the integral graphs is reduced, and the operation complexity is reduced.
In a specific implementation manner of the embodiment of the present invention, the merging module 602 is configured to:
A: for each feature subset, when the number of distinct feature values of the feature subset is greater than a set number, judging whether the ratio between the feature weight of the feature subset and the feature weight of another feature subset is greater than a first preset threshold value;
B: if so, updating the feature weight of the feature subset and the feature weight of the other feature subset to their average value, and updating the number of distinct feature values of the feature subset;
C: if not, judging whether the number of distinct feature values of the updated feature subset is still greater than the set number; if so, increasing the first preset threshold by a set step length to obtain a second preset threshold, updating the first preset threshold to the second preset threshold, and returning to step A until the number of distinct feature values of the updated feature subset is not greater than the set number; if not, triggering the calculating module 603.
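One possible reading of steps A-C can be sketched as follows. The merge criterion here takes the ratio as max/min and merges a pair of weights when that ratio stays within the current tolerance, widening the tolerance by a set step until the number of distinct weight values no longer exceeds the set number; this threshold handling is an interpretation for illustration, and the patent text itself remains the authority. All names and values are invented:

```python
# Hypothetical sketch of the weight-merging loop of steps A-C:
# similar weights (ratio within the current tolerance) are replaced
# by their average; if no pair qualifies and too many distinct values
# remain, the tolerance is widened by a set step and the scan repeats.

def merge_weights(weights, set_number, tol=1.05, tol_step=0.05):
    weights = list(weights)
    while len(set(weights)) > set_number:       # step A precondition
        merged = False
        for i in range(len(weights)):
            for j in range(i + 1, len(weights)):
                if weights[i] == weights[j]:
                    continue
                ratio = max(weights[i], weights[j]) / min(weights[i], weights[j])
                if ratio < tol:                 # weights close enough
                    avg = (weights[i] + weights[j]) / 2.0   # step B
                    weights[i] = weights[j] = avg
                    merged = True
        if not merged:
            tol += tol_step                     # step C: widen and retry
    return weights

# Four distinct weights collapse into two clusters (near 1.01 and 3.05).
w = merge_weights([1.00, 1.02, 3.0, 3.1], set_number=2)
```

The quadratic pair scan is fine for the handful of weights a Haar stage carries; the point of the sketch is only that fewer distinct weights mean fewer multiplications per window.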
In a specific implementation manner of the embodiment of the present invention, the calculating module 603 is configured to: update the feature value of each feature subset in the image to be detected with a new integral value, which is the product of the merged feature weight and the integral value of each pixel in the complete image; and calculate the integral value of each feature subset in the detection frame area from the product of the feature weight of each feature subset and the new integral values of the corresponding pixel points.
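A minimal sketch of the pre-multiplication described above: the merged feature weight is folded into the integral image once for the whole image, after which each rectangle sum needs only four lookups and additions. The function names and the tiny 2×2 image are illustrative; the patent itself targets a hardware implementation:

```python
# Sketch: fold the merged feature weight into the integral image once,
# so per-window sums over a rectangle need only additions.

def integral_image(img):
    # Standard (h+1) x (w+1) summed-area table with a zero border.
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def weighted_integral(img, weight):
    # Pre-multiply every integral value by the merged weight.
    return [[weight * v for v in row] for row in integral_image(img)]

def rect_sum(ii, x0, y0, x1, y1):
    # Weighted sum over [x0, x1) x [y0, y1): four lookups, additions only.
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]

img = [[1, 2], [3, 4]]
wii = weighted_integral(img, 2.0)
total = rect_sum(wii, 0, 0, 2, 2)   # 2 * (1 + 2 + 3 + 4) = 20.0
```

Because the multiplication by the weight happens once per pixel of the full image instead of once per window position, every subsequent window evaluation is addition-only, which is the saving the text attributes to this step.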
In a specific implementation manner of the embodiment of the present invention, the determining module 604 is configured to:
judging whether the integral value of the feature subset in the detection frame area is greater than the set threshold value by using the formula if(featureValue[node] > th_node[node] × detec_nf), where
if() is a decision function; featureValue[node] is the integral value of the feature subset within the detection frame region; th_node[node] is the set threshold value; detec_nf is the variance value of the detection frame.
In a specific implementation manner of the embodiment of the present invention, the determining module is configured to:
using the formula
if(featureValue[node] > (a × detec_nf) >> b)
to determine whether the integral value of a feature subset within the detection frame area is greater than the set threshold value, wherein
if() is a decision function; featureValue[node] is the integral value of the feature subset within the detection frame region; a and b are integers; detec_nf is the variance value of the detection frame; and node is the index of each feature subset.
The above description is intended to be illustrative of the preferred embodiment of the present invention and should not be taken as limiting the invention, but rather, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.

Claims (8)

1. A method for rapid detection of an object, the method comprising:
1) acquiring an image to be detected and a detection frame size for the image to be detected, wherein the size of the detection frame is not larger than that of the image to be detected;
2) for each Haar feature subset, combining the feature weight of the feature subset with the feature weights of the other feature subsets corresponding to a ratio according to the ratio between the feature weight of the feature subset and the feature weights of the other feature subsets, and updating the feature weight of the feature subset and the feature weights of the other feature subsets corresponding to the ratio to the combined feature weight;
the step 2) comprises the following steps:
A: for each feature subset, when the number of distinct feature values of the feature subset is greater than a set number, judging whether the ratio between the feature weight of the feature subset and the feature weight of another feature subset is greater than a first preset threshold value;
B: if so, updating the feature weight of the feature subset and the feature weight of the other feature subset to their average value, and updating the number of distinct feature values of the feature subset;
C: if not, judging whether the number of distinct feature values of the updated feature subset is still greater than the set number; if so, increasing the first preset threshold by a set step length to obtain a second preset threshold, updating the first preset threshold to the second preset threshold, and returning to step A until the number of distinct feature values of the updated feature subset is not greater than the set number; if not, executing step 3);
3) Acquiring feature subsets in a corresponding size range in the image to be detected according to the size of the detection frame, and calculating an integral value of each feature subset in the detection frame area according to the feature weight of each feature subset;
4) Judging whether the integral value of the feature subset in the detection frame area is larger than a set threshold value or not;
5) If yes, sliding the detection frame by a first step length according to the sliding direction of the detection frame, and returning to execute the step 3) until the target in the image to be detected is detected;
6) And if not, sliding the detection frame by a second step length according to the sliding direction of the detection frame and returning to execute the step 3) until the target in the image to be detected is detected.
2. The method for rapidly detecting the target according to claim 1, wherein the calculating the integral value of each feature subset in the detection frame area according to the combined feature weight comprises:
updating the characteristic value of each characteristic subset in the image to be detected into a new integral value which is the product of the combined characteristic weight and the integral value of each pixel in the complete image; and calculating the integral value of each characteristic subset in the detection frame area according to the product of the characteristic weight of each characteristic subset and the new integral value of the corresponding pixel point.
3. The method of claim 1, wherein the determining whether the integral value of the subset of features in the detection frame area is greater than a set threshold value comprises:
judging whether the integral value of the feature subset in the detection frame area is greater than the set threshold value by using the formula if(featureValue[node] > th_node[node] × detec_nf), where
if() is a decision function; featureValue[node] is the integral value of the feature subset within the detection frame region; th_node[node] is the set threshold value; detec_nf is the variance value of the detection frame, and node corresponds to the subscript of each feature subset.
4. The method of claim 3, wherein the determining whether the integral value of the subset of features in the detection frame area is greater than a set threshold value comprises:
using the formula
if(featureValue[node] > (a × detec_nf) >> b)
to determine whether the integral value of a feature subset within the detection frame area is greater than the set threshold value, wherein
if() is a decision function; featureValue[node] is the integral value of the feature subset in the detection frame area; a and b are integers; detec_nf is the variance value of the detection frame, and node corresponds to the subscript of each feature subset.
5. An apparatus for rapid detection of an object, the apparatus comprising:
the acquisition module is used for acquiring an image to be detected and the size of a detection frame aiming at the image to be detected, and the size of the detection frame is not more than the size of the image to be detected;
a merging module, configured to, for each feature subset, merge the feature weight of the feature subset with the feature weights of the other feature subsets corresponding to a ratio according to the ratio between the feature weight of the feature subset and the feature weights of the other feature subsets, and update the feature weight of the feature subset and the feature weights of the other feature subsets corresponding to the ratio to the merged feature weight;
the merging module is configured to:
A: for each feature subset, when the number of distinct feature values of the feature subset is greater than a set number, judging whether the ratio between the feature weight of the feature subset and the feature weight of another feature subset is greater than a first preset threshold value;
B: if so, updating the feature weight of the feature subset and the feature weight of the other feature subset to their average value, and updating the number of distinct feature values of the feature subset;
C: if not, judging whether the number of distinct feature values of the updated feature subset is still greater than the set number; if so, increasing the first preset threshold by a set step length to obtain a second preset threshold, updating the first preset threshold to the second preset threshold, and returning to step A until the number of distinct feature values of the updated feature subset is not greater than the set number; if not, triggering the calculation module;
the calculation module is used for acquiring the feature subsets in the corresponding size range in the image to be detected according to the size of the detection frame, and calculating the integral value of each feature subset in the complete image area according to the feature weight of each feature subset, namely the product of the feature weight and the original integral value;
the judging module is used for judging whether the integral value of the feature subset in the detection frame area is larger than a set threshold value or not;
the first sliding module is used for sliding the detection frame by a first step length according to the sliding direction of the detection frame and triggering the calculation module until the target in the image to be detected is detected;
and the second sliding module is used for sliding the detection frame by a second step length according to the sliding direction of the detection frame under the condition that the judgment result of the judgment module is negative, and triggering the calculation module until the target in the image to be detected is detected.
6. The apparatus for rapid detection of an object according to claim 5, wherein the calculation module is configured to:
updating the characteristic value of each characteristic subset in the image to be detected into a new integral value which is the product of the combined characteristic weight and the integral value of each pixel in the complete image; and calculating the integral value of each characteristic subset in the detection frame area according to the product of the characteristic weight of each characteristic subset and the new integral value of the corresponding pixel point.
7. The apparatus for rapid detection of an object according to claim 5, wherein the determining module is configured to:
judging whether the integral value of the feature subset in the detection frame area is greater than the set threshold value by using the formula if(featureValue[node] > th_node[node] × detec_nf), where
if() is a decision function; featureValue[node] is the integral value of the feature subset in the detection frame area; th_node[node] is the set threshold value; detec_nf is the variance value of the detection frame.
8. The apparatus for rapid detection of an object according to claim 7, wherein the determining module is configured to:
using the formula
if(featureValue[node] > (a × detec_nf) >> b)
to determine whether the integral value of a feature subset within the detection frame area is greater than the set threshold value, wherein
if() is a decision function; featureValue[node] is the integral value of the feature subset in the detection frame area; a and b are integers; detec_nf is the variance value of the detection frame, and node corresponds to the subscript of each feature subset.
CN201910174066.5A 2019-03-08 2019-03-08 Target rapid detection method and device Active CN109919228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910174066.5A CN109919228B (en) 2019-03-08 2019-03-08 Target rapid detection method and device


Publications (2)

Publication Number Publication Date
CN109919228A (en) 2019-06-21
CN109919228B (en) 2023-04-11

Family

ID=66963847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910174066.5A Active CN109919228B (en) 2019-03-08 2019-03-08 Target rapid detection method and device

Country Status (1)

Country Link
CN (1) CN109919228B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108205687A (en) * 2018-02-01 2018-06-26 通号通信信息集团有限公司 Based on focus mechanism positioning loss calculation method and system in object detection system
CN108470194A (en) * 2018-04-04 2018-08-31 北京环境特性研究所 A kind of Feature Selection method and device
CN109241969A (en) * 2018-09-26 2019-01-18 旺微科技(上海)有限公司 A kind of multi-target detection method and detection system
CN109409360A (en) * 2018-09-26 2019-03-01 旺微科技(上海)有限公司 A kind of multiple dimensioned image object detection method and detection system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant