CN105426893A - Local target appearance image characteristic description method - Google Patents
- Publication number: CN105426893A
- Application number: CN201510735126.8A
- Authority
- CN
- China
- Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a method for describing local image features of a target's appearance. The method comprises the following steps. Step 1: an image region R is divided into a background image region and a target image region, and a feature histogram is computed for each region; the feature histograms are a gray-level histogram H(n), a gradient-orientation histogram H(n), and a texture histogram H(n). Step 2: each type of feature histogram obtained in step 1 is normalized to obtain its class-conditional probability density distribution; the distribution for the background image region is p_o^f(n), and the distribution for the target image region is p_b^f(n). Step 3: using p_o^f(n) and p_b^f(n) from step 2, the likelihood L^f(n) of each feature is computed according to formula (1) given in the specification.
Description
Technical field
The present invention belongs to the field of image processing, and specifically relates to a method for describing local image features of a target's appearance.
Background technology
Computer vision and image processing techniques make it possible to process video images in real time and to detect and track targets of interest. An image contains many kinds of feature information about a target, such as its shape, size, pose, gray level, texture, and motion. As the scene changes and the target moves, the image features of the target can vary greatly, so target detection based on a single feature is often ineffective. For example, with cross-correlation template matching, recognition fails when the object or background changes substantially, and an improperly updated template causes drift, which also leads to recognition failure; the root cause is a decline in the discriminative power of the feature. Many target feature extraction methods exist at present, but each is subject to restrictive conditions, and in changeable application environments they often perform poorly because the extracted features are not distinctive.
Summary of the invention
To solve the above problems, the object of the present invention is to provide a method for describing local image features of a target's appearance.

To achieve this object, the present invention adopts the following technical solution. A method for describing local image features of a target's appearance is characterized by comprising the following steps:

1. An image region R is divided into a background image region and a target image region, and a feature histogram is computed for each region; the feature histograms are a gray-level histogram H(n), a gradient-orientation histogram H(n), and a texture histogram H(n).

2. Each type of feature histogram obtained in step 1 is normalized to obtain its class-conditional probability density distribution; the distribution for the background image region is p_o^f(n), and the distribution for the target image region is p_b^f(n).

3. Using p_o^f(n) and p_b^f(n) from step 2, the likelihood L^f(n) of each feature is computed according to formula (1):

L^f(n) = log[(p_b^f(n) + ε) / (p_o^f(n) + ε)]   (1)

where ε is a positive real number.
To further realize the object of the present invention, the following technical solutions may also be adopted. The gray-level histogram H(n) is computed according to formula (2):

H(n) = Σ_{i∈R} δ(f_i − n)   (2)

In formula (2), i is the pixel index, R is the target or background region, f_i is the gray value of pixel i, and δ is the Dirac delta function (a Kronecker delta over the discrete bins).
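As a concrete illustration of formula (2), the following sketch counts pixels per gray level with a Kronecker-style delta; the function name, the 8-bit bin count, and the NumPy usage are illustrative choices, not part of the patent:

```python
import numpy as np

def gray_histogram(region, n_bins=256):
    """Gray-level histogram per formula (2): H(n) = sum_{i in R} delta(f_i - n).

    `region` is a 2-D array of integer gray values; the Kronecker delta
    simply counts how many pixels i in R take each gray value n.
    """
    hist = np.zeros(n_bins, dtype=np.int64)
    for f in region.ravel():   # i runs over the pixels of region R
        hist[int(f)] += 1      # delta(f_i - n) contributes 1 at n = f_i
    return hist
```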
The gradient-orientation histogram H(n) is computed according to formula (3):

H(n) = Σ_{i∈R} A_i · δ(f_i − n)   (3)

In formula (3), i is the pixel index, R is the target or background region, f_i is the gradient-orientation value of pixel i, A_i is its gradient amplitude, and δ is the Dirac delta function.
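Formula (3) can be sketched the same way, with each pixel voting into its orientation bin with a weight equal to its gradient amplitude A_i; the function name, bin count, and pre-quantized orientation input are assumptions for illustration:

```python
import numpy as np

def orientation_histogram(direction_bins, amplitudes, n_bins=36):
    """Gradient-orientation histogram per formula (3):
    H(n) = sum_{i in R} A_i * delta(f_i - n),
    where f_i is the (already quantized) gradient-orientation bin of
    pixel i and A_i its gradient amplitude, so each pixel contributes
    its amplitude to its orientation bin.
    """
    hist = np.zeros(n_bins, dtype=float)
    for f, a in zip(direction_bins.ravel(), amplitudes.ravel()):
        hist[int(f)] += a   # amplitude-weighted vote at n = f_i
    return hist
```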
The texture histogram H(n) is computed according to formula (4):

H(n) = Σ_{i∈R} δ(f_i − n)   (4)

In formula (4), i is the pixel index, R is the target or background region, f_i is the texture feature LBP_{8,1} value of pixel i, and δ is the Dirac delta function. The texture feature LBP_{8,1} is computed according to formula (5):

LBP_{8,1} = Σ_{i=0}^{7} I(g_i − g_c ≥ 0) · 2^i   (5)

In formula (5), i indexes the 8 radius-1 neighbors of the center pixel c, taken counter-clockwise starting from the left; g_i and g_c are the gray values of pixels i and c; and I is the unit indicator function.
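A minimal sketch of formula (5) follows. The patent only says the 8 radius-1 neighbors are taken counter-clockwise starting from the left, so the exact offset order below is an assumption, as are the function name and the use of plain integer images:

```python
import numpy as np

def lbp_8_1(image, y, x):
    """LBP_{8,1} code of center pixel c = (y, x) per formula (5):
    LBP = sum_{i=0}^{7} I(g_i - g_c >= 0) * 2^i,
    where g_i are the 8 neighbors at radius 1 and I is the unit
    indicator function. Neighbor order (an assumption): start at the
    left neighbor and proceed counter-clockwise.
    """
    gc = image[y, x]
    # (dy, dx) offsets: left, lower-left, below, lower-right,
    # right, upper-right, above, upper-left
    offsets = [(0, -1), (1, -1), (1, 0), (1, 1),
               (0, 1), (-1, 1), (-1, 0), (-1, -1)]
    code = 0
    for i, (dy, dx) in enumerate(offsets):
        gi = image[y + dy, x + dx]
        code += (1 if gi >= gc else 0) << i   # I(g_i - g_c >= 0) * 2^i
    return code
```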
The advantages of the invention are as follows. Target recognition is treated as a two-class classification of object and background pixels; the likelihood constructed from the class-conditional probability densities serves as the confidence for distinguishing object from background pixels, and an optimal classifier trained online separates object and background effectively. Depending on circumstances, a larger positive value of the likelihood L^f(n) indicates that a pixel is more likely target, a smaller negative value indicates that it is more likely background, and a value near zero indicates uncertainty. The present invention treats target tracking as a two-class classification problem and describes the target's appearance with local image features such as gray level, gradient orientation, and local binary patterns (i.e., texture). By constructing the feature likelihood from the class-conditional probability densities as the confidence for distinguishing object from background pixels, the target can be optimally separated from the background under various conditions by adjusting the confidence threshold. Tracking experiments on video show that the feature extraction of the proposed method remains stable under background illumination changes, target pose changes, partial occlusion, and similar conditions.
Brief description of the drawings
Fig. 1 is the original-state image of the embodiment. Fig. 2 shows the combined gray-level histograms of the background image region and the target image region. Fig. 3 shows the combined gradient-orientation histogram curves of the background and target image regions. Fig. 4 shows the combined texture histograms of the background and target image regions. Fig. 5 is the gray-level likelihood distribution of the embodiment; Fig. 6 is the gradient-orientation likelihood distribution; Fig. 7 is the texture likelihood distribution. Fig. 8 is the gray-level likelihood image; Fig. 9 is the gradient-orientation likelihood image; Fig. 10 is the texture likelihood image. Fig. 11 is the feature-fusion image.
Detailed description of the embodiments
Embodiment:
A method for describing local image features of a target's appearance, characterized by comprising the following steps:
1. The image region R is divided into a background image region and a target image region, as shown in Fig. 1; Fig. 1 is the original image, in which A marks the background region and frame B marks the target region. Feature histograms are computed for the background region and the target region respectively: a gray-level histogram H(n), a gradient-orientation histogram H(n), and a texture histogram H(n).
The gray-level histogram H(n) is computed according to formula (2); the results for the background image region and the target image region are plotted in the same histogram, as shown in Fig. 2, where curve A is the gray-level histogram of the background image region and curve B is that of the target image region:

H(n) = Σ_{i∈R} δ(f_i − n)   (2)

In formula (2), i is the pixel index, R is the target or background region, f_i is the gray value of pixel i, and δ is the Dirac delta function. The gradient-orientation histogram H(n) is computed according to formula (3); the results for the background and target image regions are plotted in the same histogram, as shown in Fig. 3, where curve A is the gradient-orientation histogram of the background image region and curve B is that of the target image region:
H(n) = Σ_{i∈R} A_i · δ(f_i − n)   (3)

In formula (3), i is the pixel index, R is the target or background region, f_i is the gradient-orientation value of pixel i, A_i is its gradient amplitude, and δ is the Dirac delta function.
The texture histogram H(n) is computed according to formula (4); the results for the background and target image regions are plotted in the same histogram, as shown in Fig. 4, where curve A is the texture histogram of the background image region and curve B is that of the target image region:

H(n) = Σ_{i∈R} δ(f_i − n)   (4)

In formula (4), i is the pixel index, R is the target or background region, f_i is the texture feature LBP_{8,1} value of pixel i, and δ is the Dirac delta function. The texture feature LBP_{8,1} is computed according to formula (5):

LBP_{8,1} = Σ_{i=0}^{7} I(g_i − g_c ≥ 0) · 2^i   (5)

In formula (5), i indexes the 8 radius-1 neighbors of the center pixel c, taken counter-clockwise starting from the left; g_i and g_c are the gray values of pixels i and c; and I is the unit indicator function.

2. Each type of feature histogram obtained in step 1 is normalized to obtain its class-conditional probability density distribution. The gray-level likelihood distribution is shown in Fig. 5, the gradient-orientation likelihood distribution in Fig. 6, and the texture likelihood distribution in Fig. 7.
The class-conditional probability density distribution of the feature histogram of the background image region is p_o^f(n), and that of the target image region is p_b^f(n).

3. Using p_o^f(n) and p_b^f(n) from step 2, the likelihood L^f(n) of each feature is computed according to formula (1):

L^f(n) = log[(p_b^f(n) + ε) / (p_o^f(n) + ε)]   (1)

In formula (1), ε is a positive real number, chosen arbitrarily small so that the logarithm of zero is never taken. This process converts the multi-modal distributions of the object and background features into a feature likelihood L^f(n) usable for classification: a larger positive value indicates that a pixel is more likely target, a smaller negative value indicates that it is more likely background, and a value near zero indicates uncertainty.
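The likelihood computation of formula (1) can be sketched as below. The original formula image is not reproduced in this text, so this is a standard log-likelihood-ratio form consistent with the surrounding description (ε avoids taking the log of zero; positive values indicate target); the neutral parameter names are illustrative:

```python
import numpy as np

def feature_likelihood(p_target, p_background, eps=1e-6):
    """Per-bin likelihood in the spirit of formula (1):
    L(n) = log((p_target(n) + eps) / (p_background(n) + eps)).
    L(n) > 0 suggests bin n is target-like, L(n) < 0 background-like,
    and L(n) near 0 is uncertain. eps is a small positive real.
    """
    p_t = np.asarray(p_target, dtype=float)
    p_b = np.asarray(p_background, dtype=float)
    return np.log((p_t + eps) / (p_b + eps))
```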
From the likelihood distributions of the different features, the gray-level likelihood image, the gradient-orientation likelihood image, and the texture likelihood image can be obtained, shown in Figs. 8, 9, and 10 respectively, where A marks the background region and frame B marks the target region. As the figures show, the features differ in their ability to discriminate object from background, and in the feature-fusion image (Fig. 11) the object-background distinction is more pronounced.
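The patent does not spell out how the per-feature likelihood images are fused into the image of Fig. 11; one plausible sketch, summing the per-pixel likelihoods (a naive-Bayes-style combination of log-likelihood ratios) and thresholding the resulting confidence, is shown below. The sum rule and the threshold of 0 are assumptions:

```python
import numpy as np

def fuse_likelihood_images(likelihood_images, threshold=0.0):
    """Fuse per-feature likelihood images and threshold the confidence.

    Summing per-pixel log-likelihood ratios is equivalent to multiplying
    the underlying likelihood ratios. Pixels whose fused likelihood
    exceeds `threshold` are labeled target (1), the rest background (0).
    """
    fused = np.sum(np.stack(likelihood_images), axis=0)
    mask = (fused > threshold).astype(np.uint8)
    return mask, fused
```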
The technical solution of the present invention is not restricted to the scope of the specific embodiment described above. Technical content not described in detail herein is known art.
Claims (4)
1. A method for describing local image features of a target's appearance, characterized by comprising the following steps: 1. an image region R is divided into a background image region and a target image region, and a feature histogram is computed for each region, the feature histograms being a gray-level histogram H(n), a gradient-orientation histogram H(n), and a texture histogram H(n); 2. each type of feature histogram obtained in step 1 is normalized to obtain its class-conditional probability density distribution, the distribution for the background image region being p_o^f(n) and the distribution for the target image region being p_b^f(n); 3. using p_o^f(n) and p_b^f(n) from step 2, the likelihood L^f(n) of each feature is computed according to formula (1):

L^f(n) = log[(p_b^f(n) + ε) / (p_o^f(n) + ε)]   (1)

where ε is a positive real number.
2. The method for describing local image features of a target's appearance according to claim 1, characterized in that the gray-level histogram H(n) is computed according to formula (2):

H(n) = Σ_{i∈R} δ(f_i − n)   (2)

In formula (2), i is the pixel index, R is the target or background region, f_i is the gray value of pixel i, and δ is the Dirac delta function.
3. The method for describing local image features of a target's appearance according to claim 1, characterized in that the gradient-orientation histogram H(n) is computed according to formula (3):

H(n) = Σ_{i∈R} A_i · δ(f_i − n)   (3)

In formula (3), i is the pixel index, R is the target or background region, f_i is the gradient-orientation value of pixel i, A_i is its gradient amplitude, and δ is the Dirac delta function.
4. The method for describing local image features of a target's appearance according to claim 1, characterized in that the texture histogram H(n) is computed according to formula (4):

H(n) = Σ_{i∈R} δ(f_i − n)   (4)

In formula (4), i is the pixel index, R is the target or background region, f_i is the texture feature LBP_{8,1} value of pixel i, and δ is the Dirac delta function. The texture feature LBP_{8,1} is computed according to formula (5):

LBP_{8,1} = Σ_{i=0}^{7} I(g_i − g_c ≥ 0) · 2^i   (5)

In formula (5), i indexes the 8 radius-1 neighbors of the center pixel c, taken counter-clockwise starting from the left; g_i and g_c are the gray values of pixels i and c; and I is the unit indicator function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510735126.8A CN105426893A (en) | 2015-11-03 | 2015-11-03 | Local target appearance image characteristic description method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105426893A true CN105426893A (en) | 2016-03-23 |
Family
ID=55505092
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070065008A1 (en) * | 2005-09-21 | 2007-03-22 | Marketech International Corp. | Method and apparatus for dynamic image contrast expansion |
CN104282008A (en) * | 2013-07-01 | 2015-01-14 | 株式会社日立制作所 | Method for performing texture segmentation on image and device thereof |
CN105005786A (en) * | 2015-06-19 | 2015-10-28 | 南京航空航天大学 | Texture image classification method based on BoF and multi-feature fusion |
Non-Patent Citations (1)
Title |
---|
XIAO Xingquan et al.: "Research on the Application of Bayesian Target Tracking Technology in Substation Operation Management and Control", East China Electric Power (《华东电力》) * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | C06 | Publication | |
 | PB01 | Publication | |
 | C10 | Entry into substantive examination | |
 | SE01 | Entry into force of request for substantive examination | |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20160323 |