CN106529412A - Intelligent video recognition method and system - Google Patents

Intelligent video recognition method and system

Info

Publication number
CN106529412A
Authority
CN
China
Prior art keywords
video
image
video image
value
intelligent video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610892187.XA
Other languages
Chinese (zh)
Inventor
袁真
李首峰
陈放
王亚博
孟欣欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guozhengtong Polytron Technologies Inc
Original Assignee
Guozhengtong Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guozhengtong Polytron Technologies Inc filed Critical Guozhengtong Polytron Technologies Inc
Priority to CN201610892187.XA priority Critical patent/CN106529412A/en
Publication of CN106529412A publication Critical patent/CN106529412A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2132 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on discrimination criteria, e.g. discriminant analysis
    • G06F 18/21322 Rendering the within-class scatter matrix non-singular
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24147 Distances to closest patterns, e.g. nearest neighbour classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/48 Matching video sequences
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2132 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on discrimination criteria, e.g. discriminant analysis
    • G06F 18/21322 Rendering the within-class scatter matrix non-singular
    • G06F 18/21324 Rendering the within-class scatter matrix non-singular involving projections, e.g. Fisherface techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/467 Encoded features or binary features, e.g. local binary patterns [LBP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention proposes an intelligent video recognition system comprising: video capture equipment for collecting a video image of a target object; a video image positioning module for obtaining the video image, modeling the position of the subject in the video from the facial features to the outline, and determining that the position of the target object matches the position in the image to be compared; an image preprocessing module for preprocessing the image data once the position in the video is determined, adjusting the image data and optimizing the comparison result; an image feature extraction module for extracting the data required by the algorithm from the preprocessed image; a retrieval database for comparing the extracted data with the data to be authenticated in the database; and a result display module for feeding back the processing result of the system so that further processing is carried out according to the result.

Description

Intelligent video recognition method and system
Technical field
The present invention relates to the field of video processing, and more particularly to an intelligent video recognition method and system that use computer graphics/image processing and pattern recognition technology.
Background art
Video recognition is an identification technology based on the characteristic information of video images. In recent years it has found application in a number of fields; for example, video recognition can be applied to access control systems, attendance systems, smartphones and the like.
Video recognition technology mainly involves two steps: extracting a feature vector from the video image to be recognized, and comparing that feature vector with the feature vectors of the images in a database to obtain the recognition result. The first step directly affects the accuracy of the video recognition result. Many video recognition algorithms exist in the prior art, but none of them can be guaranteed to suit all samples, which limits the accuracy of video recognition.
The local binary pattern (LBP), proposed by Ojala, measures the relative pixel values in a local neighborhood of the image and extracts texture information, and is robust to illumination variation. It is simple to compute, resistant to lighting interference and highly discriminative, and is widely used for face recognition under varying illumination. When the illumination changes drastically, however, LBP cannot represent the magnitude of the change, so its reliability drops sharply. On this basis Tan et al. proposed the local ternary pattern (LTP).
The LTP operator improves on the LBP operator by encoding with three values, which improves the classification capability of the whole feature space. Within a 3 × 3 window, the pixels in the neighborhood are compared with the center pixel g_c using a user-defined threshold t: the pixel difference is quantized against the interval [-t, +t] centered on 0, a difference above the interval is coded +1, a difference below the interval is coded -1, and a difference within the interval is coded 0. This produces an 8-digit signed ternary number for the neighborhood; each position is then given a different weight, and the weighted sum gives the local ternary pattern (LTP) feature value of the window, which describes the texture information of the region.
Through the study and improvement of LBP, LTP solves the recognition problem under drastic illumination changes and is robust to rapidly changing imaging conditions (such as noise). However, LTP uses a user-defined threshold: the optimal threshold must be found and set from prior knowledge, which affects timeliness, and a single threshold cannot account for the differences between samples, so it also lacks generality. A new operator is therefore needed to improve the recognition rate on video images, and optimization of the threshold is a promising direction.
In scenes such as government affairs, people's livelihood, environment, public safety, urban services, commercial activities, shopping malls, banks, customs and military restricted zones, dynamic recognition of people or backgrounds is an inherent requirement of smart city construction.
Video recognition technology is a composite technology spanning computer image processing, graphics, pattern recognition, computer vision and cognitive science. Owing to the complexity of its data and the difficulty of acquisition and processing, it still falls far short of the requirements of many applications.
Summary of the invention
The object of the present invention is achieved through the following technical solutions.
The present invention proposes a video recognition system that includes the following functional modules:
a video capture device for collecting video images of a target object;
a video image positioning module for modeling, after the video image is obtained, the position of the subject in the video from the facial features to the outline, and determining that the position of the target object matches the position in the image to be compared;
an image preprocessing module for preprocessing the image data once the position in the video has been determined, adjusting the image data and optimizing the comparison result;
an image feature extraction module for extracting the data required by the algorithm from the preprocessed image;
a retrieval database for obtaining a video image training set and comparing the extracted data with the data of the video image training set in the database that needs to be authenticated; and
a result display module for feeding back the processing result of the system, so that further processing is carried out according to the result.
According to one aspect of the present invention, the retrieval database is further used to determine the training set of video images and to obtain a group of orthogonal eigenvectors from the overall complex scatter matrix by singular value decomposition.
According to one aspect of the present invention, the image feature extraction module is further used to extract, for an arbitrary video image I_t to be recognized, its features by the formula y_t = E^T I_t.
The present invention also proposes a video recognition method, characterized by comprising the following steps:
step 1: determining the transform-space projection feature values in the video image to be recognized;
step 2: determining upper-pattern feature values and lower-pattern feature values from the transform-space projection feature values;
step 3: determining an upper-pattern feature face and a lower-pattern feature face, where the upper-pattern feature face is composed of the upper-pattern feature values of all pixels and the lower-pattern feature face is composed of the lower-pattern feature values of all pixels;
step 4: determining the training set of video images and obtaining a group of orthogonal eigenvectors from the overall complex scatter matrix by singular value decomposition;
step 5: for an arbitrary video image I_t to be recognized, extracting its features by the formula y_t = E^T I_t;
step 6: performing video recognition with a nearest-neighbor classifier based on Euclidean distance; if the recognition result equals the minimum value, the video image I_t to be recognized and the training image I_r belong to the same class of objects.
According to one aspect of the present invention, in step 4 the representation of an image in the high-dimensional space is converted into its feature data in the corresponding low-dimensional space, thereby realizing the extraction of image features.
Description of the drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The accompanying drawings are provided only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Figure 1 shows a schematic diagram of the intelligent video recognition system according to an embodiment of the present invention.
Figure 2 shows a schematic diagram of the intelligent video recognition method according to an embodiment of the present invention.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope can be conveyed completely to those skilled in the art.
Video recognition is a biometric identification technology that uses the visual feature information of video images to verify identity. Compared with other traditional biometric technologies, video recognition is easy to capture, convenient, fast and friendly to interact with, and it is gradually being accepted by the public.
Several video recognition algorithms are described below:
1. Template matching (correlation) algorithm: whether video images are similar is measured directly by the distance between the position vectors obtained from the video images. Put simply, the most basic and intuitive features of the video (such as the ears, nose and face shape) are captured and compared for similarity; this is the baseline algorithm of video recognition. The algorithm is fast and uses little system memory, but its accuracy is low, so it is not suitable for systems with high recognition requirements.
2. Eigenface algorithm: optimizing the eigenface approach on the basis of principal component analysis (PCA) makes the algorithm more effective; it is the baseline recognition algorithm in video image comparison tests.
3. Fisherface algorithm: linear discriminant analysis extracts the most discriminative low-dimensional features from the high-dimensional space. After projection into the low-dimensional space, samples of different classes are separated as much as possible while samples of the same class are kept as compact as possible; in other words, the between-class scatter should be as large as possible and the within-class scatter as small as possible.
4. Gabor-feature-based algorithms: the eigenface and Fisherface algorithms analyze features using the gray levels of the image, whereas Gabor-feature-based algorithms can analyze the image gray levels from multiple angles, simulate the receptive-field profile of mammalian cortical cells, and adapt better to illumination than the eigenface and Fisherface algorithms. The choice of video recognition algorithm must take into account objective conditions such as the video data acquisition environment, the acquisition equipment and the optimization of image data processing; not every algorithm suits the system to be built.
To facilitate the description of the intelligent video recognition method and device provided by the embodiments of the present invention, the scenes and the technical groundwork involved in the video recognition algorithms of the embodiments are first introduced briefly.
Image acquisition can use a camera as the image sensor: a face image is captured by the camera or selected directly from the hard disk, and the face image data source is then stored in the database.
Image preprocessing can include grayscale conversion, binarization and noise reduction; a method based on Adaboost is then used to detect and locate the video, and if a valid video is detected it is stored in the database.
In the grayscale conversion, the gray level of each point is represented by a value from 0 to 255, where 0 is black and 255 is white. The R, G and B components of the RGB color are processed in turn by a direct linear transformation, and RGB is converted to a grayscale image by the formula: Gray = 0.299*R + 0.587*G + 0.114*B.
Binarization transforms the 0-255 image sequence into a 0-1 image sequence.
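As a minimal illustration of these two preprocessing steps, the following sketch (assuming numpy arrays in the 0-255 range) applies the stated grayscale formula and then binarizes the result; the fixed threshold of 128 is an assumed example value, since the description does not specify one.

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an H x W x 3 RGB array (values 0-255) to grayscale
    using Gray = 0.299*R + 0.587*G + 0.114*B."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def binarize(gray, threshold=128):
    """Map a 0-255 grayscale image to a 0/1 image.
    The threshold of 128 is an assumed example; the description does not fix it."""
    return (gray >= threshold).astype(np.uint8)

# Example with a random 4x4 RGB patch as placeholder data.
rgb = np.random.randint(0, 256, size=(4, 4, 3))
gray = rgb_to_gray(rgb)
binary = binarize(gray)
```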
The LBP operator describes the texture characteristics of an image within its gray-level range and is mainly used to help extract the contrast features of a local region of the image. The LBP operator takes the gray value of the center pixel as a threshold and samples the neighborhood of the center pixel. For example, with a 3 × 3 neighborhood, the gray values of the 8 pixels adjacent to the center pixel are compared with the threshold (i.e. the gray value of the center pixel): if a neighboring pixel's gray value is greater than the threshold, that pixel position is marked 1, otherwise 0. This produces 8 binary digits, which are converted to a decimal number that serves as the LBP feature value of the center pixel; since an 8-bit binary number converts to a decimal number in the range 0-255, the feature value also ranges from 0 to 255. As an example of computing an LBP feature value, suppose the gray value of the center pixel is 9; comparing the neighborhood gray values with the center gray value gives the 8 binary digits 01000111, which convert to the decimal number 71 as the LBP feature value.
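A minimal sketch of the LBP coding just described for a single 3 × 3 neighborhood; reading the 8 bits clockwise from the top-left neighbor is an assumed ordering, since the description does not fix the sampling order, and the example patch values are made up for illustration.

```python
import numpy as np

def lbp_value(patch):
    """LBP feature value of the center pixel of a 3x3 patch.
    Neighbors greater than the center gray value are coded 1, otherwise 0;
    the 8 bits are read clockwise from the top-left neighbor (assumed order)."""
    center = patch[1, 1]
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[r, c] > center else 0 for r, c in coords]
    # Convert the 8 binary digits to a decimal value in 0-255.
    return int("".join(str(b) for b in bits), 2)

patch = np.array([[5, 12, 3],
                  [7,  9, 11],
                  [10, 8, 15]])
print(lbp_value(patch))  # a value in the range 0-255
```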
However, the LBP operator only compares the ordering of gray values and ignores the contrast between pixels: when the pixel gray values in the neighborhood change while their ordering is preserved, the LBP code remains unchanged. The LBP operator therefore cannot describe the difference before and after a nonlinear change, and important local texture features may ultimately be discarded.
The LTP operator improves on the LBP operator by encoding with three values, which improves the classification capability of the whole feature space. A user-defined threshold t improves robustness to noise and, to some extent, balances the highlights and bright regions caused by strong illumination. The specific LTP operation is as follows: when the difference between the gray value of a neighborhood pixel and the gray value of the center pixel is greater than or equal to t, the pixel position is marked +1; when the difference is less than -t, the pixel position is marked -1; otherwise it is marked 0. To simplify the calculation, the LTP encoding can be decomposed into a positive part and a negative part, and the LBP coding method is applied to each part. In the decomposition (the computation process is shown in Figure 2), the positions coded "+1" are set to "1" and all others to "0", and LBP-style coding yields the upper pattern feature; the positions coded "-1" are set to "1" and all others to "0", and LBP-style coding yields the lower pattern feature. After this LTP feature extraction and transformation, the representation and classification performance of the samples in the whole feature space are further strengthened and improved.
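A minimal sketch of the fixed-threshold LTP coding and its decomposition into upper and lower LBP-style patterns as described above; the clockwise bit ordering, the example threshold t = 5 and the patch values are assumptions made for illustration.

```python
import numpy as np

# Clockwise neighbor coordinates of a 3x3 patch, starting at the top-left
# corner (an assumed sampling order).
NEIGHBORS = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def ltp_codes(patch, t):
    """Ternary codes of the 8 neighbors of the center pixel of a 3x3 patch:
    +1 if neighbor - center >= t, -1 if neighbor - center < -t, else 0."""
    center = int(patch[1, 1])
    codes = []
    for r, c in NEIGHBORS:
        diff = int(patch[r, c]) - center
        codes.append(1 if diff >= t else (-1 if diff < -t else 0))
    return codes

def upper_lower_values(codes):
    """Split the ternary codes into upper and lower binary patterns and
    convert each to a decimal value in 0-255, as with LBP."""
    upper = int("".join("1" if c == 1 else "0" for c in codes), 2)
    lower = int("".join("1" if c == -1 else "0" for c in codes), 2)
    return upper, lower

patch = np.array([[5, 12, 3],
                  [7,  9, 11],
                  [10, 8, 15]])
upper, lower = upper_lower_values(ltp_codes(patch, t=5))
```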
A video recognition system requires well-developed software and hardware to realize video recognition. According to one embodiment of the present invention, the intelligent video recognition system is configured as shown in Figure 1 and includes the following parts:
Video capture device: collects face images of the target object. The object is generally required not to wear accessories (such as glasses or a hat) so as to guarantee the integrity of the captured image. The captured video must meet the objective conditions required by the system, such as illumination, shooting angle and background.
Video image positioning module: after the image is obtained, models the position of the subject in the video from the facial features to the outline, and determines that the position of the target object matches the position in the image to be compared.
Image preprocessing module: after the position in the video is determined, preprocesses the image data, adjusts the image data and optimizes the comparison result.
Image feature extraction module: extracts the data required by the algorithm from the preprocessed image.
Retrieval database: obtains the video image training set and compares the extracted data with the data of the video image training set in the database that needs to be authenticated.
Result display module: feeds back the processing result of the system, so that further processing is carried out according to the result.
Suppose the training set of video images is C. C has m video objects, and each object has n video images. Each image contains both depth data (denoted depth) and gray-level data (denoted intn). With the imaginary unit denoted i, the k-th (1 ≤ k ≤ n × m) high-definition video image I_k can be expressed as:
I_k = depth_k + intn_k × i   (1)
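A minimal sketch of forming the complex-valued image of formula (1), assuming the depth map and gray-level image are available as equally sized numpy arrays (the example data here are random placeholders standing in for real capture data).

```python
import numpy as np

# Assumed placeholder data: a depth map and a gray-level image of equal size.
depth = np.random.rand(4, 4)              # depth data
intn = np.random.randint(0, 256, (4, 4))  # gray-level data

# Formula (1): the complex-valued image combines depth (real part)
# and gray-level intensity (imaginary part).
I_k = depth + 1j * intn
```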
According to one embodiment of the present invention, a video detection method for dynamic recognition is proposed.
First, the mean of the images of the whole training set over the complex field, denoted Ī, can be expressed as:
Ī = (1/(n × m)) Σ_{p=1}^{m} Σ_{q=1}^{n} I_{p_q}   (2)
where I_{p_q} denotes the q-th image of the p-th object in the training set.
The overall complex scatter matrix S of the video training set C is then:
S = (1/(n × m)) Σ_{k=1}^{n×m} (I_k − Ī)(I_k − Ī)^H   (3)
where I_k is the k-th training image, Ī is the mean of the training samples, and n × m is the size of the training set.
From the overall complex scatter matrix, a group of orthogonal eigenvectors u_1, u_2, ..., u_t and their corresponding eigenvalues λ_1, λ_2, ..., λ_t, with λ_1 ≥ λ_2 ≥ ... ≥ λ_t, are obtained by the method of singular value decomposition. The eigenvectors corresponding to the first d (d < t) non-zero eigenvalues are taken as an orthogonal basis; d is called the feature dimension N. Arranging the orthogonal basis as image matrices yields images called eigenfaces. In the eigenface subspace E, a video sample I_k can be projected as y_k. By this method, the representation of an image in the high-dimensional space is converted into its feature data in the corresponding low-dimensional space, realizing the extraction of image features:
y_k = E^T I_k   (4)
After this feature extraction, each training video image corresponds to a d × 1 column vector that stores its feature information. The training set has m × n images, so finally the matrix Y = {y_1, y_2, ..., y_{m×n}} is obtained, which stores the feature information of all the training images.
The features of an arbitrary video image I_t to be recognized can also be extracted by formula (4) and saved as y_t. With the nearest-neighbor classifier based on Euclidean distance, define:
Dist(y_t, y_c) = ||y_t − y_c||   (5)
If the following condition is satisfied:
Dist(y_t, y_r) = min[Dist(y_t, y_c)], y_c ∈ Y   (6)
then y_t and y_r belong to the same class of objects, i.e. the video image I_t to be recognized and the training image I_r belong to the same class of objects.
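The following sketch illustrates formulas (2)-(6) on assumed toy data, with each image flattened into a complex row vector. Computing the eigenvectors of the scatter matrix through an SVD of the centered data matrix, centering the images before projection and using the conjugate transpose for the complex projection are implementation assumptions; the description leaves these details open.

```python
import numpy as np

def train_eigenfaces(images, d):
    """images: (n_images, n_pixels) complex array, one flattened image per row.
    Returns the mean image and the projection basis E (n_pixels x d)."""
    mean = images.mean(axis=0)                      # formula (2)
    centered = images - mean
    # Singular vectors of the centered data give the eigenvectors of the
    # overall complex scatter matrix, formula (3), without forming it explicitly.
    u, s, vh = np.linalg.svd(centered, full_matrices=False)
    E = vh[:d].conj().T                             # first d eigenfaces as columns
    return mean, E

def project(E, image, mean):
    """Formula (4): y = E^T I, applied here to the centered image (assumption)."""
    return E.conj().T @ (image - mean)

def recognize(E, mean, train_images, labels, test_image):
    """Nearest-neighbor classification by Euclidean distance, formulas (5)-(6)."""
    Y = np.array([project(E, img, mean) for img in train_images])
    y_t = project(E, test_image, mean)
    dists = np.linalg.norm(Y - y_t, axis=1)         # Dist(y_t, y_c)
    return labels[int(np.argmin(dists))]            # class of the closest y_r

# Example with assumed toy data: 2 objects x 3 images each, 64 pixels per image.
rng = np.random.default_rng(0)
train = rng.random((6, 64)) + 1j * rng.random((6, 64))
labels = np.array([0, 0, 0, 1, 1, 1])
mean, E = train_eigenfaces(train, d=4)
print(recognize(E, mean, train, labels, train[4]))  # expected: 1
```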
On the basis of the above video detection method for dynamic recognition, according to one embodiment of the present invention an intelligent video recognition method is proposed. As shown in Figure 2, the method comprises the following steps:
Step 1: determine the transform-space projection feature values in the video image to be recognized.
The transform-space projection feature value corresponding to each image is determined by the gray-level difference between each pixel and the pixels in its neighborhood.
Step 2: determine the upper-pattern feature values and the lower-pattern feature values from the transform-space projection feature values.
The video image to be recognized uses a grayscale image. First, the adaptive threshold of each pixel is determined from the gray values of that pixel and of the pixels in its neighborhood in the video image to be recognized; the LTP operator is then applied, using the adaptive threshold of the pixel as the threshold of the LTP operator when the feature value of that pixel is calculated. That is, the LTP adaptive-threshold feature value of each pixel in the video image to be recognized is determined with an LTP operator that uses an adaptive threshold.
In some embodiments of the present invention, determining the LTP adaptive-threshold feature value of each pixel in the video image to be recognized can be implemented as follows:
Traverse each pixel in the video image to be recognized and determine, for the current pixel, the gray-level difference between the gray value of each pixel in its preset neighborhood and the gray value of the current pixel.
Compute the standard deviation of these gray-level differences and use it as the adaptive threshold of the current pixel.
Using the adaptive threshold of the current pixel as the threshold of the LTP operator, determine the LTP feature value of the current pixel with the adaptive-threshold LTP operator; this LTP feature value is the LTP adaptive-threshold feature value of the current pixel.
In general the preset neighborhood can be a 3 × 3 block, which leaves 8 neighboring pixels once the center pixel is removed. The gray-level difference between each of these 8 neighbors and the center pixel is calculated, the standard deviation of this group of 8 differences is computed and used as the adaptive threshold of the current pixel, and the LTP operator is then applied to compute the LTP feature value of the current pixel.
In some embodiments of the present invention, the video recognition method provided by the embodiments of the present invention can further include: preprocessing the image to be recognized and dividing it into equal blocks, so that the local ternary pattern (LTP) operator with adaptive threshold computes the LTP adaptive-threshold feature value of each pixel block by block.
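A minimal sketch of the adaptive-threshold LTP just described: for each pixel, the standard deviation of the eight gray-level differences in its 3 × 3 neighborhood is used as the threshold t of the LTP coding, and the +1/-1 codes are split into upper- and lower-pattern images. Skipping the border pixels, the clockwise bit ordering and processing the whole image instead of equal blocks are simplifying assumptions.

```python
import numpy as np

# Clockwise offsets around the center pixel, starting at the top-left (assumed order).
NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def adaptive_ltp(gray):
    """Return the upper- and lower-pattern images of an adaptive-threshold LTP.
    For each interior pixel, the threshold is the standard deviation of the
    gray-level differences between its 8 neighbors and itself."""
    h, w = gray.shape
    upper = np.zeros((h, w), dtype=np.uint8)
    lower = np.zeros((h, w), dtype=np.uint8)
    g = gray.astype(np.int32)
    weights = 1 << np.arange(7, -1, -1)    # most significant bit first
    for r in range(1, h - 1):              # border pixels skipped (assumption)
        for c in range(1, w - 1):
            diffs = np.array([g[r + dr, c + dc] - g[r, c] for dr, dc in NEIGHBORS])
            t = diffs.std()                 # adaptive threshold of this pixel
            up_bits = (diffs >= t).astype(np.uint8)
            low_bits = (diffs < -t).astype(np.uint8)
            upper[r, c] = int((up_bits * weights).sum())
            lower[r, c] = int((low_bits * weights).sum())
    return upper, lower

gray = np.random.randint(0, 256, (8, 8))
upper_face, lower_face = adaptive_ltp(gray)
```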
Step 3: determine the upper-pattern feature face and the lower-pattern feature face, where the upper-pattern feature face is composed of the upper-pattern feature values of all pixels and the lower-pattern feature face is composed of the lower-pattern feature values of all pixels.
The value range of the upper-pattern feature value of a pixel is 0-255, and the value range of the lower-pattern feature value is also 0-255. Therefore, by replacing the gray value of each pixel with the corresponding upper-pattern feature value or lower-pattern feature value, an upper-pattern feature face image and a lower-pattern feature face image can be determined respectively. A video image to be recognized can thus be converted into an upper-pattern feature face and a lower-pattern feature face.
Step 4: determine the training set of video images and obtain a group of orthogonal eigenvectors from the overall complex scatter matrix by singular value decomposition.
For the orthogonal eigenvectors obtained above, each training video image corresponds to a d × 1 column vector that stores its feature information.
Step 5: for an arbitrary video image I_t to be recognized, extract its features by the formula y_t = E^T I_t.
Step 6: perform video recognition with the nearest-neighbor classifier based on Euclidean distance; if the recognition result equals the minimum value, the video image I_t to be recognized and the training image I_r belong to the same class of objects.
Most current video recognition is static recognition, that is, a person must stand in a fixed position to be recognized. Such recognition technology suffers from slow recognition speed and a narrow range of use, and cannot meet the requirements of society on many important occasions. The dynamic video recognition according to the embodiments of the present invention achieves the technical effect that a person simply walks past the camera and the video image captured at random is recognized quickly.
With the video recognition method of the present invention, the object to be recognized can be quickly identified among multiple targets in a dynamic video image.
The above are only preferred specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily occur to those familiar with the technical field within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the protection scope of the claims.

Claims (7)

1. An intelligent video recognition system, characterized in that the intelligent video recognition system comprises:
a video capture device for collecting video images of a target object;
a video image positioning module for modeling, after the video image is obtained, the position of the subject in the video from the facial features to the outline, and determining that the position of the target object matches the position in the image to be compared;
an image preprocessing module for preprocessing the image data once the position in the video has been determined, adjusting the image data and optimizing the comparison result;
an image feature extraction module for extracting the data required by the algorithm from the preprocessed image;
a retrieval database for obtaining a video image training set and comparing the extracted data with the data of the video image training set in the database that needs to be authenticated; and
a result display module for feeding back the processing result of the system, so that further processing is carried out according to the result.
2. The intelligent video recognition system according to claim 1, characterized in that:
the retrieval database is further used to determine the training set of video images and to obtain a group of orthogonal eigenvectors from the overall complex scatter matrix by singular value decomposition.
3. The intelligent video recognition system according to claim 1, characterized in that:
the image feature extraction module is further used to extract, for an arbitrary video image I_t to be recognized, its features by the formula y_t = E^T I_t.
4. The intelligent video recognition system according to claim 1, characterized in that:
the intelligent video recognition system is applied to the recognition of objects in scenes such as shopping malls, banks, customs and military restricted zones.
5. A method of performing intelligent video recognition using the intelligent video recognition system according to claim 1, characterized by comprising the following steps:
step 1: determining the transform-space projection feature values in the video image to be recognized;
step 2: determining upper-pattern feature values and lower-pattern feature values from the transform-space projection feature values;
step 3: determining an upper-pattern feature face and a lower-pattern feature face, where the upper-pattern feature face is composed of the upper-pattern feature values of all pixels and the lower-pattern feature face is composed of the lower-pattern feature values of all pixels;
step 4: determining the training set of video images and obtaining a group of orthogonal eigenvectors from the overall complex scatter matrix by singular value decomposition;
step 5: for an arbitrary video image I_t to be recognized, extracting its features by the formula y_t = E^T I_t;
step 6: performing video recognition with a nearest-neighbor classifier based on Euclidean distance; if the recognition result equals the minimum value, the video image I_t to be recognized and the training image I_r belong to the same class of objects.
6. The intelligent video recognition method according to claim 5, characterized in that:
in step 4, the representation of an image in the high-dimensional space is converted into its feature data in the corresponding low-dimensional space, thereby realizing the extraction of image features.
7. The intelligent video recognition method according to claim 5, characterized in that:
the intelligent video recognition method is applied to the recognition of objects in scenes such as shopping malls, banks, customs and military restricted zones.
CN201610892187.XA 2016-10-12 2016-10-12 Intelligent video recognition method and system Pending CN106529412A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610892187.XA CN106529412A (en) 2016-10-12 2016-10-12 Intelligent video recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610892187.XA CN106529412A (en) 2016-10-12 2016-10-12 Intelligent video recognition method and system

Publications (1)

Publication Number Publication Date
CN106529412A true CN106529412A (en) 2017-03-22

Family

ID=58331646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610892187.XA Pending CN106529412A (en) 2016-10-12 2016-10-12 Intelligent video recognition method and system

Country Status (1)

Country Link
CN (1) CN106529412A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112422909A (en) * 2020-11-09 2021-02-26 安徽数据堂科技有限公司 Video behavior analysis management system based on artificial intelligence

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102163283A (en) * 2011-05-25 2011-08-24 电子科技大学 Method for extracting face characteristic based on local three-value mode
CN103927527A (en) * 2014-04-30 2014-07-16 长安大学 Human face feature extraction method based on single training sample

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102163283A (en) * 2011-05-25 2011-08-24 电子科技大学 Method for extracting face characteristic based on local three-value mode
CN103927527A (en) * 2014-04-30 2014-07-16 长安大学 Human face feature extraction method based on single training sample

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yao Chengtian et al., "An improved face recognition method based on local ternary patterns", Journal of China Jiliang University *
Luo Xin et al., "Matlab implementation of face recognition based on the PCA algorithm", Heilongjiang Science and Technology Information *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112422909A (en) * 2020-11-09 2021-02-26 安徽数据堂科技有限公司 Video behavior analysis management system based on artificial intelligence
CN112422909B (en) * 2020-11-09 2022-10-14 安徽数据堂科技有限公司 Video behavior analysis management system based on artificial intelligence

Similar Documents

Publication Publication Date Title
KR101185525B1 (en) Automatic biometric identification based on face recognition and support vector machines
Qin et al. Deep representation for finger-vein image-quality assessment
CN106845328B (en) A kind of Intelligent human-face recognition methods and system based on dual camera
CN109255289B (en) Cross-aging face recognition method based on unified generation model
CN111126240B (en) Three-channel feature fusion face recognition method
CN102982322A (en) Face recognition method based on PCA (principal component analysis) image reconstruction and LDA (linear discriminant analysis)
CN102906787A (en) Facial analysis techniques
Bouchaffra et al. Structural hidden Markov models for biometrics: Fusion of face and fingerprint
CN108108760A (en) A kind of fast human face recognition
CN111274883B (en) Synthetic sketch face recognition method based on multi-scale HOG features and deep features
CN111832405A (en) Face recognition method based on HOG and depth residual error network
CN109977887A (en) A kind of face identification method of anti-age interference
CN109325472B (en) Face living body detection method based on depth information
CN107220598A (en) Iris Texture Classification based on deep learning feature and Fisher Vector encoding models
CN110598574A (en) Intelligent face monitoring and identifying method and system
Warrell et al. Labelfaces: Parsing facial features by multiclass labeling with an epitome prior
CN106548130A (en) A kind of video image is extracted and recognition methods and system
CN103942545A (en) Method and device for identifying faces based on bidirectional compressed data space dimension reduction
CN113450369A (en) Classroom analysis system and method based on face recognition technology
Pei et al. Convolutional neural networks for class attendance
CN106529412A (en) Intelligent video recognition method and system
Scherhag Face morphing and morphing attack detection
CN113269136B (en) Off-line signature verification method based on triplet loss
CN114519897A (en) Human face in-vivo detection method based on color space fusion and recurrent neural network
Vivekanandam et al. Face recognition from video frames using hidden markov model classification model based on modified random feature extraction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20170322