CN107798331A - Auto-zoom image sequence feature extraction method and device - Google Patents

Auto-zoom image sequence feature extraction method and device

Info

Publication number
CN107798331A
CN107798331A (application CN201711155771.8A)
Authority
CN
China
Prior art keywords
network
training
slowly
layer
sequence
Prior art date
Legal status
Granted
Application number
CN201711155771.8A
Other languages
Chinese (zh)
Other versions
CN107798331B (en)
Inventor
赵彦明
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Publication of CN107798331A
Application granted
Publication of CN107798331B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G06F18/21355Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis nonlinear criteria, e.g. embedding a manifold in a Euclidean space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an auto-zoom image sequence feature extraction method and device: an auto-zoom image sequence is acquired; the control variable of the image-sequence sub-images is initialized; slow-feature extraction is performed on each auto-zoom image and an intra-layer slow-feature forest is constructed, generating the intra-layer slowly varying visual-feature network corresponding to that auto-zoom image; whether the control variable meets the training requirement is judged, and while it does not, intra-layer feature networks continue to be generated; otherwise, inter-layer connections are established among all elements of the slowly varying visual-feature networks of adjacent layers; the inter-layer network is trained according to the training set and a custom training rule, and feature vectors are extracted with a deep-learning pooling method and stored in a feature database; the network is trained for recognition, or new classes are added, according to unknown classes. The invention guarantees nonlinear slow-feature extraction for natural images and the representation of intra-layer and inter-layer slow-variation relations, gives the bases used to extend the algorithm the visual selectivity inherent in natural images together with algorithmic elasticity, and reduces the computational complexity of the original algorithm.

Description

Auto-zoom image sequence feature extraction method and device
Technical field
The present invention relates to the technical field of auto-zoom imaging, and in particular to an auto-zoom image sequence feature extraction method and device.
Background technology
Feature extraction is a concept in computer vision and image processing. It refers to using a computer to extract image information and to decide whether each point or region of an image belongs to an image feature. The result of feature extraction is that the points or regions of the image are divided into different subsets, which often correspond to isolated points, continuous curves, or continuous regions. The extraction and selection of image features is a very important link in image processing and has a significant influence on subsequent image classification. Because image data typically have few samples and high dimensionality, extracting useful information from images requires dimensionality reduction of the image features; feature extraction and feature selection are the most effective dimensionality-reduction methods, whose purpose is to obtain a feature subspace that reflects the essential structure of the data and has higher discriminability.
In existing slow feature analysis algorithms, the linear principal component analysis (PCA) step cannot extract the nonlinear principal-component features of natural images, the polynomial expansion used by the original slow-feature algorithm is computationally expensive, and the original algorithm has difficulty guaranteeing the visual selectivity inherent in natural images and algorithmic elasticity.
Content of the invention
In view of this, the object of the present invention is to provide an auto-zoom image sequence feature extraction method and device that guarantee nonlinear slow-feature extraction for natural images, give the bases used to extend the algorithm the visual selectivity inherent in natural images and algorithmic elasticity, and reduce the computational complexity of the algorithm by means of a Markov chain Monte Carlo (MCMC) algorithm.
In a first aspect, an embodiment of the invention provides an auto-zoom image sequence feature extraction method, comprising:
Generating an auto-zoom image sequence set from the collected auto-zoom images by means of geometric transformations;
Initializing a control variable to control the position of the auto-zoom sequence sub-image;
Performing nonlinear slow-feature extraction on the auto-zoom image designated by the control variable, constructing intra-layer slow features and a slow-feature forest, and generating the intra-layer slowly varying visual-feature network corresponding to that auto-zoom image, wherein the intra-layer slowly varying visual-feature networks comprise the layer-K and layer-K+1 slowly varying visual-feature networks;
Judging whether the control variable is smaller than the size of the auto-zoom image sequence; if so, constructing the intra-layer slowly varying visual-feature network of the next image; otherwise, generating the inter-layer feature network from the intra-layer slowly varying visual-feature networks;
Initializing and establishing the inter-layer connections between all elements of the layer-K and layer-K+1 slowly varying visual-feature networks, thereby generating the inter-layer feature network;
Training the inter-layer feature network according to a training set and a custom training rule, constructing slow features with a deep-learning pooling method, and extracting feature vectors into a feature database;
Defining a classification rule and a new-class generation rule, and classifying an image to be recognized or generating a new class.
With reference to the first aspect, an embodiment of the invention provides a first possible implementation of the first aspect, wherein performing nonlinear slow-feature extraction on the auto-zoom image designated by the control variable, constructing intra-layer slow features and a slow-feature forest, and generating the intra-layer slowly varying visual-feature network corresponding to that auto-zoom image comprises:
Selecting the auto-zoom images sequentially with the control variable as the major order;
Randomly sampling the auto-zoom image to obtain the initial data set for slow feature analysis, and applying geometric-invariance transforms to the initial data set in order to extend it;
Serializing and normalizing the extended initial data set;
Replacing the principal component analysis algorithm of the original slow feature analysis algorithm with a nonlinear principal component analysis algorithm, extracting the nonlinear principal-component features of the natural-image sub-sequence, and generating a nonlinear basis set;
Applying MCMC nonlinear stochastic expansion to the nonlinear basis set to obtain the extended nonlinear basis set;
Pruning the extended nonlinear basis set with a custom near-orthogonal pruning method to obtain the near-orthogonal nonlinear basis set;
Whitening each element of the near-orthogonal nonlinear basis set;
Fitting the visual receptive-field parameters of the nonlinear bases of the whitened near-orthogonal basis set with a Gabor fitting algorithm;
Defining a first rule, constructing the intra-layer slowly varying visual topological set of the slowly varying bases according to the first rule, and establishing connection edges according to the visual-selectivity theory of receptive fields.
With reference to the first possible implementation of the first aspect, an embodiment of the invention provides a second possible implementation of the first aspect, wherein applying nonlinear stochastic expansion to the nonlinear basis set to obtain the extended nonlinear basis set comprises:
Computing the distribution density of the nonlinear bases in the nonlinear basis set according to the law of large numbers;
Using a histogram method to compute the descending distribution regions of that density, and adaptively selecting the densest regions according to a first criterion;
Applying the MCMC algorithm to each selected region to predict and generate new basis-function sequences;
Adding the new basis-function sequences to the nonlinear basis set to obtain the extended nonlinear basis set, thereby supplementing the information lost through image sampling.
With reference to the first aspect, an embodiment of the invention provides a third possible implementation of the first aspect, wherein training the inter-layer network according to the training set and the custom training rule, constructing slow features with the deep-learning pooling method, and extracting feature vectors into the feature database comprises:
Providing a major order and a secondary major order, and defining and initializing a first three-dimensional array according to them, wherein the first three-dimensional array contains a training-set identifier;
Performing the custom convolution between each intra-layer subnet and the training set to obtain the convolution maximum of each subnet and the sub-basis set contained in the corresponding subnet, and saving them in a second three-dimensional array;
Sorting the second three-dimensional array in descending order with the convolution maximum as the major order, adaptively computing the distribution law of the second three-dimensional array with the convolution maximum as index, truncating the second three-dimensional array, and generating the subtrees of the corresponding level of the auto-zoom image sequence generation layer that participate in network generation;
Networking adjacent layers using a second criterion;
Judging whether the control variable meets the requirement;
If it does not, modifying the value of the first three-dimensional array according to the first rule;
Judging whether the training-set identifier meets the requirement;
If it does, performing the custom convolution and generating a new training subset of similar images to complete a new training pass;
If it does not, outputting the feature vector to the feature database according to the first three-dimensional array;
Until the training of all classes ends, and outputting the visual triple slow-variation training network.
With reference to the third possible implementation of the first aspect, an embodiment of the invention provides a fourth possible implementation of the first aspect, wherein performing the custom convolution between each intra-layer subnet and the training set comprises:
Computing the size of each intra-layer subnet;
Intercepting a convolution window of that size starting at the head element of the training set, computing the convolution of the training set with the convolution window, and storing the convolution values in a temporary array;
Sampling step by step and repeating the convolution computation, appending the convolution values of each subnet to the temporary array;
Computing the maximum of the convolution values in the temporary array and writing the maximum into the second three-dimensional array.
With reference to the first aspect, an embodiment of the invention provides a fifth possible implementation of the first aspect, wherein training the network for recognition or adding new classes according to the provided unknown classes comprises:
Generating a sequence image set from the image to be recognized by incompletely overlapping random sampling;
Taking the sequence image set as input, convolving it with each subtree of layer i of the visual triple slow-variation training network, computing the sample mean and variance of the global convolution values, and computing, according to the first rule, the number of times each subtree is fired by the training set, where i is the level of the visual triple slow-variation training network;
Determining, from the nonzero elements of the firing counts, the inter-layer mapping between the corresponding subtrees of layer i-1 and layer i, and determining the set of layer-i subtrees to be convolved with the sequence image set;
Judging whether the control variable i meets the requirement;
If it does not, extracting the features of the image to be recognized into a third three-dimensional array according to the first rule, computing according to the second rule the distance between the last two dimensions of the third three-dimensional array and of the first three-dimensional array, and outputting several classes ordered by probability;
Judging whether the probability distribution over those classes is nearly uniform with small correlations and, if so, declaring the image to be recognized a new class and invoking the training sub-algorithm to train the new class.
In a second aspect, an embodiment of the invention provides an auto-zoom image sequence feature extraction device, comprising:
an acquisition unit, configured to generate an auto-zoom image sequence set from the collected auto-zoom images;
an initialization unit, configured to initialize a control variable to control the position of the auto-zoom sequence sub-image;
an extraction unit, configured to perform nonlinear slow-feature extraction on the auto-zoom image designated by the control variable, construct intra-layer slow features and a slow-feature forest, and generate the intra-layer slowly varying visual-feature network corresponding to that auto-zoom image, wherein the intra-layer slowly varying visual-feature networks comprise the layer-K and layer-K+1 slowly varying visual-feature networks;
a judging unit, configured to judge whether the control variable meets the requirement; when it does, the extraction unit constructs the intra-layer slowly varying visual-feature network of the next auto-zoom picture; otherwise, the establishing unit generates the inter-layer feature network from the intra-layer slowly varying visual-feature networks;
an establishing unit, configured to initialize and establish the inter-layer connections between all elements of the layer-K and layer-K+1 slowly varying visual-feature networks, generating the inter-layer feature network;
a training unit, configured to define a custom training rule, train the inter-layer network according to the training set, and extract feature vectors into the feature database;
a recognition unit, configured to establish a custom classification method and new-class generation conditions, and to train the network for recognition or add new classes according to the provided unknown classes.
With reference to the second aspect, an embodiment of the invention provides a first possible implementation of the second aspect, wherein the extraction unit is configured to:
select the auto-zoom images sequentially with the control variable as the major order;
randomly sample the auto-zoom image to obtain the initial data set for slow feature analysis, and apply geometric-invariance transforms to the initial data set in order to extend it;
serialize and normalize the extended initial data set;
extract the nonlinear principal-component slow features of the natural-image sub-sequence with a nonlinear principal component analysis algorithm, and generate a nonlinear basis set;
apply MCMC nonlinear stochastic expansion to the nonlinear basis set to obtain the extended nonlinear basis set;
prune the extended nonlinear basis set with a custom near-orthogonal pruning method to obtain the near-orthogonal nonlinear basis set;
whiten each element of the near-orthogonal nonlinear basis set;
fit the visual receptive-field parameters of the nonlinear bases of the whitened near-orthogonal basis set with a Gabor fitting algorithm;
define a first rule, construct the intra-layer slowly varying visual topological set of the slowly varying bases according to the first rule, and establish connection edges according to the similarity of the receptive fields.
With reference to the second aspect, an embodiment of the invention provides a second possible implementation of the second aspect, wherein the training unit is further configured to:
provide a major order and a secondary major order, and define and initialize a first three-dimensional array according to them, wherein the first three-dimensional array contains a training-set identifier;
perform the custom convolution between each intra-layer subnet and the training set to obtain the convolution maximum of each subnet and the sub-basis set contained in the corresponding subnet, and save them in a second three-dimensional array;
sort the second three-dimensional array in descending order with the convolution maximum as the major order, adaptively compute its distribution law with the convolution maximum as index, truncate the second three-dimensional array, and generate the subtrees of the corresponding level of the auto-zoom image sequence generation layer that participate in network generation;
network adjacent layers using the second criterion;
judge whether the control variable meets the requirement;
if it does not, modify the value of the first three-dimensional array according to the first rule;
judge whether the training-set identifier meets the requirement;
if it does, perform the custom convolution and generate a new training subset of similar images to complete a new training pass;
if it does not, output the feature vector to the feature database according to the first three-dimensional array;
until the training of all classes ends, and output the visual triple slow-variation training network.
With reference to the second aspect, an embodiment of the invention provides a third possible implementation of the second aspect, wherein the recognition unit is configured to:
generate a sequence image set from the image to be recognized by incompletely overlapping random sampling;
take the sequence image set as input, convolve it with each subtree of layer i of the visual triple slow-variation training network, compute the sample mean and variance of the global convolution values, and compute, according to the first rule, the number of times each subtree is fired by the training set, where i is the level of the visual triple slow-variation training network;
determine, from the nonzero elements of the firing counts, the inter-layer mapping between the corresponding subtrees of layer i-1 and layer i, and determine the set of layer-i subtrees to be convolved with the sequence image set;
judge whether the control variable i meets the requirement;
if it does not, extract the features of the image to be recognized into a third three-dimensional array according to the first rule, compute according to the second rule the distance between the last two dimensions of the third three-dimensional array and of the first three-dimensional array, and output several classes ordered by probability;
judge whether the probability distribution over those classes is nearly uniform with small correlations and, if so, declare the image to be recognized a new class and invoke the training sub-algorithm to train the new class.
The invention thus provides an auto-zoom image sequence feature extraction method and device: an auto-zoom image sequence is acquired; the control variable of the image-sequence sub-images is initialized; slow-feature extraction is performed on each auto-zoom image and an intra-layer slow-feature forest is constructed, generating the intra-layer slowly varying visual-feature network corresponding to that image; whether the control variable meets the training requirement is judged, and while it does not, intra-layer feature networks continue to be generated; otherwise, inter-layer connections are established among all elements of the slowly varying visual-feature networks of adjacent layers; the inter-layer network is trained according to the training set and a custom training rule, and feature vectors are extracted with a deep-learning pooling method and stored in the feature database; the network is trained for recognition, or new classes are added, according to unknown classes. The invention guarantees nonlinear slow-feature extraction for natural images and the representation of intra-layer and inter-layer slow-variation relations, gives the bases used to extend the algorithm the visual selectivity inherent in natural images together with algorithmic elasticity, and reduces the computational complexity of the original algorithm.
Other features and advantages of the present invention will be set forth in the following description and will in part become apparent from the description or be understood by practicing the invention. The objects and other advantages of the invention are realized and obtained by the structures particularly pointed out in the description, the claims, and the accompanying drawings.
To make the above objects, features, and advantages of the present invention more apparent, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
Brief description of the drawings
In order to explain the specific embodiments of the present invention or the technical solutions of the prior art more clearly, the accompanying drawings required for the description of the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative work.
Fig. 1 is a flow chart of the auto-zoom image sequence feature extraction method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the auto-zoom image sequence feature extraction device provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of the recognition process provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of the inter-layer network construction and feature extraction sub-algorithm provided by an embodiment of the present invention;
Fig. 5 is a flow chart of another auto-zoom image sequence feature extraction method provided by an embodiment of the present invention.
Reference numerals:
10 - acquisition unit; 20 - initialization unit; 30 - extraction unit; 40 - judging unit; 50 - establishing unit; 60 - training unit; 70 - recognition unit.
Embodiment
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention.
In existing slow feature analysis algorithms, linear principal component analysis (PCA) cannot extract the nonlinear principal-component features of natural images, and the polynomial expansion of the original slow feature extraction algorithm is computationally expensive and has difficulty reflecting the visual selectivity inherent in natural images and algorithmic elasticity. On this basis, the auto-zoom image sequence feature extraction method and device provided by the embodiments of the present invention guarantee nonlinear feature extraction for natural images, give the bases used to extend the algorithm the visual selectivity inherent in natural images and algorithmic elasticity, and reduce the computational complexity of the algorithm.
Embodiment one:
Fig. 1 is a flow chart of the auto-zoom image sequence feature extraction method provided by an embodiment of the present invention.
Referring to Fig. 1, the auto-zoom image sequence feature extraction method includes:
Step S101: generate an auto-zoom image sequence set from the collected auto-zoom images;
Specifically, an auto-zoom image sequence is acquired by an auto-zoom image acquisition device, forming the sequence set LBS = {LB(i)}, where LB(i) denotes the i-th image in the auto-zoom image sequence set and i = 1, 2, 3, …, N.
Step S102: initialize the control variable to control the position of the auto-zoom sequence sub-image;
Specifically, the control variable is initialized to i = 1, which controls the position within the auto-zoom sequence.
Step S103: perform nonlinear slow-feature extraction on the auto-zoom image designated by the control variable, construct intra-layer slow features and a slow-feature forest, and generate the intra-layer slowly varying visual-feature network corresponding to that auto-zoom image, wherein the intra-layer slowly varying visual-feature networks include the layer-K and layer-K+1 slowly varying visual-feature networks;
Specifically, using the improved slow-feature-analysis-based intra-layer feature extraction method for auto-zoom image sequences, slow-feature extraction is performed on the i-th image LB(i) of the LBS set, the intra-layer slow-feature forest is constructed, and the intra-layer slowly varying visual-feature network RF_woeNL_Base(i, k) corresponding to LB(i) is generated. Then i++ is executed, i.e., i is incremented by 1 at every pass of this step.
Step S104: judge whether the control variable still points inside the image sequence, and generate the intra-layer feature network while the requirement is not yet met;
Specifically, when i <= N the algorithm returns to step S103; otherwise the algorithm proceeds to step S105 and generates the inter-layer feature network.
Step S105: establish the inter-layer connections between all elements of the layer-K and layer-K+1 slowly varying visual-feature networks;
Specifically, the control variable pointing into the intra-layer network collection is initialized, as are the connection weights between the intra-layer networks. Let the layer-K slowly varying visual-feature network be RF_woeNL_Base(i, k) and the layer-K+1 network be RF_woeNL_Base(j, k); a full connection from layer i to layer j is established between every subtree of RF_woeNL_Base(i, k) and RF_woeNL_Base(j, k), where i and j denote the forests on the two adjacent layers. That is, every element of RF_woeNL_Base(i, k) establishes an inter-layer connection with every element of RF_woeNL_Base(j, k); the connection weight is given by formula (1).
Step S106: inspired by the visual-selectivity theory, define a training rule, train the inter-layer network according to the training set, and extract the feature vectors into the feature database;
Specifically, according to the training atlas User_training_set(index_trainning) given by the user, the inter-layer network wnet is trained and the feature vectors are extracted into the feature database, as shown in Fig. 5.
Step S107: define the classification rule and the new-class generation rule, and classify the image to be recognized or generate a new class.
Specifically, for the unknown classes provided by the user and with the custom classification rule, when the classification requirement of training is not met the feature extraction algorithm is started to extract the features of the new class and the new class is added; otherwise, recognition is performed with the trained network.
The steps of the main algorithm flow are described in detail below.
First, the core sub-method, i.e., the improved slow-feature-analysis-based intra-layer feature extraction method for auto-zoom image sequences, is described. Its steps are as follows:
(1) In the auto-zoom image sequence set LBS = {LB(i) | LB(i) denotes the i-th image of the auto-zoom image sequence set, i = 1, 2, 3, …, N}, the auto-zoom images LB(i) are selected sequentially with the control variable i running from 1 to N as the major order.
(2) From the auto-zoom image LB(i), N1 incompletely overlapping sub-images of size m1×n1 are randomly sampled, forming the initial data set LB_SQ(i, k) for slow feature analysis, where i indicates the i-th auto-zoom image of the LBS set and k indicates the position of the sub-image in the LB_SQ(i, k) set, k = 1, 2, 3, …, N1. Inspired by image geometric invariance and the auto-zoom transformation process, geometric-invariance transforms are applied to the set LB_SQ(i, k) to extend the initial data set.
(3) The set LB_SQ(i, k) is serialized and normalized.
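For illustration, a minimal Python sketch of steps (2)–(3) follows. It assumes grayscale images stored as NumPy arrays and uses a flip and a 180-degree rotation as stand-ins for the geometric-invariance transforms, which the patent does not fix; the function names and parameters are illustrative only.

```python
import numpy as np

def sample_patches(image, patch_h, patch_w, n_patches, rng=None):
    """Randomly sample n_patches (possibly overlapping) sub-images from one image."""
    rng = np.random.default_rng() if rng is None else rng
    H, W = image.shape
    ys = rng.integers(0, H - patch_h + 1, size=n_patches)
    xs = rng.integers(0, W - patch_w + 1, size=n_patches)
    return np.stack([image[y:y + patch_h, x:x + patch_w] for y, x in zip(ys, xs)])

def augment_geometric(patches):
    """Extend the initial data set with simple geometric-invariance transforms."""
    flipped = patches[:, :, ::-1]                     # horizontal flip
    rotated = np.rot90(patches, k=2, axes=(1, 2))     # 180-degree rotation (shape-safe)
    return np.concatenate([patches, flipped, rotated], axis=0)

def serialize_and_normalize(patches):
    """Flatten each sub-image to a vector, then zero-mean / unit-variance normalize."""
    X = patches.reshape(len(patches), -1).astype(np.float64)
    X -= X.mean(axis=1, keepdims=True)
    return X / np.maximum(X.std(axis=1, keepdims=True), 1e-8)
```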
(4) The linear principal component analysis (PCA) algorithm of the original slow feature extraction algorithm is replaced with a nonlinear principal component analysis (NLPCA) algorithm, the nonlinear principal-component features of the natural-image sub-sequence are extracted, and the nonlinear basis set NL_Base(i, k) is generated.
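The patent does not commit to a particular NLPCA implementation; as one hedged possibility, kernel PCA can stand in for the nonlinear principal-component step, as in the sketch below. The use of scikit-learn's KernelPCA, the component count, and the kernel width are assumptions, not part of the original algorithm.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def nonlinear_basis(X, n_components=64, gamma=1e-3):
    """Extract nonlinear principal components of the patch vectors X and back-project
    unit codes to obtain one basis element per component (an NL_Base(i, k) analogue)."""
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma,
                     fit_inverse_transform=True)
    codes = kpca.fit_transform(X)                     # nonlinear principal components
    basis = kpca.inverse_transform(np.eye(n_components))
    return codes, basis
```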
(5) To address the information lost by random sampling of the image, the polynomial expansion of the original slow feature extraction algorithm is replaced with a basis-set expansion method for NL_Base(i) based on an improved MCMC algorithm, realizing the nonlinear stochastic expansion of the NL_Base(i) basis set, predicting bases that may be contained in the natural image but were not sampled, and generating the extended nonlinear basis set eNL_Base(i, k).
(6) eNL_Base(i, k) is pruned with the custom near-orthogonal pruning method. The basis set eNL_Base(i, k) is overcomplete, but experiments show that its data redundancy is large and its computational complexity too high. Therefore the near-orthogonal criterion R_orthogonal is adopted to prune the nonlinear orthogonal bases of eNL_Base(i, k), yielding the optimized near-orthogonal nonlinear basis set oeNL_Base(i, k). The near-orthogonal criterion R_orthogonal is expressed as shown in formula (2):
If R(i, k1, k2) < ε_orth, then oeNL_Base(i, k1) and oeNL_Base(i, k2) are close rather than nearly orthogonal, and one of them is rejected; otherwise both have the near-orthogonality property and are retained in the basis space. Here ε_orth is the control parameter of the near-orthogonality test: when ε_orth is larger, the near-orthogonality condition is looser and the pruning is stronger; conversely, the condition is stricter and the pruning weaker. This method effectively removes the information redundancy inside the eNL_Base(i, k) set and controls over-fitting.
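A greedy sketch of the near-orthogonal pruning of step (6) is shown below; since formula (2) is not reproduced here, 1 − |cosine similarity| stands in for the criterion R, and the value of ε_orth is illustrative.

```python
import numpy as np

def near_orthogonal_prune(basis, eps_orth=0.15):
    """Keep a basis element only if it is nearly orthogonal to everything kept so far;
    larger eps_orth means a looser near-orthogonality condition and stronger pruning."""
    kept = []
    for b in basis:
        b = b / (np.linalg.norm(b) + 1e-12)
        too_close = any(1.0 - abs(np.dot(b, k)) < eps_orth for k in kept)
        if not too_close:
            kept.append(b)
    return np.stack(kept) if kept else np.empty((0, basis.shape[1]))
```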
(7) Each element of the optimized nonlinear basis set oeNL_Base(i, k) is whitened, generating the whitened near-orthogonal nonlinear basis set woeNL_Base(i).
(8) With a Gabor fitting algorithm, the visual receptive-field parameters of the nonlinear bases RLB(i, s) of the woeNL_Base(i, k) elements are fitted; the parameters are expressed as shown in formula (3):
[i, frequency, orientation, x0, y0, phase, a, b, alpha, bieta] = T(RLB(i, s))    (3)
where T is the Gabor fitting algorithm and the left-hand side contains the visual receptive-field parameters: i and s denote the position of the fitted parameter in the image set and in the generated sequence respectively, frequency denotes the frequency of the Gabor function, orientation its orientation, x0 and y0 its centre, phase its phase, and a and b the axis lengths of the Gabor transform.
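A possible least-squares realisation of the Gabor fit of formula (3) is sketched below with SciPy's curve_fit; the amplitude parameter and the initial guesses are assumptions, and the alpha/bieta terms of the formula are omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def gabor(coords, freq, theta, x0, y0, phase, a, b, amp):
    """2-D Gabor function evaluated on flattened (x, y) coordinates."""
    x, y = coords
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    envelope = np.exp(-(xr ** 2 / (2 * a ** 2) + yr ** 2 / (2 * b ** 2)))
    return amp * envelope * np.cos(2 * np.pi * freq * xr + phase)

def fit_receptive_field(base_patch):
    """Fit frequency, orientation, centre, phase and axis lengths to one basis element."""
    h, w = base_patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = (xs.ravel().astype(float), ys.ravel().astype(float))
    p0 = [0.1, 0.0, w / 2, h / 2, 0.0, w / 4, h / 4, float(base_patch.max())]
    params, _ = curve_fit(gabor, coords, base_patch.ravel(), p0=p0, maxfev=5000)
    keys = ["frequency", "orientation", "x0", "y0", "phase", "a", "b", "amplitude"]
    return dict(zip(keys, params))
```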
(9) Inspired by primate visual selectivity (on the same image layer, neighbouring receptive fields have similar frequency and orientation characteristics, produce similar visual responses, and are distributed in the same visual tree; receptive fields that are far apart have distinct frequency and orientation characteristics, produce different visual responses, and are distributed in different visual trees; this makes it possible to generate the slowly varying visual response forest of the image layer for extracting the slowly varying topological-structure features of the auto-zoom image), the rule R, i.e., the first rule, is defined. Based on rule R, the intra-layer slowly varying visual topological set LB_S(i, t) of the i-th layer of slowly varying bases RF_woeNL_Base(i, k) is constructed, where i denotes the i-th image and t = 1, 2, 3, …, L (L <= K) denotes the t-th slowly varying structural element of the i-th image. Rule R is computed as shown in formulas (4), (5), and (6);
where s1 and s2 denote two bases of the i-th layer and dis denotes the similarity-weighted distance: the smaller dis, the more similar the two bases s1 and s2. R(i, s1, s2) denotes the degree of similarity of receptive fields s1 and s2 of the i-th layer; R(i, s1, s2) = 1 indicates that they are close and a connection edge can be established. ε_edge is the constraint for constructing an edge and expresses how similar two receptive fields must be: for strong links this parameter is smaller, for weak links larger. λ_i expresses the contributions of frequency and orientation to the similarity; it is computed adaptively rather than set by hand. A sketch of this edge-building rule is given after formula (7) below.
RF_woeNL_Base_para(i, k) = {[i, center_cluster_LB_S(i, t), RLB(i, s, 1), RLB(i, s, 2), RLB(i, s, 3), …, RLB(i, s, K)]}    (7)
As shown in formula (7), center_cluster_LB_S(i, t) denotes the cluster centre of the receptive-field function group and is the root node of the subtree, and RLB(i, s, j) denotes the j-th subtree of the forest in this layer. The subtree construction principle is that, on the basis of the edge-construction principle, the frequency and orientation of the left subtrees of a subtree root node are all smaller than those of the right subtrees, i.e., the number of left subtrees equals the number of right subtrees, which equals the rounded value of one half of the number of subtrees.
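Since formulas (4)–(6) are not reproduced, the following sketch only illustrates the shape of rule R: receptive fields are connected when a weighted frequency/orientation distance falls below ε_edge. The fixed weight lam stands in for the adaptively computed λ_i and is an assumption.

```python
import numpy as np

def similarity_edges(rf_params, eps_edge=0.2, lam=0.5):
    """Build intra-layer connection edges between receptive fields whose weighted
    frequency/orientation distance dis is small (the smaller dis, the more similar)."""
    edges = []
    for i in range(len(rf_params)):
        for j in range(i + 1, len(rf_params)):
            df = abs(rf_params[i]["frequency"] - rf_params[j]["frequency"])
            do = abs(rf_params[i]["orientation"] - rf_params[j]["orientation"])
            dis = lam * df + (1.0 - lam) * do
            if dis < eps_edge:                  # similar enough -> connection edge
                edges.append((i, j, dis))
    return edges
```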
It should be noted that the "basis-set expansion method" mentioned in the core sub-method, i.e., the NL_Base(i) basis-set expansion method based on the improved MCMC algorithm, is described in detail below. Its steps are as follows:
(1) According to the statistical law of large numbers, the probability distribution density of the elements of the NL_Base(i) basis set is computed with the density formula (8), where p is the probability distribution density and N is the number of occurrences, i.e., the frequency.
(2) With a histogram method, the descending distribution regions of the nonlinear-basis distribution density are computed (with r1 ≤ r), and the densest regions are chosen adaptively with the criterion R_MCMC, i.e., the first criterion.
It should be noted that the criterion R_MCMC is as shown in formula (9), where k is the first value of t for which formula (8) holds and energy_ε is the energy-retention factor.
(3) The MCMC algorithm is applied to each selected region to predict and generate new basis-function sequences, which are added to the NL_Base(i) basis set to supplement the information lost through image sampling. The MCMC prediction over [Pi, Pj] is realized in this step as follows:
Assume the number of bases to be predicted is k1 (k1 is odd, preferably prime). Compute Pt for t >= i and t <= j with Pt(t+1) = Pt(t) + delta*Pt(t), where delta = (kj - ki)/k1; divide [Pi, Pj] into a finite set of subsequences and take the boundary points of the set to generate the transfer matrix of the MCMC algorithm; then predict and generate k1 basis functions.
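The transfer-matrix construction above is only partially specified, so the sketch below falls back to a generic random-walk Metropolis step over an assumed one-dimensional basis-parameter distribution; it illustrates the idea of predicting k1 new basis parameters from the densest region, not the patent's exact procedure.

```python
import numpy as np

def mcmc_expand(dense_values, k1=7, step=0.05, rng=None):
    """Random-walk Metropolis sampling around the densest region of a basis-parameter
    distribution, returning k1 predicted parameter values for new basis functions."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = float(dense_values.min()), float(dense_values.max())
    hist, edges = np.histogram(dense_values, bins=16, range=(lo, hi), density=True)

    def density(v):
        idx = int(np.clip(np.searchsorted(edges, v) - 1, 0, len(hist) - 1))
        return float(hist[idx]) + 1e-12

    current, samples = float(dense_values.mean()), []
    while len(samples) < k1:
        proposal = current + rng.normal() * step * (hi - lo)
        if rng.random() < min(1.0, density(proposal) / density(current)):
            current = proposal
            if lo <= current <= hi:
                samples.append(current)      # accepted parameter of a predicted base
    return np.array(samples)
```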
The inter-layer network construction and feature extraction sub-algorithm of step S106 is described in detail below, as shown in Fig. 4. Its steps are as follows:
(0) Initialize the control variable i = 1; i is the reference pointer to the image in the auto-zoom image sequence. With the frequency of the last-layer subtrees as the major order and, within the same subtree, the orientation as the secondary major order, and with the number of last-layer basis elements number_base_N_layer as the size, define and initialize the three-dimensional array visual_feature_vector(gk, 2, number_base_N_layer) = {0; 0; 0}, i.e., the first three-dimensional array, where gk is the training-set identifier, 2 is the dimension, and number_base_N_layer is the dimensionality of the extracted features.
(1) Within the layer, each subnet RF_woeNL_Base_para(i, k) of layer i and the training set trainning_set first undergo the custom convolution to obtain the maximum of each subnet's convolutions and the sub-basis set of the subnet that produced it, which are saved into the three-dimensional array T_MAX(layer_index, sub_tree_index, Max_conv_value, sub_base_set), i.e., the second three-dimensional array, where layer_index is the position of the corresponding level of the auto-zoom image sequence generation layer, Max_conv_value is the convolution maximum, sub_base_set is the sub-basis set producing the maximum, and sub_tree_index is the position of the subnet in the layer.
The custom convolution is:
(11) Compute the size size_sub_net of the subnet RF_woeNL_Base_para(i, k);
(12) Starting from the head element of the training set trainning_set, intercept a convolution window win_convplution with size_sub_net as the window size; compute the convolution of trainning_set with win_convplution; store the result of the convolution temporarily in the two-dimensional array temp_max_array(con_value, window_index), where con_value is the convolution value and window_index is the subnet position, i.e., sub_tree_index, corresponding to that convolution value;
(13) With step as the window sampling stride, sample step by step and repeat (12), appending the convolution values of each subnet to temp_max_array;
Compute the maximum MAX of con_value in temp_max_array and write it into the three-dimensional array T_MAX(layer_index, sub_tree_index, Max_value, sub_base_set), where layer_index = i, sub_tree_index = sub_tree_index, Max_value = MAX, and sub_base_set = sub_net(sub_tree_index).
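One possible reading of the custom convolution (11)–(13) is a sliding-window correlation whose per-subnet maximum becomes a T_MAX entry, sketched below; the stride value and the return of the window position are illustrative assumptions.

```python
import numpy as np

def subnet_max_convolution(training_image, subnet_kernel, stride=4):
    """Slide subnet_kernel over training_image, compute the 'convolution' value of each
    window and return the maximum value and its window position (one T_MAX entry)."""
    kh, kw = subnet_kernel.shape
    H, W = training_image.shape
    best_val, best_pos = -np.inf, (0, 0)
    for y in range(0, H - kh + 1, stride):
        for x in range(0, W - kw + 1, stride):
            window = training_image[y:y + kh, x:x + kw]
            val = float(np.sum(window * subnet_kernel))
            if val > best_val:
                best_val, best_pos = val, (y, x)
    return best_val, best_pos
```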
(2) With Max_value as the major order, sort the array T_MAX in descending order; under the condition, given by the energy-contribution model, that the energy loss is not higher than 10%, adaptively compute the distribution law of T_MAX with Max_value as index; based on the Max_value column of T_MAX, adaptively compute the first K maxima of the T_MAX array with a histogram algorithm; truncate T_MAX with K as the criterion and generate the subtrees of layer layer_index that participate in network generation. Then i++;
(3) When i >= 1 and i <= N, network the adjacent layers i and j = i-1 with the custom criterion R_create_net, i.e., the second criterion. Inspired by visual selectivity, R_create_net is defined as follows: define a two-dimensional function g(frequency, orientation) satisfying visual selectivity; compute, for the ik-th subnet of layer i, the minimum of the g(frequency, orientation) gradient against all T_MAX subnets of layer j, denoted diff(i, ik, j, t); and update, by w(i, j, t+1) = w(i, j, t) + η × diff(i, ik, j, k), the weights from all ik subnets to the layer-j subnet producing the minimal gradient;
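Because the function g(frequency, orientation) is left unspecified, the sketch below substitutes a plain Euclidean distance over the two receptive-field parameters and applies the update w ← w + η·diff only to the best-matching layer-j subnet; it is illustrative, not the patent's criterion R_create_net.

```python
import numpy as np

def connect_adjacent_layers(layer_i_params, layer_j_params, weights, eta=0.1):
    """For each subnet ik of layer i, strengthen the weight to the layer-j subnet with
    the smallest frequency/orientation difference: w[ik, t] <- w[ik, t] + eta * diff."""
    for ik, p in enumerate(layer_i_params):
        diffs = [np.hypot(p["frequency"] - q["frequency"],
                          p["orientation"] - q["orientation"])
                 for q in layer_j_params]
        t = int(np.argmin(diffs))             # best-matching layer-j subnet
        weights[ik, t] += eta * diffs[t]
    return weights
```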
(4) When i <= N, the algorithm returns to (1); otherwise network training is complete and the algorithm goes to (5).
(5) Feature generation.
The value of the three-dimensional array visual_feature_vector(gk, 2, number_base_N_layer) is modified according to rule R, which is given by formulas (10) and (11).
(6) When gk <= the number of training passes per image, the algorithm goes to (1), generates a new training subset of similar images, completes the new training pass, and updates the feature variables; otherwise the training of this class of pictures is complete and the feature vector is output to the feature database;
(7) When the training of all classes ends, the algorithm stops and outputs the per-class visual triple slow-variation training network wnet.
The recognition-process sub-algorithm of step S107 is described in more detail below, as shown in Fig. 3. Its steps are as follows:
(0) Initialize the control variable i = 1, where i denotes the level of wnet(i). From the image to be recognized, identify_image, generate the sequence image set indentify_set of incompletely overlapping samples of size m1×n1 by random sampling.
(1) With indentify_set as the input of the recognition network wnet(i), convolve it with each subtree of the i-th layer of wnet, compute the sample mean and variance of the global convolution values, compute according to the firing rule R_firing, i.e., the first rule, the number of times each subtree is fired by the training set, firing_munber(i, k), and execute i = i + 1. The firing rule is computed as shown in formulas (12) and (13);
where k denotes the k-th subtree of the i-th image, k ∈ [1, 2, 3, …, size_sub_tree(wnet(i))].
(2) According to the nonzero elements of firing_munber(i, k), determine the inter-layer mapping between the corresponding subtrees of wnet(i-1) and wnet(i), and determine the set of layer-i subtrees to be convolved with indentify_set.
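As formulas (12)–(13) are not reproduced, the sketch below replaces the firing rule R_firing with a simple mean-plus-variance threshold; conv_fn stands for any scalar convolution between a sampled sub-image and a subtree (for example a wrapper around the custom convolution sketched earlier), and z_thresh is an assumed parameter.

```python
import numpy as np

def firing_counts(identify_set, subtrees, conv_fn, z_thresh=1.0):
    """Count, for each subtree of the current wnet layer, how many sub-images of
    identify_set produce a convolution value above mean + z_thresh * std."""
    vals = np.array([[conv_fn(patch, subtree) for subtree in subtrees]
                     for patch in identify_set])
    mu, sigma = vals.mean(), vals.std() + 1e-12
    return (vals > mu + z_thresh * sigma).sum(axis=0)   # firing_munber(i, k) analogue
```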
(3) When i <= N, the algorithm returns to step (1); otherwise it goes to step (4) to extract the feature values.
(4) In this step, according to rule R, i.e., the first rule, the features of the image to be recognized are extracted into the value of the three-dimensional array visual_feature_vector_test(gk, 2, number_base_N_layer).
According to rule R_identify, i.e., the second rule, the distance between the last two dimensions of the feature visual_feature_vector_test and of the class-library feature visual_feature_vector(gk, 2, number_base_N_layer) is computed, and the k classes with the highest probability are output by probability magnitude.
R_identify is a measure defined as in formulas (14) and (15):
δ²(test_feature(k), original_feature(k)) = AND(test_feature(k), original_feature(k))    (15)
where p_test(k) is the firing count of the k-th position of the feature generated from the test image, p_feature(k) is the firing count of the k-th feature position of the class being compared, test_feature(k) is the k-th feature position generated from the test image, and original_feature(k) is the k-th feature position of the class being compared. This measure reflects the probability distance between the detected object and the matching class, and introducing a probabilistic method improves the matching accuracy.
When the probability distribution over the classes is nearly uniform and the correlations are small, the image is judged to be a new class and the training algorithm is invoked to train the new class.
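A rough sketch of the probability-based matching and new-class decision is given below; the overlap score stands in for the measure of formulas (14)–(15), and the uniformity tolerance is an assumed parameter.

```python
import numpy as np

def classify_or_new_class(test_feature, class_features, top_k=3, uniform_tol=0.05):
    """Score each stored class by the overlap of firing positions with the test feature,
    output the top_k classes by probability, and flag a new class when the probability
    distribution over classes is nearly uniform."""
    test_on = test_feature > 0
    scores = np.array([float(np.sum(test_on & (cf > 0))) for cf in class_features])
    probs = scores / max(scores.sum(), 1e-12)
    ranked = np.argsort(probs)[::-1][:top_k]        # most probable classes first
    is_new_class = probs.max() - probs.min() < uniform_tol
    return ranked, probs, is_new_class
```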
The invention thus provides an auto-zoom image sequence feature extraction method and device. A nonlinear principal component analysis (NLPCA) algorithm replaces the linear principal component analysis (PCA) of the original slow-feature algorithm, extracting the nonlinear principal-component features of the auto-zoom images and guaranteeing nonlinear feature extraction for natural images. An MCMC algorithm improved with the law of large numbers and the smoothness characteristics of natural images replaces the polynomial expansion of the original algorithm, guaranteeing that the bases added by the improved algorithm carry the visual selectivity inherent in natural images and algorithmic elasticity while reducing computational complexity. A near-orthogonal vector pruning technique is also proposed: the generated basis set is pruned with the near-orthogonal-basis method, optimizing the basis set while keeping it overcomplete. In addition, the pooling theory of deep learning is introduced, with a custom feature-vector structure, feature pool, and custom measure, realizing a pooling-prediction feature-generation method whose classification in probability space is correct and reliable. Finally, the m:n inter-layer maximum-K-response mapping resolves the information truncation of 1:m mappings, extracts features more broadly, and improves the recognition capability of the algorithm.
Embodiment two:
As shown in Fig. 2 a kind of include from zoom image sequence signature extraction element:
Reference picture 2, collecting unit 10, for being generated according to what is collected from zoom image from zoom image arrangement set;
Initialization unit 20, variable is controlled to control the position from zoom sequence subgraph for initializing;
Extraction unit 30, non-linear slow change feature extraction is carried out from zoom image for being specified to control variable, And become feature in structure layer slowly and become feature forest with slow, generation becomes net slowly with visual signature in the layer corresponding from zoom image Network, wherein, visual signature becomes network slowly in layer includes in K layers that to become visual signature in network and K+1 layers slowly slow for visual signature Become network;
It should be noted that it is special by vision in the adjacent layer from the extraction of zoom picture that visual signature becomes network collection slowly in layer Sign is slow to become what network formed.How many is individual to become network slowly from zoom picture with regard to visual signature in how many layer.
Judging unit 40, for judging to control whether variable meets to require, if meeting to require, go back to extraction unit 30;Otherwise in the case where being unsatisfactory for requirement, then it is transferred to and establishes unit 50, network generation interlayer is become according to visual signature in layer slowly Character network;
Unit 50 is established, establish visual signature in the K layers for initialization becomes vision spy in network and K+1 layers slowly The slow interlayer connection for becoming all elements in network of sign, i.e., vision becomes character network slowly in adjacent two layers layer;
Training unit 60, is inspired by visual selective, self-defined training rule, and interlayer network is trained according to training set, and Extract characteristic vector deposit feature database;
Recognition unit 70, according to the unknown class of offer, criteria theorem is produced according to customized pigeon-hole principle and new class, entered Row training network identifies or increased new class.
According to an exemplary embodiment of the present invention, the extraction unit is configured to:
select the auto-zoom images sequentially with the control variable as the major order;
randomly sample the auto-zoom image to obtain the initial data set for slow feature analysis, and apply geometric-invariance transforms to the initial data set in order to extend it;
serialize and normalize the extended initial data set;
replace the principal component analysis algorithm of the original slow feature analysis method with a nonlinear principal component analysis algorithm, extract the nonlinear principal-component features of the natural-image sub-sequence, and generate a nonlinear basis set;
apply nonlinear stochastic expansion to the nonlinear basis set with a Markov chain Monte Carlo algorithm to obtain the extended nonlinear basis set;
prune the extended nonlinear basis set with the custom near-orthogonal pruning method to obtain the near-orthogonal nonlinear basis set;
whiten each element of the near-orthogonal nonlinear basis set;
fit the visual receptive-field parameters of the nonlinear bases of the whitened near-orthogonal basis set with a Gabor fitting algorithm;
define the first rule, construct the intra-layer slowly varying visual topological set of the slowly varying bases according to the first rule, and establish connection edges according to the visual-selectivity principle of receptive fields.
According to an exemplary embodiment of the present invention, the training unit is further configured to:
provide a major order and a secondary major order, and define and initialize the first three-dimensional array according to them, wherein the first three-dimensional array contains the training-set identifier;
perform the custom convolution between each intra-layer subnet and the training set to obtain the convolution maximum of each subnet and the sub-basis set contained in the corresponding subnet, and save them in the second three-dimensional array;
sort the second three-dimensional array in descending order with the convolution maximum as the major order, adaptively compute its distribution law with the convolution maximum as index, truncate the second three-dimensional array, and generate the subtrees of the corresponding level of the auto-zoom image sequence generation layer that participate in network generation;
network adjacent layers using the second criterion;
judge whether the control variable meets the requirement;
if it does not, modify the value of the first three-dimensional array according to the first rule;
judge whether the training-set identifier meets the requirement;
if it does, perform the custom convolution and generate a new training subset of similar images to complete a new training pass;
if it does not, output the feature vector to the feature database according to the first three-dimensional array;
until the training of all classes ends, and output the visual triple slow-variation training network.
According to an exemplary embodiment of the present invention, the recognition unit includes:
Generating a sequence image set from the image to be recognized by an incompletely-overlapping random sampling method;
Taking the sequence image set as input, convolving it with each subtree of the i-th layer of the visual three-slow-variation training network, computing the sample mean and variance of the global convolution values, and computing, according to the first rule, the number of times each subtree is responded to by the training set, wherein i is the level of the visual three-slow-variation training network;
Determining the set of layer-i subtrees to be convolved with the sequence image set, according to the subtrees of layer i-1 corresponding to the non-zero elements of the response counts and the determined inter-layer mapping relation of layer i;
Judging whether the control variable i meets the requirement;
If it does not, extracting the features of the image to be recognized into a third three-dimensional array according to the first rule, computing, according to the second rule, the distance over the last two dimensions between the third three-dimensional array and the first three-dimensional array, and outputting several classifications ordered by the magnitude of their probability values;
Judging whether the probability distribution of the classifications is uniform and the degree of association is small, and if so, setting the image to be recognized as a new class and calling the training sub-algorithm to perform new-class training.
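Two pieces of this recognition flow are easy to sketch: the incompletely-overlapping random sampling and the probability-based new-class decision. The code below assumes that the sampling draws patches on a strided grid with random jitter so that neighbours overlap only partially, that per-class distances are turned into probabilities with a softmax, and that a nearly uniform probability distribution (a low peak probability) plays the role of the "small degree of association" test; `peak_threshold` and the patch geometry are illustrative values, not taken from the patent.

```python
import numpy as np

def sample_overlapping_patches(image, patch=16, stride=12, jitter=3, rng=None):
    """Incompletely-overlapping random sampling: a strided grid with random jitter,
    so that neighbouring patches overlap only partially."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    patches = []
    for y in range(0, h - patch, stride):
        for x in range(0, w - patch, stride):
            dy, dx = rng.integers(-jitter, jitter + 1, size=2)
            yy = int(np.clip(y + dy, 0, h - patch))
            xx = int(np.clip(x + dx, 0, w - patch))
            patches.append(image[yy:yy + patch, xx:xx + patch])
    return patches

def classify_or_flag_new_class(feature_vec, class_centroids, peak_threshold=0.4):
    """Score an image feature against stored class features and flag a possible new class.

    `class_centroids` is a dict {class_name: feature vector} standing in for the
    first three-dimensional array; a nearly uniform probability distribution
    (low peak probability) is treated as a candidate new class.
    """
    names = list(class_centroids)
    dists = np.array([np.linalg.norm(feature_vec - class_centroids[n]) for n in names])
    probs = np.exp(-dists) / np.exp(-dists).sum()     # softmax over negative distances
    order = np.argsort(-probs)
    ranking = [(names[i], float(probs[i])) for i in order]
    is_new_class = probs.max() < peak_threshold       # near-uniform -> candidate new class
    return ranking, is_new_class
```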
The computer program product of the from-zoom image sequence feature extraction method and device provided by the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions contained in the program code may be used to execute the methods described in the foregoing method embodiments. For the specific implementation, reference may be made to the method embodiments, which are not repeated here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the devices described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc. In addition, the terms "first", "second", and "third" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance.
Finally, it should be noted that the embodiments described above are merely specific embodiments of the present invention, intended to illustrate rather than limit its technical solution, and the scope of protection of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may still, within the technical scope disclosed by the invention, modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features. Such modifications, changes, or substitutions do not cause the essence of the corresponding technical solution to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be covered by the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be defined by the claims.

Claims (10)

  1. A from-zoom image sequence feature extraction method, characterized by comprising:
    generating a from-zoom image sequence set from collected from-zoom images;
    initializing a control variable to control the position of each from-zoom sequence sub-image;
    performing nonlinear slowly-varying feature extraction on the from-zoom image specified by the control variable, constructing intra-layer slowly-varying features and a slowly-varying feature forest, and generating the intra-layer slowly-varying visual feature network corresponding to the from-zoom image, wherein the intra-layer slowly-varying visual feature network comprises the intra-layer slowly-varying visual feature network of layer K and that of layer K+1;
    judging whether the control variable meets the requirement; if it does, returning to the process expressed in the preceding paragraph and computing the intra-layer slowly-varying visual feature network of the next adjacent layer; otherwise, when the requirement is not met, proceeding to the process expressed in the following paragraph and generating the inter-layer feature network from the intra-layer slowly-varying visual feature networks;
    establishing the inter-layer connections between all elements of the intra-layer slowly-varying visual feature network of layer K and those of layer K+1;
    training the inter-layer network according to a training set and a custom training rule, constructing slowly-varying features with the pooling method of deep learning, and extracting feature vectors into a feature database;
    defining a custom classification rule and a new-class generation rule, and performing classification recognition on an image to be recognized or generating a new class.
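Read as a processing pipeline, claim 1 can be summarised by the skeleton below. It is an illustrative outline only: every function name is hypothetical, and the parts the claim leaves abstract (the slowly-varying intra-layer construction, the inter-layer training, the classification rules) are passed in as placeholder callables rather than implemented.

```python
from typing import Callable, List, Sequence

def extract_from_zoom_features(
    images: Sequence,                      # collected from-zoom image sequence set
    query_images: Sequence,                # images to be recognized
    build_intralayer_network: Callable,    # intra-layer slowly-varying feature network (cf. claim 2)
    connect_adjacent_layers: Callable,     # inter-layer connections between layers K and K+1
    train_interlayer_network: Callable,    # training plus pooling; returns the feature database (cf. claim 4)
    classify_or_create_class: Callable,    # classification / new-class rule (cf. claim 6)
    num_layers: int,
) -> List:
    """Illustrative skeleton of the overall method of claim 1; names are hypothetical."""
    layer_networks = []
    for k in range(num_layers):            # the control variable walks over sub-image positions
        layer_networks.append(build_intralayer_network(images, k))
    interlayer = [
        connect_adjacent_layers(layer_networks[k], layer_networks[k + 1])
        for k in range(num_layers - 1)
    ]
    feature_db = train_interlayer_network(interlayer)
    return [classify_or_create_class(img, feature_db) for img in query_images]
```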
  2. The from-zoom image sequence feature extraction method according to claim 1, characterized in that performing nonlinear slowly-varying feature extraction on the from-zoom image specified by the control variable, constructing intra-layer slowly-varying features and a slowly-varying feature forest, and generating the intra-layer slowly-varying visual feature network corresponding to the from-zoom image comprises:
    sequentially selecting the from-zoom images with the control variable as the primary order;
    performing random sampling on the from-zoom image to obtain an initial data set for slowly-varying feature analysis, and applying a geometric-invariance transformation to the initial data set so as to extend it;
    serializing and normalizing the extended initial data set;
    extracting the nonlinear principal-component features of the natural image sub-sequence with a nonlinear principal component analysis algorithm, and generating a nonlinear basis set;
    performing Markov chain Monte Carlo (MCMC) nonlinear random extension on the nonlinear basis set to obtain an extended nonlinear basis set;
    pruning the extended nonlinear basis set with a custom near-orthogonal pruning method to obtain an approximately orthogonal nonlinear basis set;
    whitening each element of the approximately orthogonal nonlinear basis set;
    fitting the receptive-field parameters of each nonlinear basis element of the whitened, approximately orthogonal basis set with a Gabor fitting algorithm;
    defining a first rule, constructing the intra-layer slowly-varying visual topological set of the bases according to the first rule, establishing connection edges according to the visual-selectivity theory of the receptive field, and generating the intra-layer slowly-varying visual feature network.
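The data-preparation half of claim 2 (random sampling, geometric-invariance extension, serialization and normalization, nonlinear principal components) can be sketched as follows. The sketch assumes the geometric-invariance transformation is limited to rotations and flips and approximates the nonlinear principal component analysis with scikit-learn's `KernelPCA`; the patent specifies neither choice.

```python
import numpy as np
from sklearn.decomposition import KernelPCA  # scikit-learn assumed available

def augment_with_geometric_invariance(patches):
    """Extend sampled patches with rotations and flips, one possible
    geometric-invariance transformation (the patent does not fix the set)."""
    extended = []
    for p in patches:
        for k in range(4):                       # 0, 90, 180, 270 degree rotations
            r = np.rot90(p, k)
            extended.extend([r, np.fliplr(r)])
    return extended

def serialize_and_normalize(patches):
    """Flatten each patch to a vector and normalize it to zero mean and unit norm."""
    X = np.array([p.ravel().astype(float) for p in patches])
    X -= X.mean(axis=1, keepdims=True)
    X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    return X

def nonlinear_basis_features(X, n_components=32):
    """Approximate the nonlinear principal-component step with kernel PCA."""
    return KernelPCA(n_components=n_components, kernel="rbf").fit_transform(X)

# Usage with placeholder patches sampled from a from-zoom image:
# patches = [np.random.rand(16, 16) for _ in range(50)]
# X = serialize_and_normalize(augment_with_geometric_invariance(patches))
# feats = nonlinear_basis_features(X)
```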
  3. The from-zoom image sequence feature extraction method according to claim 2, characterized in that performing MCMC nonlinear random extension on the nonlinear basis set to obtain the extended nonlinear basis set comprises:
    computing the distribution density of the nonlinear bases of the elements in the nonlinear basis set according to the law of large numbers;
    computing, with a histogram method, the distribution regions of the nonlinear basis distribution density in descending order, and adaptively selecting the most densely distributed region according to a first criterion;
    applying the MCMC algorithm to each selected region of the descending distribution regions, and predicting and generating a new basis-function sequence;
    adding the new basis-function sequence to the nonlinear basis set to obtain the extended nonlinear basis set, so as to supplement the information lost through image sampling.
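Claim 3 states the MCMC extension abstractly; one way such a step could look is sketched below. The summary statistic used to build the histogram (the basis-element norm), the Gaussian-kernel density around the densest bin, and the random-walk Metropolis proposals are all assumptions of this sketch rather than details taken from the patent.

```python
import numpy as np

def extend_basis_with_mcmc(basis, n_new=10, steps=200, step_size=0.1, bandwidth=1.0, rng=None):
    """Sketch of an MCMC basis-set extension in the spirit of claim 3.

    Each basis element (a row of `basis`) is summarised by its norm, the densest
    histogram bin of that summary is selected, and a random-walk Metropolis sampler
    targeting a kernel-density estimate of the bases in that bin proposes new basis
    functions.  All modelling choices here are illustrative assumptions.
    """
    rng = rng or np.random.default_rng()
    summary = np.linalg.norm(basis, axis=1)                 # empirical density via the law of large numbers
    counts, edges = np.histogram(summary, bins=10)
    dense = np.argmax(counts)                               # most densely populated region
    in_bin = basis[(summary >= edges[dense]) & (summary <= edges[dense + 1])]

    def density(x):                                          # unnormalised KDE around bases in the bin
        d = np.linalg.norm(in_bin - x, axis=1)
        return np.exp(-0.5 * (d / bandwidth) ** 2).sum()

    current = in_bin[rng.integers(len(in_bin))].copy()
    new_bases, p_current = [], density(current)
    for step in range(steps):
        proposal = current + step_size * rng.standard_normal(current.shape)
        p_prop = density(proposal)
        if rng.random() < p_prop / (p_current + 1e-12):      # Metropolis acceptance
            current, p_current = proposal, p_prop
        if step % (steps // n_new) == 0:
            new_bases.append(current.copy())
    return np.vstack([basis, np.array(new_bases)])           # extended nonlinear basis set
```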
  4. The from-zoom image sequence feature extraction method according to claim 1, characterized in that training the inter-layer network according to the training set and the custom training rule, constructing slowly-varying features with the pooling method of deep learning, and extracting feature vectors into the feature database comprises:
    providing a primary order and a secondary order, and defining and initializing a first three-dimensional array according to the primary order and the secondary order, wherein the first three-dimensional array includes a training set identifier;
    performing a custom convolution calculation between each intra-layer sub-network and the training set to obtain the maximum convolution value of each sub-network and the sub-basis set contained in the corresponding sub-network, and saving them into a second three-dimensional array;
    sorting the second three-dimensional array in descending order with the maximum convolution value as the primary order, adaptively computing the distribution law of the second three-dimensional array with the maximum convolution value as the index, truncating the second three-dimensional array, and generating the subtrees, at each hierarchical level of the from-zoom image sequence generation layer, that participate in network generation;
    networking adjacent layers according to a second criterion;
    judging whether the control variable meets the requirement;
    if the requirement is not met, modifying the value of the first three-dimensional array according to the first rule;
    judging whether the training set identifier meets the requirement;
    if it does, performing the custom convolution calculation and generating a new training subset of images of the same class to complete a new training process;
    if it does not, outputting the feature vector to the feature database according to the first three-dimensional array;
    repeating until the training of all classes is finished, and outputting the visual three-slow-variation training network.
  5. The from-zoom image sequence feature extraction method according to claim 4, characterized in that performing the custom convolution calculation between each intra-layer sub-network and the training set comprises:
    computing the size of each sub-network in the layer;
    intercepting a convolution window of that size starting from the head element of the training set, computing the convolution of the training set with the convolution window, and storing the convolution values in a temporary array;
    sampling step by step and performing the convolution calculation, appending the convolution value computed for each sub-network to the temporary array;
    computing the maximum of the convolution values in the temporary array, and writing the maximum into the second three-dimensional array.
  6. The from-zoom image sequence feature extraction method according to claim 1, characterized in that performing classification recognition on the image to be recognized or generating a new class comprises:
    generating a sequence image set from the image to be recognized by an incompletely-overlapping random sampling method;
    taking the sequence image set as input, convolving it with each subtree of the i-th layer of the visual three-slow-variation training network, computing the sample mean and variance of the global convolution values, and computing, according to the first rule, the number of times each subtree is responded to by the training set, wherein i is the level of the visual three-slow-variation training network;
    determining the set of layer-i subtrees to be convolved with the sequence image set, according to the subtrees of layer i-1 corresponding to the non-zero elements of the response counts and the determined inter-layer mapping relation of layer i;
    judging whether the control variable i meets the requirement;
    if it does not, extracting the features of the image to be recognized into a third three-dimensional array according to the first rule, computing, according to the second rule, the distance over the last two dimensions between the third three-dimensional array and the first three-dimensional array, and outputting several classifications ordered by the magnitude of their probability values;
    judging whether the probability distribution of the classifications is uniform and the degree of association is small, and if so, setting the image to be recognized as a new class and calling the training sub-algorithm to perform new-class training.
  7. A from-zoom image sequence feature extraction device, characterized by comprising:
    a collecting unit, configured to generate a from-zoom image sequence set from collected from-zoom images;
    an initialization unit, configured to initialize a control variable to control the position of each from-zoom sequence sub-image;
    an extraction unit, configured to perform nonlinear slowly-varying feature extraction on the from-zoom image specified by the control variable, construct intra-layer slowly-varying features and a slowly-varying feature forest, and generate the intra-layer slowly-varying visual feature network corresponding to the from-zoom image, wherein the intra-layer slowly-varying visual feature network comprises the intra-layer slowly-varying visual feature network of layer K and that of layer K+1;
    a judging unit, configured to judge whether the control variable meets the requirement; when it does, the extraction unit constructs the intra-layer slowly-varying visual feature network corresponding to the next from-zoom image; otherwise, when the requirement is not met, the establishing unit generates the inter-layer feature network from the intra-layer slowly-varying visual feature networks;
    an establishing unit, configured to establish the inter-layer connections between all elements of the intra-layer slowly-varying visual feature network of layer K and those of layer K+1;
    a training unit, configured to define a custom training rule, train the inter-layer network according to a training set, and extract feature vectors into a feature database;
    a recognition unit, configured to establish a custom classification method and new-class generation conditions, and to recognize a provided unknown class with the trained network or add a new class.
  8. The from-zoom image sequence feature extraction device according to claim 7, characterized in that the extraction unit is configured to:
    sequentially select the from-zoom images with the control variable as the primary order;
    perform random sampling on the from-zoom image to obtain an initial data set for slowly-varying feature analysis, and apply a geometric-invariance transformation to the initial data set so as to extend it;
    serialize and normalize the extended initial data set;
    extract the nonlinear principal-component slowly-varying features of the natural image sub-sequence with a nonlinear principal component analysis algorithm, and generate a nonlinear basis set;
    perform Markov chain Monte Carlo (MCMC) nonlinear random extension on the nonlinear basis set to obtain an extended nonlinear basis set;
    prune the extended nonlinear basis set with a custom near-orthogonal pruning method to obtain an approximately orthogonal nonlinear basis set;
    whiten each element of the approximately orthogonal nonlinear basis set;
    fit the receptive-field parameters of each nonlinear basis element of the whitened, approximately orthogonal basis set with a Gabor fitting algorithm;
    define a first rule, construct the intra-layer slowly-varying visual topological set of the bases according to the first rule, and establish connection edges according to the visual-selectivity theory of the receptive field.
  9. The from-zoom image sequence feature extraction device according to claim 7, characterized in that the training unit is further configured to:
    provide a primary order and a secondary order, and define and initialize a first three-dimensional array according to the primary order and the secondary order, wherein the first three-dimensional array includes a training set identifier;
    perform a custom convolution calculation between each intra-layer sub-network and the training set to obtain the maximum convolution value of each sub-network and the sub-basis set contained in the corresponding sub-network, and save them into a second three-dimensional array;
    sort the second three-dimensional array in descending order with the maximum convolution value as the primary order, adaptively compute the distribution law of the second three-dimensional array with the maximum convolution value as the index, truncate the second three-dimensional array, and generate the subtrees, at each hierarchical level of the from-zoom image sequence generation layer, that participate in network generation;
    network adjacent layers according to a second criterion;
    judge whether the control variable meets the requirement;
    if the requirement is not met, modify the value of the first three-dimensional array according to the first rule;
    judge whether the training set identifier meets the requirement;
    if it does, perform the custom convolution calculation and generate a new training subset of images of the same class to complete a new training process;
    if it does not, output the feature vector to the feature database according to the first three-dimensional array;
    repeat until the training of all classes is finished, and output the visual three-slow-variation training network.
  10. The from-zoom image sequence feature extraction device according to claim 7, characterized in that the recognition unit is configured to:
    generate a sequence image set from the image to be recognized by an incompletely-overlapping random sampling method;
    take the sequence image set as input, convolve it with each subtree of the i-th layer of the visual three-slow-variation training network, compute the sample mean and variance of the global convolution values, and compute, according to the first rule, the number of times each subtree is responded to by the training set, wherein i is the level of the visual three-slow-variation training network;
    determine the set of layer-i subtrees to be convolved with the sequence image set, according to the subtrees of layer i-1 corresponding to the non-zero elements of the response counts and the determined inter-layer mapping relation of layer i;
    judge whether the control variable i meets the requirement;
    if it does not, extract the features of the image to be recognized into a third three-dimensional array according to the first rule, compute, according to the second rule, the distance over the last two dimensions between the third three-dimensional array and the first three-dimensional array, and output several classifications ordered by the magnitude of their probability values;
    judge whether the probability distribution of the classifications is uniform and the degree of association is small, and if so, set the image to be recognized as a new class and call the training sub-algorithm to perform new-class training.
CN201711155771.8A 2017-09-05 2017-11-20 Method and device for extracting characteristics of off-zoom image sequence Active CN107798331B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710789723 2017-09-05
CN2017107897238 2017-09-05

Publications (2)

Publication Number Publication Date
CN107798331A true CN107798331A (en) 2018-03-13
CN107798331B CN107798331B (en) 2021-11-26

Family

ID=61536218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711155771.8A Active CN107798331B (en) 2017-09-05 2017-11-20 Method and device for extracting characteristics of off-zoom image sequence

Country Status (1)

Country Link
CN (1) CN107798331B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140037172A1 (en) * 2011-01-13 2014-02-06 Rutgers, The State University Of New Jersey Enhanced multi-protocol analysis via intelligent supervised embedding (empravise) for multimodal data fusion
CN102510497A (en) * 2011-10-18 2012-06-20 清华大学 Method and device for quality-scalable encoding of three-dimensional meshes based on hierarchical quantization
CN105469061A (en) * 2015-08-04 2016-04-06 电子科技大学中山学院 Topographic feature line extraction method and device
CN106691378A (en) * 2016-12-16 2017-05-24 深圳市唯特视科技有限公司 Deep learning vision classifying method based on electroencephalogram data
CN107066553A (en) * 2017-03-24 2017-08-18 北京工业大学 A short text classification method based on a convolutional neural network and a random forest

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MAGHSOUDI Y et al.: "Speckle reduction for the forest mapping analysis of multi-temporal Radarsat-1 images", International Journal of Remote Sensing *
王晓晓: "Research on feature extraction and recognition of facial images based on topological structure", China Master's Theses Full-text Database, Information Science and Technology *
赵彦明 et al.: "Feature extraction algorithms based on complex visual information of natural images and their applications", Computer Applications and Software *
陈雄: "Random forest expression recognition based on sequence features", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109389043A (en) * 2018-09-10 2019-02-26 中国人民解放军陆军工程大学 A crowd density estimation method for unmanned aerial vehicle images
CN109389043B (en) * 2018-09-10 2021-11-23 中国人民解放军陆军工程大学 Crowd density estimation method for aerial picture of unmanned aerial vehicle
CN111967585A (en) * 2020-09-25 2020-11-20 深圳市商汤科技有限公司 Network model processing method and device, electronic equipment and storage medium
CN111967585B (en) * 2020-09-25 2022-02-22 深圳市商汤科技有限公司 Network model processing method and device, electronic equipment and storage medium
CN112327701A (en) * 2020-11-09 2021-02-05 浙江大学 Slow characteristic network monitoring method for nonlinear dynamic industrial process
CN112327701B (en) * 2020-11-09 2021-11-02 浙江大学 Slow characteristic network monitoring method for nonlinear dynamic industrial process

Also Published As

Publication number Publication date
CN107798331B (en) 2021-11-26

Similar Documents

Publication Publication Date Title
Zhang et al. Hyperspectral classification based on lightweight 3-D-CNN with transfer learning
CN110348399B (en) Hyperspectral intelligent classification method based on prototype learning mechanism and multidimensional residual error network
Ghaseminezhad et al. A novel self-organizing map (SOM) neural network for discrete groups of data clustering
Pedrycz et al. Linguistic interpretation of self-organizing maps
CN108304826A (en) Facial expression recognizing method based on convolutional neural networks
CN108319957A (en) A large-scale point cloud semantic segmentation method based on a super-point graph
CN106537422A (en) Systems and methods for capture of relationships within information
CN109063724A (en) An enhanced generative adversarial network and a target sample recognition method
CN109214503B (en) Power transmission and transformation project cost prediction method based on KPCA-LA-RBM
CN104268593A (en) Multiple-sparse-representation face recognition method for solving small sample size problem
CN109446889A (en) Object tracking method and device based on twin matching network
CN109740734B (en) Image classification method of convolutional neural network by optimizing spatial arrangement of neurons
CN109034231A (en) A fuzzy clustering method for incomplete data based on information-feedback RBF network estimation
CN107798331A (en) From zoom image sequence characteristic extracting method and device
CN108009575A (en) A community discovery method for complex networks
CN103942571A (en) Graphic image sorting method based on genetic programming algorithm
CN110309835A (en) An image local feature extraction method and device
CN110866134A (en) Image retrieval-oriented distribution consistency keeping metric learning method
Pal et al. Deep learning for network analysis: problems, approaches and challenges
Li et al. GoT: A growing tree model for clustering ensemble
Liao et al. Image segmentation based on deep learning features
CN111340133A (en) Image classification processing method based on deep convolutional neural network
CN108596118B (en) Remote sensing image classification method and system based on artificial bee colony algorithm
CN102779241B (en) PPI (Point-Point Interaction) network clustering method based on artificial swarm reproduction mechanism
CN109740672A (en) Multi-stream feature distance fusion system and fusion method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant