CN109858496A - An image feature extraction method based on weighted depth features - Google Patents

An image feature extraction method based on weighted depth features

Info

Publication number
CN109858496A
CN109858496A
Authority
CN
China
Prior art keywords
feature
image
depth
depth feature
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910045648.3A
Other languages
Chinese (zh)
Inventor
刘文印
王崎
康培培
徐凯
杨振国
谈季
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201910045648.3A priority Critical patent/CN109858496A/en
Publication of CN109858496A publication Critical patent/CN109858496A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an image feature extraction method based on weighted depth features. The method comprises the following steps: selecting an existing network model and pre-training it on an image data set, then removing the prediction layer from the network model to obtain the final network model; inputting the image to be processed into the final network model for a forward pass, and extracting the convolutional layer before each pooling layer in the final network model as a depth feature map of the image; computing a feature aggregation vector for each convolutional layer, and applying square-root and zero-mean normalization to each aggregation vector to obtain depth feature vectors; reducing the dimensionality of all depth feature vectors, and assigning each reduced vector a corresponding weight parameter that serves as the index of its convolutional layer; and fusing the weighted depth feature vectors to obtain the depth image feature. The method is broadly applicable, and the features it extracts are robust, expressive, semantically rich, and easy to compute.

Description

An image feature extraction method based on weighted depth features
Technical field
The present invention relates to the field of image processing, and more particularly to an image feature extraction method based on weighted depth features.
Background technique
In recent years, with the explosion of image big data, technologies such as image recognition, retrieval, classification, localization, and detection have developed tremendously, and in all of these technologies image feature extraction is the key step. The quality of the final image task depends largely on the quality of the extracted features; how to extract better image features is therefore the main research direction of the field. Traditional image classification, localization, and retrieval tasks typically rely on basic image features such as color and texture, which cannot cope with scale changes, occlusion, illumination, affine transformations, and similar problems. The subsequent appearance of feature extraction algorithms such as SIFT solved these problems, and those algorithms achieved very good results on some small data sets. In the current big-data era, however, traditional image retrieval methods built on SIFT-style features (such as VLAD, FV, and bag-of-words) have become impractical, while the advantages of deep-learning-based methods stand out, obtaining excellent results.
In recent years, deep learning has achieved major breakthroughs across computer vision tasks. One important factor is its powerful non-linear representation ability, which captures deeper image information, so it has developed especially rapidly in the field of image feature extraction. With the continual evolution of network architectures and the appearance of new data sets, feature extraction methods have diversified. These methods usually require fine-tuning on a particular data set, after which the trained network is used directly for feature extraction. This demands researchers with strong skills in optimizing network structures and tuning parameters. For image big data, moreover, labeling is usually impossible, and features need to be extracted from it directly.
At present, feature extraction methods for image big data have poor generality, are computationally complex, and produce features with low robustness. How to find a general deep-learning-based image feature extraction method is therefore the main problem that those skilled in the art need to solve.
Summary of the invention
To overcome the poor generality, computational complexity, and low feature robustness of the prior-art image big-data feature extraction methods described above, the present invention provides an image feature extraction method based on weighted depth features.
The present invention aims to solve the above technical problem at least to some extent.
The primary purpose of the invention is to provide an image feature extraction method based on weighted depth features.
To solve the above technical problem, the technical scheme of the present invention is as follows:
An image feature extraction method based on weighted depth features, the method comprising the following steps:
S1: selecting an existing network model and pre-training it on an image data set, then removing the prediction layer from the pre-trained network model to obtain the final network model;
S2: inputting the image to be processed directly into the final network model for a forward pass, then extracting the convolutional layer before each pooling layer in the final network model as a depth feature map of the image; the convolutional layer before each pooling layer corresponds to several single-layer feature maps, and the depth feature maps comprise the single-layer feature maps of the convolutional layers before all pooling layers;
S3: computing a feature aggregation vector for each convolutional layer in the depth feature maps using dual-channel components, and normalizing each convolutional layer's feature aggregation vector to obtain the depth feature vector of the corresponding convolutional layer;
S4: reducing the dimensionality of the depth feature vectors of all convolutional layers, and assigning a corresponding weight parameter to each reduced depth feature vector, the weight parameter serving as the index of its convolutional layer;
S5: fusing the reduced, weighted image depth feature vectors as the final depth image feature.
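The data flow of steps S2 through S5 can be sketched as follows. This is a minimal illustration, not the invention's exact computation: the dual-channel aggregation of step S3 is replaced by a simple channel sum, and the PCA of step S4 by a fixed random projection, so only the shapes and the order of operations match the steps above.

```python
import numpy as np

def extract_weighted_feature(feature_maps, weights, reduce_dim):
    """Sketch of S2-S5 on already-extracted pre-pool conv maps.

    feature_maps: one (n_i, n_i, k_i) array per convolutional layer (S2).
    weights: one scalar per layer, summing to 1 (S4).
    """
    rng = np.random.default_rng(0)
    vectors = []
    for fmap, w in zip(feature_maps, weights):
        v = fmap.sum(axis=(0, 1))       # stand-in for dual-channel aggregation (S3)
        v = np.sqrt(np.abs(v))          # square-root step (S3)
        v = v - v.mean()                # zero-mean normalization (S3)
        proj = rng.standard_normal((v.size, reduce_dim))
        vectors.append(w * (v @ proj))  # reduce and weight (S4)
    return np.concatenate(vectors)      # fuse into the depth image feature (S5)

maps = [np.random.rand(56, 56, 64), np.random.rand(28, 28, 128)]
feat = extract_weighted_feature(maps, [0.4, 0.6], 32)
print(feat.shape)  # (64,)
```

The final feature length is the number of layers times the reduced dimension, here 2 × 32.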
Further, the image data set used for pre-training in step S1 is an existing data set or the data set of images to be processed. The existing data set is the ImageNet image data set, the COCO image data set, or the VOC image data set.
Further, the existing model is a classification network model or a localization-and-detection network model.
Further, in step S2 the image to be processed is input directly into the network model for the forward pass; the image to be processed is the original image and undergoes no image preprocessing.
Further, the detailed procedure of computing, in step S3, a feature aggregation vector for each convolutional layer in the depth feature maps using dual-channel components is as follows:
S3.1: computing the transverse response map of the depth feature maps of the image; the size of the transverse response map is (n, n), where n is the height and width of a single-layer feature map; the transverse response superimposes all single-layer feature maps in the depth feature map position-wise to obtain the lateral aggregation feature map, and the transverse response map is then computed according to the transverse response weight formula;
The transverse response weight formula is as follows:
where L_ij is the transverse response weight, L'_ij is the sum over all single-layer feature maps of all convolutional layers, and (i, j) and (m, n) are position coordinates of data points within the convolutional layer;
S3.2: computing the vertical response map of the depth feature maps of the image; the size of the vertical response map is (1, k); the vertical response unfolds the channel feature layers of the depth feature map, and the vertical response map is computed according to the vertical response weight formula;
The vertical response weight formula is as follows:
where P_j is the vertical response weight of the j-th channel, i is the channel index, and k is the total number of channels; x_j is the number of non-zero entries in the depth feature map, and y_j is the number of zero entries in the depth feature map;
S3.3: multiplying the depth feature map element-wise with the transverse response map and the vertical response map, specifically:
the (n, n, k) depth feature map is dotted with the (n, n) transverse response map to obtain a (1, k) transverse feature vector, which is then dotted element-wise with the (1, k) vertical response map, yielding a feature aggregation vector of size (1, k).
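The dual-channel aggregation of S3.1 through S3.3 can be sketched in NumPy. The transverse and vertical weight formulas themselves are not reproduced in this text, so the two normalizations below (a spatial sum-to-one weighting, and a per-channel non-zero ratio) are assumed forms chosen only to match the stated shapes and ingredients (L'_ij, x_j, y_j), not the published equations.

```python
import numpy as np

def dual_channel_aggregate(fmap):
    """Dual-channel aggregation (S3.1-S3.3) for one (n, n, k) conv layer."""
    n, _, k = fmap.shape
    # S3.1: superimpose the k single-layer maps into one (n, n) map L'_ij,
    # then normalize over positions to get the transverse response L_ij (assumed form).
    lateral = fmap.sum(axis=2)
    transverse = lateral / max(lateral.sum(), 1e-12)
    # S3.2: vertical response P_j per channel from x_j non-zero entries
    # out of the n*n entries of channel j (assumed form).
    x = (fmap != 0).sum(axis=(0, 1))
    vertical = x / (n * n)
    # S3.3: dot the depth feature map with the transverse map -> (1, k),
    # then dot element-wise with the vertical response -> (1, k).
    lateral_vec = np.einsum('ijc,ij->c', fmap, transverse)
    return lateral_vec * vertical

agg = dual_channel_aggregate(np.random.rand(7, 7, 512))
print(agg.shape)  # (512,)
```

The result has one entry per channel, matching the (1, k) aggregation vector described above.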
Further, the normalization in step S3 is zero-mean normalization, with the following formula:
where μ is the mean over all feature aggregation vectors, X is the feature aggregation vector to be processed, V is the data after zero-mean normalization, N is the total number of samples, and x_i is a sample to be processed.
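The square-root step followed by zero-mean normalization can be sketched as below. As the text states, μ is taken as the mean over all aggregation vectors; whether the published formula also divides by a spread term is not visible here, so only the mean subtraction is applied.

```python
import numpy as np

def sqrt_zero_mean(vectors):
    """Square-root each aggregation vector, then subtract the mean mu
    computed over all vectors (step S3)."""
    roots = [np.sqrt(np.abs(v)) for v in vectors]  # post-ReLU features are non-negative
    mu = np.mean(np.concatenate(roots))            # mu over all aggregation vectors
    return [r - mu for r in roots]

vs = [np.random.rand(8) for _ in range(4)]
normed = sqrt_zero_mean(vs)
print(abs(float(np.concatenate(normed).mean())) < 1e-9)  # True
```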
Further, in step S3 each convolutional layer's feature aggregation vector undergoes a square-root operation before the normalization.
Further, the dimensionality reduction of the depth feature vectors of all convolutional layers in step S4 uses principal component analysis or linear discriminant analysis (Fisher's method).
Further, in step S4 each reduced depth feature vector is assigned a corresponding weight parameter, and the weight parameter of a convolutional layer later in the order is greater than or equal to that of an earlier one;
The weight parameters satisfy the relation:
w1 ≤ w2 ≤ ... ≤ wn
where w denotes a weight parameter and n the convolutional layer index.
Further, during image recognition the convolutional-layer weight values can be determined according to the image task, and the sum of all weight values equals 1.
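One simple scheme satisfying both constraints, non-decreasing weights over layer order and a sum of 1, is a linear ramp. This is an illustrative choice only; the text leaves the exact values to the image task.

```python
def layer_weights(n):
    """Weights w1 <= w2 <= ... <= wn with sum 1, growing linearly with
    layer depth (one possible choice, not prescribed by the text)."""
    total = n * (n + 1) // 2          # 1 + 2 + ... + n
    return [i / total for i in range(1, n + 1)]

w = layer_weights(5)
assert abs(sum(w) - 1.0) < 1e-9
assert all(a <= b for a, b in zip(w, w[1:]))
```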
Compared with the prior art, the technical scheme of the present invention has the following beneficial effects:
The method of the present invention inputs the original image directly into the trained network model, and the image features obtained are semantically rich and robust. The method requires no preprocessing of the image to be processed and can extract image features directly, so it has good generality. It is also convenient to compute and easy to implement.
Description of the drawings
Fig. 1 is a flowchart of the method of the present invention.
Fig. 2 is a schematic diagram of the feature aggregation vector produced by dual-channel component aggregation.
Fig. 3 is a schematic diagram of the characterization of image feature vectors in the present invention.
Specific embodiment
The accompanying drawings are for illustrative purposes only and shall not be construed as limiting the patent;
To better illustrate the embodiments, certain components in the drawings may be omitted, enlarged, or reduced, and do not represent the size of the actual product;
It will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The technical solution of the present invention is further described below with reference to the drawings and embodiments.
Embodiment 1
As shown in Fig. 1, an image feature extraction method based on weighted depth features comprises the following steps:
S1: the existing VGG16 image classification network is selected and pre-trained on the ImageNet image data set, and the softmax layer and fully connected layers are removed from the pre-trained network model to obtain the final VGG16 image classification network model. (The existing model may be a classification network model or a localization-and-detection network model, chosen according to the specific image task. The image data set used for pre-training is an existing common data set or the data set of images to be processed.)
S2: the image to be processed is first input directly into the VGG16 image classification network model with the softmax and fully connected layers removed, and a forward pass is computed; then the convolutional layer before each pooling layer in the VGG16 image classification network model is extracted as a depth feature map of the image. The convolutional layer before each pooling layer corresponds to several single-layer feature maps, and the depth feature maps comprise the single-layer feature maps of the convolutional layers before all pooling layers.
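For VGG16, step S2 taps one convolutional layer per pooling stage, giving five depth feature maps. The sketch below uses the conventional conv<stage>_<idx> layer names (an assumed naming, not taken from this text) to show which layers are kept after the prediction head is dropped.

```python
# VGG16 layer order: five conv stages, each ended by a pooling layer,
# followed by the prediction head removed in step S1.
VGG16_LAYERS = [
    "conv1_1", "conv1_2", "pool1",
    "conv2_1", "conv2_2", "pool2",
    "conv3_1", "conv3_2", "conv3_3", "pool3",
    "conv4_1", "conv4_2", "conv4_3", "pool4",
    "conv5_1", "conv5_2", "conv5_3", "pool5",
    "fc6", "fc7", "fc8", "softmax",
]

def tap_points(layers):
    """Drop the prediction head (fc/softmax layers, step S1), then return
    the conv layer immediately before each pooling layer (step S2)."""
    body = [l for l in layers if not (l.startswith("fc") or l == "softmax")]
    return [body[i - 1] for i, l in enumerate(body) if l.startswith("pool")]

print(tap_points(VGG16_LAYERS))
# ['conv1_2', 'conv2_2', 'conv3_3', 'conv4_3', 'conv5_3']
```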
S3: a feature aggregation vector is computed for each convolutional layer in the depth feature maps using dual-channel components, and each layer's aggregation vector undergoes square-root and zero-mean normalization in turn to obtain the depth feature vector of the corresponding convolutional layer.
As shown in Fig. 2, the feature aggregation vector of each convolutional layer is computed as follows:
The size of a depth feature map is (n, n, k).
S3.1: computing the transverse response map of the depth feature maps of the image; the size of the transverse response map is (n, n), where n is the height and width of a single-layer feature map; the transverse response superimposes all single-layer feature maps in the depth feature map position-wise to obtain the lateral aggregation feature map, and the transverse response map is then computed according to the transverse response weight formula;
The transverse response weight formula is as follows:
where L_ij is the transverse response weight, L'_ij is the sum over all single-layer feature maps of all convolutional layers, and (i, j) and (m, n) are position coordinates of data points within the convolutional layer;
S3.2: computing the vertical response map of the depth feature maps of the image; the size of the vertical response map is (1, k); the vertical response unfolds the channel feature layers of the depth feature map, and the vertical response map is computed according to the vertical response weight formula;
The vertical response weight formula is as follows:
where P_j is the vertical response weight of the j-th channel, i is the channel index, and k is the total number of channels; x_j is the number of non-zero entries in the depth feature map, and y_j is the number of zero entries in the depth feature map;
S3.3: multiplying the depth feature map element-wise with the transverse response map and the vertical response map, specifically:
the (n, n, k) depth feature map is dotted with the (n, n) transverse response map to obtain a (1, k) transverse feature vector, which is then dotted element-wise with the (1, k) vertical response map, yielding a feature aggregation vector of size (1, k).
The feature aggregation vectors of the convolutional layers obtained by steps S3.1-S3.3 are denoted {f1, f2, ..., fn}, where the dimension of each aggregation vector equals the channel size of its convolutional layer.
S4: principal component analysis (or linear discriminant analysis) is applied to reduce the dimensionality of the depth feature vectors of all convolutional layers, and each reduced depth feature vector is assigned a corresponding weight parameter, which serves as the index of its convolutional layer.
The feature aggregation vectors {f1, f2, ..., fn} undergo the square-root operation and are then normalized, yielding the feature aggregation vectors {v1, v2, ..., vn} to be reduced; the processing formula is:
where μ is the mean over all feature aggregation vectors, X is the feature aggregation vector to be processed, V is the data after zero-mean normalization, N is the total number of samples, and x_i is a sample to be processed.
The PCA dimensionality reduction proceeds as follows:
Taking v1 = (k1, k2, ..., kn) as an example, the vector obtained after reduction is P1:
Step 1: average v1 to compute the offset of the feature aggregation vector.
Step 2: compute the main components of the feature aggregation vector, i.e. the covariance matrix C:
Step 3: compute the eigenvalues μ1, μ2, ..., μn and eigenvectors β1, β2, ..., βn of the covariance matrix.
Step 4: the reduced vector is P1 = (β1, β2, ..., βn) * v1, i.e. the projection of v1 onto the leading eigenvectors.
The reduced feature aggregation vectors of v1 through vn are obtained by following steps 1 through 4 and are denoted P = (P1, P2, ..., Pn).
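The four PCA steps above can be sketched with NumPy. Since PCA needs multiple samples to estimate a covariance, this sketch assumes the aggregation vectors of a batch of images are stacked row-wise (an assumption about how the projection is fit, which the text does not spell out); the projection uses the leading eigenvectors of the covariance matrix.

```python
import numpy as np

def pca_reduce(X, dim):
    """PCA reduction of stacked aggregation vectors (step S4 sketch).

    X: (num_samples, feature_dim). Returns (num_samples, dim).
    """
    X = np.asarray(X, dtype=float)
    X = X - X.mean(axis=0)                # step 1: subtract the mean offset
    C = np.cov(X, rowvar=False)           # step 2: covariance matrix C
    eigvals, eigvecs = np.linalg.eigh(C)  # step 3: eigenvalues / eigenvectors
    order = np.argsort(eigvals)[::-1][:dim]
    return X @ eigvecs[:, order]          # step 4: project onto leading eigenvectors

P = pca_reduce(np.random.rand(20, 8), 3)
print(P.shape)  # (20, 3)
```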
Corresponding weights {w1, w2, ..., wn} are assigned to the reduced feature aggregation vectors; normally the weight parameter of a convolutional layer later in the order is greater than or equal to that of an earlier one;
The weight parameters satisfy the relation:
w1 ≤ w2 ≤ ... ≤ wn
where w denotes a weight parameter and n the convolutional layer index, and the weights satisfy the following condition:
w1 + w2 + ... + wn = 1
The final feature vector can then be expressed as (w1*P1, w2*P2, ..., wn*Pn).
As shown in Fig. 3, S5: the reduced, weighted image depth feature vectors are fused as the final depth image feature.
Weight values are assigned according to the specific image task. Taking image retrieval as an example, the specific range of the weights is found as follows:
In an image query, the feature representation of each image in the image database is {P1, P2, ..., Pn}_num (num is the database index), and the feature representation of an image to be queried is {Q1, Q2, ..., Qn}_q. During the query, the distance of the query image to each image in the database is computed, usually as the cosine distance:
S = P·Q / (|P| |Q|)
For the query image and an image in the database, the distance of each feature layer Qn and Pn can be computed:
sn = Pn·Qn / (|Pn| |Qn|)
If the computed per-layer distances are {s1, s2, ..., sn}, each distance is assigned a weight:
{w1, w2, ..., wn}, satisfying w1 + w2 + ... + wn = 1, and the total distance is S = w1*s1 + w2*s2 + ... + wn*sn.
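The layer-wise cosine similarities and their weighted total can be computed as below, a minimal sketch of the query scoring just described.

```python
import numpy as np

def cosine(p, q):
    """One per-layer score: sn = Pn . Qn / (|Pn| |Qn|)."""
    return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

def total_distance(P, Q, w):
    """Total score S = w1*s1 + w2*s2 + ... + wn*sn over the layer features."""
    assert abs(sum(w) - 1.0) < 1e-9   # weights must sum to 1
    return sum(wi * cosine(p, q) for wi, p, q in zip(w, P, Q))

layers = [np.random.rand(16) for _ in range(3)]
w = [0.2, 0.3, 0.5]
print(round(total_distance(layers, layers, w), 6))  # 1.0 (an image matches itself)
```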
The query result is determined by sorting all total distances S. The assigned weights can be regarded as the weight distribution of each layer during the query, and that distribution affects the query result. In theory there are infinitely many possible ratios among {w1, w2, ..., wn}, and no single ratio can be determined to achieve the best effect.
Based on practical experience, the constraint among {w1, w2, ..., wn} is given as:
wn*sn ≥ w(n-1)*s(n-1) ≥ ... ≥ w2*s2 ≥ w1*s1,
When the final result is actually computed on a data set, this constraint greatly reduces the weight search space.
Embodiment 2
An image feature extraction method based on weighted depth features, the method comprising the following steps:
S1: this embodiment selects the ResNet network model and pre-trains it on the COCO image data set, and the softmax layer and fully connected layers are removed from the pre-trained network model to obtain the final ResNet image classification network model. (The existing model may be a classification network model or a localization-and-detection network model, chosen according to the specific image task. The image data set used for pre-training is an existing common data set or the data set of images to be processed.)
S2: the image to be processed is first input directly into the ResNet image classification network model with the softmax and fully connected layers removed, and a forward pass is computed; then the convolutional layer before each pooling layer in the ResNet image classification network model is extracted as a depth feature map of the image. The convolutional layer before each pooling layer corresponds to several single-layer feature maps, and the depth feature maps comprise the single-layer feature maps of the convolutional layers before all pooling layers.
S3: a feature aggregation vector is computed for each convolutional layer in the depth feature maps using dual-channel components, and each layer's aggregation vector undergoes square-root and zero-mean normalization in turn to obtain the depth feature vector of the corresponding convolutional layer.
As shown in Fig. 2, the feature aggregation vector of each convolutional layer is computed as follows:
The size of a depth feature map is (n, n, k).
S3.1: computing the transverse response map of the depth feature maps of the image; the size of the transverse response map is (n, n), where n is the height and width of a single-layer feature map; the transverse response superimposes all single-layer feature maps in the depth feature map position-wise to obtain the lateral aggregation feature map, and the transverse response map is then computed according to the transverse response weight formula;
The transverse response weight formula is as follows:
where L_ij is the transverse response weight, L'_ij is the sum over all single-layer feature maps of all convolutional layers, and (i, j) and (m, n) are position coordinates of data points within the convolutional layer;
S3.2: computing the vertical response map of the depth feature maps of the image; the size of the vertical response map is (1, k); the vertical response unfolds the channel feature layers of the depth feature map, and the vertical response map is computed according to the vertical response weight formula;
The vertical response weight formula is as follows:
where P_j is the vertical response weight of the j-th channel, i is the channel index, and k is the total number of channels; x_j is the number of non-zero entries in the depth feature map, and y_j is the number of zero entries in the depth feature map;
S3.3: multiplying the depth feature map element-wise with the transverse response map and the vertical response map, specifically:
the (n, n, k) depth feature map is dotted with the (n, n) transverse response map to obtain a (1, k) transverse feature vector, which is then dotted element-wise with the (1, k) vertical response map, yielding a feature aggregation vector of size (1, k).
The feature aggregation vectors of the convolutional layers obtained by steps S3.1-S3.3 are denoted {f1, f2, ..., fn}, where the dimension of each aggregation vector equals the channel size of its convolutional layer.
S4: this embodiment applies principal component analysis (or linear discriminant analysis) to reduce the dimensionality of the depth feature vectors of all convolutional layers, and each reduced depth feature vector is assigned a corresponding weight parameter, which serves as the index of its convolutional layer.
The feature aggregation vectors {f1, f2, ..., fn} undergo the square-root operation and are then normalized, yielding the feature aggregation vectors {v1, v2, ..., vn} to be reduced; the processing formula is:
where μ is the mean over all feature aggregation vectors, X is the feature aggregation vector to be processed, V is the data after zero-mean normalization, N is the total number of samples, and x_i is a sample to be processed.
The PCA dimensionality reduction proceeds as follows:
Taking v1 = (k1, k2, ..., kn) as an example, the vector obtained after reduction is P1:
Step 1: average v1 to compute the offset of the feature aggregation vector.
Step 2: compute the main components of the feature aggregation vector, i.e. the covariance matrix C:
Step 3: compute the eigenvalues μ1, μ2, ..., μn and eigenvectors β1, β2, ..., βn of the covariance matrix.
Step 4: the reduced vector is P1 = (β1, β2, ..., βn) * v1, i.e. the projection of v1 onto the leading eigenvectors.
The reduced feature aggregation vectors of v1 through vn are obtained by following steps 1 through 4 and are denoted P = (P1, P2, ..., Pn).
Corresponding weights {w1, w2, ..., wn} are assigned to the reduced feature aggregation vectors; normally the weight parameter of a convolutional layer later in the order is greater than or equal to that of an earlier one;
The weight parameters satisfy the relation:
w1 ≤ w2 ≤ ... ≤ wn
where w denotes a weight parameter and n the convolutional layer index, and the weights satisfy the following condition:
w1 + w2 + ... + wn = 1
The final feature vector can then be expressed as (w1*P1, w2*P2, ..., wn*Pn).
As shown in Fig. 3, S5: the reduced, weighted image depth feature vectors are fused as the final depth image feature.
Weight values are assigned according to the specific image task. Taking image retrieval as an example, the specific range of the weights is found as follows:
In an image query, the feature representation of each image in the image database is {P1, P2, ..., Pn}_num (num is the database index), and the feature representation of an image to be queried is {Q1, Q2, ..., Qn}_q. During the query, the distance of the query image to each image in the database is computed, usually as the cosine distance:
S = P·Q / (|P| |Q|)
For the query image and an image in the database, the distance of each feature layer Qn and Pn can be computed:
sn = Pn·Qn / (|Pn| |Qn|)
If the computed per-layer distances are {s1, s2, ..., sn}, each distance is assigned a weight:
{w1, w2, ..., wn}, satisfying w1 + w2 + ... + wn = 1, and the total distance is S = w1*s1 + w2*s2 + ... + wn*sn.
The query result is determined by sorting all total distances S. The assigned weights can be regarded as the weight distribution of each layer during the query, and that distribution affects the query result. In theory there are infinitely many possible ratios among {w1, w2, ..., wn}, and no single ratio can be determined to achieve the best effect.
Based on practical experience, the constraint among {w1, w2, ..., wn} is given as:
wn*sn ≥ w(n-1)*s(n-1) ≥ ... ≥ w2*s2 ≥ w1*s1,
When the final result is actually computed on a data set, this constraint greatly reduces the weight search space.
This gives the method for a whole set of image characteristics extraction based on depth characteristic, more enough effective replies are each Kind characteristics of image identifies problem, and the feature vector extracted includes the feature of convolutional neural networks various pieces, both includes front end Local feature, have the global characteristics comprising rear end, have abstract language information abundant, under the distribution of respective weights, energy It is enough effectively to handle various image recognition class problems.
The same or similar reference signs correspond to the same or similar components;
The positional relationships described in the drawings are for illustration only and shall not be construed as limiting the patent;
Obviously, the above embodiments are merely examples given for clarity of illustration and are not a limitation of the embodiments of the present invention. For those of ordinary skill in the art, other variations or changes in different forms may be made on the basis of the above description. It is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the invention shall be included within the protection scope of the claims of the present invention.

Claims (10)

1. An image feature extraction method based on weighted depth features, characterized in that the method comprises the following steps:
S1: selecting a network model, pre-training it on an image dataset, and removing the prediction layer from the pre-trained network model to obtain the final network model;
S2: inputting the image to be processed directly into the network model for forward computation, and extracting the convolutional layers located before all pooling layers of the network model as the depth feature maps of the image; the convolutional layer before each pooling layer corresponds to several single-layer feature maps, and the depth feature maps comprise the single-layer feature maps of the convolutional layers before all pooling layers;
S3: computing a feature aggregation vector for each convolutional layer in the depth feature maps using dual-channel components, and normalizing the feature aggregation vector of each convolutional layer to obtain the depth feature vector of the corresponding convolutional layer;
S4: applying dimensionality reduction to the depth feature vectors of all convolutional layers, and assigning a corresponding weight parameter to each reduced depth feature vector, the weight parameter serving as the index of its convolutional layer;
S5: fusing the reduced, weighted depth feature vectors of the image to obtain the final depth image feature.
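The five steps of claim 1 can be sketched end-to-end with toy stand-ins. This is an illustrative outline only, not the patented method: the aggregation below is a plain global average in place of the dual-channel aggregation of claim 5, truncation stands in for PCA, and the function names, shapes, and weight values are all invented for the example.

```python
import numpy as np

def aggregate(fmap):
    # Toy stand-in for the dual-channel aggregation of claim 5:
    # collapse an (n, n, k) feature map to a (k,) vector by global averaging.
    return fmap.mean(axis=(0, 1))

def normalize(v):
    # Claims 6-7: square root, then zero-mean (here centered per vector for
    # simplicity; the claim centers on the mean over all vectors).
    v = np.sqrt(v)
    return v - v.mean()

def extract_weighted_feature(feature_maps, dims, weights):
    # Steps S3-S5: aggregate each layer, normalize, reduce to `dims`
    # components (truncation stands in for PCA), weight, and concatenate.
    assert abs(sum(weights) - 1.0) < 1e-9  # claim 10: weights sum to 1
    parts = [w * normalize(aggregate(f))[:dims]
             for f, w in zip(feature_maps, weights)]
    return np.concatenate(parts)

# One toy "depth feature map" per convolutional layer before a pooling layer.
rng = np.random.default_rng(0)
maps = [rng.random((8, 8, 16)), rng.random((4, 4, 32)), rng.random((2, 2, 64))]
feat = extract_weighted_feature(maps, dims=16, weights=[0.2, 0.3, 0.5])
print(feat.shape)  # (48,)
```

Note the weights increase toward deeper layers, matching the ordering constraint of claim 9.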
2. The image feature extraction method based on weighted depth features according to claim 1, characterized in that the image dataset used for pre-training in step S1 is an existing dataset or the dataset of images to be processed; the existing dataset is the ImageNet, COCO, or VOC image dataset.
3. The image feature extraction method based on weighted depth features according to claim 1, characterized in that the network model is a classification network model or a detection-and-localization network model.
4. The image feature extraction method based on weighted depth features according to claim 1, characterized in that in step S2 the image to be processed is input directly into the network model for forward computation; the image to be processed is the original image, without any image preprocessing.
5. The image feature extraction method based on weighted depth features according to any one of claims 1-4, characterized in that the procedure in step S3 of computing a feature aggregation vector for each convolutional layer in the depth feature maps using dual-channel components is as follows:
S3.1: computing the lateral response map of the depth feature maps of the image:
the lateral response map has size (n, n), where n is the height and width of a single-layer feature map; the lateral aggregation feature map is obtained by superimposing all single-layer feature maps of the depth feature maps position by position, and the lateral response map is then computed according to the lateral response weight formula;
in the lateral response weight formula, Lij is the lateral response weight, L'ij is the sum of all single-layer feature maps over all convolutional layers, and (i, j) and (m, n) are position coordinates of data points in the convolutional layer;
S3.2: computing the vertical response map of the depth feature maps of the image:
the vertical response map has size (1, k) and is obtained by unfolding the channel dimension of the depth feature maps, with the vertical response map computed according to the vertical response weight formula;
in the vertical response weight formula, Pj is the vertical response weight of the j-th channel, i is the channel index, and k is the total number of channels; xj is the count of non-zero entries and yj the count of zero entries in the depth feature map;
S3.3: multiplying the depth feature maps element-wise by the lateral and vertical response maps, specifically:
taking the dot product of the (n, n, k) depth feature maps with the (n, n) lateral response map to obtain a (1, k) lateral feature vector, then taking the element-wise product with the (1, k) vertical response map to obtain a feature aggregation vector of size (1, k).
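The two weight formulas referenced in claim 5 are not reproduced in this text, so the sketch below substitutes plausible choices and labels them as such: the lateral weight is taken as the channel-summed activation normalized over positions, and the vertical weight as the fraction of non-zero entries per channel. The function name and both formulas are assumptions, not the patent's definitions; only the shapes (n, n), (1, k), and (1, k) match the claim.

```python
import numpy as np

def dual_channel_aggregate(fmap):
    """Sketch of steps S3.1-S3.3 for one (n, n, k) depth feature map."""
    n, _, k = fmap.shape
    lateral_sum = fmap.sum(axis=2)                 # superpose single-layer maps -> (n, n)
    L = lateral_sum / lateral_sum.sum()            # assumed lateral weights Lij
    x = (fmap != 0).sum(axis=(0, 1))               # xj: non-zero count per channel
    P = x / (n * n)                                # assumed vertical weights Pj
    lateral_vec = np.einsum('ijc,ij->c', fmap, L)  # (n,n,k) . (n,n) -> (1, k)
    return lateral_vec * P                         # element-wise product with (1, k)

fmap = np.random.default_rng(1).random((4, 4, 8))
agg = dual_channel_aggregate(fmap)
print(agg.shape)  # (8,)
```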
6. The image feature extraction method based on weighted depth features according to claim 5, characterized in that the normalization in step S3 is zero-mean normalization, computed as:
V = X - μ, with μ = (1/N) Σ xi,
where μ is the mean of all feature aggregation vectors, X is the feature aggregation vector being processed, V is the data after zero-mean normalization, N is the total number of samples, and xi denotes a sample to be processed.
7. The image feature extraction method based on weighted depth features according to claim 6, characterized in that in step S3 a square-root operation is applied to the feature aggregation vector of each convolutional layer before the normalization.
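Claims 6 and 7 together describe a square root followed by zero-mean centering. A minimal sketch, assuming non-negative activations (as after ReLU), vectors of equal length, and μ taken as the mean over the N feature aggregation vectors:

```python
import numpy as np

def sqrt_zero_mean(vectors):
    # Claim 7: element-wise square root first (assumes non-negative entries).
    rooted = [np.sqrt(v) for v in vectors]
    # Claim 6: mu = (1/N) * sum of all aggregation vectors, then V = X - mu.
    mu = np.mean(rooted, axis=0)
    return [v - mu for v in rooted]

vecs = [np.array([1.0, 4.0, 9.0]), np.array([4.0, 9.0, 16.0])]
out = sqrt_zero_mean(vecs)
print(out[0])  # [-0.5 -0.5 -0.5]
```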
8. The image feature extraction method based on weighted depth features according to any one of claims 1-4, characterized in that the dimensionality reduction applied in step S4 to the depth feature vectors of all convolutional layers uses Principal Component Analysis or Fisher's Linear Discriminant Analysis.
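For the Principal Component Analysis named in claim 8, a minimal sketch via NumPy's SVD; the component count and data are illustrative, since the patent fixes neither:

```python
import numpy as np

def pca_reduce(X, n_components):
    # Project rows of X (samples x features) onto the top principal components.
    Xc = X - X.mean(axis=0)                  # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T          # scores in the reduced space

X = np.random.default_rng(2).random((20, 64))  # 20 toy depth feature vectors
Z = pca_reduce(X, 8)
print(Z.shape)  # (20, 8)
```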
9. The image feature extraction method based on weighted depth features according to claim 8, characterized in that in step S4 each reduced depth feature vector is assigned a corresponding weight parameter, with the weight parameter of a later convolutional layer being no smaller than that of an earlier one;
the relationship among the weight parameters is as follows:
w1 ≤ w2 ≤ ... ≤ wn
where w denotes a weight parameter and n denotes the convolutional layer index.
10. The image feature extraction method based on weighted depth features according to claim 9, characterized in that the values of the convolutional layer weight parameters can be determined according to the image task during image recognition, and the sum of all weight values equals 1.
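Claims 9 and 10 only constrain the weights to be non-decreasing with layer depth and to sum to 1. One concrete choice satisfying both constraints, linearly increasing weights; any other non-decreasing point on the simplex would qualify equally:

```python
def layer_weights(n_layers):
    # Non-decreasing weights w1 <= ... <= wn that sum to 1 (claims 9-10),
    # here linearly increasing with layer index.
    raw = [i + 1 for i in range(n_layers)]
    total = sum(raw)
    return [r / total for r in raw]

w = layer_weights(4)
print(w)  # [0.1, 0.2, 0.3, 0.4]
```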
CN201910045648.3A 2019-01-17 2019-01-17 A kind of image characteristic extracting method based on weighting depth characteristic Pending CN109858496A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910045648.3A CN109858496A (en) 2019-01-17 2019-01-17 A kind of image characteristic extracting method based on weighting depth characteristic

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910045648.3A CN109858496A (en) 2019-01-17 2019-01-17 A kind of image characteristic extracting method based on weighting depth characteristic

Publications (1)

Publication Number Publication Date
CN109858496A true CN109858496A (en) 2019-06-07

Family

ID=66895203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910045648.3A Pending CN109858496A (en) 2019-01-17 2019-01-17 A kind of image characteristic extracting method based on weighting depth characteristic

Country Status (1)

Country Link
CN (1) CN109858496A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008217521A (en) * 2007-03-06 2008-09-18 Nippon Telegr & Teleph Corp <Ntt> Parameter estimation device, parameter estimation method, program with this method loaded, and recording medium with this program recorded
CN107194404A (en) * 2017-04-13 2017-09-22 哈尔滨工程大学 Submarine target feature extracting method based on convolutional neural networks
CN107844795A (en) * 2017-11-18 2018-03-27 中国人民解放军陆军工程大学 Convolutional neural networks feature extracting method based on principal component analysis
CN108491835A (en) * 2018-06-12 2018-09-04 常州大学 Binary channels convolutional neural networks towards human facial expression recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DONG Rongsheng et al.: "Multi-region cross-weighted aggregation of deep convolutional features for image retrieval", Journal of Computer-Aided Design & Computer Graphics *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111753847A (en) * 2020-06-28 2020-10-09 浙江大华技术股份有限公司 Image preprocessing method and device, storage medium and electronic device
CN112138394A (en) * 2020-10-16 2020-12-29 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112138394B (en) * 2020-10-16 2022-05-03 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112990359A (en) * 2021-04-19 2021-06-18 深圳市深光粟科技有限公司 Image data processing method and device, computer and storage medium
CN112990359B (en) * 2021-04-19 2024-01-26 深圳市深光粟科技有限公司 Image data processing method, device, computer and storage medium
CN113515661A (en) * 2021-07-16 2021-10-19 广西师范大学 Image retrieval method based on filtering depth convolution characteristics
CN113515661B (en) * 2021-07-16 2022-03-11 广西师范大学 Image retrieval method based on filtering depth convolution characteristics

Similar Documents

Publication Publication Date Title
CN109858496A (en) A kind of image characteristic extracting method based on weighting depth characteristic
CN106446930B (en) Robot operative scenario recognition methods based on deep layer convolutional neural networks
CN110532920B (en) Face recognition method for small-quantity data set based on FaceNet method
CN104217214B (en) RGB D personage&#39;s Activity recognition methods based on configurable convolutional neural networks
CN110188611A (en) A kind of pedestrian recognition methods and system again introducing visual attention mechanism
CN110298404A (en) A kind of method for tracking target based on triple twin Hash e-learnings
CN108090403A (en) Face dynamic identification method and system based on 3D convolutional neural network
CN110033440A (en) Biological cell method of counting based on convolutional neural networks and Fusion Features
CN113221625B (en) Method for re-identifying pedestrians by utilizing local features of deep learning
CN108596211A (en) It is a kind of that pedestrian&#39;s recognition methods again is blocked based on focusing study and depth e-learning
CN109325589A (en) Convolutional calculation method and device
CN110334584B (en) Gesture recognition method based on regional full convolution network
CN109993100A (en) The implementation method of facial expression recognition based on further feature cluster
CN109711401A (en) A kind of Method for text detection in natural scene image based on Faster Rcnn
CN108446589A (en) Face identification method based on low-rank decomposition and auxiliary dictionary under complex environment
CN107704924A (en) Synchronous self-adapting space-time characteristic expresses the construction method and correlation technique of learning model
CN109255382A (en) For the nerve network system of picture match positioning, method and device
CN108268890A (en) A kind of hyperspectral image classification method
CN107577983A (en) It is a kind of to circulate the method for finding region-of-interest identification multi-tag image
CN114239935A (en) Prediction method for non-uniform track sequence
Dasari et al. IOU–Siamtrack: IOU Guided Siamese Network for Visual Object Tracking
CN110070044A (en) Pedestrian&#39;s attribute recognition approach based on deep learning
CN114170657A (en) Facial emotion recognition method integrating attention mechanism and high-order feature representation
CN113673465A (en) Image detection method, device, equipment and readable storage medium
Djemame et al. Combining cellular automata and particle swarm optimization for edge detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190607