CN111488936A - Feature fusion method and device and storage medium - Google Patents

Feature fusion method and device and storage medium

Info

Publication number
CN111488936A
CN111488936A (application CN202010290730.5A)
Authority
CN
China
Prior art keywords
feature
features
identified
tag
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010290730.5A
Other languages
Chinese (zh)
Other versions
CN111488936B (en)
Inventor
朱金华
徐�明
熊凡
陈婷
徐丽华
王强
裴卫斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen ZNV Technology Co Ltd
Nanjing ZNV Software Co Ltd
Original Assignee
Shenzhen ZNV Technology Co Ltd
Nanjing ZNV Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen ZNV Technology Co Ltd, Nanjing ZNV Software Co Ltd filed Critical Shenzhen ZNV Technology Co Ltd
Priority to CN202010290730.5A priority Critical patent/CN111488936B/en
Publication of CN111488936A publication Critical patent/CN111488936A/en
Application granted granted Critical
Publication of CN111488936B publication Critical patent/CN111488936B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a feature fusion method, a device, and a storage medium. If a preset tag feature database contains at least two features whose similarity to a feature to be identified is greater than or equal to a preset threshold, the at least two features may belong to a single class. The at least two features are therefore scored, and the tag of the highest-scoring feature is taken as the tag of the feature to be identified, so that features that may belong to the same class are gradually concentrated under one tag, and multiple class tags that should form one class are gradually corrected into a single class tag.

Description

Feature fusion method and device and storage medium
Technical Field
The invention relates to the field of image processing, in particular to a feature fusion method and device and a storage medium.
Background
With the development of artificial-intelligence technology, the demand for intelligent processing of information such as images, text, and speech grows by the day, and labeling features in such information with tags provides a basis for subsequent intelligent processing. For example, in the security industry, face and body features extracted by deep neural networks arrive continuously, and their real identity IDs in a real-name list library, or their file IDs in a virtual file library, are determined by feature comparison: the similarity between the face or body feature and each candidate feature in a database is calculated, and the tag (real identity ID or file ID) of the most similar feature is taken as the tag of that face or body feature. This provides a basic step for later retrieval of face or body images and for face recognition.
For a new feature extracted from an image and awaiting a tag, the prior art classifies it by calculating its similarity to features that already carry class tags; the class tag of the most similar feature becomes the tag of the new feature. This can go wrong. For example, two face features of the same person in two different forms (front face and side face, eyes open and eyes closed, and so on), or two face features extracted from two photos of the same person by an imperfect feature extractor, may have low similarity to each other. If similarity alone drives classification, these two features cannot receive the same class tag and end up under two different tags. When a new face feature of the same person arrives later and is compared with both, either one may happen to be the most similar, so features of one person pile up under two classes, the feature count in each class grows, and the final recognition effect suffers. Multi-class tag features that should form one class therefore need to be corrected into a single class tag.
Disclosure of Invention
The invention mainly addresses the technical problem of how to label features with tags accurately.
According to a first aspect, there is provided in an embodiment a method of feature fusion, comprising:
acquiring an image, and extracting features to be identified in the image;
if, in a preset tag feature database, the similarity between at least two features and the feature to be identified is greater than or equal to a preset threshold, finding the feature with the highest score among the at least two features, labeling the feature to be identified with the tag of the highest-scoring feature, performing feature fusion on the highest-scoring feature and the feature to be identified, and adding the fused feature to the preset tag feature database.
Further, still include:
if the similarity between every feature in the preset tag feature database and the feature to be identified is smaller than the preset threshold, adding the feature to be identified to the preset tag feature database and labeling the feature to be identified with a new tag;
if the similarity between exactly one feature and the feature to be identified is greater than or equal to the preset threshold, labeling the feature to be identified with the tag of that feature, performing feature fusion on that feature and the feature to be identified, and adding the fused feature to the preset tag feature database.
Further, finding the highest-scoring feature among the at least two features comprises:
scoring each feature according to the fused-feature count of the tag corresponding to each of the at least two features and the time of its last feature fusion.
Further, the score of each of the at least two features is obtained by the following formula:
V_k = s_k + Δ_k
where V_k is the score of the k-th feature among the at least two features, and s_k is the similarity between the k-th feature and the feature to be identified;
Δ_k is the gain of the k-th feature, Δ_k = (1 − s_k) × (C_k / Sum) + (−ln(I_k / 5)), where C_k is the fused-feature count of the tag corresponding to the k-th feature, Sum is the sum of the fused-feature counts of all of the at least two features, and I_k is the rank of the k-th feature's last fusion time when the last fusion times of the at least two features are sorted from most recent to oldest.
Further, if the preset tag feature database contains no features, adding the feature to be identified to the preset tag feature database and labeling the feature to be identified with a new tag.
Further, the image is a face image, and the feature to be recognized is a face feature.
Further, the similarity between a feature in the preset tag feature database and the feature to be identified is calculated with a Euclidean distance algorithm, a Pearson correlation coefficient algorithm, or a cosine distance algorithm.
According to a second aspect, there is provided in one embodiment a feature fusion apparatus comprising:
an obtaining module, configured to obtain an image and extract the feature to be identified from the image;
and a feature fusion module, configured to, when the similarity between at least two features in a preset tag feature database and the feature to be identified is greater than or equal to a preset threshold, find the feature with the highest score among the at least two features, label the feature to be identified with the tag of the highest-scoring feature, perform feature fusion on the highest-scoring feature and the feature to be identified, and add the fused feature to the preset tag feature database.
According to a third aspect, there is provided in one embodiment a product comprising:
a memory for storing a program;
a processor for implementing the method of the above embodiments by executing the program stored in the memory.
According to a fourth aspect, an embodiment provides a computer-readable storage medium comprising a program executable by a processor to implement the method of the above-described embodiment.
According to the feature fusion method and apparatus and the storage medium of the above embodiments, if there are at least two features in the preset tag feature database whose similarity to the feature to be recognized is greater than or equal to the preset threshold, the at least two features may belong to one class. Because the at least two features are scored and the tag of the highest-scoring feature is taken as the tag of the feature to be recognized, features that may belong to the same class are gradually concentrated under one tag, so that multiple classes of tag features that should form one class are gradually corrected into a single class tag.
Drawings
FIG. 1 is a flow diagram of a feature fusion method of an embodiment;
FIG. 2 is a flow diagram of a feature fusion method according to another embodiment;
FIG. 3 is a schematic structural diagram of a feature fusion apparatus according to an embodiment.
Detailed Description
The present invention is described in further detail below with reference to the detailed description and the accompanying drawings, where like elements in different embodiments share like reference numbers. In the following description, numerous details are set forth to provide a better understanding of the present application. However, those skilled in the art will readily recognize that some of these features may be omitted, or replaced by other elements, materials, or methods, in different instances. In some instances, certain operations related to the present application are not shown or described in the specification, to avoid obscuring the core of the application with excessive description; a detailed description of these operations is unnecessary for those skilled in the art, who can fully understand them from the specification and the general knowledge in the art.
Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Also, the order of the steps or actions in the method descriptions may be changed or adjusted in ways apparent to those skilled in the art. Thus, the sequences in the specification and drawings are only for describing particular embodiments and do not imply a required order, unless it is otherwise stated that a certain order must be followed.
The numbering of the components as such, e.g., "first", "second", etc., is used herein only to distinguish the objects as described, and does not have any sequential or technical meaning. The term "connected" and "coupled" when used in this application, unless otherwise indicated, includes both direct and indirect connections (couplings).
In the security field, it is often necessary to collect face images and to classify a newly collected face image according to the face images already classified in a database, face images of the same class representing the same person.
Referring to fig. 1, fig. 1 is a flowchart of a feature fusion method according to an embodiment, where the embodiment takes human face features as an example, the method includes:
s10, acquiring an image containing a human face; the embodiment can acquire images containing human faces through image acquisition devices such as a monitoring camera and the like.
S20, extracting the face feature to be recognized from the image containing the human face. In this embodiment, the face feature to be recognized can be extracted with an existing face-feature extraction method, such as a geometric-feature-based method, a template-matching method, a subspace-analysis-based method, a wavelet-theory-based face recognition method, a neural-network-based method, or a hidden-Markov-model-based method; the extracted face feature is data in vector form.
S30, classifying the face feature to be recognized. When classifying the face feature to be recognized, the similarity between it and each feature in a preset tag feature database is calculated first, and features in the database whose similarity exceeds a preset threshold are grouped with it into one class. However, the number of such features is not always one: there may be no feature in the database whose similarity to the face feature to be recognized exceeds the preset threshold, and there may also be at least two such features.
In addition, in this embodiment the preset tag feature database is empty initially, so at the start no similarity needs to be calculated: the face feature to be recognized is added directly to the preset tag feature database and labeled with a new tag, which may be a virtual ID. Any existing method may be used to calculate feature similarity, such as a Euclidean distance algorithm, a Pearson correlation coefficient algorithm, or a cosine distance algorithm.
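As an illustration of the cosine option, a minimal sketch in Python (assuming features are NumPy vectors; the helper name cosine_similarity is ours, not the patent's):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity of two feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

For unit-normalized embeddings this reduces to a dot product, which is why many systems store features pre-normalized.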
Example one:
This example describes the case where the similarity between every feature in the preset tag feature database and the face feature to be recognized is smaller than the preset threshold. In that case the face feature to be recognized is added to the preset tag feature database and labeled with a new tag. That is, no face image of the same person has been collected before, so the face feature to be recognized is treated as a new class, labeled with a new tag, and stored in the preset tag feature database, so that face images of the same person collected later can be classified against it.
Example two:
This example describes the case where exactly one feature in the preset tag feature database has similarity to the face feature to be recognized greater than or equal to the preset threshold. In that case the face feature to be recognized and that feature belong to one class, and classification is direct: the face feature to be recognized is labeled with the tag of that feature.
Face features belonging to the same class may be collected from the same person at different angles, such as a frontal feature and a profile feature of the same face. If both the frontal and the profile feature were stored in the preset tag database, the number of features in the database would grow; since they belong to the same person, they can instead be merged into one feature by feature fusion, reducing the number of features in the database. Therefore, in this embodiment, the matched feature and the face feature to be recognized are fused, and the fused feature is added to the preset tag feature database.
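The patent does not prescribe a particular fusion operator. A common choice for embedding vectors, and the assumption behind this sketch, is a weighted average followed by re-normalization:

```python
import numpy as np

def fuse_features(stored: np.ndarray, new: np.ndarray,
                  stored_weight: int = 1) -> np.ndarray:
    """Fuse a stored feature with a new feature of the same class.

    Assumption (ours, not the patent's): features are L2-normalized
    embeddings, and the stored feature may already summarize
    stored_weight earlier fusions.
    """
    fused = (stored_weight * stored + new) / (stored_weight + 1)
    return fused / np.linalg.norm(fused)  # keep the fused vector unit-length
```

Weighting by the number of earlier fusions keeps the stored feature a running mean of everything fused into it, rather than letting it drift toward the newest sample.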
Example three:
This example describes the case where at least two features in the preset tag feature database have similarity to the face feature to be recognized greater than or equal to the preset threshold. If the face feature to be recognized were simply grouped with every such feature, occasional errors would inflate the number of classes and ultimately hurt recognition. Therefore, in this embodiment, the feature with the highest score is found among the at least two features, the face feature to be recognized is labeled with the tag of that highest-scoring feature, the highest-scoring feature and the face feature to be recognized are fused, and the fused feature is added to the preset tag feature database. Here, the at least two features are all the features in the preset tag feature database whose similarity to the face feature to be recognized is greater than or equal to the preset threshold.
When n (n ≥ 2) features (f1, f2, …, fk, …, fn) in the preset tag database have similarities (s1, s2, …, sk, …, sn) with the face feature to be recognized greater than the preset threshold, the features are scored, and the tag of the highest-scoring feature is used as the tag of the face feature to be recognized. The tags corresponding to features f1, f2, …, fk, …, fn are T1, T2, …, Tk, …, Tn.
In this embodiment, each feature is scored according to the fused-feature count of the tag corresponding to each of the at least two features and the time of its last feature fusion. The fused-feature count of a tag is the number of feature fusions performed on features under that tag, and it is incremented by 1 on every fusion. Assume the fused-feature counts of tags T1, T2, …, Tk, …, Tn are C1, C2, …, Ck, …, Cn. The score of each of the at least two features is obtained by formula (1):
V_k = s_k + Δ_k (1)
where V_k is the score of the k-th feature among the at least two features, and s_k is the similarity between the k-th feature and the feature to be identified;
Δ_k is the gain of the k-th feature, Δ_k = (1 − s_k) × (C_k / Sum) + (−ln(I_k / 5)), where C_k is the fused-feature count of the k-th feature, Sum is the sum of the fused-feature counts of all of the at least two features, and I_k is the rank of the k-th feature's last fusion time when the last fusion times of the at least two features are sorted from most recent to oldest. For example, when the k-th feature is the most recently fused of the at least two features, I_k is 1; when it is the second most recently fused, I_k is 2; and so on.
Here, C_k/Sum is a positive incentive proportional to the share of fused features: the more fusions a tag has absorbed, the larger C_k/Sum. The term (−ln(I_k/5)) weights how recent the last fusion was: I_k = 1, 2, 3, 4, 5, 6 gives values 1.61, 0.92, 0.51, 0.22, 0, −0.18, so the more recent the last fusion, the stronger the positive incentive; from the 2nd rank onward the positive effect shrinks quickly, and at the 6th rank it becomes an attenuation.
This example selects the feature whose score V_k (k = 1, 2, …, n) is the maximum; the tag of that feature becomes the tag of the face feature to be recognized, and the face feature to be recognized is fused with the feature corresponding to the maximum score.
In this embodiment, each time feature fusion is performed, the fused-feature count of the corresponding tag is incremented by 1, and the time of the last feature fusion is updated.
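Putting formula (1) and this bookkeeping together, a minimal scoring sketch in Python (the field names similarity, fused_count, and last_fused_at are names we introduce; the patent only requires each tag to track a fusion count and a last-fusion time):

```python
import math
from dataclasses import dataclass

@dataclass
class Candidate:
    tag: str
    similarity: float     # s_k: similarity to the feature to be identified
    fused_count: int      # C_k: fusions recorded under this tag
    last_fused_at: float  # timestamp of this tag's last fusion

def best_candidate(cands: list[Candidate]) -> Candidate:
    """Score each candidate with V_k = s_k + Δ_k and return the winner."""
    total = sum(c.fused_count for c in cands) or 1  # Sum; guard all-zero counts
    # I_k: rank 1 = most recently fused (reverse ordering of last-fusion times)
    by_recency = sorted(cands, key=lambda c: c.last_fused_at, reverse=True)
    rank = {id(c): i + 1 for i, c in enumerate(by_recency)}
    def score(c: Candidate) -> float:
        gain = (1 - c.similarity) * (c.fused_count / total) - math.log(rank[id(c)] / 5)
        return c.similarity + gain
    return max(cands, key=score)
```

After the winner is fused with the new feature, the caller increments its fused_count and refreshes its last_fused_at, as described above.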
In a specific embodiment, the similarities between 6 features in the preset tag feature database (corresponding to tags T1 to T6) and the feature x to be identified are all greater than or equal to the preset threshold, as shown in Table 1.
TABLE 1

                             T1      T2      T3      T4      T5      T6
Similarity s_k               0.95    0.96    0.97    0.96    0.98    0.98
Fused-feature count C_k      700     100     200     800     180     20      (Sum = 2000)
C_k/Sum                      0.35    0.05    0.10    0.40    0.09    0.01
1 − s_k                      0.05    0.04    0.03    0.04    0.02    0.02
Last-fusion-time rank I_k    1       2       3       4       5       6
−ln(I_k/5)                   1.609   0.916   0.510   0.223   0       −0.182
(1 − s_k) × (C_k/Sum)        0.0175  0.002   0.003   0.016   0.0018  0.0002
Δ_k                          0.0496  0.0203  0.0132  0.0204  0.0018  −0.0034
V_k                          0.9996  0.9803  0.9832  0.9804  0.9818  0.9765
As can be seen from Table 1, the features corresponding to tags T5 and T6 have the highest similarity to the feature x to be identified (0.98), but their fused-feature shares are small and their last fusions are older, so their final scores are only 0.9818 and 0.9765. The feature corresponding to tag T1 has the lowest similarity to x, but it was fused most recently and has a large fused-feature count, so its final score is the highest.
Based on the above embodiments, please refer to fig. 2, fig. 2 is a specific flowchart of a feature fusion method according to an embodiment, which includes:
and S11, acquiring the image by an image acquisition device such as a camera.
S12, extracting features to be recognized from the acquired image, for example, extracting human face features to be recognized from a human face image.
S13, counting how many features in the preset tag database have similarity to the feature to be identified greater than or equal to the preset threshold. As in the above embodiments, three cases arise, corresponding to the following three steps:
and S14, if the preset label database does not have the feature with the similarity larger than or equal to the preset threshold, adding the feature to be identified into the preset feature database, and labeling a new label for the feature to be identified.
And S15, if only one feature exists in the preset tag database, the similarity between the feature and the feature to be identified is greater than or equal to a preset threshold value, labeling the tag of the feature to be identified with the feature and performing feature fusion.
S16, if the similarity between at least two characteristics and the characteristics to be identified exists in the preset label database and is larger than or equal to the preset threshold value, finding out one characteristic with the highest score from the at least two characteristics, labeling the characteristic label with the highest score for the characteristics to be identified, and performing characteristic fusion.
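A compact sketch tying S13–S16 together (the database layout, the Record type, and the extract_features helper are our assumptions; cosine_similarity, fuse_features, Candidate, and best_candidate are the sketches given earlier):

```python
import time
import uuid
from dataclasses import dataclass

import numpy as np

@dataclass
class Record:
    feature: np.ndarray   # stored (possibly fused) feature vector
    fused_count: int
    last_fused_at: float

def classify_and_fuse(image, db: dict[str, Record], threshold: float) -> str:
    """Return the tag assigned to the image's feature (steps S13-S16)."""
    x = extract_features(image)  # S12; hypothetical extractor helper
    matches = {tag: rec for tag, rec in db.items()
               if cosine_similarity(rec.feature, x) >= threshold}
    if not matches:                       # S14: no match -> new class
        tag = str(uuid.uuid4())           # a "virtual ID" as the new tag
        db[tag] = Record(x, 0, time.time())
        return tag
    if len(matches) == 1:                 # S15: exactly one match
        tag = next(iter(matches))
    else:                                 # S16: two or more -> highest V_k
        cands = [Candidate(t, cosine_similarity(r.feature, x),
                           r.fused_count, r.last_fused_at)
                 for t, r in matches.items()]
        tag = best_candidate(cands).tag
    rec = db[tag]
    rec.feature = fuse_features(rec.feature, x, max(rec.fused_count, 1))
    rec.fused_count += 1                  # update the fused-feature count...
    rec.last_fused_at = time.time()       # ...and the last-fusion time
    return tag
```

This is a sketch under the stated assumptions, not the patent's reference implementation; error handling, persistence, and the choice of extractor are out of scope.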
Example four:
referring to fig. 3, the embodiment further provides a feature fusion apparatus, which includes an obtaining module 10 and a feature fusion module 20.
The obtaining module 10 is configured to obtain the acquired image and extract the feature to be identified from it with an existing feature extraction method. In this embodiment, the obtaining module 10 obtains the image collected by an image acquisition device such as a camera and extracts the face feature to be recognized from the face image with an existing feature extraction algorithm; the face feature to be recognized in this embodiment is a data vector.
The feature fusion module 20 is configured to add the feature to be identified to the preset tag feature database and label it with a new tag when the similarity between every feature in the preset tag feature database and the feature to be identified is smaller than the preset threshold. In this embodiment, similarity is calculated between all face features in the preset tag feature database and the face feature to be recognized; if every similarity is smaller than the preset threshold, none of the stored face features belongs to the same class as the face feature to be recognized, so the face feature to be recognized is labeled with a new tag and placed in a class of its own.
When exactly one feature in the preset tag feature database has similarity to the feature to be identified greater than or equal to the preset threshold, the feature to be identified is labeled with the tag of that feature, the two are fused, and the fused feature is added to the preset tag feature database; that is, if only one stored feature is similar to the face feature to be recognized, the two are grouped into one class.
When at least two features in the preset tag feature database have similarity to the feature to be identified greater than or equal to the preset threshold, the feature with the highest score is found among them, the feature to be identified is labeled with the tag of the highest-scoring feature, the two are fused, and the fused feature is added to the preset tag feature database. In other words, when two or more stored features pass the threshold, one of them is selected by its score, and the feature to be identified is grouped into that feature's class.
In this embodiment, each feature is scored according to the fused-feature count of its tag and the time of its last feature fusion. The fused-feature count of a tag is the number of feature fusions performed under that tag; the more fusions, and the more recent the last fusion, the more active the feature, and the higher the weight and the score it receives. Thus, even if errors occur occasionally, the multiple classes of features that should share one tag are concentrated into one class over time, and the errors are gradually corrected.
The methods in the foregoing embodiments may be implemented by hardware. This embodiment provides a product including a memory and a processor; the processor may be an integrated-circuit chip with signal-processing capability, or a general-purpose microprocessor (MCU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor or any conventional processor. For further functions and steps of the processor in this embodiment, reference may be made to the description in the embodiments of the feature fusion method, which is not repeated here.
Those skilled in the art will appreciate that all or part of the functions of the various methods in the above embodiments may be implemented by hardware, or may be implemented by computer programs. When all or part of the functions of the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium, and the storage medium may include: a read only memory, a random access memory, a magnetic disk, an optical disk, a hard disk, etc., and the program is executed by a computer to realize the above functions. For example, the program may be stored in a memory of the device, and when the program in the memory is executed by the processor, all or part of the functions described above may be implemented. In addition, when all or part of the functions in the above embodiments are implemented by a computer program, the program may be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a removable hard disk, and may be downloaded or copied to a memory of a local device, or may be version-updated in a system of the local device, and when the program in the memory is executed by a processor, all or part of the functions in the above embodiments may be implemented.
The present invention has been described with specific examples, which are provided only to aid understanding of the invention and are not intended to limit it. For those skilled in the art to which the invention pertains, several simple deductions, modifications, or substitutions may be made according to the idea of the invention.

Claims (10)

1. A method of feature fusion, comprising:
acquiring an image, and extracting features to be identified in the image;
if, in a preset tag feature database, the similarity between at least two features and the feature to be identified is greater than or equal to a preset threshold, finding the feature with the highest score among the at least two features, labeling the feature to be identified with the tag of the highest-scoring feature, performing feature fusion on the highest-scoring feature and the feature to be identified, and adding the fused feature to the preset tag feature database.
2. The feature fusion method of claim 1, further comprising:
if the similarity between every feature in the preset tag feature database and the feature to be identified is smaller than the preset threshold, adding the feature to be identified to the preset tag feature database and labeling the feature to be identified with a new tag;
if the similarity between exactly one feature and the feature to be identified is greater than or equal to the preset threshold, labeling the feature to be identified with the tag of that feature, performing feature fusion on that feature and the feature to be identified, and adding the fused feature to the preset tag feature database.
3. The feature fusion method of claim 1, wherein finding the highest-scoring feature among the at least two features comprises:
scoring each feature according to the fused-feature count of the tag corresponding to each of the at least two features and the time of its last feature fusion.
4. The feature fusion method of claim 2, wherein the score of each of the at least two features is obtained by the following formula:
V_k = s_k + Δ_k
wherein V_k is the score of the k-th feature among the at least two features, and s_k is the similarity between the k-th feature and the feature to be identified;
Δ_k is the gain of the k-th feature, Δ_k = (1 − s_k) × (C_k / Sum) + (−ln(I_k / 5)), wherein C_k is the fused-feature count of the tag corresponding to the k-th feature, Sum is the sum of the fused-feature counts of all of the at least two features, and I_k is the rank of the k-th feature's last fusion time when the last fusion times of the at least two features are sorted from most recent to oldest.
5. The feature fusion method of claim 1, wherein if the preset tag feature database contains no features, the feature to be recognized is added to the preset tag feature database and labeled with a new tag.
6. The feature fusion method according to any one of claims 1 to 4, wherein the image is a face image, and the feature to be recognized is a face feature.
7. The feature fusion method according to any one of claims 1 to 4, wherein the similarity between a feature in the preset tag feature database and the feature to be identified is calculated with a Euclidean distance algorithm, a Pearson correlation coefficient algorithm, or a cosine distance algorithm.
8. A feature fusion apparatus, comprising:
an obtaining module, configured to obtain an image and extract the feature to be identified from the image;
and a feature fusion module, configured to, when the similarity between at least two features in a preset tag feature database and the feature to be identified is greater than or equal to a preset threshold, find the feature with the highest score among the at least two features, label the feature to be identified with the tag of the highest-scoring feature, perform feature fusion on the highest-scoring feature and the feature to be identified, and add the fused feature to the preset tag feature database.
9. A product characterized by comprising:
a memory for storing a program;
a processor for implementing the method of any one of claims 1-7 by executing the program stored in the memory.
10. A computer-readable storage medium, characterized by comprising a program executable by a processor to implement the method of any one of claims 1-7.
CN202010290730.5A 2020-04-14 2020-04-14 Feature fusion method and device and storage medium Active CN111488936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010290730.5A CN111488936B (en) 2020-04-14 2020-04-14 Feature fusion method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010290730.5A CN111488936B (en) 2020-04-14 2020-04-14 Feature fusion method and device and storage medium

Publications (2)

Publication Number Publication Date
CN111488936A 2020-08-04
CN111488936B CN111488936B (en) 2023-07-28

Family

ID=71797998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010290730.5A Active CN111488936B (en) 2020-04-14 2020-04-14 Feature fusion method and device and storage medium

Country Status (1)

Country Link
CN (1) CN111488936B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993102A (en) * 2019-03-28 2019-07-09 北京达佳互联信息技术有限公司 Similar face retrieval method, apparatus and storage medium
CN110222566A (en) * 2019-04-30 2019-09-10 北京迈格威科技有限公司 A kind of acquisition methods of face characteristic, device, terminal and storage medium
CN110263703A (en) * 2019-06-18 2019-09-20 腾讯科技(深圳)有限公司 Personnel's flow statistical method, device and computer equipment
CN110348362A (en) * 2019-07-05 2019-10-18 北京达佳互联信息技术有限公司 Label generation, method for processing video frequency, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111488936B (en) 2023-07-28

Similar Documents

Publication Publication Date Title
US9798956B2 (en) Method for recognizing target object in image, and apparatus
Endres et al. Category independent object proposals
Chuang et al. A feature learning and object recognition framework for underwater fish images
Hoiem et al. Object-based image retrieval using the statistical structure of images
JP5588395B2 (en) System and method for efficiently interpreting images with respect to objects and their parts
US9977955B2 (en) Method and system for identifying books on a bookshelf
CN110909618B (en) Method and device for identifying identity of pet
CN112632980A (en) Enterprise classification method and system based on big data deep learning and electronic equipment
US9524430B1 (en) Method for detecting texts included in an image and apparatus using the same
US20120308141A1 (en) Information processing apparatus and method of processing information, storage medium and program
WO2020164278A1 (en) Image processing method and device, electronic equipment and readable storage medium
CN110222582B (en) Image processing method and camera
CN102339391A (en) Multiobject identification method and device
Scheirer Extreme value theory-based methods for visual recognition
CN111488943A (en) Face recognition method and device
CN115497124A (en) Identity recognition method and device and storage medium
Rodriguez-Serrano et al. Data-driven detection of prominent objects
Lu et al. Personal object discovery in first-person videos
CN111931856A (en) Video classification method and device, electronic equipment and storage medium
CN111488936A (en) Feature fusion method and device and storage medium
US11423248B2 (en) Hierarchical sampling for object identification
Jaimes et al. Integrating multiple classifiers in visual object detectors learned from user input
Gopalan et al. Statistical modeling for the detection, localization and extraction of text from heterogeneous textual images using combined feature scheme
Deselaers et al. Local representations for multi-object recognition
CN112507805B (en) Scene recognition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant