CN112906701A - Fine-grained image identification method and system based on multi-attention neural network - Google Patents

Fine-grained image identification method and system based on multi-attention neural network Download PDF

Info

Publication number
CN112906701A
Authority
CN
China
Prior art keywords
image
size
fine
full
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110170873.7A
Other languages
Chinese (zh)
Other versions
CN112906701B (en)
Inventor
彭德光 (Peng Deguang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Megalight Technology Co ltd
Original Assignee
Chongqing Megalight Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Megalight Technology Co ltd filed Critical Chongqing Megalight Technology Co ltd
Priority to CN202110170873.7A priority Critical patent/CN112906701B/en
Publication of CN112906701A publication Critical patent/CN112906701A/en
Application granted granted Critical
Publication of CN112906701B publication Critical patent/CN112906701B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a fine-grained image recognition method and system based on a multi-attention neural network. The method comprises the following steps: acquiring a plurality of part mask images corresponding to different local feature positions from the full-size image; adjusting the size of the corresponding part frame in each part mask image, and acquiring a part image from the full-size image based on the adjusted part mask image; and classifying and weighting the part features corresponding to the part images to obtain the recognition result for the full-size image. The invention can effectively improve recognition accuracy on fine-grained features.

Description

Fine-grained image identification method and system based on multi-attention neural network
Technical Field
The invention relates to the field of image recognition, in particular to a fine-grained image identification method and system based on a multi-attention neural network.
Background
At present, the image recognition field mainly comprises traditional methods and deep learning methods; traditional methods rely chiefly on manual annotation frames and part annotations to locate and recognize subordinate categories. The prior art has the following disadvantages: 1. Heavy human involvement makes part definition and annotation expensive and subjective, which is clearly not the optimal choice for all fine-grained image recognition tasks. 2. Some methods use an attention mechanism to generate annotation masks without part labels/annotations and extract the corresponding image parts from those masks. 3. Such methods crop the image with a fixed-size rectangle to extract the part of interest, regardless of the size of the object to be recognized, which harms the subsequent feature expression.
Disclosure of Invention
In view of the problems in the prior art, the invention provides a fine-grained image recognition method and system based on a multi-attention neural network, mainly addressing the high labor cost and insufficient recognition accuracy of existing methods.
In order to achieve the above and other objects, the present invention adopts the following technical solutions.
A fine-grained image recognition method based on a multi-attention neural network comprises the following steps:
acquiring a plurality of part mask images corresponding to different local feature positions from the full-size image;
adjusting the size of a corresponding part frame in the part mask image, and acquiring a part image from the full-size image based on the adjusted part mask image;
and classifying and weighting the part features corresponding to the part images to obtain the identification result of the full-size image.
Optionally, obtaining a plurality of part mask images corresponding to different local feature positions from the full-size image includes:
and acquiring a characteristic diagram corresponding to the full-size image, inputting the characteristic diagram into a plurality of channels, and clustering based on the position vectors of the characteristics to obtain the part mask image.
Optionally, a cropping module is constructed, and the size of the corresponding part frame in the part mask image is adjusted through the cropping module.
Optionally, the cropping module includes a fully connected layer; the part mask image is input into the fully connected layer, and the size of the part frame is adjusted according to the set size.
Optionally, the size adjustment is expressed as:

w_a = w_f + d_w
h_a = h_f + d_h

where w_f and h_f are the width and height of the part region in the part mask image; w_a and h_a are the adjusted frame width and height, respectively; and d_w and d_h are the offsets for the width and height, respectively, taken as the outputs of the fully connected layer.
Optionally, the classifying and weighting process on the part features corresponding to the part image includes:
extracting a part feature vector of the part image through a convolutional layer, inputting the part feature vector into a plurality of fully connected layers, and setting the weight of the part feature vector through the classification output probability of a classifier connected to the fully connected layers.
Optionally, a fusion loss function is constructed, and network parameters are updated through iterative learning according to the fusion loss function.
Optionally, the fusion loss function is expressed as:

L_f = L_sort(ω_a, ω_b) + L_fcls(Y, Y*)

(the definition of L_sort appears in the source only as an equation image)

where L_sort represents the multi-channel grouping loss; L_fcls represents the classification loss of the part feature vectors; Y represents the predicted label vector and Y* the true label vector; ω_a and ω_b are the weights of the part feature vectors; and p_a^t and p_b^t are the predicted probabilities of the correct class label t for the a-th and b-th parts, respectively.
A multi-attention neural network-based fine-grained image recognition system, comprising:
the mask acquisition module is used for acquiring a plurality of part mask images corresponding to different local feature positions from the full-size image;
the part adjusting module is used for adjusting the size of a frame of a corresponding part in the part mask image and acquiring a part image from the full-size image based on the adjusted part mask image;
and the classification identification module is used for classifying and weighting the part features corresponding to the part images to obtain the classification identification result of the full-size images.
As described above, the fine-grained image recognition method and system based on the multi-attention neural network of the present invention have the following advantages.
When the image is cropped, the size of the part frame is adjusted adaptively and the parts are weighted, generating more distinctive fine-grained features; this effectively enhances subsequent feature expression and guarantees recognition accuracy.
Drawings
Fig. 1 is a flowchart of a fine-grained image recognition method based on a multi-attention neural network according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of an MA-CNN network according to an embodiment of the present invention.
FIG. 3 is a schematic structural diagram of a cropping module according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
Referring to fig. 1, the present invention provides a fine-grained image recognition method based on a multi-attention neural network, including steps S01-S03.
In step S01, multiple part mask images corresponding to different local feature locations are acquired from the full-size image:
in one embodiment, a full-size image may be input to a MA-CNN Network (Multi-attribute conditional Neural Network). The MA-CNN network consists of convolutional layers, channel groups, and partial classification subnetworks, and refer to fig. 2 for a specific network structure. The convolutional layer can be trained in advance, and the full-size image is input into the convolutional layer to be subjected to local feature extraction, so that a plurality of feature maps are obtained. The expression of the signature can be expressed as W X, where X represents the convolution, W represents the overall parameter, whose dimensions are W X h c, where W represents the width, h represents the height, and c represents the number of signature channels.
In one embodiment, the feature maps are input into the multiple channels of a channel group, and a position vector is constructed for each local part; the position vector corresponds to a peak response in the input image. The position vectors obtained from each channel are clustered to obtain the part mask image (attention mask). Since it is difficult to acquire attention masks using a single channel, a channel grouping method is adopted to acquire more of them. Channel grouping can cluster spatially correlated subtle patterns into a compact, discriminative part from a group of channels whose peak responses appear in adjacent positions. Specifically, assuming there are 512 feature channels, they can be divided into 4 parts, and each attention mask is equivalent to a weighted sum of the 512 feature maps.
A set of fully connected layers (FC layers) [f_1, f_2, f_3, f_4] generates a set of channel weights. For each part i, f_i takes the convolutional feature W * X as input and outputs a weight vector d_i(X), expressed as follows:

d_i(X) = f_i(W * X)    (1)

where each element of d_i(X) lies in (0, 1).
Each convolutional feature channel corresponds to a certain type of visual pattern, so the channels are grouped to generate attention masks. A channel grouping and weighting sub-network is therefore proposed to cluster, into a compact and distinguishable part, a group of channels whose peak responses appear in adjacent positions as subtle, spatially correlated patterns. Each feature channel is represented by a position vector, expressed as follows:
[t_x^1, t_y^1, t_x^2, t_y^2, ..., t_x^Ω, t_y^Ω]    (2)
the position vector is taken as a feature and the different channels are divided into N groups, which are N partial detectors. Using an indicator vector with length c (number of eigen channels) to determine whether each channel belongs to the group, if so, the channel is 1, and if not, the channel is 0, and each group can be represented by an index function:
[1{1},...,1{j},...,1{c}] (3)
where 1 {. cndot } indicates if the jth signal channel belongs to the ith packet. With the gradient of the FC layer in the channel packet continuously updated, d in the final formula (1)i(X) will tend to equation (3), and the result d of channel grouping is obtained by the above stepsi(X)。
In step S02, adjusting the size of the frame of the corresponding part in the part mask image, and acquiring a part image from the full-size image based on the adjusted part mask image:
in one embodiment, a cropping module is constructed, and the size of the corresponding part frame in the part mask image is adjusted through the cropping module. Referring to fig. 3, the cropping module may be composed of full connection layers, the full connection layers may be a multi-layer structure, each full connection layer uses TanH as an activation function, and the attribution masks obtained by grouping each channel are used as the input of the module, so that the MA-CNN can automatically obtain appropriate parts.
Specifically, when cropping, the FC layer obtains the offsets for the height and width of the part frame. The adjusted width w_a and height h_a are redefined as follows:

w_a = w_f + d_w    (4)
h_a = h_f + d_h    (5)

where d_w and d_h are the outputs of the FC layer, and w_f and h_f are the width and height of the part region in the part mask image. The cropped part is input into a Character-CNN of the MA-CNN network to generate a part feature vector.
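Equations (4)-(5) amount to a simple additive adjustment of the crop box. A minimal sketch, with fixed offset values standing in for the FC-layer outputs (the actual offsets are learned):

```python
def adjust_part_box(w_f, h_f, d_w, d_h):
    """Return the adjusted crop width and height per equations (4)-(5)."""
    w_a = w_f + d_w   # equation (4)
    h_a = h_f + d_h   # equation (5)
    return w_a, h_a

# illustrative numbers only: a 48 x 64 part region widened by 8, shortened by 4
w_a, h_a = adjust_part_box(w_f=48, h_f=64, d_w=8, d_h=-4)
print(w_a, h_a)  # 56 60
```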
In step S03, the classification and weighting process is performed on the part features corresponding to the part image, and the classification and identification result of the full-size image is obtained:
some part features may play a minor role in the results, as different parts contribute differently. The fine-grained features of the 4 parts are therefore balanced by part feature weighting.
In one embodiment, a part feature vector of a part image is extracted through a convolutional layer (Character-CNN), the feature vector is input into a plurality of fully connected layers, and the weight of the part feature vector is set through the classification output probability of a classifier connected with the fully connected layers.
Specifically, the average-pooled output of each Character-CNN can be regarded as a feature vector of dimension 512. The feature vector of each part is input into an FC layer to extract a value v_i representing the compressed feature. The resulting 4 compressed features v_1, v_2, v_3, v_4 are then sent to the next FC layer to extract the part weights:

v_i = f_i(W_p^i * X_p^i)    (6)
w_1, w_2, w_3, w_4 = f_w(v_1, v_2, v_3, v_4)    (7)

where, in equation (6), W_p^i corresponds to the i-th Character-CNN and X_p^i to the part it detects, and w_i denotes the weight corresponding to the i-th part.
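A hedged sketch of equations (6)-(7): each part's 512-d feature vector is compressed to a scalar v_i, and a final layer f_w maps the four scalars to part weights w_1..w_4. Modelling f_w as a softmax, and the random projection standing in for the FC layers, are assumptions, consistent with the later statement that a larger classifier output probability yields a larger part weight.

```python
import numpy as np

rng = np.random.default_rng(1)
part_features = rng.random((4, 512))   # one 512-d feature vector per part
proj = rng.standard_normal(512)        # stand-in for the per-part FC layers f_i
v = part_features @ proj               # compressed features v_1..v_4, eq. (6)

def f_w(values):
    """Stand-in for the weight-extraction layer of eq. (7): a stable softmax."""
    e = np.exp(values - values.max())
    return e / e.sum()

part_weights = f_w(v)                  # w_1..w_4; positive and sum to 1
```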
In one embodiment, a fusion loss function L_f is defined. The features of the Character-CNNs are adaptively fused through part-feature-vector weighting and fused classification, thereby classifying the input full-size image. The fusion loss function is the sum of the classification loss of the part feature vectors and the recognition loss of the full-size image.
The fusion loss function L_f is defined as follows:

L_f = L_sort(ω_a, ω_b) + L_fcls(Y, Y*)    (8)

In equation (8), L_sort represents the multi-channel grouping loss, L_fcls represents the classification loss of the part feature vectors, Y represents the predicted label vector, and Y* the true label vector; ω_a and ω_b are the weights of the part feature vectors. L_sort weights the parts according to the performance of each individual Character-CNN on the fine-grained category: the greater the output probability of a Character-CNN's softmax classifier, the greater the weight of the corresponding part. L_sort is defined as follows:

(the definition of L_sort, equation (9), appears in the source only as an equation image)

where p_a^t and p_b^t respectively denote the prediction probabilities of the correct class label t for the a-th and b-th Character-CNNs.
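Since the source reproduces equation (9) only as an image, the sketch below models L_fcls as cross-entropy and L_sort as a pairwise hinge that prefers ω_a ≥ ω_b whenever part a is more confident about the correct class t than part b (p_a^t > p_b^t). Both concrete forms, and the margin value, are assumptions, not the patent's exact definitions.

```python
import numpy as np

def l_fcls(y_pred, y_true):
    """Cross-entropy between predicted label vector Y and one-hot Y* (assumed form)."""
    return -np.sum(y_true * np.log(y_pred + 1e-12))

def l_sort(omega, p_t, margin=0.05):
    """Pairwise hinge: penalise omega[b] > omega[a] when p_t[a] > p_t[b] (assumed form)."""
    loss = 0.0
    for a in range(len(omega)):
        for b in range(len(omega)):
            if p_t[a] > p_t[b]:          # part a predicts class t more confidently
                loss += max(0.0, omega[b] - omega[a] + margin)
    return loss

y_pred = np.array([0.7, 0.2, 0.1])       # predicted label vector Y
y_true = np.array([1.0, 0.0, 0.0])       # true label vector Y*
omega = np.array([0.4, 0.3, 0.2, 0.1])   # part weights omega_1..omega_4
p_t = np.array([0.9, 0.6, 0.7, 0.5])     # p_i^t: per-part confidence on label t
l_f = l_sort(omega, p_t) + l_fcls(y_pred, y_true)   # equation (8)
```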
In one embodiment, the network parameters are iteratively updated by back-propagating the fusion loss. During the iterations, the part mask image is adjusted through the cropping module, and the weight parameters of the fully connected layers in the network are further corrected until the values of L_sort and L_fcls stabilize, at which point the final full-size image recognition result is output.
The embodiment also provides a fine-grained image recognition system based on the multi-attention neural network, which is used for executing the fine-grained image recognition method based on the multi-attention neural network in the foregoing method embodiments. Since the technical principle of the system embodiment is similar to that of the method embodiment, repeated description of the same technical details is omitted.
In one embodiment, a multi-attention neural network based fine-grained image recognition system includes: a mask acquisition module, a part adjustment module and a classification identification module, wherein the mask acquisition module is used for assisting in executing the step S01 in the foregoing method embodiment; the part adjustment module is used to assist in performing step S02 in the foregoing method embodiment; the classification identification module is used to assist in performing step S03 in the foregoing method embodiments.
In summary, the invention provides a fine-grained image recognition method and system based on a multi-attention neural network. It proposes an adaptive cropping module based on the information in the attention masks, which appropriately adjusts the size of the cropping rectangle and facilitates the subsequent feature expression of the Character-CNNs; a channel grouping loss for compact and diversified part learning, which uses category labels to improve part recognition capability and improves user experience; and a part weighting and adaptive cropping module to evaluate part contributions and further balance and fuse all attended parts. The invention thus effectively overcomes various defects in the prior art and has high value for industrial utilization.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical spirit of the present invention be covered by the claims of the present invention.

Claims (9)

1. A fine-grained image recognition method based on a multi-attention neural network is characterized by comprising the following steps:
acquiring a plurality of part mask images corresponding to different local feature positions from the full-size image;
adjusting the size of a corresponding part frame in the part mask image, and acquiring a part image from the full-size image based on the adjusted part mask image;
and classifying and weighting the part features corresponding to the part images to obtain the identification result of the full-size image.
2. The fine-grained image recognition method based on the multi-attention neural network as claimed in claim 1, wherein the step of obtaining a plurality of part mask images corresponding to different local feature positions from the full-size image comprises the steps of:
and acquiring a characteristic diagram corresponding to the full-size image, inputting the characteristic diagram into a plurality of channels, and clustering based on the position vectors of the characteristics to obtain the part mask image.
3. The fine-grained image recognition method based on the multi-attention neural network as claimed in claim 1, wherein a cropping module is constructed, and the size of the corresponding part frame in the part mask image is adjusted through the cropping module.
4. The fine-grained image recognition method based on the multi-attention neural network according to claim 3, wherein the cropping module comprises a fully connected layer, the part mask image is input into the fully connected layer, and the part frame size is adjusted according to a set size.
5. The fine-grained image recognition method based on the multi-attention neural network according to claim 4, wherein the resizing manner is expressed as:
w_a = w_f + d_w
h_a = h_f + d_h

wherein w_f and h_f are the width and height of the part region in the part mask image; w_a and h_a are the adjusted frame width and height, respectively; and d_w and d_h are the offsets for the width and height, respectively, taken as the outputs of the fully connected layer.
6. The fine-grained image recognition method based on the multi-attention neural network as claimed in claim 1, wherein the classifying and weighting process is performed on the part features corresponding to the part images, and comprises the following steps:
extracting a part feature vector of the part image through a convolutional layer, inputting the part feature vector into a plurality of fully connected layers, and setting the weight of the part feature vector through the classification output probability of a classifier connected to the fully connected layers.
7. The fine-grained image recognition method based on the multi-attention neural network according to claim 6, characterized in that a fusion loss function is constructed, and network parameters are updated through iterative learning according to the fusion loss function.
8. The fine-grained image recognition method based on the multi-attention neural network according to claim 2 or 6, wherein the fusion loss function is expressed as:
L_f = L_sort(ω_a, ω_b) + L_fcls(Y, Y*)

(the definition of L_sort appears in the source only as an equation image)

wherein L_sort represents the multi-channel grouping loss; L_fcls represents the classification loss of the part feature vectors; Y represents the predicted label vector and Y* the true label vector; ω_a and ω_b are the weights of the part feature vectors; and p_a^t and p_b^t are the predicted probabilities of the correct class label t for the a-th and b-th parts, respectively.
9. A multi-attention neural network-based fine-grained image recognition system, comprising:
the mask acquisition module is used for acquiring a plurality of part mask images corresponding to different local feature positions from the full-size image;
the part adjusting module is used for adjusting the size of a frame of a corresponding part in the part mask image and acquiring a part image from the full-size image based on the adjusted part mask image;
and the classification identification module is used for classifying and weighting the part features corresponding to the part images to obtain the identification result of the full-size image.
CN202110170873.7A 2021-02-08 2021-02-08 Fine-granularity image recognition method and system based on multi-attention neural network Active CN112906701B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110170873.7A CN112906701B (en) 2021-02-08 2021-02-08 Fine-granularity image recognition method and system based on multi-attention neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110170873.7A CN112906701B (en) 2021-02-08 2021-02-08 Fine-granularity image recognition method and system based on multi-attention neural network

Publications (2)

Publication Number Publication Date
CN112906701A true CN112906701A (en) 2021-06-04
CN112906701B CN112906701B (en) 2023-07-14

Family

ID=76123934

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110170873.7A Active CN112906701B (en) 2021-02-08 2021-02-08 Fine-granularity image recognition method and system based on multi-attention neural network

Country Status (1)

Country Link
CN (1) CN112906701B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2184010A1 (en) * 2007-06-29 2010-05-12 KATO, Toshinori White matter enhancing device, white matter enhancing method, and program
CN107122375A (en) * 2016-12-12 2017-09-01 南京理工大学 The recognition methods of image subject based on characteristics of image
CN108288288A (en) * 2018-01-16 2018-07-17 华东交通大学 Accurate shaft size measurement method, the device and system of view-based access control model identification
CN108305250A (en) * 2018-01-30 2018-07-20 昆明理工大学 The synchronous identification of unstructured robot vision detection machine components and localization method
KR101928391B1 (en) * 2017-07-17 2018-12-12 서울시립대학교 산학협력단 Method and apparatus for data fusion of multi spectral image and radar image
CN109085178A (en) * 2018-08-28 2018-12-25 武汉科技大学 A kind of accurate on-line monitoring method of defect fingerprint and feedback strategy for increasing material manufacturing
CN109902693A (en) * 2019-02-16 2019-06-18 太原理工大学 One kind being based on more attention spatial pyramid characteristic image recognition methods
CN110570439A (en) * 2018-06-06 2019-12-13 康耐视公司 system and method for finding and classifying lines in an image using a vision system
CN110598029A (en) * 2019-09-06 2019-12-20 西安电子科技大学 Fine-grained image classification method based on attention transfer mechanism
CN111046980A (en) * 2020-03-16 2020-04-21 腾讯科技(深圳)有限公司 Image detection method, device, equipment and computer readable storage medium
CN111144490A (en) * 2019-12-26 2020-05-12 南京邮电大学 Fine granularity identification method based on alternative knowledge distillation strategy
CN111340088A (en) * 2020-02-21 2020-06-26 苏州工业园区服务外包职业学院 Image feature training method, model, device and computer storage medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2184010A1 (en) * 2007-06-29 2010-05-12 KATO, Toshinori White matter enhancing device, white matter enhancing method, and program
CN107122375A (en) * 2016-12-12 2017-09-01 南京理工大学 The recognition methods of image subject based on characteristics of image
KR101928391B1 (en) * 2017-07-17 2018-12-12 서울시립대학교 산학협력단 Method and apparatus for data fusion of multi spectral image and radar image
CN108288288A (en) * 2018-01-16 2018-07-17 华东交通大学 Accurate shaft size measurement method, the device and system of view-based access control model identification
CN108305250A (en) * 2018-01-30 2018-07-20 昆明理工大学 The synchronous identification of unstructured robot vision detection machine components and localization method
CN110570439A (en) * 2018-06-06 2019-12-13 康耐视公司 system and method for finding and classifying lines in an image using a vision system
CN109085178A (en) * 2018-08-28 2018-12-25 武汉科技大学 A kind of accurate on-line monitoring method of defect fingerprint and feedback strategy for increasing material manufacturing
CN109902693A (en) * 2019-02-16 2019-06-18 太原理工大学 One kind being based on more attention spatial pyramid characteristic image recognition methods
CN110598029A (en) * 2019-09-06 2019-12-20 西安电子科技大学 Fine-grained image classification method based on attention transfer mechanism
CN111144490A (en) * 2019-12-26 2020-05-12 南京邮电大学 Fine granularity identification method based on alternative knowledge distillation strategy
CN111340088A (en) * 2020-02-21 2020-06-26 苏州工业园区服务外包职业学院 Image feature training method, model, device and computer storage medium
CN111046980A (en) * 2020-03-16 2020-04-21 腾讯科技(深圳)有限公司 Image detection method, device, equipment and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
宗学文 (Zong Xuewen) et al.: "基于面成型光固化3D打印制件的尺寸精度" [Dimensional accuracy of parts made by surface-forming stereolithography 3D printing], 《工程塑料应用》 (Engineering Plastics Application), pages 41-46 *

Also Published As

Publication number Publication date
CN112906701B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN110222653B (en) Skeleton data behavior identification method based on graph convolution neural network
CN108446689B (en) Face recognition method
CN109063649B (en) Pedestrian re-identification method based on twin pedestrian alignment residual error network
CN108090472B (en) Pedestrian re-identification method and system based on multi-channel consistency characteristics
CN107767416B (en) Method for identifying pedestrian orientation in low-resolution image
CN112464865A (en) Facial expression recognition method based on pixel and geometric mixed features
CN109711422A (en) Image real time transfer, the method for building up of model, device, computer equipment and storage medium
CN110674874A (en) Fine-grained image identification method based on target fine component detection
US20240070858A1 (en) Capsule endoscope image recognition method based on deep learning, and device and medium
CN112036260B (en) Expression recognition method and system for multi-scale sub-block aggregation in natural environment
CN106503672A (en) A kind of recognition methods of the elderly's abnormal behaviour
CN110751027B (en) Pedestrian re-identification method based on deep multi-instance learning
CN112396036B (en) Method for re-identifying blocked pedestrians by combining space transformation network and multi-scale feature extraction
CN111881716A (en) Pedestrian re-identification method based on multi-view-angle generation countermeasure network
CN112364747A (en) Target detection method under limited sample
CN115713546A (en) Lightweight target tracking algorithm for mobile terminal equipment
CN114283326A (en) Underwater target re-identification method combining local perception and high-order feature reconstruction
CN114492634A (en) Fine-grained equipment image classification and identification method and system
CN114170659A (en) Facial emotion recognition method based on attention mechanism
CN114066844A (en) Pneumonia X-ray image analysis model and method based on attention superposition and feature fusion
CN116796248A (en) Forest health environment assessment system and method thereof
CN111144422A (en) Positioning identification method and system for aircraft component
CN116580289A (en) Fine granularity image recognition method based on attention
CN113903043B (en) Method for identifying printed Chinese character font based on twin metric model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 400000 6-1, 6-2, 6-3, 6-4, building 7, No. 50, Shuangxing Avenue, Biquan street, Bishan District, Chongqing

Applicant after: CHONGQING ZHAOGUANG TECHNOLOGY CO.,LTD.

Address before: 400000 2-2-1, 109 Fengtian Avenue, tianxingqiao, Shapingba District, Chongqing

Applicant before: CHONGQING ZHAOGUANG TECHNOLOGY CO.,LTD.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant