CN111340021B - Unsupervised domain adaptive target detection method based on center alignment and relation significance - Google Patents

Unsupervised domain adaptive target detection method based on center alignment and relation significance

Info

Publication number
CN111340021B
Authority
CN
China
Prior art keywords
domain
target
center
proposal
category
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010104273.6A
Other languages
Chinese (zh)
Other versions
CN111340021A (en)
Inventor
张勇东
张天柱
吴泽远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN202010104273.6A
Publication of CN111340021A
Application granted
Publication of CN111340021B
Active legal status (current)
Anticipated expiration legal status

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/088: Non-supervised learning, e.g. competitive learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unsupervised domain adaptive target detection method based on center alignment and relationship significance. In the training stage, a detector generates corresponding target region proposals for images of the source domain and the target domain; relational modeling is performed between the target region proposals and the category centers, and both the category centers and the target region proposals are updated; the updated category centers are used to pull each category of the target domain closer to the corresponding category of the source domain, so that, with the help of source-domain information, the different categories of the target domain become better separated. After training, classification and detection are performed directly on target-domain images. The method does not compute the category centers separately; instead, the category centers and the target region proposals are placed in a graph and updated together, so the model can be trained end to end. When the category centers are aligned, the differences between categories of the target domain are enlarged while the distribution difference between the source domain and the target domain is reduced, so the target domain can be classified effectively.

Description

Unsupervised domain adaptive target detection method based on center alignment and relation significance
Technical Field
The invention relates to the technical field of target detection, in particular to an unsupervised domain adaptive target detection method based on center alignment and relationship significance.
Background
Object detection is a fundamental problem in computer vision and has made rapid progress in recent years driven by deep learning. However, it faces a serious problem: when the distribution of the test data differs from that of the training data, detection performance degrades severely. This problem is called domain shift; the data domain used to train the model is called the source domain, and the test data domain is called the target domain. One way to solve the problem is to collect data from the target domain, label it, and then train on it. However, manual labeling consumes a great deal of manpower and material resources, especially in object detection, where object positions must also be annotated.
In recent years, a class of methods called unsupervised domain adaptation has appeared in the field of image recognition; the main idea is to match the distributions of the source domain and the target domain. These methods have also been applied to target detection, but current approaches largely borrow from image recognition and do not fully account for the characteristics of target detection. Specifically: (1) objects in an image are considered individually, which fails to exploit the relationships among multiple objects in the image; (2) the adopted unsupervised domain adaptation methods usually align the domains as a whole, and because the category information of the target-domain data is not considered in detail, the classification effect is poor.
Disclosure of Invention
The invention aims to provide an unsupervised domain adaptive target detection method based on center alignment and relationship significance, which can be effectively used for target detection.
The purpose of the invention is realized by the following technical scheme:
an unsupervised domain adaptive target detection method based on center alignment and relationship significance comprises the following steps:
a training stage: generating corresponding target region proposals for images of the source domain and the target domain through a detector, wherein a target region proposal refers to the features extracted from a region where a target may exist; performing relational modeling between the target region proposals and the category centers, and updating the category centers and the target region proposals; using the updated category centers to shorten the distance between each category of the target domain and the corresponding category of the source domain, so that, with the help of source-domain information, the different categories of the target domain become better separated;
after training, classification and detection are carried out directly on the target-domain image: target region proposals of the target-domain image are generated first, relational modeling is then performed between the target region proposals and the category centers of the target-domain image, and the finally updated target region proposals are passed through the final classifier and regressor to obtain the target detection result.
According to the technical scheme provided by the invention, the category centers do not need to be calculated separately; instead, the category centers and the target region proposals are placed in the graph and updated together, so that the model can be trained end to end. When the category centers are aligned, the differences between categories of the target domain are enlarged while the distribution difference between the source domain and the target domain is reduced, so that the target domain can be classified effectively.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an unsupervised domain adaptive target detection method based on center alignment and relationship significance according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides an unsupervised domain adaptive target detection method based on center alignment and relationship significance. The scheme can be applied to fields such as automatic driving and video surveillance, and effectively improves detection performance under different scene conditions such as weather, illumination and viewing angle. In practice, it can be seamlessly integrated with existing detection systems and introduces almost no extra storage or computation overhead.
Before introducing the method, the data format is first defined. Source-domain data (x_S, y_S) ∈ D_S, x_S ∈ X_S, y_S ∈ Y_S, where D_S denotes the data distribution of the source domain, X_S the source-domain sample space, and Y_S the source-domain label space. Target-domain data x_T ∈ X_T, and the target-domain distribution is denoted D_T. The target region proposals (region proposals) are denoted P ∈ R^(N×D), and the category centers of each class of target are denoted C ∈ R^(K×D), where a target proposal consists of the features extracted from a region in which a target may exist, N is the number of target proposals, D is the dimension of a target proposal (each target proposal after region pooling is taken as an input of the module), and K is the number of categories. P and C exist for both the source domain and the target domain and are hereinafter distinguished by the subscripts S and T.
Fig. 1 is a flowchart of the unsupervised domain adaptive target detection method based on center alignment and relationship significance provided by the present invention. As shown in Fig. 1, in the training phase, corresponding target region proposals are generated by a detector for images of the source domain and the target domain; relational modeling is performed between the target region proposals and the category centers, and both are updated; the updated category centers are used to pull each category of the target domain closer to the corresponding category of the source domain, so that, with the help of source-domain information, the different categories of the target domain become better separated. After training, classification and detection are carried out directly on the target-domain image: target region proposals of the target-domain image are generated first, relational modeling is then performed between these proposals and the category centers of the target-domain image, and the finally updated target region proposals are passed through the final classifier and regressor to obtain the target detection result.
The method is described in detail below.
Our network is based on the two-stage detector Faster R-CNN: target regions are first extracted and then classified and position-adjusted. On top of Faster R-CNN, we add a binary (two-class) domain classifier for domain alignment, a relational significance module for relational modeling, and a category center alignment module for category alignment.
First, a domain alignment mechanism.
For images of the source domain and the target domain, feature maps are extracted by the feature extractor and are then fed into a region proposal network and a region pooling module to generate the corresponding target region proposals; a target region proposal is the feature extracted from a region in which a target may exist. In an ordinary target detector, the region proposals are sent to the final classifier and regressor for classification and localization; in our network, besides the final classification and localization, the region proposals are also used for the subsequent relational significance modeling. The region proposal network generates regions where targets may exist, and the region pooling module pools the features in each region so that regions of different sizes end up with features of the same dimension. Meanwhile, we operate on the feature map (before the region proposal network) to achieve alignment at the level of the whole image domain, i.e., the domain alignment mechanism. The purpose of domain alignment is to make the detector adapt to target-domain images; the domain alignment loss also updates the feature extractor. The details are as follows:
in the embodiment of the invention, a confrontation discriminator is adopted to discriminate the feature maps of a source domain and a target domain; for pictures from a source domain and a target domain respectively, generating a feature map by a feature extractor respectively, and then performing secondary classification by using a discriminator to generate a penalty function as follows:
L_da(X_S, X_T) = E_{x~D_S}[ log D(G(x)) ] + E_{x~D_T}[ log(1 − D(G(x))) ]

where x is the input image, G denotes the feature extractor, D denotes the discriminator, and D(G(x)) denotes the compound operation of G and D; X_S and X_T denote the source-domain and target-domain sample spaces, respectively, D_S denotes the data distribution of the source domain and D_T the data distribution of the target domain; E denotes expectation.
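As a minimal sketch of how such a domain-alignment loss could be computed, the following PyTorch-style code is illustrative only: the discriminator architecture, all names, and the handling of the adversarial (feature-extractor) side are assumptions, not the patent's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainDiscriminator(nn.Module):
    """Binary classifier D that predicts whether a feature map comes from the source domain.

    A hypothetical architecture; the text only requires some discriminator over feature maps.
    """
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, kernel_size=1),
            nn.AdaptiveAvgPool2d(1),  # one domain score per image
        )

    def forward(self, feat):
        # (B, 1): probability that the feature map comes from the source domain
        return torch.sigmoid(self.net(feat)).flatten(1)

def domain_alignment_loss(disc, feat_src, feat_tgt):
    """L_da: source feature maps should be classified as 1, target feature maps as 0.

    feat_src / feat_tgt are feature maps G(x) from the shared feature extractor.
    The adversarial update of the feature extractor (e.g. via a gradient reversal
    layer) is omitted here.
    """
    p_src = disc(feat_src)
    p_tgt = disc(feat_tgt)
    return (F.binary_cross_entropy(p_src, torch.ones_like(p_src))
            + F.binary_cross_entropy(p_tgt, torch.zeros_like(p_tgt)))
```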
Second, relational significance modeling
For the source domain and the target domain separately, the generated target region proposals are placed together with the category centers to construct a graph network G(V, E), where V is the node set formed by the target region proposals and the category centers, and E is the set of edges between nodes; the category centers are initialized as all-zero vectors and are updated iteratively by the graph network. An adjacency matrix A ∈ R^((N+K)×(N+K)) represents the relations among the nodes, and information is propagated between the nodes in the graph network, so that the category centers are generated from the target region proposals while the category centers in turn fine-tune the features of the target region proposals. The adjacency matrix is computed as A = W·W^T, where W is the classification output of the classifier on the node set V. For the category centers, the classifier result is not used; instead, one-hot vectors are directly defined as the class output of the corresponding category center, i.e., for the n-th category center the class output is a K-dimensional vector whose n-th dimension is 1 and whose remaining dimensions are 0. This prevents the mutual interference that classifier outputs would cause in the dot products between category centers. After A is computed, its diagonal is set to 0, so that each node excludes itself when the adjacency matrix is multiplied with the node features. Then the features of each node and the other nodes are aggregated by matrix multiplication, with the source domain and the target domain computed in parallel:
P_S, C_S = A_S * V_S
P_T, C_T = A_T * V_T
where P and C denote the target region proposals and the category centers, respectively, subscript S denotes the source domain, and subscript T denotes the target domain.
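The following sketch illustrates only this aggregation step for a single domain (the weighted update with the original features follows below). It assumes PyTorch, that the proposal classification outputs are softmax scores, and that all function and variable names are hypothetical.

```python
import torch
import torch.nn.functional as F

def aggregate_nodes(proposal_feats, centers, proposal_logits, num_classes):
    """One round of relation-significance message passing for a single domain.

    proposal_feats : (N, D) pooled target region proposals
    centers        : (K, D) current category centers
    proposal_logits: (N, K) classifier outputs on the proposals
    Returns the graph-aggregated proposals (N, D) and centers (K, D).
    """
    # Node set V: proposals followed by category centers.
    V = torch.cat([proposal_feats, centers], dim=0)                   # (N+K, D)

    # Classification output W on V: softmax scores for the proposals,
    # fixed one-hot vectors for the category centers.
    W = torch.cat([F.softmax(proposal_logits, dim=1),
                   torch.eye(num_classes, device=V.device)], dim=0)   # (N+K, K)

    # Adjacency A = W W^T, with the diagonal zeroed so a node ignores itself.
    A = W @ W.t()                                                      # (N+K, N+K)
    A.fill_diagonal_(0)

    # Aggregate every node from the other nodes: [P; C] = A V.
    V_agg = A @ V                                                      # (N+K, D)
    n = proposal_feats.size(0)
    return V_agg[:n], V_agg[n:]
```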
In the embodiment of the invention, the result of the matrix multiplication is not used directly as the final result; instead, the aggregated node features and the original node features are combined by a weighted sum, so as not to damage the original features excessively, with coefficients α and θ (written below in convex-combination form):

P_S^t = α·P̂_S^t + (1 − α)·ΔP_S^t,  C_S^t = θ·C_S^(t−1) + (1 − θ)·ΔC_S^t
P_T^t = α·P̂_T^t + (1 − α)·ΔP_T^t,  C_T^t = θ·C_T^(t−1) + (1 − θ)·ΔC_T^t

where t and t−1 denote iteration indices and α and θ denote update coefficients; P_S^t, P̂_S^t and ΔP_S^t are, respectively, the source-domain target region proposals after the t-th iteration, the source-domain target region proposals generated by the region proposal network at the t-th iteration, and the updated information calculated by the graph network at the t-th iteration; P_T^t, P̂_T^t and ΔP_T^t are, respectively, the target-domain target region proposals after the t-th iteration, the target-domain target region proposals generated by the region proposal network at the t-th iteration, and the updated information calculated by the graph network at the t-th iteration; C_S^t, C_S^(t−1) and ΔC_S^t are, respectively, the source-domain category centers after the t-th iteration, the source-domain category centers after the (t−1)-th iteration, and the category centers generated from ΔP_S^t; C_T^t, C_T^(t−1) and ΔC_T^t are, respectively, the target-domain category centers after the t-th iteration, the target-domain category centers after the (t−1)-th iteration, and the category centers generated from ΔP_T^t. Each Δ term is a weighted sum of the nodes other than the current node, with the weights given by the similarity between the current node and the other nodes.
From the above update formulas, the target proposals are recomputed at every iteration and the result of the previous iteration is not retained, whereas the category centers are global: they are initialized to 0, maintained separately for the source domain and the target domain, and improve as the iterations proceed.
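A sketch of this weighted update, under the assumption that the update rules take the convex-combination form written above; `aggregate_nodes` is the hypothetical helper sketched earlier, and the default values of α, θ and the number of iterations are illustrative, not values from the text.

```python
def relation_significance_update(prop_rpn, centers_prev, prop_logits,
                                 num_classes, alpha=0.5, theta=0.9, iters=1):
    """Iteratively refine the proposals and the global category centers of one domain.

    prop_rpn     : (N, D) proposals produced by the region proposal network (P̂^t)
    centers_prev : (K, D) global category centers from the previous iteration (C^(t-1)),
                   initialised to all zeros at the start of training
    """
    proposals, centers = prop_rpn, centers_prev
    for _ in range(iters):
        # In a full implementation the classifier scores would be recomputed for the
        # updated proposals at each iteration; here they are kept fixed for brevity.
        delta_p, delta_c = aggregate_nodes(proposals, centers, prop_logits, num_classes)
        # Proposals are recomputed from the RPN output at every iteration.
        proposals = alpha * prop_rpn + (1.0 - alpha) * delta_p
        # Centers are global: blend the previous centers with the graph-generated ones.
        centers = theta * centers_prev + (1.0 - theta) * delta_c
    return proposals, centers
```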
Third, category center alignment.
After the category centers are obtained, the distance between each category of the target domain and the corresponding category of the source domain can be shortened by utilizing the category centers, so that, by virtue of the source-domain information, the different categories of the target domain are pulled apart and can be better distinguished.
The operations herein produce a semantic loss, expressed as:
L_sm(X_S, Y_S, X_T) = Σ_{k=1}^{K} Φ(C_S^k, C_T^k)

where Y_S denotes the source-domain label space, K denotes the number of categories, C_S^k and C_T^k denote the category centers of the k-th category of the source domain and the target domain, respectively, and Φ(·, ·) is a metric function (e.g., a two-norm distance).
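A minimal sketch of this center-alignment loss, assuming the metric Φ is the squared two-norm distance (one of the examples mentioned above); the function name is illustrative.

```python
def center_alignment_loss(centers_src, centers_tgt):
    """L_sm: pull the source and target centers of every category together.

    centers_src, centers_tgt : (K, D) category centers of the two domains.
    Uses the squared two-norm as the metric function Φ and sums over the K categories.
    """
    return ((centers_src - centers_tgt) ** 2).sum(dim=1).sum()
```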
Combining the adversarial loss, the semantic loss, and the detection loss of the source domain, the whole network is trained jointly. The total loss function is:

L = L_det(X_S, Y_S) + γ·L_da(X_S, X_T) + λ·L_sm(X_S, Y_S, X_T)

where γ and λ are balancing coefficients. L_det(X_S, Y_S), the detection loss of the source domain, consists of two parts: the classification loss and position regression loss produced by the region proposal network, and the classification loss and position regression loss produced by feeding the target region proposals of the source-domain images into the final classifier and regressor.
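Putting the pieces together, a hedged sketch of the total training objective; the argument names and the default weights are placeholders, not values from the text.

```python
def total_loss(det_loss, da_loss, sm_loss, gamma=0.1, lam=0.1):
    """L = L_det + γ·L_da + λ·L_sm, with γ and λ as balancing coefficients (values assumed)."""
    return det_loss + gamma * da_loss + lam * sm_loss
```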
In the embodiment of the present invention, the final classifier and regressor mentioned above are placed at the end of the whole model; this classifier is the same classifier as the one used earlier to output the classification scores of the region proposals. In the training stage, the input of the final classifier and regressor is the output of the category center alignment module in Fig. 1. The final classifier and regressor are not shown in Fig. 1, but the detection loss in Fig. 1 covers them; their implementation follows the prior art and is not described again.
In the testing stage, category center alignment is not required, so the target detection result is obtained by feeding the target region proposals updated by the relational significance module directly into the final classifier and regressor.
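As a sketch of this test-time flow, the following uses the hypothetical helpers defined above; all module names (`backbone`, `rpn_and_pool`, etc.) are illustrative stand-ins for the detector components.

```python
def detect_target_image(image, backbone, rpn_and_pool, classifier, regressor,
                        centers_tgt, num_classes):
    """Inference on a target-domain image: no domain classifier, no center alignment loss."""
    feat = backbone(image)                                   # feature map
    proposals, logits = rpn_and_pool(feat)                   # (N, D) pooled proposals + (N, K) scores
    proposals, _ = relation_significance_update(proposals, centers_tgt, logits, num_classes)
    return classifier(proposals), regressor(proposals)       # class scores and box offsets
```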
According to the scheme of the embodiment of the invention, the category centers do not need to be computed separately; the category centers and the target proposals are placed in the graph and updated together, so the model can be trained end to end. Information is shared among the features by graph convolution, so context relationship information is better taken into account. Through the constraint on the category centers, the method enlarges the differences between categories of the target domain while reducing the distribution difference between the source domain and the target domain, so the target domain can be classified effectively. The method is based on Faster R-CNN, requires no additional network parameters, and adds no burden to storage or computation speed. After the whole network is trained, only a target-domain picture needs to be input: target region proposals are extracted, updated by the relational modeling part, and fed into the final classifier and regressor; the binary domain classifier (for domain alignment) and category center alignment are not needed at this stage.
Through the above description of the embodiments, it is clear to those skilled in the art that the above embodiments can be implemented by software, and can also be implemented by software plus a necessary general hardware platform. With this understanding, the technical solutions of the embodiments can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which can be a personal computer, a server, or a network device, etc.) to execute the methods according to the embodiments of the present invention.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.

Claims (4)

1. An unsupervised domain adaptive target detection method based on center alignment and relationship significance is characterized by comprising the following steps:
a training stage: generating corresponding target region proposals for images of the source domain and the target domain through a detector, wherein a target region proposal refers to the features extracted from a region where a target may exist; performing relational modeling between the target region proposals and the category centers, and updating the category centers and the target region proposals; using the updated category centers to shorten the distance between each category of the target domain and the corresponding category of the source domain, so that, with the help of source-domain information, the different categories of the target domain become better separated;
after training, directly carrying out classification and detection on the target-domain images, namely first generating target region proposals of the target-domain image, then performing relational modeling between the target region proposals and the category centers of the target-domain image, and finally passing the updated target region proposals through the final classifier and regressor to obtain the target detection result;
wherein performing relational modeling between the target region proposals and the category centers comprises:
for the source domain and the target domain separately, putting the generated target region proposals together with the category centers to construct a graph G(V, E), wherein V represents the node set formed by the target region proposals and the category centers, E represents the set of edges between nodes, and the category centers are initialized as all-zero vectors;
an adjacency matrix A is used to represent the relations between the nodes, and information is transmitted between the nodes in the graph, so that the category centers are generated from the target region proposals and the category centers in turn fine-tune the features of the target region proposals; the adjacency matrix A is computed as A = W·W^T, wherein W is the classification output of the classifier on the node set V; for the category centers, one-hot vectors are directly defined as the class output of the corresponding category center, namely for the n-th category center the class output is a K-dimensional vector whose n-th dimension is 1 and whose remaining dimensions are 0; after the adjacency matrix A is calculated, its diagonal is set to 0; then, the features of each node and the other nodes are aggregated by matrix multiplication, with the source domain and the target domain calculated in parallel:
P_S, C_S = A_S * V_S
P_T, C_T = A_T * V_T
wherein P and C represent the target region proposals and the category centers, respectively, subscript S represents the source domain, and subscript T represents the target domain.
2. The unsupervised domain-adaptive target detection method based on center alignment and relationship significance as claimed in claim 1, wherein the generating corresponding target region proposals for the images of the source domain and the target domain by the detector comprises:
for the images of the source domain and the target domain, generating the corresponding target region proposals through a feature extractor, a region proposal network, and a region pooling module, respectively; the region pooling module pools the features in the regions generated by the region proposal network so that regions of different sizes finally have features of the same dimension.
3. The unsupervised domain-adaptive target detection method based on center alignment and relationship significance as claimed in claim 1, wherein the formulas for updating the category centers and the target region proposals are as follows:

P_S^t = α·P̂_S^t + (1 − α)·ΔP_S^t,  C_S^t = θ·C_S^(t−1) + (1 − θ)·ΔC_S^t
P_T^t = α·P̂_T^t + (1 − α)·ΔP_T^t,  C_T^t = θ·C_T^(t−1) + (1 − θ)·ΔC_T^t

wherein α and θ both represent update coefficients; P_S^t, P̂_S^t and ΔP_S^t are, respectively, the source-domain target region proposals after the t-th iteration, the source-domain target region proposals generated by the region proposal network at the t-th iteration, and the updated information calculated by the graph network at the t-th iteration; P_T^t, P̂_T^t and ΔP_T^t are, respectively, the target-domain target region proposals after the t-th iteration, the target-domain target region proposals generated by the region proposal network at the t-th iteration, and the updated information calculated by the graph network at the t-th iteration; C_S^t, C_S^(t−1) and ΔC_S^t are, respectively, the source-domain category centers after the t-th iteration, the source-domain category centers after the (t−1)-th iteration, and the category centers generated from ΔP_S^t; C_T^t, C_T^(t−1) and ΔC_T^t are, respectively, the target-domain category centers after the t-th iteration, the target-domain category centers after the (t−1)-th iteration, and the category centers generated from ΔP_T^t.
4. The unsupervised domain adaptive target detection method based on center alignment and relationship significance as claimed in claim 2, characterized in that the method further introduces a domain alignment mechanism to achieve alignment at the image-domain level; an adversarial discriminator is adopted to discriminate between the feature maps of the source domain and the target domain; for pictures from the source domain and the target domain, feature maps are generated by the feature extractor, and the discriminator then performs binary classification, producing an adversarial loss of the standard binary cross-entropy form:

L_da(X_S, X_T) = E_{x~D_S}[ log D(G(x)) ] + E_{x~D_T}[ log(1 − D(G(x))) ]

where x is the input image, G represents the feature extractor, D represents the discriminator, and D(G(x)) represents the compound operation of G and D; X_S and X_T represent the source-domain and target-domain sample spaces, respectively, D_S represents the data distribution of the source domain and D_T the data distribution of the target domain; E represents expectation;

when the updated category centers are used to shorten the distance between the target domain and the source domain, a semantic loss is produced, expressed as:

L_sm(X_S, Y_S, X_T) = Σ_{k=1}^{K} Φ(C_S^k, C_T^k)

wherein Y_S represents the source-domain label space, K represents the number of categories, C_S^k and C_T^k respectively represent the category centers of the k-th category of the source domain and the target domain, and Φ(·, ·) is a metric function;

the total loss function for the training phase is:

L = L_det(X_S, Y_S) + γ·L_da(X_S, X_T) + λ·L_sm(X_S, Y_S, X_T)

wherein γ and λ are balancing coefficients; L_det(X_S, Y_S), the detection loss of the source domain, consists of two parts: the classification loss and position regression loss produced by the region proposal network, and the classification loss and position regression loss produced by feeding the target region proposals of the source-domain images into the final classifier and regressor.
CN202010104273.6A 2020-02-20 2020-02-20 Unsupervised domain adaptive target detection method based on center alignment and relation significance Active CN111340021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010104273.6A CN111340021B (en) 2020-02-20 2020-02-20 Unsupervised domain adaptive target detection method based on center alignment and relation significance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010104273.6A CN111340021B (en) 2020-02-20 2020-02-20 Unsupervised domain adaptive target detection method based on center alignment and relation significance

Publications (2)

Publication Number Publication Date
CN111340021A CN111340021A (en) 2020-06-26
CN111340021B true CN111340021B (en) 2022-07-15

Family

ID=71183511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010104273.6A Active CN111340021B (en) 2020-02-20 2020-02-20 Unsupervised domain adaptive target detection method based on center alignment and relation significance

Country Status (1)

Country Link
CN (1) CN111340021B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110880019B (en) * 2019-10-30 2022-07-12 北京中科研究院 Method for adaptively training target domain classification model through unsupervised domain
US11544503B2 (en) * 2020-04-06 2023-01-03 Adobe Inc. Domain alignment for object detection domain adaptation tasks
CN112115916B (en) * 2020-09-29 2023-05-02 西安电子科技大学 Domain adaptive Faster R-CNN semi-supervised SAR detection method
CN112308158B (en) * 2020-11-05 2021-09-24 电子科技大学 Multi-source field self-adaptive model and method based on partial feature alignment
CN112488229B (en) * 2020-12-10 2024-04-05 西安交通大学 Domain self-adaptive unsupervised target detection method based on feature separation and alignment
CN112434754A (en) * 2020-12-14 2021-03-02 前线智能科技(南京)有限公司 Cross-modal medical image domain adaptive classification method based on graph neural network
CN112668594B (en) * 2021-01-26 2021-10-26 华南理工大学 Unsupervised image target detection method based on antagonism domain adaptation
CN113436197B (en) * 2021-06-07 2022-10-04 华东师范大学 Domain-adaptive unsupervised image segmentation method based on generation of confrontation and class feature distribution
CN113535951B (en) * 2021-06-21 2023-02-17 深圳大学 Method, device, terminal equipment and storage medium for information classification
CN113688867B (en) * 2021-07-20 2023-04-28 广东工业大学 Cross-domain image classification method
CN113807420B (en) * 2021-09-06 2024-03-19 湖南大学 Domain self-adaptive target detection method and system considering category semantic matching
CN113887544B (en) * 2021-12-07 2022-02-15 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113920382B (en) * 2021-12-15 2022-03-15 深圳大学 Cross-domain image classification method based on class consistency structured learning and related device
CN115908723B (en) * 2023-03-09 2023-06-16 中国科学技术大学 Polar line guided multi-view three-dimensional reconstruction method based on interval perception

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108235770A (en) * 2017-12-29 2018-06-29 深圳前海达闼云端智能科技有限公司 image identification method and cloud system
CN109753992A (en) * 2018-12-10 2019-05-14 南京师范大学 The unsupervised domain for generating confrontation network based on condition adapts to image classification method
CN110363122A (en) * 2019-07-03 2019-10-22 昆明理工大学 A kind of cross-domain object detection method based on multilayer feature alignment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10885383B2 (en) * 2018-05-16 2021-01-05 Nec Corporation Unsupervised cross-domain distance metric adaptation with feature transfer network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108235770A (en) * 2017-12-29 2018-06-29 深圳前海达闼云端智能科技有限公司 image identification method and cloud system
CN109753992A (en) * 2018-12-10 2019-05-14 南京师范大学 The unsupervised domain for generating confrontation network based on condition adapts to image classification method
CN110363122A (en) * 2019-07-03 2019-10-22 昆明理工大学 A kind of cross-domain object detection method based on multilayer feature alignment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Sisi Cao et al. An Iterative Unsupervised Person Search Algorithm on Natural Scene Images. IEEE, 2020. *
Domain adaptive image classification method based on local neighborhood geometric information of the target domain; 唐宋 (Tang Song) et al.; Journal of Computer Applications (计算机应用); 2017-04-30; Vol. 37, No. 4; pp. 1164-1168 *

Also Published As

Publication number Publication date
CN111340021A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN111340021B (en) Unsupervised domain adaptive target detection method based on center alignment and relation significance
US11741693B2 (en) System and method for semi-supervised conditional generative modeling using adversarial networks
US10885383B2 (en) Unsupervised cross-domain distance metric adaptation with feature transfer network
CN111062951B (en) Knowledge distillation method based on semantic segmentation intra-class feature difference
CN108256561B (en) Multi-source domain adaptive migration method and system based on counterstudy
Zhang et al. Head pose estimation in seminar room using multi view face detectors
Shamsolmoali et al. Multipatch feature pyramid network for weakly supervised object detection in optical remote sensing images
CN114492574A (en) Pseudo label loss unsupervised countermeasure domain adaptive picture classification method based on Gaussian uniform mixing model
WO2022218396A1 (en) Image processing method and apparatus, and computer readable storage medium
CN113469186B (en) Cross-domain migration image segmentation method based on small number of point labels
WO2023185494A1 (en) Point cloud data identification method and apparatus, electronic device, and storage medium
CN111797814A (en) Unsupervised cross-domain action recognition method based on channel fusion and classifier confrontation
JP6620882B2 (en) Pattern recognition apparatus, method and program using domain adaptation
CN109389156B (en) Training method and device of image positioning model and image positioning method
Vandenhende et al. A three-player gan: generating hard samples to improve classification networks
CN112085055A (en) Black box attack method based on migration model Jacobian array feature vector disturbance
CN112488229A (en) Domain self-adaptive unsupervised target detection method based on feature separation and alignment
Patel et al. An optimized deep learning model for flower classification using NAS-FPN and faster R-CNN
CN116670687A (en) Method and system for adapting trained object detection models to domain offsets
CN112508029A (en) Instance segmentation method based on target box labeling
CN115273154A (en) Thermal infrared pedestrian detection method and system based on edge reconstruction and storage medium
KR20180054406A (en) Image processing apparatus and method
CN113888603A (en) Loop detection and visual SLAM method based on optical flow tracking and feature matching
CN110942463B (en) Video target segmentation method based on generation countermeasure network
CN112380970B (en) Video target detection method based on local area search

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant