CN114463601B - Big data-based target identification data processing system - Google Patents
Big data-based target identification data processing system Download PDFInfo
- Publication number
- CN114463601B CN114463601B CN202210376446.9A CN202210376446A CN114463601B CN 114463601 B CN114463601 B CN 114463601B CN 202210376446 A CN202210376446 A CN 202210376446A CN 114463601 B CN114463601 B CN 114463601B
- Authority
- CN
- China
- Prior art keywords
- data
- model
- node
- module
- label
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
Abstract
The application discloses a data processing system for target identification based on big data, wherein a data acquisition module in the data processing system is used for acquiring target priori knowledge; the data dividing module is used for randomly dividing the target priori knowledge into a training data set and a verification data set according to a preset proportion; the model construction module is used for determining an image feature extraction mode according to the data type of the training data set, extracting image features in the training data set based on the image feature extraction mode, and constructing a hypergraph model based on the image features; the model judging module is used for training the hypergraph model by utilizing the verification data set, when the hypergraph model is judged to be converged, the hypergraph model is recorded as a data processing model, and the data processing model is used for outputting a data label of data to be identified. Through the technical scheme in the application, the structure of the machine learning model is facilitated to be simplified, the model convergence efficiency is improved, and meanwhile, the accuracy of the predicted data label is facilitated to be improved.
Description
Technical Field
The application relates to the technical field of data processing, in particular to a data processing system for target identification based on big data.
Background
With the rapid development of networks and the explosive growth of various data information, the development of artificial intelligence technology is greatly promoted, wherein the image target identification technology is the basis of many application technologies (such as automatic driving of automobiles and grabbing of objects by mechanical arms), and therefore, the type identification of the object image is an essential link in artificial intelligence.
With the continuous development and optimization of the machine learning algorithm, certain accuracy can be ensured by utilizing the machine learning algorithm to automatically identify the target of the object. However, the model structure of such machine learning is usually complex and has a deep depth, which results in a difficult model training and a problem that the model cannot be converged. Therefore, how to simplify the model structure and improve the convergence rate of the model is an urgent problem to be solved in the process of using machine learning for image target identification.
Disclosure of Invention
The purpose of this application lies in: how to simplify the structure of the machine learning model and improve the convergence efficiency of the model.
The technical scheme of the application is as follows: there is provided a data processing system for big data based object recognition, the data processing system comprising: the device comprises a data acquisition module, a data division module, a model construction module and a model judgment module; the data acquisition module is used for acquiring target priori knowledge, wherein the target priori knowledge comprises sample image data and sample label data; the data dividing module is used for randomly dividing the target priori knowledge into a training data set and a verification data set according to a preset proportion; the model construction module is used for determining an image feature extraction mode according to the data type of the training data set, extracting image features in the training data set based on the image feature extraction mode, and constructing a hypergraph model based on the image features, wherein a label transfer loss function is arranged in the hypergraph model and consists of the sum of feature similarity, label sensitivity and an empirical loss term; the model judging module is used for training the hypergraph model by utilizing the verification data set, when the hypergraph model is judged to be converged, the hypergraph model is recorded as a data processing model, and the data processing model is used for outputting a data label of data to be identified.
In any one of the above technical solutions, further, the model building module specifically includes: the device comprises a metric value calculation unit and a super edge construction unit; the metric value calculating unit is used for calculating the adjacent metric value between any two nodes by taking the image characteristic of each sample image data in the training data set as a node; the super edge construction unit is used for judging whether the proximity metric value between any two nodes is smaller than the proximity threshold value, and then the node v is connected with the node v j Put node v i And constructing the super edge of the hypergraph model according to the super edge set.
In any of the above technical solutions, further, the calculation formula for the metric value calculation unit to calculate the neighboring metric value between any two nodes is as follows:
in the formula, A ij Is a node v i And node v j D () is the Euclidean distance function, σ i And σ j Are respectively node v i And node v j Scaling constant of f θ (x i ) And f θ (x j ) Are respectively node v i And node v j The image feature of (1).
In any of the above technical solutions, further, a calculation formula of the label transfer loss function in the model building module is:
in the formula, R emp (Y) is the empirical loss term for the tag, λ 1 Is a first weight coefficient, λ 2 Is a second weight coefficient, and is,is a feature similarity function, phi is a sensitivity function,as a discriminant function, v i In the case of the ith node, the node,is and node v i At the same overcide e i The (j) th node in the group,to predict the label, y i Is a label vector.
In any of the above technical solutions, further, the feature similarity functionThe calculation formula of (2) is as follows:
in the formula, T is a characteristic difference function, and T is a preset threshold.
In any of the above technical solutions, further, the system further includes: a data transmission module; the data transmission module is used for receiving the target priori knowledge and transmitting the target priori knowledge to the data acquisition module.
The beneficial effect of this application is:
according to the technical scheme, the zoomed image features are utilized to calculate the adjacent metric value between two adjacent nodes so as to accelerate and optimize the construction process of the hyperedges in the hypergraph model, and the label transfer loss function consisting of the sum of the feature similarity, the label sensitivity and the experience loss term is introduced into the hypergraph model so as to accelerate the training and convergence rate of the hypergraph model, improve the accuracy of the model for outputting the predicted data label and reduce the complexity of the model.
In a preferred implementation manner of the application, the loss function of the hypergraph model is formed by the feature similarity, the label sensitivity and the experience loss term, and the image feature and the prediction data label are used as input quantities of the loss function together, so that the rationality of the loss function is improved, and the overall performance of the data processing model is optimized.
Drawings
The advantages of the above and/or additional aspects of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic block diagram of a data processing system based on big data object recognition according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a data processing process according to one embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the present application can be more clearly understood, the present application will be described in further detail with reference to the accompanying drawings and detailed description. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced in other ways than those described herein, and therefore the scope of the present application is not limited by the specific embodiments disclosed below.
As shown in fig. 1 and 2, the present embodiment provides a data processing system 100 based on object recognition of big data, the data processing system 100 including: the system comprises a data transmission module 10, a data acquisition module 20, a data dividing module 30, a model building module 40 and a model judging module 50.
In this embodiment, the data transmission module 10 is configured to communicate with external data, receive the target priori knowledge, and transmit the target priori knowledge to the data acquisition module 20, and after the data transmission module 10 receives the target priori knowledge, transmit the target priori knowledge to the data acquisition module 20.
In this embodiment, the data obtaining module 20 is configured to obtain target prior knowledge, where the target prior knowledge includes sample image data and sample label data.
Specifically, because data information in the internet is data in an open form, which generally has the problems of non-uniform standard, ambiguous attribution, high cost of labeling and the like, the data transmission module 10 and the data acquisition module 20 are used to collect a data set with abundant labeling and as related as possible as target prior knowledge, wherein the target prior knowledge includes sample image data and sample label data.
In this embodiment, the data dividing module 30 is configured to randomly divide the target priori knowledge into a training data set and a verification data set according to a preset ratio;
specifically, in this embodiment, a random sampling manner is adopted, and the sampling rate is as follows: 2, dividing the obtained target priori knowledge into a training data set and a verification data set, and no further description is given to the specific process.
In this embodiment, the model building module 40 is configured to determine an image feature extraction manner according to a data type of the training data set, extract image features in the training data set based on the image feature extraction manner, and build a hypergraph model based on the image features, where a label transfer loss function is set in the hypergraph model, and the label transfer loss function is composed of a sum of feature similarity, label sensitivity, and an empirical loss term;
specifically, one sample image data corresponds to one image feature, and the feature extraction method is determined by the data type and the feature of the sample image data, so that before extracting the image feature in the training data set, the image feature extraction mode needs to be determined according to the data type of the training data set.
Before constructing the hypergraph model, feature extraction is performed on sample image data in the training data set. Because the data types in the training data set are not uniform, and the training data set may be a three-dimensional image or a two-dimensional image, the data types of the training data set need to be judged first, and then a corresponding feature extraction method is determined, for example, when the training data set is three-dimensional image data, image features of the training data set can be extracted through multi-view convolution machine learning. And finally, extracting image characteristics of the brave and constructed hypergraph model by using a corresponding extraction method.
In this embodiment, the image features of each sample image data in the training data set are used as nodes of the hypergraph model, and the structural description is as follows:
in the formula, node setEach node in (a) represents an image feature of one sample image data, epsilon represents a set of hyper-edges between two nodes, and W is a set of hyper-edge weights.
In this embodiment, a k-nearest neighbor method may be used to establish the super edge, and before this process, a multi-level perceptron (MLP) is required to calculate the scaling constant σ of each data, which is not described in detail again.
In this embodiment, the model building module 40 specifically includes: a metric value calculating unit 41 and a super edge constructing unit 42;
the metric value calculating unit 41 is configured to calculate an adjacent metric value between any two nodes by using the image feature of each sample image data in the training data set as a node, where a calculation formula of the adjacent metric value is:
in the formula, A ij Is a node v i And node v j D () is the Euclidean distance function,σ i And σ j Are respectively node v i And node v j Scaling constant of x i And x j Are respectively node v i And node v j The image feature of (1).
Wherein, the super edge constructing unit 42 is configured to, when it is determined that the proximity metric value between any two nodes is smaller than the proximity threshold value, connect the node v j Put node v i And constructing the super edge of the hypergraph model according to the super edge set.
Specifically, the image feature f of any sample image data in the training data set is obtained in a traversal mode θ (x i ) As node v i Sequentially calculating the rest nodes and the node v i A proximity metric value a therebetween ij When judging node v i And node v j When the proximity metric value is less than the proximity threshold value, the node v is connected j Put node v i A super edge set of, i.e. node v j Belong to node v i At the super edge e i Contained node setIn (1). If the connection relationship of the hypergraph is described as:
then, node v j Belongs to a node v i Corresponding excess edge e i I.e. H (v) j ,e i )=1。
In this embodiment, after the hypergraph is constructed by using the above process, a loss function is introduced to generate a hypergraph model, so that the hypergraph model is optimized and solved by using a verification data set, a data processing model for processing data to be identified is obtained, and a data tag of the data processing model is output.
Specifically, taking a training data set as an example, data in all the training data sets includes two types, one type is sample image data, and a feature vector set can be formed by using image features of the sample image data, and features are setVector set X ═ X 1 ,x 2 ,…,x i ,…,x N In which x i For the image feature corresponding to the ith sample image data in the training data set, corresponding to the ith node v in the hypergraph i . The other is sample label data corresponding to label vector y i Which can compose a set of tag vectors Y ═ Y 1 ,y 2 ,…,y i ,…,y N }。
Thus, for the validation data set, if passing through the hypergraph model, the image feature x corresponding to the ith sample image data is i And the label vector y i Belong to the same class, then y i 1 is ═ 1; otherwise, set to y i =0。
In this embodiment, the tag transfer loss function is composed of a sum of feature similarity, tag sensitivity, and an empirical loss term, and a calculation formula of the tag transfer loss function is as follows:
in the formula, R emp (Y) is the empirical loss term for the tag, λ 1 Is a first weight coefficient if the first weight coefficient is lambda 1 Setting to 1 can make the label output by model prediction and the corresponding sample label data have small difference, but the model convergence rate is low.
In the formula of lambda 2 Is a second weight coefficient, and is,as a function of the feature similarity for computing node v i With the same overcide e i Middle nodeFeature similarity between them, feature similarity functionThe calculation formula of (2) is as follows:
in the formula, T is a characteristic difference function to obtain a difference value in the image characteristics, and T is a preset threshold;
in the formula (I), the compound is shown in the specification,as discriminant function for discriminating predictive labelsAnd the label vector y i The difference between them is in the range of [ -1, 1 [)]I.e. the more obvious the difference between the two is, the discriminant functionThe larger the value of phi is, the sensitivity function is phi, which is a segmentation function, and the value of phi can be set manually. In this embodiment, the calculation formula corresponding to the sensitivity function Φ can be set as:
specifically, after the hypergraph model is constructed based on the training data set, the hypergraph model can be verified by using the verification data set, and the specific verification process is not repeated.
In this embodiment, the model determining module 50 is configured to train the hypergraph model by using the verification data set, and when it is determined that the hypergraph model converges, the hypergraph model is recorded as a data processing model, and the data processing model is configured to output a data tag of the data to be identified.
Specifically, through the data processing model, data processing can be performed on input data, data labels of the input data can be predicted, an image recognition function is realized, and data support is provided for technologies such as automatic driving of an automobile, grabbing of an object by a mechanical arm and the like.
In order to verify the accuracy of the data processing model and the convergence rate of the model in this embodiment, a comparison is performed by a conventional method, and the comparison result is shown in table 1.
TABLE 1
Label error (%) | Rate of convergence (ms) | |
This example | 0.43 | 179 |
SIFT algorithm | 0.79 | 562 |
FREAK algorithm | 0.68 | 374 |
As can be seen from the comparison data, the performance of the data processing model provided in this embodiment is superior to that of the existing SIFT algorithm and FREAK algorithm in the aspect of image data processing, and the data processing model has the advantages of simple structure, high convergence rate, and certain improvement in accuracy.
The technical solution of the present application is described in detail above with reference to the accompanying drawings, and the present application proposes a data processing system based on object recognition of big data, the data processing system including: the device comprises a data acquisition module, a data division module, a model construction module and a model judgment module; the data acquisition module is used for acquiring target priori knowledge; the data dividing module is used for randomly dividing the target priori knowledge into a training data set and a verification data set according to a preset proportion, wherein the target priori knowledge comprises sample image data and sample label data; the model construction module is used for determining an image feature extraction mode according to the data type of the training data set, extracting image features in the training data set based on the image feature extraction mode, and constructing a hypergraph model based on the image features, wherein a label transfer loss function is arranged in the hypergraph model and consists of the sum of feature similarity, label sensitivity and an empirical loss term; the model judging module is used for training the hypergraph model by utilizing the verification data set, when the hypergraph model is judged to be converged, the hypergraph model is recorded as a data processing model, and the data processing model is used for outputting a data label of data to be identified. Through the technical scheme in the application, the structure of the machine learning model is facilitated to be simplified, the model convergence efficiency is improved, and meanwhile, the accuracy of the predicted data label is facilitated to be improved.
The steps in the present application may be sequentially adjusted, combined, and subtracted according to actual requirements.
The units in the device can be merged, divided and deleted according to actual requirements.
Although the present application has been disclosed in detail with reference to the accompanying drawings, it is to be understood that such description is merely illustrative and not restrictive of the application of the present application. The scope of the present application is defined by the appended claims and may include various modifications, adaptations, and equivalents of the invention without departing from the scope and spirit of the application.
Claims (5)
1. A data processing system for big data based object recognition, the system comprising: the device comprises a data acquisition module (20), a data dividing module (30), a model building module (40) and a model judging module (50);
the data acquisition module (20) is used for acquiring target priori knowledge;
the data dividing module (30) is used for randomly dividing the target priori knowledge into a training data set and a verification data set according to a preset proportion, wherein the target priori knowledge comprises sample image data and sample label data;
the model construction module (40) is used for determining an image feature extraction mode according to the data type of the training data set, extracting the image features in the training data set based on the image feature extraction mode, and constructing a hypergraph model based on the image features,
wherein a label transfer loss function is set in the hypergraph model, the label transfer loss function is composed of a sum of feature similarity, label sensitivity and an empirical loss term,
wherein, the calculation formula of the tag transfer loss function is as follows:
in the formula, R emp (Y) is the empirical loss term, λ, of the tag 1 Is a first weight coefficient, λ 2 Is a second weight coefficient, and is,is a feature similarity function, phi is a sensitivity function,as a discriminant function, v i In the case of the ith node, the node,is and node v i At the same overcide e i The (j) th node in the group,to predict the label, y i As a label vector, the discriminant functionFor discriminating the prediction tagAnd the label vector y i The difference between them is in the range of [ -1, 1 [)]And Y is a label vector set, wherein the calculation formula of the sensitivity function phi is as follows:
the model judging module (50) is used for training the hypergraph model by utilizing the verification data set, and when the hypergraph model is judged to be converged, the hypergraph model is recorded as a data processing model which is used for outputting a data label of data to be identified.
2. The big-data based object recognition data processing system of claim 1, wherein the model building module (40) specifically comprises: a metric value calculation unit (41) and a super edge construction unit (42);
the metric value calculating unit (41) is used for calculating the adjacent metric value between any two nodes by taking the image characteristic of each sample image data in the training data set as a node;
the super edge construction unit (42) is used for judging whether the adjacent metric value between any two nodes is smaller than the adjacent threshold value, and then the node v is connected with the adjacent threshold value j Put node v i And constructing the super edge of the hypergraph model according to the super edge set.
3. The big-data based object recognition data processing system of claim 2, wherein the metric value calculating unit (41) calculates the proximity metric value between any two nodes by the calculation formula:
in the formula, A ij Is a node v i And node v j D () is the Euclidean distance function, σ i And σ j Are respectively node v i And node v j Scaling constant of f θ (x i ) And f θ (x j ) Are respectively node v i And node v j The image feature of (1).
5. The big-data based object recognition data processing system of any of claims 1 to 4, wherein the system further comprises: a data transmission module (10);
the data transmission module (10) is used for receiving the target priori knowledge and transmitting the target priori knowledge to the data acquisition module (20).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210376446.9A CN114463601B (en) | 2022-04-12 | 2022-04-12 | Big data-based target identification data processing system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210376446.9A CN114463601B (en) | 2022-04-12 | 2022-04-12 | Big data-based target identification data processing system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114463601A CN114463601A (en) | 2022-05-10 |
CN114463601B true CN114463601B (en) | 2022-08-05 |
Family
ID=81417524
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210376446.9A Active CN114463601B (en) | 2022-04-12 | 2022-04-12 | Big data-based target identification data processing system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114463601B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107480627A (en) * | 2017-08-08 | 2017-12-15 | 华中科技大学 | Activity recognition method, apparatus, storage medium and processor |
CN109492691A (en) * | 2018-11-07 | 2019-03-19 | 南京信息工程大学 | A kind of hypergraph convolutional network model and its semisupervised classification method |
CN109711366A (en) * | 2018-12-29 | 2019-05-03 | 浙江大学 | A kind of recognition methods again of the pedestrian based on group information loss function |
CN110443063A (en) * | 2019-06-26 | 2019-11-12 | 电子科技大学 | The method of the federal deep learning of self adaptive protection privacy |
CN111222434A (en) * | 2019-12-30 | 2020-06-02 | 深圳市爱协生科技有限公司 | Method for obtaining evidence of synthesized face image based on local binary pattern and deep learning |
CN113919441A (en) * | 2021-11-03 | 2022-01-11 | 北京工业大学 | Classification method based on hypergraph transformation network |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108875821A (en) * | 2018-06-08 | 2018-11-23 | Oppo广东移动通信有限公司 | The training method and device of disaggregated model, mobile terminal, readable storage medium storing program for executing |
CN113971733A (en) * | 2021-10-29 | 2022-01-25 | 京东科技信息技术有限公司 | Model training method, classification method and device based on hypergraph structure |
-
2022
- 2022-04-12 CN CN202210376446.9A patent/CN114463601B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107480627A (en) * | 2017-08-08 | 2017-12-15 | 华中科技大学 | Activity recognition method, apparatus, storage medium and processor |
CN109492691A (en) * | 2018-11-07 | 2019-03-19 | 南京信息工程大学 | A kind of hypergraph convolutional network model and its semisupervised classification method |
CN109711366A (en) * | 2018-12-29 | 2019-05-03 | 浙江大学 | A kind of recognition methods again of the pedestrian based on group information loss function |
CN110443063A (en) * | 2019-06-26 | 2019-11-12 | 电子科技大学 | The method of the federal deep learning of self adaptive protection privacy |
CN111222434A (en) * | 2019-12-30 | 2020-06-02 | 深圳市爱协生科技有限公司 | Method for obtaining evidence of synthesized face image based on local binary pattern and deep learning |
CN113919441A (en) * | 2021-11-03 | 2022-01-11 | 北京工业大学 | Classification method based on hypergraph transformation network |
Non-Patent Citations (2)
Title |
---|
基于深度学习的图像样本标签赋值校正算法实现;舒忠;《数字印刷》;20191010;全文 * |
基于自适应度量学习的行人再识别;詹敏等;《电脑知识与技术》;20170405(第10期);全文 * |
Also Published As
Publication number | Publication date |
---|---|
CN114463601A (en) | 2022-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112560656A (en) | Pedestrian multi-target tracking method combining attention machine system and end-to-end training | |
CN110942091A (en) | Semi-supervised few-sample image classification method for searching reliable abnormal data center | |
CN111259917B (en) | Image feature extraction method based on local neighbor component analysis | |
CN113361627A (en) | Label perception collaborative training method for graph neural network | |
CN117152459B (en) | Image detection method, device, computer readable medium and electronic equipment | |
CN112116950B (en) | Protein folding identification method based on depth measurement learning | |
CN113920472A (en) | Unsupervised target re-identification method and system based on attention mechanism | |
CN116206327A (en) | Image classification method based on online knowledge distillation | |
CN114358250A (en) | Data processing method, data processing apparatus, computer device, medium, and program product | |
CN117131348B (en) | Data quality analysis method and system based on differential convolution characteristics | |
CN114463601B (en) | Big data-based target identification data processing system | |
CN116561562B (en) | Sound source depth optimization acquisition method based on waveguide singular points | |
CN116109650B (en) | Point cloud instance segmentation model training method and training device | |
CN114463602B (en) | Target identification data processing method based on big data | |
CN116630694A (en) | Target classification method and system for partial multi-label images and electronic equipment | |
EP4068163A1 (en) | Using multiple trained models to reduce data labeling efforts | |
CN113032612B (en) | Construction method of multi-target image retrieval model, retrieval method and device | |
CN115600134A (en) | Bearing transfer learning fault diagnosis method based on domain dynamic impedance self-adaption | |
CN112163526B (en) | Method and device for identifying age based on face information and electronic equipment | |
CN117454957B (en) | Reasoning training system for image processing neural network model | |
CN113572732B (en) | Multi-step attack modeling and prediction method based on VAE and aggregated HMM | |
CN117115179A (en) | Frame-by-frame point cloud rapid instance segmentation method and device based on nearest neighbor KNN algorithm | |
CN117892199A (en) | Multi-angle joint activity recognition classification method based on local loss | |
CN117765034A (en) | Vehicle multi-target tracking research method of AIMM-UKF-JPDA | |
CN117253095A (en) | Image classification system and method based on biased shortest distance criterion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |