CN110704650B - OTA picture tag identification method, electronic equipment and medium - Google Patents
- Publication number
- CN110704650B (application CN201910934233.1A)
- Authority
- CN
- China
- Prior art keywords
- ota
- label
- target
- tag
- picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/51—Indexing; Data structures therefor; Storage structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/5866—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Library & Information Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an identification method of an OTA picture tag, an electronic device and a medium. The identification method of the OTA picture tag comprises the following steps: constructing an OTA image tag data set according to an OTA picture library, wherein the OTA picture library comprises OTA pictures and the tags corresponding to the OTA pictures; establishing a label relation model according to the OTA image tag data set; establishing an image feature extraction model according to the label relation model, and extracting image features; establishing a label identification model according to the image features; and identifying the labels of an OTA picture to be detected by adopting the label identification model. Based on massive picture information and the currently existing labels in OTA hotel scenarios, the invention uses a deep learning method to automatically identify picture labels.
Description
Technical Field
The invention belongs to the technical field of OTA (Online Travel Agency) picture tag identification, and particularly relates to an identification method of an OTA picture tag, an electronic device and a medium.
Background
Images provide intuitive presentation and communication of information, and are therefore applied at large scale in OTA (Online Travel Agency) scenarios. Effective and accurate image display can greatly improve the user experience and the user conversion rate. The content of OTA images is often complex: a single image frequently contains multiple semantics and therefore has multiple tag attributes. In the prior art, image tags are usually identified manually, which is time-consuming, inefficient, costly, and of low accuracy.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides an identification method of an OTA picture tag, electronic equipment and a medium.
The invention solves the technical problems by the following technical scheme:
the invention provides an identification method of OTA picture labels, which comprises the following steps:
constructing an OTA image tag data set according to an OTA image library, wherein the OTA image library comprises OTA images and tags corresponding to the OTA images;
establishing a label relation model according to the OTA image label data set;
establishing an image feature extraction model according to the label relation model, and extracting image features;
establishing a label identification model according to the image characteristics;
and identifying the label of the OTA picture to be detected by adopting a label identification model.
Preferably, the step of constructing an OTA image tag dataset according to an OTA image library comprises:
counting the occurrence times of labels in an OTA picture library;
acquiring a target label, wherein the target label is a label with the occurrence frequency larger than a preset value;
and constructing an OTA image tag data set, wherein the OTA image tag data set comprises a target tag and a target picture, and the target picture is a picture corresponding to the target tag in an OTA picture library.
Preferably, the step of establishing a label relation model from the OTA image label dataset comprises:
converting the target label into a label vector in a one-hot coding mode, and constructing a label relation directed graph according to the occurrence times of the target labels, wherein in the label relation directed graph the conversion coefficient z_ij from the ith target label to the jth target label satisfies the following formula:

z_ij = q_ij / q_i,

wherein q_i represents the number of target pictures in the OTA image tag dataset that include the ith target tag, q_ij represents the number of target pictures in the OTA image tag dataset that include both the ith target tag and the jth target tag, i ∈ [1, N], j ∈ [1, N], and N is the number of categories of the target tag. The target tags include identifiable contents in the picture, their attributes, and the shooting environment of the picture, such as mountain, sea, daytime, night, building part, building panorama, etc.
Preferably, the step of establishing a label relation model according to the OTA image label dataset further comprises:
converting the target label into a label relation matrix W through graph convolution, wherein the label relation matrix W is used for representing the label relation model, W ∈ R^(N×D), the dimension D coincides with the output feature dimension of the feature transformation function F(X) of the convolutional neural network, and R denotes the real number domain.
Preferably, the transfer function of the image feature extraction model is:
L=W·F(X),
wherein F (X) is a characteristic transformation function of the convolutional neural network.
Preferably, the transfer function of the tag identification model is:

ŷ = σ(L),

wherein σ is the sigmoid activation function and ŷ is the N-dimensional vector of predicted tag probabilities.
preferably, the step of building a tag identification model based on the image features further comprises:
training a convolutional neural network according to a target loss function, wherein the target loss function is as follows:
Loss = -Σ_{i=1}^{N} [ y_i·log(ŷ_i) + (1 - y_i)·log(1 - ŷ_i) ],

wherein y_i = 1 when the ith target label is contained in the target picture, y_i = 0 when the target picture does not include the ith target label, and ŷ_i represents the probability, obtained according to the tag identification model, that the target picture includes the ith target label.
Preferably, the step of identifying the tag of the OTA picture to be detected by using the tag identification model includes:
using the tag identification model to obtain the target probability ŷ_k of the OTA picture to be detected, the target probability ŷ_k being the probability that the kth target tag is included in the OTA picture to be detected; if ŷ_k > τ_k, a corresponding target label is set for the OTA picture to be detected, τ_k being the target threshold corresponding to the kth target label, k ∈ [1, N].
Preferably, after constructing the tag identification model, the identification method further comprises the steps of:
acquiring the target threshold τ_k, wherein the target threshold τ_k is the threshold corresponding to the maximum value of the F1 score.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the identification method of the OTA picture tag is realized when the processor executes the computer program.
The present invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of the method for identifying OTA picture tags of the present invention.
The invention has the following positive effects: based on massive picture information and the currently existing labels in OTA hotel scenarios, the method automatically identifies picture labels with a deep learning approach, so that the information in pictures can be mined quickly and accurately and the hidden labels of images extracted efficiently; this greatly reduces operation and maintenance costs, ensures the accuracy of picture display, meets personalized display and search requirements, and effectively improves the service experience of users in OTA scenarios.
Drawings
Fig. 1 is a flowchart of an identifying method of an OTA picture tag according to embodiment 1 of the present invention.
Fig. 2 is a schematic structural diagram of an electronic device according to embodiment 2 of the present invention.
Detailed Description
The invention is further illustrated by means of the following examples, which are not intended to limit the scope of the invention.
Example 1
The embodiment provides a method for identifying OTA picture tags. Referring to fig. 1, the method for identifying the OTA picture tag includes the following steps:
step S101, constructing an OTA image tag data set according to an OTA image library. The OTA picture library comprises OTA pictures and labels corresponding to the OTA pictures.
Step S102, a label relation model is established according to the OTA image label data set.
And step S103, establishing an image feature extraction model according to the label relation model, and extracting image features.
Step S104, a label identification model is established according to the image characteristics.
Step S105, a label identification model is adopted to identify the label of the OTA picture to be detected.
In the specific implementation, in step S101, the number of occurrences of each tag in the OTA picture library is counted; a target label is acquired, the target label being a label whose number of occurrences is larger than a preset value; and the OTA image tag data set is constructed, the OTA image tag data set comprising the target tags and target pictures, a target picture being a picture corresponding to a target tag in the OTA picture library. As a preferred embodiment, the preset value is 500, that is, each target tag in the OTA image tag dataset occurs no fewer than 500 times in the OTA picture library.
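The dataset-construction step of S101 can be sketched in a few lines of Python. The `picture_library` shape (a list of `(picture_id, tags)` pairs) and the function name are illustrative assumptions, not from the patent:

```python
from collections import Counter

def build_tag_dataset(picture_library, min_count=500):
    """Sketch of step S101: keep only tags occurring at least `min_count`
    times, and the pictures carrying at least one such tag."""
    counts = Counter(tag for _, tags in picture_library for tag in tags)
    target_tags = {t for t, c in counts.items() if c >= min_count}
    dataset = [
        (pic, [t for t in tags if t in target_tags])
        for pic, tags in picture_library
        if any(t in target_tags for t in tags)
    ]
    return sorted(target_tags), dataset
```

With `min_count=500` as in the preferred embodiment, rare tags (and pictures carrying only rare tags) drop out of the training set.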
The existing tags are various: they include identifiable contents in pictures (particularly contents with a high frequency of occurrence), the attributes of those contents, and the shooting environments of the pictures, such as mountains, seas, daytime, night, building parts, building panoramas, and the like. Correlations exist among different labels, so effectively mining the dependency relationships among labels can improve the accuracy of label identification. Therefore, in step S102, the target label is converted into a label vector through one-hot coding, and a label relation directed graph is constructed according to the occurrence times of the target labels. In the label relation directed graph, the conversion coefficient z_ij from the ith target label to the jth target label satisfies the following formula:

z_ij = q_ij / q_i,

wherein q_i represents the number of target pictures in the OTA image tag dataset that include the ith target tag, q_ij represents the number of target pictures that include both the ith target tag and the jth target tag, i ∈ [1, N], j ∈ [1, N], and N is the number of categories of the target tag. For example, a tag characterizing that a picture contains "mountain" belongs to a different category than a tag characterizing that a picture contains "daytime". The target label is then converted into a label relation matrix W through graph convolution, and the label relation matrix W is used for representing the label relation model, W ∈ R^(N×D), where the dimension D coincides with the output feature dimension of the feature transformation function F(X) of the convolutional neural network and R denotes the real number domain.
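The conversion coefficients z_ij = q_ij / q_i follow directly from tag co-occurrence counts. A minimal numpy sketch, assuming the dataset is encoded as an M-pictures × N-tags 0/1 matrix (an assumed representation, not specified by the patent):

```python
import numpy as np

def conversion_matrix(label_matrix):
    """Compute the directed-graph conversion coefficients z_ij = q_ij / q_i
    from a binary (pictures x tags) matrix."""
    Y = np.asarray(label_matrix, dtype=float)
    q = Y.sum(axis=0)            # q_i: pictures containing tag i
    Q = Y.T @ Y                  # Q[i, j] = q_ij: pictures containing both i and j
    with np.errstate(divide="ignore", invalid="ignore"):
        Z = np.where(q[:, None] > 0, Q / q[:, None], 0.0)
    return Z
```

Note the diagonal is z_ii = 1 for every tag that occurs at least once, and Z is generally asymmetric, which is why the patent speaks of a directed graph.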
Pictures are dense, large-size data, and efficiently extracting their high-level semantic features greatly helps tag identification. Therefore, in step S103, a deep neural network containing 55 convolutional layers is constructed as a BaseNet (basic network) for image feature extraction. Specifically, preliminary model training is performed on a public large-scale scene classification dataset: the data is divided into a training set and a validation set, the model iterates on the training set until convergence, and once the effect on the validation set reaches the optimum, the weights of the convolutional layers are fixed; the features before the fully connected layer are then extracted for the subsequent multi-label recognition. That is, a picture X is input, a feature vector is output through the feature transformation F of the convolutional layers, and this vector is then dot-multiplied with the label relation matrix W to obtain the output feature L, i.e., the transfer function of the image feature extraction model is:

L = W · F(X),

wherein F(X) is the feature transformation function of the convolutional neural network.
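Once shapes are fixed, the transfer function L = W · F(X) is a plain matrix-vector product. The dimensions below (N = 4 tag categories, D = 8 feature dimensions) and the random values are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 4, 8                    # illustrative: 4 tags, 8 CNN feature dims
W = rng.normal(size=(N, D))    # label relation matrix from graph convolution
f_x = rng.normal(size=D)       # F(X): CNN feature vector for one picture X

L_out = W @ f_x                # L = W . F(X): one score per target tag
assert L_out.shape == (N,)
```

Because W is N×D and F(X) is D-dimensional, L is N-dimensional, i.e. one raw score per target tag, which is what the sigmoid in step S104 then turns into probabilities.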
In step S104, to accommodate the multi-label learning task, the image feature vector is input to a fully connected layer containing N nodes. The network weights of the fully connected layer are trained on the training split of the OTA image tag dataset constructed in step S101. Finally an N-dimensional vector is output, with sigmoid (a neural-network activation function) as the activation of the last layer, so that the model outputs predicted tag probabilities; that is, the transfer function of the tag identification model is:

ŷ = σ(L),

wherein σ is the sigmoid function. The tag identification model outputs the N-dimensional vector ŷ, each dimension of which represents the probability that the target picture contains the corresponding target label.
In the training process, a target loss function is defined and the convolutional neural network is trained according to it. The target loss function is:

Loss = -Σ_{i=1}^{N} [ y_i·log(ŷ_i) + (1 - y_i)·log(1 - ŷ_i) ],

wherein y_i = 1 when the ith target label is contained in the target picture and y_i = 0 when it is not; that is, the vector Y = {y_1, y_2, y_3, …, y_N} characterizes whether the target picture contains each target label. ŷ_i represents the probability, obtained from the tag identification model, that the target picture includes the ith target label.
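The target loss above is the standard multi-label binary cross-entropy. A minimal sketch; the probability clipping is an implementation detail added for numerical stability that the patent does not specify:

```python
import numpy as np

def target_loss(y_true, y_prob, eps=1e-12):
    """Multi-label binary cross-entropy:
    Loss = -sum_i [ y_i*log(yhat_i) + (1 - y_i)*log(1 - yhat_i) ]."""
    y = np.asarray(y_true, dtype=float)
    p = np.clip(np.asarray(y_prob, dtype=float), eps, 1 - eps)
    return float(-np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))
```

Each tag contributes an independent binary term, which is what lets a single picture carry several labels at once.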
After constructing the tag recognition model, the recognition method of the present embodiment further includes the steps of:
acquiring the target threshold τ_k, the target threshold τ_k being the threshold corresponding to the maximum value of the F1 score, wherein the F1 score is expressed as:

F1 = 2 · precision · recall / (precision + recall).

For the 1st target label, different alternative thresholds τ'_1 are adopted in turn, and the precision and the recall of the tag identification model on an OTA test picture library are obtained at each τ'_1, yielding the F1 score corresponding to each τ'_1. The alternative threshold τ'_1 corresponding to the maximum value of the F1 score is the target threshold τ_1 corresponding to the 1st target label. In a similar manner, the target threshold τ_k corresponding to each target tag can be obtained, k ∈ [1, N].
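The per-label threshold sweep can be sketched as follows. The candidate grid and the function name are assumptions, since the patent does not specify how the alternative thresholds τ' are enumerated:

```python
import numpy as np

def pick_threshold(y_true, y_prob, candidates=np.linspace(0.05, 0.95, 19)):
    """For one target label, choose the candidate threshold that maximizes
    F1 = 2*precision*recall / (precision + recall) on a test set."""
    y = np.asarray(y_true, dtype=bool)
    p = np.asarray(y_prob, dtype=float)
    best_tau, best_f1 = candidates[0], -1.0
    for tau in candidates:
        pred = p > tau
        tp = np.sum(pred & y)
        precision = tp / max(pred.sum(), 1)
        recall = tp / max(y.sum(), 1)
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall > 0 else 0.0)
        if f1 > best_f1:
            best_tau, best_f1 = tau, f1
    return best_tau, best_f1
```

Running the sweep once per label yields the N thresholds τ_1 … τ_N, so a tag with noisy predictions can get a stricter cutoff than a well-separated one.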
In step S105, the tag identification model is used to obtain the target probability ŷ_k of the OTA picture to be detected, wherein ŷ_k is the probability that the OTA picture to be detected includes the kth target tag; if ŷ_k > τ_k, the corresponding target label is set for the OTA picture to be detected.
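Applying the learned thresholds in step S105 is then an elementwise comparison. The tag names, probabilities, and thresholds below are made-up example values:

```python
import numpy as np

# Assumed example values: model probabilities yhat and per-label thresholds
# tau obtained from the F1 sweep; tag names are illustrative.
tags = ["mountain", "sea", "night"]
yhat = np.array([0.82, 0.30, 0.55])
tau = np.array([0.50, 0.40, 0.60])

assigned = [t for t, p, th in zip(tags, yhat, tau) if p > th]
print(assigned)  # only tags with yhat_k > tau_k are set on the picture
```

Here only "mountain" clears its threshold, so the picture receives a single label even though three probabilities were produced.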
The identification method of the OTA picture tag of this embodiment automatically identifies tags for pictures with a deep learning method, based on massive picture information under OTA and the currently existing tags. Practice shows that the method can reach an average accuracy of 95%, and multi-tag identification of a single picture takes 120 milliseconds. The method can quickly and accurately mine the information in pictures, effectively extract the hidden tags of images, greatly save operation and maintenance costs, ensure the accuracy of picture display, meet personalized display and search requirements, and effectively improve the service experience of users in OTA scenarios.
Example 2
Fig. 2 is a schematic structural diagram of an electronic device according to the present embodiment. The electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor, which when executed implements the method of identifying OTA picture tags of embodiment 1. The electronic device 30 shown in fig. 2 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 2, the electronic device 30 may be embodied in the form of a general purpose computing device, which may be a server device, for example. Components of electronic device 30 may include, but are not limited to: the at least one processor 31, the at least one memory 32, a bus 33 connecting the different system components, including the memory 32 and the processor 31.
The bus 33 includes a data bus, an address bus, and a control bus.
The processor 31 executes various functional applications and data processing, such as the method of identifying an OTA picture tag of embodiment 1 of the present invention, by running a computer program stored in the memory 32.
The electronic device 30 may also communicate with one or more external devices 34 (e.g., a keyboard, a pointing device, etc.). Such communication may occur through an input/output (I/O) interface 35. The electronic device 30 may also communicate with one or more networks, such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet, via a network adapter 36. As shown, the network adapter 36 communicates with the other modules of the electronic device 30 via the bus 33. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 30, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, data backup storage systems, and the like.
It should be noted that although several units/modules or sub-units/modules of an electronic device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more units/modules described above may be embodied in one unit/module in accordance with embodiments of the present invention. Conversely, the features and functions of one unit/module described above may be further divided into ones that are embodied by a plurality of units/modules.
Example 3
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for identifying OTA picture tags of embodiment 1.
More specifically, among others, readable storage media may be employed including, but not limited to: portable disk, hard disk, random access memory, read only memory, erasable programmable read only memory, optical storage device, magnetic storage device, or any suitable combination of the foregoing.
In a possible embodiment, the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps of implementing the method for identifying an OTA picture tag of embodiment 1 when said program product is run on the terminal device.
Wherein the program code for carrying out the invention may be written in any combination of one or more programming languages, which program code may execute entirely on the user device, partly on the user device, as a stand-alone software package, partly on the user device and partly on the remote device or entirely on the remote device.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that this is by way of example only, and the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the principles and spirit of the invention, but such changes and modifications fall within the scope of the invention.
Claims (8)
1. The identification method of the OTA picture tag is characterized by comprising the following steps of:
constructing an OTA image tag data set according to an OTA image library, wherein the OTA image library comprises OTA images and tags corresponding to the OTA images;
establishing a label relation model according to the OTA image label data set;
establishing an image feature extraction model according to the label relation model, and extracting image features;
establishing a label identification model according to the image characteristics;
identifying the label of the OTA picture to be detected by adopting the label identification model;
the step of constructing an OTA image tag data set according to an OTA image library comprises the following steps:
counting the occurrence times of the tag in the OTA picture library;
obtaining a target label, wherein the target label is the label with the occurrence frequency larger than a preset value;
constructing the OTA image tag data set, wherein the OTA image tag data set comprises the target tag and a target picture, and the target picture is the picture corresponding to the target tag in the OTA picture library;
the step of establishing a label relation model according to the OTA image label data set comprises the following steps:
converting the target label into a label vector in a one-hot mode, and constructing a label relation directed graph according to the occurrence times of the target labels, wherein in the label relation directed graph the conversion coefficient z_ij from the ith target label to the jth target label satisfies the following formula:

z_ij = q_ij / q_i,

wherein q_i represents the number of the target pictures in the OTA image tag dataset that include the ith target tag, q_ij represents the number of the target pictures in the OTA image tag dataset that include both the ith target tag and the jth target tag, i ∈ [1, N], j ∈ [1, N], and N is the number of categories of the target tag;
the step of establishing a label relation model according to the OTA image label data set further comprises the following steps:
converting the target label into a label relation matrix W through graph convolution, wherein the label relation matrix W is used for representing the label relation model, W ∈ R^(N×D), the dimension D is identical to the output feature dimension of the feature transformation function F(X) of the convolutional neural network, and R denotes the real number domain.
2. The method for identifying an OTA picture tag of claim 1 wherein the transfer function of the image feature extraction model is:
L=W·F(X)。
4. the method of claim 3, wherein the step of creating a tag identification model from the image features further comprises:
training the convolutional neural network according to a target loss function, wherein the target loss function is as follows:

Loss = -Σ_{i=1}^{N} [ y_i·log(ŷ_i) + (1 - y_i)·log(1 - ŷ_i) ],

wherein y_i = 1 when the ith target label is contained in the target picture and y_i = 0 when the target picture does not include the ith target label, y_i ∈ {0, 1} indicating whether the target picture contains the ith said target label, and ŷ_i represents the probability, obtained according to the label identification model, that the target picture includes the ith target label.
5. The method of claim 4, wherein the step of identifying the tag of the OTA picture to be detected using the tag identification model comprises:
acquiring the target probability ŷ_k of the OTA picture to be detected by adopting the tag identification model, the target probability ŷ_k being the probability that the kth target tag is included in the OTA picture to be detected; if ŷ_k > τ_k, setting the corresponding target label for the OTA picture to be detected, τ_k being the target threshold corresponding to the kth target label, k ∈ [1, N].
6. The method of OTA picture tag identification of claim 5 wherein after constructing the tag identification model, the method further comprises the steps of:
acquiring the target threshold τ_k, the target threshold τ_k being the threshold corresponding to the maximum value of the F1 score.
7. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of identifying an OTA picture tag of any one of claims 1-6 when the computer program is executed by the processor.
8. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements the steps of the method of identifying an OTA picture tag of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910934233.1A CN110704650B (en) | 2019-09-29 | 2019-09-29 | OTA picture tag identification method, electronic equipment and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910934233.1A CN110704650B (en) | 2019-09-29 | 2019-09-29 | OTA picture tag identification method, electronic equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110704650A CN110704650A (en) | 2020-01-17 |
CN110704650B true CN110704650B (en) | 2023-04-25 |
Family
ID=69198005
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910934233.1A Active CN110704650B (en) | 2019-09-29 | 2019-09-29 | OTA picture tag identification method, electronic equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110704650B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113011494B (en) * | 2021-03-18 | 2024-02-27 | 北京百度网讯科技有限公司 | Feature processing method, device, equipment and storage medium |
CN113177525A (en) * | 2021-05-27 | 2021-07-27 | 杭州有赞科技有限公司 | AI electronic scale system and weighing method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108764208A (en) * | 2018-06-08 | 2018-11-06 | Oppo广东移动通信有限公司 | Image processing method and device, storage medium, electronic equipment |
CN109754015A (en) * | 2019-01-02 | 2019-05-14 | 京东方科技集团股份有限公司 | Neural network and correlation technique, medium and equipment for the identification of paintings multi-tag |
CN109918513A (en) * | 2019-03-12 | 2019-06-21 | 北京百度网讯科技有限公司 | Image processing method, device, server and storage medium |
CN110084296A (en) * | 2019-04-22 | 2019-08-02 | 中山大学 | A kind of figure expression learning framework and its multi-tag classification method based on certain semantic |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8873867B1 (en) * | 2012-07-10 | 2014-10-28 | Google Inc. | Assigning labels to images |
WO2016070098A2 (en) * | 2014-10-31 | 2016-05-06 | Paypal, Inc. | Determining categories for weakly labeled images |
CN105354307B (en) * | 2015-11-06 | 2021-01-15 | 腾讯科技(深圳)有限公司 | Image content identification method and device |
CN107861971A (en) * | 2017-09-15 | 2018-03-30 | 广州唯品会研究院有限公司 | A kind of product search method and device |
CN110110189A (en) * | 2018-02-01 | 2019-08-09 | 北京京东尚科信息技术有限公司 | Method and apparatus for generating information |
CN109816009B (en) * | 2019-01-18 | 2021-08-10 | 南京旷云科技有限公司 | Multi-label image classification method, device and equipment based on graph convolution |
CN109871909B (en) * | 2019-04-16 | 2021-10-01 | 京东方科技集团股份有限公司 | Image recognition method and device |
- 2019-09-29: CN CN201910934233.1A patent/CN110704650B/en, active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108764208A (en) * | 2018-06-08 | 2018-11-06 | Oppo广东移动通信有限公司 | Image processing method and device, storage medium, electronic equipment |
CN109754015A (en) * | 2019-01-02 | 2019-05-14 | 京东方科技集团股份有限公司 | Neural network and correlation technique, medium and equipment for the identification of paintings multi-tag |
CN109918513A (en) * | 2019-03-12 | 2019-06-21 | 北京百度网讯科技有限公司 | Image processing method, device, server and storage medium |
CN110084296A (en) * | 2019-04-22 | 2019-08-02 | 中山大学 | A kind of figure expression learning framework and its multi-tag classification method based on certain semantic |
Also Published As
Publication number | Publication date |
---|---|
CN110704650A (en) | 2020-01-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111062871B (en) | Image processing method and device, computer equipment and readable storage medium | |
WO2021139191A1 (en) | Method for data labeling and apparatus for data labeling | |
CN110297935A (en) | Image search method, device, medium and electronic equipment | |
CN110851641B (en) | Cross-modal retrieval method and device and readable storage medium | |
CN110232340B (en) | Method and device for establishing video classification model and video classification | |
CN112861575A (en) | Pedestrian structuring method, device, equipment and storage medium | |
CN113065013B (en) | Image annotation model training and image annotation method, system, equipment and medium | |
CN110866564B (en) | Season classification method, system, electronic device and medium for multiple semi-supervised images | |
CN112069319A (en) | Text extraction method and device, computer equipment and readable storage medium | |
CN114550053A (en) | Traffic accident responsibility determination method, device, computer equipment and storage medium | |
CN110851644A (en) | Image retrieval method and device, computer-readable storage medium and electronic device | |
CN112528029A (en) | Text classification model processing method and device, computer equipment and storage medium | |
CN111783712A (en) | Video processing method, device, equipment and medium | |
CN110704650B (en) | OTA picture tag identification method, electronic equipment and medium | |
CN112287144A (en) | Picture retrieval method, equipment and storage medium | |
CN115577768A (en) | Semi-supervised model training method and device | |
CN114943937A (en) | Pedestrian re-identification method and device, storage medium and electronic equipment | |
CN113762303B (en) | Image classification method, device, electronic equipment and storage medium | |
CN107729944B (en) | Identification method and device of popular pictures, server and storage medium | |
CN113239883A (en) | Method and device for training classification model, electronic equipment and storage medium | |
CN112559877A (en) | CTR (China railway) estimation method and system based on cross-platform heterogeneous data and behavior context | |
CN113705293A (en) | Image scene recognition method, device, equipment and readable storage medium | |
CN117115557A (en) | Automatic image labeling method, system, electronic equipment and storage medium | |
CN112633246A (en) | Multi-scene recognition method, system, device and storage medium in open scene | |
CN116630753A (en) | Multi-scale small sample target detection method based on contrast learning |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |