CN109643332A - Sentence recommendation method and apparatus - Google Patents

Sentence recommendation method and apparatus

Info

Publication number
CN109643332A
CN109643332A (application number CN201680088593.9A)
Authority
CN
China
Prior art keywords
keyword
sentence
image
unpaired
pair
Prior art date
Legal status
Granted
Application number
CN201680088593.9A
Other languages
Chinese (zh)
Other versions
CN109643332B
Inventor
胡慧
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN109643332A
Application granted granted Critical
Publication of CN109643332B
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A sentence recommendation method and apparatus. N keywords of a target image are obtained, the N keywords including M direct keywords obtained by parsing the target image and indirect keywords associated with the M direct keywords. For each sentence in a sentence library, a matching degree between the target image and the sentence is obtained, and the sentences whose matching degree with the target image is greater than or equal to a first threshold are recommended to the user. Because the N keywords include not only the direct keywords but also the indirect keywords associated with them, sentences can subsequently be recommended according to both, so that the recommended sentences better fit the mood (artistic conception) of the image and the user's subjective intent, improving user satisfaction.

Description

Sentence recommendation method and apparatus
Technical field
This application relates to the fields of image processing and sentence processing, and in particular to a sentence recommendation method and apparatus.
Background
With the spread of mobile devices such as digital cameras and smartphones and the development of communication media such as social networks, users hold more and more pictures. Many users like to select photos on their own terminal devices and share them on social networks such as WeChat or Weibo. When sharing photos, users often wish to add sentences that are aesthetically pleasing and meaningful, or that match the spirit of the times, so as to increase the appeal and influence of the shared content.
To meet this demand, existing photo-processing applications, such as Meitu Xiu Xiu, allow a user to add text to a picture and forward the captioned picture to a social network. For users who write well, it is easy to add a famous quote or an elegant sentence to the picture they choose to share with such an application. However, some users, when sharing a picture, cannot directly write an attractive caption or quickly come up with a famous quotation that matches the mood of the picture, so this approach has significant limitations.
Another existing implementation uses image recognition technology to identify the picture content and generate a corresponding description. For example, for a picture of an older girl playing with a little girl, the generated description is "two young girls are playing with lego toy"; for a picture of a salad, the generated description is "the salad has many different types of vegetables in it". Descriptions generated in this way are objective statements about the objects in the picture, the relations between them, their attributes, and the activities they take part in. However, such objective statements do not add positive value to the picture the user shares.
In summary, a sentence recommendation method is currently needed that recommends to the user sentences that better fit the mood of the image and the user's subjective intent, thereby improving user satisfaction.
Summary of the invention
This application provides a sentence recommendation method and apparatus for recommending to a user sentences that better fit the mood of an image and the user's subjective intent, thereby improving user satisfaction.
In a first aspect, an embodiment of this application provides a sentence recommendation method, including:
obtaining N keywords of a target image, the N keywords including M direct keywords obtained by parsing the target image and indirect keywords associated with the M direct keywords, where N and M are positive integers and N > M;
for a first sentence in a sentence library, separately calculating similarities between the N keywords and the keywords included in the first sentence, and obtaining a matching degree between the target image and the first sentence according to those similarities, the first sentence being any sentence in the sentence library;
recommending to the user the sentences in the sentence library whose matching degree with the target image is greater than or equal to a first threshold.
In this way, because the N keywords of the target image include not only the direct keywords but also the indirect keywords associated with them, and the indirect keywords are extended from the M direct keywords and can reflect the user's subjective intent and the mood of the image, recommending sentences according to both the direct and the indirect keywords makes the recommended sentences better fit the mood of the image and the user's subjective intent, improving user satisfaction.
Optionally, before the matching degree between the target image and the first sentence is obtained, the method further includes:
determining the theme to which the first sentence belongs, and determining, according to that theme and a theme-to-weight correspondence table, the weight value corresponding to the theme of the first sentence.
Obtaining the matching degree between the target image and the first sentence according to the similarities between the N keywords and the keywords included in the first sentence then includes:
obtaining the matching degree between the target image and the first sentence according to the similarities between the N keywords and the keywords included in the first sentence and the weight value corresponding to the theme of the first sentence.
In this way, the weight value of the theme of the first sentence is taken into account when determining the matching degree between the target image and the first sentence, which enriches the basis of the calculation, makes the determined matching degree more accurate and reasonable, and better reflects the user's preferences.
Optionally, after the sentences whose matching degree with the target image is greater than or equal to the first threshold are recommended to the user, the method further includes:
determining a target sentence selected from the recommended sentences and the theme to which the target sentence belongs, and increasing the weight value corresponding to that theme in the theme-to-weight correspondence table.
In this way, the weight value of the theme of the selected target sentence is adjusted according to the user's selection, so that the weight values are updated with feedback information, which helps recommend sentences that better meet the user's needs.
Optionally, obtaining the N keywords of the target image includes:
parsing the target image to obtain the M direct keywords;
for each of the M direct keywords, taking, according to a keyword association table, the keywords whose degree of association with that direct keyword is greater than or equal to a second threshold as the indirect keywords associated with that direct keyword, the keyword association table containing degrees of association between keywords;
obtaining, from the indirect keywords associated with each direct keyword, the indirect keywords associated with the M direct keywords.
In this way, the indirect keywords of the direct keywords are obtained from the keyword association table, which extends the range of keywords of the target image; because the indirect keywords are extended from keywords that reflect the user's subjective intent and the mood of the image, the recommended sentences better fit the mood of the image and the user's subjective intent, improving user satisfaction.
Optionally, parsing the target image to obtain the M direct keywords includes:
parsing the target image to obtain features of the target image; if the features of the target image include a face feature, determining the expression of the face according to the face feature to obtain a face keyword; obtaining a scene keyword according to the features of the target image other than the face feature; and obtaining the M direct keywords according to the face keyword and the scene keyword.
Optionally, the keyword association table is obtained in the following way:
obtaining a training set that includes multiple first sentence-image pairs, each first sentence-image pair including a sentence and the image corresponding to that sentence;
obtaining an itemset for each first sentence-image pair according to the direct keywords of its image and the keywords of its sentence;
extracting the direct keywords of the images of the multiple first sentence-image pairs to obtain a first keyword set, and extracting the keywords of the sentences of the multiple first sentence-image pairs to obtain a second keyword set;
calculating, according to the itemsets of the multiple first sentence-image pairs, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, to obtain the keyword association table.
In this way, the keyword association table is built from the collected training set, which gives the table a theoretical basis and lays the foundation for recommending sentences to the user.
Optionally, the training set further includes multiple unpaired sentences and multiple unpaired images.
In that case, calculating, according to the itemsets of the multiple first sentence-image pairs, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set to obtain the keyword association table includes:
calculating, according to the itemsets of the multiple first sentence-image pairs, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, to obtain an initial keyword association table;
for a first unpaired image among the multiple unpaired images, obtaining, from the initial keyword association table and according to the direct keywords of the first unpaired image, the indirect keywords associated with the direct keywords of the first unpaired image; calculating the similarity between the keywords of each unpaired sentence among the multiple unpaired sentences and the indirect keywords associated with the direct keywords of the first unpaired image; and forming a second sentence-image pair from the first unpaired image and each unpaired sentence whose similarity is greater than or equal to a third threshold, the first unpaired image being any one of the multiple unpaired images;
obtaining the itemset of each second sentence-image pair according to the multiple second sentence-image pairs;
calculating, according to the itemsets of the multiple first sentence-image pairs and the itemsets of the multiple second sentence-image pairs, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, to obtain the keyword association table.
Optionally, after the sentences whose matching degree with the target image is greater than or equal to the first threshold are recommended to the user, the method further includes:
determining a target sentence selected from the recommended sentences; forming a third sentence-image pair from the target sentence and the target image; and obtaining the itemset of the third sentence-image pair according to the third sentence-image pair;
calculating, according to the itemsets of the multiple first sentence-image pairs, the itemsets of the multiple second sentence-image pairs and the itemset of the third sentence-image pair, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, and updating the keyword association table.
In a second aspect, an embodiment of this application provides a sentence recommendation apparatus for implementing any of the methods of the first aspect, including corresponding functional modules, each used to implement a step of the above method.
In a third aspect, an embodiment of this application provides another sentence recommendation apparatus for implementing any of the methods of the first aspect, including a communication interface, a processor, a memory and a bus system, each used to implement a step of the above method.
In this application, N keywords of a target image are obtained, the N keywords including M direct keywords obtained by parsing the target image and indirect keywords associated with the M direct keywords, where N and M are positive integers and N > M; for a first sentence in a sentence library, similarities between the N keywords and the keywords included in the first sentence are calculated separately, and a matching degree between the target image and the first sentence is obtained according to those similarities, the first sentence being any sentence in the sentence library; and the sentences in the sentence library whose matching degree with the target image is greater than or equal to a first threshold are recommended to the user. Because the N keywords of the target image include not only the direct keywords but also the indirect keywords associated with them, and the indirect keywords are extended from the M direct keywords and can reflect the user's subjective intent and the mood of the image, recommending sentences according to both the direct and the indirect keywords makes the recommended sentences better fit the mood of the image and the user's subjective intent, improving user satisfaction.
Brief description of the drawings
To explain the technical solutions in this application more clearly, the drawings needed for the description of the embodiments are briefly introduced below; the drawings described below are only some examples of this application.
Fig. 1 is a schematic diagram of a system architecture to which this application is applicable;
Fig. 2a is a schematic flowchart of a sentence recommendation method provided by this application;
Fig. 2b is a schematic flowchart of obtaining the direct keywords of a target image;
Fig. 2c is a schematic diagram of the effect of a target sentence matched with a target image;
Fig. 2d is a schematic diagram of the overall flow of the sentence recommendation method in an embodiment of this application;
Fig. 3 is a schematic structural diagram of a sentence recommendation apparatus provided by this application;
Fig. 4 is a schematic structural diagram of another sentence recommendation apparatus provided by this application.
Detailed description of the embodiments
To make the purposes, technical solutions and advantages of this application clearer, this application is described in further detail below with reference to the drawings.
The terms "first", "second" and the like in the description, claims and drawings of this application are used to distinguish different objects, not to describe a particular order. In addition, the terms "include" and "have" and their variants are intended to cover non-exclusive inclusion. A process, method, system, product or device that includes a series of steps or units is not limited to the listed steps or units, and may optionally include steps or units that are not listed, or steps or units that are inherent to the process, method, product or device.
There are many existing ways to recommend sentences for an image. The most common one is to generate image tags according to the picture content, match the image tags against the keywords of the sentences in a sentence library, calculate the matching degree between the image and each sentence, and then recommend sentences for the image according to the matching degree. The sentence library may contain several types of sentences, such as verses, famous quotes and elegant descriptive sentences; the sentences in the library may also be filtered as needed, for example, if only verses are to be recommended, the library may contain only verses.
For example, if an image is parsed and found to contain the sea, the generated image tag may be "sea" or "ocean". Taking a sentence library that contains several types of sentences such as verses, famous quotes and elegant descriptive sentences as an example, "sea" or "ocean" is matched against the keywords of the sentences in the library, and the sentences finally recommended to the user may be "the sea is beautiful" and other sentences that include "sea".
As another example, if an image is parsed and found to contain the sea, and verses are to be recommended to the user, then, considering that the keywords of verses describing the sea or the ocean are mostly "deep blue sea", "sea", "rivers and seas" and the like, these corresponding keywords may be used as image tags and matched against the keywords of the sentences in the library; the sentences finally recommended to the user may be verses such as "a time will come to ride the wind and cleave the waves, and I will set my cloud-like sail to cross the deep blue sea", "the bright moon rises over the sea; far apart, we share this moment", and "on a small boat I shall drift from now on, entrusting the rest of my years to the rivers and seas".
From the above it can be seen that, in the prior art, image tags are generated directly from the picture content, and these tags are usually words that directly describe the picture content, such as "sea" and "deep blue sea" mentioned above; consequently, the sentences subsequently recommended to the user according to such words also tend to be sentences that include these directly descriptive words. However, people increasingly want the recommended sentences to fit the mood of the image and their subjective intent, and the sentences recommended in the above way cannot satisfy this demand. For example, if the picture content includes the sea, the subjective emotional associations the sea evokes in the user may be "subtlety" or "mystery", so the user may want sentences expressing "subtlety" or "mystery", which sentences that merely include the image tag cannot reflect. As another example, if the picture content includes a rising sun, the subjective emotional associations it evokes may be "striving" or "hope", so the user may want sentences about "striving" or "hope"; similarly, using the rising sun as the image tag can hardly yield sentences that reflect this subjective intent.
On this basis, this application provides a sentence recommendation method that recommends sentences to the user according to the M direct keywords of an image and the indirect keywords associated with the M direct keywords. Because the indirect keywords are extended from the M direct keywords and can reflect the user's subjective intent and the mood of the image, recommending sentences according to both the direct and the indirect keywords makes the recommended sentences better fit the mood of the image and the user's subjective intent, improving user satisfaction.
The sentence recommendation method in this application is applicable to several scenarios. In a first possible scenario, after the user chooses, on the terminal being used, the picture for which sentences are to be recommended, the terminal performs the sentence recommendation method in this application, recommends sentences to the user and presents them to the user. In a second possible scenario, after the user chooses, on the terminal being used, the picture for which sentences are to be recommended, a server connected to the terminal performs the sentence recommendation method in this application and recommends sentences to the user, and the terminal presents them to the user.
For the first and second possible scenarios above, Fig. 1 is a schematic diagram of a system architecture to which this application is applicable. As shown in Fig. 1, the system architecture includes a server 101 and one or more terminals, such as the first terminal 1021, second terminal 1022 and third terminal 1023 shown in Fig. 1. The first terminal 1021, second terminal 1022 and third terminal 1023 can communicate with the server 101 over a network (for example, a wireless network).
In the first possible scenario, the sentence recommendation method is performed by the first terminal 1021, the second terminal 1022 or the third terminal 1023 to recommend sentences to the user of that terminal. In this case, the keyword association table and the sentence library are stored on the server 101, and the server sends them to the terminal; the terminal receives and stores the keyword association table and the sentence library so that it can recommend sentences to the user based on them. The server 101 may update the keyword association table and the sentence library at a set period and send the updated table and library to the terminal, so that the terminal keeps them up to date and recommends more suitable sentences to the user; alternatively, the terminal may update the keyword association table and the sentence library at a set period.
In the second possible scenario, the sentence recommendation method is performed by the server 101 to recommend sentences to the user of the first terminal 1021, the second terminal 1022 or the third terminal 1023. In this case, the keyword association table and the sentence library are stored on the server 101 and need not be stored on the first terminal 1021, the second terminal 1022 or the third terminal 1023. After the user chooses, on one of these terminals, the picture for which sentences are to be recommended, the terminal sends the picture to the server 101; the server 101 performs the sentence recommendation method on the received picture and sends the recommended sentences back to the terminal. Similarly, the server 101 may update the keyword association table and the sentence library at a set period so as to recommend more suitable sentences to the user.
In this application, considering that the data volume of a picture is large and that sending the picture from the first terminal 1021, the second terminal 1022 or the third terminal 1023 to the server would put pressure on network bandwidth, the terminal may instead parse the picture for which sentences are to be recommended, obtain the direct keywords of the picture and send them to the server, and the server recommends sentences to the user according to the direct keywords of the picture, which effectively reduces the amount of data to be transmitted.
It should be noted that the update periods of the keyword association table and the sentence library may be the same or different, which is not limited here. The keyword association table may also be updated when triggered by other conditions (for example, the user selecting a target sentence from the recommended sentences), which is described in detail later.
In this application, a terminal may be a device with a picture display function and a sentence presentation function, and specifically may be a handheld device with a wireless connection function, another processing device connected to a radio modem, or a mobile terminal that communicates with one or more core networks through a radio access network. For example, the terminal may be a mobile phone, a computer or a tablet computer. The terminal may also be a portable, pocket-sized, handheld, computer-built-in or vehicle-mounted mobile device, or a part of a user equipment (UE). The server may be a computer device with processing capability.
Fig. 2 a is flow diagram corresponding to a kind of sentence recommended method provided by the present application, as shown in Figure 2 a, which comprises
Step 201, N number of keyword of target image is obtained;It include parsing the M direct keywords and a directly associated indirect keyword of keyword of the M that the target image obtains in N number of keyword;N, M are positive integer, and N > M;
Step 202, the first sentence being directed in statement library, calculate separately the similarity between the keyword that N number of keyword and first sentence include, and the similarity between the keyword for according to N number of keyword and first sentence including, obtain the target image and matching degree between a sentence;First sentence is any sentence in the statement library;
Step 203, the sentence that the matching degree in the statement library between the target image is more than or equal to first threshold is recommended into user.
Wherein, target image can be the image of sentence to be recommended, and the picture being specifically as follows in the photograph album that terminal is stored, which can be the picture of user's shooting, or, or the picture of user's drawing, specifically without limitation.In the application, due to both including direct keyword in N number of keyword of target image, it further include the associated indirect keyword of direct keyword, and the M directly associated indirect keyword of keyword can the M according to the direct keywords for being able to reflect user's subjectivity thought and image artistic conception that extend of keyword, therefore, recommend sentence according to direct keyword and indirect keyword, the sentence recommended can be made more to meet the sentence of image artistic conception and the subjective thought of user, improves user satisfaction.
Specifically, in step 201, the target image is parsed to obtain features of the target image. If the features of the target image are determined to include a face feature, the expression of the face is determined according to the face feature to obtain a face keyword; a scene keyword is obtained according to the features of the target image other than the face feature; and the M direct keywords are obtained according to the face keyword and the scene keyword.
The above process is further explained below.
In a specific implementation, parsing the target image may use a convolutional neural network for feature extraction. After the features of the target image are obtained by parsing, they are fed into an object detector to obtain the objects in the target image, such as a coffee cup, coffee beans, a person or a stone; the category to which each object belongs, such as fruit, flowers, animals, plants or daily necessities, can also be determined.
For a person in the image, face detection is performed to determine the face region, and the face is input to an expression classifier to determine whether the face is smiling. Face detection may use the existing AdaBoost classifier based on Haar features, and after a face is detected, a smile classifier may be trained with a 3-layer convolutional neural network. If the face is smiling, the obtained face keywords may be "smile", "smiling face" and so on.
Features other than those of the person are fed into scene classifiers to obtain scene keywords. In this application there are multiple scene classifiers, such as an event scene classifier, a place scene classifier and other scene classifiers, which may be implemented with a convolutional-network-based multi-scene recognition method. Correspondingly, scene keywords may be of several types, such as keywords characterizing a place (court, outdoors, etc.), keywords characterizing an event (having a meal, sleeping, playing football, etc.), and other keywords such as adventure, route, challenge, city, forest, grassland, sky, stream, seashore, rock, woods, ornaments, decoration, nature, sea of clouds, street, building, lake, rice field, animal, plant, flower, fireworks, village, ecology, traces, ruins, sunset, sunrise and so on.
For example, Fig. 2b is a schematic flowchart of obtaining the direct keywords of a target image. As shown in Fig. 2b, the target image is parsed to obtain feature 1 and feature 2, which are input to the object detection classifier. After a person is recognized from feature 1, face detection is performed and the face is input to the expression classifier to obtain face keywords such as "smile" and "happy". After feature 2 is determined to be a feature other than those of the person, it is fed into the scene classifiers to obtain scene keywords such as "court" and "having a meal". Then, according to the face keywords and the scene keywords, the direct keywords of the target image are obtained as "smile", "happy", "court" and "having a meal".
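For illustration only, the following is a minimal Python sketch of the direct-keyword extraction just described. The detector and classifier callables (detect_face, classify_expression, classify_scene) are hypothetical placeholders standing in for the CNN-based detectors mentioned above, not the API of any particular library.

```python
from typing import Callable, Iterable, List

def extract_direct_keywords(
    image_features: Iterable[str],
    detect_face: Callable[[str], bool],               # hypothetical face detector
    classify_expression: Callable[[str], List[str]],  # hypothetical expression classifier
    classify_scene: Callable[[str], List[str]],       # hypothetical scene classifiers
) -> List[str]:
    """Sketch of step 201: face keywords from the face feature, scene keywords from the rest."""
    keywords: List[str] = []
    for feature in image_features:
        if detect_face(feature):
            # e.g. a smile classifier yields face keywords such as "smile", "happy"
            keywords.extend(classify_expression(feature))
        else:
            # event/place/other scene classifiers yield scene keywords such as "court", "having a meal"
            keywords.extend(classify_scene(feature))
    return sorted(set(keywords))

# Toy usage mirroring the Fig. 2b example; all detectors are stubs.
features = ["feature_1_person", "feature_2_scene"]
print(extract_direct_keywords(
    features,
    detect_face=lambda f: "person" in f,
    classify_expression=lambda f: ["smile", "happy"],
    classify_scene=lambda f: ["court", "having a meal"],
))
# ['court', 'happy', 'having a meal', 'smile']
```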
In the above way, M direct keywords are obtained, namely keyword 1, keyword 2, keyword 3, ..., keyword M. For each of the M direct keywords, according to the keyword association table, the keywords whose degree of association with that direct keyword is greater than or equal to a second threshold are taken as the indirect keywords associated with that direct keyword, and the indirect keywords associated with the M direct keywords are obtained from the indirect keywords associated with each direct keyword.
In this application, the keyword association table contains degrees of association between keywords, as shown in Table 1.
Table 1: example of part of the keyword association table
According to Table 1, the keywords whose degree of association with keyword 1, keyword 2, keyword 3, ..., keyword M is greater than or equal to the second threshold are obtained, so as to obtain the indirect keywords associated with the M direct keywords. The second threshold may be set by those skilled in the art according to experience and actual conditions.
For example, if keyword 1 is "sunrise" and, after Table 1 is looked up, the degree of association between "hope" and "sunrise" is found to be greater than the second threshold, then "hope" is an indirect keyword of "sunrise".
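A minimal sketch of this lookup is given below, assuming the keyword association table is represented as a dict of dicts {direct keyword: {keyword: degree of association}}; the table contents and the threshold value are illustrative assumptions only.

```python
# Assumed representation of (part of) the keyword association table.
association_table = {
    "sunrise": {"hope": 0.82, "striving": 0.74, "breakfast": 0.31},
    "sea": {"mystery": 0.68, "subtlety": 0.61, "holiday": 0.40},
}

SECOND_THRESHOLD = 0.6  # assumed value of the "second threshold"

def indirect_keywords(direct_keywords, table, threshold=SECOND_THRESHOLD):
    """Return the keywords whose association with any direct keyword meets the threshold."""
    result = set()
    for kw in direct_keywords:
        for candidate, degree in table.get(kw, {}).items():
            if degree >= threshold:
                result.add(candidate)
    return result

print(indirect_keywords(["sunrise", "sea"], association_table))
# {'hope', 'striving', 'mystery', 'subtlety'}
```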
In this application, the keyword association table may be obtained by offline training. The process of generating the keyword association table is introduced below.
Specifically, a training set is first obtained, which includes multiple first sentence-image pairs, each including a sentence and the image corresponding to that sentence. For each first sentence-image pair, an itemset is obtained according to the direct keywords of its image and the keywords of its sentence. The direct keywords of the images of the multiple first sentence-image pairs are extracted to obtain a first keyword set, and the keywords of the sentences of the multiple first sentence-image pairs are extracted to obtain a second keyword set. According to the itemsets of the multiple first sentence-image pairs, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set is calculated, and the keyword association table is obtained.
In a specific implementation, various sentences (including famous quotes, elegant sentences about moods, and sentences about themes such as food and life) and images are collected from the Internet, and a sentence and the image corresponding to it are taken as a first sentence-image pair to obtain the training set; the training set also includes multiple unpaired sentences and multiple unpaired images. Here, "a sentence and the image corresponding to the sentence" means that when a sentence is collected, an image matching the sentence is also collected, or that when an image is collected, a sentence matching the image is also collected, and the matched sentence and image constitute a first sentence-image pair. Preferably, after various sentences are collected from the Internet, they may be screened, for example, sentences with negative emotions are filtered out by an emotion classifier; similarly, the collected images may also be screened, so as to ensure the reasonableness of the training set.
For the multiple first sentence-image pairs, the direct keywords of the images are obtained with image recognition technology and constitute the first keyword set, and the keywords of the sentences are obtained with natural language understanding technology and constitute the second keyword set. Table 2 shows an example of part of the direct keywords of the images and the keywords of the sentences of first sentence-image pairs.
Table 2: example of part of the first sentence-image pairs
According to Table 2, image A and sentence a constitute a first sentence-image pair; parsing image A gives the direct keywords "sunrise" and "sea", and sentence a is "rather than regretting the past, strive for the future", so {sunrise, sea, striving, future} may constitute an itemset. Image B and sentence b constitute a first sentence-image pair; parsing image B gives the direct keywords "fireworks", "smiling face" and "girl", and sentence b is "while there is still time, make yourself happy", so {fireworks, smiling face, girl, happy, oneself} may constitute an itemset. Image C and sentence c constitute a first sentence-image pair; parsing image C gives the direct keywords "blue sky", "rainbow" and "sea", and sentence c is "without going through wind and rain, how can one see a rainbow", so {blue sky, sea, wind and rain, rainbow} may constitute an itemset.
After multiple itemsets are obtained according to Table 2, an association analysis algorithm, such as frequent itemset mining, is used to obtain the degree of association confidence(Ai | Bj) between each keyword in the first keyword set and each keyword in the second keyword set, so as to obtain the initial keyword association table.
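The following sketch shows one plausible reading of this association-analysis step, computing confidence(a | b) = support({a, b}) / support({b}) over the itemsets for every image keyword b and sentence keyword a; the itemsets reuse the Table 2 example, and the exact confidence definition is an assumption.

```python
from collections import Counter
from itertools import product

def build_association_table(itemsets, image_keywords, sentence_keywords):
    """For each (image keyword b, sentence keyword a) pair, compute
    confidence(a | b) = support({a, b}) / support({b}) over the itemsets."""
    single = Counter()
    pair = Counter()
    for items in itemsets:
        items = set(items)
        for b in image_keywords & items:
            single[b] += 1
            for a in sentence_keywords & items:
                pair[(b, a)] += 1
    table = {}
    for b, a in product(image_keywords, sentence_keywords):
        if single[b]:
            table.setdefault(b, {})[a] = pair[(b, a)] / single[b]
    return table

# Itemsets from the Table 2 example (keyword translations are approximate).
itemsets = [
    {"sunrise", "sea", "striving", "future"},
    {"fireworks", "smiling face", "girl", "happy", "oneself"},
    {"blue sky", "sea", "wind and rain", "rainbow"},
]
image_kw = {"sunrise", "sea", "fireworks", "smiling face", "girl", "blue sky", "rainbow"}
sentence_kw = {"striving", "future", "happy", "oneself", "wind and rain", "rainbow"}
table = build_association_table(itemsets, image_kw, sentence_kw)
print(table["sunrise"]["striving"])  # 1.0 in this toy example
```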
Further, for a first unpaired image among the multiple unpaired images, the indirect keywords associated with the direct keywords of the first unpaired image are obtained from the initial keyword association table according to the direct keywords of the first unpaired image. The similarity between the keywords of each unpaired sentence among the multiple unpaired sentences and the indirect keywords associated with the direct keywords of the first unpaired image is calculated, and each unpaired sentence whose similarity is greater than or equal to a third threshold forms a second sentence-image pair with the first unpaired image. The first unpaired image is any one of the multiple unpaired images.
In a specific implementation, the first unpaired image is parsed and its direct keyword is found to be "sunrise"; the indirect keyword associated with "sunrise", for example "striving", can then be obtained from the initial keyword association table. The similarity between the keywords of each of the multiple unpaired sentences and "striving" is calculated, and the unpaired sentence whose similarity is greater than or equal to the third threshold is found to be "we should work hard and accomplish something", so this unpaired sentence and the first unpaired image can form a second sentence-image pair. The third threshold may be set by those skilled in the art according to experience and actual conditions.
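A sketch of this pairing step follows. The similarity function, the way similarities are aggregated (here the maximum), the data layout and the threshold values are all illustrative assumptions; in practice the similarity would come from a word-vector model as described later.

```python
def pair_unpaired(unpaired_images, unpaired_sentences, initial_table,
                  keyword_similarity, second_threshold=0.6, third_threshold=0.5):
    """Form second sentence-image pairs: look up the indirect keywords of each unpaired
    image in the initial association table, then attach every unpaired sentence whose
    keywords are similar enough to those indirect keywords."""
    pairs = []
    for image in unpaired_images:
        indirect = set()
        for kw in image["direct_keywords"]:
            for candidate, degree in initial_table.get(kw, {}).items():
                if degree >= second_threshold:
                    indirect.add(candidate)
        for sentence in unpaired_sentences:
            sims = [keyword_similarity(a, b)
                    for a in sentence["keywords"] for b in indirect]
            if sims and max(sims) >= third_threshold:
                pairs.append((sentence, image))
    return pairs

# Toy usage; exact-match similarity stands in for a word-vector similarity.
images = [{"direct_keywords": ["sunrise"]}]
sentences = [{"text": "we should work hard and accomplish something",
              "keywords": ["work hard", "striving"]}]
table = {"sunrise": {"striving": 0.74}}
print(pair_unpaired(images, sentences, table,
                    keyword_similarity=lambda a, b: 1.0 if a == b else 0.0))
```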
After multiple second sentence-image pairs are obtained in the above way, the itemset of each second sentence-image pair can be obtained, and then, according to the itemsets of the multiple first sentence-image pairs and the itemsets of the multiple second sentence-image pairs, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set is calculated to obtain the keyword association table.
In step 202, if the keyword included in the first sentence is keyword 1a and the N keywords of the target image are keyword 1b, keyword 2b, ..., keyword Nb, then the similarity between keyword 1b and keyword 1a, the similarity between keyword 2b and keyword 1a, ..., and the similarity between keyword Nb and keyword 1a are calculated separately, and the similarities between the N keywords and keyword 1a are summed with weights to obtain the matching degree between the target image and the first sentence.
As another example, if the keywords included in the first sentence are keyword 1a and keyword 2a, and the N keywords of the target image are keyword 1b, keyword 2b, ..., keyword Nb, then the similarity between keyword 1b and keyword 1a and the similarity between keyword 1b and keyword 2a are calculated and summed with weights to obtain the matching degree between keyword 1b and the first sentence; similarly, the matching degrees between keyword 2b, ..., keyword Nb and the first sentence are obtained, and then the matching degrees between keyword 1b, keyword 2b, ..., keyword Nb and the first sentence are summed with weights to obtain the matching degree between the target image and the first sentence.
The weighted summation mentioned above is described in detail below. Taking the weighted summation of the similarities between keyword 1b and keyword 1a and between keyword 1b and keyword 2a as an example, the weight values may be determined in two ways: (1) the same weight value is set, that is, the weight of the similarity between keyword 1b and keyword 1a is the same as the weight of the similarity between keyword 1b and keyword 2a, for example both are 0.5; (2) a similarity threshold is set, the weight of any similarity below the similarity threshold is set to 0 (for example, if the similarity between keyword 1b and keyword 1a is below the similarity threshold, its weight is set to 0), and if W similarities are greater than or equal to the similarity threshold, the weight of each of them is set to 1/W.
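A small sketch of the two weight-assignment options is given below; the similarity threshold value is an assumption.

```python
def weighted_similarity(similarities, mode="equal", similarity_threshold=0.5):
    """Aggregate the similarities between one image keyword and a sentence's keywords.
    mode "equal":     every similarity gets the same weight (1 / number of similarities).
    mode "threshold": similarities below the threshold get weight 0; the W remaining
                      similarities each get weight 1/W."""
    if not similarities:
        return 0.0
    if mode == "equal":
        weights = [1.0 / len(similarities)] * len(similarities)
    else:
        kept = [s for s in similarities if s >= similarity_threshold]
        if not kept:
            return 0.0
        weights = [(1.0 / len(kept)) if s >= similarity_threshold else 0.0
                   for s in similarities]
    return sum(w * s for w, s in zip(weights, similarities))

print(weighted_similarity([0.9, 0.2], mode="equal"))      # 0.55
print(weighted_similarity([0.9, 0.2], mode="threshold"))  # 0.9
```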
In this application, taking keyword 1b and keyword 1a as an example, when the similarity between keyword 1b and keyword 1a is calculated, each keyword may be represented as a vector of a certain dimension according to a word-vector model, and calculating the similarity between two keywords actually calculates the distance between the two vectors; the distance may be the Euclidean distance or the cosine distance, which is not limited here.
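For example, a minimal sketch of the cosine-distance option; the vectors below are dummy values, whereas a real system would take them from a trained word-vector model.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two keyword vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Dummy 4-dimensional "word vectors" for illustration only.
vectors = {
    "sunrise": [0.9, 0.1, 0.3, 0.0],
    "hope":    [0.8, 0.2, 0.4, 0.1],
    "salad":   [0.0, 0.9, 0.0, 0.7],
}
print(round(cosine_similarity(vectors["sunrise"], vectors["hope"]), 3))   # high similarity
print(round(cosine_similarity(vectors["sunrise"], vectors["salad"]), 3))  # low similarity
```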
To recommend sentences for the user more accurately, this application may further determine the theme to which the first sentence belongs, determine the weight value corresponding to that theme according to the theme and the theme-to-weight correspondence table, and then obtain the matching degree between the target image and the first sentence according to the similarities between the N keywords and the keywords included in the first sentence and the weight value corresponding to the theme of the first sentence.
In this application, the keywords of the sentences in the sentence library may be clustered in advance with an unsupervised learning method, for example the kmeans clustering algorithm, to obtain different classes, where one class is one theme, so that the theme to which each sentence in the sentence library belongs is obtained. Specifically, the theme of each sentence may be determined to be striving, mood, food, sleep, life, work, reading, travel and so on.
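As an illustration of this clustering step, the sketch below runs k-means over simple TF-IDF vectors of the sentences; it assumes scikit-learn is available, and the sentences, features and number of clusters are illustrative choices rather than the embodiment's exact setup.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "rather than regretting the past, strive for the future",
    "we should work hard and accomplish something",
    "while there is still time, make yourself happy",
    "a quiet evening, a warm meal, a happy mood",
]
vectors = TfidfVectorizer().fit_transform(sentences)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
# Each cluster label stands for one theme (e.g. "striving", "mood").
for sentence, label in zip(sentences, labels):
    print(label, sentence)
```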
Further, the theme of each sentence may be stored in several ways. One possible implementation is to assign a theme identifier to each theme, for example theme 1, theme 2, theme 3 and so on; if the theme of sentence a is theme 1, the label of theme 1 may be set for sentence a, and if the theme of sentence b is theme 2, the label of theme 2 may be set for sentence b, so that the theme of each sentence can be determined from its label. Another possible implementation is to store the theme of each sentence in the form of a data table, as in the example of sentences and their themes shown in Table 3.
Table 3: example of part of the sentences and their themes
Further, after the theme of the first sentence is determined, the weight value corresponding to that theme can be obtained from the theme-to-weight correspondence table, part of which is shown in Table 4.
Table 4: example of part of the theme-to-weight correspondence table
Thus, in step 203, the average of the similarities between the N keywords of the target image and the keywords included in the first sentence may be multiplied by the weight value, for this user, of the theme of the first sentence, to obtain the matching degree between the target image and the first sentence. The first sentence may be any sentence in the sentence library, so the sentences whose matching degree with the target image is greater than or equal to the first threshold can be recommended to the user according to the matching degree between each sentence in the library and the target image. The first threshold may be set by those skilled in the art according to experience and actual conditions. In a specific implementation, the matching degrees between the sentences in the library and the target image may also be sorted in descending order, and a preset number of top-ranked sentences recommended to the user.
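The sketch below ties steps 202 and 203 together under the reading just given: the matching degree is the average keyword similarity multiplied by the user's weight for the sentence's theme, and sentences are then filtered by the first threshold or by rank. The data layout, the threshold and the stand-in similarity function are illustrative assumptions.

```python
def matching_degree(image_keywords, sentence, theme_weights, similarity):
    """Average similarity between the image keywords and the sentence keywords,
    multiplied by the user's weight for the sentence's theme."""
    sims = [similarity(a, b) for a in image_keywords for b in sentence["keywords"]]
    avg = sum(sims) / len(sims) if sims else 0.0
    return avg * theme_weights.get(sentence["theme"], 0.0)

def recommend(image_keywords, library, theme_weights, similarity,
              first_threshold=0.05, top_k=None):
    scored = [(matching_degree(image_keywords, s, theme_weights, similarity), s)
              for s in library]
    scored.sort(key=lambda x: x[0], reverse=True)
    if top_k is not None:
        return [s for _, s in scored[:top_k]]
    return [s for score, s in scored if score >= first_threshold]

# Toy usage with an exact-match stand-in for word-vector similarity.
library = [
    {"text": "rather than regretting the past, strive for the future",
     "keywords": ["striving", "future"], "theme": "striving"},
    {"text": "the salad has many different types of vegetables in it",
     "keywords": ["salad", "vegetables"], "theme": "food"},
]
weights = {"striving": 0.6, "food": 0.4}
sim = lambda a, b: 1.0 if a == b else 0.0
print([s["text"] for s in recommend(["sunrise", "hope", "striving"],
                                    library, weights, sim)])
```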
The user may select one of the multiple recommended sentences as the target sentence matched with the target image. In this application, the target sentence may be rendered onto the target image in pixel form, or may exist separately together with the target image; this is not limited here. Fig. 2c is a schematic diagram of the effect of a target sentence matched with a target image.
The theme-to-weight correspondence table is described in detail below.
In this application, a weight value may be assigned to each theme in advance. Specifically, if there are n themes, the initial weight value assigned to each theme may be 1/n, as shown in Table 5.
Table 5: initial theme-to-weight correspondence table
The theme weight values can subsequently be updated according to the target sentence the user selects from the recommended sentences. For example, if the theme of the target sentence selected by user A is theme 1, the weight value of theme 1 can be increased for user A, and the weight values of the other themes decreased accordingly, thereby updating the theme-to-weight correspondence table.
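A minimal sketch of this feedback update is shown below; the text only says the selected theme's weight is increased and the others decreased accordingly, so the additive step size and the renormalization used here are assumptions.

```python
def update_theme_weights(weights, selected_theme, step=0.1):
    """Increase the weight of the theme of the selected target sentence and renormalize
    so the weights still sum to 1; the step size is an illustrative assumption."""
    updated = dict(weights)
    updated[selected_theme] = updated.get(selected_theme, 0.0) + step
    total = sum(updated.values())
    return {theme: w / total for theme, w in updated.items()}

weights = {"striving": 0.25, "mood": 0.25, "food": 0.25, "travel": 0.25}  # initial 1/n
print(update_theme_weights(weights, "striving"))
# "striving" rises above 0.25 and the other themes fall slightly below it
```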
It should be noted that Table 4 and Table 5 are only examples in which the theme weight values of multiple users are stored in one table; in this application, a separate theme-to-weight correspondence table may also be stored for each user.
Specifically, if the above sentence recommendation method is performed by the terminal, the terminal may store a theme-to-weight correspondence table for the user of the terminal and update it after the user selects a target sentence, or the terminal may update the table at a set period according to the target sentences selected by the user, so that sentences subsequently recommended according to the updated table better match the user's preferences. If the above sentence recommendation method is performed by the server, the server may store one theme-to-weight correspondence table for each user, or store the theme weight values of multiple users in one table, and update the table after the user selects a target sentence or at a set period according to the target sentences selected by the user.
After the user selects a target sentence, the keyword association table may also be updated in this application. Specifically, the target sentence and the target image may form a third sentence-image pair; the itemset of the third sentence-image pair is obtained according to the third sentence-image pair; and the degree of association between each keyword in the first keyword set and each keyword in the second keyword set is calculated according to the itemsets of the multiple first sentence-image pairs, the itemsets of the multiple second sentence-image pairs and the itemset of the third sentence-image pair, so as to update the keyword association table.
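This update can reuse the same association-analysis step sketched earlier: the itemset of the third sentence-image pair is simply added to the pool and the table is recomputed. The snippet below is not self-contained; it relies on the build_association_table function from the earlier sketch and is illustrative only.

```python
def update_association_table(first_itemsets, second_itemsets, third_itemset,
                             image_keywords, sentence_keywords):
    """Recompute the keyword association table over all itemsets, including the new
    third sentence-image pair; reuses build_association_table from the earlier sketch."""
    all_itemsets = list(first_itemsets) + list(second_itemsets) + [set(third_itemset)]
    return build_association_table(all_itemsets, image_keywords, sentence_keywords)
```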
In a specific implementation, the above update process may be performed after the user selects the target sentence, or may be performed at a set period according to the target sentences selected by the user; this is not limited here.
It can thus be seen that, in this application, the theme-to-weight correspondence table and the keyword association table are updated according to the target sentence selected by the user, that is, according to feedback information, so that the sentences subsequently recommended to the user according to the updated theme-to-weight correspondence table and keyword association table better match the user's preferences.
Fig. 2d is a schematic diagram of the overall flow of the sentence recommendation method in an embodiment of this application. Fig. 2d illustrates, in a more vivid way, the process of recommending sentences to the user in this embodiment and the process of updating the keyword association table and the theme-to-weight correspondence table according to the target sentence the user selects from the recommended sentences; it corresponds to the content described in the above embodiments and is not repeated here.
For the above method flow, an embodiment of the present invention also provides a sentence recommendation apparatus, whose specific content can be implemented with reference to the above method.
Fig. 3 is a schematic structural diagram of a sentence recommendation apparatus provided by an embodiment of the present invention; the apparatus is used to perform the above method flow. As shown in Fig. 3, the sentence recommendation apparatus 300 includes:
an obtaining module 301, configured to obtain N keywords of a target image, the N keywords including M direct keywords obtained by parsing the target image and indirect keywords associated with the M direct keywords, where N and M are positive integers and N > M;
a processing module 302, configured to, for a first sentence in a sentence library, separately calculate similarities between the N keywords and the keywords included in the first sentence, and obtain a matching degree between the target image and the first sentence according to those similarities, the first sentence being any sentence in the sentence library;
a recommending module 303, configured to recommend to the user the sentences in the sentence library whose matching degree with the target image is greater than or equal to a first threshold.
Optionally, the processing module 302 is specifically configured to: determine the theme to which the first sentence belongs; determine, according to that theme and the theme-to-weight correspondence table, the weight value corresponding to the theme of the first sentence; and obtain the matching degree between the target image and the first sentence according to the similarities between the N keywords and the keywords included in the first sentence and the weight value corresponding to the theme of the first sentence.
Optionally, the processing module 302 is further configured to: determine a target sentence selected from the recommended sentences and the theme to which the target sentence belongs, and increase the weight value corresponding to that theme in the theme-to-weight correspondence table.
Optionally, the processing module 302 is specifically configured to: parse the target image to obtain the M direct keywords; for each of the M direct keywords, take, according to the keyword association table, the keywords whose degree of association with that direct keyword is greater than or equal to a second threshold as the indirect keywords associated with that direct keyword, the keyword association table containing degrees of association between keywords; and obtain, from the indirect keywords associated with each direct keyword, the indirect keywords associated with the M direct keywords.
Optionally, the processing module 302 is specifically configured to: parse the target image to obtain features of the target image; if the features of the target image are determined to include a face feature, determine the expression of the face according to the face feature to obtain a face keyword; obtain a scene keyword according to the features of the target image other than the face feature; and obtain the M direct keywords according to the face keyword and the scene keyword.
Optionally, the processing module 302 is specifically configured to:
obtain a training set that includes multiple first sentence-image pairs, each first sentence-image pair including a sentence and the image corresponding to that sentence;
obtain an itemset for each first sentence-image pair according to the direct keywords of its image and the keywords of its sentence;
extract the direct keywords of the images of the multiple first sentence-image pairs to obtain a first keyword set, and extract the keywords of the sentences of the multiple first sentence-image pairs to obtain a second keyword set;
calculate, according to the itemsets of the multiple first sentence-image pairs, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, to obtain the keyword association table.
Optionally, the training set further includes multiple unpaired sentences and multiple unpaired images.
In that case, the processing module 302 is specifically configured to:
calculate, according to the itemsets of the multiple first sentence-image pairs, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, to obtain an initial keyword association table;
for a first unpaired image among the multiple unpaired images, obtain, from the initial keyword association table and according to the direct keywords of the first unpaired image, the indirect keywords associated with the direct keywords of the first unpaired image; calculate the similarity between the keywords of each unpaired sentence among the multiple unpaired sentences and the indirect keywords associated with the direct keywords of the first unpaired image; and form a second sentence-image pair from the first unpaired image and each unpaired sentence whose similarity is greater than or equal to a third threshold, the first unpaired image being any one of the multiple unpaired images;
obtain the itemset of each second sentence-image pair according to the multiple second sentence-image pairs;
calculate, according to the itemsets of the multiple first sentence-image pairs and the itemsets of the multiple second sentence-image pairs, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, to obtain the keyword association table.
Optionally, the processing module 302 is further configured to:
determine a target sentence selected from the recommended sentences;
form a third sentence-image pair from the target sentence and the target image;
obtain an itemset of the third sentence-image pair; and
calculate, from the itemsets of the multiple first sentence-image pairs, the itemsets of the multiple second sentence-image pairs, and the itemset of the third sentence-image pair, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, to update the keyword association table.
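The feedback step can be pictured as nothing more than appending the itemset of the newly formed third pair and recomputing the table; in the sketch below, rebuild_association_table is a placeholder for the same association-degree computation used when the table was first built.

```python
# Sketch of the feedback step: the sentence the user selects and the target
# image form a third pair whose itemset joins the stored itemsets, after which
# the association degrees are recomputed.

def rebuild_association_table(pairs):
    """Placeholder for the association-degree computation over all itemsets."""
    return {"pairs_used": len(pairs)}


def on_sentence_selected(stored_pairs, target_image_keywords, selected_sentence_keywords):
    third_pair = (target_image_keywords, selected_sentence_keywords)
    stored_pairs.append(third_pair)            # first, second and now third pairs
    return rebuild_association_table(stored_pairs)


if __name__ == "__main__":
    pairs = [(["beach"], ["vacation"])]        # previously collected pairs
    print(on_sentence_selected(pairs, ["sunset"], ["golden hour"]))   # {'pairs_used': 2}
```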
Fig. 4 is a structural schematic diagram of another sentence recommendation apparatus provided in an embodiment of the present invention; the apparatus is configured to execute the above method procedure. As shown in Fig. 4, the sentence recommendation apparatus 400 includes a communication interface 401, a processor 402, a memory 403, and a bus system 404.
The memory 403 is configured to store a program. Specifically, the program may include program code, and the program code includes computer operation instructions. The memory 403 may be a random access memory (RAM) or a non-volatile memory, for example, at least one magnetic disk storage. Only one memory is illustrated in the figure; certainly, multiple memories may be provided as required. The memory 403 may also be a memory inside the processor 402.
The memory 403 stores the following elements: executable modules or data structures, or a subset thereof, or a superset thereof:
Operation instructions: including various operation instructions, used to implement various operations.
Operating system: including various system programs, used to implement various basic services and to process hardware-based tasks.
The processor 402 controls the operation of the sentence recommendation apparatus 400 and may also be referred to as a CPU (Central Processing Unit). In a specific application, the components of the sentence recommendation apparatus 400 are coupled together through the bus system 404, which may include a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of description, however, all buses are labeled as the bus system 404 in the figure, and Fig. 4 is drawn only schematically for ease of illustration.
The method disclosed in the above embodiments of the present application may be applied to, or implemented by, the processor 402. The processor 402 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 402 or by instructions in the form of software. The processor 402 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments of the present application may be executed and completed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 403, and the processor 402 reads the information in the memory 403 and completes the above method steps in combination with its hardware.
It can be seen from the above that, in the present application, N keywords of a target image are obtained, where the N keywords include M direct keywords obtained by parsing the target image and indirect keywords associated with the M direct keywords; N and M are positive integers, and N > M. For a first sentence in a statement library, the similarities between the N keywords and the keywords included in the first sentence are calculated separately, and a matching degree between the target image and the first sentence is obtained from those similarities, where the first sentence is any sentence in the statement library. The sentences in the statement library whose matching degree with the target image is greater than or equal to a first threshold are then recommended to the user. Because the N keywords of the target image include not only the direct keywords but also the indirect keywords associated with them, and the indirect keywords associated with the M direct keywords are extended from the M direct keywords so as to reflect the user's subjective thought and the artistic conception of the image, recommending sentences based on both direct and indirect keywords makes the recommended sentences better match the artistic conception of the image and the subjective thought of the user, thereby improving user satisfaction.
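Pulling the pieces together, the recommendation step itself might look like the following sketch, where exact keyword overlap stands in for the similarity calculation and the first threshold value is assumed; the theme-based weighting recited in the claims is omitted for brevity.

```python
# Hypothetical end-to-end sketch of the recommendation step: the N keywords
# (direct plus indirect) of the target image are compared with the keywords of
# every sentence in the statement library, and sentences whose matching degree
# reaches the first threshold are recommended.

FIRST_THRESHOLD = 0.5


def matching_degree(image_keywords, sentence_keywords):
    """Assumed matching degree: the fraction of the sentence's keywords that
    also appear among the image's N keywords."""
    image_keywords, sentence_keywords = set(image_keywords), set(sentence_keywords)
    if not sentence_keywords:
        return 0.0
    return len(image_keywords & sentence_keywords) / len(sentence_keywords)


def recommend(image_keywords, statement_library, threshold=FIRST_THRESHOLD):
    """statement_library: mapping of sentence text to its keywords."""
    return [sentence for sentence, keywords in statement_library.items()
            if matching_degree(image_keywords, keywords) >= threshold]


if __name__ == "__main__":
    n_keywords = ["beach", "smile", "vacation", "happiness", "freedom"]
    library = {
        "Life is better at the beach.": ["beach", "vacation"],
        "City lights never sleep.": ["city", "night"],
    }
    print(recommend(n_keywords, library))   # ['Life is better at the beach.']
```

A richer similarity measure (synonym dictionaries or word embeddings, for instance) would change only matching_degree; the surrounding flow stays the same.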
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method or a computer program product. Therefore, the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, and thus the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art may make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include them.

Claims (16)

  1. A sentence recommendation method, characterized in that the method comprises:
    obtaining N keywords of a target image, wherein the N keywords comprise M direct keywords obtained by parsing the target image and indirect keywords associated with the M direct keywords, N and M are positive integers, and N > M;
    for a first sentence in a statement library, separately calculating similarities between the N keywords and keywords comprised in the first sentence, and obtaining, from the similarities between the N keywords and the keywords comprised in the first sentence, a matching degree between the target image and the first sentence, wherein the first sentence is any sentence in the statement library; and
    recommending, to a user, sentences in the statement library whose matching degree with the target image is greater than or equal to a first threshold.
  2. The method according to claim 1, characterized in that, before the obtaining of the matching degree between the target image and the first sentence, the method further comprises:
    determining a theme to which the first sentence belongs; and
    determining, from the theme to which the first sentence belongs and a theme-weight correspondence table, a weight value corresponding to the theme to which the first sentence belongs;
    and in that the obtaining, from the similarities between the N keywords and the keywords comprised in the first sentence, of the matching degree between the target image and the first sentence comprises:
    obtaining the matching degree between the target image and the first sentence from the similarities between the N keywords and the keywords comprised in the first sentence and from the weight value corresponding to the theme to which the first sentence belongs.
  3. The method according to claim 2, characterized in that, after the recommending, to the user, of the sentences in the statement library whose matching degree with the target image is greater than or equal to the first threshold, the method further comprises:
    determining a target sentence selected from the recommended sentences and a theme to which the target sentence belongs; and
    increasing, in the theme-weight correspondence table, the weight value corresponding to the theme to which the target sentence belongs.
  4. The method according to any one of claims 1-3, characterized in that the obtaining of the N keywords of the target image comprises:
    parsing the target image to obtain the M direct keywords;
    for each of the M direct keywords, taking, according to a keyword association table, every keyword whose degree of association with that direct keyword is greater than or equal to a second threshold as an indirect keyword associated with that direct keyword, wherein the keyword association table comprises degrees of association between multiple keywords; and
    obtaining, from the indirect keywords associated with each direct keyword, the indirect keywords associated with the M direct keywords.
  5. The method according to claim 4, characterized in that the parsing of the target image to obtain the M direct keywords comprises:
    parsing the target image to obtain features of the target image;
    if it is determined that the features of the target image comprise a face feature, determining an expression of the face according to the face feature to obtain a face keyword;
    obtaining a scene keyword from the features of the target image other than the face feature; and
    obtaining the M direct keywords from the face keyword and the scene keyword.
  6. The method according to claim 4, characterized in that the keyword association table is obtained in the following way:
    obtaining a training set, wherein the training set comprises multiple first sentence-image pairs, and each of the multiple first sentence-image pairs comprises a sentence and an image corresponding to the sentence;
    obtaining an itemset of each first sentence-image pair from the direct keywords of the image of that first sentence-image pair and the keywords of the sentence of that first sentence-image pair;
    extracting the direct keywords of the image of each of the multiple first sentence-image pairs to obtain a first keyword set, and extracting the keywords of the sentence of each of the multiple first sentence-image pairs to obtain a second keyword set; and
    calculating, from the itemsets of the multiple first sentence-image pairs, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, to obtain the keyword association table.
  7. The method according to claim 6, characterized in that the training set further comprises multiple unpaired sentences and multiple unpaired images; and
    the calculating, from the itemsets of the multiple first sentence-image pairs, of the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, to obtain the keyword association table, comprises:
    calculating, from the itemsets of the multiple first sentence-image pairs, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, to obtain an initial keyword association table;
    for a first unpaired image in the multiple unpaired images, obtaining, from the initial keyword association table according to the direct keywords of the first unpaired image, the indirect keywords associated with the direct keywords of the first unpaired image; calculating the similarity between the keywords of each of the multiple unpaired sentences and the indirect keywords associated with the direct keywords of the first unpaired image, and forming a second sentence-image pair from the first unpaired image and each unpaired sentence whose similarity is greater than or equal to a third threshold, wherein the first unpaired image is any unpaired image in the multiple unpaired images;
    obtaining an itemset of each of the multiple second sentence-image pairs; and
    calculating, from the itemsets of the multiple first sentence-image pairs and the itemsets of the multiple second sentence-image pairs, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, to obtain the keyword association table.
  8. The method according to claim 7, characterized in that, after the recommending, to the user, of the sentences in the statement library whose matching degree with the target image is greater than or equal to the first threshold, the method further comprises:
    determining a target sentence selected from the recommended sentences;
    forming a third sentence-image pair from the target sentence and the target image;
    obtaining an itemset of the third sentence-image pair; and
    calculating, from the itemsets of the multiple first sentence-image pairs, the itemsets of the multiple second sentence-image pairs, and the itemset of the third sentence-image pair, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, to update the keyword association table.
  9. A sentence recommendation apparatus, characterized in that the apparatus comprises:
    a processor, configured to obtain N keywords of a target image, wherein the N keywords comprise M direct keywords obtained by parsing the target image and indirect keywords associated with the M direct keywords, N and M are positive integers, and N > M; and, for a first sentence in a statement library, separately calculate similarities between the N keywords and keywords comprised in the first sentence, and obtain, from the similarities between the N keywords and the keywords comprised in the first sentence, a matching degree between the target image and the first sentence, wherein the first sentence is any sentence in the statement library; and
    a communication interface, configured to recommend, to a user, sentences in the statement library whose matching degree with the target image is greater than or equal to a first threshold.
  10. The apparatus according to claim 9, characterized in that the processor is specifically configured to:
    determine a theme to which the first sentence belongs;
    determine, from the theme to which the first sentence belongs and a theme-weight correspondence table, a weight value corresponding to the theme to which the first sentence belongs; and
    obtain the matching degree between the target image and the first sentence from the similarities between the N keywords and the keywords comprised in the first sentence and from the weight value corresponding to the theme to which the first sentence belongs.
  11. The apparatus according to claim 10, characterized in that the processor is further configured to:
    determine a target sentence selected from the recommended sentences and a theme to which the target sentence belongs; and
    increase, in the theme-weight correspondence table, the weight value corresponding to the theme to which the target sentence belongs.
  12. The apparatus according to any one of claims 9-11, characterized in that the processor is specifically configured to:
    parse the target image to obtain the M direct keywords;
    for each of the M direct keywords, take, according to a keyword association table, every keyword whose degree of association with that direct keyword is greater than or equal to a second threshold as an indirect keyword associated with that direct keyword, wherein the keyword association table comprises degrees of association between multiple keywords; and
    obtain, from the indirect keywords associated with each direct keyword, the indirect keywords associated with the M direct keywords.
  13. The apparatus according to claim 12, characterized in that the processor is specifically configured to:
    parse the target image to obtain features of the target image;
    if it is determined that the features of the target image comprise a face feature, determine an expression of the face according to the face feature to obtain a face keyword;
    obtain a scene keyword from the features of the target image other than the face feature; and
    obtain the M direct keywords from the face keyword and the scene keyword.
  14. The apparatus according to claim 12, characterized in that the processor is specifically configured to:
    obtain a training set, wherein the training set comprises multiple first sentence-image pairs, and each of the multiple first sentence-image pairs comprises a sentence and an image corresponding to the sentence;
    obtain an itemset of each first sentence-image pair from the direct keywords of the image of that first sentence-image pair and the keywords of the sentence of that first sentence-image pair;
    extract the direct keywords of the image of each of the multiple first sentence-image pairs to obtain a first keyword set, and extract the keywords of the sentence of each of the multiple first sentence-image pairs to obtain a second keyword set; and
    calculate, from the itemsets of the multiple first sentence-image pairs, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, to obtain the keyword association table.
  15. The apparatus according to claim 14, characterized in that the training set further comprises multiple unpaired sentences and multiple unpaired images; and
    the processor is specifically configured to:
    calculate, from the itemsets of the multiple first sentence-image pairs, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, to obtain an initial keyword association table;
    for a first unpaired image in the multiple unpaired images, obtain, from the initial keyword association table according to the direct keywords of the first unpaired image, the indirect keywords associated with the direct keywords of the first unpaired image; calculate the similarity between the keywords of each of the multiple unpaired sentences and the indirect keywords associated with the direct keywords of the first unpaired image, and form a second sentence-image pair from the first unpaired image and each unpaired sentence whose similarity is greater than or equal to a third threshold, wherein the first unpaired image is any unpaired image in the multiple unpaired images;
    obtain an itemset of each of the multiple second sentence-image pairs; and
    calculate, from the itemsets of the multiple first sentence-image pairs and the itemsets of the multiple second sentence-image pairs, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, to obtain the keyword association table.
  16. The apparatus according to claim 15, characterized in that the processor is further configured to:
    determine a target sentence selected from the recommended sentences;
    form a third sentence-image pair from the target sentence and the target image;
    obtain an itemset of the third sentence-image pair; and
    calculate, from the itemsets of the multiple first sentence-image pairs, the itemsets of the multiple second sentence-image pairs, and the itemset of the third sentence-image pair, the degree of association between each keyword in the first keyword set and each keyword in the second keyword set, to update the keyword association table.
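Purely as an illustration of the theme-weighting refinement recited in claims 2, 3, 10 and 11, and not as part of the claim language, the sketch below scales the keyword-similarity score by a per-theme weight and raises that weight once the user selects a sentence of the corresponding theme; the concrete weight values and the increment step are assumptions.

```python
# Illustrative sketch only: theme-weighted matching degree and the feedback
# adjustment of the theme-weight correspondence table.

THEME_WEIGHTS = {"travel": 1.2, "romance": 1.0, "humor": 0.8}   # assumed table


def weighted_matching_degree(keyword_similarity, sentence_theme, theme_weights=THEME_WEIGHTS):
    """Matching degree = keyword similarity scaled by the weight of the sentence's theme."""
    return keyword_similarity * theme_weights.get(sentence_theme, 1.0)


def on_user_selected(sentence_theme, theme_weights=THEME_WEIGHTS, step=0.1):
    """Tune up the weight of the theme the user preferred."""
    theme_weights[sentence_theme] = theme_weights.get(sentence_theme, 1.0) + step
    return theme_weights


if __name__ == "__main__":
    print(weighted_matching_degree(0.6, "travel"))   # ~0.72
    print(on_user_selected("travel"))                # travel weight raised to ~1.3
```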
CN201680088593.9A 2016-12-26 2016-12-26 Statement recommendation method and device Active CN109643332B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/112163 WO2018119593A1 (en) 2016-12-26 2016-12-26 Statement recommendation method and device

Publications (2)

Publication Number Publication Date
CN109643332A true CN109643332A (en) 2019-04-16
CN109643332B CN109643332B (en) 2021-02-23

Family

ID=62706590

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680088593.9A Active CN109643332B (en) 2016-12-26 2016-12-26 Statement recommendation method and device

Country Status (2)

Country Link
CN (1) CN109643332B (en)
WO (1) WO2018119593A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110297934A (en) * 2019-07-04 2019-10-01 腾讯科技(深圳)有限公司 A kind of image processing method, device and storage medium
CN110414001A (en) * 2019-07-18 2019-11-05 腾讯科技(深圳)有限公司 Sentence generation method and device, storage medium and electronic device
CN111797262A (en) * 2020-06-24 2020-10-20 北京小米松果电子有限公司 Poetry generation method and device, electronic equipment and storage medium

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921918B (en) * 2018-07-24 2023-05-30 Oppo广东移动通信有限公司 Video creation method and related device
CN109508423A (en) * 2018-12-14 2019-03-22 平安科技(深圳)有限公司 Source of houses recommended method, device, equipment and storage medium based on semantics recognition
CN109783643A (en) * 2019-01-09 2019-05-21 北京一览群智数据科技有限责任公司 A kind of approximation sentence recommended method and device
CN110298684B (en) * 2019-05-22 2023-06-06 平安科技(深圳)有限公司 Vehicle type matching method and device, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226547A (en) * 2013-04-28 2013-07-31 百度在线网络技术(北京)有限公司 Method and device for producing verse for picture
US20140096018A1 (en) * 2012-09-28 2014-04-03 Interactive Memories, Inc. Methods for Recognizing Digital Images of Persons known to a Customer Creating an Image-Based Project through an Electronic Interface
CN104951554A (en) * 2015-06-29 2015-09-30 浙江大学 Method for matching landscape with verses according with artistic conception of landscape

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7017114B2 (en) * 2000-09-20 2006-03-21 International Business Machines Corporation Automatic correlation method for generating summaries for text documents
CN104008180B (en) * 2014-06-09 2017-04-12 北京奇虎科技有限公司 Association method of structural data with picture, association device thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140096018A1 (en) * 2012-09-28 2014-04-03 Interactive Memories, Inc. Methods for Recognizing Digital Images of Persons known to a Customer Creating an Image-Based Project through an Electronic Interface
CN103226547A (en) * 2013-04-28 2013-07-31 百度在线网络技术(北京)有限公司 Method and device for producing verse for picture
CN104951554A (en) * 2015-06-29 2015-09-30 浙江大学 Method for matching landscape with verses according with artistic conception of landscape

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIN MA et al.: "Multimodal Convolutional Neural Networks for Matching Image and Sentence", 2015 IEEE International Conference on Computer Vision (ICCV) *
郭乔进 et al.: "基于关键词的图像标注综述" (A survey of keyword-based image annotation), 《计算机工程与应用》 (Computer Engineering and Applications) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110297934A (en) * 2019-07-04 2019-10-01 腾讯科技(深圳)有限公司 A kind of image processing method, device and storage medium
CN110297934B (en) * 2019-07-04 2024-03-15 腾讯科技(深圳)有限公司 Image data processing method, device and storage medium
CN110414001A (en) * 2019-07-18 2019-11-05 腾讯科技(深圳)有限公司 Sentence generation method and device, storage medium and electronic device
CN110414001B (en) * 2019-07-18 2023-09-26 腾讯科技(深圳)有限公司 Sentence generation method and device, storage medium and electronic device
CN111797262A (en) * 2020-06-24 2020-10-20 北京小米松果电子有限公司 Poetry generation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN109643332B (en) 2021-02-23
WO2018119593A1 (en) 2018-07-05

Similar Documents

Publication Publication Date Title
CN109643332A (en) A kind of sentence recommended method and device
Pandey et al. FoodNet: Recognizing foods using ensemble of deep networks
CN107742107B (en) Facial image classification method, device and server
CN104090967B (en) Application program recommends method and recommendation apparatus
Bruni et al. Multimodal distributional semantics
Jiang et al. Understanding and predicting interestingness of videos
Jiang et al. Columbia-UCF TRECVID2010 Multimedia Event Detection: Combining Multiple Modalities, Contextual Concepts, and Temporal Matching.
US8660378B2 (en) Image evaluating device for calculating an importance degree of an object and an image, and an image evaluating method, program, and integrated circuit for performing the same
CN108140032A (en) Automatic video frequency is summarized
US20200019628A1 (en) Visual intent triggering for visual search
CN105468596B (en) Picture retrieval method and device
US20170109786A1 (en) System for producing promotional media content and method thereof
CN109829108B (en) Information recommendation method and device, electronic equipment and readable storage medium
CN104423945B (en) A kind of information processing method and electronic equipment
CN110390025A (en) Cover figure determines method, apparatus, equipment and computer readable storage medium
Yu et al. Food image recognition by personalized classifier
CN110046634A (en) The means of interpretation and device of cluster result
CN110096591A (en) Long text classification method, device, computer equipment and storage medium based on bag of words
Ahmed et al. Maximum response deep learning using Markov, retinal & primitive patch binding with GoogLeNet & VGG-19 for large image retrieval
CN107423396A (en) It is a kind of that method is recommended based on the Mashup of function implication relation and cluster
CN110287788A (en) A kind of video classification methods and device
CN109242030A (en) Draw single generation method and device, electronic equipment, computer readable storage medium
Fiallos et al. Detecting topics and locations on Instagram photos
CN111737473A (en) Text classification method, device and equipment
CN109308332A (en) A kind of target user's acquisition methods, device and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant