CN111506753B - Recommendation method, recommendation device, electronic equipment and readable storage medium - Google Patents

Recommendation method, recommendation device, electronic equipment and readable storage medium

Info

Publication number
CN111506753B
CN111506753B (application CN202010158274.9A)
Authority
CN
China
Prior art keywords
training
quality
layer
image
image sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010158274.9A
Other languages
Chinese (zh)
Other versions
CN111506753A (en)
Inventor
信峥
董健
王永康
王兴星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan Liangxin Technology Co ltd
Original Assignee
Hainan Liangxin Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hainan Liangxin Technology Co ltd filed Critical Hainan Liangxin Technology Co ltd
Priority to CN202010158274.9A priority Critical patent/CN111506753B/en
Publication of CN111506753A publication Critical patent/CN111506753A/en
Application granted granted Critical
Publication of CN111506753B publication Critical patent/CN111506753B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • G06F16/535Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides a recommendation method, a recommendation device, an electronic device and a readable storage medium. The recommendation method comprises the following steps: performing first training on a first quality estimation model by using a first image sample; inputting a second image sample into a second quality estimation model to obtain training image information and a second training score of the second image sample, wherein the second quality estimation model shares a processing layer and a first output layer with the first quality estimation model, further comprises a second output layer, and the output of the processing layer, after noise is added, serves as the input of the second output layer; performing second training on the second quality estimation model according to the loss value of the second image sample and the training image information and the loss value of the second sample score and the second training score of the second image sample; and estimating the quality score of an image to be recommended by using the first quality estimation model obtained by the second training, and recommending the image to be recommended according to the quality score. The present disclosure can improve the accuracy of recommendation.

Description

Recommendation method, recommendation device, electronic equipment and readable storage medium
Technical Field
The disclosure relates to the technical field of personalized recommendation, and in particular relates to a recommendation method, a recommendation device, electronic equipment and a readable storage medium.
Background
In the field of personalized image recommendation, the quality of images needs to be evaluated so that images of higher quality can be preferentially recommended to users. The quality of an image is related to its sharpness, its content, and other intrinsic properties of the image.
In the prior art, a pre-trained quality estimation model is used to estimate the quality of an image. Such a model usually needs a large number of image samples for training, and these image samples may be obtained from the network and from applications, so some interference information, i.e., noise, exists in them. Image samples with noise lower the accuracy of the trained quality estimation model, so the recommendation accuracy is poor.
Disclosure of Invention
The present disclosure provides a recommendation method, a recommendation device, an electronic device and a readable storage medium. After a first training is performed on a first quality estimation model, a second training is performed on a second quality estimation model. Because, during the second training, training image information is restored from the noise-added output of the processing layer and a loss value is determined from the gap between the training image information and the second image sample, a noise-aware second training of the first quality estimation model is realized, which helps to improve the accuracy of the first quality estimation model and thus the recommendation accuracy.
According to a first aspect of the present disclosure, there is provided a recommendation method, the method comprising:
performing first training on a first quality estimation model by using a first image sample, wherein the first quality estimation model comprises a processing layer and a first output layer, and the output of the processing layer is the input of the first output layer;
inputting a second image sample into a second quality estimation model to obtain training image information and a second training score of the second image sample, wherein the second quality estimation model shares the processing layer and the first output layer with the first quality estimation model obtained by the first training, the second quality estimation model further comprises a second output layer, the output of the processing layer, after noise is added, serves as the input of the second output layer, the second output layer outputs the training image information of the second image sample, and the first output layer outputs the second training score of the second image sample;
performing second training on the second quality estimation model according to the loss value of the second image sample and the training image information and the loss value of the second sample score and the second training score of the second image sample; and
estimating the quality score of an image to be recommended by using the first quality estimation model obtained by the second training, and recommending the image to be recommended according to the quality score.
According to a second aspect of the present disclosure, there is provided a recommendation device, the device comprising:
a first training module, used for performing first training on a first quality estimation model by using a first image sample, wherein the first quality estimation model comprises a processing layer and a first output layer, and the output of the processing layer is the input of the first output layer;
a second input module, used for inputting a second image sample into a second quality estimation model to obtain training image information and a second training score of the second image sample, wherein the second quality estimation model shares the processing layer and the first output layer with the first quality estimation model obtained by the first training, the second quality estimation model further comprises a second output layer, the output of the processing layer, after noise is added, serves as the input of the second output layer, the second output layer outputs the training image information of the second image sample, and the first output layer outputs the second training score of the second image sample;
a second training module, used for performing second training on the second quality estimation model according to the loss value of the second image sample and the training image information and the loss value of the second sample score and the second training score of the second image sample; and
a quality score prediction module, used for estimating the quality score of an image to be recommended by using the first quality estimation model obtained by the second training and recommending the image to be recommended according to the quality score.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
a processor, a memory and a computer program stored on the memory and executable on the processor, wherein the processor implements the aforementioned recommendation method when executing the program.
According to a fourth aspect of the present disclosure, there is provided a readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the aforementioned recommendation method.
The present disclosure provides a recommendation method, a recommendation device, an electronic device and a readable storage medium. A first image sample may first be used to perform first training on a first quality estimation model; a second image sample is then input into a second quality estimation model to obtain training image information and a second training score of the second image sample, where the second quality estimation model shares the processing layer and the first output layer with the first quality estimation model, further comprises a second output layer, and the output of the processing layer, after noise is added, serves as the input of the second output layer; the second quality estimation model is trained according to the loss value of the second image sample and the training image information and the loss value of the second sample score and the second training score of the second image sample; finally, the quality score of an image to be recommended is estimated with the second quality estimation model, and the image to be recommended is recommended according to the quality score. In the present disclosure, after the first training of the first quality estimation model, the second training is performed on the second quality estimation model. Because the training image information is restored from the noise-added output of the processing layer during the second training, and a loss value is determined from the gap between the training image information and the second image sample, a noise-aware second training of the first quality estimation model is realized, which helps to improve the accuracy of the first quality estimation model and thus the recommendation accuracy.
Drawings
In order to more clearly illustrate the technical solutions of the present disclosure, the drawings that are needed in the description of the present disclosure will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present disclosure, and that other drawings may be obtained according to these drawings without inventive effort to a person of ordinary skill in the art.
FIG. 1 shows a flow chart of steps of a recommendation method of the present disclosure;
FIG. 2 illustrates a schematic diagram of a second quality estimation model of the present disclosure;
FIG. 3 illustrates another structural schematic of a second quality estimation model of the present disclosure;
FIG. 4 shows a block diagram of a recommender of the present disclosure;
fig. 5 shows a block diagram of an electronic device of the present disclosure.
Detailed Description
The following description of the technical solutions in the present disclosure will be made clearly and completely with reference to the accompanying drawings in the present disclosure, and it is apparent that the described embodiments are some, but not all, embodiments of the present disclosure. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
Referring to fig. 1, a flowchart of the steps of the recommendation method of the present disclosure is shown; the method specifically includes the following steps:
Step 101, performing first training on a first quality estimation model by using a first image sample, wherein the first quality estimation model comprises a processing layer and a first output layer, and the output of the processing layer is the input of the first output layer.
The first quality estimation model may be any deep learning model that outputs a predicted value. It comprises a processing layer and a first output layer, where the processing layer performs linear or nonlinear operations on the input information, and the first output layer then outputs a predicted value according to the information output by the processing layer. The training process with the first image sample is as follows: the first image sample is input into the first quality estimation model to obtain a quality score of the first image sample, and the model is trained so that the quality score of the first image sample approximates the sample score of the first image sample.
In embodiments of the present disclosure, the quality score may be a numerical representation of quality in various dimensions, such as CTR (Click-Through Rate) or the sharpness of the image.
It is understood that the first training is a first training of the first quality estimation model.
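As an informal illustration only, the following PyTorch sketch shows one possible shape of the first quality estimation model described in step 101 (a processing layer followed by a first output layer) and a single first-training iteration that drives the predicted quality score toward the sample score. All class, function and parameter names, the layer sizes, and the use of a mean-squared-error loss are assumptions for illustration, not details taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FirstQualityModel(nn.Module):
    """First quality estimation model: a processing layer followed by a first output layer."""
    def __init__(self, in_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Processing layer: linear/non-linear operations on the input information.
        self.processing = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # First output layer: outputs a predicted quality score from the processed features.
        self.first_output = nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.processing(x)                    # output of the processing layer
        return self.first_output(h).squeeze(-1)   # predicted quality score

def first_training_step(model: FirstQualityModel, optimizer, x1, score1):
    """One first-training iteration: make the predicted score approximate the sample score."""
    pred = model(x1)
    loss = F.mse_loss(pred, score1)   # any regression-style loss may be used here
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```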
Step 102, inputting a second image sample into a second quality estimation model to obtain training image information and a second training score of the second image sample; the second quality estimation model shares the processing layer and the first output layer with the first quality estimation model obtained by the first training, the second quality estimation model further comprises a second output layer, the output of the processing layer, after noise is added, serves as the input of the second output layer, the second output layer outputs the training image information of the second image sample, and the first output layer outputs the second training score of the second image sample.
The second quality estimation model contains the first quality estimation model. As shown in the structural schematic diagram of fig. 2, the second quality estimation model includes the processing layer and the first output layer of the first quality estimation model, and the first output layer is used during training to predict the quality score of the second image sample, which is called the second training score of the second image sample. In addition, a second output layer is added to the second quality estimation model; it is used during training to recover an image from the noise-added information, thereby obtaining the training image information, and is equivalent to an inverse operation of the processing layer.
In one embodiment of the present disclosure, the noise may take any form: it may be random noise, or noise conforming to a distribution function such as a normal distribution or a uniform distribution. Embodiments of the present disclosure are not limited thereto.
It will be appreciated that the second image sample is an image for training the second quality prediction model, and the first image sample is an image for training the first quality prediction model, and the second image sample may be the same as or different from the first image sample.
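Continuing the illustrative sketch above, a second quality estimation model can share the processing layer and the first output layer with the first model and add a second output layer that receives the noise-corrupted output of the processing layer. The Gaussian noise, layer sizes and names below are assumptions; the patent allows any form of noise.

```python
import torch
import torch.nn as nn

class SecondQualityModel(nn.Module):
    """Second quality estimation model: shares the processing layer and first output layer
    with the first model and adds a second output layer fed by the noise-corrupted
    output of the processing layer."""
    def __init__(self, first_model, in_dim: int, hidden_dim: int = 128, noise_std: float = 0.1):
        super().__init__()
        self.processing = first_model.processing        # shared processing layer
        self.first_output = first_model.first_output    # shared first output layer
        # Second output layer: roughly an inverse of the processing layer, recovering
        # the image information from the noise-corrupted features.
        self.second_output = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, in_dim),
        )
        self.noise_std = noise_std

    def forward(self, x2: torch.Tensor):
        h = self.processing(x2)                              # processing-layer output
        score = self.first_output(h).squeeze(-1)             # second training score
        noisy_h = h + torch.randn_like(h) * self.noise_std   # add noise (Gaussian here)
        recon = self.second_output(noisy_h)                  # training image information
        return recon, score
```

Because the shared layers are assigned by reference, training this second model also updates the parameters used by the first model, which is consistent with later using the first quality estimation model obtained by the second training for estimation.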
Step 103, performing second training on the second quality estimation model according to the loss value of the second image sample and the training image information and the loss value of the second sample score and the second training score of the second image sample.
Specifically, the loss value of the second image sample and the training image information represents the gap between the second image sample and the training image information, and the loss value of the second sample score and the second training score represents the gap between the second sample score and the second training score; the larger the gap, the larger the loss value, and the smaller the gap, the smaller the loss value. Each loss value can be calculated with an existing loss function selected according to actual requirements.
Based on these loss values, the training process of the second quality estimation model is as follows: a comprehensive loss value is determined from the two loss values, and the parameters of the second quality estimation model are adjusted according to the gradient of the comprehensive loss value with respect to those parameters, so that the comprehensive loss value after the next iteration is smaller than that of the current iteration, until the comprehensive loss value no longer decreases over multiple iterations.
The comprehensive loss value may be the sum of the two loss values, or a sum weighted with certain weights, where the weights may be set according to the requirements of the actual application. For example, if higher prediction accuracy of the quality score is desired, a larger weight may be set for the loss value of the second sample score and the second training score, and a smaller weight for the loss value of the second image sample and the training image information.
It should be noted that the second quality estimation model is trained on the basis of the first quality estimation model, that is: after the first quality estimation model is trained according to step 101, the parameters of the processing layer in the first quality estimation model are used as the initial parameters of the processing layer in the second quality estimation model, the parameters of the first output layer in the first quality estimation model are used as the initial parameters of the first output layer in the second quality estimation model, and the initial parameters of the second output layer in the second quality estimation model can be set according to empirical values or set randomly.
It can be appreciated that the second training is a training of the second quality estimation model, which is equivalent to a second training of the first quality estimation model by combining noise and the second output layer.
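A single second-training iteration might then look as follows, here combining the two loss values by a plain sum (the weighted combinations described above and in sub-steps A1 to A5 below are equally possible). Function names and the mean-squared-error losses are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def second_training_step(model2, optimizer, x2, score2):
    """One second-training iteration combining the two loss values by a plain sum."""
    recon, pred = model2(x2)
    loss_recon = F.mse_loss(recon, x2)     # loss of the second image sample vs. the training image information
    loss_score = F.mse_loss(pred, score2)  # loss of the second sample score vs. the second training score
    total = loss_recon + loss_score        # comprehensive loss value (a weighted sum may be used instead)
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()
```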
Step 104, estimating the quality score of the image to be recommended by using the first quality estimation model obtained by the second training, and recommending the image to be recommended according to the quality score.
The image to be recommended can be any image provided by the personalized recommendation platform or an image related to the search word input by the user. Embodiments of the present disclosure are not limited thereto.
The process of estimating the quality score of the image to be recommended is as follows: the image to be recommended is input into the first quality estimation model, and the first output layer of the first quality estimation model outputs the quality score of the image to be recommended.
In practical applications, recommendation is usually performed over a plurality of images to be recommended: the quality score of each image to be recommended is estimated, and the images are either arranged in descending order of quality score and recommended to the user, or the images to be recommended whose quality scores are greater than or equal to a preset quality-score threshold are obtained and recommended to the user.
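A minimal sketch of this recommendation step could look as follows: the trained model scores each image to be recommended, and the images are either ranked in descending order of quality score or filtered by a preset threshold. Function and parameter names are assumptions for illustration.

```python
import torch

@torch.no_grad()
def recommend(model, candidates: torch.Tensor, top_k: int = 10, threshold=None):
    """Score candidate images and return the indices (and scores) to recommend."""
    scores = model(candidates)  # quality score of each image to be recommended
    if threshold is not None:
        # Keep only images whose quality score reaches the preset threshold.
        keep = (scores >= threshold).nonzero(as_tuple=True)[0]
        order = keep[scores[keep].argsort(descending=True)]
    else:
        # Otherwise recommend the top-k images in descending order of quality score.
        order = scores.argsort(descending=True)[:top_k]
    return order.tolist(), scores[order].tolist()
```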
Optionally, in another embodiment of the present disclosure, the step 103 includes sub-steps A1 to A5:
And a sub-step A1 of determining a first loss value of the second image sample according to the second image sample and the training image information, and determining a second loss value of the second image sample according to a second sample score and a second training score of the second image sample.
The first loss value is a loss value of the second image sample and training image information, and the second loss value is a loss value of the second sample score and the second training score.
And a sub-step A2, inputting the first loss value of the second image sample into a monotonically decreasing function to obtain the weight of the second image sample.
The monotonically decreasing function is used to convert the first loss value into a weight while guaranteeing the following relationship: the larger the first loss value, the smaller the weight; the smaller the first loss value, the larger the weight. The monotonically decreasing function may be any function that is decreasing over the interval of values greater than 0, for example an exponential function, in which case the weight of the second image sample may be obtained as:

W_i = exp(-LOSS1_i)    (1)

where W_i is the weight of the i-th second image sample and LOSS1_i is the first loss value of the i-th second image sample.
And a sub-step A3, weighting a second loss value of the second image sample by adopting the weight of the second image sample to obtain a weighted loss value of the second image sample.
In one embodiment of the present disclosure, the weighted loss value may be calculated with reference to the following formula:

WLOSS_i = W_i · LOSS2_i    (2)

where WLOSS_i is the weighted loss value of the i-th second image sample and LOSS2_i is the second loss value of the i-th second image sample.
And a sub-step A4, determining the comprehensive loss value of the second image sample according to the weighted loss value and the first loss value of the second image sample.
In one embodiment of the present disclosure, the comprehensive loss value may be calculated with reference to the following formula:

TLOSS_i = WLOSS_i + LOSS1_i    (3)

where TLOSS_i is the comprehensive loss value of the i-th second image sample.
And a sub-step A5, performing second training on the second quality estimation model according to the comprehensive loss value of the second image sample.
It should be noted that, in general, a large number of second image samples are used to perform the second training on the second quality estimation model. For example, in each iteration a batch of second image samples may be input into the second quality estimation model, the comprehensive loss value of each second image sample is determined, the comprehensive loss values are averaged to obtain an average loss value, and the second training is performed on the second quality estimation model according to the average loss value. The average loss value can be calculated according to the following formula:

ALOSS = (1/I) · Σ_{i=1}^{I} TLOSS_i    (4)

where ALOSS is the average loss value and I is the number of second image samples used in each iteration. Suitable loss functions for the first loss value LOSS1_i and the second loss value LOSS2_i can be selected according to practical application requirements. For example, the first loss value LOSS1_i can be calculated with the following sum-of-squares loss function:

LOSS1_i = Σ_{n=1}^{N} (CH_{i,n} - CH'_{i,n})²    (5)

where N is the number of pixels contained in the second image sample (the second image sample and the training image information contain the same number of pixels), CH_{i,n} is the value of the n-th pixel in the i-th second image sample, and CH'_{i,n} is the value of the n-th pixel in the training image information.
As another example, the second loss value LOSS2_i can be calculated with the following cross-entropy loss function:

LOSS2_i = -[y_i · log(y'_i) + (1 - y_i) · log(1 - y'_i)]    (6)

where y_i is the second sample score corresponding to the i-th second image sample and y'_i is the second training score of the i-th second image sample.
It can be understood that the process of performing the second training on the second quality estimation model according to the average loss value is as follows: the parameters of the second quality estimation model are adjusted according to the gradient of the average loss value with respect to those parameters, so that the average loss value after the next iteration is smaller than that of the current iteration, until the average loss value no longer decreases over multiple iterations.
Embodiments of the present disclosure may use the first loss value to weight the second loss value so as to reduce the impact of noise on the comprehensive loss value, which helps to improve the accuracy of the comprehensive loss value.
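The following sketch puts sub-steps A1 to A5 together for one batch, assuming the exponential weight W_i = exp(-LOSS1_i) of formula (1), a per-sample sum-of-squares reconstruction loss as in formula (5), and the cross-entropy score loss of formula (6) with scores in [0, 1] (e.g., CTR-like scores). Detaching the weight from the gradient is an additional assumption not stated in the patent.

```python
import torch
import torch.nn.functional as F

def weighted_second_training_step(model2, optimizer, x2, score2):
    """Sub-steps A1-A5 for one batch of second image samples."""
    recon, pred = model2(x2)
    # A1: first loss (reconstruction, sum of squares per sample) and second loss (score, cross-entropy per sample).
    loss1 = ((recon - x2) ** 2).sum(dim=1)                                      # LOSS1_i
    loss2 = F.binary_cross_entropy_with_logits(pred, score2, reduction="none")  # LOSS2_i
    # A2: weight from a monotonically decreasing function of the first loss
    #     (detached here so the weight acts as a constant -- an assumption).
    w = torch.exp(-loss1.detach())                                              # W_i
    # A3: weighted second loss; A4: comprehensive loss; A5: batch average and parameter update.
    wloss = w * loss2                                                           # WLOSS_i
    tloss = wloss + loss1                                                       # TLOSS_i
    aloss = tloss.mean()                                                        # ALOSS
    optimizer.zero_grad()
    aloss.backward()
    optimizer.step()
    return aloss.item()
```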
Optionally, in another embodiment of the present disclosure, the second image sample is replaced with second feature information of the commodity, the second feature information includes a second discrete feature and a second continuous feature, the training image information is replaced with third feature information of the commodity, the third feature information includes a third discrete feature and a third continuous feature, and the substep A1 includes substeps B1 to B3:
And a sub-step B1 of inputting the second discrete feature and the third discrete feature into a discrete loss function to obtain a discrete loss value of the second feature information.
The discrete loss function calculates the loss value for discretely valued data, giving higher accuracy for such data.
And a sub-step B2 of inputting the second continuous feature and the third continuous feature into a continuous loss function to obtain a continuous loss value of the second feature information.
The continuous loss function calculates the loss value for continuously valued data, giving higher accuracy for such data.
And a sub-step B3 of determining a first loss value of the second characteristic information according to the discrete loss value and the continuous loss value.
In particular, the first loss value may be a sum of a discrete loss value and a continuous loss value.
Embodiments of the present disclosure may calculate discrete and continuous loss values for discrete and continuous features, respectively, helping to improve the accuracy of the first loss value.
Optionally, in another embodiment of the present disclosure, the first image sample is replaced with first feature information of the commodity, the first feature information corresponds to a first sample score, and the step 101 includes sub-steps C1 to C3:
And a sub-step C1 of inputting the first feature information into the first quality estimation model to obtain a first training score of the first feature information.
The first training score is the quality score of the first feature information.
And a sub-step C2 of determining a loss value of the first feature information according to the first training score of the first feature information and the first sample score of the first feature information.
The loss value of the first feature information may be calculated with any existing loss function, for example a cross-entropy loss function, a sum-of-squares loss function, or an absolute-value loss function.
And a sub-step C3 of performing first training on the first quality estimation model through the loss value of the first feature information.
Specifically, the parameters of the first quality estimation model are adjusted according to the gradient of the loss value of the first feature information, so that the loss value of the first feature information after the next iteration is smaller than that of the current iteration.
Embodiments of the present disclosure can perform the first training on the first quality estimation model using the first feature information, so that the second training of the second quality estimation model can be performed on the basis of the first quality estimation model.
Optionally, in another embodiment of the present disclosure, the first quality estimation model is a DNN model, and the processing layer comprises an input layer and a hidden layer: the input of the DNN model is the input of the input layer, the output of the input layer is the input of the hidden layer, the output of the hidden layer is the input of the first output layer, and the output of the first output layer is the output of the first quality estimation model.
The DNN (Deep Neural Network) model is a commonly used deep learning model whose main structure includes an input layer, a plurality of hidden layers and an output layer. Referring to the schematic structural diagram of the second quality estimation model shown in fig. 3, the DNN serves as the first quality estimation model.
Embodiments of the present disclosure may employ a commonly used DNN model as the first quality estimation model.
Optionally, in another embodiment of the present disclosure, the second output layer is a denoising layer, the first quality prediction model and the second quality prediction model share the input layer and the hidden layer, an input of the second quality prediction model is used as an input of the input layer, an output of the input layer is used as an input of the hidden layer, an output of the hidden layer is used as an input of the denoising layer after adding noise, and an output of the first output layer is an output of the second quality prediction model.
The denoising layer is used for removing the noise so as to restore the image information. In the second quality estimation model shown in fig. 3, the denoising layer is the second output layer and outputs the restored image, i.e., the training image information, during training; the processing layer and the denoising layer in fig. 3 form a DAE (Denoising AutoEncoder) network.
Optionally, in another embodiment of the present disclosure, the discrete loss function is a sum of squares loss function and the continuous loss function is a cross entropy loss function.
In one embodiment of the present disclosure, when the discrete loss function is a sum-of-squares loss function, the discrete loss value may be obtained as:

DLOSS_i = Σ_{n1=1}^{N1} (D_{i,n1} - D'_{i,n1})²    (7)

where DLOSS_i is the discrete loss value of the i-th second feature information, N1 is the number of discrete features contained in each second feature information, D_{i,n1} is the n1-th discrete feature in the i-th second feature information, and D'_{i,n1} is the n1-th discrete feature in the i-th third feature information.
When the continuous loss function is a cross-entropy loss function, the continuous loss value may be obtained as:

CLOSS_i = -Σ_{n2=1}^{N2} [C_{i,n2} · log(C'_{i,n2}) + (1 - C_{i,n2}) · log(1 - C'_{i,n2})]    (8)

where CLOSS_i is the continuous loss value of the i-th second feature information, N2 is the number of continuous features contained in each second feature information, C_{i,n2} is the n2-th continuous feature in the i-th second feature information, and C'_{i,n2} is the n2-th continuous feature in the i-th third feature information.
Embodiments of the present disclosure may calculate discrete loss values using a sum-of-squares loss function and continuous loss values using a cross-entropy loss function.
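For this commodity-feature variant, the discrete and continuous parts of the first loss value could be computed as in formulas (7) and (8). The sketch below assumes the continuous features take values in (0, 1); tensor and function names are illustrative.

```python
import torch

def feature_first_loss(d2, d3, c2, c3, eps: float = 1e-7):
    """First loss value for commodity feature information.
    d2/c2: second discrete/continuous features; d3/c3: third discrete/continuous features,
    each of shape [batch, num_features]."""
    # B1: discrete loss value with a sum-of-squares loss (DLOSS_i).
    dloss = ((d2 - d3) ** 2).sum(dim=1)
    # B2: continuous loss value with a cross-entropy loss (CLOSS_i); assumes values in (0, 1).
    c3 = c3.clamp(eps, 1 - eps)
    closs = -(c2 * torch.log(c3) + (1 - c2) * torch.log(1 - c3)).sum(dim=1)
    # B3: first loss value of the second feature information.
    return dloss + closs
```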
In summary, the present disclosure provides a recommendation method comprising: performing first training on a first quality estimation model by using a first image sample, wherein the first quality estimation model comprises a processing layer and a first output layer, and the output of the processing layer is the input of the first output layer; inputting a second image sample into a second quality estimation model to obtain training image information and a second training score of the second image sample, wherein the second quality estimation model shares the processing layer and the first output layer with the first quality estimation model obtained by the first training, the second quality estimation model further comprises a second output layer, the output of the processing layer, after noise is added, serves as the input of the second output layer, the second output layer outputs the training image information of the second image sample, and the first output layer outputs the second training score of the second image sample; performing second training on the second quality estimation model according to the loss value of the second image sample and the training image information and the loss value of the second sample score and the second training score of the second image sample; and estimating the quality score of an image to be recommended by using the first quality estimation model obtained by the second training, and recommending the image to be recommended according to the quality score. In the present disclosure, after the first training of the first quality estimation model, the second training is performed on the second quality estimation model. Because the training image information is restored from the noise-added output of the processing layer during the second training, and a loss value is determined from the gap between the training image information and the second image sample, a noise-aware second training of the first quality estimation model is realized, which helps to improve the accuracy of the first quality estimation model and thus the recommendation accuracy.
Referring to fig. 4, there is shown a block diagram of the recommending apparatus of the present disclosure, concretely as follows:
a first training module 201, configured to perform a first training on a first quality prediction model by using a first image sample, where the first quality prediction model includes: a processing layer and a first output layer, the output of the processing layer being the input of the first output layer.
A second input module 202, configured to input a second image sample to a second quality prediction model, to obtain training image information and a second training score of the second image sample; the second quality prediction model shares the processing layer and the first output layer with the first quality prediction model obtained by the first training, the second quality prediction model further comprises a second output layer, the output of the processing layer is used as the input of the second output layer after noise is added, the second output layer outputs training image information of the second image sample, and the first output layer outputs a second training score of the second image sample.
And a second training module 203, configured to perform a second training on the second quality prediction model according to the loss value of the second image sample and the training image information, and the loss value of the second sample score and the second training score of the second image sample.
And the quality score prediction module 204 is configured to estimate a quality score of an image to be recommended by using the first quality estimation model obtained by the second training, and recommend the image to be recommended according to the quality score.
Optionally, in another embodiment of the present disclosure, the second training module 203 includes a loss value determination sub-module, a weight calculation sub-module, a loss value weighting sub-module, a comprehensive loss value calculation sub-module, and a second training sub-module:
a loss value determination submodule, configured to determine a first loss value of the second image sample according to the second image sample and the training image information, and determine a second loss value of the second image sample according to a second sample score and a second training score of the second image sample.
And the weight calculation sub-module is used for inputting the first loss value of the second image sample into a monotonically decreasing function to obtain the weight of the second image sample.
And the loss value weighting submodule is used for weighting the second loss value of the second image sample by adopting the weight of the second image sample to obtain a weighted loss value of the second image sample.
And the comprehensive loss value calculation submodule is used for determining the comprehensive loss value of the second image sample according to the weighted loss value and the first loss value of the second image sample.
And the second training sub-module is used for carrying out second training on the second quality estimation model according to the comprehensive loss value of the second image sample.
Optionally, in another embodiment of the present disclosure, the second image sample is replaced with second feature information of the commodity, the second feature information includes a second discrete feature and a second continuous feature, the training image information is replaced with third feature information of the commodity, the third feature information includes a third discrete feature and a third continuous feature, and the loss value determining submodule includes a discrete loss value determining unit, a continuous loss value determining unit, and a first loss value determining unit:
and the discrete loss value determining unit is used for inputting the second discrete feature and the third discrete feature into a discrete loss function to obtain the discrete loss value of the second feature information.
And the continuous loss value determining unit is used for inputting the second continuous feature and the third continuous feature into a continuous loss function to obtain the continuous loss value of the second feature information.
And the first loss value determining unit is used for determining a first loss value of the second characteristic information according to the discrete loss value and the continuous loss value.
Optionally, in another embodiment of the disclosure, the first image sample is replaced with first feature information of the commodity, the first feature information corresponds to a first sample score, and the first training module includes a first training score prediction sub-module, a first loss value determination sub-module, and a first training sub-module:
and the first training score prediction sub-module is used for inputting the first characteristic information into a first quality prediction model to obtain a first training score of the first characteristic information.
And the first loss value determining submodule is used for determining the loss value of the first characteristic information according to the first training score of the first characteristic information and the first sample score of the first characteristic information.
And the first training sub-module is used for carrying out first training on the first quality estimation model through the loss value of the first characteristic information.
Optionally, in another embodiment of the present disclosure, the first quality prediction model is a DNN model, the processing layer includes an input layer and a hidden layer, the input of the DNN model is the input of the input layer, the output of the input layer is the input of the hidden layer, the output of the hidden layer is the input of the first output layer, and the output of the first output layer is the output of the first quality prediction model.
Optionally, in another embodiment of the present disclosure, the second output layer is a denoising layer, the first quality prediction model and the second quality prediction model share the input layer and the hidden layer, an input of the second quality prediction model is used as an input of the input layer, an output of the input layer is used as an input of the hidden layer, an output of the hidden layer is used as an input of the denoising layer after adding noise, and an output of the first output layer is an output of the second quality prediction model.
Optionally, in another embodiment of the present disclosure, the discrete loss function is a sum of squares loss function and the continuous loss function is a cross entropy loss function.
In summary, the present disclosure provides a recommendation device comprising: a first training module, used for performing first training on a first quality estimation model by using a first image sample, wherein the first quality estimation model comprises a processing layer and a first output layer, and the output of the processing layer is the input of the first output layer; a second input module, used for inputting a second image sample into a second quality estimation model to obtain training image information and a second training score of the second image sample, wherein the second quality estimation model shares the processing layer and the first output layer with the first quality estimation model obtained by the first training, the second quality estimation model further comprises a second output layer, the output of the processing layer, after noise is added, serves as the input of the second output layer, the second output layer outputs the training image information of the second image sample, and the first output layer outputs the second training score of the second image sample; a second training module, used for performing second training on the second quality estimation model according to the loss value of the second image sample and the training image information and the loss value of the second sample score and the second training score of the second image sample; and a quality score prediction module, used for estimating the quality score of an image to be recommended by using the first quality estimation model obtained by the second training and recommending the image to be recommended according to the quality score. In the present disclosure, after the first training of the first quality estimation model, the second training is performed on the second quality estimation model. Because the training image information is restored from the noise-added output of the processing layer during the second training, and a loss value is determined from the gap between the training image information and the second image sample, a noise-aware second training of the first quality estimation model is realized, which helps to improve the accuracy of the first quality estimation model and thus the recommendation accuracy.
The device embodiments of the present disclosure may refer to detailed descriptions of method embodiments, and are not described herein.
The present disclosure also provides an electronic device, referring to fig. 5, comprising: a processor 301, a memory 302 and a computer program 3021 stored on the memory 302 and executable on the processor, wherein the processor 301 implements the recommendation method of the previous embodiments when executing the program.
The present disclosure also provides a readable storage medium; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the recommendation method of the previous embodiments.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The structure required to construct such a system is apparent from the description above. In addition, the present disclosure is not directed to any particular programming language. It will be appreciated that the disclosure described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the enablement and best mode of the present disclosure.
In the description provided herein, numerous specific details are set forth. However, it is understood that the present disclosure may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the disclosure, various features of the disclosure are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed disclosure requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this disclosure.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Various component embodiments of the present disclosure may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in the recommendation device according to the present disclosure may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present disclosure may also be implemented as a device or apparatus program for performing part or all of the methods described herein. Such a program embodying the present disclosure may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the disclosure, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The disclosure may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
The foregoing description of the preferred embodiments of the present disclosure is not intended to limit the disclosure, but is intended to cover any modifications, equivalents, and alternatives falling within the spirit and principles of the present disclosure.
The foregoing is merely specific embodiments of the disclosure, but the protection scope of the disclosure is not limited thereto. Any changes or substitutions that a person skilled in the art can readily conceive of within the technical scope of the disclosure shall be covered by the protection scope of the disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A recommendation method, the method comprising:
performing first training on a first quality pre-estimation model by adopting a first image sample, wherein the first quality pre-estimation model comprises: a processing layer and a first output layer, the output of the processing layer being the input of the first output layer;
inputting a second image sample into a second quality estimation model to obtain training image information and a second training score of the second image sample; the second quality prediction model shares the processing layer and the first output layer with the first quality prediction model obtained by the first training, the second quality prediction model further comprises a second output layer, the output of the processing layer is used as the input of the second output layer after noise is added, the second output layer outputs training image information of the second image sample, and the first output layer outputs a second training score of the second image sample;
performing a second training on the second quality estimation model according to the loss values of the second image sample and the training image information and the loss values of a second sample score and a second training score of the second image sample;
and predicting the quality score of the image to be recommended by adopting a second quality prediction model obtained by the second training, and recommending the image to be recommended according to the quality score.
2. The method of claim 1, wherein the step of performing a second training on the second quality prediction model based on the loss values of the second image sample and the training image information, and the loss values of the second sample score and the second training score of the second image sample, comprises:
Determining a first loss value of the second image sample according to the second image sample and the training image information, and determining a second loss value of the second image sample according to a second sample score and a second training score of the second image sample;
inputting the first loss value of the second image sample into a monotonically decreasing function to obtain the weight of the second image sample;
weighting a second loss value of the second image sample by adopting the weight of the second image sample to obtain a weighted loss value of the second image sample;
determining a comprehensive loss value of the second image sample according to the weighted loss value and the first loss value of the second image sample;
and performing second training on the second quality estimation model according to the comprehensive loss value of the second image sample.
3. The method of claim 2, wherein the second image sample is replaced with second feature information of the commodity, the second feature information including a second discrete feature and a second continuous feature, the training image information is replaced with third feature information of the commodity, the third feature information including a third discrete feature and a third continuous feature, and the step of determining a first loss value of the second feature information based on the second discrete feature and the second continuous feature, the third discrete feature, and the third continuous feature comprises:
inputting the second discrete feature and the third discrete feature into a discrete loss function to obtain a discrete loss value of the second feature information;
inputting the second continuous feature and the third continuous feature into a continuous loss function to obtain a continuous loss value of the second feature information;
and determining a first loss value of the second characteristic information according to the discrete loss value and the continuous loss value.
4. A method according to any one of claims 1 to 3, wherein the first image sample is replaced with first feature information of the commodity, the first feature information corresponding to a first sample score, the step of first training a first quality prediction model using the first feature information comprising:
inputting the first characteristic information into a first quality estimation model to obtain a first training score of the first characteristic information;
determining a loss value of the first characteristic information according to a first training score of the first characteristic information and a first sample score of the first characteristic information;
and performing first training on the first quality estimation model through the loss value of the first characteristic information.
5. The method of claim 1, wherein the first quality prediction model is a DNN model, the processing layer comprises an input layer and a hidden layer, the input of the DNN model is the input of the input layer, the output of the input layer is the input of the hidden layer, the output of the hidden layer is the input of the first output layer, and the output of the first output layer is the output of the first quality prediction model.
6. The method of claim 5, wherein the second output layer is a denoising layer, the first quality prediction model and the second quality prediction model share the input layer and the concealment layer, an input of the second quality prediction model is taken as an input of the input layer, an output of the input layer is taken as an input of the concealment layer, an output of the concealment layer is taken as an input of the denoising layer after adding noise, and an output of the first output layer is taken as an output of the second quality prediction model.
7. A method according to claim 3, wherein the discrete loss function is a sum of squares loss function and the continuous loss function is a cross entropy loss function.
8. A recommendation device, the device comprising:
the first training module is used for performing first training on a first quality prediction model using a first image sample, the first quality prediction model comprising: a processing layer and a first output layer, the output of the processing layer being the input of the first output layer;
the second input module is used for inputting a second image sample into a second quality prediction model to obtain training image information and a second training score of the second image sample; the second quality prediction model shares the processing layer and the first output layer with the first quality prediction model obtained by the first training, the second quality prediction model further comprises a second output layer, the output of the processing layer is used as the input of the second output layer after noise is added, the second output layer outputs the training image information of the second image sample, and the first output layer outputs the second training score of the second image sample;
the second training module is used for performing second training on the second quality prediction model according to the loss value between the second image sample and the training image information and the loss value between a second sample score of the second image sample and the second training score;
and the quality score prediction module is used for predicting the quality score of an image to be recommended using the second quality prediction model obtained by the second training, and recommending the image to be recommended according to the quality score.
9. An electronic device, comprising:
a processor, a memory, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the recommendation method according to any one of claims 1-7 when executing the program.
10. A readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the recommendation method according to any one of claims 1-7.
CN202010158274.9A 2020-03-09 2020-03-09 Recommendation method, recommendation device, electronic equipment and readable storage medium Active CN111506753B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010158274.9A CN111506753B (en) 2020-03-09 2020-03-09 Recommendation method, recommendation device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010158274.9A CN111506753B (en) 2020-03-09 2020-03-09 Recommendation method, recommendation device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111506753A CN111506753A (en) 2020-08-07
CN111506753B (en) 2023-09-12

Family

ID=71877665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010158274.9A Active CN111506753B (en) 2020-03-09 2020-03-09 Recommendation method, recommendation device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111506753B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633425B (en) * 2021-03-11 2021-05-11 腾讯科技(深圳)有限公司 Image classification method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109002792A (en) * 2018-07-12 2018-12-14 西安电子科技大学 SAR image change detection based on layering multi-model metric learning
CN109308696A (en) * 2018-09-14 2019-02-05 西安电子科技大学 Non-reference picture quality appraisement method based on hierarchy characteristic converged network
CN110766052A (en) * 2019-09-20 2020-02-07 北京三快在线科技有限公司 Image display method, evaluation model generation device and electronic equipment
CN110807757A (en) * 2019-08-14 2020-02-18 腾讯科技(深圳)有限公司 Image quality evaluation method and device based on artificial intelligence and computer equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180121733A1 (en) * 2016-10-27 2018-05-03 Microsoft Technology Licensing, Llc Reducing computational overhead via predictions of subjective quality of automated image sequence processing

Also Published As

Publication number Publication date
CN111506753A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
US20190026609A1 (en) Personalized Digital Image Aesthetics in a Digital Medium Environment
CN112000819A (en) Multimedia resource recommendation method and device, electronic equipment and storage medium
CN111260020B (en) Convolutional neural network calculation method and device
CN110096617B (en) Video classification method and device, electronic equipment and computer-readable storage medium
US20230004608A1 (en) Method for content recommendation and device
CN115731505B (en) Video salient region detection method and device, electronic equipment and storage medium
Shelke et al. An improved anti-forensics JPEG compression using least cuckoo search algorithm
CN113435430B (en) Video behavior identification method, system and equipment based on self-adaptive space-time entanglement
CN111724370A (en) Multi-task non-reference image quality evaluation method and system based on uncertainty and probability
CN113537630A (en) Training method and device of business prediction model
CN111506753B (en) Recommendation method, recommendation device, electronic equipment and readable storage medium
Wang et al. Data quality-aware mixed-precision quantization via hybrid reinforcement learning
WO2022193469A1 (en) System and method for ai model watermarking
CN117478978B (en) Method, system and equipment for generating movie video clips through texts
CN113160042B (en) Image style migration model training method and device and electronic equipment
CN112801890A (en) Video processing method, device and equipment
US6813390B2 (en) Scalable expandable system and method for optimizing a random system of algorithms for image quality
CN113014928B (en) Compensation frame generation method and device
US20130211803A1 (en) Method and device for automatic prediction of a value associated with a data tuple
CN113891069A (en) Video quality assessment method, device and equipment
Masood et al. Intelligent noise detection and filtering using neuro-fuzzy system
EP1433134A2 (en) Scalable expandable system and method for optimizing a random system of algorithms for image quality
CN114727107B (en) Video processing method, device, equipment and medium
KR102561613B1 (en) Method and device for denoising image with noise by using deep image prior which has been applied with stochastic temporal ensembling and optimized stopping timing automatic decision algorithm
CN112200234B (en) Method and device for preventing model stealing in model classification process

Legal Events

Date Code Title Description
PB01 Publication

SE01 Entry into force of request for substantive examination

TA01 Transfer of patent application right
Effective date of registration: 20220324
Address after: 571924 Room 302, building b-20, Hainan Ecological Software Park, west of Meilun South Road, Laocheng town economic and Technological Development Zone, Chengmai County, Hainan Province
Applicant after: Hainan Liangxin Technology Co.,Ltd.
Address before: 100083 2106-030, 9 North Fourth Ring Road, Haidian District, Beijing
Applicant before: BEIJING SANKUAI ONLINE TECHNOLOGY Co.,Ltd.

GR01 Patent grant