CN113627419A - Interest region evaluation method, device, equipment and medium - Google Patents

Interest region evaluation method, device, equipment and medium

Info

Publication number
CN113627419A
CN113627419A (application number CN202010381698.1A)
Authority
CN
China
Prior art keywords
image
interest
evaluation result
region
evaluated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010381698.1A
Other languages
Chinese (zh)
Inventor
夏德国
张刘辉
赵辉
蒋冰
白红霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Original Assignee
Baidu Online Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baidu Online Network Technology Beijing Co Ltd filed Critical Baidu Online Network Technology Beijing Co Ltd
Priority to CN202010381698.1A
Publication of CN113627419A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses an interest region evaluation method, device, equipment and medium, relating to electronic map technology. The method includes: determining an image to be evaluated of the interest region; determining an image evaluation result of the image to be evaluated in at least one dimension; and evaluating the interest region according to the image evaluation result of the image to be evaluated in the at least one dimension. Compared with manual evaluation of interest regions, the method and device improve both the objectivity and the efficiency of interest region evaluation.

Description

Interest region evaluation method, device, equipment and medium
Technical Field
The embodiment of the application relates to a computer technology, in particular to an electronic map technology, and particularly relates to a method, a device, equipment and a medium for evaluating an interest area.
Background
In recent years, with rising living standards and increasingly convenient modes of travel, people travel more and more often. Mobile map navigation applications and travel applications play an increasingly important role in helping users select a travel Area of Interest (AOI, a regional physical entity on a map). By presenting information about interest regions, such as region evaluations and the surrounding environment, these applications provide auxiliary decision references for users and bring great convenience to travel route planning.
Currently, interest regions are evaluated mainly through on-site inspection by a small number of professional assessors, so the objectivity of the evaluation results is poor and the evaluation efficiency is low.
Disclosure of Invention
The embodiment of the application discloses a method, a device, equipment and a medium for evaluating an interest region, so as to improve the objectivity and efficiency of the evaluation of the interest region.
In a first aspect, an embodiment of the present application discloses a method for evaluating a region of interest, including:
determining an image to be evaluated of the interest area;
determining an image evaluation result of the image to be evaluated under at least one dimension;
and evaluating the interest region according to the image evaluation result of the image to be evaluated in the at least one dimension.
In a second aspect, an embodiment of the present application further discloses a device for evaluating a region of interest, including:
the image to be evaluated determining module is used for determining an image to be evaluated of the interest area;
the image evaluation result determining module is used for determining an image evaluation result of the image to be evaluated under at least one dimension;
and the interest region evaluation module is used for evaluating the interest region according to the image evaluation result of the image to be evaluated in the at least one dimension.
In a third aspect, an embodiment of the present application further discloses an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a region of interest assessment method as described in any of the embodiments of the present application.
In a fourth aspect, embodiments of the present application further disclose a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method for region of interest assessment according to any of the embodiments of the present application.
According to the technical scheme of the embodiment of the application, the interest area is evaluated by utilizing the image evaluation result of the image to be evaluated of the interest area in at least one dimension, so that the problems of poor objectivity and low evaluation efficiency of the existing interest area evaluation are solved, the objectivity and the accuracy of the interest area evaluation are improved, and the efficiency of the interest area evaluation is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a flow chart of a method for region of interest assessment disclosed in an embodiment of the present application;
FIG. 2 is a flow chart of another method for region of interest assessment disclosed in accordance with an embodiment of the present application;
FIG. 3 is a flow chart of another disclosed method for region of interest assessment in accordance with an embodiment of the present application;
FIG. 4 is a schematic diagram of an assessment architecture for a region of interest disclosed in accordance with an embodiment of the present application;
FIG. 5 is a flow chart of another disclosed region of interest assessment method according to an embodiment of the present application;
FIG. 6 is a schematic illustration of an interface showing the results of a region of interest assessment according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an apparatus for evaluating a region of interest according to an embodiment of the present disclosure;
fig. 8 is a block diagram of an electronic device disclosed according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
Fig. 1 is a flowchart of a method for evaluating an area of interest according to an embodiment of the present application, which may be applied to a case where an area of interest in a map is evaluated, and the method may be performed by an area of interest evaluation apparatus, which may be implemented by software and/or hardware, and may be integrated on any electronic device with computing capability, such as a server.
As shown in fig. 1, the method for evaluating a region of interest disclosed in the embodiment of the present application may include:
s101, determining an image to be evaluated of the interest area.
The interest area can be any area-shaped physical entity on the map, such as a scenic spot, a block, a mall, and the like; the image to be evaluated comprises any image that can be used for evaluating the region of interest. The image to be evaluated can be acquired by professional acquisition personnel by using acquisition equipment under the conditions of different acquisition angles, different acquisition distances and the like, can be acquired by a map user acquiring the image and uploading the image to a background server, and can be acquired by mining network image data.
Optionally, the image to be evaluated of the interest region includes a street view image, and the street view image includes two parts of contents: the image and the positioning information corresponding to the image are the positioning information when the street view image is collected by the collecting equipment, such as GPS coordinate information; further, determining an image to be evaluated of the region of interest includes: and determining an image to be evaluated of the interest area according to the position relation between the positioning information carried by the collected street view image and the interest area. That is, in a large number of street view images collected, when the position relationship between the positioning information and the interest area satisfies a position condition, the street view image is considered to belong to an image to be evaluated of the interest area, where the position condition includes, but is not limited to: the positioning information carried in the street view image is in a preset area range of the interest area, or the distance between the positioning information carried in the street view image and a specified position in the interest area is smaller than a distance threshold value, and the distance threshold value can be set adaptively. By adopting the street view image, the images can be classified according to the positioning information, and the interest area to which the images belong can be determined efficiently.
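For illustration only (the embodiment does not prescribe a particular implementation), the following is a minimal sketch of the position condition described above: a collected street view image is assigned to an interest region when the distance between its carried positioning information and a designated position of the interest region is below a distance threshold. The haversine helper, the data layout of the images, and the 500 m threshold are assumptions made for this example.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS coordinates."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def images_for_aoi(street_view_images, aoi_lat, aoi_lon, dist_threshold_m=500.0):
    """Select the images whose carried positioning information lies within the
    distance threshold of a designated position of the interest region."""
    selected = []
    for img in street_view_images:  # each img: {"path": ..., "lat": ..., "lon": ...}
        if haversine_m(img["lat"], img["lon"], aoi_lat, aoi_lon) < dist_threshold_m:
            selected.append(img)
    return selected
```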
S102, determining an image evaluation result of the image to be evaluated in at least one dimension.
In the embodiments of the present application, any available image processing technology may be utilized to perform recognition processing on an image to be evaluated, so as to evaluate the image to be evaluated from at least one dimension. For example, the evaluation result of the image to be evaluated may be determined by using a pre-trained evaluation model for evaluating the image.
The evaluation dimension of the image to be evaluated can be used for characterizing the interest area from different angles, so that a map user can obtain more reference information about the interest area when searching the interest area. The evaluation dimension may include, but is not limited to, the aesthetic quality of the area of interest, the attraction rating corresponding to the area of interest, the architectural style of the area of interest (including, but not limited to, distinguishing the style by region and distinguishing the style by year, etc.), the commercial prosperity of the area of interest, and the number of types of shops in the area of interest, among other considered dimensions that may be used to evaluate the area of interest.
Correspondingly, the image evaluation result of the image to be evaluated in at least one dimension includes: an image aesthetic degree evaluation result and an image characteristic degree evaluation result. The image aesthetic degree evaluation result characterizes the interest region from the perspective of the physical environment, such as the tidiness and beauty of the environment of the interest region, where the physical environment includes both the natural environment and the man-made environment; the image characteristic degree evaluation result characterizes the interest region from the perspective of human folk customs, such as the degree to which the current interest region is distinguished from other interest regions in ethnic style and form. The image evaluation result of the image to be evaluated in at least one dimension may also include a scenic-spot grade evaluation result, an architectural style evaluation result, a commercial prosperity evaluation result, a shop type quantity evaluation result, and the like.
For any interest region, a large number of images to be evaluated may be collected for image evaluation. For example, an interest region A_i may correspond to a street view image set I_i = {I_{i,1}, I_{i,2}, I_{i,3}, ...}, and each street view image in the set may have an image evaluation result in at least one dimension. The image evaluation result may be represented in the form of a specific score or an evaluation grade, which is not specifically limited in the embodiments of the present application. Taking the aesthetic degree dimension as an example, the aesthetic degree can be divided into 5 grades, B = {0, 1, 2, 3, 4}, each grade corresponding to an image aesthetic degree evaluation result; similarly, the characteristic degree can be divided into 5 grades, C = {0, 1, 2, 3, 4}, each grade corresponding to an image characteristic degree evaluation result.
S103, evaluating the interest region according to the image evaluation result of the image to be evaluated in at least one dimension.
Each interest region can include a large number of images to be evaluated, so that for each evaluation dimension, the image evaluation results of the images to be evaluated can be comprehensively considered, and the region evaluation result of the interest region in the evaluation dimension is determined, for example, in the images to be evaluated of a certain interest region, the region evaluation result of the interest region in the aspect of the aesthetic degree can be determined to be excellent if most of the image aesthetic degree evaluation results are excellent; further, after determining the region evaluation results of the interest region under each evaluation dimension, the region evaluation results of each dimension can be considered comprehensively, and the comprehensive evaluation result of the interest region is determined.
According to the technical scheme of the embodiment of the application, the interest area is evaluated by utilizing the image evaluation result of the image to be evaluated of the interest area in at least one dimension, and the evaluation objectivity of the interest area is ensured by the objectivity of the image evaluation result because the image evaluation result does not depend on artificial subjective evaluation, so that the problem of poor evaluation objectivity of the conventional interest area is solved, and the objectivity and the accuracy of the evaluation of the interest area are improved; in addition, the interest area is evaluated based on the image evaluation result, relevant personnel are not required to conduct on-site investigation and evaluation, the labor and time cost is reduced, the efficiency of interest area evaluation is improved, the problem of low efficiency of the existing interest area evaluation is solved, and the timeliness of interest area evaluation is also guaranteed; meanwhile, the multidimensional image evaluation result also ensures the multidimensional evaluation of the interest area, and solves the problem that the existing interest area evaluation dimension is single, for example, only the standard scenic spot grade corresponding to the interest area is considered, and the evaluation dimension of the interest area is enriched.
Fig. 2 is a flowchart of another method for evaluating a region of interest according to an embodiment of the present application, which is further optimized and expanded based on the above technical solution, and can be combined with the above various optional embodiments. As shown in fig. 2, the method may include:
s201, determining an image to be evaluated of the interest area.
S202, determining an image evaluation result of the image to be evaluated in at least one dimension by using the image evaluation model.
The image evaluation model is obtained by training based on sample images of a sample interest region and the annotation evaluation results of the sample images in at least one dimension. In the embodiment of the application, the samples participating in model training are obtained by using large-scale random images and a large number of user annotations. For example, the sample interest region set may be denoted A, any sample interest region A_i ∈ A = {A_1, A_2, A_3, ...}, and the sample image set corresponding to any sample interest region A_i may be denoted I_i = {I_{i,1}, I_{i,2}, I_{i,3}, ...}. The annotation evaluation result of each sample image may be represented in the form of a specific score or an evaluation grade, which is not specifically limited in the embodiments of the present application.
For the annotation evaluation result (also called annotation classification value) of each sample image in each dimension, multiple users may annotate the same image simultaneously, and the target annotation evaluation result of the sample image in that dimension is then determined from the number of annotators corresponding to each annotation evaluation result; for example, the annotation evaluation result given by the largest number of annotators is taken as the target annotation evaluation result and used in the model training process. Compared with the traditional approach, model training can learn the relationship between images and the image evaluation results of each dimension based on a big-data mindset, so that subsequent image evaluation is more objective and better matches users' judgment standards; in addition, the big-data approach improves the objectivity of the annotation evaluation results of the sample images used for model training, ensures the confidence of the image evaluation model, and lays a foundation for obtaining objective and accurate interest region evaluation results subsequently.
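As an illustrative sketch of the majority-vote rule mentioned above (an assumption for the example, not a prescribed implementation), the target annotation evaluation result of one sample image in one dimension can be taken as the annotation evaluation result given by the largest number of annotators:

```python
from collections import Counter

def target_label(annotations):
    """annotations: list of evaluation grades (e.g. 0-4) given by different users
    for one sample image in one dimension; returns the grade with the most votes."""
    counts = Counter(annotations)
    label, _ = counts.most_common(1)[0]
    return label

# Example: five users grade the aesthetic degree of one sample image
print(target_label([3, 4, 3, 2, 3]))  # -> 3
```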
Optionally, determining an image evaluation result of the image to be evaluated in at least one dimension by using the image evaluation model, including: and determining the image evaluation results of the images to be evaluated under different dimensions by using the image evaluation model trained in advance based on the multi-task learning. Multi-Task Learning (Multi-Task Learning) is a machine Learning method that learns multiple related tasks together based on Shared Representation (Shared Representation). In the model training process, a sample image can be used as input, the labeling evaluation result of the sample image in at least one dimension is simultaneously used as output, and the multitask image evaluation model is obtained through training based on any available neural network structure. By utilizing the image evaluation model based on the multi-task learning training, the efficiency of image evaluation can be improved.
In addition, determining an image evaluation result of the image to be evaluated in at least one dimension by using the image evaluation model may further include: and respectively determining image evaluation results of the images to be evaluated under different dimensions by using the image evaluation model which is trained separately for each dimension. In the process of training the model for each dimension individually, the sample image can be used as input, the label evaluation result of the sample image in each dimension can be used as output, and the image evaluation model in each dimension is obtained through training based on any available neural network model.
The image evaluation model in the embodiment of the application includes a deep neural network image classification model based on a convolutional neural network; the network can be selected from, but is not limited to, a VGG (Visual Geometry Group) model, a ResNet (Residual Network) model, an Inception model, and the like.
S203, evaluating the interest area according to the image evaluation result of the image to be evaluated in at least one dimension.
According to the technical scheme of the embodiment of the application, the image evaluation result of the image to be evaluated under at least one dimension is determined by using the image evaluation model, so that the automatic scoring of the image to be evaluated is realized, the image evaluation efficiency is improved, the accuracy of the image evaluation result is ensured, and then the image evaluation result is used for evaluating the interest region. The image evaluation result does not depend on artificial subjective evaluation, so the objectivity of the image evaluation result ensures the evaluation objectivity of the interest area; the efficiency of image evaluation is improved, timeliness and evaluation efficiency of evaluation realization of the interest region are guaranteed, and further the problems of poor objectivity and low evaluation efficiency of the existing interest region are solved; the multidimensional image evaluation result also ensures the multidimensional evaluation of the interest region, solves the problem of single evaluation dimension of the existing interest region, and enriches the evaluation dimension of the interest region.
Fig. 3 is a flowchart of another method for evaluating a region of interest according to an embodiment of the present application, which is further optimized and expanded based on the above technical solution, and can be combined with the above various optional embodiments. Specifically, the following exemplifies an example in which the image evaluation model is a neural network model trained in advance based on multitask learning, and the embodiments of the present application are described. As shown in fig. 3, the method may include:
s301, determining an image to be evaluated of the interest area.
S302, extracting the image characteristics of the image to be evaluated by using the characteristic extraction network in the image evaluation model.
S303, based on the image characteristics, determining image evaluation results of the image to be evaluated under different dimensions by using the prediction networks corresponding to different dimensions in the image evaluation model.
The feature extraction network may be a convolutional neural network, and the prediction networks corresponding to different dimensions may include a multi-layer fully-connected network, that is, in the model training process, the model training operation may be performed based on the convolutional neural network and the multi-layer fully-connected network corresponding to multiple (two or more) dimensions.
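For illustration, a minimal sketch of such a structure is given below, assuming PyTorch with a torchvision ResNet-50 backbone as the feature extraction network; the class and head names (MultiTaskImageEvaluator, aesthetic_head, characteristic_head), the layer widths, and the five-grade output per dimension are assumptions made for the example rather than values fixed by the embodiment.

```python
import torch.nn as nn
import torchvision.models as models

class MultiTaskImageEvaluator(nn.Module):
    """Shared convolutional feature extraction network followed by one
    multi-layer fully-connected prediction network per evaluation dimension."""
    def __init__(self, num_grades=5):
        super().__init__()
        backbone = models.resnet50()       # feature extraction network (assumed ResNet-50)
        backbone.fc = nn.Identity()        # expose the 2048-d pooled image features
        self.backbone = backbone

        def head():                        # multi-layer fully-connected prediction network
            return nn.Sequential(nn.Linear(2048, 512), nn.ReLU(),
                                 nn.Linear(512, num_grades))

        self.aesthetic_head = head()       # aesthetic degree dimension
        self.characteristic_head = head()  # characteristic degree dimension

    def forward(self, images):             # images: (batch, 3, H, W) tensor
        features = self.backbone(images)
        return self.aesthetic_head(features), self.characteristic_head(features)
```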
Optionally, in the image evaluation model pre-trained based on multi-task learning, the objective function is related to the weight of each network branch in the image evaluation model and the loss function of the prediction network corresponding to different dimensions in the image evaluation model. For example, the objective function L in the image evaluation model can be expressed by the following formula:
L = L_m(W, W_m, I, Y_m) + L_n(W, W_n, I, Y_n) + λ·(||W||^2 + ||W_m||^2 + ||W_n||^2)

where W denotes the weight matrix of the feature extraction network in the image evaluation model; W_m and W_n denote the weight matrices of the prediction networks for different dimensions in the image evaluation model, and their total number is related to the number of image evaluation dimensions; λ denotes a model parameter (also called a hyper-parameter), which may be an empirical value or adapted during model training; I denotes the image data of the sample images input to the model; Y_m and Y_n denote the annotation evaluation results of the samples in different dimensions during model training; L_m(W, W_m, I, Y_m) and L_n(W, W_n, I, Y_n) denote the loss functions of the prediction networks for the corresponding dimensions; and ||W||^2, ||W_m||^2, ||W_n||^2 denote the two-norms of the corresponding weight matrices. Specifically, L_m(W, W_m, I, Y_m) and L_n(W, W_n, I, Y_n) may each be a softmax cross-entropy loss function; taking L_m(W, W_m, I, Y_m) as an example, the function may be expressed as follows:
L_m(W, W_m, I, Y_m) = −(1/N) Σ_{i=1}^{N} log( e^{f_{y_i}} / Σ_{j=1}^{K} e^{f_j} )

where N denotes the number of input sample images, K denotes the number of prediction classes of the image evaluation result in the current dimension, f_j denotes the value of the j-th class in the prediction vector f of the image evaluation result of the i-th sample image in the current dimension, and f_{y_i} denotes the value corresponding to the actual class of the i-th sample image in that prediction vector. For a detailed description of the softmax cross-entropy loss function, reference may be made to the prior art, and details are not repeated in the embodiments of the present application.
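The objective function above can also be sketched in code (an illustration built on the model sketch given earlier, not the embodiment's implementation; the regularization term over all trainable parameters and the λ value are assumptions for the example): softmax cross-entropy losses for the per-dimension prediction networks plus a λ-weighted sum of squared two-norms of the network weights.

```python
import torch.nn.functional as F

def multitask_objective(model, images, y_aesthetic, y_characteristic, lam=1e-4):
    """Sketch of L = L_m + L_n + lambda*(||W||^2 + ||W_m||^2 + ||W_n||^2) for the
    two-head model sketched above, with softmax cross-entropy per-dimension losses."""
    logits_b, logits_c = model(images)
    loss_b = F.cross_entropy(logits_b, y_aesthetic)       # L_m: aesthetic degree head
    loss_c = F.cross_entropy(logits_c, y_characteristic)  # L_n: characteristic degree head
    # Squared two-norms of all trainable parameters (weights and biases) as a
    # stand-in for ||W||^2 + ||W_m||^2 + ||W_n||^2.
    reg = sum(p.pow(2).sum() for p in model.parameters())
    return loss_b + loss_c + lam * reg
```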
According to the embodiment of the application, in the multi-task model training process, the target function in the image evaluation model is customized, the loss function of the prediction network of each dimension and the weight of each network branch are comprehensively considered, the model training accuracy is ensured, and the accuracy of the evaluation result of the image to be evaluated in each dimension is ensured.
S304, evaluating the interest area according to the image evaluation results of the image to be evaluated in different dimensions.
According to the technical scheme of the embodiment of the application, firstly, the image evaluation result of the image to be evaluated under at least one dimension is determined by pre-training the image evaluation model based on multi-task learning, so that the automatic scoring of the image to be evaluated is realized, the image evaluation efficiency is improved, the accuracy of the image evaluation result is ensured, and then the image evaluation result is used for evaluating the interest region. The image evaluation result does not depend on artificial subjective evaluation, so the objectivity of the image evaluation result ensures the evaluation objectivity of the interest area; the efficiency of image evaluation is improved, timeliness and evaluation efficiency of evaluation realization of the interest region are guaranteed, and further the problems of poor objectivity and low evaluation efficiency of the existing interest region are solved; the multidimensional image evaluation result also ensures the multidimensional evaluation of the interest region, solves the problem of single evaluation dimension of the existing interest region, and enriches the evaluation dimension of the interest region.
Fig. 4 is a schematic diagram of an evaluation architecture of an interest area disclosed in the embodiment of the present application, and specifically, the evaluation architecture of the embodiment of the present application is exemplarily illustrated by taking two dimensions of an aesthetic evaluation and a feature evaluation as examples. As shown in fig. 4, the image to be evaluated may be a street view image, the image to be evaluated is input into an image evaluation model, the image to be evaluated is encoded by using a feature extraction network, that is, a convolutional neural network, image features are extracted, and then the image features are respectively input into an aesthetic degree prediction network and a feature degree prediction network, so as to obtain an image aesthetic degree evaluation result and an image feature degree evaluation result of the image to be evaluated; and then the beauty degree and the feature degree of the interest area can be evaluated based on the image beauty degree evaluation result and the image feature degree evaluation result of a plurality of images to be evaluated in the same interest area. In the multitask image evaluation model for the beauty and specialty degrees, the objective function of the model may be expressed as follows:
L = L_B(W, W_B, I, Y_B) + L_C(W, W_C, I, Y_C) + λ·(||W||^2 + ||W_B||^2 + ||W_C||^2)

where W denotes the weight matrix of the feature extraction network in the image evaluation model; W_B and W_C denote the weight matrices of the aesthetic degree prediction network and the characteristic degree prediction network in the image evaluation model; λ denotes a model parameter (also called a hyper-parameter); I denotes the image data of the sample images input during model training; Y_B and Y_C denote the aesthetic degree annotation evaluation result and the characteristic degree annotation evaluation result of the sample images, respectively; and L_B(W, W_B, I, Y_B) and L_C(W, W_C, I, Y_C) denote the loss functions of the aesthetic degree prediction network and the characteristic degree prediction network. Specifically, L_B(W, W_B, I, Y_B) and L_C(W, W_C, I, Y_C) may each be a softmax cross-entropy loss function; taking L_B(W, W_B, I, Y_B) as an example, it may be expressed in the same form as above:

L_B(W, W_B, I, Y_B) = −(1/N) Σ_{i=1}^{N} log( e^{f_{y_i}} / Σ_{j=1}^{K} e^{f_j} )
fig. 5 is a flowchart of another method for evaluating a region of interest according to an embodiment of the present application, which is further optimized and expanded based on the above technical solution, and can be combined with the above various optional embodiments. As shown in fig. 5, the method may include:
s401, determining an image to be evaluated of the interest area.
S402, determining an image evaluation result of the image to be evaluated in at least one dimension.
S403, determining a region evaluation result of at least one dimension corresponding to the interest region according to the image evaluation result of the image to be evaluated in at least one dimension.
S404, determining a comprehensive evaluation result of the interest region according to the region evaluation result of the interest region.
In the embodiment of the application, for any interest area, a large number of images to be evaluated may be corresponding, each image to be evaluated has an image evaluation result of at least one dimension, for each dimension, the image evaluation results of the plurality of images to be evaluated in the dimension may be calculated according to a preset calculation policy, the calculation result is used as the area evaluation result of the interest area in the dimension, for example, the preset calculation policy is taken as an example of averaging, and the average value of the image evaluation results in the current dimension is used as the area evaluation result of the interest area in the dimension. After the region evaluation results of the interest region in each dimension are obtained, the comprehensive evaluation result of the interest region can be determined according to a preset comprehensive evaluation calculation strategy.
The preset comprehensive evaluation calculation strategy may be to calculate a comprehensive evaluation result of the region of interest by using a preset linear function or a preset nonlinear function related to the region evaluation result of each dimension, or calculate a comprehensive evaluation result of the region of interest according to weight assignment of the preset region evaluation result (for example, the larger the evaluation result value or the higher the grade, the larger the corresponding weight), or determine the comprehensive evaluation result of the region of interest by using a pre-trained comprehensive evaluation determination model. The comprehensive evaluation determination model is a supervised learning model trained in advance based on a supervised learning thought, and specifically, model training can be performed by using a large number of artificial comprehensive evaluation results of an interest region and region evaluation results of the interest region corresponding to each dimension, so as to learn a fitting relationship between the artificial comprehensive evaluation results and the region evaluation results of each dimension. After the training of the comprehensive evaluation determination model is completed, the method can be directly used in the process of determining the comprehensive evaluation results of other interest areas.
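As an illustration of the trained comprehensive evaluation determination model mentioned above, the following sketch assumes scikit-learn and a simple linear fit; the toy numbers are hypothetical training data for the example only, standing in for manually given comprehensive evaluation results paired with per-dimension region evaluation results of sample interest regions.

```python
from sklearn.linear_model import LinearRegression

# Hypothetical training data: per-dimension region evaluation results [B_bar, C_bar]
# of sample interest regions, paired with manually given comprehensive results.
region_dim_results = [[3.5, 2.5], [4.0, 3.8], [2.1, 1.9], [3.0, 3.4]]
manual_composites = [3.1, 3.9, 2.0, 3.2]

model = LinearRegression().fit(region_dim_results, manual_composites)

# After training, the model stands in for the comprehensive evaluation function f
# when determining the comprehensive evaluation results of other interest regions.
print(model.predict([[3.2, 2.8]]))
```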
Taking the image to be evaluated as a street view image and taking the aesthetic degree and the characteristic degree as the evaluation dimensions, the following exemplarily illustrates how to determine the comprehensive evaluation result of an interest region. Suppose that, for an interest region A_i, the image aesthetic degree evaluation results (also called aesthetic score classifications) of all its street view images are denoted B_i = {B_{i,1}, B_{i,2}, B_{i,3}, ...} and the image characteristic degree evaluation results (also called characteristic score classifications) are denoted C_i = {C_{i,1}, C_{i,2}, C_{i,3}, ...}. By averaging, the region evaluation result of interest region A_i for the aesthetic degree, denoted B̄_i, and the region evaluation result for the characteristic degree, denoted C̄_i, can be obtained as:

B̄_i = (1/n) Σ_{j=1}^{n} B_{i,j}

C̄_i = (1/n) Σ_{j=1}^{n} C_{i,j}

where n is the number of images in the street view image set of interest region A_i.
Further, the region evaluation result B̄_i of interest region A_i for the aesthetic degree and the region evaluation result C̄_i for the characteristic degree may be used to comprehensively calculate the comprehensive evaluation result S_i of interest region A_i, for example as:

S_i = f(B̄_i, C̄_i)

where f(·) denotes a comprehensive evaluation function, which may be a preset linear or nonlinear function, may be obtained through a preset weight distribution rule over the region evaluation results, or may be obtained by training the aforementioned supervised learning model; this is not specifically limited in the embodiments of the present application.
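As an illustration of the averaging and comprehensive evaluation above, the following sketch assumes, purely for the example, that the comprehensive evaluation function f is a simple weighted sum of the two region evaluation results; the embodiment leaves the form of f open, and the weights are assumptions.

```python
def region_results(aesthetic_scores, characteristic_scores, w_b=0.5, w_c=0.5):
    """aesthetic_scores / characteristic_scores: per-image evaluation results
    B_i,j and C_i,j over all street view images of one interest region."""
    n = len(aesthetic_scores)
    b_bar = sum(aesthetic_scores) / n        # region aesthetic degree result
    c_bar = sum(characteristic_scores) / n   # region characteristic degree result
    s_i = w_b * b_bar + w_c * c_bar          # comprehensive result S_i = f(B_bar, C_bar)
    return b_bar, c_bar, s_i

# Example: evaluation grades of four street view images of one interest region
print(region_results([3, 4, 3, 4], [2, 3, 3, 2]))  # -> (3.5, 2.5, 3.0)
```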
S405, sending at least one of the area evaluation result and the comprehensive evaluation result of the interest area to a user terminal according to the search requirement of the user on the interest area; the user terminal is used for displaying at least one result.
Illustratively, after a user sends a search requirement about an interest area to a background server through a map application of a user terminal, the server responds to the search requirement of the user, and simultaneously issues basic information of the interest area and at least one result of an area evaluation result and a comprehensive evaluation result to the user terminal for displaying.
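Purely as an illustration (the embodiment does not prescribe a web framework, route, or response format), a search handler on the background server might return the basic information and the evaluation results together; the Flask route, field names, and in-memory store below are all assumptions made for the example.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory store standing in for the stored evaluation results.
AOI_DB = {
    "xxx scenic spot": {"district": "Example District",
                        "aesthetics": 3.4, "characteristic": 3.1, "composite": 3.25},
}

@app.route("/aoi/search")
def search_aoi():
    name = request.args.get("q", "")
    info = AOI_DB.get(name)
    if info is None:
        return jsonify({"error": "not found"}), 404
    # Basic information and the region / comprehensive evaluation results are
    # issued to the user terminal together for display.
    return jsonify({"name": name, **info})
```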
According to the technical scheme of the embodiment of the application, the interest area is evaluated by utilizing the image evaluation result of the image to be evaluated of the interest area in at least one dimension, and the evaluation objectivity of the interest area is ensured by the objectivity of the image evaluation result because the image evaluation result does not depend on artificial subjective evaluation, so that the problem of poor evaluation objectivity of the conventional interest area is solved, and the objectivity and the accuracy of the evaluation of the interest area are improved; in addition, the interest area is evaluated based on the image evaluation result, relevant personnel are not required to conduct on-site investigation and evaluation, the labor and time cost is reduced, the efficiency of interest area evaluation is improved, the problem of low efficiency of the existing interest area evaluation is solved, and the timeliness of interest area evaluation is also guaranteed; meanwhile, the multidimensional image evaluation result also ensures the multidimensional evaluation of the interest region, solves the problem of single evaluation dimension of the existing interest region and enriches the evaluation dimension of the interest region; at least one of the area evaluation result and the comprehensive evaluation result of the interest area is sent to the user terminal for displaying, so that the effect of providing richer auxiliary decision reference information for the user when the user searches the interest area is achieved.
Fig. 6 is a schematic interface display diagram of an interest area evaluation result disclosed in an embodiment of the present application, and specifically, an example of an aesthetic score, a feature score, and a comprehensive score of an interest area is given, which should not be construed as a specific limitation to the embodiment of the present application. As shown in fig. 6, when the user activates the map application on the terminal to search the region of interest "xxx scenic spot", and enters the search result display interface, the interface includes a map display area 61 and a region of interest information display area 62. The interest area information presentation area 62 may include basic information such as the name of the interest area, the distance from the current location of the user, and the administrative area to which the user belongs, and presents the evaluation results of multiple dimensions of "xxx scenic spots" using star marks: the beauty degree score, the feature degree score and the comprehensive score realize the effect of providing rich auxiliary reference information about the interest areas for the user, and are convenient for the user to make decision-making selection among different interest areas.
In addition, the bottom of the search result display interface may also include functional controls such as favorite, search nearby, navigate, and add to itinerary. Of course, the specific layout of the search result display interface may be changed according to the requirements of the application design, and the embodiments of the present application are not specifically limited in this respect.
Fig. 7 is a schematic structural diagram of an interest area evaluation apparatus according to an embodiment of the present application, which may be applied to the case of evaluating an interest area in a map, and the apparatus may be implemented by software and/or hardware, and may be integrated on any electronic device with computing capability, such as a server.
As shown in fig. 7, the apparatus 700 for evaluating a region of interest disclosed in the embodiment of the present application may include an image to be evaluated determining module 701, an image evaluation result determining module 702, and a region of interest evaluating module 703, where:
an image to be evaluated determining module 701, configured to determine an image to be evaluated of an interest region;
an image evaluation result determining module 702, configured to determine an image evaluation result of an image to be evaluated in at least one dimension;
the interest area evaluation module 703 is configured to evaluate an interest area according to an image evaluation result of the image to be evaluated in at least one dimension.
Optionally, the image evaluation result of the image to be evaluated in at least one dimension includes: an image aesthetic degree evaluation result and an image specific degree evaluation result;
the image aesthetic degree evaluation result is used for representing the interest region from the perspective of the entity environment, and the image characteristic degree evaluation result is used for representing the interest region from the perspective of the folk custom of the people.
Optionally, the image evaluation result determining module 702 includes:
the model evaluation unit is used for determining an image evaluation result of the image to be evaluated under at least one dimension by using the image evaluation model;
the image evaluation model is obtained by training based on a sample image of a sample interest area and an annotation evaluation result of the sample image in at least one dimension.
Optionally, the model evaluation unit is specifically configured to:
and determining the image evaluation results of the images to be evaluated under different dimensions by using the image evaluation model trained in advance based on the multi-task learning.
Optionally, the model evaluation unit includes:
the image characteristic extraction subunit is used for extracting the image characteristics of the image to be evaluated by utilizing a characteristic extraction network in the image evaluation model;
and the evaluation result determining subunit is used for determining the image evaluation results of the image to be evaluated under different dimensions by utilizing the corresponding prediction networks with different dimensions in the image evaluation model based on the image characteristics.
Optionally, in the image evaluation model pre-trained based on multi-task learning, the objective function is related to the weight of each network branch in the image evaluation model and the loss function of the prediction network corresponding to different dimensions in the image evaluation model.
Optionally, the objective function L is expressed by the following formula:
L = L_m(W, W_m, I, Y_m) + L_n(W, W_n, I, Y_n) + λ·(||W||^2 + ||W_m||^2 + ||W_n||^2)

wherein W denotes the weight matrix of the feature extraction network in the image evaluation model, W_m and W_n denote the weight matrices of the prediction networks for different dimensions in the image evaluation model, λ denotes a model parameter, Y_m and Y_n denote the annotation evaluation results of the samples in different dimensions during model training, L_m(W, W_m, I, Y_m) and L_n(W, W_n, I, Y_n) denote the loss functions of the prediction networks for the different dimensions, and I denotes the image data of the sample images.
Optionally, the interest region evaluation module 703 includes:
the region evaluation result determining unit is used for determining a region evaluation result of at least one dimension corresponding to the interest region according to the image evaluation result of the image to be evaluated under at least one dimension;
and the comprehensive evaluation result determining unit is used for determining the comprehensive evaluation result of the interest region according to the region evaluation result of the interest region.
Optionally, the apparatus disclosed in the embodiment of the present application further includes:
the evaluation result issuing module is used for sending at least one of the area evaluation result and the comprehensive evaluation result of the interest area to the user terminal according to the search requirement of the user on the interest area after the comprehensive evaluation result determining unit executes the operation of determining the comprehensive evaluation result of the interest area according to the area evaluation result of the interest area;
the user terminal is used for displaying at least one result.
Optionally, the image to be evaluated includes a street view image.
Optionally, the to-be-evaluated image determining module 701 is specifically configured to:
and determining an image to be evaluated of the interest area according to the position relation between the positioning information carried by the collected street view image and the interest area.
The region of interest evaluation apparatus 700 disclosed in the embodiment of the present application can execute any of the region of interest evaluation methods disclosed in the embodiment of the present application, and has functional modules and beneficial effects corresponding to the execution methods. Reference may be made to the description of any method embodiment of the present application for details not explicitly described in this embodiment.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 8, fig. 8 is a block diagram of an electronic device for implementing the method for evaluating a region of interest in the embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of embodiments of the present application described and/or claimed herein.
As shown in fig. 8, the electronic apparatus includes: one or more processors 801, memory 802, and interfaces for connecting the various components, including a high speed interface and a low speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information for a Graphical User Interface (GUI) on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations, e.g., as a server array, a group of blade servers, or a multi-processor system. Fig. 8 illustrates an example of a processor 801.
The memory 802 is a non-transitory computer readable storage medium provided by the embodiments of the present application. The memory stores instructions executable by at least one processor, so that the at least one processor executes the method for evaluating a region of interest provided by the embodiments of the present application. The non-transitory computer-readable storage medium of the embodiments of the present application stores computer instructions for causing a computer to perform the region of interest assessment method provided by the embodiments of the present application.
The memory 802 is a non-transitory computer readable storage medium, and can be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method for evaluating a region of interest in the embodiment of the present application, for example, the to-be-evaluated image determination module 701, the image evaluation result determination module 702, and the region of interest evaluation module 703 shown in fig. 7. The processor 801 executes various functional applications and data processing of the electronic device by running non-transitory software programs, instructions and modules stored in the memory 802, that is, implements the region of interest assessment method in the above-described method embodiments.
The memory 802 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 802 may include high speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 802 may optionally include memory located remotely from the processor 801, which may be connected via a network to an electronic device for implementing the region of interest assessment method of the present embodiments. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for implementing the method for evaluating the region of interest in the embodiment of the present application may further include: an input device 803 and an output device 804. The processor 801, the memory 802, the input device 803, and the output device 804 may be connected by a bus or other means, and are exemplified by a bus in fig. 8.
The input device 803 may receive input numeric or character information and generate key signal inputs related to user settings and function control of an electronic apparatus for implementing the region of interest assessment method in the present embodiment, such as an input device of a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or the like. The output device 804 may include a display apparatus, an auxiliary lighting device such as a Light Emitting Diode (LED), a tactile feedback device, and the like; the tactile feedback device is, for example, a vibration motor or the like. The Display device may include, but is not limited to, a Liquid Crystal Display (LCD), an LED Display, and a plasma Display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, Integrated circuitry, Application Specific Integrated Circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs, also known as programs, software applications, or code, include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or Device for providing machine instructions and/or data to a Programmable processor, such as a magnetic disk, optical disk, memory, Programmable Logic Device (PLD), including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device for displaying information to a user, for example, a Cathode Ray Tube (CRT) or an LCD monitor; and a keyboard and a pointing device, such as a mouse or a trackball, by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here, or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, the interest area is evaluated by utilizing the image evaluation result of the image to be evaluated of the interest area in at least one dimension, so that the problems of poor objectivity and low evaluation efficiency of the conventional interest area evaluation are solved, the objectivity of the interest area evaluation is improved, and the efficiency of the interest area evaluation is improved.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved, and the present invention is not limited herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (14)

1. A method for region of interest assessment, comprising:
determining an image to be evaluated of the interest area;
determining an image evaluation result of the image to be evaluated under at least one dimension;
and evaluating the interest region according to the image evaluation result of the image to be evaluated in the at least one dimension.
2. The method according to claim 1, wherein the image evaluation result of the image to be evaluated in at least one dimension comprises: an image aesthetics evaluation result and an image distinctiveness evaluation result;
wherein the image aesthetics evaluation result characterizes the interest region from the perspective of the physical environment, and the image distinctiveness evaluation result characterizes the interest region from the perspective of local culture and customs.
3. The method of claim 1, wherein determining an image evaluation result of the image to be evaluated in at least one dimension comprises:
determining an image evaluation result of the image to be evaluated under at least one dimension by using an image evaluation model;
the image evaluation model is trained on a sample image of a sample interest area and an annotation evaluation result of the sample image in at least one dimension.
4. The method according to claim 3, wherein the determining an image evaluation result of the image to be evaluated in at least one dimension by using an image evaluation model comprises:
and determining the image evaluation results of the image to be evaluated under different dimensions by utilizing the image evaluation model pre-trained based on multi-task learning.
5. The method according to claim 4, wherein the determining image evaluation results of the image to be evaluated in different dimensions by using the image evaluation model pre-trained based on multi-task learning comprises:
extracting the image characteristics of the image to be evaluated by utilizing a characteristic extraction network in the image evaluation model;
and determining the image evaluation result of the image to be evaluated under different dimensions by utilizing the prediction networks corresponding to different dimensions in the image evaluation model based on the image characteristics.
6. The method according to claim 4 or 5, wherein in the image evaluation model pre-trained based on multi-task learning, an objective function is related to the weight of each network branch in the image evaluation model and a loss function of a prediction network corresponding to different dimensions in the image evaluation model.
7. The method of claim 6, wherein the objective function L is expressed by the following formula:
L = L_m(W, W_m, I, Y_m) + L_n(W, W_n, I, Y_n) + λ(||W||² + ||W_m||² + ||W_n||²)
wherein W represents a weight matrix of the feature extraction network in the image evaluation model, W_m and W_n represent weight matrices of the prediction networks corresponding to different dimensions in the image evaluation model, Y_m and Y_n represent the sample evaluation results in the different dimensions during model training, L_m(W, W_m, I, Y_m) and L_n(W, W_n, I, Y_n) represent the loss functions of the prediction networks corresponding to the different dimensions, λ represents a model parameter, and I represents image data of the sample image.
8. The method according to claim 1, wherein evaluating the region of interest according to the image evaluation result of the image to be evaluated in the at least one dimension comprises:
determining a region evaluation result of the interest region corresponding to the at least one dimension according to an image evaluation result of the image to be evaluated in the at least one dimension;
and determining a comprehensive evaluation result of the interest region according to the region evaluation result of the interest region.
9. The method of claim 8, wherein after determining a composite evaluation of the region of interest based on the region evaluation of the region of interest, the method further comprises:
in response to a search request of a user for the interest region, sending at least one of the region evaluation result and the comprehensive evaluation result of the interest region to a user terminal;
wherein the user terminal is configured to display the at least one result.
10. The method of claim 1, wherein the image to be evaluated comprises a street view image.
11. The method of claim 10, wherein determining an image to be evaluated of the region of interest comprises:
and determining an image to be evaluated of the interest region according to the position relation between the positioning information carried by the collected street view image and the interest region.
12. An apparatus for region of interest assessment, comprising:
the image to be evaluated determining module is used for determining an image to be evaluated of the interest area;
the image evaluation result determining module is used for determining an image evaluation result of the image to be evaluated under at least one dimension;
and the interest region evaluation module is used for evaluating the interest region according to the image evaluation result of the image to be evaluated in the at least one dimension.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the region of interest assessment method of any one of claims 1-11.
14. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the region of interest assessment method of any one of claims 1-11.
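For illustration only, the multi-task image evaluation model of claims 3-7 can be read as a shared feature-extraction network with one prediction head per evaluation dimension, trained with a joint objective. The sketch below assumes a PyTorch-style implementation with a ResNet-18 backbone, mean-squared-error losses, and an L2 penalty on the weights as one plausible reading of the formula in claim 7; none of these specific choices is stated in the claims.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MultiTaskImageEvaluator(nn.Module):
    """Shared feature-extraction network (weights W) with one prediction
    head per evaluation dimension (weights W_m, W_n, ...)."""

    def __init__(self, num_dims: int = 2):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()            # expose the 512-d feature vector
        self.features = backbone               # feature extraction network
        self.heads = nn.ModuleList(            # one prediction network per dimension
            [nn.Linear(512, 1) for _ in range(num_dims)]
        )

    def forward(self, images: torch.Tensor):
        feats = self.features(images)
        return [head(feats).squeeze(-1) for head in self.heads]


def joint_objective(model, images, targets, lam: float = 1e-4):
    """One plausible form of the claim-7 objective:
    L = L_m(W, W_m, I, Y_m) + L_n(W, W_n, I, Y_n) + lam * ||weights||^2."""
    preds = model(images)                      # per-dimension predictions
    mse = nn.MSELoss()
    task_loss = sum(mse(p, y) for p, y in zip(preds, targets))
    l2 = sum(w.pow(2).sum() for w in model.parameters())
    return task_loss + lam * l2
```

Training would minimize joint_objective over batches of sample images I with annotated evaluation results Y_m and Y_n; at inference time, the per-dimension outputs serve as the image evaluation results that feed the region evaluation.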
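Claim 11 selects the images to be evaluated from collected street-view images by the positional relation between each image's positioning information and the interest region. A minimal sketch follows, assuming the region is given as a polygon of (longitude, latitude) vertices and using a point-in-polygon test via shapely; the field names and the containment criterion are assumptions, since the claim does not fix a particular positional relation.

```python
from shapely.geometry import Point, Polygon

def select_images_for_region(street_view_images, region_vertices):
    """Keep street-view images whose positioning information (a GPS point)
    lies inside the interest region."""
    region = Polygon(region_vertices)          # [(lng, lat), ...]
    return [
        img for img in street_view_images
        if region.contains(Point(img["lng"], img["lat"]))
    ]
```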
CN202010381698.1A 2020-05-08 2020-05-08 Interest region evaluation method, device, equipment and medium Pending CN113627419A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010381698.1A CN113627419A (en) 2020-05-08 2020-05-08 Interest region evaluation method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010381698.1A CN113627419A (en) 2020-05-08 2020-05-08 Interest region evaluation method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN113627419A (en) 2021-11-09

Family

ID=78377268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010381698.1A Pending CN113627419A (en) 2020-05-08 2020-05-08 Interest region evaluation method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN113627419A (en)

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012079144A (en) * 2010-10-04 2012-04-19 National Institute Of Information & Communication Technology Sightseeing spot recommendation device and program
US20150206169A1 (en) * 2014-01-17 2015-07-23 Google Inc. Systems and methods for extracting and generating images for display content
CN104392007A (en) * 2014-12-18 2015-03-04 西安电子科技大学宁波信息技术研究院 Streetscape retrieval and identification method of intelligent mobile terminal
CN104933643A (en) * 2015-06-26 2015-09-23 中国科学院计算技术研究所 Scenic region information pushing method and device
KR20170078109A (en) * 2015-12-29 2017-07-07 에스케이플래닛 주식회사 Method and Apparatus for Providing Recommended Contents
WO2017166137A1 (en) * 2016-03-30 2017-10-05 中国科学院自动化研究所 Method for multi-task deep learning-based aesthetic quality assessment on natural image
US20190026884A1 (en) * 2016-03-30 2019-01-24 Institute Of Automation, Chinese Academy Of Sciences Method for assessing aesthetic quality of natural image based on multi-task deep learning
CN107305561A (en) * 2016-04-21 2017-10-31 斑马网络技术有限公司 Processing method, device, equipment and the user interface system of image
CN107330455A (en) * 2017-06-23 2017-11-07 云南大学 Image evaluation method
CN107766417A (en) * 2017-09-08 2018-03-06 百度在线网络技术(北京)有限公司 A kind of method and apparatus for being used to submit POI data
WO2019057067A1 (en) * 2017-09-20 2019-03-28 众安信息技术服务有限公司 Image quality evaluation method and apparatus
CN108417029A (en) * 2018-02-11 2018-08-17 东南大学 City road network travel time estimation method based on adaptive multitask deep learning
CN108648104A (en) * 2018-04-25 2018-10-12 河南聚合科技有限公司 A kind of tourism big data co-construction and sharing based on cloud platform
CN111024101A (en) * 2018-10-10 2020-04-17 上海擎感智能科技有限公司 Navigation path landscape evaluation method and system, storage medium and vehicle-mounted terminal
CN110189291A (en) * 2019-04-09 2019-08-30 浙江大学 A kind of general non-reference picture quality appraisement method based on multitask convolutional neural networks
CN110119689A (en) * 2019-04-18 2019-08-13 五邑大学 A kind of face beauty prediction technique based on multitask transfer learning
CN110298687A (en) * 2019-05-23 2019-10-01 香港理工大学深圳研究院 A kind of region attraction appraisal procedure and equipment
CN110223292A (en) * 2019-06-20 2019-09-10 厦门美图之家科技有限公司 Image evaluation method, device and computer readable storage medium
CN110348525A (en) * 2019-07-15 2019-10-18 北京百度网讯科技有限公司 Map point of interest acquisition methods, device, equipment and storage medium
CN110378410A (en) * 2019-07-16 2019-10-25 北京字节跳动网络技术有限公司 Multi-tag scene classification method, device and electronic equipment
CN110414489A (en) * 2019-08-21 2019-11-05 五邑大学 A kind of face beauty prediction technique based on multi-task learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG XIAOWEI: "Image Aesthetic Quality Evaluation Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology, No. 3 *

Similar Documents

Publication Publication Date Title
JP7159405B2 (en) MAP INFORMATION DISPLAY METHOD, APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
WO2022134478A1 (en) Route recommendation method and apparatus, electronic device, and storage medium
CN113642431B (en) Training method and device of target detection model, electronic equipment and storage medium
CN111026937A (en) Method, device and equipment for extracting POI name and computer storage medium
CN109409612A (en) A kind of paths planning method, server and computer storage medium
CN111931067A (en) Interest point recommendation method, device, equipment and medium
CN111814077A (en) Information point query method, device, equipment and medium
CN112541332B (en) Form information extraction method and device, electronic equipment and storage medium
JP7206514B2 (en) Method for sorting geolocation points, training method for sorting model, and corresponding device
CN116932733A (en) Information recommendation method and related device based on large language model
CN112380104A (en) User attribute identification method and device, electronic equipment and storage medium
CN111915608A (en) Building extraction method, device, equipment and storage medium
CN113158030B (en) Recommendation method and device for remote interest points, electronic equipment and storage medium
CN117541202A (en) Employment recommendation system based on multi-mode knowledge graph and pre-training large model fusion
CN111241225B (en) Method, device, equipment and storage medium for judging change of resident area
CN113157829A (en) Method and device for comparing interest point names, electronic equipment and storage medium
CN113627419A (en) Interest region evaluation method, device, equipment and medium
US11976935B2 (en) Route recommendation method, electronic device, and storage medium
CN112380849B (en) Method and device for generating interest point extraction model and extracting interest points
CN113449754B (en) Label matching model training and displaying method, device, equipment and medium
CN112052402B (en) Information recommendation method and device, electronic equipment and storage medium
CN112989219B (en) Point-of-interest recommendation method and device, electronic equipment and storage medium
CN114548288A (en) Model training and image recognition method and device
Sun et al. Context-aware augmented reality using human–computer interaction models
CN112257517A (en) Scenic spot recommendation system based on scenic spot clustering and group emotion recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination