CN113469737A - Advertisement analysis database creation system - Google Patents

Advertisement analysis database creation system

Info

Publication number
CN113469737A
CN113469737A
Authority
CN
China
Prior art keywords
user
advertisement
data
module
analysis database
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110686157.4A
Other languages
Chinese (zh)
Inventor
吴育怀
苏娟
汪功林
陈孝君
梁雨菲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Grapefruit Cool Media Information Technology Co ltd
Original Assignee
Anhui Grapefruit Cool Media Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Anhui Grapefruit Cool Media Information Technology Co ltd filed Critical Anhui Grapefruit Cool Media Information Technology Co ltd
Priority claimed from CN202110686157.4A
Publication of CN113469737A
Legal status: Withdrawn


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0201 Market modelling; Market analysis; Collecting market data
    • G06Q 30/0202 Market predictions or forecasting for commercial activities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0201 Market modelling; Market analysis; Collecting market data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0241 Advertisements

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention belongs to the field of big data processing, and particularly relates to a system for creating an advertisement analysis database. The creation system includes: a historical user query module, an advertisement characteristic data extraction module, a user feedback data extraction module, a face recognition module, an image recognition module, a voice recognition module, a video action recognition module, a user label establishment module, an acceptance evaluation value calculation module and a database establishment module. The system can analyze a user's preferences from the user's identity features and feedback on different advertisements, and thereby build a database containing the identity features, favorite features and aversion features of each user. Because the favorite and aversion features are extracted from the keyword data sets of the advertisements, the system is well suited to portraying users' consumption demands and consumption information, and can solve the problems that offline user behavior analysis and interest prediction are difficult and accurate user portraits cannot be realized.

Description

Advertisement analysis database creation system
Technical Field
The invention belongs to the field of big data processing, and particularly relates to a system for creating an advertisement analysis database.
Background
Commercial advertisements significantly influence users' purchasing decisions. To increase sales of goods, merchants invest heavily in advertising services every year, which represents a huge business opportunity. Traditional advertising services pursue greater effectiveness simply by increasing the coverage of advertisements. With the continuous progress of big-data analysis technology, more and more advertising companies learn users' interests and hobbies through behavior analysis and then push advertisements to users accurately. Such accurate advertisement push services are, however, typically only available online: online network service providers can track a user's browsing and search behavior and thereby analyze the user's interests, but this kind of behavior tracking is far more costly and difficult to implement offline. A new method for analyzing users' interests offline is therefore needed.
Elevators, shopping malls, garages and the like are the most common offline advertising scenarios. In these scenarios the target user group of advertisement delivery is large and the users' identities are diverse and complex; user behavior analysis is very valuable where user volume is large, and is also significant for accumulating user data and building advertisement analysis data. However, no related technology or application exists in the prior art: only data such as the gender ratio, age distribution and occupation distribution of a user group can be obtained through surveys. It is impossible to mine deeper, more useful information such as customers' interests and hobbies, or to realize accurate portraits of different users.
Disclosure of Invention
The invention provides a system for creating an advertisement analysis database, which aims to solve the problems that offline user behavior analysis and interest prediction are difficult and accurate user portraits cannot be realized.
The invention is realized by adopting the following technical scheme:
a system for creating an advertisement analysis database is used for analyzing the preference of a user according to the identity characteristics of the user and the feedback of different advertisements, and further creating a database containing the identity characteristics, the preference characteristics and the aversion characteristics of each user; the creation system includes: the system comprises a historical user query module, an advertisement characteristic data extraction module, a user feedback data extraction module, a face recognition module, an image recognition module, a voice recognition module, a video action recognition module, a user label establishment module, an acceptance evaluation value calculation module and a database establishment module.
The historical user query module is used for querying an advertisement analysis database and extracting a user portrait data set of the collected historical users; the user portrait dataset comprises facial feature data of each historical user and user labels, and the user labels comprise identity labels, favorite labels and aversion labels.
The advertisement characteristic data extraction module is used for extracting the playing time length T of each advertisement and a keyword data set associated with the advertisement when the advertisement is played by an advertisement delivery system.
The user feedback data extraction module is used for acquiring voice stream data and video stream data generated by a user watching the advertisement playing and a switching instruction of the advertisement required to be switched and played.
The face recognition module is used for extracting the face features of each user watching the advertisement, completing the comparison process of the face features of the current user and the face features of each historical user in the advertisement analysis database, and distinguishing the newly added user from the historical users.
The image recognition module is used for performing frame-splitting processing on the video stream data to obtain an image data set and performing image recognition on it, so as to obtain: (1) identity feature data of each user; (2) the expressions of each user during advertisement playing, where p1,n is the proportion of the images sampled at every other frame in which the expression of the user numbered n is classified as liked, p2,n is the proportion in which the expression is classified as ignored, and p3,n is the proportion in which the expression is classified as disliked.
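The three proportions p1,n, p2,n and p3,n can be computed from the per-frame expression classifications roughly as follows (the label names are illustrative, not from the patent):

```python
from collections import Counter

def expression_proportions(frame_labels):
    """Compute p1, p2, p3: the shares of sampled frames whose expression is
    classified as 'liked', 'ignored' and 'disliked' for one user."""
    counts = Counter(frame_labels)
    total = len(frame_labels)
    if total == 0:
        return 0.0, 0.0, 0.0
    return (counts["liked"] / total,
            counts["ignored"] / total,
            counts["disliked"] / total)

# Labels from images sampled at every other frame of the video stream:
p1, p2, p3 = expression_proportions(
    ["liked", "liked", "ignored", "disliked", "liked", "ignored"])
```

Because every sampled frame receives exactly one of the three classes, the proportions always sum to 1.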
The voice recognition module is used for carrying out voice recognition on the voice stream data.
The video motion recognition module is used for carrying out video motion recognition on video stream data.
The user label establishing module is used for establishing an empty user label for each newly added user identified by the face identification module and supplementing various feature data which are acquired by the image identification module and reflect the identity features of the newly added user to the identity label of the corresponding user.
The acceptance evaluation value calculation module is used for calculating the acceptance evaluation value En of each user for the currently played advertisement.
The database creation module is used for:
(1) based on expert experience, set EnA high threshold value E ofhAnd a low threshold value El(ii) a Wherein E ishCritical value indicating that the user likes the currently played advertisement, ElA critical value indicating that the user dislikes the currently played advertisement, El>0。
(2) The following decisions and decisions are made for each user:
(i) When En ≥ Eh and p1,n + p2,n ≥ p3,n, add the feature data in the keyword data set associated with the currently played advertisement to the favorite label of the current user and de-duplicate the supplemented favorite label; and delete, from the aversion label of the current user, any feature data identical to feature data in the keyword data set.
(ii) When En ≤ El and p2,n + p3,n ≥ p1,n, add the feature data in the keyword data set associated with the currently played advertisement to the aversion label of the current user and de-duplicate the supplemented aversion label; and delete, from the favorite label of the current user, any feature data matching feature data in the keyword data set.
(3) Update the user label of each user in turn to obtain a new user portrait data set for each user, thereby completing the creation or updating of the advertisement analysis database.
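The two decision rules can be sketched with Python sets standing in for the favorite and aversion labels (set union gives the de-duplication for free; the names and data shapes are illustrative):

```python
def update_user_labels(user, keywords, E_n, E_h, E_l, p1, p2, p3):
    """Apply the threshold rules to one user's favorite/aversion labels.
    `user` maps label names to sets of feature data; `keywords` is the
    keyword data set of the currently played advertisement."""
    kw = set(keywords)
    if E_n >= E_h and p1 + p2 >= p3:
        user["favorite"] |= kw     # supplement; the set de-duplicates
        user["aversion"] -= kw     # drop identical entries from the aversion label
    elif E_n <= E_l and p2 + p3 >= p1:
        user["aversion"] |= kw
        user["favorite"] -= kw
    return user

u = {"favorite": {"sports"}, "aversion": {"cars"}}
update_user_labels(u, ["cars", "sneakers"], E_n=9.0, E_h=8.0, E_l=3.0,
                   p1=0.5, p2=0.3, p3=0.2)
```

In this example the first rule fires, so "cars" and "sneakers" join the favorite label and "cars" is removed from the aversion label.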
Further, the specific functions of the user feedback data extraction module include:
(1) when the advertisement delivery system plays the advertisement, voice information generated by users watching the advertisement in the advertisement delivery area is obtained, and voice stream data relevant to each advertisement is obtained.
(2) When the advertisement delivery system plays the advertisement, multi-angle monitoring videos of all users watching the advertisement in the advertisement delivery area are obtained, and video stream data relevant to each advertisement are obtained.
(3) Acquire, while the advertisement delivery system plays the advertisement, any switching instruction issued by a user watching it; the switching instruction may be a keyboard input instruction, a voice interaction instruction or a gesture interaction instruction. The characteristic quantity SW representing the switching instruction is assigned the value 1 when such an instruction is successfully acquired, and 0 otherwise.
Further, the specific functions of the voice recognition module include:
(1) Acquiring the voice interaction instruction, issued by a user during advertisement playing, indicating that the currently played advertisement should be switched.
(2) Extracting all words in the voice stream data and finding the keywords matching feature data in the keyword data set.
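Counting keyword matches in the recognized speech might look like the sketch below (the transcript format and matching rule are illustrative assumptions):

```python
import re

def count_matched_keywords(transcript, keyword_set):
    """Count words in the recognized speech that match feature data in the
    advertisement's keyword data set."""
    words = re.findall(r"\w+", transcript.lower())
    return sum(1 for w in words if w in keyword_set)

N1 = count_matched_keywords(
    "Those sneakers look great, I want the sneakers",
    {"sneakers", "discount"})
```

The resulting count is the quantity N1 used later by the acceptance evaluation value calculation module.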
Further, the specific functions of the video motion recognition module include:
(1) and extracting a gesture interaction instruction which is sent by a certain user in the video stream data and represents that the currently played advertisement is required to be switched.
(2) And extracting gesture actions which are sent out by a certain user in the video stream data and are used for feeding back the currently played advertisement.
(3) And extracting characteristic actions reflecting the eye attention position change of a certain user in the current advertisement playing process.
Further, the specific functions of the acceptance evaluation value calculation module include:
(1) acquiring keywords which are identified from voice stream data by a voice identification module and matched with characteristic data in a keyword data set, and counting the number N of the keywords1
(2) Acquiring the gesture actions which are recognized by the video action recognition module and reflect the feedback of the user to the currently played advertisement, and counting the number N of the gesture actions2
(3) Obtaining the characteristic action which is identified by the video action identification module and reflects the eye attention position change of a certain user in the current advertisement playing process, and calculating the attention duration t of the current user to the currently played advertisement according to the characteristic actionn
(4) Acquire the counts of the three expression classes recognized for each user by the image recognition module, and compute the proportion of each class in the total number of sampled images.
(5) The value of SW is obtained.
(6) Calculate the acceptance evaluation value En of each user for the current advertisement by the following formula:

[The formula for En is presented as an image in the original publication.]

In the formula, n is the user number of the current user; En is the evaluation value of the user numbered n for the currently played advertisement, with En ≥ 0, and a larger En reflects a higher degree of recognition of the currently played multimedia by the user; tn/T is the attention concentration of the user numbered n on the currently played advertisement; k1 is the influence factor of voice information feedback on the overall recognition evaluation result; k2 is the influence factor of gesture action feedback on the overall result; k3 is the influence factor of expression feedback; k4 is the influence factor of attention concentration; m1 is the score of a single keyword in the voice information feedback; m2 is the score of a single gesture in the gesture action feedback; m3 is the score of concentration; a is the score of a liked expression, and p1,n is the proportion of the images sampled at every other frame in which the expression of the user numbered n is classified as liked; b is the score of an ignored expression, and p2,n is the corresponding proportion classified as ignored; c is the score of a disliked expression, and p3,n is the corresponding proportion classified as disliked.
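Since the formula itself survives only as an image, the sketch below assumes a simple weighted sum of the four feedback channels, zeroed when a switch instruction was received; every weight and the combination rule are assumptions, not the patent's actual formula:

```python
def acceptance_score(N1, N2, p1, p2, p3, t_n, T, SW,
                     k1=1.0, k2=1.0, k3=1.0, k4=1.0,
                     m1=1.0, m2=1.0, m3=1.0,
                     a=1.0, b=0.2, c=-1.0):
    """One plausible reading of En: a weighted sum over voice keywords,
    gesture actions, expression proportions and attention concentration."""
    concentration = t_n / T if T > 0 else 0.0
    E = (k1 * m1 * N1                        # voice information feedback
         + k2 * m2 * N2                      # gesture action feedback
         + k3 * (a * p1 + b * p2 + c * p3)   # expression feedback
         + k4 * m3 * concentration)          # attention concentration
    if SW:            # an explicit switch request signals rejection
        E = 0.0
    return max(E, 0.0)  # the patent requires En >= 0

E = acceptance_score(N1=2, N2=1, p1=0.5, p2=0.3, p3=0.2,
                     t_n=12.0, T=20.0, SW=0)
```

With all weights at their defaults this evaluates to 2 + 1 + 0.36 + 0.6 = 3.96; a switch instruction (SW = 1) forces the score to 0 under this assumed rule.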
Further, the acceptance evaluation value calculation module calculates the attention duration tn of the user numbered n for the currently played advertisement by the following formula:

[The formula for tn is presented as an image in the original publication.]

In the formula, t1,n is the direct-view duration of the user numbered n during playing of the current advertisement; t2,n is the eyes-closed duration of that user; t3,n is the head-down duration; t4,n is the turned-away duration.
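Because the tn formula appears only as an image, the sketch below assumes attention time is the play duration minus the eyes-closed, head-down and turned-away durations (equivalently, the direct-view duration t1,n); this reading is an assumption:

```python
def attention_duration(T, t2, t3, t4):
    """Assumed reading of the image formula: attention time is the play
    duration T minus eyes-closed (t2), head-down (t3) and turned-away (t4)
    time, clamped to the interval [0, T]."""
    return min(max(T - t2 - t3 - t4, 0.0), T)

t_n = attention_duration(T=30.0, t2=3.0, t3=5.0, t4=2.0)
```

Here a 30-second advertisement with 10 seconds of inattention yields an attention duration of 20 seconds, i.e. a concentration tn/T of 2/3.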
Further, the feature data in the keyword data set of each advertisement extracted by the advertisement feature data extraction module at least comprises:
(1) Keywords reflecting the product promoted by the advertisement.
(2) Keywords reflecting the target customer population of the advertisement.
(3) Keywords reflecting the spokesperson or character image of the advertisement.
(4) High-frequency or special keywords in the advertisement script.
(5) The duration classification of the advertisement.
(6) The genre classification of the advertisement.
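A minimal sketch of how one advertisement's record might be stored, with one field per category above (all field names and values are illustrative, not from the patent):

```python
# One advertisement's play duration T and keyword data set.
ad_record = {
    "play_duration_T": 15.0,                  # seconds
    "keywords": {
        "product": ["sneakers"],              # (1) promoted product
        "target_group": ["young adults"],     # (2) target customer population
        "spokesperson": ["athlete"],          # (3) spokesperson / character image
        "ad_words": ["limited", "new"],       # (4) high-frequency / special words
        "duration_class": "short",            # (5) duration classification
        "genre": "sports",                    # (6) genre classification
    },
}
```

Flattening the values of `ad_record["keywords"]` would give the keyword set matched against user feedback.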
Further, the feature data in the identity tag includes the user number, gender, age group, wearing style and other features, where "other features" are identifiable features, other than gender, age group and wearing style, that are useful for distinguishing a user's identity. The age group in the identity tag is one of 0-10, 10-20, 20-30, 30-50, 50-70 and over 70 years old, classified according to the image recognition result; the wearing style in the identity tag is casual, business, sports, children's or elderly.
Further, the content reflected by the other features includes whether glasses are worn, whether a hat is worn, whether the user is balding, whether lipstick is worn, whether high-heeled shoes are worn, whether the user has a beard, and whether a wristwatch is worn. For each of these features, if it is present, feature data reflecting it is added to the other features; otherwise nothing is added.
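Assembling the "other features" field from boolean observations can be sketched as follows (the observation keys are illustrative):

```python
def other_features(observations):
    """Collect only the positive findings into the 'other features' field;
    absent or False observations contribute nothing, mirroring the rule above."""
    candidates = ["glasses", "hat", "balding", "lipstick",
                  "high_heels", "beard", "wristwatch"]
    return [c for c in candidates if observations.get(c)]

tags = other_features({"glasses": True, "hat": False, "beard": True})
```

Only the features actually observed ("glasses" and "beard" here) end up in the identity tag.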
Further, the advertisement analysis database is empty when first created. After the user portrait data set of the first historical user is entered, the creation system determines whether the current user is a newly added user or a historical user by comparing the current user's facial features with those of the historical users in the advertisement analysis database; it then enters the user portrait data set of a distinguished newly added user into the database, or updates the user label in the user portrait data set of an existing historical user.
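The new-versus-historical decision can be sketched as a nearest-neighbour comparison of facial feature vectors; the Euclidean metric and the threshold are assumptions, since the patent does not specify the matching rule:

```python
import math

def match_historical_user(face_vec, database, threshold=0.6):
    """Return the id of the closest stored user if the distance is within
    the threshold, or None to signal a newly added user."""
    best_id, best_d = None, float("inf")
    for user_id, stored in database.items():
        d = math.dist(face_vec, stored)  # Euclidean distance (assumed metric)
        if d < best_d:
            best_id, best_d = user_id, d
    return best_id if best_d <= threshold else None

db = {"u1": [0.1, 0.2, 0.3]}
match = match_historical_user([0.1, 0.2, 0.31], db)  # close: historical user
new = match_historical_user([0.9, 0.9, 0.9], db)     # far: newly added user
```

A `None` result triggers creation of an empty user label for the newly added user; a match triggers an update of the existing user portrait data set.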
The technical scheme provided by the invention has the following beneficial effects:
the system for creating the advertisement analysis database provided by the invention innovates a method for analyzing the user behavior interest and predicting the demand. The hobby, demand and aversion objects of the user are analyzed and predicted through feedback of the user when the user watches different types of advertisements, and the analysis mode has very strong pertinence and is also very suitable for portraying the consumption psychology of the user; thereby predicting the consumer demand of the user. The database creating method and the database creating system can be applied to scenes of high pedestrian flow such as shopping malls, elevators, garages and the like, and can be used for performing behavior analysis and interest prediction on users on line.
The technical scheme of the invention extracts a variety of identity features of the user; the user's responses to different advertisements are accurately obtained through image recognition, voice recognition and video action recognition, accurate user portraits are achieved through a large number of repeated feature extraction processes, and the objects the user is or is not interested in are learned. Classifying and storing the extracted feature data yields the user's portrait data set. The creation system also has good learning performance: it adjusts the content of the feature data with the user's real-time feedback, making the user portraits more accurate and up to date.
Drawings
Fig. 1 is a schematic block diagram of a system for creating an advertisement analysis database according to embodiment 1 of the present invention;
fig. 2 is a flowchart of a method for creating an advertisement analysis database according to embodiment 1 of the present invention;
fig. 3 is a category differentiation diagram of feature data included in an identity tag in an advertisement analysis database according to embodiment 1 of the present invention;
FIG. 4 is a category differentiation diagram of feature data included in a user portrait data set according to embodiment 1 of the present invention;
FIG. 5 is a flowchart of a method for accurately delivering advertisements based on user images according to embodiment 2 of the present invention;
fig. 6 is a flowchart of a method for timely analyzing user requirements in a business district scenario according to embodiment 3 of the present invention;
fig. 7 is a flowchart of a method for evaluating the recognition degree of the advertisement by the user based on the feature recognition in embodiment 4 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
The embodiment further provides a system for creating an advertisement analysis database, as shown in fig. 1, the system includes: the system comprises a historical user query module, an advertisement characteristic data extraction module, a user feedback data extraction module, a face recognition module, an image recognition module, a voice recognition module, a video action recognition module, a user label establishing module, an acceptance evaluation value calculation module and a database establishing module.
The historical user query module is used for querying an advertisement analysis database and extracting a user portrait data set of the collected historical users; the user portrait dataset comprises facial feature data of each historical user and user labels, and the user labels comprise identity labels, favorite labels and aversion labels.
The advertisement characteristic data extraction module is used for extracting the playing time length T of each advertisement and a keyword data set associated with the advertisement when the advertisement is played by an advertisement delivery system.
A user feedback data extraction module to: (1) when the advertisement delivery system plays the advertisement, voice information generated by users watching the advertisement in the advertisement delivery area is obtained, and voice stream data relevant to each advertisement is obtained. (2) When the advertisement delivery system plays the advertisement, multi-angle monitoring videos of all users watching the advertisement in the advertisement delivery area are obtained, and video stream data relevant to each advertisement are obtained. (3) The method comprises the steps that when the advertisement delivery system plays the advertisement, a switching instruction sent by a user watching the advertisement is obtained, wherein the switching instruction comprises a keyboard input instruction, a voice interaction instruction or a gesture interaction instruction; and assigning the characteristic quantity SW representing the switching instruction to be 1 when the acquisition is successful, otherwise assigning the characteristic quantity SW to be 0.
The face recognition module is used for obtaining an image data set through framing processing according to the video stream data and extracting face features of each user appearing in the image data set; and finishing the comparison process of the facial features of the current user and the facial features of each historical user in the advertisement analysis database, and distinguishing the newly added user from the historical users.
The image identification module is used for carrying out image identification on an image data set obtained by framing processing of video stream data, and further: (1) and acquiring various feature data reflecting the identity features of the newly added user. (2) The expressions of all the users during the advertisement playing are extracted, and the expressions are classified into one of liked, ignored or disliked.
The voice recognition module is used for carrying out voice recognition on voice stream data, and then: (1) and acquiring the voice interaction instruction which is sent by a user during the advertisement playing and is used for indicating that the currently played advertisement is required to be switched. (2) And extracting all words in the voice stream data, and finding out keywords matched with the characteristic data in the keyword data set.
The video motion recognition module is used for carrying out video motion recognition on video stream data, and further: (1) and extracting a gesture interaction instruction which is sent by a certain user in the video stream data and represents that the currently played advertisement is required to be switched. (2) And extracting gesture actions which are sent out by a certain user in the video stream data and are used for feeding back the currently played advertisement. (3) And extracting characteristic actions reflecting the eye attention position change of a certain user in the current advertisement playing process.
The user label establishing module is used for establishing an empty user label for each newly added user, and supplementing various feature data which are acquired by the image identification module and reflect the identity features of the newly added user to the identity label of the corresponding user.
The acceptance evaluation value calculation module is used for: (1) acquiring the keywords, recognized from the voice stream data by the voice recognition module, that match feature data in the keyword data set, and counting their number N1; (2) acquiring the gesture actions, recognized by the video action recognition module, that reflect a user's feedback on the currently played advertisement, and counting their number N2; (3) acquiring the characteristic actions, recognized by the video action recognition module, that reflect the change in a user's eye-attention position during playing of the current advertisement, and calculating from them the attention duration tn of the current user for the currently played advertisement, where n is the user number of the current user. The module calculates tn by the following formula:
[The formula for tn is presented as an image in the original publication.]
In the formula, t1,n is the direct-view duration of the user numbered n during playing of the current advertisement; t2,n is the eyes-closed duration of that user; t3,n is the head-down duration; t4,n is the turned-away duration.
(4) Acquire the counts of the three expression classes recognized for each user by the image recognition module, and compute the proportion of each class in the total number of sampled images. (5) Acquire the value of SW. (6) Calculate the acceptance evaluation value En of each user for the current advertisement by the following formula:
[The formula for En is presented as an image in the original publication.]
In the formula, n is the user number of the current user; En is the evaluation value of the user numbered n for the currently played advertisement, with En ≥ 0, and a larger En reflects a higher degree of recognition of the currently played multimedia by the user; tn/T is the attention concentration of the user numbered n on the currently played advertisement; k1 is the influence factor of voice information feedback on the overall recognition evaluation result; k2 is the influence factor of gesture action feedback on the overall result; k3 is the influence factor of expression feedback; k4 is the influence factor of attention concentration; m1 is the score of a single keyword in the voice information feedback; m2 is the score of a single gesture in the gesture action feedback; m3 is the score of concentration; a is the score of a liked expression, and p1,n is the proportion of the images sampled at every other frame in which the expression of the user numbered n is classified as liked; b is the score of an ignored expression, and p2,n is the corresponding proportion classified as ignored; c is the score of a disliked expression, and p3,n is the corresponding proportion classified as disliked.
The database creation module is configured to: (1) set, based on expert experience, a high threshold E_h and a low threshold E_l for E_n, where E_h represents the critical value indicating that the user likes the currently played advertisement, E_l represents the critical value indicating that the user dislikes the currently played advertisement, and E_l > 0. (2) Make the following judgments and decisions for each user: (i) when E_n ≥ E_h and p_{1,n} + p_{2,n} ≥ p_{3,n}, add the feature data in the keyword data set associated with the currently played advertisement to the favorite tag of the current user and de-duplicate the supplemented favorite tag; and delete, from the aversion tag of the current user, any feature data identical to feature data in the keyword data set; (ii) when E_n ≤ E_l and p_{2,n} + p_{3,n} ≥ p_{1,n}, add the feature data in the keyword data set associated with the currently played advertisement to the aversion tag of the current user and de-duplicate the supplemented aversion tag; and delete, from the favorite tag of the current user, any feature data matching feature data in the keyword data set. (3) Update the user tag of each user in turn to obtain a new user portrait data set for each user, thereby completing the creation or updating of the advertisement analysis database; the user portrait data set includes the facial feature data and user tag of the corresponding user.
The advertisement analysis database in this embodiment is empty at the beginning of creation. After the user portrait data set of the first historical user has been entered, the creation system determines whether the current user is a newly added user or a historical user by comparing the current user's facial features with those of the historical users in the advertisement analysis database; it then either enters the newly added user's user portrait data set into the advertisement analysis database, or updates the user tag in the user portrait data set of the existing historical user.
The data in the advertisement analysis database obtained by the system of the embodiment realizes accurate portrayal of user interests and hobbies; thereby enabling accurate targeted marketing of advertisements to users.
The data in the advertisement analysis database is mainly obtained by identifying the identity characteristics of the user and analyzing the result of the acceptance evaluation of the user on the video advertisements in the scenes such as an elevator, a garage, a shopping mall and the like. The data in the advertisement analysis database mainly comprises the following contents:
(1) Facial features of the user. This feature serves as the unique identity mark of each user and is mainly used for distinguishing the identities of different users; meanwhile, the advertisement analysis database assigns a dedicated user number to each user according to these identity marks.
(2) Identity characteristics of the user. This part of the data is rich in content and includes all obtainable features useful for distinguishing user identity, including age, height, posture, clothing, physiological state and the like; these features have reference value for judging the user's occupation type, behavior habits, demand characteristics, hobbies, group membership and so on.
(3) A preference object of the user; the data of the part is obtained through the feedback of the user to different types of advertisements, and the content of the part is continuously updated and continuously optimized; basically, objects concerned and favored by the user in the current state can be described.
(4) An object of aversion of the user; the data of the part is obtained through the feedback of the user to different types of advertisements, and the content of the part is continuously updated and continuously optimized; objects that are not of interest or aversion in the current state of the user can be substantially characterized.
In this embodiment, as shown in fig. 2, the advertisement analysis database is created as follows:
step one, establishing user labels of all users
1. In the advertisement playing process, the facial features of each user are sequentially acquired, and facial recognition is carried out on the facial features.
2. Inquiring an advertisement analysis database according to the result of the facial recognition, and judging whether the facial features of the current user are matched with the facial features of a certain historical user in the advertisement analysis database:
(1) if yes, the current user is skipped.
(2) Otherwise, establishing an empty user label for the current user; the user tags include an identity tag, a favorite tag and an aversion tag.
3. And acquiring the multi-angle image of each user, and supplementing the feature data in the identity label of each user according to the image recognition result of the multi-angle image.
In this step, profiling can be performed on every user: whether a new user or a historical user, anyone who appears in the target area and can be captured will be profiled and analyzed. This allows the advertisement analysis database established in this embodiment to reach a large scale with sufficiently rich samples, laying a data foundation for later application development based on the database.
In the present embodiment, as shown in fig. 3, the feature data supplemented in the identity tag includes the user number, gender, age group, wearing style and other features; "other features" denotes identifiable features other than gender, age group and wearing style that are useful for distinguishing user identity.
The age range in the identity label is one of 0-10 years old, 10-20 years old, 20-30 years old, 30-50 years old, 50-70 years old and above 70 years old which are classified according to the image recognition result; the wearing style in the identity tag includes leisure, business, sports, children or elderly. In the embodiment, it is considered that the age has an important influence on the needs of the user, and therefore, the age characteristic is one of the identity characteristics which must be considered. Meanwhile, as the conventional image information collection cannot directly acquire the professional characteristics of the user, the embodiment can roughly divide the occupation or social identity of the user to a certain extent by classifying the wearing style of the user.
Meanwhile, the contents reflected by the other features in the identity tag include whether the user wears glasses, wears a hat, has hair loss, wears lipstick, wears high-heeled shoes, has a beard, wears a wristwatch, and the like; for each such feature, if present, feature data reflecting it is added to the other features, and otherwise nothing is added. The other features in the identity tag are very typical user-distinguishing features that correlate strongly with the consumer needs of different users. For example, women who wear lipstick and high-heeled shoes may pay more attention to advertisements for clothing, cosmetics and the like; users who keep a beard are generally less interested in shavers; and hair-growth products and health products are more likely to interest users with hair loss.
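The presence-only storage rule described above (a feature is added to the tag only if detected) can be sketched as follows; the feature names and the function itself are illustrative assumptions, not identifiers from the patent.

```python
# Hypothetical names for the "other features" listed above; only features
# detected as present are added to the identity tag, per the text.
OTHER_FEATURE_NAMES = ("glasses", "hat", "hair_loss", "lipstick",
                       "high_heels", "beard", "wristwatch")

def other_features(detections):
    """detections maps a feature name to the recognizer's boolean verdict;
    absent or false features are simply not stored in the tag."""
    return {name for name in OTHER_FEATURE_NAMES if detections.get(name)}
```

For instance, a recognizer reporting glasses and lipstick but no beard would yield the tag fragment `{"glasses", "lipstick"}`.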
In fact, after applying more varied feature-extraction techniques, this embodiment can acquire additional types of identity features; the richer the obtained features, the more detailed the feature classification of the user.
Step two, acquiring the characteristic data of the advertisement played currently
1. And acquiring the playing time T of each played advertisement and a keyword data set associated with each advertisement.
The feature data in the keyword data set are a plurality of preset keywords related to the content of the advertisement played currently. The feature data within the keyword dataset for each advertisement includes at least:
(1) keywords reflecting the advertised promotional product.
(2) Keywords that reflect the targeted customer population targeted by the advertisement.
(3) Keywords reflecting the speaker of the advertisement or the character image of the advertisement.
(4) High frequency or special keywords in the ad words.
(5) The duration of the advertisement is classified.
(6) The genre of the advertisement is classified.
In this embodiment, rich keywords are set for each advertisement, and these keywords cover the various types of information a customer can receive from an advertisement. When a user expresses approval of an advertisement, or gives positive feedback on its content, some or all of the features in the advertisement's keyword data set may be deemed objects of the user's attention or preference. Conversely, when a user shows aversion or gives negative feedback on an advertisement, the user may be deemed indifferent or averse to certain features in that advertisement's keyword data set. In this way, when a sufficiently large sample of a user's feedback on different types of advertisements has been collected, the user's preferences can essentially be analyzed and the user's likes portrayed.
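To illustrate the six feature categories listed above, the keyword data set of one advertisement can be sketched as a flat set of strings; the function name and every sample keyword are invented for illustration, not taken from the patent.

```python
def build_keyword_dataset(product, audience, spokesperson, ad_words,
                          duration_class, genre):
    """Collect the preset keywords of one advertisement into a flat
    feature-data set; a set gives de-duplication for free."""
    return {product, audience, spokesperson, duration_class, genre} | set(ad_words)

# Hypothetical advertisement covering all six categories
ad_keywords = build_keyword_dataset(
    product="sports shoes",          # promoted product
    audience="young adults",         # target customer group
    spokesperson="athlete X",        # spokesperson / character image
    ad_words=["lightweight", "breathable"],  # high-frequency ad words
    duration_class="short",          # duration classification
    genre="sports",                  # genre classification
)
```

Representing the set this way makes the later tag operations (supplement, de-duplicate, delete matching features) direct set unions and differences.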
Step three, obtaining feedback data of each user on advertisement playing
1. Acquire the voice stream data generated by all users in the advertisement delivery area during advertisement playing, the surveillance video stream data of all users in the area, and any instruction sent by one or more users in the area requesting a switch of the currently played advertisement.
The mode of the instruction sent by the user for switching the currently played advertisement includes key input, voice interaction and gesture interaction. The voice interaction is realized by identifying a voice keyword which is sent by a user and requires to switch the currently played advertisement; the gesture interaction is realized by identifying a characteristic gesture sent by a user for switching the currently played advertisement; the key input means a key input instruction to request switching of the currently played advertisement, which is input by the user directly through a key.
The voice key words are obtained by a voice recognition algorithm according to real-time voice stream data recognition; the characteristic gestures are obtained by a video motion recognition algorithm according to real-time video stream data; the key input instruction is obtained through an entity switching key module installed on an advertisement playing site.
In this embodiment, the feedback of the user mainly includes the following aspects:
(1) the change in expression when the user views the advertisement.
(2) The user's direct discussion of the advertisement, e.g. talking about an actor or spokesperson in the advertisement, talking about the effect of the product, etc.
(3) Gesture actions made by the user while viewing the advertisement. For example, a user's hand is directed to the advertisement playing device to prompt other users to watch the advertisement, which reflects that the user is interested in the currently playing advertisement.
(4) The time of attention of the user to watch a certain advertisement.
(5) The user requests to switch the currently played advertisement. This directly reflects that the user dislikes the advertisement.
In addition, other types of feedback can be extracted when the technical conditions are mature, and can be applied to later data analysis, such as laughing of the user, characteristic actions in other details, and the like.
2. And judging whether an instruction for switching the currently played advertisement is received, if so, assigning 1 to the characteristic quantity SW reflecting the instruction, and if not, assigning 0 to the SW.
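The three interaction channels and the resulting SW assignment can be sketched as below; the function name and the sample switch words/gestures are assumptions for illustration.

```python
def switch_flag(key_pressed, voice_keywords, gestures,
                switch_words=("switch", "next"), switch_gestures=("wave",)):
    """Return SW = 1 if any channel (key input, recognized voice keyword,
    recognized characteristic gesture) requested an advertisement switch,
    else SW = 0, as described in the step above."""
    if key_pressed:
        return 1
    if any(word in switch_words for word in voice_keywords):
        return 1
    if any(gesture in switch_gestures for gesture in gestures):
        return 1
    return 0
```

Any single channel suffices to set SW, which matches the text's "one or more users" sending the instruction by key, voice, or gesture.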
Step four, calculating the acceptance evaluation value of each user to the current advertisement
1. Perform voice recognition on the voice stream data, extract the keywords matching feature data in the keyword data set, and count their number N_1.
2. Perform video motion recognition on the video stream data; extract the gesture actions by which users feed back on the currently played advertisement, and count their number N_2.
The gesture actions by which a user feeds back on the currently played advertisement include nodding, clapping, pointing a hand at the advertisement playing interface during advertisement playing, and raising or turning the head so that it switches from a non-direct-view state to a direct-view state, among others.
3. Perform video motion recognition on the video stream data; extract the characteristic actions reflecting changes in each user's eye attention position, and from these calculate the attention duration t_n of each user for the currently played advertisement, where n represents the user number of the current user.
The attention duration t_n of the user numbered n for the currently played advertisement is calculated as follows:
$$t_n = \frac{t_{1n} + \left(T - t_{2n} - t_{3n} - t_{4n}\right)}{2}$$
In the above formula, t_{1n} represents the direct-view duration of the user numbered n during the playing of the current advertisement; t_{2n} represents that user's eyes-closed duration; t_{3n} represents that user's head-down duration; and t_{4n} represents that user's head-turned duration during the playing of the current advertisement.
In this embodiment, when counting the attention duration of the user to the advertisement, the duration of the user viewing the advertisement playing interface is considered, and the duration of the user in a non-viewing state is also considered. In the embodiment, the time length determined to belong to the non-attention state is removed, and then the average value of the time length determined to belong to the attention state is obtained, so that the relatively accurate attention time length is obtained.
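One plausible reading of the computation described here — remove the durations judged to be non-attention states, then average the remainder with the direct-view duration — can be sketched as follows; the function name and the numeric example are illustrative assumptions.

```python
def attention_duration(T, t_direct, t_closed, t_down, t_turned):
    """Attention duration t_n: average the direct-view duration with the
    total play time minus the non-attention durations (eyes closed, head
    down, head turned). All durations are in seconds."""
    return (t_direct + (T - t_closed - t_down - t_turned)) / 2.0

# e.g. a 30 s advertisement watched directly for 18 s, with 4 s total of
# non-attention states observed
t_n = attention_duration(T=30, t_direct=18, t_closed=1, t_down=2, t_turned=1)
```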
4. Sample the frame images of the video stream data at intervals according to a sampling frequency, and perform image recognition on the interval-sampled images; extract each user's facial expression and classify it as like, ignore, or dislike; then count the number of each of the three expression classification results for each user, and calculate the proportion of each class in that user's total sample.
5. And acquiring the value of the SW.
6. The acceptance evaluation value E_n of each user for the current advertisement is calculated by the following formula:
$$E_n = (1 - SW)\left[k_1 m_1 N_1 + k_2 m_2 N_2 + k_3\left(a\,p_{1,n} + b\,p_{2,n} + c\,p_{3,n}\right) + k_4 m_3 \frac{t_n}{T}\right]$$
In the above formula, n represents the user number of the current user; E_n represents the recognition evaluation value of the user numbered n for the currently played advertisement, E_n ≥ 0, and a larger value of E_n reflects a higher recognition of the currently played multimedia by the user; t_n/T represents the attention concentration of the user numbered n on the currently played advertisement; k_1 represents the influence factor of voice-information feedback on the overall recognition evaluation result; k_2 represents the influence factor of posture-action feedback on the overall recognition evaluation result; k_3 represents the influence factor of expression feedback on the overall recognition evaluation result; k_4 represents the influence factor of attention concentration on the overall recognition evaluation result; m_1 represents the score of a single keyword in the voice-information feedback; m_2 represents the score of a single gesture in the posture-action feedback; m_3 represents the score of concentration; a represents the score of the "like" expression, and p_{1,n} the proportion of expressions classified as "like" for the user numbered n in the total number of interval-sampled images; b represents the score of the "ignore" expression, and p_{2,n} the proportion classified as "ignore"; c represents the score of the "dislike" expression, and p_{3,n} the proportion classified as "dislike".
In this embodiment, the expression recognition may be completed by a neural network algorithm trained by a large number of samples. Voice recognition, video motion recognition, etc. also have a large number of products that can be directly applied, and for these parts, this embodiment is not described again.
In the embodiment, various types of feedback information made by the user on the played advertisement is extracted from voice stream data and video stream data of the user through the technologies of voice recognition, image recognition and video action recognition, and after the feedback information is quantized by the method provided by the embodiment, an evaluation result reflecting the recognition degree of the user on the current advertisement can be obtained. This result reflects the user's current advertisement's likes and dislikes, which in turn can be used to characterize the user's needs or interests.
Step five, establishing or updating the advertisement analysis database
1. Set a high threshold E_h and a low threshold E_l for E_n, where E_h represents the critical value indicating that the user likes the currently played advertisement, E_l represents the critical value indicating that the user dislikes the currently played advertisement, and E_l > 0.
2. When E_n ≥ E_h and p_{1,n} + p_{2,n} ≥ p_{3,n}, add the feature data in the keyword data set associated with the currently played advertisement to the favorite tag of the current user and de-duplicate the supplemented favorite tag; and delete, from the aversion tag of the current user, any feature data identical to feature data in the keyword data set.
3. When E_n ≤ E_l and p_{2,n} + p_{3,n} ≥ p_{1,n}, add the feature data in the keyword data set associated with the currently played advertisement to the aversion tag of the current user and de-duplicate the supplemented aversion tag; and delete, from the favorite tag of the current user, any feature data matching feature data in the keyword data set.
4. Update the user tag of each user to obtain a new user portrait data set for each user, and create the advertisement analysis database.
As shown in fig. 4, the user portrait data set includes facial feature data and a user tag of a corresponding user.
The most core content in the advertisement analysis database is the content of the favorite tag and the aversion tag obtained by analyzing user behavior; this is the direct data used for analyzing user needs at a later stage. In this embodiment, the user's likes and dislikes, which should be consistent with some or all of the features in the advertisement's keyword data set, can be directly estimated from the user's feedback when viewing the advertisement. Therefore, after each advertisement is played, this embodiment determines the user's actual attitude toward the advertisement through analysis and statistics of the user's feedback information, and then, when specific conditions are met, writes the advertisement's keyword data set into the favorite tag or the aversion tag of the current user as features.
In order to avoid misclassification, the determined user attitudes need to be checked strictly. The determination process of this embodiment therefore introduces special thresholds, determined according to expert experience, as the basis for judging the user's true attitude. In this embodiment, the thresholds E_h and E_l are determined after repeated verification and can thus be highly reliable, ensuring that the final user portrait is accurate and dependable.
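The two decision rules of step five, applied per user, can be sketched as follows; representing the tags as sets makes supplementation, de-duplication and deletion direct set operations. The function name and the threshold values are placeholders, and the real E_h/E_l come from expert experience as described above.

```python
def update_user_tags(E_n, p, keywords, favorite, aversion, E_h=8.0, E_l=2.0):
    """Apply step five's rules for one user: favorite and aversion are sets
    of feature data; set union supplements and de-duplicates in one step,
    set difference deletes matching features from the opposite tag."""
    p1, p2, p3 = p                    # like / ignore / dislike proportions
    if E_n >= E_h and p1 + p2 >= p3:
        favorite |= keywords          # supplement + de-duplicate
        aversion -= keywords          # remove now-contradicted features
    elif E_n <= E_l and p2 + p3 >= p1:
        aversion |= keywords
        favorite -= keywords
    return favorite, aversion
```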
Example 2
The embodiment provides an advertisement accurate delivery method based on user portrait. The accurate delivery method is developed based on the advertisement analysis database in the embodiment 1; as shown in fig. 5, the precise delivery method includes the following steps:
the method comprises the following steps: obtaining user label of current user
1. And acquiring the facial features of each current user in the advertisement delivery area.
2. Sequentially performing facial recognition on each current user, inquiring an advertisement analysis database (namely the advertisement analysis database created in the embodiment 1 or 2) containing user portrait data sets of a plurality of historical users according to the facial recognition result, and making the following judgment:
(1) when the facial features of the current user are matched with the feature data in one historical user facial feature data, all the feature data in the user tags of the historical users are obtained.
(2) And when the facial features of the current user are not matched with the feature data in the facial feature data of all historical users, judging that the current user is a new user, and establishing an empty user label for the new user.
The user portrait data set comprises corresponding facial feature data and user tags of historical users; the user tags include an identity tag, a like tag, and an aversion tag.
3. Acquire a multi-angle image of the newly added user, perform image recognition on it, and supplement the feature data in the newly added user's identity tag according to the recognition result; the supplemented feature data includes the user number, gender, age group, wearing style and other features, where "other features" denotes identifiable features other than gender, age group and wearing style that are useful for distinguishing user identity.
Step two: establishing a target image dataset for a current user group
1. Set a historical-user proportion threshold q_0, and calculate the proportion q of current users in the advertisement delivery area who are identified as historical users within the current user group.
2. Compare q with q_0 and make the following decision according to the result:
(1) and when q is more than or equal to q0, extracting characteristic data in the favorite labels of all historical users, and after the characteristic data are de-duplicated, taking the characteristic data as a target image data set of the current user group.
(2) When q < q_0, extract the feature data in the favorite tags of all identified historical users; and sequentially calculate the coincidence degree Dc1 between the content of each newly added user's identity tag and the content of each historical user's identity tag, where Dc1 is calculated as follows:
$$Dc_1 = \frac{\left|S_{\mathrm{new}} \cap S_{\mathrm{hist}}\right|}{\left|S_{\mathrm{new}} \cup S_{\mathrm{hist}}\right|}$$

where S_new and S_hist denote the feature-data sets in the identity tags of the newly added user and the historical user, respectively.
Then extract the feature data in the favorite tag of the historical user whose coincidence degree Dc1 with each newly added user's identity tag is the largest; merge the two parts of feature data (from the favorite tags of the identified historical users and of the historical users best matching each newly added user's identity tag) and, after de-duplication, use the result as the target portrait data set of the current user group.
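Step two can be sketched as below, assuming Dc1 is a set-overlap (Jaccard) ratio between identity-tag feature sets — an assumption, since the patent supplies the formula only as an image; all function names and dictionary keys are likewise illustrative.

```python
def overlap(set_a, set_b):
    """Set-overlap (Jaccard) ratio, used here as the coincidence degree Dc1."""
    union = set_a | set_b
    return len(set_a & set_b) / len(union) if union else 0.0

def target_portrait(new_user_identity, historical_users):
    """For one newly added user, borrow the favorite tag of the historical
    user whose identity tag has the greatest coincidence Dc1 with it."""
    best = max(historical_users,
               key=lambda u: overlap(new_user_identity, u["identity"]))
    return set(best["favorite"])
```

The per-new-user results would then be unioned with the identified historical users' favorite tags to form the group's target portrait data set.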
Step three: adjusting the playing sequence of the advertisements in the advertisement playing sequence list
1. And acquiring a keyword data set associated with each advertisement in the advertisement playing sequence list, wherein the characteristic data in the keyword data set are a plurality of preset keywords related to the content of the currently played advertisement.
2. Acquire the feature data in the target portrait data set, and calculate the coincidence degree Dc2 between the feature data in the keyword data set associated with each advertisement and the feature data in the target portrait data set; Dc2 is calculated as follows:
$$Dc_2 = \frac{\left|K \cap P\right|}{\left|K \cup P\right|}$$

where K denotes the feature-data set in the keyword data set of an advertisement and P denotes the target portrait data set.
3. Sort the advertisements in the advertisement playing sequence list in descending order of their Dc2 values to obtain the readjusted advertisement playing sequence list.
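Step three amounts to a sort keyed on the overlap between each advertisement's keywords and the group portrait; a minimal sketch, again assuming a Jaccard-style Dc2 and invented dictionary keys:

```python
def reorder_playlist(playlist, portrait):
    """Sort advertisements by descending coincidence Dc2 between each
    advertisement's keyword data set and the target portrait data set."""
    def dc2(ad):
        union = ad["keywords"] | portrait
        return len(ad["keywords"] & portrait) / len(union) if union else 0.0
    return sorted(playlist, key=dc2, reverse=True)
```

Advertisements whose keyword sets best match the current audience thus move to the priority playing positions.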
The method for adjusting the advertisement playing sequence list in the advertisement delivery system provided in the embodiment is mainly based on the following principle and implementation logic:
since the present embodiment has already acquired the data in the advertisement analysis database created in embodiment 1; therefore, when the advertisement is delivered, the face recognition is carried out on all the users in the advertisement delivery area, and whether the users belong to historical users in the advertisement analysis database or newly-added users which are not collected by the advertisement analysis database can be distinguished.
Profiling of historical users has already been achieved in the advertisement analysis database, i.e., their user tags are rich in feature data. When most users in the advertisement delivery area are historical users, the needs and preferences of those historical users can be considered representative of the entire current user group. By acquiring the favorite tags of the corresponding historical users and extracting their feature data, a target portrait data set depicting the preferences or needs of the current user group can be obtained.
When the number of newly added users in the advertisement delivery area reaches a certain level, portrayal can no longer rely on historical users alone. Performing full real-time analysis of these newly added users on the spot is obviously impractical; however, because this implementation can query an advertisement analysis data set with a sufficiently large sample size and sufficiently rich data, it can identify the newly added users (achievable with image recognition technology), compare them with the user tags in the advertisement analysis data set, extract the best-matching historical users, and temporarily use those historical users' tags as the tags of the newly added users, thereby obtaining the features in the newly added users' favorite tags. Since a user's identity characteristics (such as age, height, sex, dress, physiological characteristics) correlate strongly with the user's needs or preferences (the features in the favorite tag), such approximate substitution should have high confidence. Through this technical scheme, the embodiment can obtain the target portrait data set of a user group containing a large number of newly added users.
After the target portrait data set of the user group in the advertisement delivery area is obtained, this embodiment further compares the feature data in the target portrait data set with the keyword data set of each advertisement to be played to find their degree of overlap: the higher the overlap, the more closely the current user group matches the advertisement's target customers, and such advertisements should be moved into priority delivery positions.
In addition, it is specifically noted that, with entirely similar software and hardware, another embodiment can at the same time adjust the advertisement playing sequence list and, based on user feedback on the played advertisements, update the user tags of newly added users, thereby further enriching the data content of the advertisement analysis database.
Example 3
The embodiment provides a method for timely analyzing user requirements in a business district scene, which is further developed on the basis of embodiment 1, and realizes the most direct and rapid prediction or evaluation of the user requirements of a specific user. As shown in fig. 6, the method includes the steps of:
step 1: and acquiring the facial features of the current user in the advertisement delivery area.
Step 2: sequentially performing facial recognition on the current user, inquiring an advertisement analysis database containing user portrait datasets of a plurality of historical users according to the facial recognition result (the advertisement analysis database is the advertisement analysis database created in the embodiment 1), and making the following judgment:
(1) when the facial features of the current user are matched with the feature data in one historical user facial feature data, all the feature data in the user tags of the historical users are obtained.
(2) And when the facial features of the current user are not matched with the feature data in the facial feature data of all historical users, judging that the current user is a new user, and establishing an empty user label for the new user.
The user portrait data set comprises corresponding facial feature data and user tags of historical users; the user tags include an identity tag, a like tag, and an aversion tag.
Step 3: Acquire a multi-angle image of the newly added user, perform image recognition on it, and supplement the feature data in the newly added user's identity tag according to the recognition result; the supplemented feature data includes the user number, gender, age group, wearing style and other features, where "other features" denotes identifiable features other than gender, age group and wearing style that are useful for distinguishing user identity.
Step 4: Compare all the feature data in the identity tag with the identity tags of all historical users in the advertisement analysis database, and calculate the feature coincidence degree Dc3 between them; Dc3 is calculated as follows:
$$Dc_3 = \frac{\left|S_{\mathrm{cur}} \cap S_{\mathrm{hist}}\right|}{\left|S_{\mathrm{cur}} \cup S_{\mathrm{hist}}\right|}$$

where S_cur and S_hist denote the identity-tag feature sets of the current user and a historical user, respectively.
Step 5: Extract the feature data in the favorite tag and the aversion tag of the historical user in the advertisement analysis database whose feature coincidence degree Dc3 with the current user is the largest, fill that feature data into the user portrait data set of the newly added user, and complete the timely analysis of the current user's needs.
Analysis of the above process shows that the method of this embodiment can analyze and identify a user before the user leaves the scene, establishing an estimated portrait data set of features and behaviors and predicting the objects the user likes and dislikes; based on such predictions, timely analysis of user needs is achieved. This analysis method is timely and effective and does not require long-term tracking and evaluation of the user, so it has high practical value. It should be noted that the accuracy of the timely analysis result is strongly correlated with the sample size of the advertisement analysis database containing the user portrait data sets of many historical users: the larger the sample size, the more accurate the result.
The logic of the method of this embodiment is as follows. First, obtain the facial features of a user appearing in a specific scene and determine whether a data sample for this user is already recorded in the advertisement analysis database. If so, directly extract the contents of the favorite tag and aversion tag recorded for this user in the database and use them as the user's portrait dataset, from which the user's needs are analyzed and predicted. If no data sample for the user is contained in the database, extract the user's identity features, then take the favorite tag and aversion tag of the historical user in the database whose identity features are most similar to the current user's (as determined by Dc3) and use them as the current user's portrait dataset, from which the user's needs are analyzed.
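The cold-start logic described above can be sketched as follows. This is a minimal illustration, assuming a simple set-overlap ratio as a stand-in for Dc3 (whose exact formula appears in the source only as an image); all tag values are invented examples.

```python
# Sketch of Embodiment 3, steps 3-5: match a new user against historical
# users by identity-feature coincidence, then borrow the best match's tags.

def coincidence_dc3(new_features: set, hist_features: set) -> float:
    """Assumed stand-in for the feature coincidence degree Dc3
    (overlap ratio of the two identity-feature sets)."""
    if not new_features or not hist_features:
        return 0.0
    return len(new_features & hist_features) / len(new_features | hist_features)

def borrow_portrait(new_user: dict, history: list) -> dict:
    """Copy the favorite/aversion tags of the most similar historical user."""
    best = max(history, key=lambda h: coincidence_dc3(
        set(new_user["identity_tag"]), set(h["identity_tag"])))
    new_user["favorite_tag"] = set(best["favorite_tag"])
    new_user["aversion_tag"] = set(best["aversion_tag"])
    return new_user

history = [
    {"identity_tag": {"male", "20-30", "casual", "glasses"},
     "favorite_tag": {"sports", "sneakers"}, "aversion_tag": {"cosmetics"}},
    {"identity_tag": {"female", "30-50", "business"},
     "favorite_tag": {"luxury"}, "aversion_tag": {"games"}},
]
new_user = {"identity_tag": {"male", "20-30", "casual"},
            "favorite_tag": set(), "aversion_tag": set()}
print(sorted(borrow_portrait(new_user, history)["favorite_tag"]))
# ['sneakers', 'sports']
```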
Example 4
On the basis of embodiment 2, this embodiment provides a method for evaluating the recognition degree of a user for an advertisement based on feature recognition, as shown in fig. 7, the method includes the following steps:
Step 1: obtain the feature data of the currently played advertisement.
Acquire the playing duration T of each played advertisement and the keyword dataset associated with each advertisement.
The feature data in the keyword dataset are a number of preset keywords related to the content of the currently played advertisement. The feature data in each advertisement's keyword dataset include at least:
(1) keywords reflecting the advertised promotional product.
(2) Keywords that reflect the targeted customer population targeted by the advertisement.
(3) Keywords reflecting the speaker of the advertisement or the character image of the advertisement.
(4) High frequency or special keywords in the ad words.
(5) The duration classification of the advertisement.
(6) The genre classification of the advertisement.
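The six keyword categories above can be represented with a simple mapping; all concrete values below are invented examples, not taken from the source.

```python
# Illustrative keyword dataset for one advertisement, following the six
# categories listed above.
ad_keywords = {
    "product": ["running shoes"],
    "target_customers": ["young adults", "sports enthusiasts"],
    "spokesperson": ["athlete"],
    "high_freq_words": ["lightweight", "breathable"],
    "duration_class": "15s",
    "genre_class": "sports",
}

# Flattened into the feature-data set used later for tag matching:
features = {w for v in ad_keywords.values()
            for w in (v if isinstance(v, list) else [v])}
print(len(features))  # 8
```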
Step 2: obtain each user's feedback data on the advertisement playback.
1. Acquire the voice stream data generated by all users in the advertisement delivery area while the advertisement is playing, the monitoring video stream data of all users in the area, and any instruction issued by one or more users in the area requesting that the currently played advertisement be switched.
A user may issue an instruction to switch the currently played advertisement via key input, voice interaction, or gesture interaction. Voice interaction is realized by recognizing a voice keyword uttered by the user requesting the switch; gesture interaction is realized by recognizing a characteristic gesture made by the user requesting the switch; key input means an instruction to switch the currently played advertisement entered by the user directly through a physical key.
The voice keywords are recognized from real-time voice stream data by a speech recognition algorithm; the characteristic gestures are recognized from real-time video stream data by a video motion recognition algorithm; the key input instruction is obtained through a physical switch-key module installed at the advertisement playing site.
In this embodiment, the feedback of the user mainly includes the following aspects:
(1) the change in expression when the user views the advertisement.
(2) Direct discussion of the advertisement by the user, for example discussing an actor or spokesperson in the advertisement, or discussing the effect of the product.
(3) Gesture actions made by the user while viewing the advertisement. For example, a user pointing a hand at the advertisement playing device to alert other users reflects that the user is interested in the currently playing advertisement.
(4) The time of attention of the user to watch a certain advertisement.
(5) The user requests to switch the currently played advertisement. This directly reflects that the user dislikes the advertisement.
In addition, other types of feedback, such as a user's laughter or other characteristic detail actions, can be extracted as the technology matures and applied in later data analysis.
2. Determine whether an instruction to switch the currently played advertisement has been received; if so, assign the characteristic quantity SW reflecting this instruction the value 1, otherwise assign SW the value 0.
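The switch-instruction flag SW described above can be sketched as a small function over the three input channels. The recognizer outputs are assumed simplified inputs, and the switch vocabularies are invented placeholders.

```python
# Minimal sketch of the SW flag: SW = 1 if any user issued a switch request
# through key press, voice, or gesture; otherwise SW = 0.
def switch_flag(key_pressed: bool, voice_keywords: list, gestures: list) -> int:
    switch_words = {"switch", "next", "change"}    # assumed voice vocabulary
    switch_gestures = {"swipe_left", "wave_away"}  # assumed gesture labels
    received = (key_pressed
                or any(w in switch_words for w in voice_keywords)
                or any(g in switch_gestures for g in gestures))
    return 1 if received else 0

print(switch_flag(False, ["next"], []))  # 1
print(switch_flag(False, ["nice"], []))  # 0
```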
Step 3: calculate each user's acceptance evaluation value for the current advertisement.
1. Perform speech recognition on the voice stream data, extract the keywords matching the feature data in the keyword dataset, and count their number N1.
2. Perform video motion recognition on the video stream data, extract the gesture actions by which each user gives feedback on the currently played advertisement, and count their number N2.
The gesture actions by which a user gives feedback on the currently played advertisement include nodding, clapping, pointing a hand at the advertisement playing interface during playback, and raising or turning the head from a non-direct-view state into a direct-view state, among others.
3. Perform video motion recognition on the video stream data; extract the characteristic actions reflecting changes in each user's eye attention position, and from these calculate each user's attention duration tn for the currently played advertisement, where n denotes the user number of the current user.
The attention duration tn of the user numbered n for the currently played advertisement is calculated as follows:
[Formula for tn — present in the source only as an image and not reproduced here.]
In the above formula, t1n denotes the direct-view duration of the user numbered n during playback of the current advertisement; t2n denotes the eyes-closed duration of that user during playback; t3n denotes the head-down duration of that user during playback; t4n denotes the head-turned duration of that user during playback.
In this embodiment, when counting a user's attention duration for the advertisement, both the time the user spends viewing the advertisement playing interface and the time the user spends in a non-viewing state are considered. The durations determined to belong to the non-attention state are removed, and the average of the durations determined to belong to the attention state is then taken approximately, yielding a relatively accurate attention duration.
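The exact formula for tn appears in the source only as an image. Following the verbal description above, one plausible sketch removes the durations known to be non-attention and averages the resulting lower and upper bounds on attention time; this is an assumption, not the patent's verbatim formula.

```python
def attention_duration(T: float, t1: float, t2: float, t3: float, t4: float) -> float:
    """Approximate attention duration tn for one user.

    T  : total play time of the advertisement
    t1 : direct-view duration (definitely attending)
    t2 : eyes-closed, t3: head-down, t4: head-turned (definitely not attending)
    """
    upper = T - (t2 + t3 + t4)  # everything not provably non-attention
    lower = t1                  # provably attending
    return (lower + upper) / 2  # approximate average of the two bounds

print(attention_duration(30, 18, 2, 4, 2))  # (18 + 22) / 2 = 20.0
```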
4. Sample the frame images of the video stream data at a set sampling frequency; perform image recognition on the sampled frames; extract each user's facial expression and classify it as liked, ignored, or disliked; then count, for each user, the number of each of the three expression classification results, and compute each class's proportion of that user's total sample.
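The per-class expression proportions used below (p1,n liked, p2,n ignored, p3,n disliked) can be computed directly from the classifier's per-frame outputs. The expression classifier itself is assumed; here its outputs are given as ready-made labels.

```python
from collections import Counter

# Sketch of step 4: count each expression class over the sampled frames of
# one user and compute its proportion of the total sample.
frames = ["liked", "ignored", "liked", "disliked", "liked",
          "ignored", "liked", "liked", "ignored", "liked"]
counts = Counter(frames)
total = len(frames)
p1, p2, p3 = (counts["liked"] / total,
              counts["ignored"] / total,
              counts["disliked"] / total)
print(p1, p2, p3)  # 0.6 0.3 0.1
```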
5. Acquire the value of SW.
6. Calculate each user's acceptance evaluation value En for the current advertisement using the following formula:
[Formula for En — present in the source only as an image and not reproduced here.]
In the above formula, n denotes the user number of the current user; En denotes the evaluation value of the currently played advertisement by the user numbered n, with En ≥ 0 and larger values of En reflecting higher user approval of the currently played multimedia; the attention-concentration term denotes the attention concentration of the user numbered n on the currently played advertisement; k1 denotes the influence factor of voice-information feedback on the overall evaluation result; k2 denotes the influence factor of gesture-action feedback; k3 denotes the influence factor of expression feedback; k4 denotes the influence factor of attention concentration; m1 denotes the score of a single keyword in the voice-information feedback; m2 denotes the score of a single gesture in the gesture-action feedback; m3 denotes the score of concentration; a denotes the score of a liked expression, with p1,n the proportion of expressions classified as liked for the user numbered n among all frame-sampled images; b denotes the score of an ignored expression, with p2,n the corresponding proportion classified as ignored; c denotes the score of a disliked expression, with p3,n the corresponding proportion classified as disliked.
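Since the formula for En appears in the source only as an image, the sketch below assumes a plausible weighted-sum form built from the terms enumerated above (voice keywords, gestures, expression proportions, attention concentration, and the switch flag SW). All weights and scores are invented defaults, not values from the patent.

```python
def acceptance_value(sw, n1, n2, p1, p2, p3, t_n, T,
                     k=(1.0, 1.0, 1.0, 1.0),   # assumed influence factors k1..k4
                     m=(1.0, 1.0, 10.0),       # assumed scores m1..m3
                     a=5.0, b=1.0, c=0.0):     # assumed expression scores
    """Assumed weighted-sum form of En; zeroed when a switch was requested."""
    k1, k2, k3, k4 = k
    m1, m2, m3 = m
    score = (k1 * m1 * n1                       # voice keyword feedback
             + k2 * m2 * n2                     # gesture feedback
             + k3 * (a * p1 + b * p2 + c * p3)  # expression feedback
             + k4 * m3 * (t_n / T))             # attention concentration tn/T
    return 0.0 if sw else max(0.0, score)       # keeps En >= 0

# An engaged user who did not request a switch:
v = acceptance_value(sw=0, n1=2, n2=1, p1=0.6, p2=0.3, p3=0.1, t_n=20.0, T=30.0)
print(round(v, 2))  # 12.97
```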
The method provided by this embodiment can recognize multiple types of feedback features from the feedback a user gives while an advertisement plays, and from them obtain the user's acceptance evaluation of the advertisement. Because it captures many types of user feedback, the resulting acceptance evaluation is more accurate and can serve as a basis for evaluating advertisement delivery effectiveness.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A system for creating an advertisement analysis database, characterized in that: the creation system is used for analyzing users' preferences according to their identity features and their feedback on different advertisements, and thereby establishing a database containing each user's identity features, favorite features and aversion features; the creation system comprises:
a historical user query module for querying the advertisement analysis database to extract the user portrait datasets of the historical users collected therein; each user portrait dataset comprises a historical user's facial feature data and user labels, the user labels comprising an identity tag, a favorite tag and an aversion tag;
the advertisement feature data extraction module for extracting, when an advertisement is played by an advertisement delivery system, the playing duration T of the advertisement and the keyword dataset associated with it;
the user feedback data extraction module for acquiring the voice stream data and video stream data generated by users watching the advertisement, and any switching instruction requesting that the played advertisement be switched;
the face recognition module is used for extracting the face features of each user watching the advertisement, completing the comparison process of the face features of the current user and the face features of each historical user in the advertisement analysis database, and distinguishing the newly added user from the historical users;
the image recognition module for performing image recognition on the image dataset obtained by framing the video stream data, so as to obtain: (1) the identity feature data of each user; and (2) the expression of each user during advertisement playback, wherein p1,n is the proportion of expressions classified as liked for the user numbered n among all frame-sampled images, p2,n is the proportion classified as ignored, and p3,n is the proportion classified as disliked;
a voice recognition module for performing voice recognition on the voice stream data;
the video motion recognition module is used for carrying out video motion recognition on the video stream data;
the user label establishing module is used for establishing an empty user label for each newly added user identified by the face identification module and supplementing each item of characteristic data which is obtained by the image identification module and reflects the identity characteristics of the newly added user into the identity label of the corresponding user;
an acceptance evaluation value calculation module for calculating each user's acceptance evaluation value En for the current advertisement; and
a database creation module to:
(1) based on expert experience, setting a high threshold Eh and a low threshold El for En; wherein Eh denotes the critical value above which the user likes the currently played advertisement, El denotes the critical value below which the user dislikes the currently played advertisement, and El > 0;
(2) the following judgments and decisions are made for each user:
(i) when En ≥ Eh and p1,n + p2,n ≥ p3,n, adding the feature data in the keyword dataset associated with the currently played advertisement to the favorite tag of the current user and de-duplicating the feature data of the supplemented favorite tag; and deleting from the aversion tag of the current user any feature data identical to feature data in the keyword dataset;
(ii) when En ≤ El and p2,n + p3,n ≥ p1,n, adding the feature data in the keyword dataset associated with the currently played advertisement to the aversion tag of the current user and de-duplicating the feature data of the supplemented aversion tag; and deleting from the favorite tag of the current user any feature data matching feature data in the keyword dataset;
(3) updating the user label of each user in turn to obtain a new user portrait dataset for each user, thereby completing the creation or updating of the advertisement analysis database.
2. The advertisement analysis database creation system according to claim 1, characterized in that: the specific functions of the user feedback data extraction module comprise:
(1) acquiring voice information generated by users watching the advertisements in an advertisement delivery area when the advertisement delivery system plays the advertisements to obtain voice stream data related to each advertisement;
(2) when the advertisement delivery system plays the advertisement, acquiring multi-angle monitoring videos of all users watching the advertisement in an advertisement delivery area to obtain video stream data related to each advertisement;
(3) acquiring, when the advertisement delivery system plays the advertisement, any switching instruction issued by a user watching the advertisement, the switching instruction comprising a key input instruction, a voice interaction instruction or a gesture interaction instruction; and assigning the characteristic quantity SW representing the switching instruction the value 1 when such an instruction is acquired, and 0 otherwise.
3. The advertisement analysis database creation system according to claim 2, characterized in that: the specific functions of the speech recognition module include:
(1) acquiring the voice interaction instruction which is sent by a user during the advertisement playing and represents that the currently played advertisement is required to be switched;
(2) extracting all words in the voice stream data and finding the keywords that match the feature data in the keyword dataset.
4. A system for creating an advertisement analysis database according to claim 3, wherein: the specific functions of the video motion recognition module include:
(1) extracting the gesture interaction instruction which is sent by a certain user in the video stream data and represents that the currently played advertisement is required to be switched;
(2) extracting gesture actions which are sent out by a certain user and used for feeding back the currently played advertisement in the video stream data;
(3) extracting the characteristic actions reflecting changes in a user's eye attention position during playback of the current advertisement.
5. The advertisement analysis database creation system according to claim 4, characterized in that: the acceptance evaluation value calculation module has the specific functions of:
(1) acquiring the keywords, recognized from the voice stream data by the voice recognition module, that match the feature data in the keyword dataset, and counting their number N1;
(2) acquiring the gesture actions, recognized by the video motion recognition module, that reflect user feedback on the currently played advertisement, and counting their number N2;
(3) acquiring the characteristic actions, recognized by the video motion recognition module, that reflect changes in a user's eye attention position during playback of the current advertisement, and calculating from them the attention duration tn of the current user for the currently played advertisement;
(4) acquiring the numbers of the three expression classification results for each user identified by the image recognition module, and calculating the proportion of each class in each user's total sample;
(5) acquiring the value of SW;
(6) calculating each user's acceptance evaluation value En for the current advertisement using the following formula:
[Formula for En — present in the source only as an image and not reproduced here.]
In the above formula, n denotes the user number of the current user; En denotes the evaluation value of the currently played advertisement by the user numbered n, with En ≥ 0 and larger values of En reflecting higher user approval of the currently played multimedia; the attention-concentration term denotes the attention concentration of the user numbered n on the currently played advertisement; k1 denotes the influence factor of voice-information feedback on the overall evaluation result; k2 denotes the influence factor of gesture-action feedback; k3 denotes the influence factor of expression feedback; k4 denotes the influence factor of attention concentration; m1 denotes the score of a single keyword in the voice-information feedback; m2 denotes the score of a single gesture in the gesture-action feedback; m3 denotes the score of concentration; a denotes the score of a liked expression, with p1,n the proportion of expressions classified as liked for the user numbered n among all frame-sampled images; b denotes the score of an ignored expression, with p2,n the corresponding proportion classified as ignored; c denotes the score of a disliked expression, with p3,n the corresponding proportion classified as disliked.
6. The advertisement analysis database creation system according to claim 5, characterized in that: the formula by which the acceptance evaluation value calculation module calculates the attention duration tn of the user numbered n for the currently played advertisement is as follows:
[Formula for tn — present in the source only as an image and not reproduced here.]
in the above formula, t1n denotes the direct-view duration of the user numbered n during playback of the current advertisement; t2n denotes the eyes-closed duration of that user during playback; t3n denotes the head-down duration of that user during playback; t4n denotes the head-turned duration of that user during playback.
7. The advertisement analysis database creation system according to claim 1, characterized in that: the feature data in the keyword data set of each advertisement extracted by the advertisement feature data extraction module at least comprises the following steps:
(1) keywords reflecting the advertised promotional product;
(2) keywords reflecting targeted customer groups targeted by the advertisement;
(3) keywords reflecting a speaker of the advertisement or a character image of the advertisement;
(4) high frequency or special keywords in the ad;
(5) the duration classification of the advertisement;
(6) the genre classification of the advertisement.
8. The advertisement analysis database creation system according to claim 1, characterized in that: the feature data in the identity tag comprises user number, gender, age bracket, wearing style and other features; the other features represent identifiable non-gender, age group, and wear style features useful for distinguishing user identity features; the age range in the identity label is one of 0-10 years old, 10-20 years old, 20-30 years old, 30-50 years old, 50-70 years old and above 70 years old which are classified according to the image recognition result; the wearing style in the identity tag includes leisure, business, sports, children or elderly.
9. The advertisement analysis database creation system according to claim 8, characterized in that: the contents reflected by the other features comprise whether glasses are worn, whether a hat is worn, whether the user has hair loss, whether lipstick is worn, whether high-heeled shoes are worn, whether the user has a beard, and whether a wristwatch is worn; for each of the above features, if the feature is present, feature data reflecting it is added to the other features, and otherwise it is not added.
10. The advertisement analysis database creation system according to claim 1, characterized in that: the advertisement analysis database is empty at the beginning of its creation; after the user portrait dataset of a first historical user has been entered into the database, the creation system determines whether the current user is a newly added user or a historical user by comparing the current user's facial features with those of each historical user in the database; the user portrait dataset of a user identified as newly added is entered into the database, while for a user identified as historical the user labels in that user's existing portrait dataset are updated.
CN202110686157.4A 2021-06-21 2021-06-21 Advertisement analysis database creation system Withdrawn CN113469737A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110686157.4A CN113469737A (en) 2021-06-21 2021-06-21 Advertisement analysis database creation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110686157.4A CN113469737A (en) 2021-06-21 2021-06-21 Advertisement analysis database creation system

Publications (1)

Publication Number Publication Date
CN113469737A true CN113469737A (en) 2021-10-01

Family

ID=77868931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110686157.4A Withdrawn CN113469737A (en) 2021-06-21 2021-06-21 Advertisement analysis database creation system

Country Status (1)

Country Link
CN (1) CN113469737A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116823352A (en) * 2023-07-14 2023-09-29 菏泽学义广告设计制作有限公司 Intelligent advertisement design system based on remote real-time interaction
CN116797282A (en) * 2023-08-28 2023-09-22 成都一心航科技有限公司 Real-time monitoring system and monitoring method for advertisement delivery
CN116797282B (en) * 2023-08-28 2023-10-27 成都一心航科技有限公司 Real-time monitoring system and monitoring method for advertisement delivery

Similar Documents

Publication Publication Date Title
CN113393275B (en) Intelligent medium management system based on VOC (volatile organic compound) vehicle owner big data platform
CN113379460A (en) Advertisement accurate delivery method based on user portrait
CN108876526B (en) Commodity recommendation method and device and computer-readable storage medium
CN106920129B (en) Eye tracking-based network advertisement effect evaluation system and method
KR101197978B1 (en) Laugh detector and system and method for tracking an emotional response to a media presentation
CN106971317A (en) The advertisement delivery effect evaluation analyzed based on recognition of face and big data and intelligently pushing decision-making technique
US8615434B2 (en) Systems and methods for automatically generating campaigns using advertising targeting information based upon affinity information obtained from an online social network
KR101385700B1 (en) Method and apparatus for providing moving image advertisements
US8612293B2 (en) Generation of advertising targeting information based upon affinity information obtained from an online social network
CN107146096B (en) Intelligent video advertisement display method and device
CN113435924B (en) VOC car owner cloud big data platform
CN107305557A (en) Content recommendation method and device
US20030126013A1 (en) Viewer-targeted display system and method
CN113469737A (en) Advertisement analysis database creation system
US20040001616A1 (en) Measurement of content ratings through vision and speech recognition
US20130218678A1 (en) Systems and methods for selecting and generating targeting information for specific advertisements based upon affinity information obtained from an online social network
KR102118042B1 (en) Method for advertisement design based on artificial intelligence and apparatus for using the method
CN104573619A (en) Method and system for analyzing big data of intelligent advertisements based on face identification
WO2021031600A1 (en) Data collection method and apparatus, computer device, and storage medium
CN112598438A (en) Outdoor advertisement recommendation system and method based on large-scale user portrait
CN108876430B (en) Advertisement pushing method based on crowd characteristics, electronic equipment and storage medium
CN108804577B (en) Method for estimating interest degree of information tag
CN110597987A (en) Search recommendation method and device
CN113377327A (en) Huge curtain MAX intelligent terminal in garage that possesses intelligent voice interaction function
CN110766454A (en) Method for collecting customer visit information of store and store subsystem architecture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20211001