CN113535700A - User information updating method for digital audio-visual place and computer readable storage medium - Google Patents

User information updating method for digital audio-visual place and computer readable storage medium

Info

Publication number
CN113535700A
Authority
CN
China
Prior art keywords
user
data
user information
behavior data
updating method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110811297.XA
Other languages
Chinese (zh)
Inventor
郑智勇
汤周文
陈丹明
刘旺
林剑宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Kaimi Network Science & Technology Co ltd
Original Assignee
Fujian Kaimi Network Science & Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Kaimi Network Science & Technology Co ltd filed Critical Fujian Kaimi Network Science & Technology Co ltd
Priority to CN202110811297.XA priority Critical patent/CN113535700A/en
Publication of CN113535700A publication Critical patent/CN113535700A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40: Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43: Querying
    • G06F 16/435: Filtering based on additional data, e.g. user or group profiles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/21: Design, administration or maintenance of databases
    • G06F 16/215: Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to a method for updating user information in a digital audio-visual place and a computer-readable storage medium. The method comprises the following steps: collecting user behavior data of a digital audio-visual place and generating a training data set from the user behavior data, the training data set comprising the user behavior data and the user information of the corresponding users; analyzing the training data set to generate an estimation model; and inputting the behavior data of a user to be estimated into the estimation model to obtain that user's information. With the estimation model generated from the training data set, the invention can infer the user information of other users with the same or similar behavior data, greatly enriching the user information and providing strong data support for generating user portraits.

Description

User information updating method for digital audio-visual place and computer readable storage medium
Technical Field
The present invention relates to the field of digital audiovisual place technology, and in particular, to a method for updating user information in a digital audiovisual place and a computer-readable storage medium.
Background
A user portrait is a set of user information tags; it can be understood as useful labels distilled from massive amounts of information. Users can be divided into different types according to differences in their goals, behaviors and viewpoints; common features are then extracted from each type and classification labels are assigned, so that the prototype of a type of user, namely the user portrait, can be described.
Owing to the characteristics of the industry, digital audio-visual places lack reliable access to users' basic information, which makes it difficult to generate and update user portraits.
Disclosure of Invention
Therefore, it is necessary to provide a method for updating user information in a digital audiovisual place to solve the technical problem in the prior art that user information in the digital audiovisual field is difficult to obtain.
In order to achieve the above object, the inventor provides a method for updating user information in a digital audiovisual place, comprising the following steps:
collecting user behavior data of a digital audio-visual place, and generating a training data set according to the user behavior data, wherein the training data set comprises the user behavior data and user information of a corresponding user;
analyzing the training data set to generate an estimation model;
and inputting the behavior data of the user to be estimated into the estimation model for estimation to obtain the user information of the user to be estimated.
Further, the analyzing the training data set to generate the estimation model comprises the following steps:
calculating the weight of the user behavior data in the training data set influencing the user information;
generating the estimation model according to the weight of each of the user behavior data.
Further, the weight is represented by a TF-IDF value;
tf-idf_{i,j} = (n_{i,j} / Σ_k n_{k,j}) × log( |D| / |{x : t_i ∈ d_x}| )
further, the estimation model is generated by calculating the weight of each user behavior data by using a logistic regression algorithm.
Further, the logistic regression algorithm is as follows:
y=wx+b;
p = 1 / (1 + e^(-y));
wherein x is the TF-IDF value, w and b are the parameters to be solved, and p is the probability value.
Further, the method also comprises the following steps:
and updating the user portrait of the corresponding user according to the user information obtained by the estimation of the estimation model.
Further, the user behavior data comprises any one or more of recorded sound data, song ordering data and social data; the user behavior data can be collected through a song requesting device or a mobile terminal connected with the song requesting device.
Further, the user information includes any one or more of age and gender.
Further, after the user behavior data of the digital audio-visual place is collected, the user behavior data is preprocessed, and the preprocessing comprises any one or a combination of: deleting duplicate data, deleting format-error data, deleting content-error data, deleting logic-error data, deleting incomplete data, and data association verification.
In order to solve the above technical problem, the present invention further provides another technical solution:
a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements a digital audiovisual place user information updating method according to any of the above claims.
Different from the prior art, the technical scheme collects user behavior data of a digital audio-visual place and generates a training data set according to the user behavior data, wherein the training data set comprises the user behavior data and user information of a corresponding user; analyzing the training data set to generate an estimation model; and inputting the behavior data of the user to be estimated into the estimation model for estimation to obtain the user information of the user to be estimated. Therefore, the estimation model generated by the training data set can reversely predict the user information of other users with the same or similar behavior data, thereby greatly enriching the user information and providing powerful data support for generating the user portrait.
Drawings
FIG. 1 is a flowchart illustrating steps of a method for updating user information of a digital audiovisual station according to an embodiment of the present invention;
FIG. 2 is a flow diagram of behavioral data preprocessing according to an embodiment;
FIG. 3 is a flow diagram of an embodiment for generating an estimation model;
FIG. 4 is a flowchart illustrating steps of a method for updating user information of a digital audiovisual station according to an embodiment of the present invention;
FIG. 5 is a block diagram of a computer-readable storage medium in accordance with the detailed description.
Description of reference numerals:
500. a computer-readable storage medium;
Detailed Description
To explain technical contents, structural features, and objects and effects of the technical solutions in detail, the following detailed description is given with reference to the accompanying drawings in conjunction with the embodiments.
Referring to fig. 1 to 4, the present embodiment provides a user information updating method for a digital audio-visual place. The method acquires the behavior data of users in the digital audio-visual place through a song-requesting device and/or a mobile terminal such as a mobile phone, the behavior data including authorized data such as song requests, recordings and social activity; acquires user information such as age and gender through registration authorization; and combines the collected user behavior data and user information to generate a training data set. The training data set is analyzed to generate an estimation model; the collected behavior data of a user to be estimated is then input into the estimation model, which infers that user's information. With this user information updating method, the user information of other users with the same or similar behavior data can be inferred, greatly enriching the user information and providing strong data support for generating user portraits.
As shown in fig. 1, in one embodiment, the method for updating user information of a digital audiovisual place includes the following steps:
s101, collecting user behavior data of a digital audio-visual place, and generating a training data set according to the user behavior data, wherein the training data set comprises the user behavior data and user information of a corresponding user;
S102, analyzing the training data set to generate an estimation model;
S103, inputting the behavior data of the user to be estimated into the estimation model for estimation, and obtaining the user information of the user to be estimated.
In step S101, the digital audio-visual place may be a venue with a multimedia on-demand system, such as a KTV or a bar. The user behavior data of the digital audio-visual place may include any one or more of song-requesting data, recording data and social data, and is collected with the user's authorization. The user information is the user's basic information and includes any one or more of age and gender; it is likewise acquired with the user's authorization.
To facilitate data management, in one embodiment, after the user behavior data of the digital audio-visual place is collected, the user behavior data is preprocessed. The preprocessing includes any one or more of deleting duplicate data, deleting format-error data, deleting content-error data, deleting logic-error data, deleting incomplete data, and data association verification. Preprocessing yields formatted data, which may include the user ID, recording files, song information, singer information and box user information. The formatted data can be stored in a storage system; because the amount of user behavior data is large, it can be stored in a distributed file storage system.
As shown in fig. 2, in an embodiment, the preprocessing the user behavior data sequentially includes:
S201, deleting duplicate data (namely, cleaning the duplicate data);
S202, deleting incomplete data (namely, cleaning the missing data);
S203, deleting format-error data (namely, cleaning the format-error data);
S204, deleting content-error data (namely, cleaning the content-error data);
S205, deleting logic-error data (namely, cleaning the logic-error data);
and S206, data association verification. Because there may be multiple data sources, association verification checks whether the related data of the same user is consistent across sources; if not, the data needs to be adjusted or removed. Association verification removes interference noise from the data and thereby improves its accuracy.
Preprocessing the user behavior data yields formatted user behavior data that is convenient to store and retrieve; preprocessing also filters out useless missing data and erroneous data, improving the reliability of the data.
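To make the cleaning flow concrete, the following is a minimal sketch of steps S201 to S206 in Python; the pandas DataFrame and the column names (user_id, behavior_type, timestamp, reported_gender) are illustrative assumptions, not details taken from this disclosure.

```python
# A minimal sketch of the cleaning steps S201-S206, assuming the behavior records
# arrive as a pandas DataFrame with a datetime "timestamp" column; all column
# names here are illustrative.
import pandas as pd

def preprocess_behavior_data(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates()                             # S201: duplicate data
    df = df.dropna(subset=["user_id", "behavior_type"])   # S202: incomplete data
    df = df[df["user_id"].astype(str).str.isdigit()]      # S203: format errors
    df = df[df["timestamp"] <= pd.Timestamp.now()]        # S204/S205: content/logic errors
    # S206: association verification - discard users whose records from different
    # sources report inconsistent attributes
    per_user = df.groupby("user_id")["reported_gender"].nunique(dropna=True)
    consistent_ids = per_user[per_user <= 1].index
    return df[df["user_id"].isin(consistent_ids)]
```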
The preprocessed user behavior data is associated with the user information of the corresponding user to generate a training data set. The training data set stores the data (including the user information and the user behavior data) of users whose user information is complete or relatively complete; the user behavior data and the user information are stored in correspondence through association labels such as the user ID, so that when the data of a certain user in the training data set needs to be retrieved, only the corresponding user ID has to be entered. The training data set serves as the standard data for training the estimation model.
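As an illustration of how the training data set could be assembled, the sketch below joins cleaned behavior data with authorized user information through the user ID; the column names are assumptions.

```python
# A sketch of assembling the training data set: cleaned behavior data is joined
# with the authorized user information through the user ID label, keeping only
# users whose basic information is complete. Column names are assumptions.
import pandas as pd

def build_training_set(behavior_df: pd.DataFrame, user_info_df: pd.DataFrame) -> pd.DataFrame:
    complete_info = user_info_df.dropna(subset=["age", "gender"])   # complete-information users
    # associate behavior records with user information via the user ID
    return behavior_df.merge(complete_info, on="user_id", how="inner")
```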
In step S102, an estimation model is generated by analyzing the training data set. The estimation model can estimate user information such as the age and gender of a user from the input user behavior data.
In step S103, behavior data of the user to be estimated is input into the estimation model for estimation, so as to obtain user information of the user to be estimated. The embodiment can reversely predict the user information of other users with the same or similar behavior data in the digital audio-visual place through the estimation model, thereby greatly enriching the user information and providing powerful data support for generating the user portrait.
In one embodiment, the analyzing the training data set to generate the estimation model comprises:
calculating the weight of the user behavior data in the training data set influencing the user information;
generating the estimation model according to the weight of each of the user behavior data.
By analyzing the training data set, the estimation model can comprehensively calculate the influence of each weight on the user information, and finally estimate the user information of the user.
In one embodiment, the weights are represented by TF-IDF values, which indicate how important the i-th word is to documents of class d_j. The TF-IDF value is calculated as follows: first, calculate according to formula 1;
tf_{i,j} = n_{i,j} / Σ_k n_{k,j} (formula 1)
where i denotes the position of the word in the bag of words, j denotes the text category, n_{i,j} denotes the number of times the i-th word in the bag appears in the j-th document, and Σ_k n_{k,j} denotes the total number of terms in that document;
calculating according to formula 2;
idf_i = log( |D| / |{x : t_i ∈ d_x}| ) (formula 2)
where |D| represents the total number of documents in the text set, {m : t_i ∈ d_j} denotes the number of documents that contain the word and belong to the j-th class, and {x : t_i ∈ d_x} denotes the number of all documents containing the word;
calculating according to formula 3;
tf-idf_{i,j} = tf_{i,j} × idf_i (formula 3)
thereby obtaining the calculation formula of the TF-IDF value:
tf-idf_{i,j} = (n_{i,j} / Σ_k n_{k,j}) × log( |D| / |{x : t_i ∈ d_x}| )
Applying the calculation formula of the TF-IDF value to this embodiment yields the weight of each item of behavior data, with the text that records the behavior treated as terms and the behavior records in the training data set treated as documents.
The behavior data in the training data set is recorded as corresponding text; for example, a user's song-requesting record may read: behavior type-song on demand, song name-Love You Ten Thousand Years, user ID-XXXXXX. In the TF-IDF calculation, the frequency and number of occurrences of a given item of behavior data in the training data set are therefore computed from the frequency and number of occurrences of the words (i.e., characters) corresponding to that behavior data, and the weight of the behavior data is calculated from these frequencies.
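For illustration, the following sketch computes TF-IDF weights in line with formulas 1 to 3, treating each user class's concatenated behavior text as one document; the token names are made up.

```python
# An illustrative computation of TF-IDF weights following formulas 1-3.
# Each "document" is assumed to be the concatenated behavior-data text of one
# class of users; the tokens below are invented behavior-data terms.
import math
from collections import Counter

def tf_idf(docs):
    """docs: list of token lists, one list per document (user class)."""
    n_docs = len(docs)
    doc_freq = Counter()                     # |{x : t_i in d_x}| for each term
    for doc in docs:
        doc_freq.update(set(doc))
    weights = []
    for doc in docs:
        counts = Counter(doc)                # n_{i,j}
        total = sum(counts.values())         # sum_k n_{k,j}
        weights.append({
            term: (counts[term] / total) * math.log(n_docs / doc_freq[term])
            for term in counts               # tf-idf = tf * idf (formula 3)
        })
    return weights

docs = [["song_request", "love_ballad", "recording"],
        ["song_request", "rock_song", "social_share"]]
print(tf_idf(docs))
```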
In an embodiment, after the weights with which the user behavior data in the training data set influence the user information have been calculated as TF-IDF values, a logistic regression algorithm is applied to these weights to generate the estimation model.
Logistic regression can be expressed according to equation 4 and equation 5;
y = wx + b; (formula 4)
p = 1 / (1 + e^(-y)); (formula 5)
where x is the TF-IDF value, w and b are the parameters to be solved, and p is the probability value. In this embodiment, the logistic regression algorithm estimates the information of the user to be estimated from the TF-IDF values in the training data set.
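A minimal sketch of formulas 4 and 5 follows; the numeric values of x, w and b are placeholders, since in practice w and b would be fitted on the training data set.

```python
# A minimal sketch of formulas 4 and 5: the linear score y = w*x + b is mapped to
# a probability p by the sigmoid function. The values below are made up.
import math

def predict_probability(x: float, w: float, b: float) -> float:
    y = w * x + b                          # formula 4: y = wx + b
    return 1.0 / (1.0 + math.exp(-y))      # formula 5: p = 1 / (1 + e^(-y))

p = predict_probability(x=0.37, w=2.1, b=-0.5)   # e.g. probability of one class
print(p)
```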
FIG. 3 is a flow chart illustrating the generation of the estimation model according to an embodiment; the generating step of the estimation model includes:
S301, inputting a text;
S302, text preprocessing;
S303, Chinese word segmentation and stop-word removal; stop words can be removed according to a stop-word file: for example, a stop-word lexicon is preset, each segmented word is compared against the lexicon, and any word that falls into the lexicon is deleted.
S304, calculating a TF-IDF value (text representation method);
S305, training an estimation model;
The input text in S301 may be user behavior data and the corresponding user information; this behavior data may not yet have been cleaned, so it needs to be preprocessed in step S302 and step S303.
In other embodiments, the input text may be the training data set of the above embodiments, in which the user behavior data has already been preprocessed, so that step S302 and step S303 may be omitted.
The calculation of the TF-IDF value in step S304 is the same as the calculation, described in the above embodiment, of the weights (represented by TF-IDF values) with which the user behavior data influence the user information, and is not repeated here.
In step S305, the estimation model is trained with the logistic regression algorithm, which outputs a classification result; the logistic regression algorithm is then optimized according to the classification result until a satisfactory estimation model is obtained.
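As one possible realization of steps S301 to S305, the sketch below uses scikit-learn's TfidfVectorizer and LogisticRegression; it assumes the behavior text has already been word-segmented into space-separated tokens and that the labels come from the user information in the training data set, with the sample texts and labels invented for illustration.

```python
# A sketch of steps S301-S305 with scikit-learn; texts and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["song_request love_ballad recording",       # S301/S302: preprocessed input text
         "song_request rock_song social_share"]
labels = ["female", "male"]                           # user information of training users

model = make_pipeline(
    TfidfVectorizer(),        # S304: represent the text by TF-IDF values
    LogisticRegression(),     # S305: train the estimation model
)
model.fit(texts, labels)
print(model.predict(["rock_song recording social_share"]))   # estimate a new user
```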
In another embodiment, the estimation model may be an artificial intelligence model with self-learning capability; continuous training allows the estimation model to learn the correlation between the user information and the user behavior data in the training data set.
In one embodiment, the digital audio-visual place may create a user portrait for a user based on the existing user information; in some cases this information is incomplete, resulting in an incomplete user portrait. In this case, the digital audio-visual place may use the estimation model of the above embodiment to estimate the missing user information and supplement it into the user portrait of the corresponding user, i.e., update the user portrait of the corresponding user.
In the above embodiment, a user portrait may be created for each user, and users may be classified into different user clusters according to differences in characteristics such as goals, behaviors and viewpoints. The training data set and the estimation model may then be generated with the user cluster as the unit. When generating the training data set, the users in a cluster are divided into users with complete user information and users with incomplete user information; the behavior data and user information of the complete-information users serve as the training data set, from which an estimation model is generated. This estimation model is specific to that user cluster and may not be applicable to other user clusters. In some embodiments, the member data of the user is also obtained; the member data and the user behavior data of the same user are aggregated, and a training data set comprising both the member data and the user behavior data is generated.
In this implementation, users are first divided into different user clusters, and the user information updating method for the digital audio-visual place is carried out with the user cluster as the unit; the training data set and the estimation model are therefore more targeted, the estimation model processes a smaller amount of data, and the user information is estimated more accurately.
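The per-cluster idea could be sketched as follows, assuming the assignment of users to clusters and the complete-information texts and labels of each cluster are already given; all names are illustrative.

```python
# An illustrative per-cluster variant: a separate estimation model is trained from
# the complete-information users of each user cluster and applied only inside that
# cluster. How users are clustered in the first place is assumed to be given.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_cluster_models(clusters):
    """clusters: dict mapping cluster_id -> (texts, labels) of complete-info users."""
    models = {}
    for cluster_id, (texts, labels) in clusters.items():
        model = make_pipeline(TfidfVectorizer(), LogisticRegression())
        model.fit(texts, labels)
        models[cluster_id] = model   # only valid within this cluster
    return models
```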
As shown in fig. 4, a method for updating user information of a digital audiovisual place according to an embodiment includes:
S401, collecting user behavior data through the song-requesting equipment, the user behavior data comprising recording data, song-requesting data, social data and the like;
S402, cleaning and sorting the user behavior data, which comprises the preprocessing steps described in the above embodiment;
S403, obtaining the user information authorized at registration through a mobile phone terminal;
S404, acquiring the member data of the user;
S405, data statistics: the acquired member data and the acquired user behavior data are counted so as to generate the training data;
S406, generating a training data set, wherein the training data set comprises the cleaned and sorted user behavior data and the user information, and generating an estimation model from the training data set;
S407, judging whether the user information is complete;
if not, jumping to step S408 and estimating the user information, such as the age and gender of the user, with the estimation model;
and S409, finally, outputting the estimation result.
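Steps S407 to S409 could be sketched as follows, with user records represented as dictionaries and a trained estimation model assumed; the field names are illustrative.

```python
# A sketch of steps S407-S409: only users whose information is incomplete are sent
# to the estimation model, and the estimated fields are written back to update the
# user portrait. The dict keys and the trained "model" object are assumptions.
def update_user_information(users, model):
    for user in users:
        if user.get("gender") is None:                                  # S407: incomplete?
            user["gender"] = model.predict([user["behavior_text"]])[0]  # S408: estimate
    return users                                                        # S409: output result
```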
As shown in fig. 5, an embodiment provides a computer-readable storage medium 500 on which a computer program is stored; when executed by a processor, the program implements the digital audio-visual place user information updating method of any of the above embodiments. The method generates a training data set from user behavior data and user information; the estimation model generated from the training data set can then infer the user information of other users with the same or similar behavior data, greatly enriching the user information and providing strong data support for generating user portraits.
It should be noted that, although the above embodiments have been described herein, the invention is not limited thereto. Therefore, based on the innovative concepts of the present invention, the technical solutions of the present invention can be directly or indirectly applied to other related technical fields by making changes and modifications to the embodiments described herein, or by using equivalent structures or equivalent processes performed in the content of the present specification and the attached drawings, which are included in the scope of the present invention.

Claims (10)

1. A user information updating method for a digital audio-visual place is characterized by comprising the following steps:
collecting user behavior data of a digital audio-visual place, and generating a training data set according to the user behavior data, wherein the training data set comprises the user behavior data and user information of a corresponding user;
analyzing the training data set to generate an estimation model;
and inputting the behavior data of the user to be estimated into the estimation model for estimation to obtain the user information of the user to be estimated.
2. A digital audiovisual place user information updating method according to claim 1, characterized by calculating weights of the user behavior data affecting the user information in the training data set;
generating the estimation model according to the weight of each of the user behavior data.
3. A digital audio-visual venue user information updating method according to claim 2, wherein the weight is represented by a TF-IDF value;
tf-idf_{i,j} = (n_{i,j} / Σ_k n_{k,j}) × log( |D| / |{x : t_i ∈ d_x}| )
4. the digital audiovisual venue user information updating method of claim 2, wherein the estimation model is generated by calculating the weight of each user behavior data by using a logistic regression algorithm.
5. The digital audiovisual venue user information updating method according to claim 4, wherein the logistic regression algorithm is:
y=wx+b;
p = 1 / (1 + e^(-y));
wherein x is the TF-IDF value, w and b are the parameters to be solved, and p is the probability value.
6. A digital audiovisual venue user information updating method according to claim 1, further comprising the steps of:
and updating the user portrait of the corresponding user according to the user information obtained by the estimation of the estimation model.
7. The digital audio-visual venue user information updating method according to claim 1, wherein the user behavior data comprises any one or more of recorded data, song requesting data, social data; the user behavior data can be collected through a song requesting device or a mobile terminal connected with the song requesting device.
8. A digital audiovisual space user information updating method according to claim 1, wherein the user information includes any one or more of age and gender.
9. The digital audio-visual site user information updating method according to claim 1, wherein after the user behavior data of the digital audio-visual site is collected, the user behavior data is preprocessed, and the preprocessing includes any one or more of deleting duplicate data, deleting format-error data, deleting content-error data, deleting logic-error data, deleting incomplete data, and data association verification.
10. A computer-readable storage medium on which a computer program is stored, the program, when being executed by a processor, implementing a digital audiovisual place user information updating method according to any one of claims 1 to 9.
CN202110811297.XA 2021-07-19 2021-07-19 User information updating method for digital audio-visual place and computer readable storage medium Pending CN113535700A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110811297.XA CN113535700A (en) 2021-07-19 2021-07-19 User information updating method for digital audio-visual place and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110811297.XA CN113535700A (en) 2021-07-19 2021-07-19 User information updating method for digital audio-visual place and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113535700A true CN113535700A (en) 2021-10-22

Family

ID=78100036

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110811297.XA Pending CN113535700A (en) 2021-07-19 2021-07-19 User information updating method for digital audio-visual place and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113535700A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019120019A1 (en) * 2017-12-20 2019-06-27 Oppo广东移动通信有限公司 User gender prediction method and apparatus, storage medium and electronic device
CN108804701A (en) * 2018-06-19 2018-11-13 苏州大学 Personage's portrait model building method based on social networks big data
CN110956303A (en) * 2019-10-12 2020-04-03 未鲲(上海)科技服务有限公司 Information prediction method, device, terminal and readable storage medium
CN112825178A (en) * 2019-11-21 2021-05-21 北京沃东天骏信息技术有限公司 Method and device for predicting user gender portrait
CN111026906A (en) * 2019-12-05 2020-04-17 网乐互联(北京)科技有限公司 Recommendation system for streaming listening audio content in vehicle-mounted scene

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张凌云 et al.: "智慧旅游的理论与实践" (Theory and Practice of Smart Tourism), vol. 978, Tianjin: Nankai University Press, pages: 314 - 329 *

Similar Documents

Publication Publication Date Title
CN109547814B (en) Video recommendation method and device, server and storage medium
CN109492772B (en) Method and device for generating information
CN109960761B (en) Information recommendation method, device, equipment and computer readable storage medium
CN112256874A (en) Model training method, text classification method, device, computer equipment and medium
CN111177473B (en) Personnel relationship analysis method, device and readable storage medium
CN111898675B (en) Credit wind control model generation method and device, scoring card generation method, machine readable medium and equipment
CN111639970A (en) Method for determining price of article based on image recognition and related equipment
JP2018200621A (en) Patent requirement propriety prediction device and patent requirement propriety prediction program
CN112925911B (en) Complaint classification method based on multi-modal data and related equipment thereof
CN112381019B (en) Compound expression recognition method and device, terminal equipment and storage medium
CN112036659A (en) Social network media information popularity prediction method based on combination strategy
CN115130711A (en) Data processing method and device, computer and readable storage medium
CN113836338A (en) Fine-grained image classification method and device, storage medium and terminal
CN113656699B (en) User feature vector determining method, related equipment and medium
CN114445121A (en) Advertisement click rate prediction model construction and advertisement click rate prediction method
US20230419195A1 (en) System and Method for Hierarchical Factor-based Forecasting
CN116542783A (en) Risk assessment method, device, equipment and storage medium based on artificial intelligence
CN116910341A (en) Label prediction method and device and electronic equipment
CN113535700A (en) User information updating method for digital audio-visual place and computer readable storage medium
CN115269998A (en) Information recommendation method and device, electronic equipment and storage medium
CN113010664B (en) Data processing method and device and computer equipment
CN112084408B (en) List data screening method, device, computer equipment and storage medium
CN115222112A (en) Behavior prediction method, behavior prediction model generation method and electronic equipment
CN114648688A (en) Method, system and equipment for evaluating landscape level along high-speed rail and readable storage medium
Bakery et al. A new double truncated generalized gamma model with some applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination