CN114610863A - Dialogue text pushing method and device, storage medium and terminal - Google Patents

Dialogue text pushing method and device, storage medium and terminal

Info

Publication number
CN114610863A
Authority
CN
China
Prior art keywords
dialogue
text data
dialog
feature information
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210080353.1A
Other languages
Chinese (zh)
Inventor
曹富康
黄明星
王福钋
徐华韫
沈鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Absolute Health Ltd
Original Assignee
Beijing Absolute Health Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Absolute Health Ltd filed Critical Beijing Absolute Health Ltd
Priority to CN202210080353.1A
Publication of CN114610863A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/335Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • G06F40/35Discourse or dialogue representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0281Customer communication at a business location, e.g. providing product or service information, consulting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Databases & Information Systems (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • General Business, Economics & Management (AREA)
  • Medical Informatics (AREA)
  • Marketing (AREA)
  • Game Theory and Decision Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Computer Interaction (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a dialogue text pushing method and device, a storage medium, and a terminal, relates to the technical field of computer processing, and aims to solve the technical problem of improving the pushing accuracy of dialogue text. The method mainly comprises the following steps: obtaining dialogue text data in a target contextual dialogue; extracting dialogue semantic feature information and dialogue emotion feature information from the dialogue text data based on a trained text feature processing model, and determining user portrait data matched with the dialogue text data; and determining, based on the dialogue semantic feature information, the dialogue emotion feature information and the user portrait data, dialogue text data to be pushed that matches the dialogue text data, and pushing it.

Description

Dialogue text pushing method and device, storage medium and terminal
Technical Field
The invention relates to the technical field of computer processing, and in particular to a dialogue text pushing method and device, a storage medium, and a terminal.
Background
With the wide application of artificial intelligence, it has gradually been used in the field of telemarketing to carry out conversation processes. For example, in telemarketing of insurance products, a conversation robot learns from a large number of insurance agents' historical conversations to determine the best reply content for the same scene, so as to complete intelligent marketing and purchase of insurance products.
Currently, for an existing conversation robot to output reply content, a reply is usually selected directly according to fixed conversational logic and output to the user. However, because fixed conversational logic cannot meet diversified user conversation requirements, the reply content corresponding to the conversation content input by the user cannot be accurately matched, which degrades the intelligent reply effect and reduces the pushing accuracy of the dialogue text.
Disclosure of Invention
The technical problem to be solved by the embodiments of the invention is how to improve the pushing accuracy of dialogue text.
According to an aspect of the present invention, there is provided a dialogue text pushing method, comprising:
obtaining dialogue text data in a target contextual dialogue;
extracting dialogue semantic feature information and dialogue emotion feature information from the dialogue text data based on a trained text feature processing model, and determining user portrait data matched with the dialogue text data;
and determining, based on the dialogue semantic feature information, the dialogue emotion feature information and the user portrait data, dialogue text data to be pushed that matches the dialogue text data, and pushing it.
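The three steps above can be sketched as a minimal pipeline. All component names (`extract_features`, `lookup_portrait`, `push_dialog_text`) and the toy embeddings are illustrative assumptions, not the patent's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class DialogueFeatures:
    semantic: list  # dialogue semantic feature vector
    emotion: list   # dialogue emotion feature vector (user emotion, transaction degree)

def extract_features(text: str) -> DialogueFeatures:
    # Stand-in for the trained text feature processing model:
    # a toy bag-of-character-codes embedding.
    codes = [ord(c) % 7 for c in text]
    semantic = [codes.count(i) for i in range(7)]
    emotion = [sum(codes) % 5, len(text) % 3]
    return DialogueFeatures(semantic, emotion)

def lookup_portrait(text: str, portraits: dict) -> dict:
    # Stand-in for determining user portrait data matched with the text.
    for keyword, portrait in portraits.items():
        if keyword in text:
            return portrait
    return {}

def push_dialog_text(text: str, candidates: dict, portraits: dict) -> str:
    """Return the candidate reply whose stored features best match."""
    feats = extract_features(text)
    portrait = lookup_portrait(text, portraits)

    def score(item):
        _reply, meta = item
        s = sum(a == b for a, b in zip(feats.semantic, meta["semantic"]))
        s += sum(a == b for a, b in zip(feats.emotion, meta["emotion"]))
        s += sum(1 for k, v in portrait.items()
                 if meta.get("portrait", {}).get(k) == v)
        return s

    best, _meta = max(candidates.items(), key=score)
    return best
```

A real system would replace the toy embedding with the trained model's feature vectors; the control flow (features, portrait, match, push) mirrors the claimed steps.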
Further, before the extracting of the dialogue semantic feature information and the dialogue emotion feature information from the dialogue text data based on the trained text feature processing model, the method further comprises:
obtaining historical dialogue text data from a preset historical dialogue text database, and performing context analysis on the historical dialogue text data to determine dialogue semantic feature information and dialogue emotion feature information with training labels;
and performing model training on an initial text feature processing model based on the historical dialogue text data and the dialogue semantic feature information and dialogue emotion feature information with training labels, to obtain a text feature processing model for which model training is complete.
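As a rough sketch of how context analysis might derive training labels from historical dialogues: here the "semantic" label is simply the preceding turn and the "emotion" label a crude polarity count. The word lists and labeling rule are invented for illustration; the patent does not specify them:

```python
def context_analyze(history):
    """Derive (text, semantic_label, emotion_label) samples from dialogue turns.

    A toy stand-in for context analysis: the semantic label is the previous
    turn (its context), the emotion label a keyword polarity score.
    """
    POSITIVE = {"good", "great", "yes", "interested"}
    NEGATIVE = {"no", "bad", "busy", "later"}
    samples = []
    for prev, cur in zip(history, history[1:]):
        words = set(cur.lower().split())
        polarity = len(words & POSITIVE) - len(words & NEGATIVE)
        samples.append((cur, prev, polarity))
    return samples
```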
Further, the determining, based on the dialogue semantic feature information, the dialogue emotion feature information and the user portrait data, of dialogue text data to be pushed that matches the dialogue text data comprises:
obtaining historical dialogue text data from a preset historical dialogue text database, and determining at least one piece of reference dialogue text data matched with the dialogue semantic feature information, the dialogue emotion feature information and the user portrait data, wherein the dialogue emotion feature information comprises user emotion feature information and transaction-degree feature information;
ranking the reference dialogue text data through a trained feature ranking model according to the dialogue semantic feature information, the dialogue emotion feature information and the user portrait data, to obtain reference dialogue text data with ranking marks;
and determining the reference dialogue text data corresponding to the first ranking mark as the dialogue text data to be pushed.
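A minimal sketch of the rank-then-pick step, using cosine similarity as a stand-in for the trained feature ranking model (the scoring function is an assumption; only the rank-marks-then-first-mark flow comes from the text):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_candidates(query_vec, candidates):
    """Attach ranking marks to reference texts, best first.

    `candidates` is a list of (text, feature_vector) pairs; a trained
    feature ranking model would replace the cosine score here.
    """
    scored = sorted(candidates, key=lambda c: cosine(query_vec, c[1]),
                    reverse=True)
    return [(rank + 1, text) for rank, (text, _vec) in enumerate(scored)]

def pick_push_text(query_vec, candidates):
    # The reference text with the first ranking mark is the one pushed.
    return rank_candidates(query_vec, candidates)[0][1]
```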
Further, the determining of at least one piece of reference dialogue text data matched with the dialogue semantic feature information, the dialogue emotion feature information and the user portrait data comprises:
performing context analysis on the historical dialogue text data to determine historical dialogue semantic feature information and historical dialogue emotion feature information;
matching the dialogue semantic feature information and the dialogue emotion feature information with the historical dialogue semantic feature information and the historical dialogue emotion feature information to determine first reference dialogue text data;
and screening second reference dialogue text data from the first reference dialogue text data according to identity feature information and transaction feature information in the user portrait data, to serve as the at least one piece of reference dialogue text data to be ranked.
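The two-stage narrowing (feature match, then portrait screening) can be sketched as below; the record schema and exact-equality matching are illustrative assumptions:

```python
def first_stage_match(query, history):
    """First reference set: records whose semantic and emotion features match."""
    return [h for h in history
            if h["semantic"] == query["semantic"]
            and h["emotion"] == query["emotion"]]

def second_stage_filter(first_refs, portrait):
    """Second reference set: screen by portrait identity / transaction info."""
    return [r for r in first_refs
            if r.get("identity") == portrait.get("identity")
            and r.get("transaction") == portrait.get("transaction")]
```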
Further, after the determining and pushing of the dialogue text data to be pushed that matches the dialogue text data based on the dialogue semantic feature information, the dialogue emotion feature information and the user portrait data, the method further comprises:
if the business operation matched with the target contextual dialogue is completed after the dialogue text data to be pushed has been pushed, acquiring the input reply dialogue text data and updating it into the preset historical dialogue text database, so as to perform model training on the text feature processing model again;
and if the business operation matched with the target contextual dialogue is not completed after the dialogue text data to be pushed has been pushed, acquiring the input reply dialogue text data and updating the dialogue text data in the target contextual dialogue based on the reply dialogue text data, so as to extract the dialogue semantic feature information and the dialogue emotion feature information again.
Further, the method further comprises:
after the dialogue text data to be pushed has been pushed, detecting whether a business keyword matched with the dialogue text data exists in the reply dialogue text data, wherein the business keyword is a word having a business dialogue relation with the dialogue text data;
if a business keyword matched with the dialogue text data exists, confirming that the business operation has not been completed, and updating the reply dialogue text data into the dialogue text data;
and if no business keyword matched with the dialogue text data exists, determining that the business operation is completed.
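A minimal sketch of the keyword check, assuming the business keywords are supplied as a set (how they are derived is not specified here):

```python
def business_finished(reply, keywords):
    """Return True when no business keyword appears in the reply.

    A business keyword in the reply signals the operation is unfinished
    and the dialogue should continue.
    """
    words = set(reply.lower().split())
    return not (words & keywords)
```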
Further, the determining of user portrait data matched with the dialogue text data comprises:
determining business information corresponding to the target contextual dialogue, and analyzing business keywords and identity keywords in the dialogue text data;
and extracting user portrait data matched with the business keywords and the identity keywords from a user portrait database, wherein user portrait data corresponding to different pieces of business information are stored in the user portrait database.
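The portrait lookup can be sketched as a keyed dictionary query; the `(business_keyword, identity_keyword)` key schema is an illustrative assumption:

```python
def extract_portrait(text, portrait_db):
    """Return the portrait whose business and identity keywords occur in the text.

    `portrait_db` maps (business_keyword, identity_keyword) -> portrait data.
    """
    for (biz_kw, id_kw), portrait in portrait_db.items():
        if biz_kw in text and id_kw in text:
            return portrait
    return {}
```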
According to another aspect of the present invention, there is provided a dialogue text pushing apparatus, comprising:
an acquisition module, configured to acquire dialogue text data in a target contextual dialogue;
a determining module, configured to extract dialogue semantic feature information and dialogue emotion feature information from the dialogue text data based on a trained text feature processing model, and to determine user portrait data matched with the dialogue text data;
and a pushing module, configured to determine and push, based on the dialogue semantic feature information, the dialogue emotion feature information and the user portrait data, dialogue text data to be pushed that matches the dialogue text data.
Further, the apparatus further comprises a training module;
the acquisition module is further configured to acquire historical dialogue text data from a preset historical dialogue text database, perform context analysis on the historical dialogue text data, and determine dialogue semantic feature information and dialogue emotion feature information with training labels;
and the training module is configured to perform model training on an initial text feature processing model based on the historical dialogue text data and the dialogue semantic feature information and dialogue emotion feature information with training labels, to obtain a text feature processing model for which model training is complete.
Further, the determining module comprises:
an acquisition unit, configured to acquire historical dialogue text data from a preset historical dialogue text database and determine at least one piece of reference dialogue text data matched with the dialogue semantic feature information, the dialogue emotion feature information and the user portrait data, wherein the dialogue emotion feature information comprises user emotion feature information and transaction-degree feature information;
a processing unit, configured to rank the reference dialogue text data through a trained feature ranking model according to the dialogue semantic feature information, the dialogue emotion feature information and the user portrait data, to obtain reference dialogue text data with ranking marks;
and a determining unit, configured to determine the reference dialogue text data corresponding to the first ranking mark as the dialogue text data to be pushed.
Further, the acquisition unit comprises:
a first determining subunit, configured to perform context analysis on the historical dialogue text data to determine historical dialogue semantic feature information and historical dialogue emotion feature information;
a second determining subunit, configured to match the dialogue semantic feature information and the dialogue emotion feature information with the historical dialogue semantic feature information and the historical dialogue emotion feature information to determine first reference dialogue text data;
and a screening subunit, configured to screen second reference dialogue text data from the first reference dialogue text data according to identity feature information and transaction feature information in the user portrait data, to serve as the at least one piece of reference dialogue text data to be ranked.
Further, the apparatus further comprises an updating module;
the updating module is configured to, if the business operation matched with the target contextual dialogue is completed after the dialogue text data to be pushed has been pushed, acquire the input reply dialogue text data and update it into the preset historical dialogue text database, so as to perform model training on the text feature processing model again;
and the updating module is further configured to, if the business operation matched with the target contextual dialogue is not completed after the dialogue text data to be pushed has been pushed, acquire the input reply dialogue text data and update the dialogue text data in the target contextual dialogue based on the reply dialogue text data, so as to extract the dialogue semantic feature information and the dialogue emotion feature information again.
Further, the apparatus further comprises a detection module;
the detection module is configured to, after the dialogue text data to be pushed has been pushed, detect whether a business keyword matched with the dialogue text data exists in the reply dialogue text data, wherein the business keyword is a word having a business dialogue relation with the dialogue text data;
the determining module is further configured to determine that the business operation has not been completed if a business keyword matched with the dialogue text data exists, so as to update the reply dialogue text data into the dialogue text data;
and the determining module is further configured to determine that the business operation is completed if no business keyword matched with the dialogue text data exists.
Further, the determining module comprises:
an analysis unit, configured to determine the business information corresponding to the target contextual dialogue and analyze the business keywords and identity keywords in the dialogue text data;
and an extraction unit, configured to extract user portrait data matched with the business keywords and the identity keywords from a user portrait database, wherein user portrait data corresponding to different pieces of business information are stored in the user portrait database.
According to another aspect of the present invention, there is provided a storage medium in which at least one executable instruction is stored, the executable instruction causing a processor to perform operations corresponding to the dialogue text pushing method described above.
According to still another aspect of the present invention, there is provided a terminal, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction, the executable instruction causing the processor to perform operations corresponding to the dialogue text pushing method described above.
By means of the above technical solution, the solution provided by the embodiments of the invention has at least the following advantages:
Compared with the prior art, the embodiments of the invention obtain dialogue text data in a target contextual dialogue; extract dialogue semantic feature information and dialogue emotion feature information from the dialogue text data based on a trained text feature processing model, and determine user portrait data matched with the dialogue text data; and determine and push, based on the dialogue semantic feature information, the dialogue emotion feature information and the user portrait data, dialogue text data to be pushed that matches the dialogue text data. This meets the need for targeted pushing of text data in dialogues with users, supports diversified user dialogue scenes, lets users reply more accurately based on the pushed dialogue text data, and improves the accuracy and effectiveness of human-machine dialogue.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
The invention will be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
Fig. 1 shows a flowchart of a dialogue text pushing method provided by an embodiment of the present invention;
Fig. 2 shows a flowchart of another dialogue text pushing method provided by an embodiment of the present invention;
Fig. 3 shows a flowchart of still another dialogue text pushing method provided by an embodiment of the present invention;
Fig. 4 shows a flowchart of yet another dialogue text pushing method provided by an embodiment of the present invention;
Fig. 5 shows a flowchart of a further dialogue text pushing method provided by an embodiment of the present invention;
Fig. 6 shows a schematic diagram of a telemarketing process for an insurance product provided by an embodiment of the present invention;
Fig. 7 shows a block diagram of a dialogue text pushing apparatus provided by an embodiment of the present invention;
Fig. 8 shows a schematic structural diagram of a terminal provided by an embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Embodiments of the invention are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the computer system/server include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above, and the like.
The computer system/server may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
To address the problem that fixed conversational logic cannot meet diversified user conversation requirements, so that the reply content corresponding to the conversation content input by a user cannot be accurately matched, which degrades the intelligent reply effect and reduces the pushing accuracy of dialogue text, an embodiment of the present invention provides a dialogue text pushing method. As shown in fig. 1, the method comprises the following steps:
101. Dialogue text data in the target contextual dialogue is obtained.
In the embodiment of the present invention, the current execution end provides an intelligent robot that carries out human-machine dialogue with the user; the current execution end may be a terminal device or a server that executes the human-machine dialogue in different contextual dialogues. A context is a pre-configured human-machine dialogue scene suitable for executing business operations for different business information, so as to provide matched dialogue text for the intelligent robot; contexts include, but are not limited to, a consultation context for insurance products, an application context for insurance claims, and a marketing context for insurance products. A target context is selected from a plurality of pre-configured contexts, and dialogue text is configured for the intelligent robot accordingly.
It should be noted that, after the target contextual dialogue is determined, in order to push dialogue text accurately, the dialogue text data in the target context is obtained. The dialogue text data may be the contextual text data generated by the human-machine dialogue in the current target context, and the dialogue text to be recommended is determined based on it. The language of the dialogue text data includes, but is not limited to, Chinese, English, and French, so that when a human-machine dialogue is carried out in the target context, the language of the dialogue is selected to match the user's language. The dialogue text data may be sentence text containing multiple words or text consisting of a single word; the embodiment of the present invention is not specifically limited in this respect.
102. Dialogue semantic feature information and dialogue emotion feature information are extracted from the dialogue text data based on the trained text feature processing model, and user portrait data matched with the dialogue text data is determined.
In the embodiment of the present invention, in order to extract features of the dialogue text data accurately and thus recommend dialogue text data in a targeted manner, after the dialogue text data is acquired, the dialogue semantic feature information and the dialogue emotion feature information in the dialogue text data are extracted based on the trained text feature processing model. The dialogue semantic feature information is feature-vector content representing the contextual semantics of the dialogue text data; the dialogue emotion feature information is a feature vector representing the semantic emotion of the dialogue text data, and includes, but is not limited to, user emotion feature information and transaction-degree feature information.
It should be noted that, in the embodiment of the present invention, in order to better determine the dialogue text data to be pushed that matches the dialogue text data, the user portrait data matched with the dialogue text data is obtained. User portrait data describes user characteristics by establishing and depicting label content for the user from different basic data, such as the user's gender, age, and occupation; the embodiment of the present invention is not specifically limited in this respect. The user portrait data may be pre-stored at the current execution end or obtained by requesting a cloud server, and different dialogue text data can be matched with corresponding user portrait data.
103. Dialogue text data to be pushed that matches the dialogue text data is determined based on the dialogue semantic feature information, the dialogue emotion feature information and the user portrait data, and is pushed.
In the embodiment of the present invention, in the target context, after the dialogue semantic feature information, the dialogue emotion feature information and the user portrait data are obtained based on the dialogue text data, the dialogue text data to be pushed that matches the dialogue text data is determined. The dialogue text data to be pushed is the dialogue text data for the intelligent robot's voice reply that matches the dialogue text data; it is pushed to the intelligent robot so that the intelligent robot broadcasts it.
It should be noted that, when determining the dialogue text data to be pushed, the matched dialogue text data may be found among multiple pieces of reference dialogue text data based on a pre-configured feature correspondence, or the multiple pieces of reference dialogue text data may be screened based on a trained feature ranking model, so as to determine the dialogue text data to be pushed.
In an embodiment of the present invention, for further definition and explanation, as shown in fig. 2, before the step of extracting the dialogue semantic feature information and the dialogue emotion feature information from the dialogue text data based on the trained text feature processing model, the method further comprises:
201. Historical dialogue text data is obtained from a preset historical dialogue text database, and context analysis is performed on the historical dialogue text data to determine dialogue semantic feature information and dialogue emotion feature information with training labels.
202. Model training is performed on the initial text feature processing model based on the historical dialogue text data and the dialogue semantic feature information and dialogue emotion feature information with training labels, to obtain a text feature processing model for which model training is complete.
In order to extract the dialogue semantic feature information and the dialogue emotion feature information accurately, historical dialogue text data is obtained from a preset historical dialogue text database, context analysis is carried out, and model training is performed on the initial text feature processing model to obtain the text feature processing model used for feature extraction. The preset historical dialogue text database stores a large amount of historical dialogue text data from completed manual dialogues in different contexts. Because different contexts meet different business requirements, this historical dialogue text data can be optimal dialogue text data with reference value; for example, in an insurance sales context, it may be the dialogue text data from excellent salespeople's insurance product marketing. The embodiment of the present invention is not specifically limited in this respect. After the historical dialogue text data is obtained, context analysis is performed on at least one piece of historical dialogue text data to determine the dialogue semantic feature information and dialogue emotion feature information with training labels; here, context analysis means analyzing the semantics of the dialogue content to determine the semantic features and emotion features in the dialogue text.
It should be noted that the initial text feature processing model in the embodiment of the present invention is a pre-trained BERT language model, and the historical dialogue text data to be trained on and the dialogue semantic feature information and dialogue emotion feature information with training labels are all represented as vectors, so that model training is performed on the initial text feature processing model based on them to obtain a text feature processing model for which model training is complete. The model training of the BERT model is not specifically limited; meanwhile, to keep the dialogue processing service fast and convenient, the initial text feature processing model is not further optimized, so after model training is completed, the trained text feature processing model is used directly for feature extraction.
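The patent names BERT as the initial model but gives no training details. As a rough pure-Python illustration of the two-output idea only (one shared text representation feeding a semantic head and an emotion head), consider this toy perceptron sketch; a real system would fine-tune BERT instead, and every name and label here is invented:

```python
def encode(text, vocab):
    # Shared representation: bag-of-words counts over a fixed vocabulary.
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def train_heads(samples, vocab, epochs=20, lr=0.1):
    """Train two perceptron heads on the shared encoding.

    `samples` is a list of (text, semantic_label, emotion_label) with
    binary labels; each head learns its own weights over the same input.
    """
    w_sem = [0.0] * len(vocab)
    w_emo = [0.0] * len(vocab)
    for _ in range(epochs):
        for text, y_sem, y_emo in samples:
            x = encode(text, vocab)
            for w, y in ((w_sem, y_sem), (w_emo, y_emo)):
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
                if pred != y:
                    for i, xi in enumerate(x):
                        w[i] += lr * (y - pred) * xi
    return w_sem, w_emo

def predict(text, vocab, w):
    x = encode(text, vocab)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
```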
For further definition and explanation, in an embodiment of the present invention, as shown in fig. 3, the step of determining dialog text data to be pushed, which matches the dialog text data based on the dialog semantic feature information, the dialog emotion feature information, and the user portrait data includes:
301. obtaining historical dialogue text data from a preset historical dialogue text database, and determining at least one piece of reference dialogue text data matched with the dialogue semantic feature information, the dialogue emotional feature information and the user portrait data;
302. ranking the reference dialogue text data according to the dialogue semantic feature information, the dialogue emotion feature information, and the user portrait data through a feature ranking model for which model training is completed, to obtain reference dialogue text data with ranking marks;
303. and determining the reference dialog text data corresponding to the first ranking mark as the dialog text data to be pushed.
In the embodiment of the invention, in order to accurately determine the dialog text data to be pushed based on the extracted dialogue semantic feature information, the dialogue emotion feature information, and the user portrait data, at least one piece of reference dialog text data is obtained, and the dialog text data to be pushed is then determined according to the ranking of each piece of reference dialog text data produced by the trained feature ranking model. Similarly, a large amount of historical dialogue text data of completed manual dialogues in different contexts is stored in the preset historical dialogue text database; because different contexts meet different business requirements, the historical dialogue text data may be optimal dialogue text data with reference value, and at least one piece of reference dialogue text data matching the dialogue semantic feature information, the dialogue emotion feature information, and the user portrait data can be determined from it. Specifically, the determination may be performed based on a preconfigured reference dialogue correspondence. For example, the reference dialogue correspondence records reference dialogue text data 1 and reference dialogue text data 2 corresponding to purchase semantics (dialogue semantic feature information), interest features (dialogue emotion feature information), and female features (user portrait data); the trained feature ranking model then ranks reference dialogue text data 1 and reference dialogue text data 2 according to the purchase semantics, the interest features, and the female features, and reference dialogue text data 1 and reference dialogue text data 2 with ranking marks are obtained.
In addition, in order to improve the matching accuracy of the reference dialog text data, the dialogue emotion feature information further includes user emotion feature information and transaction degree feature information. The user emotion feature information represents the emotional state of the user during the dialogue; for example, an expletive such as "damn it" in the dialog text data input by the user allows the emotion feature "user is unhappy" to be extracted. The transaction degree feature information represents how strong the user's intention to complete the transaction is; for example, the sentence "I want to buy insurance 1" in the dialog text data input by the user allows the transaction degree feature "strong transaction intention" to be extracted. The embodiments of the present invention are not particularly limited in this respect.
It should be noted that the feature ranking model in the embodiment of the present invention may be based on a deep learning model such as xDeepFM or DIN in a machine learning algorithm, so that multiple pieces of reference dialog text data are ranked by the feature ranking model. The ranking criterion may be how well each piece of reference dialog text data matches the dialogue semantic feature information, the dialogue emotion feature information, and the user portrait data; for example, the feature ranking model is obtained by training the deep learning model on a reference dialog text training sample set with optimal-ranking labels. When the reference dialogue text data with ranking marks is obtained, the reference dialogue text data corresponding to the first ranking mark is determined as the dialogue text data to be pushed; that is, among the multiple pieces of ranked reference dialogue text data, the piece ranked first is taken as the dialogue text data to be pushed, which then serves as the dialogue text data replied by the intelligent robot.
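As a minimal illustrative sketch of this ranking step, a toy feature-overlap score stands in for a trained xDeepFM/DIN model; all names and data below are hypothetical:

```python
# Sketch of the feature ranking step: score each reference dialog text
# against the extracted query features and take the first-ranked one.
# In practice a trained xDeepFM/DIN model produces the scores; a simple
# overlap count stands in here (hypothetical).
def score(candidate_features: set, query_features: set) -> int:
    return len(candidate_features & query_features)

def rank_candidates(candidates, query_features):
    # candidates: list of (text, feature_set); returns texts best-first
    return sorted(candidates, key=lambda c: score(c[1], query_features), reverse=True)

query = {"purchase semantics", "interested", "female"}
candidates = [
    ("reference dialogue text 1", {"purchase semantics", "interested"}),
    ("reference dialogue text 2", {"inquiry semantics"}),
]
ranked = rank_candidates(candidates, query)
to_push = ranked[0][0]  # the first-ranked reference text is pushed
```

The first-ranked item plays the role of the reference dialogue text data corresponding to the first ranking mark.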
For further definition and explanation, in one embodiment of the present invention, as shown in FIG. 4, the step of determining at least one reference dialog text data that matches the dialog semantic feature information, the dialog emotion feature information, and the user portrait data comprises:
401. performing context analysis on the historical dialogue text data to determine historical semantic feature information and historical dialogue emotional feature information;
402. matching the dialogue semantic feature information and the dialogue emotion feature information with the historical dialogue semantic feature information and the historical dialogue emotion feature information to determine first reference dialogue text data;
403. and screening second reference dialogue text data from the first reference dialogue text data according to the identity feature information and the transaction feature information in the user portrait data, to serve as at least one piece of reference dialogue text data to be ranked.
Specifically, in order to improve the matching accuracy of the reference dialogue text data and recommend the optimal dialogue, context analysis is first performed on the historical dialogue text data. After the historical semantic feature information and the historical dialogue emotion feature information are determined, the extracted dialogue semantic feature information and dialogue emotion feature information are matched against them. The matching may be performed according to word-similarity values: the historical dialogue text data whose historical semantic feature information and historical dialogue emotion feature information reach a similarity threshold is taken as the first reference dialogue text data, and the preset similarity threshold is not specifically limited. Further, after at least one piece of first reference dialogue text data is determined, second reference dialogue text data is screened from the first reference dialogue text data according to the identity feature information and the transaction feature information in the user portrait data, to serve as at least one piece of reference dialogue text data to be ranked. The user portrait data includes identity feature information and transaction feature information, which respectively define the identity of the user and whether the user has successfully transacted a specific product, and the screening is performed on this basis.
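A minimal sketch of the similarity-threshold matching of step 402, with a Jaccard token-overlap score standing in for the unspecified word-similarity measure; the threshold, names, and data are hypothetical:

```python
# Match extracted features against historical features by a simple
# Jaccard similarity; history entries at or above the threshold become
# the first reference dialogue text data (all values hypothetical).
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def first_reference(history, query_features, threshold=0.5):
    # history: list of (text, feature_set)
    return [text for text, feats in history
            if jaccard(feats, query_features) >= threshold]

history = [
    ("reference text 1", {"purchase", "interested"}),
    ("reference text 2", {"complaint"}),
]
matches = first_reference(history, {"purchase", "interested"})
```

Any similarity measure over the feature vectors (for example cosine similarity over BERT embeddings) could replace the Jaccard score.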
The screening method may be to compare the identity feature words and the transaction feature words with the words in the first reference dialogue text data one by one, and to take the matched reference dialogue text data (the matching may use exact comparison or similarity matching) as the screened second reference dialogue text data; the embodiment of the present invention is not limited in this respect.
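A minimal sketch of this word-by-word screening, using the exact-comparison variant; all names and data are hypothetical:

```python
# Screen candidates (step 403): keep only first reference texts that
# contain at least one identity feature word and at least one
# transaction feature word (exact word match, hypothetical data).
def screen(first_reference_texts, identity_words, transaction_words):
    second = []
    for text in first_reference_texts:
        words = set(text.lower().split())
        if words & identity_words and words & transaction_words:
            second.append(text)
    return second

texts = [
    "female user purchased insurance 1 after consultation",
    "general greeting with no product mentioned",
]
result = screen(texts, identity_words={"female"}, transaction_words={"purchased"})
```

The surviving texts are the reference dialogue text data passed on to the ranking step.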
In an embodiment of the present invention, for further definition and description, after determining dialog text data to be pushed that matches the dialog text data based on the dialog semantic feature information, the dialog emotion feature information, and the user portrait data, and pushing, the method further includes:
if the business operation matched with the target context dialog is completed after the pushing of the dialog text data to be pushed is completed, acquiring the input reply dialog text data, and updating the input reply dialog text data into the preset historical dialog text database so as to perform model training on the text feature processing model again;
and if the business operation matched with the target contextual dialogue is not completed after the pushing of the dialogue text data to be pushed is completed, acquiring the input reply dialogue text data, and updating the dialogue text data in the target contextual dialogue based on the reply dialogue text data so as to extract the dialogue semantic feature information and the dialogue emotional feature information again.
Specifically, after the dialogue text data to be pushed is pushed to the intelligent robot, in order to improve the processing accuracy of each machine learning model, when the business operation matching the target contextual dialogue has been completed after the push, the reply dialogue text data input by the user is obtained and updated into the preset historical dialogue text database, so that model training is performed again on the text feature processing model based on the updated historical dialogue text data. When the business operation matching the target contextual dialogue has not been completed after the push, the reply dialogue text data input by the user is obtained, and the dialogue text data in the target contextual dialogue is updated based on it; that is, when the business operation is not completed, the reply dialogue text data input by the user serves as new dialogue text data on which the steps of feature extraction, ranking, and so on are performed to determine the next dialogue text data to be pushed. For example, in the telemarketing process for insurance products shown in fig. 6, the method of steps 101 to 103 in the embodiment of the present invention serves as a conversational recommendation model: the dialog text data to be pushed is pushed to the reply agent of the robot so as to broadcast the excellent conversational script, and the user's voice reply serves as the reply dialog text data for feedback. After the insurance marketing operation is completed, the reply is updated into the historical dialog text data; if the insurance marketing operation is not completed, the reply is updated into the dialog text data of step 101 as dialog context.
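The two update branches above can be sketched as a small feedback loop; the function and container names are hypothetical:

```python
# Sketch of the post-push feedback loop: a completed business operation
# archives the reply for re-training, an incomplete one feeds the reply
# back in as new dialog context (names hypothetical).
def handle_reply(reply_text, business_done, history_db, dialog_context):
    if business_done:
        # business operation completed: store the reply in the preset
        # historical dialogue text database for model re-training
        history_db.append(reply_text)
    else:
        # not completed: treat the reply as new dialog text data and
        # run feature extraction and ranking again on the next turn
        dialog_context.append(reply_text)
    return history_db, dialog_context

history, context = handle_reply("I want to buy insurance 1", False, [], ["pushed text"])
```

In a deployed system the re-training branch would trigger the model training described for fig. 2 rather than a simple list append.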
In addition, the business operations include, but are not limited to, business execution contents matched in different contexts, for example, insurance purchase operations, insurance claim settlement operations, and the like, and the embodiments of the present invention are not limited in particular.
In an embodiment of the present invention, for further definition and explanation, as shown in fig. 5, the steps further include:
501. detecting whether a service keyword matched with the dialog text data exists in reply dialog text data after the dialog text data to be pushed is pushed;
502. if a service keyword matching the dialog text data exists, confirming that the service operation is not completed, and updating the reply dialog text data into the dialog text data;
503. and if no service keyword matching the dialog text data exists, determining that the service operation is completed.
In the embodiment of the invention, in order to ensure that the business operation is completed based on the pushed dialogue text data, after the dialogue text data to be pushed has been pushed and broadcast to the user by the intelligent robot, the current execution end receives the reply dialogue text data input by the user and detects whether a matching business keyword exists in it. If a business keyword matching the dialogue text data exists, the reply dialogue text data and the dialogue text data are in the context corresponding to the same business information, and the reply is targeted content made by the user in response to the pushed dialogue text data; it is therefore determined that the current business operation is not completed, and the reply dialogue text data is updated into the dialogue text data to re-determine the dialogue text data to be pushed. If no business keyword matching the dialogue text data exists, the reply dialogue text data and the dialogue text data are not in the context corresponding to the same business information, and the reply is non-targeted content; it is therefore determined that the current business operation is completed, and at this time a closing remark may further be pushed so that the intelligent robot ends the dialogue.
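The keyword check of steps 501 to 503 can be sketched as follows; the keyword set and sample replies are hypothetical:

```python
# Sketch of the business-keyword check: a shared business keyword means
# the reply is still on-topic, so the business operation is NOT yet
# complete (keyword set and replies are hypothetical).
def business_operation_done(pushed_text_keywords: set, reply_text: str) -> bool:
    reply_words = set(reply_text.lower().split())
    return not (pushed_text_keywords & reply_words)

keywords = {"insurance", "premium"}
still_open = business_operation_done(keywords, "how much is the insurance premium")
done = business_operation_done(keywords, "sorry, not interested, goodbye")
```

When the function returns False, the reply would be fed back in as new dialog text data; when it returns True, a closing remark can be pushed.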
For further definition and illustration, in one embodiment of the present invention, the step of determining user portrait data matching the dialog text data comprises: determining the service information corresponding to the target context dialog, and parsing the service keywords and identity keywords in the dialog text data; and extracting user portrait data matching the service keywords and the identity keywords from a user portrait database.
In the embodiment of the present invention, in order to accurately determine the dialog text data to be pushed, the user portrait data matching the dialog text data is determined. Specifically, the business information matching the target context dialog is first determined; the business information includes, but is not limited to, insurance sales, product introduction, information collection, and so on, and the embodiment of the present invention is not particularly limited. Meanwhile, the service keywords and identity keywords corresponding to the business information are parsed from the dialogue text data based on natural language processing technology. For example, if the business information is insurance marketing and the dialogue text data is "I want to ask whether I can buy insurance A at age 33", the parsed service keyword is "insurance A" and the identity keyword is "age 33". Further, the user portrait data corresponding to the service keyword and the identity keyword is extracted from the user portrait database, in which all user portrait data corresponding to different business information is stored; for example, the user portrait data of 33-year-old users who purchased insurance A is extracted from the user portrait database according to the service keyword "insurance A" and the identity keyword "age 33". The embodiment of the present invention is not particularly limited in this respect.
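The keyword parsing and portrait lookup can be sketched as below; the regular-expression patterns and the portrait database contents are hypothetical:

```python
# Sketch of parsing service and identity keywords from dialog text and
# looking up matching user portrait data (patterns and database are
# hypothetical; a real system would use full NLP parsing).
import re

PORTRAIT_DB = {("insurance A", "33"): {"segment": "age-33 buyers of insurance A"}}

def parse_keywords(dialog_text: str):
    service = re.search(r"insurance \w+", dialog_text)
    identity = re.search(r"age (\d+)", dialog_text)
    return (service.group(0) if service else None,
            identity.group(1) if identity else None)

service_kw, identity_kw = parse_keywords(
    "I want to ask whether I can buy insurance A at age 33")
portrait = PORTRAIT_DB.get((service_kw, identity_kw))
```

The extracted portrait data then joins the semantic and emotion features as input to the ranking step.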
Compared with the prior art, the embodiment of the invention provides a method for pushing dialog text, which comprises: obtaining dialog text data in a target context dialog; extracting dialogue semantic feature information and dialogue emotion feature information from the dialog text data based on a text feature processing model for which model training has been completed, and determining user portrait data matching the dialog text data; and determining and pushing dialog text data to be pushed that matches the dialog text data based on the dialogue semantic feature information, the dialogue emotion feature information, and the user portrait data. This meets the requirement of pushing targeted text data for dialogues with the user, realizes diversified user dialogue scenes, enables the user to reply more accurately based on the pushed dialog text data, and improves the accuracy and effectiveness of man-machine dialogue.
Further, as an implementation of the method shown in fig. 1, an embodiment of the present invention provides a device for pushing a dialog text, where as shown in fig. 7, the device includes:
an obtaining module 61, configured to obtain dialog text data in the target contextual dialog;
a determining module 62, configured to extract dialogue semantic feature information and dialogue emotion feature information from the dialogue text data based on a text feature processing model for which model training has been completed, and determine user portrait data matching the dialogue text data;
and the pushing module 63 is configured to determine dialog text data to be pushed, which is matched with the dialog text data, based on the dialog semantic feature information, the dialog emotion feature information, and the user portrait data, and push the dialog text data.
Further, the apparatus further comprises: a training module,
the acquisition module is further used for acquiring historical dialogue text data from a preset historical dialogue text database, performing context analysis on the historical dialogue text data, and determining dialogue semantic feature information and dialogue emotion feature information with training labels;
and the training module is used for carrying out model training on the initial text feature processing model based on the historical dialogue text data, the dialogue semantic feature information with the training labels and the dialogue emotional feature information to obtain a text feature processing model for completing the model training.
Further, the determining module includes:
the acquisition unit is used for acquiring historical conversation text data from a preset historical conversation text database and determining at least one piece of reference conversation text data matched with the conversation semantic feature information, the conversation emotion feature information and the user portrait data, wherein the conversation emotion feature information comprises user emotion feature information and transaction degree feature information;
the processing unit is used for ranking the reference dialogue text data according to the dialogue semantic feature information, the dialogue emotion feature information, and the user portrait data through a feature ranking model for which model training is completed, to obtain reference dialogue text data with ranking marks;
and the determining unit is used for determining the reference dialog text data corresponding to the first ranking mark as the dialog text data to be pushed.
Further, the acquisition unit includes:
the first determining subunit is used for performing context analysis on the historical dialogue text data to determine historical semantic feature information and historical dialogue emotional feature information;
the second determining subunit is used for matching the dialogue semantic feature information and the dialogue emotion feature information with the historical dialogue semantic feature information and the historical dialogue emotion feature information to determine first reference dialogue text data;
and the screening subunit is used for screening second reference dialogue text data from the first reference dialogue text data according to the identity feature information and the transaction feature information in the user portrait data, to serve as at least one piece of reference dialogue text data to be ranked.
Further, the apparatus further comprises: an updating module,
The updating module is used for acquiring the input reply dialog text data and updating the input reply dialog text data into the preset historical dialog text database to perform model training on the text feature processing model again if the business operation matched with the target context dialog is completed after the pushing of the dialog text data to be pushed is completed;
and the updating module is further used for acquiring the input reply dialog text data if the business operation matched with the target context dialog is not completed after the push of the dialog text data to be pushed is completed, and updating the dialog text data in the target context dialog based on the reply dialog text data so as to extract the dialog semantic feature information and the dialog emotion feature information again.
Further, the apparatus further comprises: a detection module,
the detection module is used for detecting whether a service keyword matched with the dialog text data exists in reply dialog text data after the dialog text data to be pushed is pushed, wherein the service keyword is a word having a service dialog relation with the dialog text data;
the determining module is further configured to determine that the business operation is not completed if a business keyword matched with the dialog text data exists, so as to update the reply dialog text data to the dialog text data;
the determining module is further configured to determine that the service operation is completed if no service keyword matching the dialogue text data exists.
Further, the determining module includes:
the analysis unit is used for determining the service information corresponding to the target context dialog and analyzing the service keywords and the identity keywords in the dialog text data;
and the extraction unit is used for extracting user portrait data matched with the service keywords and the identity keywords from a user portrait database, and all user portrait data corresponding to different service information are stored in the user portrait database.
Compared with the prior art, the embodiment of the invention provides a device for pushing dialog text. The device obtains dialog text data in a target context dialog; extracts dialogue semantic feature information and dialogue emotion feature information from the dialog text data based on a text feature processing model for which model training has been completed, and determines user portrait data matching the dialog text data; and determines and pushes dialog text data to be pushed that matches the dialog text data based on the dialogue semantic feature information, the dialogue emotion feature information, and the user portrait data. This meets the requirement of pushing targeted text data for dialogues with the user, realizes diversified user dialogue scenes, enables the user to reply more accurately based on the pushed dialog text data, and improves the accuracy and effectiveness of man-machine dialogue.
According to an embodiment of the present invention, a computer storage medium is provided, where at least one executable instruction is stored, and the computer executable instruction can execute the pushing method of dialog text in any of the method embodiments described above.
Fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the terminal.
As shown in fig. 8, the terminal may include: a processor (processor)702, a Communications Interface 704, a memory 706, and a communication bus 708.
Wherein: the processor 702, communication interface 704, and memory 706 communicate with each other via a communication bus 708.
A communication interface 704 for communicating with network elements of other devices, such as clients or other servers.
The processor 702 is configured to execute the program 710, and may specifically execute the relevant steps in the above embodiments of the method for pushing dialog text.
In particular, the program 710 may include program code that includes computer operating instructions.
The processor 702 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the present invention. The terminal comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 706 stores a program 710. The memory 706 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 710 may be specifically configured to cause the processor 702 to perform the following operations:
obtaining dialogue text data in the target context dialogue;
extracting dialogue semantic feature information and dialogue emotion feature information from the dialogue text data based on a text feature processing model for which model training has been completed, and determining user portrait data matching the dialogue text data;
and determining dialog text data to be pushed matched with the dialog text data based on the dialog semantic feature information, the dialog emotion feature information and the user portrait data, and pushing.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The method and system of the present invention may be implemented in a number of ways. For example, the methods and systems of the present invention may be implemented in software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustrative purposes only, and the steps of the method of the present invention are not limited to the order specifically described above unless specifically indicated otherwise. Furthermore, in some embodiments, the present invention may also be embodied as a program recorded in a recording medium, the program including machine-readable instructions for implementing a method according to the present invention. Thus, the present invention also covers a recording medium storing a program for executing the method according to the present invention.
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to practitioners skilled in this art. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A method for pushing dialog text, comprising:
obtaining dialogue text data in the target context dialogue;
extracting dialogue semantic feature information and dialogue emotion feature information from the dialogue text data based on a text feature processing model for which model training has been completed, and determining user portrait data matching the dialogue text data;
and determining dialog text data to be pushed matched with the dialog text data based on the dialog semantic feature information, the dialog emotion feature information and the user portrait data, and pushing.
2. The method of claim 1, wherein before extracting the dialogue semantic feature information and the dialogue emotion feature information in the dialogue text data based on the text feature processing model with model training completed, the method further comprises:
obtaining historical dialogue text data from a preset historical dialogue text database, and performing context analysis on the historical dialogue text data to determine dialogue semantic feature information and dialogue emotion feature information with training labels;
and performing model training on the initial text feature processing model based on the historical dialogue text data, the dialogue semantic feature information with the training labels and the dialogue emotional feature information to obtain a text feature processing model for completing the model training.
3. The method of claim 1 or 2, wherein the determining dialog text data to be pushed that matches the dialog text data based on the dialog semantic feature information, the dialog emotion feature information, and the user portrait data comprises:
obtaining historical conversation text data from a preset historical conversation text database, and determining at least one piece of reference conversation text data matched with the conversation semantic feature information, the conversation emotion feature information and the user portrait data, wherein the conversation emotion feature information comprises user emotion feature information and transaction degree feature information;
ranking the reference dialogue text data according to the dialogue semantic feature information, the dialogue emotion feature information, and the user portrait data through a feature ranking model for which model training is completed, to obtain reference dialogue text data with ranking marks;
and determining the reference dialog text data corresponding to the first ranking mark as the dialog text data to be pushed.
4. The method of claim 3, wherein said determining at least one reference dialog text data that matches the dialog semantic feature information, the dialog emotion feature information, the user portrait data comprises:
performing context analysis on the historical dialogue text data to determine historical semantic feature information and historical dialogue emotional feature information;
matching the dialogue semantic feature information and the dialogue emotion feature information with the historical dialogue semantic feature information and the historical dialogue emotion feature information to determine first reference dialogue text data;
and screening second reference dialogue text data from the first reference dialogue text data according to the identity feature information and the transaction feature information in the user portrait data, to serve as at least one piece of reference dialogue text data to be ranked.
5. The method of claim 1, wherein after determining the dialog text data to be pushed that matches the dialog text data based on the dialog semantic feature information, the dialog emotion feature information, and the user portrait data, and pushing, the method further comprises:
if the business operation matched with the target context dialog is completed after the pushing of the dialog text data to be pushed is completed, acquiring the input reply dialog text data, and updating the input reply dialog text data into the preset historical dialog text database so as to perform model training on the text feature processing model again;
and if the business operation matched with the target contextual dialogue is not completed after the pushing of the dialogue text data to be pushed is completed, acquiring the input reply dialogue text data, and updating the dialogue text data in the target contextual dialogue based on the reply dialogue text data so as to extract the dialogue semantic feature information and the dialogue emotional feature information again.
6. The method of claim 5, further comprising:
detecting, after the dialog text data to be pushed is pushed, whether a service keyword matched with the dialog text data exists in the reply dialog text data, wherein the service keyword is a word having a service dialog relation with the dialog text data;
if a service keyword matched with the dialog text data exists, determining that the service operation is not completed, and updating the reply dialog text data into the dialog text data;
and if no service keyword matched with the dialog text data exists, determining that the service operation is completed.
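The claim-6 completion check above can be illustrated with a minimal sketch (not part of the claims; keyword matching by simple substring containment is an assumption, and both function names are hypothetical):

```python
def check_business_operation(reply_text, service_keywords):
    """The operation counts as unfinished while the reply still contains a
    keyword having a service-dialog relation with the pushed text.
    Returns (completed, matched_keywords)."""
    hits = [kw for kw in service_keywords if kw in reply_text]
    return len(hits) == 0, hits

def advance_dialogue(dialog_text, reply_text, service_keywords):
    """If unfinished, the reply replaces the current dialogue text so that
    semantic and emotion features can be re-extracted on the next turn."""
    completed, _ = check_business_operation(reply_text, service_keywords)
    return (dialog_text, True) if completed else (reply_text, False)
```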
7. The method of any of claims 1-6, wherein determining the user portrait data that matches the dialog text data comprises:
determining service information corresponding to the target context dialog, and analyzing service keywords and identity keywords in the dialog text data;
and extracting user portrait data matched with the service keywords and the identity keywords from a user portrait database, wherein all user portrait data corresponding to different service information are stored in the user portrait database.
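As a rough sketch of the claim-7 lookup (not part of the claims; indexing the portrait database by a (service keyword, identity keyword) pair and matching keywords by substring containment are assumptions):

```python
def parse_keywords(dialog_text, service_vocab, identity_vocab):
    """Pick the first service keyword and identity keyword found in the
    dialogue text; vocabularies would come from the service information
    of the target context dialog."""
    service = next((w for w in service_vocab if w in dialog_text), None)
    identity = next((w for w in identity_vocab if w in dialog_text), None)
    return service, identity

def extract_user_portrait(portrait_db, service_kw, identity_kw):
    """Portrait records are assumed to be indexed by the keyword pair."""
    return portrait_db.get((service_kw, identity_kw))
```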
8. A device for pushing dialog text, comprising:
the acquisition module is used for acquiring dialogue text data in the target context dialogue;
the determining module is used for extracting dialogue semantic feature information and dialogue emotion feature information in the dialogue text data based on a trained text feature processing model, and for determining user portrait data matched with the dialogue text data;
and the pushing module is used for determining and pushing the dialog text data to be pushed, which is matched with the dialog text data, based on the dialog semantic feature information, the dialog emotion feature information and the user portrait data.
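One way to picture the claim-8 device is as three modules wired into a pipeline. The sketch below is purely illustrative (not part of the claims): the class and method names are hypothetical, and the feature model is assumed to be any object exposing an `extract(text) -> (semantic, emotion)` method.

```python
class DialogTextPusher:
    """Acquisition, determining, and pushing modules composed in one device."""

    def __init__(self, feature_model, portrait_lookup, candidate_ranker):
        self.feature_model = feature_model      # trained text feature processing model (stubbed here)
        self.portrait_lookup = portrait_lookup  # callable: dialog text -> user portrait data
        self.candidate_ranker = candidate_ranker  # callable: features + portrait -> text to push

    def acquire(self, context_dialog):
        # acquisition module: pull the latest dialogue text from the target context dialog
        return context_dialog["dialog_text"]

    def determine(self, dialog_text):
        # determining module: features from the trained model plus the matching portrait
        semantic, emotion = self.feature_model.extract(dialog_text)
        portrait = self.portrait_lookup(dialog_text)
        return semantic, emotion, portrait

    def push(self, context_dialog):
        # pushing module: rank candidates and return the dialog text data to be pushed
        text = self.acquire(context_dialog)
        semantic, emotion, portrait = self.determine(text)
        return self.candidate_ranker(semantic, emotion, portrait)
```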
9. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the dialog text pushing method according to any one of claims 1-7.
10. A terminal, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation corresponding to the dialog text pushing method according to any one of claims 1-7.
CN202210080353.1A 2022-01-24 2022-01-24 Dialogue text pushing method and device, storage medium and terminal Pending CN114610863A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210080353.1A CN114610863A (en) 2022-01-24 2022-01-24 Dialogue text pushing method and device, storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210080353.1A CN114610863A (en) 2022-01-24 2022-01-24 Dialogue text pushing method and device, storage medium and terminal

Publications (1)

Publication Number Publication Date
CN114610863A true CN114610863A (en) 2022-06-10

Family

ID=81857845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210080353.1A Pending CN114610863A (en) 2022-01-24 2022-01-24 Dialogue text pushing method and device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN114610863A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118193683A (en) * 2024-05-14 2024-06-14 福州掌中云科技有限公司 Text recommendation method and system based on language big model


Similar Documents

Publication Publication Date Title
CN107705066B (en) Information input method and electronic equipment during commodity warehousing
US9740677B2 (en) Methods and systems for analyzing communication situation based on dialogue act information
US9792279B2 (en) Methods and systems for analyzing communication situation based on emotion information
CN108027814B (en) Stop word recognition method and device
CN110019742B (en) Method and device for processing information
CN111062220B (en) End-to-end intention recognition system and method based on memory forgetting device
CN110955750A (en) Combined identification method and device for comment area and emotion polarity, and electronic equipment
CN111143530A (en) Intelligent answering method and device
CN115146712B (en) Internet of things asset identification method, device, equipment and storage medium
CN112905665A (en) Express delivery data mining method, device, equipment and storage medium
CN108205524B (en) Text data processing method and device
CN111368066B (en) Method, apparatus and computer readable storage medium for obtaining dialogue abstract
CN114783421A (en) Intelligent recommendation method and device, equipment and medium
CN110633475A (en) Natural language understanding method, device and system based on computer scene and storage medium
CN111639162A (en) Information interaction method and device, electronic equipment and storage medium
CN113051380A (en) Information generation method and device, electronic equipment and storage medium
CN109684444A (en) A kind of intelligent customer service method and system
CN110795942B (en) Keyword determination method and device based on semantic recognition and storage medium
CN112581297B (en) Information pushing method and device based on artificial intelligence and computer equipment
CN110750626B (en) Scene-based task-driven multi-turn dialogue method and system
CN108984777B (en) Customer service method, apparatus and computer-readable storage medium
CN114610863A (en) Dialogue text pushing method and device, storage medium and terminal
CN113609865A (en) Text emotion recognition method and device, electronic equipment and readable storage medium
CN114528851B (en) Reply sentence determination method, reply sentence determination device, electronic equipment and storage medium
CN112989003B (en) Intention recognition method, device, processing equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100102 201 / F, block C, 2 lizezhong 2nd Road, Chaoyang District, Beijing

Applicant after: Beijing Shuidi Technology Group Co.,Ltd.

Address before: 100102 201, 2 / F, block C, No.2 lizezhong 2nd Road, Chaoyang District, Beijing

Applicant before: Beijing Health Home Technology Co.,Ltd.