CN118095443B - Training method and equipment for generating large text model according to facts


Info

Publication number
CN118095443B
CN118095443B
Authority
CN
China
Prior art keywords
information
sentence
content
meaning
webpage
Prior art date
Legal status
Active
Application number
CN202410478429.5A
Other languages
Chinese (zh)
Other versions
CN118095443A (en)
Inventor
杨恒
龙涛
余文炫
李轩
吴永杰
李娟
陈序
Current Assignee
Shenzhen Aimo Technology Co ltd
Original Assignee
Shenzhen Aimo Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Aimo Technology Co., Ltd.
Priority to CN202410478429.5A
Publication of CN118095443A
Application granted
Publication of CN118095443B
Legal status: Active
Anticipated expiration


Landscapes

  • Information Transfer Between Computers (AREA)

Abstract

The invention relates to the technical field of the Internet, and in particular to a training method and training equipment for generating a large text model according to facts. The method comprises the following steps: acquiring at least one keyword/sentence based on content input by a user; acquiring web page information of the domain associated with the keyword/sentence; labeling the content of the web page information according to the keyword/sentence to generate a web page structure extraction data set; optimizing a web page structure prediction model with the data set to establish an optimized model, and integrating the optimized model into an information retrieval system; cleaning the web page information with the optimized model to obtain plain text information; and inputting the keyword/sentence and the plain text information into the LLM to obtain optimized information. The method solves the technical problem of low web page data quality when retrieving information.

Description

Training method and equipment for generating large text model according to facts
Technical Field
The invention relates to the technical field of the Internet, and in particular to a training method and training equipment for generating a large text model according to facts.
Background
When a user accesses the Internet, the quality of LLM output is often determined by retrieval-augmented generation (RAG), for example in the extraction of structured information from pictures and web pages. The prior art often neglects the extraction of web page structure in specific domains (such as news pages), so that the extracted information is incomplete, or the page content is not extracted at all and web page markup such as "<div><html></title>" is taken directly as valid information, stored as vectors and used in subsequent retrieval. This leads to erroneous retrieval, and useless information is passed through the LLM into the user's web page.
In summary, the technical problem to be solved by the present invention is the low quality of web page data when retrieving information.
Disclosure of Invention
The invention aims to provide a training method and training equipment for a large text model that generates text according to facts, so as to solve the technical problem of low web page data quality when retrieving information. Preferred versions of the technical solutions provided by the present invention can produce the technical effects described below.
In order to achieve the above purpose, the present invention provides the following technical solutions:
The invention provides a training method for generating a large text model according to facts, which comprises the following steps:
acquiring at least one keyword/sentence based on content input by a user;
acquiring web page information of the domain associated with the keyword/sentence, according to the keyword/sentence;
labeling the content of the web page information according to the keyword/sentence to generate a web page structure extraction data set;
optimizing a web page structure prediction model with the data set to establish an optimized model, and integrating the optimized model into an information retrieval system;
cleaning the web page information with the optimized model to obtain plain text information;
and inputting the keyword/sentence and the plain text information into the LLM to obtain optimized information.
When the content of the web page information is labeled according to the keyword/sentence, the labeling is based on the degree to which the content coincides with the word meaning/sentence meaning of at least one keyword/sentence.
When the content of the web page information is labeled according to the word meaning/sentence meaning of the keyword/sentence, the semantics of the content are analyzed, and labeling is carried out according to the degree of coincidence between the word meaning/sentence meaning of the keyword/sentence and the semantics of the web page content, so as to generate the web page structure data set.
Preferably, when acquiring web page information of the domain associated with the keyword/sentence, the word meaning/sentence meaning of the keyword/sentence is analyzed, and web page information of the domain associated with that meaning is acquired accordingly.
Preferably, when the content of the web page information is labeled according to the keywords/sentences, content that is related to at least one keyword/sentence is valid information, and content that is unrelated to the keywords/sentences is invalid information.
Preferably, a coincidence degree threshold is preset. When labeling is performed according to the degree of coincidence between the word meaning/sentence meaning of the keyword/sentence and the semantics of the web page content, the coincidence value between the semantics of the content and the word meaning/sentence meaning of at least one keyword/sentence is analyzed and compared with the threshold: the content is invalid information when the coincidence value is smaller than the threshold, and valid information when the coincidence value is greater than or equal to the threshold.
Preferably, when the optimized model is established after the web page structure prediction model is optimized with the data set, the optimized model is established from the valid information of the web page information.
In view of the above, a second object of the present invention is to provide a computer-readable storage medium having a computer program stored thereon, the computer program implementing the steps of the training method when executed.
In view of the above, a third object of the present invention is to provide a training system, comprising: one or more processors;
and a memory for storing one or more computer programs, wherein the one or more processors are configured to execute the one or more computer programs stored in the memory, so as to cause the one or more processors to perform the training method described above.
In view of the above, a fourth object of the present invention is to provide a terminal device provided with the training system described above.
By implementing one of the above technical solutions, the invention has the following advantages or beneficial effects: by establishing an optimized model from the data set and feeding its output through the LLM into the user's web page, the technical problem of low web page data quality when retrieving information is solved; by labeling according to the degree of coincidence between the word meaning/sentence meaning of the keyword/sentence and the semantics of the web page content, the content retrieved from the web page is more comprehensive; and by improving the quality of the web page structure when retrieving information, user experience and efficiency are improved.
Drawings
For a clearer description of the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the following drawings show only some embodiments of the present invention, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a flow chart of the web page structure training method according to the present invention.
Detailed Description
For a better understanding of the objects, technical solutions and advantages of the present invention, reference is made to the exemplary embodiments described hereinafter with reference to the accompanying drawings, which form a part hereof. The same reference numbers in different drawings identify the same or similar elements unless expressly stated otherwise. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; they are merely examples of processes, methods and apparatuses consistent with certain aspects of the disclosure as detailed in the appended claims. Other embodiments may be utilized, and structural and functional modifications may be made to the embodiments set forth herein, without departing from the scope and spirit of the present disclosure.
In the description of the present invention, it should be understood that terms such as "center," "longitudinal" and "transverse" indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience and simplicity of description, rather than to indicate or imply that the elements referred to must have a particular orientation or be constructed and operated in a particular orientation. The terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. The term "plurality" means two or more. The terms "connected" and "coupled" are to be construed broadly and may mean, for example, fixedly connected, detachably connected, integrally connected, mechanically connected, electrically connected, communicatively connected, directly connected, or indirectly connected via intermediaries, and may refer to communication between two elements or an interaction relationship between them. The term "and/or" includes any and all combinations of one or more of the associated listed items. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In order to illustrate the technical solutions of the present invention, the following description is made by way of specific embodiments; only the portions related to the embodiments of the present invention are shown.
Embodiment one:
As shown in FIG. 1, the invention provides a training method for generating a large text model according to facts, which specifically comprises the following steps:
Step S100: acquiring at least one keyword/sentence based on content input by a user. It should be noted that when a user needs to retrieve information about a specific domain, the content the user enters in a dialog box, search field or the like is analyzed to obtain the keywords/sentences. It should further be noted that the keywords/sentences should include keywords.
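By way of illustration only, a minimal Python sketch of one possible keyword/sentence extraction for step S100 follows; the embodiment does not prescribe a particular algorithm, so the stop-word list, the frequency-based ranking and the function names are assumptions introduced here.

```python
import re
from collections import Counter

# Hypothetical stop-word list; a real deployment would use a domain-specific one.
STOP_WORDS = {"the", "a", "an", "of", "for", "and", "to", "in", "is", "what", "how"}

def extract_keywords_and_sentences(user_input: str, top_k: int = 5):
    """Split the user's input into candidate sentences and frequency-ranked keywords."""
    sentences = [s.strip() for s in re.split(r"[.!?。！？]", user_input) if s.strip()]
    tokens = [t.lower() for t in re.findall(r"\w+", user_input)]
    counts = Counter(t for t in tokens if t not in STOP_WORDS)
    keywords = [w for w, _ in counts.most_common(top_k)]
    return keywords, sentences

if __name__ == "__main__":
    kws, sents = extract_keywords_and_sentences(
        "What did the central bank announce about interest rates today?")
    print(kws, sents)
```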
Step S200: acquiring web page information of the domain associated with the keywords/sentences. It should be noted that, according to the keywords/sentences obtained in step S100, web pages in domains related to and/or the same as the keywords/sentences are queried in an Internet database, and the web page information includes the queried content. The data source is not limited to an Internet database; other customized databases or any other queryable database may also be used, so that the query results are more comprehensive and complete.
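A minimal sketch of step S200 is given below, assuming a hypothetical search endpoint and response schema (the `search_endpoint` URL and the `results`/`url` fields are placeholders, not part of the patent); the `requests` library is used only as one possible HTTP client.

```python
import requests

def fetch_candidate_pages(keywords, search_endpoint="https://example-search.local/api", limit=10):
    """Query a (hypothetical) search service for pages related to the keywords
    and download the raw HTML of each hit."""
    resp = requests.get(search_endpoint, params={"q": " ".join(keywords), "n": limit}, timeout=10)
    resp.raise_for_status()
    pages = []
    for hit in resp.json().get("results", []):  # assumed response schema
        try:
            html = requests.get(hit["url"], timeout=10).text
            pages.append({"url": hit["url"], "html": html})
        except requests.RequestException:
            continue  # skip pages that cannot be downloaded
    return pages
```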
Step S300: labeling the content of the web page information according to the keywords/sentences to generate a web page structure extraction data set. It should be noted that after the web pages related to and/or in the same domain as the keywords/sentences have been obtained in step S200, the content of the web page information is labeled with the keywords/sentences. In this embodiment the content is labeled at least as valid information and invalid information, where valid information means content that is related to at least one keyword/sentence and invalid information means content that is unrelated to the keywords/sentences. The labeling may be done manually or by other intelligent algorithms.
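As one possible automatic labeling sketch for step S300 (an assumption, since the embodiment also allows manual labeling), the page can be split into block-level HTML elements and each block weakly labeled by keyword match; BeautifulSoup is used here only as a convenient HTML parser.

```python
from bs4 import BeautifulSoup  # assumes beautifulsoup4 is installed

def label_html_blocks(html: str, keywords):
    """Split a page into block-level HTML elements and give each a weak label:
    1 (valid) if its text mentions any keyword, 0 (invalid) otherwise."""
    soup = BeautifulSoup(html, "html.parser")
    dataset = []
    for block in soup.find_all(["div", "p", "li", "td", "span", "article"]):
        text = block.get_text(" ", strip=True)
        if not text:
            continue
        label = int(any(kw.lower() in text.lower() for kw in keywords))
        dataset.append({"html": str(block), "text": text, "label": label})
    return dataset
```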
Step S400: optimizing the web page structure prediction model with the data set generated in step S300, establishing an optimized model, and integrating the optimized model into the information retrieval system. It should be noted that the prediction model learns, from the data set acquired in step S300 (which contains both valid and invalid information), to distinguish the main text from the invalid text of each HTML block. After tuning on the valid information, an optimized model is established that retains only the HTML blocks containing valid information, thereby reducing the interference caused by HTML blocks containing invalid information.
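The patent does not specify the architecture of the prediction model; purely as a stand-in, the following sketch fits a TF-IDF plus logistic-regression classifier over the labeled block texts from step S300.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def train_block_classifier(dataset):
    """Fit a simple valid/invalid classifier over labeled HTML-block text.
    TF-IDF + logistic regression is only a stand-in for the (unspecified)
    web page structure prediction model."""
    texts = [row["text"] for row in dataset]
    labels = [row["label"] for row in dataset]
    model = Pipeline([
        ("tfidf", TfidfVectorizer(max_features=20000)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(texts, labels)
    return model
```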
Step S500: cleaning the web page information with the optimized model established in step S400 to obtain plain text information, i.e., clean text.
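Continuing the same illustrative sketch (and reusing the hypothetical `label_html_blocks` and trained `model` from the preceding sketches), step S500 can be approximated by keeping only the blocks predicted as valid and joining their text.

```python
def clean_page(html: str, model, keywords):
    """Keep only the blocks the optimized model predicts as valid and join their
    text into plain text, discarding markup and boilerplate blocks.
    Reuses label_html_blocks() from the earlier sketch."""
    blocks = label_html_blocks(html, keywords)
    if not blocks:
        return ""
    preds = model.predict([b["text"] for b in blocks])  # 1 = valid, 0 = invalid
    return "\n".join(b["text"] for b, p in zip(blocks, preds) if p == 1)
```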
Step S600: inputting the keywords/sentences and the plain text information into the LLM to obtain the optimized information, so that the quality of the web page structure during information retrieval is improved and the text on the page is cleaner and more effective.
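A minimal sketch of step S600 follows; `call_llm` is a placeholder for whatever LLM client the deployment actually uses (the patent names no specific model or API), and the prompt wording is an assumption intended only to show how the cleaned plain text serves as grounding context.

```python
def build_prompt(keywords, plain_text: str) -> str:
    """Assemble a retrieval-augmented prompt: the cleaned plain text is the
    factual context and the keywords/sentences state the user's request."""
    return (
        "Answer strictly according to the reference text below.\n"
        f"Reference text:\n{plain_text}\n\n"
        f"Question / keywords: {', '.join(keywords)}\n"
        "If the reference text does not contain the answer, say so."
    )

def generate_optimized_answer(keywords, plain_text, call_llm):
    # `call_llm` stands in for the actual LLM client function.
    return call_llm(build_prompt(keywords, plain_text))
```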
As an alternative implementation, when acquiring web page information of the domain associated with the keyword/sentence, the word meaning/sentence meaning of the keyword/sentence is analyzed, and web page information of the domain associated with that meaning, such as paraphrases or synonyms of the keyword/sentence, is also acquired.
As an alternative implementation, when the content of the web page information is labeled according to the keywords/sentences, the labeling is based on the degree to which the content coincides with the word meaning/sentence meaning of the keywords/sentences.
As an alternative embodiment, when the content of the web page information is labeled according to the word meaning/sentence meaning of the keyword/sentence, the semantics of the content are analyzed, and the labeling is carried out according to the degree to which the word meaning/sentence meaning of the keyword/sentence matches the semantics of the content.
As an alternative implementation, when the content of the web page information is labeled according to the keywords/sentences, content related to the keywords/sentences is labeled as valid information, and content unrelated to the keywords/sentences is labeled as invalid information. Similarly, when the content is labeled according to the word meaning/sentence meaning of the keyword/sentence, content related to that meaning is labeled as valid information and content unrelated to it is labeled as invalid information. Likewise, when labeling is performed according to the degree of coincidence between the meaning of the keyword/sentence and the semantics of the content, content whose semantics coincide with the meaning of the keyword/sentence is labeled as valid information, and content whose semantics do not coincide is labeled as invalid information.
As an alternative implementation, a coincidence degree threshold is preset. When labeling is performed according to the degree of coincidence between the meaning of the keyword/sentence and the semantics of the web page content, the coincidence value between the semantics of the content and the meaning of the keyword/sentence is computed and compared with the threshold: the content is labeled as invalid information when the coincidence value is smaller than the threshold, and as valid information when the coincidence value is greater than or equal to the threshold.
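One way such a coincidence measure could be realized is sketched below; TF-IDF cosine similarity and the threshold value 0.3 are illustrative assumptions, since the patent does not fix a particular similarity measure or threshold.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def label_by_coincidence(block_texts, keyword_sentence, threshold=0.3):
    """Score each block's semantic overlap with the keyword/sentence and label it
    valid (1) when the score reaches the preset threshold, invalid (0) otherwise."""
    vec = TfidfVectorizer()
    matrix = vec.fit_transform([keyword_sentence] + list(block_texts))
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return [(text, float(s), int(s >= threshold)) for text, s in zip(block_texts, scores)]
```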
As an alternative implementation, when the optimized model is established after the web page structure prediction model is optimized with the data set, the optimized model is established from the content labeled as valid information. Only the valid information is passed through the LLM and presented on the user's web page interface.
This embodiment is only a specific example and does not limit the invention to this single implementation.
Embodiment two:
The second embodiment differs from the first in that it provides a training system that applies the training method of embodiment one. The system comprises an acquisition module, an interaction module, a data set module, a building module, a cleaning module and an input module; wherein,
The acquisition module is used for obtaining at least one keyword/sentence from the content input by the user. It should be noted that when a user needs to retrieve information about a specific domain, the content the user enters in a dialog box, search field or the like is analyzed to obtain the keywords/sentences, and the keywords/sentences should include keywords.
The interaction module is used for acquiring web page information of the domain associated with at least one keyword/sentence. It should be noted that, according to the keywords/sentences obtained by the acquisition module, web pages in domains related to and/or the same as the keywords/sentences are queried in an Internet database, and the web page information includes the queried content. The data source is not limited to an Internet database; other customized databases or any other queryable database may also be used, so that the query results are more comprehensive and complete.
The data set module is used for labeling the content of the web page information to generate the extraction data set. It should be noted that after the interaction module has queried the web pages related to and/or in the same domain as the keywords/sentences, the content of the web page information is labeled, by manual annotation or by an intelligent algorithm, to obtain the extraction data set. In this embodiment the content is labeled at least as valid information and invalid information, where valid information means content related to at least one keyword/sentence and invalid information means content unrelated to the keywords/sentences.
The building module is used for optimizing the prediction model with the data set, establishing the optimized model, and integrating the optimized model into the information retrieval system. It should be noted that the prediction model learns, from the data set acquired by the data set module (which contains both valid and invalid information), to distinguish the main text from the invalid text of each HTML block; after tuning on the valid information, an optimized model is established that retains only the HTML blocks containing valid information, thereby reducing interference from HTML blocks containing invalid information.
The cleaning module is used for cleaning the web page information with the optimized model to obtain the plain text information.
The input module is used for inputting the keywords/sentences entered by the user and the plain text information obtained by the cleaning module into the LLM to obtain the optimized information.
Embodiment III:
Those of ordinary skill in the art will appreciate that all or part of the features/steps of the method embodiments described above may be implemented by a method, a data processing system or a computer program, and may be implemented in hardware, in software, or in a combination of hardware and software. The aforementioned computer program may be stored in one or more computer-readable storage media; when the stored program is executed (for example, by a processor), it performs a training method including that of embodiment one.
Storage media capable of storing the program code include: static disks, solid-state disks, static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), optical storage, magnetic storage, flash memory, and magnetic or optical disks; any type of volatile or non-volatile storage device, or combination thereof, may be used.
Embodiment four:
The fourth embodiment provides a training system comprising one or more processors and a memory, wherein the memory is configured to store one or more computer programs, and the one or more processors are configured to execute the one or more computer programs stored in the memory, so that the processors perform the training method of embodiment one.
Fifth embodiment:
In this embodiment, a terminal device is provided with the training system described above. It should be noted that the terminal device may be implemented in various forms. For example, the terminal device described in the present invention may include a mobile terminal such as a mobile phone, a smart phone, a notebook computer, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player) or a navigation device, and a fixed terminal such as a digital TV or a desktop computer. In addition, it will be understood by those skilled in the art that the configuration according to the embodiment of the present invention can also be applied to fixed terminals, apart from elements used particularly for mobile purposes.
The foregoing is only illustrative of the preferred embodiments of the application, and it will be appreciated by those skilled in the art that various changes in the features and embodiments may be made and equivalents may be substituted without departing from the spirit and scope of the application. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the application without departing from the essential scope thereof. Therefore, it is intended that the application not be limited to the particular embodiment disclosed, but that the application will include all embodiments falling within the scope of the appended claims.

Claims (8)

1. A training method for generating a large text model according to facts, characterized by comprising the following steps:
acquiring at least one keyword/sentence based on content input by a user;
acquiring web page information of the domain associated with the keyword/sentence, according to the keyword/sentence;
labeling the content of the web page information according to the keyword/sentence to generate a web page structure extraction data set;
optimizing a web page structure prediction model with the data set to establish an optimized model, and integrating the optimized model into an information retrieval system;
cleaning the web page information with the optimized model to obtain plain text information;
inputting the keyword/sentence and the plain text information into the LLM to obtain optimized information;
wherein, when the content of the web page information is labeled according to the keyword/sentence, the labeling is based on the degree to which the content coincides with the word meaning/sentence meaning of at least one keyword/sentence;
and when the content of the web page information is labeled according to the word meaning/sentence meaning of the keyword/sentence, the semantics of the content are analyzed, and labeling is carried out according to the degree of coincidence between the word meaning/sentence meaning of the keyword/sentence and the semantics of the web page content, so as to generate the web page structure data set.
2. The training method according to claim 1, wherein, when acquiring web page information of the domain associated with the keyword/sentence, the word meaning/sentence meaning of the keyword/sentence is analyzed, and web page information of the domain associated with that meaning is acquired accordingly.
3. The training method according to claim 1, wherein, when the content of the web page information is labeled according to the keywords/sentences, content that is related to at least one keyword/sentence is valid information, and content that is unrelated to the keywords/sentences is invalid information.
4. The training method according to claim 3, wherein a coincidence degree threshold is preset; when labeling is performed according to the degree of coincidence between the word meaning/sentence meaning of the keyword/sentence and the semantics of the web page content, the coincidence value between the semantics of the content and the word meaning/sentence meaning of at least one keyword/sentence is analyzed and compared with the threshold, the content being invalid information when the coincidence value is smaller than the threshold and valid information when the coincidence value is greater than or equal to the threshold.
5. The training method according to claim 3 or 4, wherein, when the optimized model is established after the web page structure prediction model is optimized with the data set, the optimized model is established from the valid information of the web page information.
6. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed, implements the training method of any of claims 1-5.
7. A training system for generating a large text model according to facts, comprising:
one or more processors;
a memory for storing one or more computer programs, wherein the one or more processors are configured to execute the one or more computer programs stored in the memory, so as to cause the one or more processors to perform the training method of any of claims 1-5.
8. A terminal device provided with at least one training system according to claim 7.
CN202410478429.5A 2024-04-19 2024-04-19 Training method and equipment for generating large text model according to facts Active CN118095443B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410478429.5A CN118095443B (en) 2024-04-19 2024-04-19 Training method and equipment for generating large text model according to facts

Publications (2)

Publication Number Publication Date
CN118095443A (en) 2024-05-28
CN118095443B (en) 2024-07-05

Family

ID=91151919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410478429.5A Active CN118095443B (en) 2024-04-19 2024-04-19 Training method and equipment for generating large text model according to facts

Country Status (1)

Country Link
CN (1) CN118095443B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117909560A (en) * 2024-01-24 2024-04-19 百度时代网络技术(北京)有限公司 Search method, training device, training equipment, training medium and training program product

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102023491B1 (en) * 2017-10-30 2019-11-04 한림대학교 산학협력단 Method and apparatus for collecting and analyzing text data for analyzing association rules of text data
KR102566928B1 (en) * 2021-09-01 2023-08-14 주식회사 한글과컴퓨터 Electronic apparatus which generates a training set for performing reinforcement learning of the deep learning model for distinguishing user intention, and the operating method thereof
CN115310019A (en) * 2022-08-25 2022-11-08 北京天融信网络安全技术有限公司 Webpage classification method and device, electronic equipment and storage medium
CN117312711A (en) * 2023-09-26 2023-12-29 珍岛信息技术(上海)股份有限公司 Search engine optimization method and system based on AI analysis

Also Published As

Publication number Publication date
CN118095443A (en) 2024-05-28

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant