CN112148884A - Systems and methods for autism intervention - Google Patents

Info

Publication number
CN112148884A
CN112148884A
Authority
CN
China
Prior art keywords
intervention
entity
node
mode
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010852052.7A
Other languages
Chinese (zh)
Other versions
CN112148884B (en)
Inventor
程建宏
宋华俐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Azuaba Technology Co ltd
Original Assignee
Beijing Azuaba Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Azuaba Technology Co ltd filed Critical Beijing Azuaba Technology Co ltd
Priority to CN202010852052.7A priority Critical patent/CN112148884B/en
Publication of CN112148884A publication Critical patent/CN112148884A/en
Application granted granted Critical
Publication of CN112148884B publication Critical patent/CN112148884B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval of unstructured textual data
    • G06F16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 Ontology
    • G06F16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 Querying
    • G06F16/432 Query formulation
    • G06F16/433 Query formulation using audio data
    • G06F16/434 Query formulation using image data, e.g. images, photos, pictures taken by a user
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/70 ICT specially adapted for therapies or health-improving plans relating to mental therapies, e.g. psychological therapy or autogenous training
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Child & Adolescent Psychology (AREA)
  • Computational Linguistics (AREA)
  • Hospice & Palliative Care (AREA)
  • Animal Behavior & Ethology (AREA)
  • Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure relates to a system and method for autism intervention. The system comprises a processing device and an output device. The processing device comprises a processor and a memory, the memory storing executable instructions that, when the processing device is running, control the processor to perform the following: acquiring an intervention knowledge graph for autism, wherein the intervention knowledge graph comprises task nodes and mode nodes, the content of a task node representing a task for performing an autism intervention and the content of a mode node representing a mode of performing the intervention; determining a first task node; determining a first mode node based on the first task node; and acquiring at least one first intervention entity based on the first mode node. The output device outputs the first intervention entity to the intervention object.

Description

Systems and methods for autism intervention
Technical Field
The present disclosure relates to the technical field of autism medical devices, and more particularly, to a system for autism intervention and a method for autism intervention.
Background
Autism, also known as autistic disorder, is a type of neurodevelopmental disorder.
Currently, intervention treatment of autism relies mainly on human experience. First, this cannot guarantee a consistent level of treatment: practitioners vary in skill, and different practitioners may treat the same child's symptoms in different ways, which makes the child's treatment unstable. Second, it is difficult for one practitioner's experience to be shared with other practitioners in a timely manner. Third, children often resist diagnosis and treatment, so they do not cooperate, and the treatment effect is poor.
Disclosure of Invention
It is an object of the present disclosure to provide a new system for autism intervention.
According to a first aspect of the present disclosure, there is provided a system for autism intervention, comprising a processing device and an output device. The processing device comprises a processor and a memory, the memory storing executable instructions that, when the processing device is running, control the processor to perform the following: acquiring an intervention knowledge graph for autism, wherein the intervention knowledge graph comprises task nodes and mode nodes, the content of a task node representing a task for performing an autism intervention and the content of a mode node representing a mode of performing the intervention; determining a first task node; determining a first mode node based on the first task node; and acquiring at least one first intervention entity based on the first mode node; wherein the output device outputs the first intervention entity to the intervention object.
According to a second aspect of the present disclosure, there is provided a method for autism intervention, comprising: acquiring an intervention knowledge graph for autism, wherein the intervention knowledge graph comprises task nodes and mode nodes, the content of a task node representing a task for performing an autism intervention and the content of a mode node representing a mode of performing the intervention; determining a first task node; determining a first mode node based on the first task node; acquiring at least one first intervention entity based on the first mode node; and outputting the first intervention entity to the intervention object.
According to the embodiments of the present disclosure, a knowledge graph is used to perform autism intervention, so that an automated autism intervention scheme can be provided.
Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic structural diagram of a system for autism intervention provided by an embodiment of the present disclosure.
Fig. 2 is a schematic structural diagram of an intervention knowledge graph provided by an embodiment of the present disclosure.
Fig. 3 is a schematic structural diagram of another system for autism intervention provided by embodiments of the present disclosure.
Fig. 4 is a schematic flow chart of a method for autism intervention provided by an embodiment of the present disclosure.
Fig. 5 is a schematic structural diagram of a device for autism intervention provided by an embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
< System >
Fig. 1 is a schematic block diagram of a system for autism intervention provided in accordance with one embodiment of the present disclosure.
As shown in fig. 1, the system for autism intervention comprises: a processing device 11 and an output device 12.
The processing device 11 includes a processor 111 and a memory 112. The processor 111 may include, for example, a CPU, MPU, MCU, or the like. The memory 112 may store the underlying software, system software, application software, data, and the like. The memory 112 may include various forms of memory, such as ROM, RAM, Flash, etc. The memory 112 stores executable instructions that, when the processing device 11 is running, control the processor 111 to perform the processes of S1100-S1400 as follows.
S1100, acquiring an intervention knowledge graph for autism.
The intervention knowledge graph comprises task nodes and mode nodes; the content of a task node represents a task for performing an autism intervention, and the content of a mode node represents a mode of performing the intervention.
The intervention knowledge graph may be stored in the memory 112. A knowledge graph is a graph-based data structure that consists of nodes (points) and edges. In a knowledge graph, each node represents an "entity" existing in the real world, and each edge is a "relationship" between entities. Knowledge graphs are an effective way to represent relationships; a knowledge graph can be thought of as a relational network that links different kinds of information together. The concept of the knowledge graph was first proposed by Google, mainly to optimize existing search engines. In general, a knowledge graph is composed of pieces of knowledge, and each piece of knowledge can be represented as an SPO (Subject-Predicate-Object) triple. Here, the intervention knowledge graph is used for autism intervention in children. It may be obtained in a variety of ways; for example, it may be entered manually by a designer.
The intervention knowledge graph comprises at least one task node and at least one mode node. One task node may be connected to at least one task node and/or mode node and one mode node may be connected to at least one task node and/or mode node.
The content of a task node may include, for example: a cognitive intervention task, a self-care intervention task, and a social intervention task. These tasks may be in a parallel relationship. In addition, in the intervention knowledge graph a task may also include one or more subtasks; for example, the social intervention task may include an "ask for things in language" subtask, a "get things actively" subtask, and so on. A task node may be connected by edges to other parallel nodes or to its child nodes.
A mode node indicates how the autism intervention is to be performed. The content of a mode node may directly represent the mode of intervention; for example, it may include a question that the intervention object is expected to answer. The content of a mode node may also indirectly represent the mode of intervention; for example, it may include a keyword, such as "fruits", for things that the intervention object is expected to identify. The processing device 11 can then search for things related to the keyword, for example various "fruits", to present to the intervention object.
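The node-and-edge structure described above can be sketched as a small in-memory graph. Everything here, including the node names, contents, and edges, is an illustrative assumption rather than data from an actual intervention knowledge graph:

```python
TASK = "task"
MODE = "mode"

# Each node has a kind and a content payload; a mode node's content may hold
# a direct prompt, a retrieval keyword, or both, as described in the text.
nodes = {
    "social":          {"kind": TASK, "content": "social intervention task"},
    "ask_in_language": {"kind": TASK, "content": "ask for things in language"},
    "mode_1":          {"kind": MODE, "content": {"prompt": "What do you want?",
                                                  "keyword": "fruit"}},
}

# Edges connect a task node to its subtasks and to mode nodes.
edges = [
    ("social", "ask_in_language"),
    ("ask_in_language", "mode_1"),
]

def neighbors(node_id):
    """Return all nodes directly connected to node_id by an edge."""
    return [b for a, b in edges if a == node_id] + \
           [a for a, b in edges if b == node_id]
```

For example, `neighbors("ask_in_language")` yields both its parent task node and the mode node reachable from it, which is the traversal S1300 relies on.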
S1200, determining a first task node.
Here, "first", "second", and so on are used to distinguish different things; they do not indicate an order or priority of the indicated nodes or other things. The task node currently to be processed in the intervention knowledge graph may be referred to as the first task node. The first task node may be determined in a variety of ways; for example, it may be determined by the processing device 11 receiving an operator's setting, or determined automatically by the processing device 11 based on information about the intervention object, and so on.
S1300, determining a first mode node based on the first task node.
In the intervention knowledge graph, the nodes are interconnected by edges. Nodes represent "entities", i.e., tasks and/or ways of intervention, and "edges" represent relationships. In the case where the first task node is determined, the first mode node may be determined by an edge connected to the first task node. The first mode node may be directly connected to the first task node, or may be connected to the first task node through another task node.
The first mode node may be determined randomly among the nodes connected to the first task node via edges; or it may be determined sequentially among those nodes; or it may be determined based on the weights of the respective edges connected to the first task node.
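The selection strategies just listed can be sketched as follows; the candidate mode names and edge weights are invented for illustration:

```python
import random

# Edges from the first task node to candidate mode nodes, with weights.
weighted_edges = {"mode_1": 0.5, "mode_2": 0.3, "mode_3": 0.2}

def pick_mode_weighted(edge_weights, rng=random):
    """Pick a mode node with probability proportional to its edge weight."""
    modes = list(edge_weights)
    weights = [edge_weights[m] for m in modes]
    return rng.choices(modes, weights=weights, k=1)[0]

def pick_mode_sequential(edge_weights, step):
    """Cycle through candidate mode nodes in a fixed order."""
    modes = sorted(edge_weights)
    return modes[step % len(modes)]
```

Random selection would simply be `random.choice(list(weighted_edges))`; the weighted variant lets frequently effective modes be chosen more often.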
S1400, at least one first intervention entity is obtained based on the first mode node.
The intervention entity is the actual content used for the intervention on the intervention object, e.g. audio, video, pictures, vibrations, smells, etc. After acquiring the at least one first intervention entity, the output device 12 outputs the first intervention entity to the intervention object.
Here, applying the intervention knowledge graph to autism intervention helps eliminate experience differences between different interveners. In addition, the intervention knowledge graph can be updated directly as the latest progress on autism intervention is obtained, which eliminates the need to retrain intervention personnel: on the one hand this saves cost, and on the other hand it favors the rapid spread of research results on autism. Moreover, unlike typical uses of a knowledge graph, the intervention knowledge graph associates nodes of different natures to produce intervention entity outputs for intervention objects, forming an intervention-object-oriented knowledge graph.
The first mode node may directly contain the first intervention entity. For example, the contents of the first mode node include text and/or audio of "What do you want?". This content may be output to the intervention object as the first intervention entity, after which a response from the intervention object is awaited.
Furthermore, the first mode node may contain information for obtaining the first intervention entity. The above-described process S1400 may be realized, for example, by the following process S1410.
S1410, retrieving the first intervention entity from the repository based on the keyword in the first mode node.
The repository may be stored in the memory 112. Multiple intervention entities may be stored in the repository. An intervention entity matching the keyword in the first mode node may be retrieved from the repository as the first intervention entity. For example, if the first mode node includes the keyword "fruit", the processing device 11 may retrieve pictures of various fruits, e.g., apples, peaches, and bananas, from the repository. The retrieved pictures may then be output to the intervention object as intervention entities.
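The keyword lookup of S1410 can be sketched as a simple tag match; the repository contents here are invented for illustration:

```python
# A toy repository: each entity records a name, a data type, and tags.
repository = [
    {"name": "apple.png", "type": "image", "tags": {"fruit", "food"}},
    {"name": "peach.png", "type": "image", "tags": {"fruit"}},
    {"name": "car.png",   "type": "image", "tags": {"toy"}},
]

def retrieve(keyword):
    """Return all repository entities whose tags contain the keyword."""
    return [e for e in repository if keyword in e["tags"]]
```

A real repository would hold heterogeneous entities (audio, video, odor data, and so on), but the matching step is the same: the mode node's keyword selects candidate entities.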
Furthermore, the above two approaches may be combined: the first mode node may both directly contain the first intervention entity and contain information for obtaining further intervention entities. For example, the content of the first mode node includes text and/or audio of "What do you want?" and also includes the keyword "fruit". In this case, the reaction of the intervention object may be determined by outputting the text and/or audio of "What do you want?" to the intervention object while showing the retrieved pictures of various fruits.
For example, intervention entities may be crawled into the repository over a network. In addition, intervention entities may also be entered into the repository manually.
The intervention entities may be heterogeneous, i.e., they may differ in data type. An existing knowledge graph is a semantic network used to represent the meaning of language; in it, each piece of knowledge can be represented as an SPO triple composed of textual data. In this regard, existing knowledge graphs are mainly used to mimic human language. The inventors propose here to use the intervention knowledge graph to establish connections between different intervention tasks, intervention modes, and intervention entities; the semantics of the respective nodes are not taken into consideration. Thus, heterogeneous intervention entities can be used: the data type of an intervention entity is not limited to text but can be heterogeneous data. Here, "heterogeneous" means that intervention entities may be text, audio, video, smell data, temperature data, and so on, without converting them to uniform text data, let alone to uniform-format text data as in the prior art.
As previously described, the mode nodes may contain text and/or audio. Further, the intervening entities in the repository may include: entities of audio class, entities of image class, entities of odor class, entities of temperature class, entities of vibration class and entities of spray class.
In the case where the intervention entities in the repository are heterogeneous data, intervention in various dimensions may be provided to the intervention object (typically an autistic child). For example, providing visual, auditory, olfactory, etc. dimensions of intervention. In this way, the effect of intervention on the intervening object can be enhanced.
Here, by using heterogeneous intervention entities, the human cognitive process can be simulated so as to match the cognition of the intervention object, thereby enhancing the intervention effect. For example, at the beginning of a newborn infant's cognition, the infant has no ability to understand semantics; it simply links what it observes with the sounds it hears, touch, smell, and so on, and thereby gradually builds abstraction and cognitive ability. Here, raw heterogeneous intervention entities are used as intervention material, and the relations between them are established through the knowledge graph. This can simulate the basic cognition-building process in humans, which in turn helps find the crux of an intervention object's symptoms during the intervention, thereby achieving the intervention effect.
One example of obtaining and outputting an intervention entity is described below with reference to Fig. 2. As shown in Fig. 2, the task nodes include a "social" task node, an "ask for things in language" task node, a "get things actively" task node, and the like. The "make a request" task node is a child node of the "social" task node, and the "ask for things in language" and "get things actively" task nodes are child nodes of the "make a request" task node. The "social", "ask for things in language", or "get things actively" task node may be determined as the first task node.
In Fig. 2, three mode nodes are shown: modes 1, 2, and 3. The first mode node may be selected from modes 1, 2, and 3.
1. The content of mode 1 comprises two aspects. The first aspect is audio of "What do you want?". The second aspect is information on "items that children may be interested in".
Based on the information of "items that children may be interested in", the following pictures of items can be retrieved in the repository: kiwi fruit, banana, pomegranate, pear, pineapple, watermelon, apple, peach, cake, orange, lychee, peanut, pistachio, ham sausage, cucumber, tomato, dragon fruit, corn, egg, hami melon, car, ball, magic cube, balloon, grape, lollipop, snowflake, clip, lantern ring, sleeve, small train toy, sea moss, melon seed, potato chips, hawthorn, haoduo, small steamed bun, pad, bubble, building block, small pistol, mane building block, television remote controller, biscuit, cola, shrimp bar, raisin, candied horse, small airplane, ocean ball, jigsaw, bullet head, hamburger, robot, mobile phone, bear.
The output device 12 may play the audio of "What do you want?" and show pictures of the retrieved items for the intervention object to select from.
2. The content of mode 2 includes only the audio of "What do you want?".
After the audio of "What do you want?" is played, the intervention object needs to name the desired item itself.
3. The content of mode 3 includes only the information on "items that children may be interested in".
Based on this information, pictures of the above items may be retrieved from the repository.
The retrieved pictures may be presented via the output device 12 for the intervention object to select from.
The first mode node is determined from the first task node based on the intervention knowledge graph. The "ask for things in language" task node is connected to modes 1, 2, and 3, so any of modes 1, 2, or 3 may be determined from it. The "get things actively" task node is connected only to mode 3, so mode 3 is determined from it.
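The Fig. 2 connections can be written out as a plain mapping from task nodes to their candidate mode nodes; the identifiers are illustrative renderings of the node names:

```python
# Which mode nodes each task node of Fig. 2 is connected to.
task_to_modes = {
    "ask_for_things_in_language": ["mode_1", "mode_2", "mode_3"],
    "get_things_actively": ["mode_3"],
}

def candidate_modes(task):
    """Return the mode nodes reachable from the given task node."""
    return task_to_modes.get(task, [])
```

Any of a task's candidate modes may then be chosen as the first mode node by the strategies of S1300.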
The intervention entity may be an audio-class entity (e.g., a sound), an image-class entity (e.g., a picture or video of fruit), an odor-class entity (e.g., a flower scent), a temperature-class entity (e.g., an increase or decrease in temperature), a vibration-class entity (e.g., a vibration stimulus to the intervention object), or a spray-class entity (e.g., a sprayed mist). In this case, the output device 12 includes at least one of the following components:
- an output interface for connecting the processing device 11 and an external device;
- a display for outputting image-class intervention entities;
- a speaker for outputting audio-class intervention entities;
- a vibration device for outputting vibration-class intervention entities;
- a scent generator for outputting odor-class intervention entities;
- a temperature adjustment device for outputting temperature-class intervention entities; and
- a sprayer for outputting spray-class intervention entities.
The system for autism intervention disclosed herein can intervene on intervention objects automatically and intelligently. This reduces the variation that arises when different intervention personnel treat the same intervention object, and to some extent avoids the unstable therapeutic effect that comes with relying on human experience. Furthermore, it facilitates the sharing of intervention experience among intervention personnel.
Here, the processing device and the output device are used for performing autism intervention, so that the participation degree of an intervener can be reduced. In some cases, this may reduce the effect of human factors on the intervention object, so that the effect of the intervention mode may be determined more accurately. Furthermore, this may also improve the intervention effect.
In one embodiment, the above-described process S1200 may include the following processes S1210 and S1211.
S1210, counting the number of times the task node is processed.
S1211, prohibiting the task node during the current intervention when the count value is greater than a first predetermined threshold.
The first predetermined threshold may be set empirically, for example, to 7, 8, or 9.
During the current intervention, occurrences of the same first task node being determined may be counted to obtain a count value. When the count value reaches the first predetermined threshold, the first task node is prohibited for the remainder of the current intervention, so that another task node is determined as the first task node and the corresponding intervention processing is performed. In this way, the intervention processing can be prevented from entering a closed-loop state.
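The counting-and-prohibiting logic of S1210/S1211 (and equally the mode-node variant below) can be sketched as a small gate object; the threshold is deliberately low here for demonstration, while the text suggests values such as 7 to 9 in practice:

```python
from collections import Counter

class NodeGate:
    """Count node selections and prohibit a node once it exceeds a threshold."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.counts = Counter()
        self.disabled = set()

    def select(self, node_id):
        """Record one selection; return False if the node is prohibited."""
        if node_id in self.disabled:
            return False
        self.counts[node_id] += 1
        if self.counts[node_id] > self.threshold:
            self.disabled.add(node_id)
            return False
        return True
```

Once `select` starts returning `False` for a node, the caller falls back to determining a different first task node, which is what breaks the closed loop.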
Alternatively, the above-described process S1300 may include the following processes S1310 and S1311.
S1310, counting the number of times the mode node is processed.
S1311, prohibiting the mode node during the current intervention when the count value is greater than a second predetermined threshold.
The second predetermined threshold may be set empirically, for example, to 3, 4, or 5.
During the current intervention, occurrences of the same first mode node being determined may be counted to obtain a count value. When the count value reaches the second predetermined threshold, the first mode node is prohibited during the current intervention. In this way, the intervention processing can be prevented from entering a closed-loop state.
Optionally, the intervention entities in the repository have weights. Thus, the above-described process S1410 may include the following processes S1411 and S1412.
S1411, retrieving intervention entities from the repository based on the keyword of the first mode node.
S1412, determining at least one intervention entity with a higher weight as the first intervention entity.
The processor 111 may retrieve from the repository all intervention entities that match the keyword of the first mode node. It may determine the intervention entities whose weight is greater than a preset threshold as the first intervention entity, or it may sort the retrieved intervention entities in descending order of weight and determine those ranked within a preset top number as the first intervention entity.
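Both selection rules of S1412, a weight threshold and a descending-order top-N cut, can be sketched as follows, with invented entity weights:

```python
# Entities already matched by keyword, each carrying a weight.
matched = [
    {"name": "apple.png",  "weight": 0.9},
    {"name": "peach.png",  "weight": 0.6},
    {"name": "banana.png", "weight": 0.3},
]

def top_n_by_weight(entities, n):
    """Keep the n entities with the highest weights."""
    return sorted(entities, key=lambda e: e["weight"], reverse=True)[:n]

def above_threshold(entities, threshold):
    """Keep entities whose weight exceeds the preset threshold."""
    return [e for e in entities if e["weight"] > threshold]
```

Either rule yields the first intervention entities; the weights themselves are later adjusted in S1600 based on the observed intervention effect.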
Alternatively, in the initial state of the repository, the weights of the intervening entities in the repository may be set manually. After the repository is used, the weights of the intervention entities in the repository may be updated based on the intervention effect.
An embodiment of obtaining the intervention result is described below with reference to fig. 3. As shown in fig. 3, the system for autism intervention further comprises: an input device 13. The input device 13 is used for inputting first intervention result data, and the first intervention result data represents the result of the intervention of the first intervention entity on the intervention object.
In one example, the first intervention result data may comprise data manually entered by the intervention person through an input device, i.e. the input device receives the first intervention result data manually entered by the intervention person.
In another embodiment, the input device 13 comprises a capture device that captures at least one of an image and a sound of the intervention object as the first intervention result data. For example, the first intervention result data characterizes the reaction of the intervention object to the first intervention entity; such reactions include the actions, emotions, language, etc. of the intervention object, and data characterizing them may be captured by the capture device. For example, when an image of a banana is presented and the intervention object reacts to it, the first intervention result data may be image data of the intervention object looking at the banana, image data of the intervention object pointing at the banana, the intervention object uttering "I want the banana", and so on. In this case, the capture device may include a camera, a microphone, or the like.
In another embodiment, the processor 111 may also perform the following processes S1500 and S1600.
And S1500, generating first intervention effect information based on the first intervention result data.
The processor 111 may evaluate the intervention effect of the first intervention entity based on the first intervention result data and take the evaluation result as the first intervention effect information.
In one example, the first intervention effect information may be a score or an effect grade assigned to the intervention result.
For example, when the interventionist inputs "good", the first intervention effect information may be set to a score of 9.
In another example, the processor 111 analyzes the first intervention result data to obtain the first intervention effect information. For example, when the first intervention result data captured by the capture device represents a reaction of the intervention object to the intervention entity, the processing device 11 may determine the intervention effect information based on that reaction, possibly across multiple dimensions. For example, the processing device 11 may add 1 point when the video in the first intervention result data indicates that the intervention object reacted to the first intervention entity, add another 1 point when the reaction occurred within 1 s, and add 2 points when the intervention object acted and/or vocalized toward the first intervention entity. In that case, the processing device 11 generates first intervention effect information of 4 points in total based on the first intervention result data. Here, a higher score indicates a better intervention effect.
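The multi-dimensional scoring example above can be expressed as a small Python function. The point values simply mirror the example; the function name, signature, and input representation are illustrative assumptions, not part of the patent:

```python
from typing import Optional

def score_intervention_effect(reacted: bool,
                              reaction_time_s: Optional[float],
                              acted_or_vocalized: bool) -> int:
    """Toy scoring rule mirroring the example in the text:
    +1 if the subject reacted at all, +1 more if the reaction came
    within 1 s, and +2 if the subject acted and/or vocalized toward
    the intervention entity."""
    score = 0
    if reacted:
        score += 1
        if reaction_time_s is not None and reaction_time_s <= 1.0:
            score += 1
    if acted_or_vocalized:
        score += 2
    return score

# Reaction within 1 s plus action/vocalization: 1 + 1 + 2 = 4 points.
total = score_intervention_effect(True, 1.0, True)
```

A higher total would then map to better first intervention effect information, as in the text.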
And S1600, adjusting the weight of the first intervention entity based on the first intervention effect information.
The processing device 11 may increase the weight of the first intervention entity when the intervention effect represented by the first intervention effect information is good, and decrease the weight of the first intervention entity when the represented intervention effect is poor.
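A minimal sketch of this weight adjustment, assuming the effect information is a numeric score and weights are clamped to [0, 1] (both assumptions; the patent does not fix a scale):

```python
def adjust_weight(weight: float, effect_score: int,
                  threshold: int = 2, step: float = 0.1) -> float:
    """Increase the entity weight when the effect score exceeds a
    (hypothetical) threshold, decrease it otherwise, and clamp the
    result to the [0, 1] range."""
    if effect_score > threshold:
        weight += step
    else:
        weight -= step
    return max(0.0, min(1.0, weight))
```

With this rule, entities that repeatedly produce good reactions drift toward weight 1.0 and are preferred by the weighted retrieval step.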
Further, the processing device 11 may also execute the following processing S1700.
S1700, adding the captured first intervention result data to the repository as a newly added intervention entity.
By adding the first intervention result data to the repository as a newly added intervention entity, on the one hand the repository can be expanded; on the other hand, the first intervention result data of the intervention object itself can be utilized as part of the repository. In this way, the intervention object may find the intervention entity more familiar, which may improve the intervention effect.
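Process S1700 might be sketched as follows, assuming the repository is a list of entity records with media, keyword, and weight fields (field names, the default weight, and the example file name are all illustrative):

```python
def add_captured_entity(repository, media_path, keywords,
                        initial_weight=0.5):
    """Wrap captured result data (e.g. a clip of the subject pointing at
    a banana) as a new intervention entity and append it to the
    repository so later retrievals can return it."""
    entity = {"media": media_path,
              "keywords": list(keywords),
              "weight": initial_weight}
    repository.append(entity)
    return entity

repository = []
add_captured_entity(repository, "subject_banana_clip.mp4",
                    ["banana", "pointing"])
```

Because the new entity shows the intervention object itself, it may feel more familiar than stock material, as the text notes.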
Relationships between nodes in the intervention knowledge graph may be defined by a neural network. Accordingly, the above-described process S1300 may include the following process S1320.
S1320, a first mode node is determined from the first task node using the neural network.
Modeling can be carried out based on data of the task nodes and mode nodes, with learning and training performed by a deep neural network, so as to evaluate and classify the intervention effect and optimize the recommendation of task nodes and mode nodes. The data include, for example, the intervention success rate, the number of interventions, the elapsed time, the intervention mode, the number of visual interactions per unit time, and the duration of visual interaction. Here, the classification types are: ascending, descending, same-order trial, and successful same-order repetition. On this basis, an intervention mode is determined according to the classification result of the neural network. For example, if the classification result is "ascending", the intervention mode corresponding to the ascending path is preferentially recommended.
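The classification described above could be realized, for instance, by a small feed-forward network over the listed features. The sketch below uses untrained placeholder weights, an assumed feature ordering, and assumed class names, purely for illustration:

```python
import numpy as np

# The four path classes named in the text.
CLASSES = ["ascending", "descending",
           "same_order_trial", "same_order_repeat"]

def classify_intervention(features, W1, b1, W2, b2):
    """One-hidden-layer forward pass over per-session features such as
    success rate, intervention count, elapsed time, and visual-
    interaction statistics. A real system would train these weights
    from recorded intervention outcomes."""
    h = np.maximum(0.0, features @ W1 + b1)   # ReLU hidden layer
    logits = h @ W2 + b2
    return CLASSES[int(np.argmax(logits))]

rng = np.random.default_rng(0)                # fixed seed for the demo
W1, b1 = rng.normal(size=(5, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 4)), np.zeros(4)

# [success rate, intervention count, elapsed time (s),
#  visual interactions per unit time, visual interaction duration (s)]
features = np.array([0.8, 3.0, 120.0, 5.0, 2.5])
label = classify_intervention(features, W1, b1, W2, b2)
```

The predicted class would then select the recommended path, e.g. `"ascending"` prefers the ascending-path intervention mode.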
In existing knowledge graphs, the "edges" between nodes are static and explicit; that is, when a knowledge graph is used, the relationships or levels represented by its edges are fixed. Unlike the processing of a semantic network, autism intervention does not place high demands on latency, so a neural network can serve as the edges of the intervention knowledge graph and determine the relationships between its nodes. Furthermore, unlike a general knowledge graph, autism intervention must consider not only the relationships between different intervention tasks/modes but also the current intervention state as well as previous intervention states. A neural network can take all of these factors into account. In the intervention knowledge graph, the neural network may therefore be used to determine the relationships between different nodes (including task nodes and mode nodes).
In another embodiment, the processing device 11 further performs the following process S1330.
And S1330, training the neural network by using the first intervention effect information.
In the embodiment of the present application, the first intervention effect information can be used as training input for the neural network. Training with actual first intervention effect information improves the network's learning capability, so that a suitable first mode node can be determined from the first task node.
Further, the processing device 11 may also execute the following processes S1340 and S1341.
And S1340, taking the first intervention effect information as an input of the neural network, and determining a second mode node from the first mode node.
S1341, based on the second mode node, obtaining at least one second intervention entity.
The implementation of process S1341 may be similar to that of process S1400 and is therefore not described further here.
The output device 12 outputs the second intervention entity to the intervention object.
Further, the processing device 11 may also perform the following processes S1350-S1352.
And S1350, taking the first intervention effect information as an input of the neural network, so as to determine a second task node from the first mode node.
S1351, determining a second mode node from the second task node by using the neural network.
The implementation of process S1351 may be similar to that of process S1320 and is therefore not described further here.
S1352, acquiring at least one second intervention entity based on the second mode node. The output device 12 outputs the second intervention entity to the intervention object. The implementation of process S1352 may be similar to that of process S1400 and is therefore not described further here.
Here, after the current intervention processing is completed, the next intervention mode may be determined based on the current intervention effect. In this way, the current intervention state is taken into account when determining the next intervention scheme or process, so that an appropriate intervention scheme can be generated in time for the state of the intervention object, thereby improving the intervention effect.
< method >
Fig. 4 illustrates a method for autism intervention in accordance with an embodiment of the present disclosure.
As shown in fig. 4, in step S4100, an intervention knowledge graph for autism is acquired. The intervention knowledge graph comprises task nodes and mode nodes; the content of a task node represents a task for performing autism intervention, and the content of a mode node represents a mode for performing autism intervention.
In step S4200, a first task node is determined.
In step S4300, a first mode node is determined based on the first task node.
In step S4400, at least one first intervention entity is acquired based on the first mode node.
In step S4500, the first intervention entity is output to the intervention object.
In one embodiment, acquiring at least one intervention entity based on the first mode node comprises: retrieving a first intervention entity from the repository based on the keywords in the first mode node.
In one embodiment, the intervention entities in the repository have weights. Retrieving the first intervention entity from the repository based on the keywords in the first mode node comprises: retrieving intervention entities from the repository based on the keywords in the first mode node; and determining at least one intervention entity with a higher weight as the first intervention entity. The method further comprises: inputting first intervention result data representing the result of the intervention by the first intervention entity on the intervention object; generating first intervention effect information based on the first intervention result data; and adjusting the weight of the first intervention entity based on the first intervention effect information.
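As an illustration of the weighted keyword retrieval described above, a minimal Python sketch might look as follows; the entity fields and the example repository contents are assumed for demonstration only:

```python
def retrieve_top_entities(repository, keyword, top_k=1):
    """Keep entities whose keyword list contains the mode-node keyword,
    then return the top_k by weight (a higher weight reflects a better
    past intervention effect)."""
    matches = [e for e in repository if keyword in e["keywords"]]
    return sorted(matches, key=lambda e: e["weight"], reverse=True)[:top_k]

repo = [
    {"media": "banana.png", "keywords": ["fruit", "banana"], "weight": 0.9},
    {"media": "banana.wav", "keywords": ["banana"],          "weight": 0.6},
    {"media": "apple.png",  "keywords": ["fruit", "apple"],  "weight": 0.8},
]
best = retrieve_top_entities(repo, "banana")
```

For a mode node whose keyword is "banana", the image with weight 0.9 is preferred over the sound with weight 0.6.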
In one embodiment, at least one of an image and a sound of the intervention object is captured by a capture device as first intervention result data.
In one embodiment, the method further comprises: adding the captured first intervention result data as a newly added intervention entity to the repository.
In one embodiment, entering first intervention result data comprises: first intervention result data manually input by a user is received.
In one embodiment, the relationships between nodes in the intervention knowledge graph are defined by a neural network. Determining the first mode node based on the first task node comprises: a first mode node is determined from the first task node using the neural network.
In one embodiment, the method further comprises: training the neural network using the first intervention effect information.
In one embodiment, the method further comprises: using the first intervention effect information as an input of the neural network to determine a second mode node from the first mode node; acquiring at least one second intervention entity based on the second mode node; and outputting the second intervention entity to the intervention object.
In one embodiment, the method further comprises: using the first intervention effect information as an input to the neural network to determine a second task node from the first mode nodes; determining a second mode node from a second task node using the neural network; acquiring at least one second intervention entity based on the second mode node; and outputting the second intervention entity to the intervention object.
In one embodiment, determining the first task node comprises: counting the processing of the task nodes; and inhibiting a task node during the current intervention when its count value is greater than a first predetermined threshold. Determining the first mode node based on the first task node comprises: counting the processing of the mode nodes; and inhibiting a mode node during the current intervention when its count value is greater than a second predetermined threshold.
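The counting-and-inhibition rule above can be sketched in Python; the class, method names, and threshold values are illustrative assumptions rather than the patent's implementation:

```python
from collections import Counter

class NodeSelector:
    """Tracks how often each node is processed in the current session
    and inhibits nodes whose count exceeds a predetermined threshold,
    so the same task or mode is not over-repeated."""

    def __init__(self, task_threshold=3, mode_threshold=3):
        self.counts = Counter()
        self.task_threshold = task_threshold
        self.mode_threshold = mode_threshold

    def use(self, node_id):
        """Record one processing of the given node."""
        self.counts[node_id] += 1

    def is_inhibited(self, node_id, threshold):
        return self.counts[node_id] > threshold

    def pick_task(self, candidates):
        """Return the first candidate task node that is not inhibited,
        or None if every candidate has exceeded the threshold."""
        allowed = [n for n in candidates
                   if not self.is_inhibited(n, self.task_threshold)]
        return allowed[0] if allowed else None
```

Mode nodes would be filtered the same way against `mode_threshold` before recommendation.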
In one embodiment, the first intervention entity is output to the intervention object by at least one of:
-an output interface;
-a display device;
-a loudspeaker;
-a vibration device;
-odour generating means;
-a temperature adjustment device; and
-a nebulizer.
The implementations and effects of the various processes for autism intervention have been described above in the system embodiments. For brevity, these descriptions are not repeated in the method embodiment section.
< apparatus >
As shown in fig. 5, an embodiment of the present disclosure provides a device 50 for autism intervention. The apparatus 50 comprises: a first obtaining module 51, configured to obtain an intervention knowledge graph for autism, where the intervention knowledge graph includes task nodes and mode nodes, a content of the task node represents a task to be subjected to autism intervention, and a content of the mode node represents a mode to be subjected to autism intervention; a first determining module 52 for determining a first task node; a second determining module 53, configured to determine the first mode node based on the first task node; a second obtaining module 54, configured to obtain at least one first intervention entity based on the first mode node; and an output module 55 for outputting the first intervention entity to the intervention object.
In one embodiment, the second retrieval module 54 retrieves the first intervention entity from the repository based on the keywords in the first mode node.
In one embodiment, the intervening entities in the repository have weights. The second obtaining module 54 further performs the following processing: retrieving an intervention entity from a repository based on the keywords in the first mode node; and determining at least one intervention entity with higher weight as the first intervention entity.
Further, the apparatus 50 for autism intervention may also include an input module, a generation module, and an adjustment module. And the input module is used for inputting first intervention result data, and the first intervention result data represents the result of the first intervention entity intervening on the intervention object. And the generating module is used for generating first intervention effect information based on the first intervention result data. And the adjusting module is used for adjusting the weight of the first intervention entity based on the first intervention effect information.
In one embodiment, at least one of an image and a sound of the intervention object is captured by a capture device as first intervention result data.
In one embodiment, the apparatus 50 for autism intervention further comprises an adding module for adding the captured first intervention result data as a newly added intervention entity to the repository.
In one embodiment, the apparatus 50 for autism intervention further comprises an input module for receiving first intervention result data manually input by a user.
In one embodiment, the relationships between nodes in the intervention knowledge graph are defined by a neural network. The second determination module 53 may determine the first mode node from the first task node using a neural network.
In one embodiment, the apparatus 50 for autism intervention further comprises a training module. The training module is used for training the neural network by utilizing the first intervention effect information.
In one embodiment, the second determining module 53 is further configured to use the first intervention effect information as an input to the neural network to determine the second mode node from the first mode node. The second obtaining module 54 is further configured to obtain at least one second intervention entity based on the second mode node.
The output module 55 is further configured to: and outputting the second intervention entity to the intervention object.
In one embodiment, the first determination module 52 is further configured to use the first intervention effect information as an input to the neural network to determine the second task node from the first mode nodes. The second determining module 53 is further configured to determine a second mode node from the second task node using the neural network. The second obtaining module 54 is further configured to obtain at least one second intervention entity based on the second mode node. The output module 55 is further configured to output the second intervention entity to the intervention object.
In one embodiment, the first determination module 52 includes a first counting unit, a first inhibiting unit. The first counting unit is used for counting the processing of the task node. The first prohibiting unit is used for prohibiting the task node in the current intervention process when the counting value is larger than a first preset threshold value.
The second determining module 53 may include a second counting unit and a second inhibiting unit. The second counting unit is used for counting the processing of the mode node. The second inhibiting unit is used for inhibiting the mode node in the current intervention process when the count value is greater than a second predetermined threshold.
< device >
The embodiment of the present disclosure provides an electronic device 60, and the electronic device 60 includes an apparatus 50 for autism intervention provided by the above-mentioned apparatus embodiment.
Optionally, in another embodiment, the electronic device 60 comprises a memory 61 and a processor 62. The memory 61 is used to store computer instructions. The processor 62 is configured to invoke computer instructions from the memory 61 to perform any of the methods for autism intervention as provided in the above-described method embodiments.
< storage medium >
Embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for autism intervention according to any one of the methods provided in the above-described method embodiments.
< summary of examples >
Embodiment 1, a system for autism intervention, comprising: a processing device and an output device, wherein,
wherein the processing device comprises a processor and a memory, the memory storing executable instructions that, when the processing device is running, control the processor to perform the following:
acquiring an intervention knowledge graph for autism, wherein the intervention knowledge graph comprises task nodes and mode nodes, the content of the task nodes represents a task to be subjected to autism intervention, and the content of the mode nodes represents a mode to be subjected to autism intervention;
determining a first task node;
determining a first mode node based on the first task node; and
acquiring at least one first intervention entity based on a first mode node;
wherein the output device outputs the first intervention entity to the intervention object.
Embodiment 2, the system according to embodiment 1, wherein the process of acquiring at least one intervention entity based on the first mode node comprises:
based on the keywords in the first mode node, a first intervention entity is retrieved from the database.
Embodiment 3, the system of embodiment 2, wherein the intervention entities in the repository have weights,
wherein the process of retrieving the first intervention entity from the repository based on the keywords in the first mode node comprises:
retrieving an intervention entity from a repository based on the keywords in the first mode node; and
determining at least one intervention entity with higher weight as a first intervention entity,
wherein the system further comprises: an input device for inputting first intervention result data representing a result of intervention by a first intervention entity with respect to the intervention subject,
wherein when the processing device is running, the executable instructions control the processor to further perform:
generating first intervention effect information based on the first intervention result data; and
the weight of the first intervention entity is adjusted based on the first intervention effect information.
Embodiment 4 the system of embodiment 3, wherein the input device comprises a capture device that captures at least one of an image and a sound of the intervention subject as first intervention result data.
Embodiment 5 the system of embodiment 4, wherein the executable instructions control the processor to perform the following when the processing device is running:
and adding the captured first intervention result data as a newly added intervention entity to the repository.
Embodiment 6 the system of embodiment 3, wherein the input device receives first intervention result data manually input by a user.
Embodiment 7 the system of embodiment 3, wherein the relationships between nodes in the intervention knowledge-graph are defined by a neural network, an
Wherein the process of determining the first mode node based on the first task node comprises:
a first mode node is determined from the first task node using the neural network.
Embodiment 8 the system of embodiment 7, wherein the executable instructions control the processor to further perform the following when the processing device is running:
training the neural network using the first intervention effect information.
Embodiment 9 the system of embodiment 7, wherein the executable instructions control the processor to further perform the following when the processing device is running:
using the first intervention effect information as an input of the neural network to determine a second mode node from the first mode node; and
acquiring at least one second intervention entity based on the second mode node,
wherein the output device outputs the second intervention entity to the intervention object.
Embodiment 10 the system of embodiment 7, wherein the executable instructions control the processor to further perform the following when the processing device is running:
using the first intervention effect information as an input to the neural network to determine a second task node from the first mode nodes;
determining a second mode node from a second task node using the neural network; and
acquiring at least one second intervention entity based on the second mode node,
wherein the output device outputs the second intervention entity to the intervention object.
Embodiment 11 the system of embodiment 1, wherein the process of determining the first task node comprises:
counting the processing of the task nodes; and
disabling the task node during the current intervention when the count value is greater than a first predetermined threshold, an
Wherein the process of determining the first mode node based on the first task node comprises:
counting the processing of the mode nodes; and
disabling the mode node during the current intervention when the count value is greater than a second predetermined threshold.
Embodiment 12 the system of embodiment 1, wherein the output device comprises at least one of:
-an output interface;
-a display device;
-a loudspeaker;
-a vibration device;
-odour generating means;
-a temperature adjustment device; and
-a nebulizer.
Example 13, a method for autism intervention comprising:
acquiring an intervention knowledge graph for autism, wherein the intervention knowledge graph comprises task nodes and mode nodes, the content of the task nodes represents a task to be subjected to autism intervention, and the content of the mode nodes represents a mode to be subjected to autism intervention;
determining a first task node;
determining a first mode node based on the first task node;
acquiring at least one first intervention entity based on a first mode node; and
the first intervention entity is output to the intervention object.
Embodiment 14, the method of embodiment 13, wherein acquiring at least one intervention entity based on the first mode node comprises:
based on the keywords in the first mode node, a first intervention entity is retrieved from the repository.
Embodiment 15, the method of embodiment 14, wherein the intervention entities in the repository have weights,
wherein retrieving the first intervention entity from the repository based on the keywords in the first mode node comprises:
retrieving an intervention entity from a repository based on the keywords in the first mode node;
and
determining at least one intervention entity with higher weight as a first intervention entity,
wherein the method further comprises:
inputting first intervention result data representing a result of an intervention by a first intervention entity with respect to the intervention subject,
generating first intervention effect information based on the first intervention result data; and
the weight of the first intervention entity is adjusted based on the first intervention effect information.
Embodiment 16 the method of embodiment 15, wherein at least one of an image and a sound of the intervention subject is captured by a capture device as first intervention result data.
Embodiment 17, the method of embodiment 16, further comprising:
and adding the captured first intervention result data as a newly added intervention entity to the repository.
Embodiment 18 the method of embodiment 15, wherein inputting first intervention result data comprises: first intervention result data manually input by a user is received.
Embodiment 19 the method of embodiment 15, wherein the relationships between nodes in the intervention knowledge-graph are defined by a neural network, an
Wherein determining the first mode node based on the first task node comprises:
a first mode node is determined from the first task node using the neural network.
Embodiment 20, the method of embodiment 19, further comprising:
training the neural network using the first intervention effect information.
Embodiment 21, the method of embodiment 19, further comprising:
using the first intervention effect information as an input of the neural network to determine a second mode node from the first mode node;
acquiring at least one second intervention entity based on the second mode node; and
and outputting the second intervention entity to the intervention object.
Embodiment 22 the method of embodiment 19, further comprising:
using the first intervention effect information as an input to the neural network to determine a second task node from the first mode nodes;
determining a second mode node from a second task node using the neural network;
acquiring at least one second intervention entity based on the second mode node; and
and outputting the second intervention entity to the intervention object.
Embodiment 23 the method of embodiment 13, wherein determining the first task node comprises:
counting the processing of the task nodes; and
disabling the task node during the current intervention when the count value is greater than a first predetermined threshold, an
Wherein determining the first mode node based on the first task node comprises:
counting the processing of the mode nodes; and
disabling the mode node during the current intervention when the count value is greater than a second predetermined threshold.
Embodiment 24 the method of embodiment 13, wherein the first intervention entity is output to the intervention object by at least one of:
-an output interface;
-a display device;
-a loudspeaker;
-a vibration device;
-odour generating means;
-a temperature adjustment device; and
-a nebulizer.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry that can execute the computer-readable program instructions, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), implements aspects of the present disclosure by utilizing the state information of the computer-readable program instructions to personalize the electronic circuitry.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are equivalent.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen to best explain the principles of the embodiments, their practical application, or technical improvements over techniques in the marketplace, and to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the present disclosure is defined by the appended claims.

Claims (10)

1. A system for autism intervention, comprising: a processing device and an output device,
wherein the processing device comprises a processor and a memory, the memory storing executable instructions that, when the processing device is running, control the processor to perform the following:
acquiring an intervention knowledge graph for autism, wherein the intervention knowledge graph comprises task nodes and mode nodes, the content of the task nodes represents a task to be subjected to autism intervention, and the content of the mode nodes represents a mode to be subjected to autism intervention;
determining a first task node;
determining a first mode node based on the first task node; and
acquiring at least one first intervention entity based on the first mode node;
wherein the output device outputs the first intervention entity to the intervention object.
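The graph structure recited in claim 1 can be sketched as follows; the node contents, linkage, and selection strategy are illustrative assumptions, not the patented implementation:

```python
# Minimal sketch of an intervention knowledge graph with task nodes
# (skills to be trained) linked to mode nodes (ways to train them).

class Node:
    def __init__(self, kind, content):
        self.kind = kind          # "task" or "mode"
        self.content = content    # what the node represents
        self.edges = []           # linked nodes

    def link(self, other):
        self.edges.append(other)

def build_graph():
    # Hypothetical example content: one task with two candidate modes.
    task = Node("task", "make eye contact on request")
    task.link(Node("mode", "picture-card prompting"))
    task.link(Node("mode", "video modeling"))
    return task

def first_mode_node(task_node):
    # Claim 1 only requires that a mode node be determined from the
    # task node; here we simply take the first linked mode node.
    return next(n for n in task_node.edges if n.kind == "mode")

task = build_graph()
mode = first_mode_node(task)
print(mode.content)  # picture-card prompting
```

An intervention entity (for example, a picture card image or a video clip) would then be retrieved for the chosen mode node and sent to the output device.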
2. The system of claim 1, wherein the process of acquiring at least one first intervention entity based on the first mode node comprises:
retrieving the first intervention entity from a repository based on keywords in the first mode node.
3. The system of claim 2, wherein intervention entities in the repository have weights,
wherein the process of retrieving the first intervention entity from the repository based on the keywords in the first mode node comprises:
retrieving intervention entities from the repository based on the keywords in the first mode node; and
determining at least one intervention entity with a higher weight as the first intervention entity,
wherein the system further comprises: an input device for inputting first intervention result data representing a result of an intervention by the first intervention entity with respect to the intervention object,
wherein when the processing device is running, the executable instructions control the processor to further perform:
generating first intervention effect information based on the first intervention result data; and
the weight of the first intervention entity is adjusted based on the first intervention effect information.
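The weighted retrieval and feedback update of claims 2–3 can be sketched as follows; the repository layout, keyword matching, and update rule are assumptions for illustration, not the claimed implementation:

```python
# Hypothetical repository of intervention entities with keyword tags
# and weights, as in claims 2-3.
repository = [
    {"id": "clip_01", "keywords": {"eye contact", "video"},   "weight": 0.6},
    {"id": "card_07", "keywords": {"eye contact", "picture"}, "weight": 0.8},
    {"id": "song_03", "keywords": {"greeting"},               "weight": 0.5},
]

def retrieve(keywords, top_k=1):
    # Keep entities matching any keyword, then prefer higher weights.
    hits = [e for e in repository if e["keywords"] & keywords]
    return sorted(hits, key=lambda e: e["weight"], reverse=True)[:top_k]

def adjust_weight(entity, effect, rate=0.1):
    # effect in [-1, 1]: positive intervention results raise the weight,
    # negative results lower it, clamped to [0, 1].
    entity["weight"] = min(1.0, max(0.0, entity["weight"] + rate * effect))

first = retrieve({"eye contact"})[0]
print(first["id"])                # card_07 (highest weight among matches)
adjust_weight(first, effect=1.0)  # positive intervention effect
print(round(first["weight"], 2))  # 0.9
```

Over repeated sessions, entities that produce good intervention results drift toward higher weights and are retrieved preferentially.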
4. The system of claim 3, wherein the input device comprises a capture device that captures at least one of an image and a sound of the intervention object as the first intervention result data.
5. The system of claim 4, wherein, when the processing device is running, the executable instructions control the processor to further perform:
adding the captured first intervention result data to the repository as a newly added intervention entity.
6. The system of claim 3, wherein the input device receives first intervention result data manually entered by a user.
7. The system of claim 3, wherein relationships between nodes in the intervention knowledge graph are defined by a neural network, and
wherein the process of determining the first mode node based on the first task node comprises:
a first mode node is determined from the first task node using the neural network.
8. The system of claim 7, wherein the executable instructions control the processor to further perform, when the processing device is running, the process of:
training the neural network using the first intervention effect information.
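The learned task-to-mode mapping of claims 7–8 can be sketched as a minimal scoring model; the one-hot node encoding and the perceptron-style update below are assumptions, since the claims do not fix a network architecture:

```python
import random

# Hypothetical node vocabularies for the graph edges the model scores.
TASKS = ["eye contact", "greeting"]
MODES = ["picture-card prompting", "video modeling"]

random.seed(0)
# weights[t][m]: learned score of choosing mode m for task t
# (a one-layer stand-in for the claimed neural network).
weights = [[random.uniform(-0.1, 0.1) for _ in MODES] for _ in TASKS]

def predict_mode(task_idx):
    # Determine the mode node from the task node: pick the highest score.
    scores = weights[task_idx]
    return max(range(len(MODES)), key=scores.__getitem__)

def train(task_idx, mode_idx, effect, lr=0.5):
    # Claim 8: train with intervention effect information. A positive
    # effect reinforces the chosen task->mode edge; negative suppresses it.
    weights[task_idx][mode_idx] += lr * effect

# Repeated positive feedback makes "video modeling" the preferred
# mode for the "eye contact" task.
for _ in range(3):
    train(task_idx=0, mode_idx=1, effect=1.0)
print(MODES[predict_mode(0)])  # video modeling
```

The same update direction generalizes to a real multi-layer network trained by gradient descent on the effect signal.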
9. The system of claim 7, wherein the executable instructions control the processor to further perform, when the processing device is running, the process of:
using the first intervention effect information as an input to the neural network to determine a second mode node from the first mode node; and
acquiring at least one second intervention entity based on the second mode node,
wherein the output device outputs the second intervention entity to the intervention object.
10. The system of claim 7, wherein the executable instructions control the processor to further perform, when the processing device is running, the process of:
using the first intervention effect information as an input to the neural network to determine a second task node from the first mode node;
determining a second mode node from the second task node using the neural network; and
acquiring at least one second intervention entity based on the second mode node,
wherein the output device outputs the second intervention entity to the intervention object.
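The closed loop of claims 9–10, in which each round's intervention effect drives the choice of the next node, can be sketched as follows; the threshold policy below is a hand-written stand-in for the claimed neural network:

```python
# Hypothetical transition rule: good results advance to a second task
# node (claim 10); poor results switch to a second mode node for the
# same task (claim 9). Nodes are represented as integer indices.

def next_step(task, mode, effect):
    if effect >= 0.5:
        return (task + 1, 0)      # second task node, fresh mode
    return (task, mode + 1)       # same task, second mode node

task, mode = 0, 0
task, mode = next_step(task, mode, effect=0.2)
print((task, mode))  # (0, 1): try another mode for the same task
task, mode = next_step(task, mode, effect=0.9)
print((task, mode))  # (1, 0): advance to the next task
```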
CN202010852052.7A 2020-08-21 2020-08-21 Systems and methods for autism intervention Active CN112148884B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010852052.7A CN112148884B (en) 2020-08-21 2020-08-21 Systems and methods for autism intervention

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010852052.7A CN112148884B (en) 2020-08-21 2020-08-21 Systems and methods for autism intervention

Publications (2)

Publication Number Publication Date
CN112148884A true CN112148884A (en) 2020-12-29
CN112148884B CN112148884B (en) 2023-09-22

Family

ID=73889103

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010852052.7A Active CN112148884B (en) 2020-08-21 2020-08-21 Systems and methods for autism intervention

Country Status (1)

Country Link
CN (1) CN112148884B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113506624A (en) * 2021-08-16 2021-10-15 北京阿叟阿巴科技有限公司 Autism child cognitive ability assessment intervention system based on layer-by-layer generalization push logic

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017185887A1 (en) * 2016-04-29 2017-11-02 Boe Technology Group Co., Ltd. Apparatus and method for analyzing natural language medical text and generating medical knowledge graph representing natural language medical text
CN109145119A (en) * 2018-07-02 2019-01-04 北京妙医佳信息技术有限公司 The knowledge mapping construction device and construction method of health management arts
CN109284396A (en) * 2018-09-27 2019-01-29 北京大学深圳研究生院 Medical knowledge map construction method, apparatus, server and storage medium
CN109284342A (en) * 2018-11-22 2019-01-29 北京百度网讯科技有限公司 Method and apparatus for output information
CN110335676A (en) * 2019-07-09 2019-10-15 泰康保险集团股份有限公司 Data processing method, device, medium and electronic equipment
CN110363129A (en) * 2019-07-05 2019-10-22 昆山杜克大学 Autism early screening system based on smile normal form and audio-video behavioural analysis
CN110377745A (en) * 2018-04-11 2019-10-25 阿里巴巴集团控股有限公司 Information processing method, information retrieval method, device and server
CN110415822A (en) * 2019-07-23 2019-11-05 珠海格力电器股份有限公司 Method and device for predicting cancer
CN110532360A (en) * 2019-07-19 2019-12-03 平安科技(深圳)有限公司 Medical field knowledge mapping question and answer processing method, device, equipment and storage medium
CN111128391A (en) * 2019-12-24 2020-05-08 北京推想科技有限公司 Information processing apparatus, method and storage medium
CN111462841A (en) * 2020-03-12 2020-07-28 华南理工大学 Depression intelligent diagnosis device and system based on knowledge graph
CN111475631A (en) * 2020-04-05 2020-07-31 北京亿阳信通科技有限公司 Disease question-answering method and device based on knowledge graph and deep learning


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Wu Yan: "A visualized knowledge-graph analysis of autism intervention research in China", Journal of Nanjing Xiaozhuang University, no. 05, pages 71-75 *
Zhang Liang; Qi Hao: "Topic analysis of autism spectrum disorder research based on knowledge graphs", China Health Industry, no. 20, pages 197-201 *
Su Mingliang; Wang Shiquan; Li Wei: "Research on an intelligent integrated medical and elderly-care service management platform based on active health access technology", Chinese Medical Equipment Journal, no. 06, pages 37-41 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113506624A (en) * 2021-08-16 2021-10-15 北京阿叟阿巴科技有限公司 Autism child cognitive ability assessment intervention system based on layer-by-layer generalization push logic
CN113506624B (en) * 2021-08-16 2023-08-08 北京阿叟阿巴科技有限公司 Autism children cognitive ability evaluation intervention system based on hierarchical generalization push logic

Also Published As

Publication number Publication date
CN112148884B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
Fiallos et al. Tiktok and education: Discovering knowledge through learning videos
US9724824B1 (en) Sensor use and analysis for dynamic update of interaction in a social robot
CN106548773B (en) Child user searching method and device based on artificial intelligence
US20200126566A1 (en) Method and apparatus for voice interaction
CN107784354B (en) Robot control method and accompanying robot
US10157619B2 (en) Method and device for searching according to speech based on artificial intelligence
US20190020609A1 (en) Communication system and communication control method
US20190206406A1 (en) Dialogue method, dialogue system, dialogue apparatus and program
US11869524B2 (en) Audio processing method and apparatus, computer device, and storage medium
US10692498B2 (en) Question urgency in QA system with visual representation in three dimensional space
Zhao et al. Chatbridge: Bridging modalities with large language model as a language catalyst
CN111798279A (en) Dialog-based user portrait generation method and apparatus
US9922644B2 (en) Analysis of professional-client interactions
CN109278051A (en) Exchange method and system based on intelligent robot
Suhaimi et al. Modeling the affective space of 360 virtual reality videos based on arousal and valence for wearable EEG-based VR emotion classification
TWI823055B (en) Electronic resource pushing method and system
CN114140814A (en) Emotion recognition capability training method and device and electronic equipment
WO2020228349A1 (en) Virtual news anchor system based on air imaging and implementation method therefor
CN112307176A (en) Method and device for guiding user to write
CN112148884B (en) Systems and methods for autism intervention
CN108806699B (en) Voice feedback method and device, storage medium and electronic equipment
CN111949773A (en) Reading equipment, server and data processing method
CN111324710B (en) Online investigation method and device based on virtual person and terminal equipment
US20230052442A1 (en) Analyzing Objects Data to Generate a Textual Content Reporting Events
CN111931036A (en) Multi-mode fusion interaction system and method, intelligent robot and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant