CN118057480A - Information processing method, device and storage medium

Information processing method, device and storage medium

Info

Publication number
CN118057480A
Authority
CN
China
Prior art keywords
information
behavior
target information
target
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211458210.6A
Other languages
Chinese (zh)
Inventor
孙舶寒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202211458210.6A
Publication of CN118057480A

Landscapes

  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The embodiment of the application discloses an information processing method, a device and a storage medium. The method is executed by a first device worn by a first object and includes: determining to start information acquisition; and, when information acquisition is determined to start, acquiring target information, wherein the target information at least comprises behavior information of a second object and is at least used to assist the first object in caring for the second object. By wearing a first device capable of executing the information processing method of the embodiment of the application, a caregiver can conveniently take care of the person being cared for, and the caregiver's care pressure is reduced.

Description

Information processing method, device and storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to an information processing method, an information processing device, and a storage medium.
Background
Baby care is a difficult problem faced by most families, especially for caregivers who lack child-care experience. While nursing a baby, such caregivers usually search for network videos on a mobile phone to learn child-care knowledge such as brewing milk powder and changing diapers, and record the baby's living habits, growth curve, and the like through the mobile phone.
However, when the baby is crying, the caregiver has to operate the mobile phone to search for and learn related child-care courses while soothing the baby, in order to find the cause of the crying and a solution. This brings much inconvenience to the caregiver and increases the caregiver's child-care pressure. In addition, the radiation generated by the mobile phone screen while the caregiver watches child-care videos may adversely affect the infant's eyesight.
Caregivers who care at home for people incapable of looking after themselves face the same problem.
Disclosure of Invention
The application provides an information processing method, an information processing device and a storage medium.
A first aspect of an embodiment of the present application provides an information processing method, performed by a first device worn by a first object, the method including:
Determining to start information acquisition;
When information acquisition is determined to start, acquiring target information, wherein the target information at least comprises behavior information of a second object, and the target information is at least used to assist the first object in caring for the second object.
Optionally, when the information acquisition is determined to be started, acquiring the target information includes:
When the information acquisition is determined to be started, acquiring the image information of the first object, the image information of the second object, the voice information of the first object, the voice information of the space where the second object is located, the current position information of the first object and/or the operation information of the first object.
Optionally, the method further comprises:
And sending the target information to a second device, wherein the target information is used by the second device to analyze the behavior result of the first object and/or the second object.
Optionally, the method further comprises:
Acquiring auxiliary information determined based on the target information;
and outputting the auxiliary information.
Optionally, the auxiliary information includes at least one of:
The prompt information is used for prompting the first object to provide a first preset behavior for the second object or prompting the first object to execute a second preset behavior;
Teaching information for teaching the first object to look after the second object;
entertainment information for relieving the fatigue of the first object in caring for the second object.
Optionally, the teaching information includes a teaching video, and the method further includes:
Determining, according to the position information of the first object, the display position and display angle of the teaching video in space.
Optionally, the acquiring auxiliary information determined based on the target information includes:
receiving the auxiliary information returned by the second device based on a behavior analysis result of the target information;
or
analyzing the behavior result of the first object and/or the second object according to the target information, and determining the auxiliary information according to the behavior result.
Optionally, the determining auxiliary information according to the behavior result includes:
determining an occurrence time pattern of a third preset behavior according to the behavior analysis result;
predicting, according to the occurrence time pattern, the time at which the third preset behavior will occur next;
And determining the prompt information to be output before or at the occurrence time.
Optionally, the determining to start information collection includes:
And determining to start information acquisition according to the received information acquisition instruction sent by the first object.
A second aspect of an embodiment of the present application provides an information processing method, performed by a second device, the method including:
Receiving target information sent by a first device, wherein the first device is a device worn by a first object;
Storing the target information, wherein the target information at least comprises behavior information of a second object, and the target information is at least used to assist the first object in caring for the second object.
Optionally, the method further comprises:
analyzing the behavior result of the first object and/or the second object according to the target information;
And sending auxiliary information to the first device based on the behavior result.
Optionally, the target information at least includes: image information of the first object, image information of the second object, voice information of the first object, voice information of the space where the second object is located, current position information of the first object, and/or operation information of the first object.
Optionally, the auxiliary information includes at least one of:
The prompt information is used for prompting the first object to provide a first preset behavior for the second object or prompting the second object to execute a second preset behavior;
Teaching information for teaching the first object to look after the second object;
entertainment information for relieving the fatigue of the first object in caring for the second object.
Optionally, the method further comprises:
determining an occurrence time pattern of a third preset behavior according to the behavior analysis result;
predicting, according to the occurrence time pattern, the time at which the third preset behavior will occur next;
And determining the prompt information to be output before or at the occurrence time.
A third aspect of an embodiment of the present application provides an information processing apparatus, including:
the first determining module is used for determining to start information acquisition;
The acquisition module is used for acquiring target information when information acquisition is determined to start, wherein the target information at least comprises behavior information of a second object, and the target information is at least used to assist the first object in caring for the second object.
Optionally, the acquisition module is configured to:
When the information acquisition is determined to be started, acquiring the image information of the first object, the image information of the second object, the voice information of the first object, the voice information of the space where the second object is located, the current position information of the first object and/or the operation information of the first object.
Optionally, the apparatus further comprises:
And the first sending module is used for sending the target information to a second device, wherein the target information is used by the second device to analyze the behavior result of the first object and/or the second object.
Optionally, the apparatus further comprises:
The acquisition module is used for acquiring auxiliary information determined based on the target information;
and the output module is used for outputting the auxiliary information.
Optionally, the auxiliary information includes at least one of:
The prompt information is used for prompting the first object to provide a first preset behavior for the second object or prompting the first object to execute a second preset behavior;
Teaching information for teaching the first object to look after the second object;
entertainment information for relieving the fatigue of the first object in caring for the second object.
Optionally, the teaching information includes a teaching video, and the apparatus further includes:
and the second determining module is used for determining the display position and the display angle of the teaching video in the space according to the position information of the first object.
Optionally, the acquiring module is configured to:
receive the auxiliary information returned by the second device based on a behavior analysis result of the target information;
or
analyze the behavior result of the first object and/or the second object according to the target information, and determine the auxiliary information according to the behavior result.
Optionally, the acquiring module is specifically configured to:
determining an occurrence time pattern of a third preset behavior according to the behavior analysis result;
predicting, according to the occurrence time pattern, the time at which the third preset behavior will occur next;
And determining the prompt information to be output before or at the occurrence time.
Optionally, the first determining module is configured to:
And determining to start information acquisition according to the received information acquisition instruction sent by the first object.
A fourth aspect of an embodiment of the present application provides an information processing apparatus, including:
the receiving module is used for receiving target information sent by a first device, wherein the first device is a device worn by a first object;
The storage module is used for storing the target information, wherein the target information at least comprises behavior information of a second object, and the target information is at least used to assist the first object in caring for the second object.
Optionally, the apparatus further comprises:
The analysis module is used for analyzing the behavior result of the first object and/or the second object according to the target information;
and the second sending module is used for sending auxiliary information to the first device based on the behavior result.
Optionally, the target information at least includes: image information of the first object, image information of the second object, voice information of the first object, voice information of the space where the second object is located, current position information of the first object, and/or operation information of the first object.
Optionally, the auxiliary information includes at least one of:
The prompt information is used for prompting the first object to provide a first preset behavior for the second object or prompting the second object to execute a second preset behavior;
Teaching information for teaching the first object to look after the second object;
entertainment information for relieving the fatigue of the first object in caring for the second object.
Optionally, the apparatus further comprises:
the third determining module is used for determining an occurrence time pattern of a third preset behavior according to the behavior analysis result;
The prediction module is used for predicting, according to the occurrence time pattern, the time at which the third preset behavior will occur next;
and the output module is used for determining the prompt information to be output before or at the occurrence time.
According to a fifth aspect of embodiments of the present application, there is provided a non-transitory computer-readable storage medium storing instructions which, when executed by a processor of a computer, cause the computer to perform the information processing method described above.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
In the embodiment of the application, the first object collects the behavior information of the second object through the worn first device. The first device can therefore assist the first object in caring for the second object based on the collected behavior information without occupying the first object's hands, so that the first object can care for the second object wholeheartedly. This improves the care effect of the first object on the second object and reduces the first object's care pressure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a flow chart of an information processing method according to an exemplary embodiment;
Fig. 2 is a flow chart of another information processing method according to another exemplary embodiment;
Fig. 3 is a schematic structural diagram of an information processing apparatus according to another exemplary embodiment;
Fig. 4 is a schematic structural diagram of another information processing apparatus according to another exemplary embodiment;
Fig. 5 is a schematic structural diagram of a first device according to another exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus consistent with aspects of the application as detailed in the accompanying claims.
Before explaining the embodiment of the present application in detail, an application scenario of the embodiment of the present application is explained.
The information processing method provided by the embodiment of the application can be used in nursing scenes of infants and people incapable of self-care in life.
For example, in a scenario where a new mother nurses a baby: due to a lack of child-care experience, the new mother generally needs to search for network videos on a mobile phone to learn child-care knowledge such as brewing milk powder and changing diapers, and to record the baby's living habits, growth curve, and the like through the mobile phone. However, when the baby is crying, the new mother has to operate the mobile phone to search for and learn related child-care knowledge while soothing the baby, in order to find the cause of the crying and a solution. This brings much inconvenience to the new mother and increases her child-care pressure.
Based on this, the new mother can wear a first device capable of executing the information processing method of the embodiment of the application to assist in child care. The first device can automatically acquire image information and/or voice information of the infant and the caregiver, automatically play a corresponding child-care course according to the acquired image information and/or voice information, and record the infant's living habits and growth curve, so that the new mother can learn the corresponding child-care knowledge while soothing the infant, reducing her child-care burden.
In addition, wearing a first device capable of executing the information processing method of the embodiment of the application can provide audio-visual entertainment for the infant's caregiver to relieve the caregiver's child-care pressure. Meanwhile, the infant's safety can be monitored by collecting image information and/or voice information of the infant, and the caregiver is reminded in time when the infant behaves abnormally, ensuring the infant's safety.
It should be noted that the foregoing are only some exemplary application scenarios provided by the embodiments of the present application, and do not limit the application scenarios of the information processing method provided by the embodiments of the present application.
As shown in fig. 1, an embodiment of the present application provides an information processing method, which is performed by a first device worn by a first object, the method including:
S101: determining to start information acquisition;
S102: when the start of information acquisition is determined, acquiring target information, wherein the target information at least comprises: behavior information of a second object, wherein the target information is at least for assistance of the first object in caring for the second object.
The first device herein may be a wearable smart device including an image capturing device and/or a voice capturing device, for example, AR (Augmented Reality) glasses, MR (Mixed Reality) glasses, AR helmets, and the like, and may also be other wearable smart devices having an image capturing function and/or a voice capturing function, which is not limited in the embodiment of the present application.
In addition, the first device of the embodiment of the present application may have information processing capability, or may have only information transmission capability without an information processing function, which the embodiment of the present application does not limit.
When the first device of one embodiment of the present application has information processing capability, it can process the acquired target information by itself and output corresponding response information.
When the first device of one embodiment of the present application does not have information processing capability, it may transmit the acquired target information to a second device having information processing capability through a wired or wireless network, and receive a processing result returned by the second device based on the target information. Of course, even if the first device has information processing capability, it may still send the acquired target information to the second device and receive the processing result returned by the second device.
The second object is a person who lacks the ability to take care of himself or herself and needs care from others, for example, an infant, a person with mobility impairment, a seriously ill patient in need of nursing, an elderly person, and the like.
The first object is a caregiver able to take care of the second object, such as a new mother or a medical worker.
The behavior information of the second object may include limb movements, expressions, sounds, and the like of the second object, and may also be other behavior information of the second object, which is not limited in the embodiment of the present application.
In the embodiment of the application, the first object collects target information such as the behavior information of the second object through the worn first device. The first device can therefore assist the first object in caring for the second object based on the collected behavior information without occupying the first object's hands, so that the first object can care for the second object wholeheartedly, improving the care effect of the first object on the second object and reducing the first object's care pressure.
In one embodiment, the target information collected by the first device includes: image information of a first object, image information of a second object, voice information of the first object, voice information of a space where the second object is located, current location information of the first object, and/or operation information of the first object.
Illustratively, the acquired image information of the first object at least includes image information of the process in which the first object cares for the second object.
For example, image information of the first object brewing milk powder for the second object, image information of the first object changing a diaper for the second object, image information of the first object changing a dressing for or feeding medicine to the second object, and the like.
In addition, the collected image information of the first object may also be the image information of the first object collected by other image collecting devices received by the first device when the first device determines to start information collection.
For example, when the first object is located indoors, the collected image information of the first object may be image information of the first object collected by an indoor installed monitoring device received by the first device.
Illustratively, the acquired image information of the second object includes at least: image information of the second object when the second object requires the first object to provide assistance.
For example, image information of the second object collected when the second object is crying, image information collected when the second object falls, image information of the second object's face collected when the second object shows a pained expression, and the like.
In addition, the image information of the second object may be the image information of the second object acquired by the first device, or may be the image information of the second object sent by other image acquisition devices received by the first device, which is not limited in the embodiment of the present application.
Illustratively, the collected voice information of the first object includes: voice information of commands issued by the first object for the first device to execute, voice information issued by the first object to be recorded by the first device and related to the second object, and sounds made by the first object while assisting the second object.
For example: a voice command "turn on the music player" issued by the first object; a height and weight recording command issued by the first object, such as "xx is 85 cm tall and weighs 4 kg"; a feeding record instruction issued by the first object, such as "feeding time 15:30"; and a dressing-change time record instruction issued by the first object after a dressing change is completed.
Illustratively, the collected voice information of the space where the second object is located includes: crying made by the second object, the sound of the second object colliding with surrounding objects, abnormal sounds made by objects in the space where the second object is located, abnormal moaning or breathing sounds made by the second object, and the like.
Illustratively, the current position information of the first object includes two-dimensional or three-dimensional position information of the first object's current position, based on which the position of the first object's line of sight can be determined.
When the position information is two-dimensional information, the position information may include: coordinate information of the ground in the space where the first object is located.
When the position information is three-dimensional information, the position information may be: the coordinates of the ground in the space where the first object is located, the height information relative to the ground, and the like.
Illustratively, the operation information of the first object includes auxiliary actions taken by the first object while caring for the second object.
For example, image information of the first object interacting with the second object, and image information of the first object putting the second object to sleep.
In one embodiment, the method further comprises:
And sending the target information to a second device, wherein the target information is used by the second device to analyze the behavior result of the first object and/or the second object.
For example, when the target information collected by the first device is image information of the first object, and the image information includes characteristic information such as a milk bottle, milk powder, a milk powder can, and water, the collected target information may be sent to the second device, so that the second device analyzes the behavior result of the first object according to the received target information.
In the embodiment of the application, the analysis of the behavior result of the first object and/or the second object according to the target information is carried out in the second device. This reduces the processing capacity required of the processor configured in the first device, reduces the weight and volume of the processor part and thus of the first device, and improves the wearing comfort of the user.
If the first device has information processing capability, the collected target information can also be analyzed by the first device to determine the behavior result of the first object and/or the second object.
For example, when the target information collected by the first device is image information of the first object, and the image information includes characteristic information such as a milk bottle, milk powder, a milk powder can, and water, it may be determined that the behavior of the first object is brewing milk powder, and the resulting behavior result of the first object is feeding the second object.
The first device may pre-store a behavior result analysis model, or a correspondence table between feature information and behavior results. The first device may then determine the behavior result of the first object from feature information such as the milk bottle, milk powder, milk powder can, and water contained in the image information, through the behavior result analysis model or the correspondence table.
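As a minimal sketch only (the patent does not disclose the analysis model or the contents of the correspondence table), such a table lookup might look like the following; the feature labels and behavior names are hypothetical:

```python
# Hedged sketch of a feature-to-behavior correspondence table. The feature
# labels and behavior names are hypothetical; the patent leaves the detection
# model and the table contents unspecified.

BEHAVIOR_TABLE = {
    frozenset({"milk_bottle", "milk_powder", "water"}): "feeding_second_object",
    frozenset({"medicine", "water_cup"}): "assisting_medication",
    frozenset({"diaper"}): "changing_diaper",
}

def analyze_behavior(detected_features):
    """Return the behavior whose required features all appear in the image,
    or an empty string when nothing in the table matches."""
    for required, behavior in BEHAVIOR_TABLE.items():
        if required <= detected_features:       # subset test
            return behavior
    return ""

# Example: features extracted from image information of the first object.
print(analyze_behavior({"milk_bottle", "milk_powder", "water", "hand"}))
# -> feeding_second_object
```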
The first device may also determine the behavior result of the first object in other manners, which is not limited by the embodiment of the present application.
For example, when the target information collected by the first device is voice information of the space where the second object is located, whether the collected voice is the crying of the second object may be judged from information such as its volume and tone; if so, it is determined that the second object is currently crying.
For another example, when the acquired image information of the first object includes characteristic information such as medicine and a water cup, it may be determined from this characteristic information that the first object is assisting the second object in taking medicine.
In addition, the first device can also determine, from the acquired image of the medicine's outer packaging, whether the medicine the first object is giving to the second object is correct.
If the medicine does not match the pre-stored image of the medicine to be taken by the second object, prompt information is output so that the first object can confirm whether the medicine is correct.
The implementation by which the first device judges whether the second object is crying from the collected voice information of the space where the second object is located may refer to other related technologies, and is not described in detail here.
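Purely as a hedged sketch of the volume-and-tone screening described above, the check might be approximated as follows, where the RMS threshold and the pitch band are invented illustration values:

```python
# Hedged sketch: flag a short audio frame as possible crying from its volume
# (RMS energy) and a crude pitch estimate. The threshold and pitch band are
# assumptions; the patent defers the real method to related technologies.
import numpy as np

def is_possible_crying(frame, sample_rate,
                       rms_threshold=0.1,
                       pitch_band_hz=(250.0, 600.0)):
    frame = np.asarray(frame, dtype=float)
    rms = float(np.sqrt(np.mean(frame ** 2)))          # loudness
    if rms < rms_threshold:
        return False
    spectrum = np.abs(np.fft.rfft(frame))              # magnitude spectrum
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    pitch = float(freqs[int(np.argmax(spectrum[1:])) + 1])  # skip DC bin
    return pitch_band_hz[0] <= pitch <= pitch_band_hz[1]
```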
In the embodiment of the application, the first device analyzes the acquired target information directly with its own processor, avoiding data transmission between the first device and the second device, shortening the transmission time of the target information, and allowing the analysis result to be obtained more promptly.
In one embodiment, the method further comprises:
Acquiring auxiliary information determined based on the target information;
and outputting the auxiliary information.
The implementation process of determining the auxiliary information based on the target information may be performed by the first device or may be performed by the second device.
Illustratively, the acquiring auxiliary information determined based on the target information includes:
receiving the auxiliary information returned by the second device based on a behavior analysis result of the target information;
or
analyzing the behavior result of the first object and/or the second object according to the target information, and determining the auxiliary information according to the behavior result.
As can be seen from the above description, when the first device does not have the information processing capability, the collected target information may be sent to the second device, so that the second device may determine a corresponding behavior result based on the target information, and send corresponding auxiliary information to the first device according to the behavior result.
Accordingly, the first device receives the auxiliary information sent by the second device and outputs the auxiliary information, so that the first object can provide assistance for the second object based on the auxiliary information.
When the first device has information processing capability, the collected target information can be analyzed by self to determine a behavior result corresponding to the target information, and corresponding auxiliary information is output according to the behavior result.
In addition, even when the first device has processing capability, it may choose to send the target information to the second device, and the second device analyzes the behavior result corresponding to the target information and provides auxiliary information corresponding to the behavior result.
Illustratively, the auxiliary information includes at least one of:
The prompt information is used for prompting the first object to provide a first preset behavior for the second object or prompting the first object to execute a second preset behavior;
Teaching information for teaching the first object to look after the second object;
entertainment information for relieving the fatigue of the first object in caring for the second object.
For example, when the first device determines, based on the collected target information, that the second object is crying, it may consult the feeding records of the second object before the current time and judge, based on those records, whether the current time is the second object's feeding time. If so, the crying is likely due to hunger, and feeding prompt information is output to prompt the first object to feed the second object.
If not, prompt information that the second object is crying is output, to prompt the first object to look for the cause of the crying.
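A minimal sketch of this check, assuming feeding records are kept as timestamps and a hypothetical 4-hour feeding interval:

```python
# Hedged sketch of the crying-time check above. The record format, the 4-hour
# interval, and the prompt wording are assumptions for illustration.
from datetime import datetime, timedelta

FEEDING_INTERVAL = timedelta(hours=4)  # assumed typical interval

def crying_prompt(feeding_records, now):
    """feeding_records: list of datetimes of past feedings."""
    if feeding_records and now - max(feeding_records) >= FEEDING_INTERVAL:
        return "Feeding prompt: the second object may be crying from hunger."
    return "The second object is crying; please check for other causes."
```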
For example, if the second object is an infant and it is detected from the image information that the infant is sucking its fingers, prompt information may be output that the infant may want to drink milk or is sucking its fingers, to inform the first object caring for the infant.
The prompting information output by the first device may be voice prompting information, text prompting information, or both voice prompting information and text prompting information.
If the second device judges, based on the received target information collected by the first device, that the second object is crying, it may send the feeding prompt information or the prompt information that the second object is crying to the first device; correspondingly, the first device outputs the prompt information after receiving it.
The teaching information may be two-dimensional (2D) or three-dimensional (3D) teaching video, may be speech information for teaching, and may also be text or picture information, which is not limited in the embodiment of the present application.
When the first device determines, according to the acquired image information of the second object, that the second object's diaper may be wet, it can output prompt information and search for and play related teaching videos or teaching pictures on diaper changing from the network, so that the first object can change the second object's diaper with reference to the teaching videos or pictures.
The implementation by which the first device determines from the image information of the second object that the diaper is wet may refer to related technologies, and is not described in detail here.
Similarly, the above process of determining whether the second object's diaper is wet and searching the network for related diaper-changing teaching videos or pictures may also be performed by the second device, which sends the retrieved teaching videos or pictures to the first device for display.
Illustratively, the first device may also determine whether the first object is fatigued based on the facial features, mouth, and eye state of the first object in the acquired image information.
If so, videos or music that can relieve fatigue may be searched for and played to help the first object relieve fatigue.
For example, when the first object is to perform auxiliary actions such as massage for the second object, the first device may, according to the collected voice information of the first object about massage, display a human-body acupoint map and a related teaching video of the massage method on the virtual screen.
In one embodiment, the teaching information includes a teaching video, and the method further includes:
determining, according to the position information of the first object, the display position and display angle of the teaching video in space.
The first device may acquire the current position information of the first object by means of SLAM (Simultaneous Localization and Mapping), may determine it from received image information containing the first object acquired by other image acquisition devices, or may determine it in other ways, which the embodiment of the present application does not limit.
For example, since the first object needs to move around with the feeding bottle, the milk powder can, and so on while brewing milk powder, if a related teaching video displayed at a fixed position elsewhere had to be watched at the same time, watching the video while brewing could cause the milk powder to be spilled outside the bottle, or cause scalding due to distraction.
Based on this, in the embodiment of the application, the first device can acquire the position information of the first object while playing the teaching video, and determine the position of the first object's line of sight from that position information, so that the virtual screen playing the teaching video follows the first object's line of sight. This makes it convenient for the first object to learn the brewing method from the teaching video while brewing the milk powder, improving the user experience.
It should be noted that, if the entertainment information is video information, the virtual screen playing the entertainment video may likewise be controlled to follow the first object's line of sight, improving the fatigue-relieving effect for the first object.
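As an illustration only (the patent does not specify the geometry), the virtual screen could be re-anchored each frame at a fixed distance along the line of sight; the 0.8 m distance and the pose convention below are assumptions:

```python
# Hedged sketch: place the virtual screen a fixed distance along the first
# object's line of sight so that it follows head movement. The 0.8 m viewing
# distance and the x-forward/y-left coordinate convention are assumptions.
import numpy as np

def screen_pose(head_position, gaze_direction, distance=0.8):
    gaze = np.asarray(gaze_direction, dtype=float)
    gaze = gaze / np.linalg.norm(gaze)
    position = np.asarray(head_position, dtype=float) + distance * gaze
    # Display angle: yaw the screen so that it faces back toward the user.
    yaw_deg = np.degrees(np.arctan2(gaze[1], gaze[0]))
    return position, yaw_deg
```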
In one embodiment, the determining auxiliary information according to the behavior result includes:
determining an occurrence time pattern of a third preset behavior according to the behavior analysis result;
predicting, according to the occurrence time pattern, the time at which the third preset behavior will occur next;
and outputting the prompt information before or at the occurrence time.
In an exemplary embodiment, when the first device determines from the collected image information of the first object that the first object is brewing milk powder, it may record the current time as a feeding time of the second object. The first device may also record the second object's feeding times according to the voice information of the first object. From a plurality of feeding times recorded before the current time, the first device can determine a target time for the next feeding of the second object and output feeding prompt information before or at the target time, so that the first object can feed the second object in time.
For example, if the first device determines from the second object's feeding records before the current time that the feeding interval is about 4 hours, and the current time is 15:00, it may output feeding prompt information at 19:00 or 18:55 to prompt the first object to feed the second object.
The first device may further determine a target time for the second object's next dose of medicine according to the times at which the second object took medicine before the current time, and remind the first object to assist the second object in taking the medicine before or at the target time.
The way the first device determines the target time of the second object's next dose may refer to the way the second object's feeding time is determined, and is not repeated here.
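A sketch of this interval-based prediction, under the assumption that occurrence times are logged as timestamps and that a simple mean interval suffices; the 5-minute lead time mirrors the 18:55 example above:

```python
# Hedged sketch of predicting the next occurrence of a recorded behavior from
# the mean interval between past occurrences; the lead time is an assumption.
from datetime import datetime, timedelta

def predict_next(occurrences, lead=timedelta(minutes=5)):
    """occurrences: chronologically ordered datetimes of past occurrences."""
    intervals = [b - a for a, b in zip(occurrences, occurrences[1:])]
    mean_interval = sum(intervals, timedelta()) / len(intervals)
    next_time = occurrences[-1] + mean_interval
    return next_time, next_time - lead   # occurrence time, prompt time

feeds = [datetime(2022, 11, 21, 7, 0), datetime(2022, 11, 21, 11, 0),
         datetime(2022, 11, 21, 15, 0)]
print(predict_next(feeds))  # next feeding 19:00, prompt at 18:55
```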
In one embodiment, the start of information collection is determined based on a received information collection instruction issued by the first object.
The determination to start information collection may be triggered by the first object pressing a key or virtual button on the first device, by the first device detecting through its own sensor that the first object is wearing the first device, or by receiving an information collection instruction issued by the first object.
In addition, the first device may be triggered to collect information by other triggering conditions, which is not limited in the embodiment of the present application.
The information collection instruction issued by the first object may be a special field preset in the first device for starting information collection. For example, when the first object needs the first device to collect target information, a voice instruction such as "start" or "collect" may be issued; correspondingly, when the first device receives such a voice instruction, it determines to start information collection.
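As a hedged sketch, matching the recognized speech against the preset fields could be as simple as a substring check; the field list echoes the examples above and is otherwise assumed:

```python
# Hedged sketch of the voice-triggered start of information collection.
# The preset fields come from the examples above; a real system would use a
# proper wake-word or intent-recognition model.
COLLECTION_FIELDS = ("start", "collect")

def should_start_collection(recognized_text):
    text = recognized_text.strip().lower()
    return any(field in text for field in COLLECTION_FIELDS)
```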
As shown in fig. 2, the embodiment of the application also provides an information processing method, which is executed by a second device and includes:
S201: receiving target information sent by first equipment; the first device is a device worn by a first object;
S202: storing the target information, wherein the target information at least comprises: behavior information of a second object, wherein the target information is at least for assistance of the first object in caring for the second object.
The second device may be a processor or a terminal device with information processing capability, for example, a smart phone, a tablet computer, a notebook computer, a desktop computer, etc.
The second device may also be a cloud platform with processing capabilities.
As can be seen from the above description, the first device may process the collected target information itself, or may send the collected target information to the second device.
When the first device sends the collected target information to the second device, the second device receives the target information sent by the first device and stores the target information.
In one embodiment, the method further comprises:
analyzing the behavior result of the first object and/or the second object according to the target information;
And sending auxiliary information to the first device based on the behavior result.
The process of the second device analyzing the behavior result of the first object and/or the second object according to the received target information may refer to the above implementation process, and the embodiments of the present application are not described herein again.
Illustratively, the target information at least includes: image information of the first object, image information of the second object, voice information of the first object, voice information of the space where the second object is located, current position information of the first object, and/or operation information of the first object.
Illustratively, the auxiliary information includes at least one of:
The prompt information is used for prompting the first object to provide a first preset behavior for the second object or prompting the second object to execute a second preset behavior;
Teaching information for teaching the first object to look after the second object;
entertainment information for relieving the fatigue of the first object in caring for the second object.
In one embodiment, the method further comprises:
determining an occurrence time pattern of a third preset behavior according to the behavior analysis result;
predicting, according to the occurrence time pattern, the time at which the third preset behavior will occur next;
and outputting the prompt information before or at the occurrence time.
The specific implementation process of predicting the occurrence time of the next occurrence of the third preset behavior by the second device according to the occurrence time rule of the third preset behavior may refer to the implementation process, and the embodiment of the present application is not described herein again.
The embodiment of the application also provides an information processing method, which comprises the following steps:
The external scene is captured in real time by a camera on the AR glasses, and the related images are transmitted to the second device for behavior analysis. The second device understands the behaviors occurring in the current scene by extracting image semantic information, analyzing object interaction logic, and the like; when a behavior matches a corresponding behavior in the algorithm library, the current behavior state and the behavior occurrence time are recorded and stored in the database.
The database is periodically queried for the behavior that should occur at the current time, and when it is judged that the behavior should now occur, the caregiver is reminded by message push.
For example, when the caregiver brews milk powder for the infant, the second device recognizes, through semantic recognition and interaction logic analysis of the milk powder and the milk bottle, that the current behavior is brewing milk powder, and records the brewing time and the amount of milk. Several hours later, the second device judges, by comparison with the infant behavior patterns in the database, that the infant should be fed again, and pushes a reminder message to the caregiver. The caregiver can switch the related functions on and off at any time, and can call up and check the infant's feeding pattern over a certain period at any time.
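The periodic check might be sketched as follows; `due_behaviors` and `push` are hypothetical interfaces standing in for the database query and the message-push channel described above:

```python
# Hedged sketch of the periodic reminder loop. `database.due_behaviors()` and
# `push()` are hypothetical stand-ins for the database query and the push
# channel; the 5-minute polling period is an assumption.
import time
from datetime import datetime

def poll_and_remind(database, push, period_s=300):
    while True:
        for behavior, due_time in database.due_behaviors():
            if datetime.now() >= due_time:
                push("Reminder: it may be time to %s." % behavior)
        time.sleep(period_s)
```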
The caregiver can also actively record the infant's living habits through voice input and other means.
The caregiver can wake up the AR glasses by voice. The AR glasses understand the caregiver's needs through speech recognition, retrieve the corresponding teaching video or text from the network database, and push it to the caregiver. If it is a stereoscopic animated teaching video, the AR glasses can render, through a real-time localization algorithm, the image the caregiver should see in the current pose, so that the caregiver can watch and learn from the teaching video from any angle.
AR glasses may also automatically push courses through judgment of current behavior.
The AR glasses can also provide audio-visual entertainment for the user to relieve child-care pressure, without disturbing the infant. While the user is being entertained, the AR glasses continue to provide behavior monitoring; if the infant behaves abnormally, the AR glasses push relevant prompts to the user, ensuring the infant's safety.
Referring to fig. 3, an embodiment of the present application further provides an information processing apparatus, including:
A first determining module 301, configured to determine to start information collection;
The acquisition module 302 is configured to acquire target information when it is determined to start information acquisition, where the target information at least includes behavior information of a second object, and the target information is at least used to assist the first object in caring for the second object.
Optionally, the acquisition module is configured to:
When the information acquisition is determined to be started, acquiring the image information of the first object, the image information of the second object, the voice information of the first object, the voice information of the space where the second object is located, the current position information of the first object and/or the operation information of the first object.
Optionally, the apparatus further comprises:
And the first sending module is used for sending the target information to a second device, wherein the target information is used by the second device to analyze the behavior result of the first object and/or the second object.
Optionally, the apparatus further comprises:
The acquisition module is used for acquiring auxiliary information determined based on the target information;
and the first output module is used for outputting the auxiliary information.
Optionally, the auxiliary information includes at least one of:
The prompt information is used for prompting the first object to provide a first preset behavior for the second object or prompting the first object to execute a second preset behavior;
Teaching information for teaching the first object to look after the second object;
entertainment information for relieving the fatigue of the first object in caring for the second object.
Optionally, the teaching information includes a teaching video, and the apparatus further includes:
and the second determining module is used for determining the display position and the display angle of the teaching video in the space according to the position information of the first object.
Optionally, the acquiring module is configured to:
receive the auxiliary information returned by the second device based on a behavior analysis result of the target information;
or
analyze the behavior result of the first object and/or the second object according to the target information, and determine the auxiliary information according to the behavior result.
Optionally, the acquiring module is specifically configured to:
determining an occurrence time pattern of a third preset behavior according to the behavior analysis result;
predicting, according to the occurrence time pattern, the time at which the third preset behavior will occur next;
And determining the prompt information to be output before or at the occurrence time.
Optionally, the first determining module is configured to:
And determining to start information acquisition according to the received information acquisition instruction sent by the first object.
Referring to fig. 4, an embodiment of the present application further provides an information processing apparatus, including:
A receiving module 401, configured to receive target information sent by a first device, wherein the first device is a device worn by a first object;
A saving module 402, configured to save the target information, where the target information at least includes behavior information of a second object, and the target information is at least used to assist the first object in caring for the second object.
Optionally, the apparatus further comprises:
The analysis module is used for analyzing the behavior result of the first object and/or the second object according to the target information;
and the second sending module is used for sending auxiliary information to the first device based on the behavior result.
Optionally, the target information at least includes: image information of the first object, image information of the second object, voice information of the first object, voice information of the space where the second object is located, current position information of the first object, and/or operation information of the first object.
Optionally, the auxiliary information includes at least one of:
The prompt information is used for prompting the first object to provide a first preset behavior for the second object or prompting the second object to execute a second preset behavior;
Teaching information for teaching the first object to look after the second object;
entertainment information for relieving the fatigue of the first object in caring for the second object.
Optionally, the apparatus further comprises:
the third determining module is used for determining an occurrence time pattern of a third preset behavior according to the behavior analysis result;
The prediction module is used for predicting, according to the occurrence time pattern, the time at which the third preset behavior will occur next;
And the second output module is used for determining the prompt information to be output before or at the occurrence time.
Referring to fig. 5, in an embodiment of the present application, there is provided a first apparatus 500 including:
a memory 504 for storing processor-executable instructions;
a processor 520 coupled to the memory 504;
Wherein the processor 520 is configured to perform the information processing method provided by any of the foregoing technical solutions.
Fig. 5 shows a block diagram of the first device 500 according to an exemplary embodiment. For example, the first device 500 may be AR glasses, MR glasses, an AR helmet, or the like.
Referring to fig. 5, a first device 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the first device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 502 may include one or more processors 520 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interactions between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operations at the first device 500. Examples of such data include instructions for any application or method operating on the first device 500, contact data, phonebook data, messages, pictures, video, and so forth. The memory 504 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 506 provides power to the various components of the first device 500. The power components 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the first device 500.
The multimedia component 508 comprises a screen providing an output interface between the first device 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 508 includes a front-facing camera and/or a rear-facing camera. When the first device 500 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the first device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 further comprises a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 514 includes one or more sensors for providing status assessments of various aspects of the first device 500. For example, the sensor assembly 514 may detect an on/off state of the first device 500 and the relative positioning of components, such as the display and keypad of the first device 500. The sensor assembly 514 may also detect a change in position of the first device 500 or a component of the first device 500, the presence or absence of user contact with the first device 500, the orientation or acceleration/deceleration of the first device 500, and a change in temperature of the first device 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the first device 500 and other devices. The first device 500 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 516 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the first device 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as the memory 504, including instructions executable by the processor 520 of the first device 500 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Embodiments of the present application provide a non-transitory computer-readable storage medium storing instructions which, when executed by a processor of a computer, enable the computer to perform the information processing method according to one or more of the foregoing technical solutions.
The processor, when executing the instructions, is capable of performing at least the steps of:
Determining to start information acquisition;
When the start of information acquisition is determined, acquiring target information, wherein the target information at least comprises: behavior information of a second object, wherein the target information is at least for assistance of the first object in caring for the second object.
Optionally, when it is determined to start information acquisition, acquiring the target information includes:
When it is determined to start information acquisition, acquiring the image information of the first object, the image information of the second object, the voice information of the first object, the voice information of the space where the second object is located, the current position information of the first object, and/or the operation information of the first object.
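As a hedged illustration only, the record produced by such an acquisition step could be modeled as below; the field names and the `sensors` facade are invented for this sketch and are not part of the application.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TargetInfo:
    # Every field is optional: the application lists the information
    # types as "and/or" alternatives.
    timestamp: float = field(default_factory=time.time)
    caregiver_image: Optional[bytes] = None   # image information of the first object
    child_image: Optional[bytes] = None       # image information of the second object
    caregiver_voice: Optional[bytes] = None   # voice information of the first object
    room_audio: Optional[bytes] = None        # voice in the space of the second object
    position: Optional[tuple] = None          # current position of the first object
    operation: Optional[str] = None           # operation information of the first object

def collect_target_info(sensors) -> TargetInfo:
    # `sensors` stands in for the cameras, microphones, and positioning
    # hardware of the worn first device.
    return TargetInfo(
        caregiver_image=sensors.capture("inward_camera"),
        child_image=sensors.capture("outward_camera"),
        room_audio=sensors.record("microphone"),
        position=sensors.locate(),
    )
```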
Optionally, the method further comprises:
Sending the target information to a second device, wherein the target information is used by the second device to analyze a behavior result of the first object and/or the second object.
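One plausible wire format, assuming an IP link between the two devices, is sketched here; the transport, host address, port, and framing are all assumptions of the sketch, and the matching server side appears further below for the second device.

```python
import json
import socket

def send_target_info(info: dict,
                     host: str = "192.168.1.20",  # assumed address of the second device
                     port: int = 9000) -> dict:
    # Serialize the target information, send it to the second device,
    # and read back whatever auxiliary information it returns.
    with socket.create_connection((host, port)) as conn:
        conn.sendall(json.dumps(info).encode("utf-8"))
        conn.shutdown(socket.SHUT_WR)  # signal end of request
        reply = conn.recv(65536)
    return json.loads(reply.decode("utf-8"))
```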
Optionally, the method further comprises:
Acquiring auxiliary information determined based on the target information;
and outputting the auxiliary information.
Optionally, the auxiliary information includes at least one of:
Prompt information for prompting the first object to provide a first preset behavior for the second object, or for prompting the first object to execute a second preset behavior;
Teaching information for teaching the first object how to look after the second object;
Entertainment information for relieving the fatigue of the first object in caring for the second object.
Optionally, the teaching information includes a teaching video, and the method further includes:
Determining, according to the position information of the first object, the display position and display angle of the teaching video in real space.
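For intuition, here is a simplified 2-D sketch of such a placement computation; real AR glasses would use the full 6-DoF headset pose from their tracking stack, and the 1.2 m offset is an arbitrary assumption.

```python
import math

def teaching_video_pose(wearer_xy: tuple, facing_rad: float,
                        offset_m: float = 1.2) -> dict:
    # Anchor the video panel `offset_m` metres along the wearer's facing
    # direction, rotated to face back toward the wearer, so the display
    # position and angle follow the first object's position in real space.
    x = wearer_xy[0] + offset_m * math.cos(facing_rad)
    y = wearer_xy[1] + offset_m * math.sin(facing_rad)
    return {
        "position": (round(x, 3), round(y, 3)),
        "yaw_deg": (math.degrees(facing_rad) + 180.0) % 360.0,  # face the wearer
    }

# Wearer at the crib, facing along +x: panel appears 1.2 m ahead, turned back.
print(teaching_video_pose((0.0, 0.0), 0.0))
# {'position': (1.2, 0.0), 'yaw_deg': 180.0}
```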
Optionally, the acquiring auxiliary information determined based on the target information includes:
Receiving the auxiliary information returned by the second device based on a behavior analysis result of the target information;
or
Analyzing a behavior result of the first object and/or the second object according to the target information, and determining the auxiliary information according to the behavior result.
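The two branches can be read as a fallback structure; the sketch below makes that explicit, with the audio-level threshold, the guidance table, and the `request_analysis` call all being illustrative assumptions rather than anything the application specifies.

```python
def analyze_behavior_locally(info: dict) -> str:
    # Stand-in for on-device analysis; a real first device might run a
    # small cry-detection model over the room audio instead.
    return "crying" if info.get("room_audio_level", 0.0) > 0.7 else "calm"

GUIDANCE = {
    "crying": "Check hunger, diaper, and temperature; soothing steps follow.",
    "calm": "No action needed right now.",
}

def acquire_auxiliary_info(info: dict, second_device=None) -> str:
    if second_device is not None:
        # Branch 1: the second device analyzes the target information
        # and returns the auxiliary information.
        return second_device.request_analysis(info)
    # Branch 2: analyze the behavior result locally and map it to
    # auxiliary information on the first device itself.
    return GUIDANCE.get(analyze_behavior_locally(info), "No guidance available.")

print(acquire_auxiliary_info({"room_audio_level": 0.9}))
```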
Optionally, the determining the auxiliary information according to the behavior result includes:
Determining an occurrence time rule of a third preset behavior according to the behavior analysis result;
Predicting an occurrence time at which the third preset behavior will next occur, according to the occurrence time rule;
And determining prompt information to be output before or at the occurrence time.
Optionally, the determining to start information acquisition includes:
Determining to start information acquisition according to a received information acquisition instruction sent by the first object.
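A trivial sketch of such a trigger check follows; the wake phrases are invented for illustration, and a real device might instead use a gesture or a button press as the instruction.

```python
WAKE_PHRASES = {"start recording", "begin care log"}

def should_start_acquisition(instruction: str) -> bool:
    # Start information acquisition only on an explicit instruction
    # from the first object (e.g. a recognized voice command).
    return instruction.strip().lower() in WAKE_PHRASES

print(should_start_acquisition("Start recording"))  # True
```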
An embodiment of the present application further provides an information processing method performed by a second device, the method comprising the following steps:
Receiving target information sent by a first device, wherein the first device is a device worn by a first object;
Storing the target information, wherein the target information at least includes: behavior information of a second object, and the target information is at least used to assist the first object in caring for the second object.
Optionally, the method further comprises:
Analyzing a behavior result of the first object and/or the second object according to the target information;
And sending auxiliary information to the first device based on the behavior result.
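A minimal sketch of that receive-store-analyze-reply loop on the second device, matching the client sketch given earlier; the SQLite schema, port, and audio threshold are assumptions of this sketch, not of the application.

```python
import json
import socket
import sqlite3

def second_device_loop(db_path: str = "care_records.db", port: int = 9000) -> None:
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS records (ts REAL, payload TEXT)")
    server = socket.create_server(("0.0.0.0", port))
    while True:
        conn, _addr = server.accept()
        with conn:
            payload = conn.recv(65536).decode("utf-8")
            info = json.loads(payload)
            # Store the target information for later review.
            db.execute("INSERT INTO records VALUES (?, ?)",
                       (info.get("timestamp", 0.0), payload))
            db.commit()
            # Analyze the behavior result and return auxiliary information.
            crying = info.get("room_audio_level", 0.0) > 0.7
            reply = {"auxiliary_info":
                     "Soothing checklist: hunger, diaper, temperature."
                     if crying else "No action needed."}
            conn.sendall(json.dumps(reply).encode("utf-8"))
```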
Optionally, the target information includes at least: image information of the first object, image information of the second object, voice information of the first object, voice information of the space where the second object is located, current position information of the first object, and/or operation information of the first object.
Optionally, the auxiliary information includes at least one of:
Prompt information for prompting the first object to provide a first preset behavior for the second object, or for prompting the second object to execute a second preset behavior;
Teaching information for teaching the first object how to look after the second object;
Entertainment information for relieving the fatigue of the first object in caring for the second object.
Optionally, the method further comprises:
Determining an occurrence time rule of a third preset behavior according to the behavior analysis result;
Predicting an occurrence time at which the third preset behavior will next occur, according to the occurrence time rule;
And determining prompt information to be output before or at the occurrence time.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (17)

1. An information processing method performed by a first device worn by a first object, the method comprising:
Determining to start information acquisition;
When the start of information acquisition is determined, acquiring target information, wherein the target information at least comprises: behavior information of a second object, wherein the target information is at least for assistance of the first object in caring for the second object.
2. The method of claim 1, wherein, when it is determined to start information acquisition, acquiring target information comprises:
When it is determined to start information acquisition, acquiring the image information of the first object, the image information of the second object, the voice information of the first object, the voice information of the space where the second object is located, the current position information of the first object, and/or the operation information of the first object.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
Sending the target information to a second device, wherein the target information is used by the second device to analyze a behavior result of the first object and/or the second object.
4. The method according to claim 1 or 2, characterized in that the method further comprises:
Acquiring auxiliary information determined based on the target information;
and outputting the auxiliary information.
5. The method of claim 4, wherein the auxiliary information comprises at least one of:
Prompt information for prompting the first object to provide a first preset behavior for the second object, or for prompting the first object to execute a second preset behavior;
Teaching information for teaching the first object how to look after the second object;
Entertainment information for relieving the fatigue of the first object in caring for the second object.
6. The method of claim 5, wherein the teaching information comprises a teaching video, and the method further comprises:
Determining, according to the position information of the first object, the display position and display angle of the teaching video in real space.
7. The method of claim 4, wherein the acquiring auxiliary information determined based on the target information comprises:
Receiving the auxiliary information returned by the second device based on a behavior analysis result of the target information;
or
Analyzing a behavior result of the first object and/or the second object according to the target information, and determining the auxiliary information according to the behavior result.
8. The method of claim 7, wherein the determining the auxiliary information according to the behavior result comprises:
Determining an occurrence time rule of a third preset behavior according to the behavior analysis result;
Predicting an occurrence time at which the third preset behavior will next occur, according to the occurrence time rule;
And determining prompt information to be output before or at the occurrence time.
9. The method of claim 1, wherein the determining to start information acquisition comprises:
Determining to start information acquisition according to a received information acquisition instruction sent by the first object.
10. An information processing method, characterized by being performed by a second device, the method comprising:
Receiving target information sent by a first device, wherein the first device is a device worn by a first object;
Storing the target information, wherein the target information at least comprises: behavior information of a second object, and the target information is at least used to assist the first object in caring for the second object.
11. The method according to claim 10, characterized in that the method further comprises:
Analyzing a behavior result of the first object and/or the second object according to the target information;
And sending auxiliary information to the first device based on the behavior result.
12. The method of claim 11, wherein the target information comprises at least: image information of the first object, image information of the second object, voice information of the first object, voice information of the space where the second object is located, current position information of the first object, and/or operation information of the first object.
13. The method of claim 11, wherein the auxiliary information comprises at least one of:
Prompt information for prompting the first object to provide a first preset behavior for the second object, or for prompting the second object to execute a second preset behavior;
Teaching information for teaching the first object how to look after the second object;
Entertainment information for relieving the fatigue of the first object in caring for the second object.
14. The method of claim 11, wherein the method further comprises:
Determining an occurrence time rule of a third preset behavior according to the behavior analysis result;
Predicting an occurrence time at which the third preset behavior will next occur, according to the occurrence time rule;
And determining prompt information to be output before or at the occurrence time.
15. An information processing apparatus, characterized by being applied to a first device worn by a first object, the apparatus comprising:
A first determining module, used for determining to start information acquisition;
An acquisition module, used for acquiring target information when it is determined to start information acquisition, wherein the target information at least comprises: behavior information of a second object, and the target information is at least used to assist the first object in caring for the second object.
16. An information processing apparatus, characterized by being applied to a second device, the apparatus comprising:
A receiving module, used for receiving target information sent by a first device, wherein the first device is a device worn by a first object;
A storage module, used for storing the target information, wherein the target information at least comprises: behavior information of a second object, and the target information is at least used to assist the first object in caring for the second object.
17. A non-transitory computer-readable storage medium storing instructions which, when executed by a processor of a computer, enable the computer to perform the information processing method of any one of claims 1 to 14.
CN202211458210.6A 2022-11-21 2022-11-21 Information processing method, device and storage medium Pending CN118057480A (en)

Priority Applications (1)

Application Number | Publication | Priority Date | Filing Date | Title
CN202211458210.6A | CN118057480A (en) | 2022-11-21 | 2022-11-21 | Information processing method, device and storage medium

Publications (1)

Publication Number | Publication Date
CN118057480A (en) | 2024-05-21

Family

ID=91069412

Family Applications (1)

Application Number | Status | Publication
CN202211458210.6A | Pending | CN118057480A (en)

Country Status (1)

Country Link
CN (1) CN118057480A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination