WO2023119396A1 - Animal behavior prediction device, animal behavior prediction method, and recording medium - Google Patents


Info

Publication number
WO2023119396A1
Authority
WO
WIPO (PCT)
Prior art keywords
prediction
behavior
prediction result
data
action
Application number
PCT/JP2021/047199
Other languages
French (fr)
Japanese (ja)
Inventor
悠希 有里
拓也 世良
Original Assignee
NEC Corporation
Application filed by NEC Corporation
Priority to PCT/JP2021/047199
Publication of WO2023119396A1

Classifications

    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K: ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K29/00: Other apparatus for animal husbandry
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/70: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in livestock or poultry

Definitions

  • the present invention relates to technology for predicting animal behavior.
  • Patent Literatures 1 and 2 describe techniques for predicting motions and actions of people, animals, etc. based on captured images.
  • However, the methods of Patent Literatures 1 and 2, which detect animal behavior based on captured images, impose a heavy image-processing load.
  • One purpose of the present invention is to predict animal behavior based on data from sensors that detect animal conditions.
  • an animal behavior prediction device comprises: acquisition means for acquiring data related to the condition of a target animal; prediction means for predicting the behavior of the target animal based on the data using a trained prediction model and outputting a prediction result; and output means for outputting, based on the prediction result, an action that satisfies a predetermined condition as a final prediction result.
  • an animal behavior prediction method comprises: obtaining data relating to the condition of a target animal; predicting the behavior of the target animal based on the data using a trained prediction model and outputting a prediction result; and outputting, based on the prediction result, an action that satisfies a predetermined condition as a final prediction result.
  • the recording medium records a program that causes a computer to execute processing comprising: obtaining data relating to the condition of a target animal; predicting the behavior of the target animal based on the data using a trained prediction model and outputting a prediction result; and outputting, based on the prediction result, an action that satisfies a predetermined condition as a final prediction result.
  • FIG. 1 shows the overall configuration of a communication system to which an information processing device is applied.
  • FIG. 2 shows an example of the floor plan of the owner's home.
  • FIG. 3 is a block diagram showing the configuration of the home system.
  • FIG. 4 is a block diagram showing the configuration of the pet terminal.
  • FIG. 5 is a block diagram showing the configurations of the server and the user terminal.
  • FIG. 6 is a block diagram showing the configuration of the server for pet behavior prediction.
  • FIG. 7 is a graph showing an example of frequency and momentum.
  • FIG. 8 shows an example of behavior prediction results and final prediction results based on certain sensor data.
  • FIG. 9 is a flowchart of behavior prediction processing.
  • FIG. 10 shows a display example of message information on the owner's user terminal.
  • FIG. 11 is a block diagram showing the functional configuration of the information processing device of the second embodiment.
  • FIG. 12 is a flowchart of processing by the information processing device of the second embodiment.
  • FIG. 1 shows the overall configuration of a communication system to which an information processing device according to the present disclosure is applied.
  • the communication system 1 includes a home system 100 installed at a pet owner's home 5, a server 200, and a user terminal 300 used by the pet owner.
  • a pet P is staying at the owner's home 5, and a pet terminal 20 is attached to the pet P.
  • A fixed camera 15 is installed at predetermined places in the home 5.
  • The home system 100 and the server 200 can communicate by wire or wirelessly.
  • The server 200 and the owner's user terminal 300 can communicate wirelessly.
  • the server 200 predicts the behavior of the pet P based on the output data of sensors that detect physical quantities indicating the state of the pet P (hereinafter also referred to as "sensor data"). Then, the server 200 generates message information regarding the predicted behavior of the pet P and transmits it to the owner's user terminal 300 via an interactive SNS (Social Network Service).
  • “message information” includes text messages, stamps, images, and the like.
  • the server 200 predicts the behavior of the pet P based on the sensor data detected by the pet terminal 20, and when a predetermined transmission timing comes, transmits message information representing the predicted behavior of the pet to the owner's user terminal 300 via the interactive SNS.
  • the owner can know the behavior of the pet P by viewing the message information transmitted to the user terminal 300 .
  • the predetermined transmission timing may be, for example, when the server 200 detects some behavior of the pet P, or when message information is transmitted from the owner to the pet P via the interactive SNS.
  • Fig. 2 shows an example of the floor plan of the owner's home 5.
  • the home 5 has an entrance, a hall, a bathroom, a toilet, a living room, a kitchen, a balcony, and the like.
  • a door separating each space is basically open, and the pet P can freely move between each space.
  • Each space is provided with a fixed camera 15 for photographing the pet P and the like.
  • a part of the space of the home 5 is designated as a space in which the pet P is not allowed to enter (hereinafter referred to as a "prohibited space").
  • the entry-prohibited spaces include spaces the pet P must not enter because they are dangerous, and spaces the pet P is kept out of to prevent mischief.
  • the bathroom, toilet, kitchen, and balcony shown in gray are determined as entry-prohibited spaces.
  • FIG. 3 is a block diagram showing the configuration of the home system 100 installed in the home 5.
  • home system 100 includes home terminal 10 , fixed camera 15 , microphone 16 , automatic feeder 17 , pet toilet 18 , and speaker 19 .
  • the home system 100 does not have to include all of the above configurations, and may have a partial configuration.
  • Home terminal 10 is, for example, a terminal device such as a PC, tablet, or smartphone, and includes communication unit 11 , processor 12 , memory 13 , and recording medium 14 .
  • the communication unit 11 communicates with external devices. Specifically, the communication unit 11 wirelessly communicates with the pet terminal 20 attached to the pet P using, for example, Bluetooth (registered trademark). Also, the communication unit 11 communicates with the server 200 by wire or wirelessly.
  • the processor 12 is a computer such as a CPU (Central Processing Unit), and controls the entire home terminal 10 by executing a program prepared in advance.
  • the processor 12 may be a GPU (Graphics Processing Unit), FPGA (Field-Programmable Gate Array), DSP (Digital Signal Processor), ASIC (Application Specific Integrated Circuit), or the like.
  • Processor 12 transmits sensor data indicating the state of pet P to server 200 by executing a program prepared in advance.
  • the memory 13 is composed of ROM (Read Only Memory), RAM (Random Access Memory), and the like.
  • the memory 13 stores various programs executed by the processor 12 .
  • the memory 13 is also used as a working memory while the processor 12 is executing various processes.
  • the recording medium 14 is a non-volatile, non-transitory recording medium such as a disk-shaped recording medium or semiconductor memory, and is configured to be detachable from the home terminal 10.
  • the recording medium 14 records various programs executed by the processor 12 .
  • when the home terminal 10 transmits sensor data indicating the state of the pet P to the server 200, a program recorded in the recording medium 14 is loaded into the memory 13 and executed by the processor 12. Images captured by the fixed camera 15, sounds collected by the microphone 16, information received from the pet terminal 20, and the like are temporarily stored in the memory 13.
  • the fixed camera 15 is installed at predetermined positions inside the home 5. Basically, as many fixed cameras 15 as necessary are installed so as to cover the entire space in which the pet P can move.
  • the fixed camera 15 is always in operation, captures a moving image of its shooting range, and transmits the moving image to the home terminal 10.
  • a microphone 16 is installed in each space of the home 5.
  • the microphone 16 may be integrated with the fixed camera 15 .
  • the microphone 16 collects sounds generated in each space and transmits the collected sounds to the home terminal 10 .
  • the home terminal 10 transmits the sound collected by the microphone 16 to the server 200 .
  • the automatic feeder 17 is installed in the dining space of the living room as shown in FIG.
  • the automatic feeder 17 is a device that feeds the pet P when the owner is absent. For example, at a preset time, the automatic feeder 17 automatically feeds food to a pet tableware or the like, and sends a notification to the home terminal 10 that the pet P has been fed.
  • Home terminal 10 transmits a notification from automatic feeder 17 to server 200 .
  • Home terminal 10 also transmits to server 200 an image captured by fixed camera 15 around the time when the notification was received.
  • the pet toilet 18 is installed in the toilet space of the living room as shown in FIG.
  • the pet toilet 18 includes, for example, a water-absorbing sheet and a sensor, detects that the pet P has excreted, and notifies the home terminal 10 of the excretion.
  • Home terminal 10 transmits a notification from pet toilet 18 to server 200 .
  • Home terminal 10 also transmits to server 200 an image captured by fixed camera 15 around the time when the notification was received.
  • the speaker 19 is installed in the living room of the home 5 or in an entry-prohibited space, and outputs warning sounds and messages to the pet P. For example, if the owner records a voice scolding the pet P for entering a prohibited space (such as "Don't go in there"), the same voice can be output even when the owner is not present.
  • FIG. 4 is a block diagram showing the configuration of the pet terminal 20 attached to the pet P.
  • the pet terminal 20 is, for example, attached to the pet instead of the collar of the pet P, or attached to the collar worn by the pet.
  • Pet terminal 20 includes communication unit 21 , processor 22 , memory 23 , pet camera 24 , acceleration sensor 25 , air pressure sensor 26 , biosensor 27 , and microphone 28 .
  • the communication unit 21 communicates with external devices. Specifically, the communication unit 21 wirelessly communicates with the home terminal 10 by, for example, Bluetooth.
  • the processor 22 is a computer such as a CPU, and controls the entire pet terminal 20 by executing a program prepared in advance.
  • the processor 22 periodically transmits information acquired by the pet camera 24, the sensors 25 to 27, and the microphone 28 to the home terminal 10 by executing a program prepared in advance.
  • the memory 23 is composed of ROM, RAM, and the like.
  • the memory 23 stores various programs executed by the processor 22 .
  • the memory 23 is also used as a working memory while the processor 22 is executing various processes.
  • the memory 23 temporarily stores the information acquired by the pet camera 24, the sensors 25 to 27 and the microphone 28.
  • the pet camera 24 is a camera for capturing images from the pet's line of sight.
  • the pet camera 24 may be configured to detect the orientation of the neck of the pet P to determine the photographing direction, or it may be mounted near the head of the pet P and used as a wide-angle camera that photographs the area in front of the pet P.
  • the pet camera 24 captures an area including the line-of-sight direction of the pet P and transmits the captured image to the home terminal 10 . Thereby, the home terminal 10 can acquire the image of the pet's line of sight.
  • the acceleration sensor 25 is a 3-axis acceleration sensor, measures the movement of the pet P in 3-axis directions, and transmits the results to the home terminal 10 . Specifically, the acceleration sensor 25 can output the pet P's exercise amount, vibration frequency, and the like.
  • the “momentum” is a value that indicates how much the pet has moved in a predetermined period of time.
  • the "frequency” indicates the period of movement of the pet in a predetermined period of time.
  • the atmospheric pressure sensor 26 measures the atmospheric pressure at the location of the pet P. Based on the output of the air pressure sensor 26, the pet terminal 20 detects the vertical movement of the pet P, for example jumping (the number of ups and downs, the distance, the accumulated distance, etc.), and transmits the results to the home terminal 10.
  • instead of the acceleration sensor 25, a gyro sensor may be used.
  • a 6-axis sensor in which a 3-axis acceleration sensor and a 3-axis gyro sensor (a 3-axis angular velocity sensor) are integrated may be used. Note that the sensor is not limited to the above as long as it is a sensor capable of measuring the amount of activity of an animal.
  • the biological sensor 27 is a sensor that measures the biological information of the pet P. For example, the body temperature, heart rate, and respiratory rate of the pet P are measured and transmitted to the home terminal 10 .
  • the home terminal 10 transmits the acquired biometric information to the server 200 .
  • the microphone 28 collects sounds around the pet P and transmits them to the home terminal 10 .
  • the home terminal 10 transmits the collected sound to the server 200 .
  • the server 200 can estimate the state of the pet P, such as whining or barking, based on the received sound.
  • the server 200 can estimate the motion state, mental state, and the like of the pet P based on, for example, sounds of the pet P running around and breathing sounds.
  • FIG. 5A is a block diagram showing the configuration of the server 200.
  • the server 200 predicts the behavior of the pet P based on the sensor data of the pet P received from the home terminal 10 .
  • the server 200 transmits and receives messages to and from the user terminal 300 using an interactive SNS.
  • the server 200 includes a communication unit 211 , a processor 212 , a memory 213 , a recording medium 214 and a database (DB) 215 .
  • DB database
  • the communication unit 211 transmits and receives data to and from an external device. Specifically, the communication unit 211 transmits and receives information between the home terminal 10 and the owner's user terminal 300 .
  • the processor 212 is a computer such as a CPU, and controls the entire server 200 by executing a program prepared in advance.
  • processor 212 may be a GPU, FPGA, DSP, ASIC, or the like.
  • the processor 212 predicts pet behavior based on sensor data received from the home terminal 10 .
  • the processor 212 transmits message information regarding the behavior of the pet obtained by prediction to the owner's user terminal 300 via the interactive SNS.
  • the processor 212 is an example of acquisition means, prediction means, determination means, and pre-processing means.
  • the memory 213 is composed of ROM, RAM, and the like. The memory 213 is also used as working memory during execution of various processes by the processor 212 .
  • the recording medium 214 is a non-volatile, non-temporary recording medium such as a disk-shaped recording medium or a semiconductor memory, and is configured to be removable from the server 200 .
  • a recording medium 214 records various programs executed by the processor 212 .
  • the database 215 stores information and images received from the home terminal 10 through the communication unit 211. That is, message information and images sent and received by many users of the user terminals 300 are stored in the database 215 .
  • the database 215 also stores, for each user, the transmission timing of message information and prepared message information (for example, predetermined messages, stamps, etc.). These pieces of message information are stored in association with the actions of the pet P. That is, one or more pieces of message information are stored for each behavior of the pet P.
  • the server 200 may include an input unit such as a keyboard and a mouse, and a display unit such as a liquid crystal display for the administrator to give instructions and input.
  • FIG. 5B is a block diagram showing the internal configuration of the user terminal 300 used by the owner.
  • the user terminal 300 is, for example, a smartphone, tablet, PC, or the like.
  • User terminal 300 includes communication unit 311 , processor 312 , memory 313 , and touch panel 314 .
  • the communication unit 311 transmits and receives data to and from an external device. Specifically, the communication unit 311 transmits and receives information to and from the server 200 .
  • the processor 312 is a computer such as a CPU, and controls the entire user terminal 300 by executing a program prepared in advance.
  • processor 312 may be a GPU, FPGA, DSP, ASIC, or the like.
  • a messaging application for an interactive SNS executed by the server 200 is installed in the user terminal 300 .
  • a "messaging application” is an application that enables the exchange of message information such as text messages, stamps, still images, and videos.
  • Processor 312 receives the transmitted message information via server 200 by the messaging application and displays it on touch panel 314 .
  • Processor 312 also transmits the message information input by the owner to server 200 using the messaging application.
  • the memory 313 is composed of ROM, RAM, and the like.
  • the memory 313 is also used as working memory during execution of various processes by the processor 312 .
  • the touch panel 314 displays message information received by the user terminal 300 .
  • the touch panel 314 also functions as a user input device.
  • FIG. 6 shows the configuration of the server 200 for pet P behavior prediction.
  • the server 200 includes a preprocessing unit 221, an action prediction unit 222, and a determination unit 223 as components for predicting the behavior of the pet P.
  • the sensor data is data detected by the pet terminal 20 that indicates physical quantities related to the state of the pet P. In this embodiment, it is assumed that the vibration frequency, the amount of exercise, and the number of ascents/descents detected by the pet terminal 20 are used as the sensor data.
  • the preprocessing unit 221 preprocesses the sensor data, generates a feature amount indicating the movement of the pet P, and outputs the feature amount to the behavior prediction unit 222 .
  • the preprocessing unit 221 generates feature amounts from sensor data detected by the pet terminal 20 .
  • the vibration frequency, the momentum, and the number of times of ascending/descending are used as sensor data.
  • the frequency of vibration, the amount of exercise, and the number of times of ascending/descending are all time-series data.
  • FIG. 7 is a graph showing an example of frequency and momentum. For convenience, illustration of the number of ascents/descents is omitted.
  • the preprocessing unit 221 divides each data of the vibration frequency, the amount of exercise, and the number of ascending/descending times into specific periods (hereinafter also referred to as “prediction periods”).
  • the prediction period is a unit time for predicting the pet P's behavior. Assuming that the prediction period is 30 seconds, the preprocessing unit 221 predicts the behavior of the pet every 30 seconds.
  • the preprocessing unit 221 generates divided data by dividing each piece of data for each prediction period while shifting it by a predetermined time. Assuming the predetermined time is 3 seconds, the preprocessing unit 221 shifts through the frequency data in one prediction period (30 seconds) in 3-second steps to generate 10 pieces of divided data. Similarly, the preprocessing unit 221 generates 10 pieces of divided data for each of the amount of exercise and the number of ascents/descents, yielding a total of 30 pieces of divided data. Note that these pieces of divided data are the sensor output data themselves, that is, raw data.
  • the preprocessing unit 221 also calculates five statistical values for each of the vibration frequency, momentum, and number of ascents/descents: the average value, the standard deviation, the maximum value, the minimum value, and the number of times a value greater than a threshold was recorded. That is, the preprocessing unit 221 computes these five statistics from the ten pieces of divided frequency data included in one prediction period, and likewise calculates the same five statistical values for the momentum and the number of ascents/descents. Thus, the preprocessing unit 221 calculates a total of 15 statistical values.
  • the preprocessing unit 221 outputs a total of 30 pieces of divided data (raw data) and a total of 15 statistical values to the action prediction unit 222 as feature amounts in one prediction period.
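  • The feature construction described above can be sketched as follows. This is a sketch under the assumption that each 30-second prediction period supplies 10 raw readings per sensor (one every 3 seconds); the function name and the count threshold of 1.0 are illustrative, not from the patent.

```python
import statistics

def extract_features(frequency, momentum, ups_downs, threshold=1.0):
    """Builds the feature vector for one prediction period from the three
    sensor series (10 divided values each): the 30 raw values plus
    5 statistics per sensor, i.e. 45 features in total."""
    features = []
    for series in (frequency, momentum, ups_downs):
        features.extend(series)  # divided (raw) data: 3 x 10 = 30 values
    for series in (frequency, momentum, ups_downs):
        features.extend([
            statistics.mean(series),                  # average value
            statistics.pstdev(series),                # standard deviation
            max(series),
            min(series),
            sum(1 for v in series if v > threshold),  # count above threshold
        ])
    return features
```

Calling `extract_features` with three 10-element lists returns a 45-element feature vector, matching the 30 pieces of divided data plus 15 statistical values passed to the behavior prediction unit 222.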
  • the output feature amount corresponds to an explanatory variable used for prediction by the behavior prediction model in the behavior prediction unit 222 .
  • the behavior prediction unit 222 predicts the behavior of the pet based on the feature amount input from the preprocessing unit 221.
  • the behavior prediction unit 222 uses a pre-trained behavior prediction model to predict the behavior of the pet from the feature amount, and outputs a score for each predetermined behavior as a prediction result.
  • the behavior prediction model is a model that has learned the relationship between animal state data and animal behavior.
  • the predetermined action for example, various pet actions such as eating, toileting, drinking water, spinning, standing on two legs, stopping, walking, running, and playing can be set.
  • the action prediction unit 222 calculates a score for each action based on the input feature amount.
  • the score for each action is a value indicating the likelihood (probability) that the state of the pet indicated by the input feature amount corresponds to each action, and the higher the value, the higher the probability.
  • the behavior prediction unit 222 outputs a prediction result, which is a set of scores for each behavior, to the determination unit 223 .
  • the behavior prediction model used by the behavior prediction unit 222 for example, a model using a decision tree can be used.
  • the behavior prediction model is not limited to a specific model, and may be a model using other machine learning models, neural networks, or the like.
  • the behavior prediction model is a pre-trained model. That is, it is a model trained with training data in which the feature amount generated by the preprocessing unit 221 is the input data and the actual behavior of the pet at that time is the correct label.
  • the input data included in the training data is generated from sensor data detected by the pet terminal 20 by preprocessing similar to that of the preprocessing unit 221, for example.
  • the correct label included in the training data is based on the photographed image of the pet when the sensor data used to generate the input data is obtained, and the actual action of the pet at that time is assigned as the correct label.
  • the task of assigning the behavior of the pet as the correct label may be performed by a human, or may be performed using a prediction model for predicting the behavior of the pet from the captured image, or a combination thereof.
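  • Such training can be sketched as follows, assuming scikit-learn is available and using a decision tree as suggested above; the hyperparameters and the way labels are passed are illustrative assumptions, not the patent's actual implementation.

```python
# A minimal training sketch, assuming scikit-learn; hyperparameters are
# illustrative, not taken from the patent.
from sklearn.tree import DecisionTreeClassifier

def train_behavior_model(feature_vectors, correct_labels):
    """feature_vectors: one feature list per prediction period (the
    explanatory variables); correct_labels: the action actually observed
    in each period, assigned from the captured images as described above."""
    model = DecisionTreeClassifier(max_depth=8, random_state=0)
    model.fit(feature_vectors, correct_labels)
    return model
```

At prediction time, `model.predict_proba(...)` yields a likelihood per action (in `model.classes_` order), corresponding to the per-action scores output by the behavior prediction unit 222.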
  • the determination unit 223 determines the behavior of the pet based on the prediction result input from the behavior prediction unit 222 and outputs the final prediction result. Specifically, among the actions included in the prediction result, the determination unit 223 outputs, as the final prediction result, the actions whose scores satisfy a predetermined condition, for example, actions whose scores are equal to or greater than a predetermined threshold. In the example of FIG. 6, the determination unit 223 sets the threshold to 0.5, and outputs "rice" and "drinking water", whose scores are 0.5 or more, as the final prediction result. Note that the final prediction result may include the score of each action in addition to the actions whose scores satisfy the predetermined condition.
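  • The thresholding performed by the determination unit 223 can be sketched as follows; the scores in the usage note mirror the FIG. 6 example.

```python
def final_prediction(scores, threshold=0.5):
    """scores: mapping from action name to the score output by the
    behavior prediction model. Returns every action whose score is
    equal to or greater than the threshold."""
    return [action for action, s in scores.items() if s >= threshold]
```

For example, `final_prediction({"rice": 0.7, "drinking water": 0.6, "walking": 0.1})` returns `["rice", "drinking water"]`.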
  • the server 200 may predict behavior in real time (hereinafter referred to as "real-time prediction"), or may use sensor data acquired over a predetermined period to collectively predict the behavior within that period (hereinafter referred to as "non-real-time prediction"). In non-real-time prediction, for example, one day's worth of pet sensor data is accumulated, and the pet's behavior for that day is predicted collectively.
  • the preprocessing unit 221 is an example of preprocessing means
  • the behavior prediction unit 222 is an example of prediction means
  • the determination unit 223 is an example of output means.
  • the function of the determination unit 223 may be included in the behavior prediction unit 222 and executed by the prediction means.
  • the server 200 can also predict behavior by considering the number of times each action normally occurs and its frequency. For example, "rice" usually occurs 2 to 4 times a day. Therefore, when "rice" appears four or more times in the prediction results for one day, the determination unit 223 determines that the two to four occurrences with the highest scores are "rice" and excludes the rest from the final prediction result. As a result, the prediction accuracy can be improved for actions whose daily number of occurrences is roughly fixed.
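  • This count-based filtering can be sketched as follows, assuming the day's predictions are held as (time, action, score) tuples; the data structure and function name are assumptions for illustration.

```python
def limit_occurrences(predictions, action, max_count):
    """predictions: list of (time, action, score) tuples for one day.
    Keeps at most max_count occurrences of `action`, preferring higher
    scores, and drops the rest from the final prediction result."""
    matches = [p for p in predictions if p[1] == action]
    if len(matches) <= max_count:
        return predictions
    keep = set(sorted(matches, key=lambda p: p[2], reverse=True)[:max_count])
    return [p for p in predictions if p[1] != action or p in keep]
```

For "rice", `max_count` would be set to the usual upper bound of 4 daily occurrences mentioned above.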
  • the server 200 can also predict actions in consideration of time, for actions that occur in roughly fixed time zones. For example, "rice" tends to occur in fixed approximate time zones, such as morning, noon, and evening. Therefore, the prediction accuracy of "rice" can be improved by considering the time periods in which the pet eats.
  • time is added to the feature amount (explanatory variable) input to the behavior prediction unit 222 .
  • This time may be the time when the sensor that detects the physical quantity indicating the state of the pet P generated the sensor data, or the time when the sensor data was acquired by the server 200 .
  • the behavior prediction model is pre-trained to use time of day as an explanatory variable. Thereby, the behavior prediction unit 222 can predict the behavior of the pet in consideration of the time.
  • the determination unit 223 weights the score of each action included in the prediction result based on the time period to determine the action.
  • the determination unit 223 assigns a large weight to the score of each action included in the prediction result during the time period when the pet eats, and assigns a small weight to the score of each action during other time periods. Then, the determination unit 223 compares the weighted score of each action with a predetermined threshold, and outputs a final prediction result.
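  • A sketch of this time-based weighting; the meal hours and the weight values are invented for illustration, not taken from the patent.

```python
def weight_by_time(scores, hour, meal_hours=(7, 12, 18), boost=1.2, damp=0.8):
    """Reweights the "rice" score before thresholding: boosted during the
    hours the pet is assumed to eat, damped otherwise. Other actions'
    scores are left unchanged in this simplified sketch."""
    w = boost if hour in meal_hours else damp
    return {a: s * w if a == "rice" else s for a, s in scores.items()}
```

The determination unit 223 would then compare the weighted scores against the predetermined threshold as before.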
  • the action prediction model may also be pre-trained to use preceding or following actions as explanatory variables. For example, when "drinking water" occurs, there is a high possibility that "rice" or "running" occurred as the preceding action, and this relationship can be exploited in prediction.
  • Fig. 8 shows an example of behavior prediction results and final prediction results based on certain sensor data.
  • the score of "rice” is high in the prediction result at the time corresponding to waveform X1, but the score of "rice” is not high before and after that.
  • in this case, the determination unit 223 regards the waveform X1 as noise, and the prediction result of "rice" based on the waveform X1 is not included in the final prediction result. Thereby, prediction accuracy can be improved.
  • conversely, when the score of "rice" is high at times t1 and t3 but momentarily drops in between, the determination unit 223 determines that the action during the period including them is "rice". That is, the final prediction result indicates that "rice" continued from time t1 to t3. Thereby, the prediction accuracy of continuous behavior can be improved.
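  • Both corrections, discarding an isolated spike such as waveform X1 and bridging a momentary gap in a continuous action, can be sketched over the per-period threshold results. The one-period window used here is an illustrative simplification.

```python
def smooth(flags):
    """flags: per-period booleans saying whether an action (e.g. "rice")
    crossed the score threshold. Drops isolated positives as noise and
    fills single-period gaps inside a continuous run."""
    out = list(flags)
    for i in range(1, len(flags) - 1):
        prev_, next_ = flags[i - 1], flags[i + 1]
        if flags[i] and not prev_ and not next_:
            out[i] = False  # isolated spike: treated as noise
        elif not flags[i] and prev_ and next_:
            out[i] = True   # momentary gap inside a continuous action
    return out
```

A real implementation could widen the window to match how long each action typically lasts.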
  • the prediction period is set to 30 seconds for all actions. That is, the preprocessing unit 221 generates feature amounts based on sensor data every 30 seconds, and the behavior prediction unit 222 predicts the behavior of the pet every 30 seconds.
  • instead, a prediction period appropriate to each action may be set. For example, the prediction period for "rice" may be set to 1 minute, that for "toilet" to 30 seconds, and that for "walking" to 10 seconds.
  • the preprocessing unit 221 may preprocess the sensor data over the prediction period set for each action as described above to generate a feature amount, and output the feature amount to the action prediction unit 222 . As a result, prediction is performed with a time width suitable for each action, so prediction accuracy can be improved.
  • Exclusion of conflicting behavior: pet behavior includes combinations of behaviors that do not occur at the same time. For example, actions such as "walking" and "running" generally do not occur at the same time as "meal". Therefore, when a plurality of conflicting actions is predicted in the same prediction period, the determination unit 223 excludes the one with the lower score and adopts the one with the higher score as the final prediction result. That is, the final prediction result includes only the conflicting behavior with the highest score. As a result, it is possible to prevent multiple unrealistic behaviors from being predicted at the same time.
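A minimal sketch of this exclusion step, assuming one illustrative conflict group (the actual groups are not enumerated in the specification):

```python
# Hypothetical sketch of conflicting-behavior exclusion.
# The conflict group below is an illustrative assumption.

CONFLICT_GROUPS = [{"meal", "walking", "running"}]

def exclude_conflicts(scores: dict[str, float]) -> dict[str, float]:
    """Within each conflict group, keep only the highest-scoring action."""
    result = dict(scores)
    for group in CONFLICT_GROUPS:
        present = [a for a in group if a in result]
        if len(present) > 1:
            best = max(present, key=lambda a: result[a])
            for a in present:
                if a != best:
                    del result[a]
    return result
```

Actions outside any conflict group (here "sleep") pass through untouched, so only mutually exclusive candidates compete.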
  • FIG. 9 is a flowchart of action prediction processing. This processing is realized by the processor 212 of the server 200 shown in FIG. 5 executing a program prepared in advance and operating as the components shown in FIG. 6.
  • the preprocessing unit 221 acquires sensor data indicating the state of the pet from the home terminal 10 (step S11). Next, the preprocessing unit 221 preprocesses the sensor data for each predetermined prediction period, generates the above-described feature amount, and outputs it to the behavior prediction unit 222 (step S12). Next, the behavior prediction unit 222 predicts the behavior of the pet from the input feature amount using a pre-trained behavior prediction model, and outputs the prediction result to the determination unit 223 (step S13). Then, the determination unit 223 outputs an action whose score is equal to or higher than a predetermined threshold among the input prediction results as a final prediction result (step S14). Then the process ends.
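Steps S11 to S14 can be sketched end-to-end as follows. The feature set, the stand-in scoring function, and the threshold are illustrative placeholders; the actual trained behavior prediction model is not specified here:

```python
# Hypothetical end-to-end sketch of the behavior prediction flow (S11-S14).
# `predict_scores` is a stand-in for the trained behavior prediction model.

THRESHOLD = 0.5

def extract_features(window: list[float]) -> list[float]:
    # S12: preprocess one prediction period of sensor data (illustrative features).
    return [min(window), max(window), sum(window) / len(window)]

def predict_scores(features: list[float]) -> dict[str, float]:
    # S13: placeholder model returning a score per candidate action.
    mean = features[2]
    return {"meal": min(mean / 2.0, 1.0), "running": min(mean / 4.0, 1.0)}

def behavior_prediction(window: list[float]) -> list[str]:
    # S11 supplies `window`; S14 keeps actions at or above the threshold.
    scores = predict_scores(extract_features(window))
    return sorted(a for a, s in scores.items() if s >= THRESHOLD)
```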
  • the server 200 transmits message information related to the behavior to the user terminal 300 of the owner.
  • database 215 of server 200 stores a plurality of pieces of message information prepared in advance for each action of pet P.
  • the server 200 acquires a message corresponding to the predicted behavior from the database 215 and transmits it to the user terminal 300 of the owner.
  • FIG. 10 is a display example of message information on the user terminal 300 of the owner. Assume that the owner's name is "Ichiro" and the pet P's name is "John". Also, in this example, the server 200 is set to return message information related to the behavior of the pet P in response to receiving message information from the owner.
  • the owner sends message information asking "What are you doing?"
  • In response, the server 200 predicts that the behavior of the pet P at that time is "meal" by the behavior prediction processing described above, acquires the message information "I'm eating rice" corresponding to "meal", and sends it to the owner.
  • In this way, the server 200 predicts the behavior of the pet P based on sensor data obtained from the pet terminal 20, and transmits message information corresponding to the behavior to the owner's user terminal 300. Therefore, the owner can see the message information according to the behavior of the pet P even when he/she is away from home.
  • FIG. 11 is a block diagram showing the functional configuration of the information processing apparatus according to the second embodiment.
  • the information processing apparatus 50 of the second embodiment includes acquisition means 51, prediction means 52, and output means 53.
  • FIG. 12 is a flowchart of processing by the information processing device 50.
  • Acquisition means 51 acquires data from sensors that detect physical quantities related to the state of the target animal (step S51).
  • the prediction means 52 uses a trained prediction model to predict the behavior of the target animal based on the acquired data, and outputs the prediction result (step S52).
  • the output means 53 determines the behavior of the target animal based on the prediction result, and outputs the final prediction result (step S53).
  • Alternatively, the information processing device 50 may operate as follows. Acquisition means 51 acquires data relating to the condition of the target animal.
  • the prediction means 52 uses a trained prediction model to predict the behavior of the target animal based on the acquired data.
  • the output means 53 outputs, as a final prediction result, a behavior that satisfies a predetermined condition among the predicted behaviors of the target animal based on the prediction results.
  • According to the information processing device 50 of the second embodiment, it is possible to predict the behavior of the target animal based on sensor data indicating physical quantities related to the state of the target animal.
  • An animal behavior prediction device comprising:
  • the prediction means uses a trained prediction model to output a score indicating the likelihood of each candidate behavior of the target animal based on the data,
  • the animal behavior prediction device according to Supplementary Note 1, wherein the output means outputs the behavior for which the score is equal to or greater than a predetermined threshold value as the final prediction result.
  • the prediction means predicts a candidate behavior of the target animal for each predetermined period,
  • the information processing apparatus according to Supplementary Note 1 or 2, wherein the output means outputs, as a final prediction result, actions within a predetermined number of times determined for each action within the predetermined period.
  • Appendix 4: The information processing apparatus according to Appendix 1 or 2, wherein the prediction means predicts the behavior of the target animal based on the data and the time when the data was generated.
  • the prediction means weights the score of each action included in the prediction result based on a time period predetermined for each action,
  • the information processing apparatus according to appendix 2, wherein the output means outputs the final prediction result based on the weighted score.
  • preprocessing means for preprocessing the acquired data, wherein the prediction means predicts the behavior of the target animal based on the preprocessed data.
  • Appendix 12 obtaining data relating to the condition of the subject animal; predicting the behavior of the target animal based on the data using a trained prediction model, and outputting the prediction result;
  • a recording medium recording a program for causing a computer to execute a process of outputting a behavior that satisfies a predetermined condition as a final prediction result based on the prediction result.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Animal Husbandry (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Alarm Systems (AREA)

Abstract

In this information-processing device, an acquisition means acquires data relating to the state of a target animal. A prediction means uses a trained prediction model to predict the behavior of the target animal on the basis of the acquired data, and outputs a prediction result. On the basis of the prediction result, an output means outputs a behavior satisfying a predetermined condition as a final prediction result.

Description

Animal behavior prediction device, animal behavior prediction method, and recording medium
The present invention relates to technology for predicting animal behavior.
Pet owners want to know the status and behavior of their pets at home, such as when they are at work or out of the house. Patent Literatures 1 and 2 describe techniques for predicting the motions and actions of people, animals, and the like based on captured images.
International Publication WO2018/225187; JP 2018-092494 A
However, as in Patent Literatures 1 and 2, methods for detecting animal behavior based on captured images impose a heavy image processing load.
One purpose of the present invention is to predict animal behavior based on data from sensors that detect the animal's condition.
In order to solve the above problems, in one aspect of the present invention, an animal behavior prediction device comprises:
acquisition means for acquiring data related to the condition of the target animal;
prediction means for predicting the behavior of the target animal based on the data using a trained prediction model and outputting a prediction result; and
output means for outputting an action that satisfies a predetermined condition as a final prediction result based on the prediction result.
In another aspect of the present invention, an animal behavior prediction method comprises:
obtaining data relating to the condition of the target animal;
predicting the behavior of the target animal based on the data using a trained prediction model, and outputting a prediction result; and
outputting, based on the prediction result, an action that satisfies a predetermined condition as a final prediction result.
In still another aspect of the present invention, a recording medium records a program that causes a computer to execute processing of:
obtaining data relating to the condition of the target animal;
predicting the behavior of the target animal based on the data using a trained prediction model, and outputting a prediction result; and
outputting, based on the prediction result, an action that satisfies a predetermined condition as a final prediction result.
According to the present disclosure, it is possible to predict animal behavior based on data from sensors that detect the state of the animal.
Fig. 1 shows the overall configuration of a communication system to which the information processing device is applied.
Fig. 2 shows an example of the floor plan of the owner's home.
Fig. 3 is a block diagram showing the configuration of the home system.
Fig. 4 is a block diagram showing the configuration of the pet terminal.
Fig. 5 is a block diagram showing the configurations of the server and the user terminal.
Fig. 6 is a block diagram showing the configuration of the server for pet behavior prediction.
Fig. 7 is a graph showing an example of vibration frequency and momentum.
Fig. 8 shows an example of behavior prediction results and final prediction results based on certain sensor data.
Fig. 9 is a flowchart of behavior prediction processing.
Fig. 10 is a display example of message information on the owner's user terminal.
Fig. 11 is a block diagram showing the functional configuration of the information processing device of the second embodiment.
Fig. 12 is a flowchart of processing by the information processing device of the second embodiment.
<First Embodiment>
[Overall Configuration]
Fig. 1 shows the overall configuration of a communication system to which an information processing device according to the present disclosure is applied. The communication system 1 includes a home system 100 installed at a pet owner's home 5, a server 200, and a user terminal 300 used by the owner. A pet P stays at the owner's home 5, and a pet terminal 20 is attached to the pet P. A fixed camera 15 is installed at a predetermined place in the home 5. The home system 100 and the server 200 can communicate by wire or wirelessly. The server 200 and the owner's user terminal 300 can also communicate wirelessly.
As a basic operation, the server 200 predicts the behavior of the pet P based on the output data of sensors that detect physical quantities indicating the state of the pet P (hereinafter also referred to as "sensor data"). The server 200 then generates message information regarding the predicted behavior of the pet P and transmits it to the owner's user terminal 300 via an interactive SNS (Social Network Service). Here, "message information" includes text messages, stamps, images, and the like.
Specifically, the server 200 predicts the behavior of the pet P based on the sensor data detected by the pet terminal 20, and when a predetermined transmission timing comes, transmits message information representing the predicted behavior of the pet to the owner's user terminal 300 via the interactive SNS. The owner can know the behavior of the pet P by viewing the message information transmitted to the user terminal 300. The predetermined transmission timing may be, for example, when the server 200 detects some behavior of the pet P, or when message information is transmitted from the owner to the pet P via the interactive SNS.
Fig. 2 shows an example of the floor plan of the owner's home 5. The home 5 has an entrance, a hall, a bathroom, a toilet, a living room, a kitchen, a balcony, and the like. The doors separating the spaces are basically left open, and the pet P can move freely between them. Each space is provided with a fixed camera 15 for photographing the pet P. Part of the space of the home 5 is designated as space the pet P must not enter (hereinafter referred to as "entry-prohibited space"). The entry-prohibited spaces include spaces the pet P must not enter because they are dangerous to it, and spaces the pet P must not enter because it misbehaves there. In the example of Fig. 2, the bathroom, toilet, kitchen, and balcony shown in gray are designated as entry-prohibited spaces.
[Home System]
Fig. 3 is a block diagram showing the configuration of the home system 100 installed in the home 5. In the example of Fig. 3, the home system 100 includes a home terminal 10, a fixed camera 15, a microphone 16, an automatic feeder 17, a pet toilet 18, and a speaker 19. However, the home system 100 need not include all of the above components and may include only some of them. The home terminal 10 is, for example, a terminal device such as a PC, tablet, or smartphone, and includes a communication unit 11, a processor 12, a memory 13, and a recording medium 14.
The communication unit 11 communicates with external devices. Specifically, the communication unit 11 wirelessly communicates with the pet terminal 20 attached to the pet P using, for example, Bluetooth (registered trademark). The communication unit 11 also communicates with the server 200 by wire or wirelessly.
The processor 12 is a computer such as a CPU (Central Processing Unit), and controls the entire home terminal 10 by executing a program prepared in advance. The processor 12 may be a GPU (Graphics Processing Unit), an FPGA (Field-Programmable Gate Array), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), or the like. By executing a program prepared in advance, the processor 12 transmits sensor data indicating the state of the pet P to the server 200.
The memory 13 is composed of a ROM (Read Only Memory), a RAM (Random Access Memory), and the like. The memory 13 stores various programs executed by the processor 12 and is also used as a working memory while the processor 12 is executing various processes.
The recording medium 14 is a non-volatile, non-transitory recording medium such as a disk-shaped recording medium or a semiconductor memory, and is configured to be detachable from the home terminal 10. The recording medium 14 records various programs executed by the processor 12. When the home terminal 10 transmits sensor data indicating the state of the pet P to the server 200, a program recorded in the recording medium 14 is loaded into the memory 13 and executed by the processor 12. Images captured by the fixed camera 15, sounds collected by the microphone 16, information received from the pet terminal 20, and the like are temporarily stored in the memory 13.
The fixed cameras 15 are installed at predetermined positions inside the home 5. Basically, as many fixed cameras 15 as necessary are installed so as to cover the entire space in which the pet P can move; in particular, the fixed cameras 15 are installed at positions from which the vicinity of the entry-prohibited spaces can be photographed. Each fixed camera 15 operates at all times, captures moving images of its shooting range, and transmits them to the home terminal 10.
The microphones 16 are installed in the respective spaces of the home 5. The microphone 16 may be integrated with the fixed camera 15. The microphone 16 collects sounds generated in each space and transmits them to the home terminal 10. The home terminal 10 transmits the sounds collected by the microphone 16 to the server 200.
The automatic feeder 17 is installed in the dining space of the living room as shown in Fig. 2. The automatic feeder 17 is a device that feeds the pet P when the owner is absent. For example, at a preset time, the automatic feeder 17 automatically supplies food to the pet's dish or the like, and sends a notification to the home terminal 10 indicating that the pet P has been fed. The home terminal 10 transmits the notification from the automatic feeder 17 to the server 200. The home terminal 10 also transmits to the server 200 images captured by the fixed camera 15 around the time the notification was received.
The pet toilet 18 is installed in the toilet space of the living room as shown in Fig. 2. The pet toilet 18 includes, for example, an absorbent sheet and a sensor, detects that the pet P has excreted, and notifies the home terminal 10. The home terminal 10 transmits the notification from the pet toilet 18 to the server 200. The home terminal 10 also transmits to the server 200 images captured by the fixed camera 15 around the time the notification was received.
The speaker 19 is installed in the living room of the home 5, in the entry-prohibited spaces, and the like, and outputs warning sounds, messages, and the like to the pet P. For example, if the owner records a voice scolding the pet P for entering an entry-prohibited space (such as "You must not go in there"), the same voice can be output to the pet even when the owner is not present.
[Pet Terminal]
Fig. 4 is a block diagram showing the configuration of the pet terminal 20 attached to the pet P. The pet terminal 20 is, for example, attached to the pet P in place of a collar, or attached to the collar the pet is wearing. The pet terminal 20 includes a communication unit 21, a processor 22, a memory 23, a pet camera 24, an acceleration sensor 25, an air pressure sensor 26, a biosensor 27, and a microphone 28.
The communication unit 21 communicates with external devices. Specifically, the communication unit 21 wirelessly communicates with the home terminal 10 by, for example, Bluetooth.
The processor 22 is a computer such as a CPU, and controls the entire pet terminal 20 by executing a program prepared in advance. By executing a program prepared in advance, the processor 22 periodically transmits information acquired by the pet camera 24, the sensors 25 to 27, and the microphone 28 to the home terminal 10.
The memory 23 is composed of a ROM, a RAM, and the like. The memory 23 stores various programs executed by the processor 22 and is also used as a working memory while the processor 22 is executing various processes. The memory 23 also temporarily stores information acquired by the pet camera 24, the sensors 25 to 27, and the microphone 28.
The pet camera 24 is a camera for capturing images from the pet's line of sight. The pet camera 24 may, for example, be configured to detect the orientation of the pet P's neck to determine the shooting direction, may be mounted near the pet P's head, or may be a wide-angle camera that photographs the area in front of the pet P. The pet camera 24 captures an area including the pet P's line-of-sight direction and transmits the captured images to the home terminal 10. This allows the home terminal 10 to acquire images from the pet's point of view.
The acceleration sensor 25 is a 3-axis acceleration sensor, measures the movement of the pet P in the three axial directions, and transmits the results to the home terminal 10. Specifically, the acceleration sensor 25 can output the pet P's momentum (amount of exercise), vibration frequency, and the like. Here, the "momentum" is a value indicating how much the pet has moved in a predetermined period of time; for example, the integral of the combined value of the accelerations in the three axial directions measured by the acceleration sensor can be used. The "vibration frequency" indicates the period of the pet's movement in a predetermined period of time; for example, the period of fluctuation of the combined value of the accelerations in the three axial directions measured by the acceleration sensor can be used.
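The "momentum" described here, the integral of the combined 3-axis acceleration, can be sketched as follows. Treating the integral as a sampled sum and the sampling interval are illustrative assumptions:

```python
import math

# Hypothetical sketch of the "momentum" value: the integral (here,
# sample sum times dt) of the combined 3-axis acceleration magnitude.
# The sampling interval and use of raw magnitudes are assumptions.

def momentum(samples: list[tuple[float, float, float]], dt: float) -> float:
    """Integrate the combined acceleration magnitude over the window."""
    return sum(math.sqrt(x * x + y * y + z * z) for x, y, z in samples) * dt
```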
The air pressure sensor 26 measures the air pressure at the location of the pet P. Based on the output of the air pressure sensor 26, the pet terminal 20 detects the vertical movement of the pet P, for example, the number of movements such as jumps (number of ascents and descents), the distance, the cumulative distance, and the like, and transmits them to the home terminal 10.
Although not shown in Fig. 4, a gyro sensor may also be used. A 6-axis sensor in which a 3-axis acceleration sensor and a 3-axis gyro sensor (3-axis angular velocity sensor) are integrated may also be used. The sensor is not limited to the above as long as it can measure the amount of activity of an animal.
The biosensor 27 is a sensor that measures biological information of the pet P; for example, it measures the pet P's body temperature, heart rate, respiratory rate, and the like and transmits them to the home terminal 10. The home terminal 10 transmits the acquired biological information to the server 200.
The microphone 28 collects sounds around the pet P and transmits them to the home terminal 10. The home terminal 10 transmits the collected sounds to the server 200. Based on the received sounds, the server 200 can estimate states of the pet P such as whining or barking. The server 200 can also estimate the pet's exercise state, mental state, and the like based on, for example, the sound of the pet P running around or its breathing sounds.
[Server]
Fig. 5(A) is a block diagram showing the configuration of the server 200. The server 200 predicts the behavior of the pet P based on the sensor data of the pet P received from the home terminal 10. The server 200 also transmits and receives messages to and from the user terminal 300 via the interactive SNS. The server 200 includes a communication unit 211, a processor 212, a memory 213, a recording medium 214, and a database (DB) 215.
The communication unit 211 transmits and receives data to and from external devices. Specifically, the communication unit 211 transmits and receives information to and from the home terminal 10 and the owner's user terminal 300.
The processor 212 is a computer such as a CPU, and controls the entire server 200 by executing a program prepared in advance. The processor 212 may be a GPU, an FPGA, a DSP, an ASIC, or the like. Specifically, the processor 212 predicts the behavior of the pet based on the sensor data received from the home terminal 10. The processor 212 also transmits message information regarding the predicted behavior of the pet to the owner's user terminal 300 via the interactive SNS. The processor 212 is an example of the acquisition means, the prediction means, the determination means, and the preprocessing means.
The memory 213 is composed of a ROM, a RAM, and the like, and is also used as a working memory while the processor 212 is executing various processes. The recording medium 214 is a non-volatile, non-transitory recording medium such as a disk-shaped recording medium or a semiconductor memory, and is configured to be detachable from the server 200. The recording medium 214 records various programs executed by the processor 212.
The database 215 stores information and images received from the home terminal 10 through the communication unit 211. That is, message information and images sent and received by the users of many user terminals 300 are stored in the database 215. The database 215 also stores, for each user, the transmission timing of message information and message information prepared in advance (for example, predetermined messages, stamps, and the like). These pieces of message information are stored in association with the behaviors of the pet P; that is, one or more pieces of message information are stored for each behavior of the pet P. The server 200 may also include an input unit such as a keyboard and a mouse for an administrator to give instructions and input, and a display unit such as a liquid crystal display.
[User Terminal]
Fig. 5(B) is a block diagram showing the internal configuration of the user terminal 300 used by the owner. The user terminal 300 is, for example, a smartphone, a tablet, or a PC. The user terminal 300 includes a communication unit 311, a processor 312, a memory 313, and a touch panel 314.
 通信部311は、外部装置との間でデータの送受信を行う。具体的に、通信部311は、サーバ200との間で情報を送受信する。 The communication unit 311 transmits and receives data to and from an external device. Specifically, the communication unit 311 transmits and receives information to and from the server 200 .
 プロセッサ312は、CPUなどのコンピュータであり、予め用意されたプログラムを実行することにより、ユーザ端末300の全体を制御する。なお、プロセッサ312は、GPU、FPGA、DSP、ASICなどであってもよい。具体的に、ユーザ端末300には、サーバ200が実行する対話型SNS用のメッセージングアプリがインストールされる。「メッセージングアプリ」とは、テキストメッセージ、スタンプ、静止画、動画などのメッセージ情報の交換を可能にするアプリである。プロセッサ312は、メッセージングアプリにより、送信されたメッセージ情報をサーバ200を介して受信し、タッチパネル314に表示する。また、プロセッサ312は、メッセージングアプリにより、飼い主が入力したメッセージ情報をサーバ200へ送信する。 The processor 312 is a computer such as a CPU, and controls the entire user terminal 300 by executing a program prepared in advance. Note that processor 312 may be a GPU, FPGA, DSP, ASIC, or the like. Specifically, a messaging application for an interactive SNS executed by the server 200 is installed in the user terminal 300 . A "messaging application" is an application that enables the exchange of message information such as text messages, stamps, still images, and videos. Processor 312 receives the transmitted message information via server 200 by the messaging application and displays it on touch panel 314 . Processor 312 also transmits the message information input by the owner to server 200 using the messaging application.
 The memory 313 is composed of a ROM, a RAM, and the like, and is also used as working memory while the processor 312 executes various processes. The touch panel 314 displays message information received by the user terminal 300, and also functions as an input device for the user.
 [Pet behavior prediction]
 Next, the behavior prediction of the pet P executed by the server 200 will be described in detail.
 (Configuration for behavior prediction)
 FIG. 6 shows the configuration of the server 200 for predicting the behavior of the pet P. As components for behavior prediction, the server 200 includes a preprocessing unit 221, a behavior prediction unit 222, and a determination unit 223.
 Sensor data obtained by sensing the state of the pet P is input to the preprocessing unit 221. The sensor data is data detected by the pet terminal 20 that indicates physical quantities related to the state of the pet P. In this embodiment, the vibration count, the amount of exercise, and the number of times of ascending/descending detected by the pet terminal 20 are used as the sensor data. The preprocessing unit 221 preprocesses the sensor data, generates feature values indicating the movement of the pet P, and outputs them to the behavior prediction unit 222.
 Here, the preprocessing by the preprocessing unit 221 will be described in detail. The preprocessing unit 221 generates feature values from the sensor data detected by the pet terminal 20. As described above, the vibration count, the amount of exercise, and the number of times of ascending/descending are used as the sensor data, all of which are time-series data. FIG. 7 is a graph showing an example of the vibration count and the amount of exercise; for convenience, the number of times of ascending/descending is not illustrated. The preprocessing unit 221 divides each of these data into specific periods (hereinafter also referred to as "prediction periods"). The prediction period is the unit time for which the behavior of the pet P is predicted. If the prediction period is 30 seconds, the preprocessing unit 221 predicts the pet's behavior every 30 seconds.
 The preprocessing unit 221 divides the data of each prediction period into segments shifted by a predetermined time, generating divided data. If the predetermined time is 3 seconds, the preprocessing unit 221 shifts through the vibration-count data of the prediction period (30 seconds) 3 seconds at a time, generating 10 pieces of divided data. Similarly, the preprocessing unit 221 generates 10 pieces of divided data each for the amount of exercise and the number of times of ascending/descending, yielding a total of 30 pieces of divided data. Note that these pieces of divided data are the sensor output data themselves, that is, raw data.
 Furthermore, the preprocessing unit 221 calculates five statistics for each of the vibration count, the amount of exercise, and the number of times of ascending/descending. Specifically, the five statistics are the mean, the standard deviation, the maximum, the minimum, and the number of times a value greater than a threshold was recorded. That is, based on the 10 pieces of divided vibration-count data included in one prediction period, the preprocessing unit 221 calculates these five statistics. Similarly, the preprocessing unit 221 calculates the same five statistics for the amount of exercise and the number of times of ascending/descending. In this way, the preprocessing unit 221 calculates a total of 15 statistics.
 The preprocessing unit 221 then outputs the total of 30 pieces of divided data (raw data) and the total of 15 statistics to the behavior prediction unit 222 as the feature values for one prediction period. The output feature values correspond to the explanatory variables that the behavior prediction model in the behavior prediction unit 222 uses for prediction.
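As an illustration only, the preprocessing described above can be sketched as follows. The patent specifies no implementation, so every name here is hypothetical, and a 1 Hz sampling rate is assumed for concreteness:

```python
def preprocess(channels, period_s=30, step_s=3, threshold=10.0, rate_hz=1):
    """Build the feature values for one prediction period.

    channels: dict of channel name -> list of samples covering one
    prediction period (at 1 Hz, 30 samples per channel).
    Returns 30 segments (10 per channel) and 15 statistics (5 per channel).
    """
    samples_per_step = step_s * rate_hz
    segments = []
    stats = []
    for name, data in channels.items():
        # Cut the period into 10 segments of 3 s each ("divided data").
        chunks = [data[i:i + samples_per_step]
                  for i in range(0, period_s * rate_hz, samples_per_step)]
        segments.extend(chunks)
        # Five statistics per channel: mean, standard deviation, maximum,
        # minimum, and the count of values exceeding a threshold.
        mean = sum(data) / len(data)
        var = sum((x - mean) ** 2 for x in data) / len(data)
        stats.extend([mean, var ** 0.5, max(data), min(data),
                      sum(1 for x in data if x > threshold)])
    return segments, stats
```

With the default settings, one call yields the 30 raw-data segments and 15 statistics for a single 30-second prediction period.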
 The behavior prediction unit 222 predicts the pet's behavior based on the feature values input from the preprocessing unit 221. Using a behavior prediction model trained in advance, the behavior prediction unit 222 predicts the pet's behavior from the feature values and outputs a score for each predetermined behavior as the prediction result. That is, the behavior prediction model is a model that has learned the relationship between data on an animal's state and the animal's behavior. Here, various pet behaviors can be set as the predetermined behaviors, for example eating, using the toilet, drinking water, spinning, standing on two legs, stopping, walking, running, and playing. The behavior prediction unit 222 calculates a score for each behavior based on the input feature values. The score for a behavior is a value indicating the likelihood (probability) that the pet's state indicated by the input feature values corresponds to that behavior; the larger the value, the higher the probability. The behavior prediction unit 222 outputs the prediction result, which is the set of scores for the behaviors, to the determination unit 223.
 As the behavior prediction model used by the behavior prediction unit 222, for example, a model using decision trees can be used. In this embodiment, however, the behavior prediction model is not limited to any specific type, and may be a model using another machine learning method, a neural network, or the like.
 The behavior prediction model is trained in advance. That is, the behavior prediction model is trained using training data whose input data are feature values generated by the preprocessing unit 221 and whose correct labels are the pet's actual behavior at the corresponding time. Here, the input data included in the training data are generated, for example, from sensor data detected by the pet terminal 20 through the same preprocessing as in the preprocessing unit 221. The correct labels included in the training data are assigned based on images of the pet captured when the sensor data used to generate the input data were acquired; the pet's actual behavior at that time is given as the correct label. The work of assigning the pet's behavior as a correct label may be performed by a human, by a prediction model that predicts the pet's behavior from captured images, or by a combination of the two.
 The determination unit 223 determines the pet's behavior based on the prediction result input from the behavior prediction unit 222, and outputs the final prediction result. Specifically, among the behaviors included in the prediction result, the determination unit 223 determines a behavior whose score satisfies a predetermined condition, for example a behavior whose score is equal to or greater than a predetermined threshold, to be the pet's behavior at that time, and outputs it as the final prediction result. In the example of FIG. 6, the determination unit 223 sets the predetermined threshold to 0.5, and outputs "eating" and "drinking water", the behaviors in the prediction result whose scores are 0.5 or greater, as the final prediction result. Note that the final prediction result may include the score of each behavior in addition to the behaviors whose scores satisfy the predetermined condition.
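The score thresholding performed by the determination unit 223 amounts to a simple filter. The following minimal sketch uses hypothetical names; the 0.5 threshold follows the FIG. 6 example:

```python
def determine(prediction, threshold=0.5):
    """Keep the behaviors whose score meets the threshold.

    prediction: dict of behavior name -> score (likelihood).
    Returns the final prediction result as a list of (behavior, score)
    pairs, highest score first.
    """
    selected = [(b, s) for b, s in prediction.items() if s >= threshold]
    return sorted(selected, key=lambda bs: bs[1], reverse=True)
```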
 As described above, in this embodiment, using a behavior prediction model trained in advance makes it possible to predict the pet's behavior based on sensor data obtained by sensing the pet's state. In practice, the server 200 may predict behavior in real time (hereinafter referred to as "real-time prediction"), or may use sensor data acquired over a predetermined period to predict the behavior within that period collectively (hereinafter referred to as "non-real-time prediction"). An example of non-real-time prediction is accumulating one day's worth of the pet's sensor data and predicting the pet's behavior for that day all at once.
 In the above configuration, the preprocessing unit 221 is an example of the preprocessing means, the behavior prediction unit 222 is an example of the prediction means, and the determination unit 223 is an example of the output means. Alternatively, the function of the determination unit 223 may be included in the behavior prediction unit 222 and executed by the prediction means.
 (Methods for improving prediction accuracy)
 Next, various methods for improving the accuracy of the pet behavior prediction described above will be explained.
 (1) Limiting the number of occurrences of a behavior per day
 In the case of non-real-time prediction, the server 200 can predict the pet's behavior while taking into account how many times, and how often, each behavior generally occurs. For example, "eating" usually occurs two to four times a day. Therefore, when "eating" appears four or more times in the prediction results for one day, the determination unit 223 determines only the top-scoring occurrences (two to four of them) to be "eating" and excludes the rest from the final prediction result. This improves the prediction accuracy for behaviors whose daily number of occurrences is roughly fixed.
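A minimal sketch of this count limit, under the assumption (not specified in the patent) that the day's predictions are available as (time index, behavior, score) tuples:

```python
def limit_daily_count(predictions, behavior, max_per_day):
    """Keep at most max_per_day occurrences of a behavior, by score.

    predictions: list of (time_index, behavior, score) for one day.
    Occurrences of the given behavior beyond the top-scoring
    max_per_day are dropped; other behaviors pass through unchanged.
    """
    hits = [i for i, p in enumerate(predictions) if p[1] == behavior]
    keep = set(sorted(hits, key=lambda i: predictions[i][2],
                      reverse=True)[:max_per_day])
    return [p for i, p in enumerate(predictions)
            if p[1] != behavior or i in keep]
```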
 (2) Taking time into account
 For pet behaviors that occur in roughly fixed time periods, the server 200 can predict the behavior in consideration of time. For example, "eating" occurs in roughly fixed time periods such as morning, noon, and evening. Therefore, the prediction accuracy for "eating" can be improved by taking into account the times at which the pet eats.
 In one specific method, the time is added to the feature values (explanatory variables) input to the behavior prediction unit 222. This time may be the time when the sensor detecting the physical quantities indicating the state of the pet P generated the sensor data, or the time when the server 200 acquired the sensor data. In this method, the behavior prediction model is trained in advance to use the time as an explanatory variable, which allows the behavior prediction unit 222 to predict the pet's behavior in consideration of time. In another method, the determination unit 223 weights the score of each behavior included in the prediction result according to the time period, and then determines the behavior. For example, the determination unit 223 gives a large weight to the score of the "eating" behavior included in the prediction result during the time periods when the pet eats, and a small weight to it during other time periods. The determination unit 223 then compares the weighted score of each behavior with the predetermined threshold and outputs the final prediction result.
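The second, weighting-based method might look like the following sketch, in which the time bands and the weight values are illustrative assumptions:

```python
def weight_by_time(prediction, hour, time_bands, high=1.0, low=0.5):
    """Weight each behavior's score by its expected time band.

    prediction: dict behavior -> score.
    time_bands: dict behavior -> set of hours in which that behavior
    typically occurs; behaviors without an entry are left unweighted.
    """
    weighted = {}
    for behavior, score in prediction.items():
        if behavior in time_bands:
            w = high if hour in time_bands[behavior] else low
        else:
            w = 1.0
        weighted[behavior] = score * w
    return weighted
```

The weighted scores would then be compared against the same threshold as before.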
 (3) Taking temporally adjacent behaviors into account
 Some pet behaviors are related to one another and occur in temporal succession. For example, "using the toilet" and "drinking water" often follow "eating", and "drinking water" often follows "running". Therefore, adding the most recently occurring behavior to the explanatory variables input to the behavior prediction model makes it easier to predict the behaviors that accompany it. In this case, the behavior prediction model is trained in advance to use the most recently occurring behavior as an explanatory variable.
 In the case of non-real-time prediction, a behavior that occurred later in time can also be taken into account to predict a behavior that occurred before it. In this case, the behavior prediction model is trained in advance to use the temporally later behavior as an explanatory variable. As a result, for example, when "drinking water" occurs, "eating" or "running" is more likely to be predicted as the preceding behavior.
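One simple way to feed the neighboring behaviors to the model as explanatory variables is to append one-hot encodings of them to the feature vector. This is an illustrative sketch; the patent does not specify an encoding, and the behavior list is abbreviated:

```python
# Abbreviated, illustrative list of behavior candidates.
BEHAVIORS = ["eating", "toilet", "drinking water", "running"]

def with_context(features, prev_behavior=None, next_behavior=None):
    """Append one-hot encodings of the neighboring behaviors.

    next_behavior is only available in non-real-time prediction;
    an unknown neighbor encodes as all zeros.
    """
    def one_hot(b):
        return [1.0 if b == name else 0.0 for name in BEHAVIORS]
    return list(features) + one_hot(prev_behavior) + one_hot(next_behavior)
```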
 (4) Noise removal
 Pet behaviors generally fall into those that continue for a certain amount of time and those that occur momentarily. For example, "eating", "using the toilet", and "drinking water" continue for a certain amount of time (these behaviors are hereinafter also referred to as "continuous behaviors"). When the behavior at a certain time is predicted to be "eating", for instance, the behavior at the next time is often also "eating". Therefore, for such a continuous behavior, even if the behavior is predicted at a certain time, the determination unit 223 removes that prediction as noise when the same behavior is not predicted within a fixed period (for example, 30 seconds) before and after that time. This improves the prediction accuracy for continuous behaviors.
 FIG. 8 shows an example of behavior prediction results and the final prediction result based on certain sensor data. In this example, the score of "eating" is high in the prediction result at the time corresponding to waveform X1, but not before or after it. In such a case, even if the "eating" score at the time corresponding to waveform X1 exceeds the predetermined threshold, the determination unit 223 regards waveform X1 as noise and does not include the "eating" prediction based on waveform X1 in the final prediction result. That is, the final prediction result contains no "eating" prediction. This improves the prediction accuracy for continuous behaviors.
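The noise-removal rule can be sketched as follows, assuming the predictions for consecutive periods are held as a list of sets; the one-period neighborhood stands in for the "fixed period before and after" and is an assumption:

```python
def remove_noise(timeline, behavior, window=1):
    """Drop isolated detections of a continuous behavior.

    timeline: list of sets, one per prediction period, each holding the
    behaviors detected in that period. A detection is kept only if the
    same behavior also appears within `window` periods before or after.
    """
    cleaned = []
    for i, behaviors in enumerate(timeline):
        kept = set(behaviors)
        if behavior in behaviors:
            neighbors = (timeline[max(0, i - window):i]
                         + timeline[i + 1:i + 1 + window])
            if not any(behavior in n for n in neighbors):
                kept.discard(behavior)  # isolated detection -> noise
        cleaned.append(kept)
    return cleaned
```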
 (5) Expanding the prediction result
 As described above, continuous behaviors such as "eating", "using the toilet", and "drinking water" continue for a certain amount of time. Therefore, when the same continuous behavior is predicted at nearby times, the determination unit 223 can regard the same behavior as also occurring between them. For example, when the behavior at a time t1 and the behavior at the time t3 two steps later are both predicted to be "eating", the determination unit 223 regards the behavior at the intervening time t2 as "eating" as well, and determines that "eating" continued from t1 to t3. Specifically, in the example of FIG. 8, when the same behavior "eating" is predicted at multiple temporally close times as shown by waveform X2, the determination unit 223 determines the behavior for the period including them to be "eating". That is, the final prediction result indicates that "eating" continued from time t1 to t3. This improves the prediction accuracy for continuous behaviors.
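The expansion rule is the mirror image of the noise removal: short gaps between detections of the same continuous behavior are filled in. A sketch under the same assumed data layout (a list of per-period behavior sets):

```python
def expand(timeline, behavior, max_gap=1):
    """Fill short gaps between detections of a continuous behavior.

    timeline: list of sets of detected behaviors per prediction period.
    If two detections are separated by at most max_gap empty periods,
    the behavior is regarded as continuing through the gap.
    """
    hits = [i for i, b in enumerate(timeline) if behavior in b]
    filled = [set(b) for b in timeline]
    for a, b in zip(hits, hits[1:]):
        if b - a - 1 <= max_gap:
            for i in range(a + 1, b):
                filled[i].add(behavior)
    return filled
```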
 (6) Adjusting the length of the prediction period
 In the examples above, the prediction period is set to 30 seconds for all behaviors: the preprocessing unit 221 generates feature values from the sensor data every 30 seconds, and the behavior prediction unit 222 predicts the pet's behavior every 30 seconds. Instead, a prediction period appropriate to each behavior may be set. For example, the prediction period may be set to 1 minute for "eating", 30 seconds for "using the toilet", and 10 seconds for "walking". In this case, the preprocessing unit 221 preprocesses the sensor data over the prediction period set for each behavior as described above, generates the feature values, and outputs them to the behavior prediction unit 222. Because the prediction is then performed with a time width suited to each behavior, the prediction accuracy can be improved.
 (7) Excluding mutually contradictory behaviors
 Some combinations of pet behaviors do not occur at the same time. For example, behaviors such as "walking" and "running" generally do not occur at the same time as "eating". Therefore, when multiple mutually contradictory behaviors are predicted for the same prediction period, the determination unit 223 excludes those with lower scores and adopts the one with the highest score as the final prediction result. That is, the final prediction result includes only the contradictory behavior with the highest score. This prevents multiple behaviors that cannot realistically co-occur from being predicted at the same time.
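A sketch of this conflict resolution; the conflict pairs listed are only the examples given in the text:

```python
# Pairs of behaviors assumed never to co-occur (illustrative).
CONFLICTS = [("eating", "walking"), ("eating", "running")]

def resolve_conflicts(final, scores):
    """final: set of behaviors selected for one prediction period.
    scores: dict behavior -> score.
    For each conflicting pair that is fully present, drop the
    lower-scoring behavior and keep the higher-scoring one."""
    result = set(final)
    for a, b in CONFLICTS:
        if a in result and b in result:
            result.discard(a if scores[a] < scores[b] else b)
    return result
```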
 (Behavior prediction processing)
 Next, the behavior prediction processing performed by the server 200 will be described. FIG. 9 is a flowchart of the behavior prediction processing. This processing is realized by the processor 212 of the server 200 shown in FIG. 5 executing a program prepared in advance and operating as the components shown in FIG. 6.
 First, the preprocessing unit 221 acquires sensor data indicating the pet's state from the home terminal 10 (step S11). Next, the preprocessing unit 221 preprocesses the sensor data for each predetermined prediction period, generates the feature values described above, and outputs them to the behavior prediction unit 222 (step S12). Next, the behavior prediction unit 222 predicts the pet's behavior from the input feature values using the behavior prediction model trained in advance, and outputs the prediction result to the determination unit 223 (step S13). The determination unit 223 then outputs the behaviors whose scores in the input prediction result are equal to or greater than the predetermined threshold as the final prediction result (step S14). The processing then ends.
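Steps S12 to S14 for a single prediction period can be summarized as one function, with the trained model represented abstractly as a callable (an assumption made only for illustration):

```python
def predict_period(features, model, threshold=0.5):
    """One pass of steps S12-S14 for a single prediction period.

    model: any callable mapping a feature vector to a dict of
    behavior -> score, standing in for the trained prediction model.
    Returns the set of behaviors whose score meets the threshold.
    """
    scores = model(features)
    return {b for b, s in scores.items() if s >= threshold}
```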
 [Transmission of message information]
 Next, the transmission of message information to the owner's user terminal 300 will be described. In this embodiment, based on the behavior of the pet P predicted by the behavior prediction processing described above, the server 200 transmits message information whose content relates to that behavior to the owner's user terminal 300. Specifically, the database 215 of the server 200 stores multiple pieces of message information prepared in advance for each behavior of the pet P. When the server 200 predicts the behavior of the pet P through the behavior prediction processing, it acquires a message corresponding to the predicted behavior from the database 215 and transmits it to the owner's user terminal 300.
 FIG. 10 is a display example of message information on the owner's user terminal 300. Here, the owner's name is "Ichiro" and the pet P's name is "John". In this example, the server 200 is set to reply with message information related to the behavior of the pet P in response to receiving message information from the owner.
 First, the owner sends the message "What are you doing?". In response, the server 200 predicts through the behavior prediction processing described above that the pet P's behavior at that time is "eating", acquires the message information corresponding to "eating" ("I'm eating!"), and sends it to the owner.
 After that, when the owner sends the message "Are you taking a nap?", the server 200 predicts through the behavior prediction processing that the pet P's behavior at that time is "walking", acquires the message information corresponding to "walking" ("I'm out walking!"), and sends it to the owner.
 As described above, according to this embodiment, the server 200 predicts the behavior of the pet P based on the sensor data obtained from the pet terminal 20, and transmits message information corresponding to that behavior to the owner's user terminal 300. The owner can therefore see message information that reflects the pet P's behavior even while away from home.
 <Second embodiment>
 FIG. 11 is a block diagram showing the functional configuration of the information processing apparatus according to the second embodiment. The information processing apparatus 50 of the second embodiment includes an acquisition means 51, a prediction means 52, and an output means 53.
 FIG. 12 is a flowchart of the processing performed by the information processing apparatus 50. The acquisition means 51 acquires data from a sensor that detects physical quantities related to the state of a target animal (step S51). The prediction means 52 predicts the behavior of the target animal based on the acquired data using a trained prediction model, and outputs the prediction result (step S52). The output means 53 determines the behavior of the target animal based on the prediction result and outputs the final prediction result (step S53).
 The processing by the information processing apparatus 50 may also operate as follows. The acquisition means 51 acquires data related to the state of the target animal. The prediction means 52 predicts the behavior of the target animal based on the acquired data using a trained prediction model. Based on the prediction result, the output means 53 outputs, as the final prediction result, the predicted behaviors of the target animal that satisfy a predetermined condition.
 According to the information processing apparatus 50 of the second embodiment, the behavior of the target animal can be predicted based on sensor data indicating physical quantities related to the state of the target animal.
 Some or all of the above embodiments can also be described as in the following supplementary notes, but are not limited to them.
 (Appendix 1)
 An animal behavior prediction device comprising:
 an acquisition means for acquiring data related to the state of a target animal;
 a prediction means for predicting the behavior of the target animal based on the data using a trained prediction model and outputting a prediction result; and
 an output means for outputting, based on the prediction result, a behavior that satisfies a predetermined condition as a final prediction result.
 (Appendix 2)
 The animal behavior prediction device according to Appendix 1, wherein
 the prediction means uses the trained prediction model to output, based on the data, a score indicating the likelihood of each behavior candidate of the target animal, and
 the output means outputs a behavior whose score is equal to or greater than a predetermined threshold as the final prediction result.
 (Appendix 3)
 The information processing apparatus according to Appendix 1 or 2, wherein
 the prediction means predicts behavior candidates of the target animal for each predetermined period, and
 the output means outputs, as the final prediction result, occurrences of a behavior within the predetermined period up to the predetermined number of times determined for that behavior.
 (Appendix 4)
 The information processing apparatus according to Appendix 1 or 2, wherein the prediction means predicts the behavior of the target animal based on the data and the time when the data was generated.
 (Appendix 5)
 The information processing apparatus according to Appendix 2, wherein
 the prediction means weights the score of each behavior included in the prediction result based on a time period predetermined for that behavior, and
 the output means outputs the final prediction result based on the weighted scores.
 (Appendix 6)
 The information processing apparatus according to Appendix 1 or 2, wherein the prediction means predicts the behavior of the target animal based on the data and at least one of a prediction result at a time earlier than, and a prediction result at a time later than, the time when the data was detected.
 (Appendix 7)
 The information processing apparatus according to Appendix 1 or 2, wherein, for a predetermined behavior included in the prediction result, the final prediction result does not include the behavior when the same behavior is not predicted within a predetermined period before and after that behavior.
 (Appendix 8)
 The information processing apparatus according to Appendix 1 or 2, wherein, for a predetermined behavior included in the prediction result, when the same behavior is predicted within a predetermined period before and after that behavior, the final prediction result includes an indication that the predetermined behavior continues within the predetermined period.
(Appendix 9)
The information processing apparatus according to Appendix 1 or 2, further comprising preprocessing means for preprocessing the acquired data, wherein the prediction means predicts the behavior of the target animal based on the preprocessed data, and the preprocessing means performs the preprocessing using data of a time width predetermined for each behavior to be predicted and outputs the result as the preprocessed data.
(Appendix 10)
The information processing apparatus according to Appendix 2, wherein, when prediction results at the same time include predetermined conflicting actions, the final prediction result includes the action with the highest score among the conflicting actions.
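A minimal sketch of the conflict rule in Appendix 10: when two mutually exclusive behaviors are predicted for the same time, only the higher-scoring one survives. The conflict pairs below are hypothetical examples; which behaviors count as "conflicting" is predetermined by the system, not specified here.

```python
# Assumed pairs of mutually exclusive behaviors.
CONFLICTS = [("sleeping", "running"), ("eating", "drinking")]

def resolve_conflicts(scores):
    """scores: behavior -> score for one time step; drop the loser of each conflict pair."""
    result = dict(scores)
    for a, b in CONFLICTS:
        if a in result and b in result:
            loser = a if result[a] < result[b] else b
            del result[loser]
    return result

print(resolve_conflicts({"sleeping": 0.8, "running": 0.6, "eating": 0.5}))
```

Here "running" conflicts with the higher-scoring "sleeping" and is removed, while "eating" has no competing prediction and passes through.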
(Appendix 11)
An animal behavior prediction method comprising: acquiring data related to the state of a target animal; predicting the behavior of the target animal based on the data using a trained prediction model and outputting a prediction result; and outputting, as a final prediction result, a behavior that satisfies a predetermined condition based on the prediction result.
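The three steps of the method above (acquire data, predict with a trained model, filter by a condition) can be sketched as a small pipeline. Every function here is a stand-in: the data source, the model, and the condition are assumptions for illustration, not the claimed implementation.

```python
def acquire_data():
    # Stand-in for readings from cameras, microphones, or biosensors.
    return [0.2, 0.9, 0.4]

def predict(data):
    # Stand-in for the trained prediction model: one score per candidate behavior.
    behaviors = ["sleeping", "eating", "playing"]
    return dict(zip(behaviors, data))

def final_result(scores, condition=lambda s: s >= 0.5):
    # Keep only behaviors that satisfy the predetermined condition.
    return {b: s for b, s in scores.items() if condition(s)}

scores = predict(acquire_data())
print(final_result(scores))  # only behaviors meeting the condition survive
```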
(Appendix 12)
A recording medium recording a program that causes a computer to execute processing of: acquiring data related to the state of a target animal; predicting the behavior of the target animal based on the data using a trained prediction model and outputting a prediction result; and outputting, as a final prediction result, a behavior that satisfies a predetermined condition based on the prediction result.
(Appendix 13)
An animal behavior prediction device comprising: acquisition means for acquiring data related to the state of a target animal; prediction means for predicting the behavior of the target animal based on the data using a trained prediction model; and output means for outputting, as a final prediction result, a behavior that satisfies a predetermined condition among the behaviors of the target animal predicted by the prediction means.
Although the present invention has been described above with reference to the embodiments and examples, the present invention is not limited to them. Various changes that those skilled in the art can understand may be made to the configuration and details of the present invention within its scope.
10 Home terminal
15 Fixed camera
16 Microphone
17 Automatic feeder
18 Pet toilet
19 Speaker
20 Pet terminal
24 Pet camera
27 Biosensor
100 Home system
200 Server
300 User terminal

Claims (12)

  1.  An animal behavior prediction device comprising:
      acquisition means for acquiring data related to the state of a target animal;
      prediction means for predicting the behavior of the target animal based on the data using a trained prediction model and outputting a prediction result; and
      output means for outputting, as a final prediction result, a behavior that satisfies a predetermined condition based on the prediction result.
  2.  The animal behavior prediction device according to claim 1, wherein the prediction means uses the trained prediction model to output, based on the data, a score indicating the likelihood of each candidate behavior of the target animal, and the output means outputs, as the final prediction result, behaviors whose score is equal to or greater than a predetermined threshold.
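As a minimal sketch of claim 2, the model emits a likelihood score per candidate behavior and the final prediction keeps only the behaviors at or above a threshold. The model output is stubbed here; in practice it would come from any trained classifier, which the claim leaves unspecified.

```python
def final_prediction(scores, threshold=0.5):
    """Keep candidate behaviors whose likelihood score meets the threshold."""
    return {b: s for b, s in scores.items() if s >= threshold}

# Stub for the trained prediction model's per-behavior scores on one data sample.
scores = {"sleeping": 0.82, "eating": 0.55, "grooming": 0.31}
print(final_prediction(scores))  # behaviors below the threshold are dropped
```

Note that the threshold value itself is a hypothetical choice; the claim only requires that some predetermined threshold exist.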
  3.  The information processing apparatus according to claim 1 or 2, wherein the prediction means predicts candidate behaviors of the target animal for each predetermined period, and the output means outputs, as the final prediction result, only actions predicted no more than a predetermined number of times, determined for each action, within the predetermined period.
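Claim 3 caps how often each behavior may appear in the final result within one prediction period. A sketch under assumed caps (e.g. an animal is unlikely to eat many separate times in one period) could look like this; the cap values and timeline layout are illustrative only.

```python
# Assumed per-behavior caps within one period; behaviors not listed are uncapped.
MAX_PER_PERIOD = {"eating": 3, "drinking": 5}

def cap_counts(timeline):
    """timeline: list of (time, behavior) predictions within one period.

    Keeps each behavior's occurrences up to its per-behavior cap, in time order.
    """
    counts = {}
    kept = []
    for t, behavior in timeline:
        counts[behavior] = counts.get(behavior, 0) + 1
        if counts[behavior] <= MAX_PER_PERIOD.get(behavior, float("inf")):
            kept.append((t, behavior))
    return kept

timeline = [(0, "eating"), (1, "eating"), (2, "eating"), (3, "eating"), (4, "sleeping")]
print(cap_counts(timeline))  # the fourth "eating" exceeds its cap and is dropped
```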
  4.  The information processing apparatus according to claim 1 or 2, wherein the prediction means predicts the behavior of the target animal based on the data and the time at which the data was generated.
  5.  The information processing apparatus according to claim 2, wherein the prediction means weights the score of each action included in the prediction result based on a time period predetermined for each action, and the output means outputs the final prediction result based on the weighted scores.
  6.  The information processing apparatus according to claim 1 or 2, wherein the prediction means predicts the behavior of the target animal based on the data and at least one of a prediction result at a time before, and a prediction result at a time after, the time at which the data was detected.
  7.  The information processing apparatus according to claim 1 or 2, wherein, for a given action included in the prediction result, if the same action is not predicted within a predetermined period before or after that action, the final prediction result does not include that action.
  8.  The information processing apparatus according to claim 1 or 2, wherein, for a given action included in the prediction result, if the same action is predicted within a predetermined period before or after that action, the final prediction result indicates that the action continues throughout that period.
  9.  The information processing apparatus according to claim 1 or 2, further comprising preprocessing means for preprocessing the acquired data, wherein the prediction means predicts the behavior of the target animal based on the preprocessed data, and the preprocessing means performs the preprocessing using data of a time width predetermined for each behavior to be predicted and outputs the result as the preprocessed data.
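Claim 9's per-behavior preprocessing window can be sketched as follows: brief behaviors use a short window of recent samples, long behaviors such as sleep use a longer one. The window lengths and the averaging step are assumptions; the claim only requires that the time width differ per target behavior.

```python
# Assumed window lengths (in samples) per behavior to be predicted.
WINDOW = {"eating": 3, "sleeping": 10}

def preprocess(samples, behavior):
    """Average the most recent window of sensor samples for the given behavior."""
    w = WINDOW.get(behavior, 5)  # assumed default window
    recent = samples[-w:]
    return sum(recent) / len(recent)

samples = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2]
print(preprocess(samples, "eating"))    # mean of the last 3 samples
print(preprocess(samples, "sleeping"))  # mean of the last 10 samples
```

Each behavior's predictor would then receive features computed over its own time width rather than a single shared window.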
  10.  The information processing apparatus according to claim 2, wherein, when prediction results at the same time include predetermined conflicting actions, the final prediction result includes the action with the highest score among the conflicting actions.
  11.  An animal behavior prediction method comprising: acquiring data related to the state of a target animal; predicting the behavior of the target animal based on the data using a trained prediction model and outputting a prediction result; and outputting, as a final prediction result, a behavior that satisfies a predetermined condition based on the prediction result.
  12.  A recording medium recording a program that causes a computer to execute processing of: acquiring data related to the state of a target animal; predicting the behavior of the target animal based on the data using a trained prediction model and outputting a prediction result; and outputting, as a final prediction result, a behavior that satisfies a predetermined condition based on the prediction result.
PCT/JP2021/047199 2021-12-21 2021-12-21 Animal behavior prediction device, animal behavior prediction method, and recording medium WO2023119396A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/047199 WO2023119396A1 (en) 2021-12-21 2021-12-21 Animal behavior prediction device, animal behavior prediction method, and recording medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/047199 WO2023119396A1 (en) 2021-12-21 2021-12-21 Animal behavior prediction device, animal behavior prediction method, and recording medium

Publications (1)

Publication Number Publication Date
WO2023119396A1 true WO2023119396A1 (en) 2023-06-29

Family

ID=86901545

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/047199 WO2023119396A1 (en) 2021-12-21 2021-12-21 Animal behavior prediction device, animal behavior prediction method, and recording medium

Country Status (1)

Country Link
WO (1) WO2023119396A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200327442A1 (en) * 2016-07-01 2020-10-15 Intel Corporation Technologies for user-assisted machine learning
WO2021014588A1 (en) * 2019-07-23 2021-01-28 株式会社Rabo Server for providing service for acquiring animal behavioral information

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200327442A1 (en) * 2016-07-01 2020-10-15 Intel Corporation Technologies for user-assisted machine learning
WO2021014588A1 (en) * 2019-07-23 2021-01-28 株式会社Rabo Server for providing service for acquiring animal behavioral information

Similar Documents

Publication Publication Date Title
KR102022893B1 (en) Pet care method and system using the same
US12011258B2 (en) Method and apparatus for determining a fall risk
CN113397520B (en) Information detection method and device for indoor object, storage medium and processor
US10769796B2 (en) Information processing apparatus, information processing method and recording medium
US20200282261A1 (en) Automated activity detection and tracking
KR20190028022A (en) Method and apparatus for providing a graphic user interface that shows behavior and emotion of a pet
KR102026183B1 (en) Method and system for state analysis of pets using sensor technology
JP2016146070A (en) Information processor, information processing method and information processing system
JP2007264706A (en) Image processing device, surveillance camera and video surveillance system
WO2023119396A1 (en) Animal behavior prediction device, animal behavior prediction method, and recording medium
JP2019058098A (en) Pet and human friendship degree measuring device, and pet and human friendship degree measuring program
CN117197998A (en) Sensor integrated nursing system of thing networking
KR102331335B1 (en) Vulnerable person care robot and its control method
KR20200051320A (en) IoT BASED MONITORING METHOD AND SYSTEM FOR DETECTING SEPARATION ANXIETY OF PET USING SUPPORT VECTOR MACHINE AND COMPLEX EVENT PROCESSING
KR102270637B1 (en) Method for analyzing pet behavior based on interaction with adoptees
WO2022201293A1 (en) Information processing device, information processing method, and recording medium
JP2023079291A (en) Image processing device, image processing method, and program
WO2022074828A1 (en) Information processing device, information processing method, and recording medium
JP7410555B2 (en) Animal joint monitoring system
WO2023144877A1 (en) Animal management assistance device, animal management assistance method, and recording medium
WO2022201290A1 (en) Communication system, communication device, communication method and recording medium
US20240008453A1 (en) System for inducing pet movement in metaverse environment
JP7162369B1 (en) Information processing method, program and information processing device
US20240008455A1 (en) Information processing device, information processing method, and recording medium
WO2023191044A1 (en) Animal management device, program, and animal health output method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21968833

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023568800

Country of ref document: JP