CN115100560A - Method, device and equipment for monitoring bad state of user and computer storage medium

Method, device and equipment for monitoring bad state of user and computer storage medium

Info

Publication number
CN115100560A
CN115100560A
Authority
CN
China
Prior art keywords
target user
information
monitoring
state
posture
Prior art date
Legal status
Pending
Application number
CN202210592881.5A
Other languages
Chinese (zh)
Inventor
李卫军
卢宝莉
于丽娜
覃鸿
李智伟
Current Assignee
Institute of Semiconductors of CAS
Original Assignee
Institute of Semiconductors of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Semiconductors of CAS
Priority to CN202210592881.5A
Publication of CN115100560A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • G06V40/174 Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method, a device, equipment and a computer storage medium for monitoring a user's bad state. The monitoring method acquires a video to be processed through an image acquisition device; in the case that a target user is in the central area of the video to be processed, performs image recognition on multiple frames of the video to obtain expression information, behavior information and posture information of the target user; determines a monitoring result of the bad state of the target user according to the expression information, the behavior information and the posture information, wherein the monitoring result comprises eye-use abnormality and state abnormality; and outputs first prompt information when the monitoring result comprises eye-use abnormality and/or state abnormality. According to the determined monitoring result, the target user and/or a preset contact can be reminded, so that the target user corrects the current posture or current behavior, realizing real-time monitoring of the bad state of a child (the target user).

Description

Method, device and equipment for monitoring bad state of user and computer storage medium
Technical Field
The invention relates to the technical field of robots, and in particular to a method, a device, equipment and a computer storage medium for monitoring a user's bad state.
Background
A person's habits in daily life reflect not only that person's character but also his or her psychological state. The healthy growth of children is a general concern of parents, teachers and society, and the cultivation of children's life and study habits and the monitoring of their psychological state are the focus of that concern.
At present, parents or teachers generally supervise and correct the development of children's life and study habits in person, or the body is forcibly corrected by physical means such as guardrail-type sitting-posture correctors and brace-type posture-correction products. However, parents or teachers often cannot supervise at all times, making it difficult to help children develop correct habits, while physical means are not only of limited effect but can easily injure children. As a result, problems in children's psychology, eyesight protection and behavior cannot be found and addressed in time.
Therefore, how to monitor and correct children's habits when no parent or teacher is present, and how to discover adverse conditions in children's psychology, eyesight, behavior and the like, are problems to be solved urgently.
Disclosure of Invention
The invention provides a method, a device, equipment and a computer storage medium for monitoring a user's bad state, which are used to overcome the defects in the prior art that parents or teachers cannot supervise at all times, that children's habits are difficult to correct, and that children's psychological states cannot be discovered in time, thereby realizing real-time monitoring of a child's bad state.
The invention provides a method for monitoring a user's bad state, which comprises the following steps:
acquiring a video to be processed through image acquisition equipment;
under the condition that a target user is located in the central area of the video to be processed, carrying out image recognition on multi-frame images of the video to be processed to obtain expression information, behavior information and posture information of the target user;
determining a monitoring result of the bad state of the target user according to the expression information, the behavior information and the posture information, wherein the monitoring result comprises eye-use abnormality and state abnormality;
and outputting first prompt information in the case that the monitoring result comprises the eye-use abnormality and/or the state abnormality, wherein the first prompt information is used for reminding the target user to correct the current posture or the current behavior.
According to the method provided by the invention, the image recognition is carried out on the multi-frame image of the video to be processed to obtain the expression information, the behavior information and the posture information of the target user, and the method comprises the following steps:
carrying out image recognition on the multi-frame image, and determining a target area taking the target user as a center;
performing human body key point detection and behavior recognition on the target user in the target area to obtain the posture information and the behavior information of the target user;
and inputting the multi-frame images into an emotion recognition network in a recognition model to obtain the expression information of the target user, wherein the recognition model is obtained by training an initial recognition model according to a plurality of sample images.
According to the method provided by the invention, determining the monitoring result of the bad state of the target user according to the expression information, the behavior information and the posture information, wherein the monitoring result comprises eye-use abnormality and state abnormality, comprises the following steps:
determining the eye-use distance of the target user according to the posture information, and determining the monitoring result as the eye-use abnormality in the case that the eye-use distance is smaller than a preset distance;
determining emotion labels of the target user according to the expression information, wherein the emotion labels comprise negative emotion labels and positive emotion labels; determining whether the current behavior of the target user is abnormal according to the behavior information and a standard posture model in a database; and determining the monitoring result as the state abnormality in the case that the proportion of negative emotion labels among the emotion labels is greater than or equal to a preset ratio and/or the current behavior of the target user is abnormal.
According to the method provided by the invention, the method further comprises:
in a preset time period, in the case that the number of times corresponding to the eye-use abnormality or the state abnormality is respectively greater than or equal to a preset number of times and the number of times shows an increasing trend, performing at least one of the following operations:
outputting second prompt information;
and sending third prompt information to a terminal corresponding to the preset contact of the target user, wherein the third prompt information is used for reminding the preset contact of the target user.
According to the method provided by the invention, the method further comprises the following steps:
outputting fourth prompt information, wherein the fourth prompt information is used for reminding the target user of inputting the current emotional state;
receiving an emotion state input by the target user, and performing incremental learning according to the expression information, the emotion state and the emotion label of the target user to obtain an incremental learning result;
and modifying the network parameters of the emotion recognition network based on the incremental learning result.
According to the method provided by the invention, the method further comprises:
acquiring a monitoring video acquired by a monitoring system;
determining the posture information of the target user according to the monitoring video;
and outputting fifth prompt information under the condition that the target user is determined to have an abnormal posture according to the posture information.
The invention provides a device for monitoring bad states of users, which comprises:
the acquisition module is used for acquiring a video to be processed through image acquisition equipment;
the image recognition module is used for carrying out image recognition on multi-frame images of the video to be processed under the condition that a target user is in the central area of the video to be processed to obtain expression information, behavior information and posture information of the target user;
the processing module is used for determining a monitoring result of the bad state of the target user according to the expression information, the behavior information and the posture information, wherein the monitoring result comprises eye-use abnormality and state abnormality;
and the output module is used for outputting first prompt information in the case that the monitoring result comprises the eye-use abnormality and/or the state abnormality, wherein the first prompt information is used for reminding the target user to correct the current posture or the current behavior.
The invention provides an electronic device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements any one of the above methods for monitoring a user's bad state.
The present invention provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of monitoring a user's adverse condition as in any of the above.
The present invention provides a computer program product comprising a computer program which, when executed by a processor, implements a method of monitoring a user's adverse conditions as described in any of the above.
The invention provides a method, a device, equipment and a computer storage medium for monitoring a user's bad state. The monitoring method acquires a video to be processed through an image acquisition device; in the case that a target user is located in the central area of the video to be processed, performs image recognition on multiple frames of the video to obtain expression information, behavior information and posture information of the target user; determines a monitoring result of the bad state of the target user according to the expression information, the behavior information and the posture information, wherein the monitoring result comprises eye-use abnormality and state abnormality; and outputs first prompt information in the case that the monitoring result comprises the eye-use abnormality and/or the state abnormality. According to the determined monitoring result, the target user can be reminded to correct the current posture or the current behavior, thereby realizing real-time monitoring of the living and learning habits and the psychological state of a child (the target user).
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the present invention, and that those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is the first schematic flowchart of a method for monitoring a user's bad state according to an embodiment of the present invention;
Fig. 2 is the second schematic flowchart of a method for monitoring a user's bad state according to an embodiment of the present invention;
Fig. 3 is the third schematic flowchart of a method for monitoring a user's bad state according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a device for monitoring a user's bad state according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the physical structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flowchart of a method for monitoring a user's bad state according to an embodiment of the present invention. As shown in fig. 1, the present invention provides a method for monitoring a user's bad state, comprising:
s101: and acquiring a video to be processed through image acquisition equipment.
This step is applicable to scenes in which the study or daily life of the target user is captured.
In the invention, the video to be processed is acquired through the image acquisition device, which may be two monocular cameras or one binocular camera.
In practical application, the calibration parameters of the image acquisition device can be determined through camera calibration before the device leaves the factory. The calibration parameters comprise intrinsic parameters and extrinsic parameters. The intrinsic parameters relate to the camera's own characteristics, such as focal length and pixel size; the extrinsic parameters are parameters in the world coordinate system, such as the position and rotation of the camera, through which the position and orientation of the camera in three-dimensional space can be determined.
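By way of illustration, the following is a minimal sketch of such offline calibration using OpenCV's chessboard routine; the 9x6 board, the 25 mm square size and the image list are assumptions made for the example, not values given by the invention.

```python
# Hypothetical offline calibration sketch (OpenCV); the board geometry is assumed.
import cv2
import numpy as np

def calibrate(image_paths, board=(9, 6), square_mm=25.0):
    # 3-D chessboard corner coordinates in the board's own frame.
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square_mm
    obj_points, img_points, size = [], [], None
    for path in image_paths:
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
    # K and dist are the intrinsic parameters; rvecs/tvecs are per-view extrinsics.
    _, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, size, None, None)
    return K, dist, rvecs, tvecs
```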
It can be understood that, after the video to be processed is acquired, the continuous behaviors and/or expressions of the target user can be obtained by processing the video, so that changes in the target user's state are determined. By combining these state changes, whether the target user's state and/or eye-use distance is abnormal can be judged more accurately, so that the behavior, posture, eye-use distance and psychology of the target user are monitored, any bad state is discovered in time, and the effect of real-time monitoring is ensured.
S102, under the condition that the target user is located in the central area of the video to be processed, carrying out image recognition on multi-frame images of the video to be processed to obtain expression information, behavior information and posture information of the target user.
This step is applicable to scenes in which the expression information, behavior information and posture information of the target user are determined.
In the invention, after the video to be processed is obtained through S101, the multi-frame image of the video to be processed is subjected to image recognition, a target area taking a target user as a center is determined, and then the target user in the target area is detected and/or recognized, so that expression information, behavior information and posture information of the target user are obtained.
In the invention, since a video is composed of a series of image frames, at least one image frame of the video to be processed can first be extracted, and the image corresponding to each of these frames determined. Each such image is then input into a preset image classification model, which processes it to obtain the features of the image. Finally, the fused features are input into a fully connected layer and an activation function for classification, and a predicted category is obtained. At this point, the image frames in which the target user appears can be determined according to the predicted categories; the images corresponding to all frames in which the target user appears are taken as the multi-frame images, image recognition is performed on them, and a target area centered on the target user is determined, as sketched below.
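A minimal sketch of this frame-selection step: the snippet assumes a pretrained binary presence classifier `clf` (target user present or not) and a sampling stride, both of which are illustrative assumptions rather than details fixed by the invention.

```python
# Hypothetical frame-selection sketch; `clf` is an assumed presence classifier
# returning a single logit (positive means the target user is in the frame).
import cv2
import torch

def frames_with_user(video_path, clf, stride=10):
    cap = cv2.VideoCapture(video_path)
    kept, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            x = torch.from_numpy(frame).permute(2, 0, 1).float() / 255.0
            with torch.no_grad():
                logit = clf(x.unsqueeze(0))
            if logit.item() > 0:      # positive logit: target user present
                kept.append(frame)
        idx += 1
    cap.release()
    return kept
```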
In an optional aspect of the present invention, the target area centered on the target user may be an area of fixed size and shape. In another alternative, it may be an irregularly shaped area. The irregularly shaped area may contain only the target user, in which case it follows the outline of the whole target user within the target area; or it may contain only a designated part of the target user, in which case it follows the outline of that designated part. The designated part is a part of the target user's body from which expression information, behavior information or posture information can be acquired. Illustratively, when expression information needs to be acquired, the designated part may be the whole face of the target user; when posture information needs to be acquired, the designated part may be the whole body of the target user.
In the invention, the posture information and behavior information of the target user can be determined by performing human-body key-point detection and behavior recognition on the target user in the target area, after which the multi-frame images are input into an emotion recognition network in a recognition model to obtain the expression information of the target user. The expression information, behavior information and posture information can also be determined by extracting image features from the target areas corresponding to the multi-frame images. In that case, the image features can be extracted by an overall recognition method and/or a local recognition method. For example, when the image features corresponding to expression information are extracted from the target area, an overall recognition method analyzes the expressive face as a whole, from the deformation of the face or from its movement, in order to find the image differences under various expressions; a local recognition method recognizes each part of the face separately, such as the eyes, mouth and eyebrows. The extracted image features may be gray-scale features, motion features and frequency features: gray-scale features are obtained by graying the image and using the different gray values produced by different expressions or behaviors; motion features are obtained from the motion information of the main expression points of the face and the main motion points of the limbs under different expressions; and frequency-domain features are obtained from the differences between the images under different frequency decompositions. The first two are sketched below. The behavior information and expression information of the target user can then be classified with methods such as a linear classifier, a neural network classifier, a support vector machine or a hidden Markov model to obtain corresponding classification results; the behavior information can also be obtained by image processing with a gesture recognition algorithm.
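The gray-scale and motion features mentioned above can be sketched as follows; the 48x48 working size is an assumption made for the example.

```python
# Sketch of gray-scale and motion (frame-difference) features; sizes assumed.
import cv2
import numpy as np

def gray_feature(frame, size=(48, 48)):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, size).astype(np.float32).ravel() / 255.0

def motion_feature(prev_frame, frame, size=(48, 48)):
    a = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    b = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(a, b)          # per-pixel motion energy between frames
    return cv2.resize(diff, size).astype(np.float32).ravel() / 255.0
```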
It can be understood that, in the invention, after the expression information, behavior information and posture information of the target user are obtained through image recognition, the current behavior, posture and psychology of the target user can be monitored through the expression information, behavior information and posture information of the target user, and whether the target user is in a bad state or not is timely determined based on the monitoring result of the current behavior, posture and psychology of the target user, so as to achieve the purpose of real-time supervision.
In some embodiments of the present invention, S102 may further include S1021-S1023, S1021-S1023 as follows:
and S1021, performing image recognition on the multi-frame images, and determining a target area taking a target user as a center.
In some embodiments of the present invention, image recognition is performed on a multi-frame image of a video to be processed, a position of a target user in each frame of image in the multi-frame image is determined, and an area with the target user as a center in each frame of image is used as a target area.
In some embodiments of the present invention, the target area may be of a fixed size, or may be an area containing only the target user. When the target area is of a fixed size, the multi-frame images can be scaled and then segmented according to the fixed size to obtain target areas of that size; in this case, the percentage of the target area occupied by the region where the target user is located should not be smaller than a preset value. When the target area is an area containing only the target user, the multi-frame images may first be grayed; the grayed images are then binarized to obtain processed images, the contour of the target user is extracted from the processed images and denoised, and the target area is obtained by segmenting along the contour of the target user as the boundary, as sketched below.
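A minimal sketch of the gray-binarize-contour pipeline just described, assuming the target user forms the largest external contour in the frame; Otsu thresholding is an illustrative choice, not one specified by the invention.

```python
# Sketch of contour-based target-area extraction; thresholding choice assumed.
import cv2
import numpy as np

def target_region(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    user = max(contours, key=cv2.contourArea)    # assume user is the largest blob
    mask = np.zeros_like(gray)
    cv2.drawContours(mask, [user], -1, 255, thickness=cv2.FILLED)
    return cv2.bitwise_and(frame, frame, mask=mask)  # keep pixels inside contour
```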
And S1022, detecting key points of the human body and identifying behaviors of the target user in the target area to obtain posture information and behavior information of the target user.
In some embodiments of the present invention, human body key point detection is performed on a target user in a target region, position information of at least one human body key point of the target user in the target region is determined, and then a current posture of the target user can be determined based on the position information of the at least one human body key point in the target region, so as to obtain posture information and behavior information corresponding to the current posture of the target user.
In some embodiments of the present invention, for example, when detecting the human-body key points of the target user, the region where each joint of the target user is located in the target area may be regressed to obtain at least one joint point, and target detection may then be performed on the joint points obtained through regression to obtain the position information of at least one human-body key point in the target area, thereby completing the key-point detection.
In addition, behavior recognition can be performed on the target user in the target area to obtain behavior information of the target user, such as whether behaviors of eating, rubbing eyes and the like exist.
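As an illustration of turning detected key points into a posture judgment, the sketch below assumes key points are already available as named 2-D coordinates (from any detector) and applies a head-tilt rule; the rule and its 25-degree threshold are assumptions, not the invention's standard posture model.

```python
# Hypothetical posture check from named 2-D key points; rule and threshold assumed.
import math

def head_tilt_deg(kpts):
    # Angle of the ear-to-shoulder line relative to the vertical axis.
    ex, ey = kpts["left_ear"]
    sx, sy = kpts["left_shoulder"]
    return abs(math.degrees(math.atan2(ex - sx, sy - ey)))

def posture_abnormal(kpts, max_tilt_deg=25.0):
    return head_tilt_deg(kpts) > max_tilt_deg

# Roughly upright example: tilt of about 7 degrees, so not abnormal.
print(posture_abnormal({"left_ear": (110, 80), "left_shoulder": (100, 160)}))
```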
And S1023, inputting the multi-frame images into an emotion recognition network in the recognition model to obtain expression information of the target user, wherein the recognition model is obtained after training the initial recognition model according to the plurality of sample images.
In some embodiments of the present invention, the multiple frames of images are input into an emotion recognition network in the recognition model, and feature extraction is performed on the face of the target user in the multiple frames of images through the emotion recognition network to obtain face change information of the target user, that is, the face change information may be image features.
In some embodiments of the present invention, the recognition model is obtained by training the initial recognition model according to a plurality of sample images, that is, the plurality of sample images may be input into the initial recognition model to train the initial recognition model, and after the initial recognition model completes training, the recognition model is obtained; the training of the initial recognition model can be completed before the multi-frame image of the video to be processed is subjected to image recognition.
In some embodiments of the present invention, the extracted facial change information (image features) is classified and recognized by the emotion recognition network, so that expression information of the target user can be obtained.
In some embodiments of the invention, the expression information may characterize six emotions, which are anger, happiness, sadness, surprise, disgust, and fear, respectively.
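For concreteness, a six-class emotion recognition network could be sketched in PyTorch as below; the invention does not specify an architecture, so the layer sizes and the 48x48 single-channel face input are assumptions.

```python
# Hypothetical six-class emotion network; architecture and input size assumed.
import torch
import torch.nn as nn

EMOTIONS = ["anger", "happiness", "sadness", "surprise", "disgust", "fear"]

class EmotionNet(nn.Module):
    def __init__(self, n_classes=len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 12 * 12, n_classes)  # for 48x48 input

    def forward(self, x):                # x: (N, 1, 48, 48) face crops
        h = self.features(x)
        return self.head(h.flatten(1))   # per-emotion scores
```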
It can be understood that, in the invention, the posture information, the behavior information and the face change information of the target user are obtained through recognition, and whether the target user is in a bad state at present can be timely judged based on the posture information, the behavior information and the face change information.
S103, determining a monitoring result of the bad state of the target user according to the expression information, the behavior information and the posture information, wherein the monitoring result comprises eye-use abnormality and state abnormality.
This step is applicable to scenes in which whether the target user is in a bad state is judged.
In the invention, after the expression information, posture information and behavior information of the target user are obtained through S102, the monitoring result of the bad state of the target user can be determined by combining them; the monitoring result comprises eye-use abnormality and state abnormality, where the state abnormality may include emotional abnormality and behavioral abnormality, i.e. the bad state includes at least one of eye-use abnormality, emotional abnormality and behavioral abnormality.
In the invention, as the target user may have multiple emotions at the same time, the expression information represents multi-dimensional emotion information, namely the expression information comprises scores of various emotions expressed by the current facial expression of the target user. For example, if the total score of each emotion is 10, the identified expression information may be anger 0, happy 5, sad 0, surprised 6, disgust 0, and fear 0; wherein, the score of each emotion is related to the fluctuation value of the target user's emotion, for example, if the score corresponding to anger is 0, it indicates that the target user does not currently have an angry emotion, and if the score corresponding to anger is 10, it indicates that the target user is currently very angry.
In one possible implementation, whether the target user is currently in an abnormal state may be determined by judging whether the target user shows a certain designated emotion, such as anger, sadness or disgust: in the case that the target user is determined to show such a designated emotion, it is determined that the target user's state is abnormal, so that the state of the target user is monitored and a bad state is discovered in time. In another possible implementation, whether the target user is currently in an abnormal state is determined by judging whether the score of each emotion is higher than a preset emotion score; for example, in the case that the score corresponding to anger is 5 and is higher than the preset emotion score of 3, it can be determined that the target user's state is abnormal. Both checks are sketched below.
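Both implementations can be sketched over the per-emotion scores described above; the designated-emotion set and the preset score of 3 follow the examples in the text but are otherwise illustrative assumptions.

```python
# Sketch of the two state-abnormality checks over per-emotion scores.
DESIGNATED = {"anger", "sadness", "disgust"}    # emotions watched in example one

def abnormal_by_presence(scores, watched=DESIGNATED):
    # Implementation one: any watched emotion present at all.
    return any(scores.get(e, 0) > 0 for e in watched)

def abnormal_by_threshold(scores, preset=3):
    # Implementation two: any emotion score above a preset value.
    return any(v > preset for v in scores.values())

scores = {"anger": 5, "happiness": 0, "sadness": 0,
          "surprise": 6, "disgust": 0, "fear": 0}
print(abnormal_by_threshold(scores))    # True: anger score 5 exceeds preset 3
```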
It can be understood that whether the target user's emotion is in a bad state is judged from the current expression information. In this way, not only the behavior and posture of the target user but also the target user's emotion can be monitored, which increases the diversity of monitoring, widens the application range of the invention, and achieves the purpose of determining the target user's bad state in time.
In some embodiments of the present invention, S103 may further include S1031-S1032, where S1031-S1032 are as follows:
and S1031, determining the eye using distance of the target user according to the posture information, and determining the monitoring result as eye using abnormity under the condition that the eye using distance is smaller than a preset distance.
In some embodiments of the present invention, the eye-use distance of the target user can be understood as the distance from the target user's eyes to a book or an electronic device, such as the distance from the target user's eyes to an iPad. The eye-use distance may be determined from an image captured by a camera with depth information, where the camera may be, for example, a structured-light camera or a Time-of-Flight (TOF) camera. Alternatively, the distance from a human-body key point, such as an eye key point or a nose key point, to the book or electronic device can be determined by determining that key point.
In some embodiments of the present invention, the image acquisition device of S101 is arranged on a robot. When the target user's head is lowered and/or the image acquisition device cannot directly capture the target user's eyes, i.e. when the positions of the eyes cannot be determined directly, the positions of the eye key points in space must first be inferred; the midpoint of the line connecting the two eyes is then obtained from those positions, and the distance between this midpoint and the center point of the target object (such as a book or a lamp) is determined, giving the eye-use distance, as sketched below. In the case that the image acquisition device is a binocular camera, the robot may determine the spatial positions of the two eye key points from the distance between the head and the left shoulder, the distance between the head and the right shoulder, and whether the left and right shoulders are at the same height. In the case that the image acquisition device is a monocular camera, the robot may select an object of fixed size as a reference and determine the spatial positions of the eyes from the relative positions of the object and the target user in the target area and/or the ratio of the object's size in the target area to its real size.
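Once 3-D positions for the two eye key points and the target object's center are available (from the stereo or reference-object inference above), the eye-use distance check reduces to the sketch below; the 0.33 m preset distance is an illustrative assumption.

```python
# Sketch of the eye-use distance check; coordinates in metres, preset assumed.
import numpy as np

def eye_use_distance(left_eye, right_eye, object_center):
    # Midpoint of the line connecting the two eyes.
    midpoint = (np.asarray(left_eye, float) + np.asarray(right_eye, float)) / 2.0
    return float(np.linalg.norm(midpoint - np.asarray(object_center, float)))

def eye_use_abnormal(left_eye, right_eye, object_center, preset_m=0.33):
    # Abnormal when the eyes are closer to the object than the preset distance.
    return eye_use_distance(left_eye, right_eye, object_center) < preset_m
```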
S1032, determining emotion labels of the target user according to the expression information, wherein the emotion labels comprise negative emotion labels and positive emotion labels; determining whether the current behavior of the target user is abnormal according to the behavior information and a standard posture model in a database; and determining the monitoring result as state abnormality in the case that the proportion of negative emotion labels among the emotion labels is greater than or equal to a preset ratio and/or the current behavior of the target user is abnormal.
In some embodiments of the invention, the mood labels comprise a negative mood label and a positive mood label; illustratively, anger, sadness, disgust, and fear are negative emotion labels; surprise and happiness are positive emotional labels.
In some embodiments of the present invention, the probability that the emotion label of the target user is a negative (or positive) emotion label is determined according to the expression information, and it is judged whether the proportion of negative emotion labels is greater than or equal to a preset ratio. For example, if, according to preset weight values and the score of each emotion label, the target user is determined to be in a negative emotion with a probability of 80%, the proportion of negative emotion labels is 80%; this proportion is greater than the preset ratio (50%), and the monitoring result is determined as state abnormality.
In some embodiments of the present invention, the current posture of the target user may be determined according to the behavior information and compared with the standard posture model in the database to obtain the difference between the two; in the case that the difference is greater than a preset difference, the current behavior of the target user is determined to be abnormal.
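A sketch of the combined S1032 decision follows, assuming the emotion scores from above, a pose feature vector, and a `standard_model` vector standing in for the database's standard posture model; the 50% ratio and the difference threshold are illustrative assumptions.

```python
# Sketch of the S1032 state-abnormality decision; thresholds and model assumed.
import numpy as np

NEGATIVE = {"anger", "sadness", "disgust", "fear"}

def negative_ratio(scores):
    total = sum(scores.values()) or 1
    return sum(v for e, v in scores.items() if e in NEGATIVE) / total

def behavior_abnormal(pose_vec, standard_model, preset_diff=0.2):
    # Difference between the current posture and the standard posture model.
    diff = np.linalg.norm(np.asarray(pose_vec) - np.asarray(standard_model))
    return diff > preset_diff

def state_abnormal(scores, pose_vec, standard_model, preset_ratio=0.5):
    return (negative_ratio(scores) >= preset_ratio
            or behavior_abnormal(pose_vec, standard_model))
```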
It can be understood that, after the eye-use distance of the target user is determined, it can be judged whether the distance between the eyes and the book or desktop is appropriate in the current posture; if not, the target user's eye use is abnormal. After the emotion labels of the target user are determined to be positive or negative, the target user's current emotional condition can be judged; too much negative emotion indicates an emotional abnormality. After the difference between the target user's current posture and the standard posture model in the database is determined, whether the current behavior is abnormal can be judged. The embodiment of the invention can thus monitor the eye-use condition, psychological condition and behavioral habits of the target user, improving the timeliness with which the target user's living and learning habits and psychological state are determined.
S104, outputting first prompt information in the case that the monitoring result comprises eye-use abnormality and/or state abnormality, wherein the first prompt information is used for reminding the target user to correct the current posture or the current behavior.
In the present invention, in the case that the monitoring result contains at least one of the two results, eye-use abnormality and state abnormality, the first prompt information is output to remind the target user to correct the current posture or current behavior.
It can be understood that the invention can output the first prompt information in time to remind the target user when the target user is determined to be in the bad state, so that the target user can correct the current posture or the current behavior independently, thereby helping the target user develop good life and study habits.
The method and the device can judge whether the target user has potential problems or not based on the bad state of the target user, and prompt the preset contact person to adopt corresponding intervention suggestions to the target user in time, so that the monitoring effect on the target user is improved.
In some embodiments of the present invention, the method for monitoring a user's bad state further includes:
in a preset time period, in the case that the number of times corresponding to the eye-use abnormality or the state abnormality is respectively greater than or equal to a preset number of times and the number of times shows an increasing trend, performing at least one of the following operations:
outputting second prompt information;
and sending third prompt information to a terminal corresponding to the preset contact of the target user, wherein the third prompt information is used for reminding the preset contact of the target user.
In some embodiments of the present invention, if, within a preset time period, the number of eye-use abnormalities in the monitoring results is greater than or equal to a preset number of times, the target user may be developing myopia or strabismus, and reminder information is output. If, within the preset time period, the number of state abnormalities in the monitoring results is greater than or equal to a preset number of times, the target user has shown emotional or behavioral abnormality too often; whether the target user has a potential problem can then be further judged from the expression information and behavior information acquired in S102, and reminder information is output. For example, if the target user shows emotional abnormality more than 15 times in one month, it is determined that the target user has a tendency toward depression.
In some embodiments of the present invention, when, within a preset time period, the number of times corresponding to the eye-use abnormality or the state abnormality is greater than or equal to the preset number of times, the target user is likely to be developing an adverse condition such as myopia, hyperactivity or depression. At this time, second prompt information may be output to remind the target user that there is currently a risk of myopia or that he or she should relax recently, so as to draw the target user's attention. The second prompt information may take the form of voice.
In some embodiments of the present invention, third prompt information may further be sent to a terminal corresponding to a preset contact of the target user, where the third prompt information is used to remind the preset contact so that he or she may take intervention measures in time. The third prompt information may be sent to the preset contact in the form of a message and may include an intervention suggestion for the target user's potential problem.
In some embodiments of the invention, the preset contact of the target user is stored in the database, and the preset contact may be a guardian of the target user. The method and the device can determine whether the target user has a bad state or not based on the emotion, the eye using distance and the behavior of the target user, determine whether the target user has a potential problem or not, and timely remind the target user to correct or remind a preset contact person to take a corresponding intervention suggestion for the target user, so that the monitoring effect on the target user is improved.
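The windowed escalation rule described above might be sketched as follows; the 30-day window, the threshold count of 15 and the first-half/second-half trend test are all assumptions standing in for the preset period, the preset number of times and the increasing trend.

```python
# Sketch of the windowed escalation rule; window, threshold and trend test assumed.
from datetime import datetime, timedelta

def should_escalate(timestamps, window_days=30, preset_count=15):
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = sorted(t for t in timestamps if t >= cutoff)
    if len(recent) < preset_count:
        return False
    # Increasing trend: more abnormal events in the second half of the window.
    mid = cutoff + timedelta(days=window_days / 2)
    first_half = sum(1 for t in recent if t < mid)
    return len(recent) - first_half > first_half
```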
Based on fig. 1, fig. 2 is a second schematic flowchart of a method for monitoring a user's bad state according to an embodiment of the present invention. As shown in fig. 2, the method further includes steps S105-S107, which are as follows:
and S105, outputting fourth prompt information, wherein the fourth prompt information is used for reminding a target user of inputting the current emotional state.
In some embodiments of the present invention, the fourth prompt information is output in the case that the expression information of the target user has been obtained. Since the fourth prompt information prompts the target user to input an emotional state, the device executing the monitoring method interacts with the target user through the fourth prompt information and obtains the target user's feedback about his or her current emotional state.
In some embodiments of the present invention, outputting the fourth prompt information may consist of asking the target user by voice whether he or she is happy, or how he or she feels; alternatively, an emotion label can be given by related personnel manually observing the target user's facial expression, tone of voice and the like.
It is understood that by outputting the fourth prompt information, the true emotion of the target user can be determined.
S106, receiving the emotion state input by the target user, and performing incremental learning according to the expression information, the emotion state and the facial change information of the target user to obtain an incremental learning result.
In some embodiments of the present invention, after receiving the emotional state input by the target user, the recognition model combines the input emotional state with the expression information and facial change information obtained through S102, and obtains an incremental learning result using an incremental learning method. The incremental learning result includes the target user's current real emotion information, the difference between the real emotion information and the emotion information corresponding to the expression information obtained in S102, and the difference between the real emotion information and the emotion information corresponding to the facial change information.
In some embodiments of the invention, the recognition model is constructed using time series data, i.e., the recognition model is a time series model.
It can be understood that the incremental learning result is obtained by performing the incremental learning, so that a training sample is provided for the emotion recognition network, the recognition accuracy of the emotion recognition network is improved, and the purpose of improving the monitoring effect on the adverse state of the user is achieved.
And S107, correcting the network parameters of the emotion recognition network based on the incremental learning result.
In some embodiments of the present invention, the real expression information corresponding to the emotional state input by the target user can be obtained by analyzing the incremental learning result stored in the recognition model; the emotion recognition network in the recognition model is then modified based on the difference between the real emotion information and the expression information obtained through S102 and the difference between the real emotion information and the facial change information, so as to update the network parameters of the emotion recognition network.
It can be understood that the network parameters of the emotion recognition network are corrected through the real-time monitoring of the user and the obtained incremental learning result, so that the recognition accuracy of the emotion recognition network and the monitoring effect on the adverse state of the user are improved.
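A minimal sketch of such an incremental correction in PyTorch: fine-tune the emotion recognition network on the single user-confirmed sample instead of retraining from scratch. The optimizer, the learning rate and the single-sample update are illustrative assumptions.

```python
# Hypothetical incremental update of the emotion network on one confirmed label.
import torch
import torch.nn.functional as F

def incremental_update(model, face_tensor, true_label_idx, lr=1e-4):
    # face_tensor: (1, 48, 48) face crop; true_label_idx: user-confirmed emotion.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    logits = model(face_tensor.unsqueeze(0))      # (1, n_emotions)
    target = torch.tensor([true_label_idx])
    loss = F.cross_entropy(logits, target)        # gap between predicted and
    optimizer.zero_grad()                         # user-reported emotion
    loss.backward()
    optimizer.step()                              # corrects the network parameters
    return loss.item()
```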
Based on fig. 1, fig. 3 is a third schematic flowchart of a method for monitoring a user's bad state according to an embodiment of the present invention. As shown in fig. 3, the method further includes steps S108-S110, which are as follows:
and S108, acquiring the monitoring video acquired by the monitoring system.
In some embodiments of the present invention, the image capturing device is disposed on a robot, and the robot may be interconnected with a monitoring system to obtain a monitoring video captured by the monitoring system; the monitoring system may be a home monitoring system for monitoring conditions in a target room in a home.
And S109, determining the posture information of the target user according to the monitoring video.
In some embodiments of the present invention, in the case that the robot is interconnected with the monitoring system, the robot may read the monitoring video in the monitoring system and determine the posture information of the target user from it. The posture information comprises static postures and motion postures: a static posture can be standing, sitting, lying and the like, and a motion posture can be walking, running, jumping and the like, as sketched below.
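Distinguishing the static and motion postures mentioned above can be sketched from key-point tracks as follows; the displacement threshold is an illustrative assumption.

```python
# Sketch separating static and motion postures from key-point tracks over time.
import numpy as np

def posture_kind(keypoint_tracks, motion_thresh_px=5.0):
    # keypoint_tracks: array of shape (frames, joints, 2) pixel coordinates.
    tracks = np.asarray(keypoint_tracks, dtype=float)
    mean_step = np.linalg.norm(np.diff(tracks, axis=0), axis=-1).mean()
    return "static" if mean_step < motion_thresh_px else "motion"
```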
And S110, outputting fifth prompt information under the condition that the abnormal posture of the target user is determined according to the posture information, wherein the fifth prompt information is used for reminding the target user to correct the abnormal posture.
In some embodiments of the present invention, after the posture information of the target user is obtained, the posture information of the target user may be compared with the standard posture information in the database to determine whether the posture information of the target user is abnormal. And outputting fifth prompt information under the condition that the target user has an abnormal posture, wherein the fifth prompt information is used for reminding the target user to correct the abnormal posture. The fifth prompt message may be a voice prompt, a light prompt, an alarm prompt, or the like.
It can be understood that the posture information of the target user is acquired through interconnection with the monitoring system, and the target user is reminded to make corrections when the posture information is abnormal. The invention can thus monitor the behaviors, postures and psychological state of the target user during study, and also monitor the target user's posture in daily life, thereby enlarging the monitoring range.
Fig. 4 is a schematic structural diagram of a device for monitoring a user's bad state according to an embodiment of the present invention. As shown in fig. 4, the present invention provides a device for monitoring a user's bad state, which is suitable for the method for monitoring a user's bad state according to the embodiments of the present invention. The monitoring device 7 comprises:
the acquisition module 71 is used for acquiring a video to be processed through image acquisition equipment;
the image recognition module 72 is configured to perform image recognition on a multi-frame image of the video to be processed to obtain expression information, behavior information, and posture information of a target user when the target user is located in a central area of the video to be processed;
the processing module 73 is configured to determine a monitoring result of the bad state of the target user according to the expression information, the behavior information and the posture information, where the monitoring result includes eye-use abnormality and state abnormality;
an output module 74, configured to output first prompt information when the monitoring result includes the eye-use abnormality and/or the state abnormality, where the first prompt information is used to remind the target user to correct the current posture or the current behavior.
In some embodiments of the present invention, the identifying module 72 is further configured to perform image identification on the multiple frames of images, and determine a target area centered on the target user;
the recognition module 72 is further configured to perform human body key point detection and behavior recognition on the target user in the target area to obtain the posture information and the behavior information of the target user;
the recognition module 72 is further configured to input the multiple frames of images into an emotion recognition network in a recognition model to obtain expression information of the target user, where the recognition model is obtained by training an initial recognition model according to multiple sample images.
In some embodiments of the present invention, the processing module 73 is further configured to determine the eye-use distance of the target user according to the posture information, and determine the monitoring result as the eye-use abnormality if the eye-use distance is smaller than a preset distance;
the processing module 73 is further configured to determine emotion labels of the target user according to the expression information, where the emotion labels include negative emotion labels and positive emotion labels; determine whether the current behavior of the target user is abnormal according to the behavior information and a standard posture model in a database; and determine the monitoring result as the state abnormality when the proportion of negative emotion labels among the emotion labels is greater than or equal to a preset ratio and/or the current behavior of the target user is abnormal.
In some embodiments of the present invention, the output module 74 is further configured to, in a preset time period, when the number of times corresponding to the eye-use abnormality or the state abnormality is greater than or equal to a preset number of times and the number of times shows an increasing trend, perform at least one of the following operations:
outputting second prompt information;
and sending third prompt information to a terminal corresponding to a preset contact of the target user, wherein the third prompt information is used for reminding the preset contact of the target user.
In some embodiments of the invention, the monitoring device 7 for a user's bad state further comprises:
a receiving module 75, configured to receive an emotional state input by the target user, and perform incremental learning according to the expression information, the emotional state, and the facial change information of the target user to obtain an incremental learning result;
a correction module 76 for correcting network parameters of the emotion recognition network based on the incremental learning result;
the output module 74 is further configured to output a fourth prompt message, where the fourth prompt message is used to prompt the target user to input the current emotional state.
In some embodiments of the present invention, the receiving module 75 is further configured to obtain a monitoring video collected by a monitoring system;
the processing module 73 is further configured to determine the posture information of the target user according to the monitoring video;
the output module 74 is further configured to output a fifth prompt message when it is determined that the target user has an abnormal posture according to the posture information.
Fig. 5 is a schematic diagram of the physical structure of an electronic device provided in an embodiment of the present application. As shown in fig. 5, the electronic device may include: a processor (processor) 810, a communication interface (Communications Interface) 820, a memory (memory) 830 and a communication bus 840, wherein the processor 810, the communication interface 820 and the memory 830 communicate with each other via the communication bus 840. The processor 810 may call logic instructions in the memory 830 to perform the method for monitoring a user's bad state provided by the above methods.
In addition, the logic instructions in the memory 830 may be implemented in software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present application further provides a computer program product comprising a computer program, the computer program being storable on a non-transitory computer-readable storage medium; when the computer program is executed by a processor, the computer can perform the method for monitoring a bad state of a user provided by the above method embodiments.
In yet another aspect, the present application further provides a non-transitory computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the method for monitoring a bad state of a user provided by the above method embodiments.
The above-described apparatus embodiments are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the embodiments without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. Based on this understanding, the above technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and which includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments or in parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for monitoring a bad state of a user, comprising:
acquiring a video to be processed through an image acquisition device;
performing, when a target user is in a central area of the video to be processed, image recognition on multiple frames of the video to be processed to obtain expression information, behavior information and posture information of the target user;
determining a monitoring result of the bad state of the target user according to the expression information, the behavior information and the posture information, wherein the monitoring result comprises an eye-use abnormality and a state abnormality;
and outputting first prompt information when the monitoring result comprises the eye-use abnormality and/or the state abnormality, wherein the first prompt information is used for reminding the target user to correct the current posture or the current behavior.
2. The method for monitoring a bad state of a user according to claim 1, wherein performing image recognition on the multiple frames of the video to be processed to obtain the expression information, the behavior information and the posture information of the target user comprises:
performing image recognition on the multiple frames and determining a target area centered on the target user;
performing human-body key point detection and behavior recognition on the target user in the target area to obtain the posture information and the behavior information of the target user;
and inputting the multiple frames into an emotion recognition network of a recognition model to obtain the expression information of the target user, wherein the recognition model is obtained by training an initial recognition model on a plurality of sample images.
3. The method for monitoring a bad state of a user according to claim 1, wherein determining the monitoring result of the bad state of the target user according to the expression information, the behavior information and the posture information, the monitoring result comprising the eye-use abnormality and the state abnormality, comprises:
determining an eye-use distance of the target user according to the posture information, and determining the monitoring result to be the eye-use abnormality when the eye-use distance is smaller than a preset distance;
determining emotion labels of the target user according to the expression information, wherein the emotion labels comprise negative emotion labels and positive emotion labels; determining whether the current behavior of the target user is abnormal according to the behavior information and a standard posture model in a database; and determining that the monitoring result is the state abnormality when the proportion of negative emotion labels among all emotion labels is greater than or equal to a preset ratio and/or the current behavior of the target user is abnormal.
4. The method of claim 3, further comprising:
when, within a preset time period, the number of occurrences of the eye-use abnormality or the state abnormality is greater than or equal to a preset number of times and the number of occurrences shows an increasing trend, performing at least one of the following operations:
outputting second prompt information;
and sending third prompt information to a terminal corresponding to a preset contact of the target user, wherein the third prompt information is used for alerting the preset contact to the target user's condition.
5. The method for monitoring a bad state of a user according to any one of claims 1 to 4, further comprising:
outputting fourth prompt information, wherein the fourth prompt information is used for reminding the target user to input the current emotional state;
receiving an emotional state input by the target user, and performing incremental learning according to the expression information, the emotional state and the emotion labels of the target user to obtain an incremental learning result;
and correcting network parameters of the emotion recognition network based on the incremental learning result.
6. The method for monitoring a bad state of a user according to any one of claims 1 to 4, further comprising:
acquiring a monitoring video collected by a monitoring system;
determining the posture information of the target user according to the monitoring video;
and outputting fifth prompt information when it is determined, according to the posture information, that the target user has an abnormal posture.
7. An apparatus for monitoring a bad state of a user, the apparatus comprising:
an acquisition module, configured to acquire a video to be processed through an image acquisition device;
an image recognition module, configured to perform, when a target user is in a central area of the video to be processed, image recognition on multiple frames of the video to be processed to obtain expression information, behavior information and posture information of the target user;
a processing module, configured to determine a monitoring result of the bad state of the target user according to the expression information, the behavior information and the posture information, wherein the monitoring result comprises an eye-use abnormality and a state abnormality;
and an output module, configured to output first prompt information when the monitoring result comprises the eye-use abnormality and/or the state abnormality, wherein the first prompt information is used for reminding the target user to correct the current posture or the current behavior.
8. An electronic device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method for monitoring a bad state of a user according to any one of claims 1 to 6.
9. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method for monitoring a bad state of a user according to any one of claims 1 to 6.
10. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method for monitoring a bad state of a user according to any one of claims 1 to 6.
CN202210592881.5A 2022-05-27 2022-05-27 Method, device and equipment for monitoring bad state of user and computer storage medium Pending CN115100560A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210592881.5A CN115100560A (en) 2022-05-27 2022-05-27 Method, device and equipment for monitoring bad state of user and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210592881.5A CN115100560A (en) 2022-05-27 2022-05-27 Method, device and equipment for monitoring bad state of user and computer storage medium

Publications (1)

Publication Number Publication Date
CN115100560A (en)

Family

ID=83289866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210592881.5A Pending CN115100560A (en) 2022-05-27 2022-05-27 Method, device and equipment for monitoring bad state of user and computer storage medium

Country Status (1)

Country Link
CN (1) CN115100560A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106361356A (en) * 2016-08-24 2017-02-01 北京光年无限科技有限公司 Emotion monitoring and early warning method and system
CN112788990A (en) * 2018-09-28 2021-05-11 三星电子株式会社 Electronic device and method for obtaining emotion information
CN109685007A (en) * 2018-12-21 2019-04-26 深圳市康康网络技术有限公司 Method for early warning, user equipment, storage medium and the device being accustomed to eye
WO2020248376A1 (en) * 2019-06-14 2020-12-17 平安科技(深圳)有限公司 Emotion detection method and apparatus, electronic device, and storage medium
WO2021208735A1 (en) * 2020-11-17 2021-10-21 平安科技(深圳)有限公司 Behavior detection method, apparatus, and computer-readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Zejie et al., "Student classroom behavior recognition integrating human pose estimation and object detection", Journal of East China Normal University (Natural Science), no. 2, 31 March 2022 (2022-03-31), pages 0 - 2 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116884649A (en) * 2023-09-06 2023-10-13 山西数字政府建设运营有限公司 Control system for monitoring user safety
CN116884649B (en) * 2023-09-06 2023-11-17 山西数字政府建设运营有限公司 Control system for monitoring user safety

Similar Documents

Publication Publication Date Title
KR102339915B1 (en) Systems and methods for guiding a user to take a selfie
CN108256433B (en) Motion attitude assessment method and system
US20200175262A1 (en) Robot navigation for personal assistance
Vinola et al. A survey on human emotion recognition approaches, databases and applications
KR101697476B1 (en) Method for recognizing continuous emotion for robot by analyzing facial expressions, recording medium and device for performing the method
CN110464367B (en) Psychological anomaly detection method and system based on multi-channel cooperation
US11127181B2 (en) Avatar facial expression generating system and method of avatar facial expression generation
US10610109B2 (en) Emotion representative image to derive health rating
US11158403B1 (en) Methods, systems, and computer readable media for automated behavioral assessment
CN115100560A (en) Method, device and equipment for monitoring bad state of user and computer storage medium
EP3872694A1 (en) Avatar facial expression generating system and method of avatar facial expression generation
CA3050456A1 (en) Facial modelling and matching systems and methods
Hou Deep learning-based human emotion detection framework using facial expressions
EP3799407B1 (en) Initiating communication between first and second users
CN109697413B (en) Personality analysis method, system and storage medium based on head gesture
Liliana et al. The fuzzy emotion recognition framework using semantic-linguistic facial features
CN113326729A (en) Multi-mode classroom concentration detection method and device
Wei et al. 3D facial expression recognition based on Kinect
KR20220005945A (en) Method, system and non-transitory computer-readable recording medium for generating a data set on facial expressions
Ketcham et al. Emotional detection of patients major depressive disorder in medical diagnosis
CN113780158B (en) Intelligent concentration detection method
Alom et al. Optimized facial features-based age classification
CN115331292B (en) Face image-based emotion recognition method and device and computer storage medium
Weng et al. Developing early senses about the world:" Object Permanence" and visuoauditory real-time learning
Sevinç et al. A sentiment analysis study on recognition of facial expressions: gauss and canny methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination