CN116156310A - Wearable camera gesture monitoring and recognition system and method - Google Patents

Wearable camera gesture monitoring and recognition system and method

Info

Publication number
CN116156310A
CN116156310A (application CN202310032255.5A)
Authority
CN
China
Prior art keywords
head
camera
projection
projection pattern
miniature camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310032255.5A
Other languages
Chinese (zh)
Inventor
冯志全 (Feng Zhiquan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Jinan
Priority to CN202310032255.5A
Publication of CN116156310A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00: Details of colour television systems
    • H04N 9/12: Picture reproducers
    • H04N 9/31: Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3179: Video signal processing therefor
    • H04N 9/3185: Geometric adjustment, e.g. keystone or convergence
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

A wearable camera pose monitoring and recognition system and method. The system comprises: a head-mounted wearing frame and a camera; a head-mounted miniature camera, fixedly connected to the wearing frame and used to acquire scene video information; a projection device, which projects into the same scene captured by the miniature camera and augments it with information, its optical axis adjusted so that the center of the projection pattern lies on the optical axis of the head-mounted miniature camera; a prompt module, which issues a prompt when the system detects an anomaly; and an angle sensing module, which acquires the angular pose information of the head-mounted miniature camera, namely the viewing angle θ and the elevation angle ω. The system addresses the problems of head-mounted cameras failing to capture the operating scene, or capturing it incompletely, and of head movement distorting experiment scoring results; by adopting a multi-projection-area technique, it achieves precise guidance of user behavior and avoids recognizing irrelevant equipment.

Description

Wearable camera gesture monitoring and recognition system and method
Technical Field
The invention relates to the field of experiment monitoring, and in particular to a wearable camera pose monitoring and recognition system and method.
Background
Chinese invention patent CN113705518A, "An intelligent recognition system and method", provides an intelligent recognition system comprising: a primary recognition device arranged in the operating scene, which films the operating process, recognizes primary detection objects, and feeds back the recognition result; a secondary recognition device worn on the operator's body, which films the operating process, recognizes secondary detection objects, and feeds back the recognition result; and a result acquisition device, which collects the recognition results fed back by the primary and secondary recognition devices. By combining multiple mutually cooperating levels of recognition and treating hand positions, experimental equipment, experimental results, experimental phenomena, and the like as detection objects, it achieves real-time recognition of detection objects in the operating scene at a high recognition rate, while the operator works naturally and the experimental process is not interrupted. That application also provides an intelligent recognition method based on the system.
In experimental operation examinations, video of the experimental procedure is usually captured by cameras placed at multiple positions and used later for scoring. These cameras include a head-mounted camera worn on the examinee's head, and existing head-mounted cameras have the following problems: (1) the shooting range is too flexible and uncertain; for example, the applicant found in numerous experiments and tests that a user may unintentionally shake the head slightly, which can cause recognition errors or degrade the system's detection performance; (2) the operating scene often falls outside the camera's field of view, so objects outside the captured scene cannot be recognized; (3) the user cannot tell where the camera is pointing, nor whether it can capture the whole scene; (4) the camera position cannot be naturally and deliberately adjusted for each experiment so that irrelevant equipment is not misrecognized (for example, in circuit experiments, devices that have not yet been connected are often misrecognized); (5) extensive statistics from an early prototype system show that about 90% of "misrecognitions" (identifying one object as another) and "non-recognitions" (failing to identify an object that is present in the scene) are caused by objects being incomplete or absent in the frame because the head-mounted camera did not capture the operating scene. In short, worrying that the head-mounted camera will not capture the operating scene is a nearly universal problem and source of anxiety for test takers. In examination systems in particular, tolerance for misperception by the system, or for misrecognition as experienced by the customer or user, is almost zero.
Disclosure of Invention
To address these problems in the prior art, the invention provides a wearable camera pose monitoring and recognition system and method. With the system, the user can directly see the region captured by the head-mounted miniature camera through the projection device, avoiding misrecognition caused by head shaking. If the user is focused on the experimental procedure itself and does not notice the projected pattern, the system recognizes and handles this automatically. If the user's head shakes or moves abnormally during the experiment, the system issues a warning and automatically enters a paused state, so that the system's evaluation of the user's performance is unaffected, fundamentally improving the user's operating experience.
The technical scheme adopted by the invention is as follows:
a wearable camera gesture monitoring and recognition system comprises a head wearing type wearing frame, a camera,
the head-mounted miniature camera is fixedly connected with the head-mounted wearing frame and is used for acquiring scene video information;
the projection device projects the same scene shot by the miniature camera and is used for enhancing projection information; the central position of the projection pattern is positioned on the optical axis of the head-mounted miniature camera by adjusting the optical axis direction of the projection device;
the prompt module is used for sending out prompts when the system is abnormal;
an angle sensor for obtaining angle posture information of a head-mounted micro camera, the angle posture information including: viewing angle θ and elevation angle ω.
A wearable camera pose monitoring and recognition method comprises the following steps:
S0, establishing a database storing the size R and style S of the projection pattern;
S1, querying the database DB for the projection pattern size R and style S corresponding to each experimental operation step;
S2, having the projection device project onto the experiment table top, generating N projection patterns or projection areas;
S3, judging whether the projection pattern of the head-mounted miniature camera is on the experiment table top; if so, receiving input video data from the head-mounted miniature camera; if not, executing step S4;
S4, judging whether the projection pattern can be detected by the head-mounted miniature camera; if so, receiving input video data from the head-mounted miniature camera; if not, executing step S5;
S5, judging whether the wearer's head pose is standard; if so, receiving input video data from the head-mounted miniature camera; if not, executing step S6;
S6, judging whether the wearer's head is shaking normally; if so, receiving input video data from the head-mounted miniature camera; if not, recognizing and processing the objects within each projection pattern area respectively.
The beneficial effects of the invention are as follows:
the system is provided with the projection device, so that the shooting range can be definitely displayed, and the application problems that shooting is not easy to occur or the operation scene is not fully shot, the head movement affects the experiment scoring result and the like due to the adoption of the head-mounted camera are solved; according to the invention, projection patterns with different sizes and different styles are automatically generated according to the evaluation indexes of the operation, and the multi-projection area technology is adopted, so that accurate guidance on user behaviors is realized, and other irrelevant equipment is prevented from being identified.
Drawings
FIG. 1 is a schematic diagram of the wearable camera pose monitoring and recognition system;
FIG. 2 is a flow chart of the wearable camera pose monitoring and recognition method;
FIG. 3 is a flow chart of object detection in a fifth embodiment of the present invention;
FIG. 4 is a schematic diagram of the neural network in the fifth embodiment of the present invention;
FIG. 5 is a plan view of the experiment table and its projection areas in a sixth embodiment of the present invention.
Detailed Description
The invention is further illustrated by the following examples in conjunction with the accompanying drawings. To clearly present the technical features of the solution, the invention is described in detail below through specific embodiments and the accompanying drawings. The following disclosure provides many different embodiments, or examples, for implementing different structures of the invention. To simplify the disclosure, the components and arrangements of specific examples are described below. Reference numerals and/or letters may be repeated across the examples; this repetition is for simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. It should be noted that the components illustrated in the figures are not necessarily drawn to scale. Descriptions of well-known components, processing techniques, and processes are omitted so as not to unnecessarily obscure the invention.
As shown in fig. 1, the invention provides a wearable camera pose monitoring and recognition system that integrates a head-mounted miniature camera (miniature camera for short), a projection device, and an angle sensing module; the angle sensing module comprises an angle sensor and is fixed to the head-mounted wearing frame (wearing frame for short) through a connecting body. The wearing frame is worn on the user's head and mainly serves to support the other sensing devices; in this embodiment it is a headgear. The miniature camera acquires scene video information, and the projection device projects augmenting information into the same scene seen by the miniature camera. An angle sensor (such as a gyroscope) acquires the angular pose information of the miniature camera, consisting mainly of the viewing angle θ and the elevation angle ω: the viewing angle is the angle between the optical axis of the head-mounted miniature camera and a reference plane α perpendicular to the length direction of the laboratory table; the elevation angle is the angle between the optical axis and the plane β of the experiment table. The system further includes cameras, namely an overhead camera and a front camera, used to acquire video and photo data.
The optical axis direction of the projection device is adjusted so that the center of the projection pattern (for example, a circle with a red edge; in this invention the projection pattern denotes an area or range) lies approximately on the optical axis of the miniature camera; experimental equipment, objects, or operations located within the projection pattern are then necessarily within the area that the head-mounted miniature camera can capture. In other words, the projection pattern visibly marks the shooting range of the head-mounted miniature camera, and as the user's head moves, the projected pattern moves with it. This provides both a behavioral constraint consistent with the user's experience and visual guidance for the user's head movement and observation of the experiment. The projection device can be replaced by a laser, or by a laser group formed of several laser emitters; each laser emitter can generate projection patterns of different sizes and styles, achieving precise guidance of user behavior and avoiding the recognition of irrelevant equipment. Preferably, the system can project different styles of projection patterns at different time steps, which helps the system guide or assess the user's operating behavior.
In a first embodiment of the invention, a wearable camera pose monitoring and recognition system is provided, comprising: a head-mounted wearing frame, a head-mounted miniature camera (miniature camera for short), a projection device, an angle sensing module, and cameras. The angle sensing module is fixed to the head-mounted wearing frame (wearing frame for short) through a connecting body; the miniature camera acquires scene video information, and the projection device projects augmenting information into the same scene seen by the miniature camera. The projection device is a laser group: the projection directions of the different laser projectors make equal angles with the optical axis of the head-mounted miniature camera, and the projection line of any laser projector lies, together with the optical axis, in a plane through the optical axis, so that the center of the scene captured by the head-mounted miniature camera coincides with the center of the projection pattern generated by the laser group. The shooting range of the head-mounted miniature camera depends on the distance between the head and the scene objects and is thus freely controlled by the user.
In a second embodiment of the invention, a wearable camera pose monitoring and recognition system is provided, comprising: a head-mounted wearing frame, a head-mounted miniature camera (miniature camera for short), a projection device, an angle sensing module, and cameras. The angle sensing module is fixed to the head-mounted wearing frame (wearing frame for short) through a connecting body; the miniature camera acquires scene video information, and the projection device projects augmenting information into the same scene seen by the miniature camera. A rectangular projection device consisting of a laser cluster is fixed on a flexible head-mounted bracket. During the user's experimental operation, only objects located within the rectangular projection area can be recognized by the head-mounted miniature camera, and the size of the rectangular projection area can be changed by adjusting the projection direction of each laser. When the user naturally focuses on an object on the experiment table, some of the related objects centered on it necessarily fall within the rectangular projection area, ensuring that the user's current operating scene is within the shooting range of the head-mounted miniature camera. The user can therefore carry out the experiment with confidence, without worrying that the head-mounted miniature camera will fail to capture the current operating scene.
In a third embodiment of the invention, a micro motor drives a single laser to rotate around a shaft, replacing the laser group of the first embodiment; the head-mounted miniature camera is mounted on the shaft (and does not rotate with the connecting rod), and the laser projection device is fixed on the head-mounted bracket.
In a fourth embodiment of the present invention, as shown in fig. 2, the invention further provides a wearable camera pose monitoring and recognition method, comprising the following steps:
step 1: construct a database DB in which the size and style of the projection pattern can be queried by operation step;
step 2: query the database DB for the projection pattern size R and style S corresponding to each experimental operation step;
step 3: the projection device generates N (N ≥ 1) projection patterns or projection areas on the experiment table top;
step 4: judge whether the projection pattern of the head-mounted miniature camera is on the experiment table top; if so, go to step 8; otherwise, go to the next step;
step 5: judge whether the projection pattern can be detected by the head-mounted miniature camera; if so, go to step 8; otherwise, go to the next step;
step 6: judge whether the head pose is standard; if so, go to step 8; otherwise, go to the next step;
step 7: judge whether the head is shaking normally; if so, go to step 8; otherwise, go to step 9;
step 8: receive input video data from the head-mounted miniature camera;
step 9: recognize and process the objects within each projection pattern area respectively; in particular, score and evaluate the user's experimental operation based on the perception and recognition results for the operating scene and the scoring rules (a minimal sketch of this decision chain is given below).
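As an illustration, the following Python sketch chains the four checks of steps 4-9; every helper is a hypothetical stub standing in for the detection and sensing routines described later in this document, not an implementation from the patent.

```python
# Minimal sketch of the step 4-9 decision chain; all helpers are stubs.

def pattern_on_desktop(frame):    # step 4: is the projected pattern on the table top?
    ...

def pattern_detected(frame):      # step 5: does the head camera see the pattern?
    ...

def posture_ok(theta, omega):     # step 6: is the head pose standard?
    ...

def shake_ok(prev, curr):         # step 7: is the head shaking normally?
    ...

def monitor_once(frame, prev_angles, curr_angles):
    """Return 'receive' (step 8) or 'score' (step 9) for one sampling instant."""
    if (pattern_on_desktop(frame)
            or pattern_detected(frame)
            or posture_ok(*curr_angles)
            or shake_ok(prev_angles, curr_angles)):
        return "receive"   # step 8: accept video from the head-mounted camera
    return "score"         # step 9: recognize objects per projection area, then score
```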
When a system anomaly occurs, the method further comprises handling the anomaly, specifically:
(1) the system prompts the user, by voice, text, or other means, to pay attention to the experiment table top;
(2) the system stops receiving or processing input data from the head-mounted miniature camera, temporarily suspends input from all cameras, and reminds the user, by voice, text, or other means, to pause the experiment or redo the current operation;
(3) the projection device uses special measures to prevent the user from continuing to operate, for example flashing rapidly, or projecting a striking color or a special pattern;
(4) step S4 is performed.
"System anomaly" covers the following cases: (1) the projected pattern is not on the table top; (2) the projected pattern cannot be detected; (3) the head pose is non-standard; (4) the head is shaking.
When an anomaly occurs, there are two handling schemes. One is to restart the step and resume receiving video data, with a voice prompt such as "this experiment is invalid, please restart the experimental operation". The other is to prompt the user to redo the current step while the system resumes receiving video data.
In the present invention, the size R is a generalized concept: if the projected pattern is a circle, R is the circle's radius; if it is a rectangle, R is the rectangle's length and width. Furthermore, projection patterns of different styles can be generated according to need: for example, a chaotic interference pattern is generated to warn about an operation or head behavior, while a plain rectangular or circular pattern is generated in normal use so as not to affect the system's perception and recognition of the scene.
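To make the lookup concrete, here is a sketch of the database DB from steps S0/S1 as a simple in-memory table; the step names, sizes, and styles below are illustrative assumptions, not values taken from the patent.

```python
# Sketch of the projection-pattern database DB (steps S0/S1). Entries are
# illustrative: R is a radius for circles and (length, width) for rectangles,
# S is the pattern style.

PATTERN_DB = {
    "check_airtightness": {"R": 1.5,          "S": "dot"},
    "weigh_sample":       {"R": (40.0, 25.0), "S": "rectangle"},
    "pour_chemical":      {"R": 20.0,         "S": "circle"},
}

def query_pattern(step_name):
    """Step S1: return the size R and style S for one experimental step."""
    entry = PATTERN_DB[step_name]
    return entry["R"], entry["S"]
```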
In the invention, judging whether the projection pattern of the head-mounted miniature camera is on the experiment table top specifically comprises:
1. Detecting the position P(Px, Py) of the projected pattern:
collecting projection pattern samples from image frames acquired by the overhead camera, and labeling the samples;
training a neural network model on the samples to obtain a recognition model M1 for the projection pattern;
using M1 to recognize the projection pattern in the scene in the overhead camera's image frames;
computing the center point P(Px, Py) of the projection pattern from the position of its detection box.
2. Detecting the experiment table top position Q(Qx, Qy):
collecting experiment table top samples from image frames acquired by the overhead camera, and labeling the samples;
training a neural network model on the samples to obtain a recognition model M2 for the experiment table top;
using the table-top recognition model M2 to recognize the experiment table top in the scene;
computing the center point Q(Qx, Qy) of the experiment table top from the position of its detection box (also called a labeling box or bounding box).
3. The projection of the miniature camera is on the experiment table top if
|Px − Qx| ≤ (a − c)/2 and |Py − Qy| ≤ (b − d)/2;
otherwise it is not. Here a and b are the length and width of the table-top detection box, and c and d are the length and width of the projection pattern detection box, respectively.
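A sketch of this containment test follows, assuming detection boxes in (x, y, width, height) form with (x, y) the top-left corner, as produced by the recognition models M1 and M2.

```python
# Sketch of the on-desktop test from the condition above.

def box_center(box):
    x, y, w, h = box
    return x + w / 2.0, y + h / 2.0

def projection_on_desktop(pattern_box, desktop_box):
    """True iff |Px - Qx| <= (a - c)/2 and |Py - Qy| <= (b - d)/2."""
    px, py = box_center(pattern_box)
    qx, qy = box_center(desktop_box)
    _, _, a, b = desktop_box   # length and width of the table-top box
    _, _, c, d = pattern_box   # length and width of the pattern box
    return abs(px - qx) <= (a - c) / 2 and abs(py - qy) <= (b - d) / 2
```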
In the present invention, determining whether the projected pattern can be detected by the head-mounted miniature camera specifically comprises:
collecting projection pattern samples from image frames acquired by the head-mounted miniature camera, and labeling the samples;
training a neural network model on the samples to obtain a recognition model M3 for the projection pattern;
using M3 to recognize the projection pattern in the head-mounted miniature camera's image frames;
if a label for the projection pattern is returned, detection succeeds; otherwise, detection fails.
In the embodiment of the invention, judging whether the head pose is standard specifically comprises:
if θ ≥ θ0 or ω ≥ ω0, the head pose is non-standard; otherwise, the head pose is standard. Here θ0 and ω0 are empirical thresholds set in advance, which can be determined from the misrecognition rates of a large number of different users.
In the present invention, judging whether the head is shaking normally specifically comprises calculating the angular velocities
v_θ = θ(t+1) − θ(t)
v_ω = ω(t+1) − ω(t)
If v_θ ≥ v_θ0 or v_ω ≥ v_ω0, the user's head is shaking rapidly; otherwise, the head is shaking normally. Here v_θ0 and v_ω0 are empirical thresholds preset in advance, which can be determined from the misrecognition rates of a large number of different users.
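Both head checks reduce to threshold tests on consecutive (θ, ω) samples from the angle sensing module. The sketch below uses placeholder threshold values, since the patent determines them empirically; taking the magnitude of the angle difference is also an assumption, as the text leaves the sign unaddressed.

```python
# Sketch of the head-pose and head-shake checks. Thresholds are placeholders
# for the empirically tuned values described in the text.

THETA0, OMEGA0 = 30.0, 25.0        # pose thresholds in degrees (assumed)
V_THETA0, V_OMEGA0 = 10.0, 10.0    # shake thresholds in degrees/step (assumed)

def posture_ok(theta, omega):
    """Head pose is standard iff theta < THETA0 and omega < OMEGA0."""
    return theta < THETA0 and omega < OMEGA0

def shake_ok(prev, curr):
    """prev and curr are (theta, omega) samples at times t and t+1."""
    v_theta = curr[0] - prev[0]
    v_omega = curr[1] - prev[1]
    return abs(v_theta) < V_THETA0 and abs(v_omega) < V_OMEGA0
```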
Because the shooting direction of the head-mounted miniature camera coincides with the projection direction of the projection device, no occlusion problem arises, and two-handed operation does not interfere with the system's acquisition of the projection pattern (recognition depends on the brightness of the projection), so the normal, smooth conduct of the experiment is not hindered.
In the fifth embodiment of the present invention, the angle sensor is an acceleration gyroscope.
In the examination system, we operate with the following real chemistry apparatus: iron wire, copper wire, magnesium strips, sandpaper, crucible tongs, tweezers, a lighter, an alcohol lamp, a wide-mouth bottle containing dilute hydrochloric acid, a beaker, and so on.
Items in the experimental scene are detected with the YOLOv5s network, as shown in fig. 3. First, video of all experimental objects in the scene is captured from different viewing angles and distances and cut into frames, and suitable frames are selected as the data set. The collected images are labeled with the LabelImg annotation tool, and the annotations are fed into the YOLOv5 network model as training samples, finally yielding a YOLOv5-based detection model for the experimental objects. During recognition, the YOLOv5s model automatically returns basic information for each recognized object, such as the top-left corner position, height, and width of its detection box.
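As one way to run such a model, the sketch below loads custom weights through the public ultralytics/yolov5 torch.hub interface; the weights path is hypothetical, standing in for the model trained on the LabelImg-annotated laboratory images.

```python
# Inference sketch using the public ultralytics/yolov5 hub interface.
# "weights/lab_objects.pt" is a hypothetical path to the trained model.

import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="weights/lab_objects.pt")

def detect_objects(frame):
    """Return (class name, x, y, width, height) for each detected object."""
    results = model(frame)   # frame may be an image path or a numpy array
    detections = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        detections.append((model.names[int(cls)], x1, y1, x2 - x1, y2 - y1))
    return detections
```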
As shown in fig. 4, a color image of size 608x608 first passes through the CSPDarknet53 convolutional backbone, whose Focus, CBL, CSP, and SPP modules extract image features effectively. The extracted features are then split into three feature maps of different scales according to convolution depth, and the subsequent FPN and PAN structures perform multi-scale feature fusion on them (higher-level feature maps come from deeper layers and carry stronger semantic information, while lower-level maps come from shallower layers and lose less positional information), yielding three fused feature maps of different scales. Finally, the small feature map is used to detect large objects and the large feature map to detect small objects.
In the fifth embodiment of the invention, 6 laser emitters are arranged in a circle to generate a circular projection pattern. Each laser projection spot is first recognized with YOLOv5 so that the bounding boxes of all projection points can be computed. Several operational examination experiments, such as "preparing 100 g of 5% sodium chloride solution", were completed smoothly; across 50 repeated experiments by different operators, recognition accuracy reached 100%, and wrongly awarded and wrongly deducted score points were completely eliminated. The system detects the bounding box in real time; if it is not on the experiment table, the system pauses scene perception and recognition and prompts the operator to mind their head pose until the bounding box is again detected on the table. In this embodiment, the laser spot positions or their bounding boxes can be detected either with the camera directly above or from the images of the head-mounted miniature camera. During operation, if the operator's head turns left and right, the system pauses the reception of all video streams, prompts the user to keep their eyes naturally on the experimental process and to operate in a standard way, and recommends that the user redo the current step, after which the system returns to its normal working state.
In a sixth embodiment of the invention, a wearable camera pose monitoring and recognition system is provided, comprising: a head-mounted wearing frame, a head-mounted miniature camera (miniature camera for short), a projection device, an angle sensing module, and cameras. The angle sensing module is fixed to the head-mounted wearing frame (wearing frame for short) through a connecting body; the miniature camera acquires scene video information, and the projection device projects augmenting information into the same scene seen by the miniature camera. In this embodiment, the laser projection device is a micro projector mounted on the camera directly above the table top, which can dynamically control the projected pattern. In middle-school operational experiments, the placement of instruments always follows explicit rules: for example, an instrument must be put back after use, and the operation area must be kept separate from the instruments. As shown in fig. 5, the laser projection pattern divides the table into an instrument area, an operation area, and an auxiliary area. The experimental equipment and instruments to be used are placed in the instrument area; the operation area is the core area for carrying out and observing the experiment; and auxiliary objects such as cleaning rags, or the space for experimental records, occupy the auxiliary area. By checking the objects in the different areas and perceiving their situation, the user's behavior can be recognized and evaluated: for example, if the user fails to clear the operation area after the experiment, points can be deducted according to the scoring standard; if a used instrument is not returned to the instrument area during the experiment, the operation is non-standard, and the user can be warned or penalized. Note that situational awareness of the table-top objects by partition makes behavioral analysis and evaluation of the operating state effective, efficient, and accurate. In addition, this embodiment designs different projection patterns for different operation steps to guide the user: for example, the pattern is projected onto the object to be operated next, guiding the user to the next instrument or device.
Also in the sixth embodiment, the range of perception and recognition can be controlled by changing the size of the projection pattern. Indeed, during an experiment, if the head-worn projection device is too close to the object being operated, the system may fail to detect that object's bounding box, causing a missed detection (a "non-recognition" problem). With the method of the invention, the system prompts the operator ("user" and "operator" are equivalent in this document) to adjust the distance between the head and the object; if the object still cannot be detected within the allotted time, detection fails, or the user is asked to repeat the operation step until the object is detected (note: the number and names of the instruments required for each operation are known in advance). That is, by adjusting the size of the projection pattern, this embodiment controls the distance between the user's head and the scene objects, so the user need not worry that head-to-scene distance will affect the examination result, avoiding anxiety during operation.
In the sixth embodiment, 6 projection pattern styles are used, each carrying a different meaning (as shown in Table 1): the system projects different pattern styles at different time steps to help guide or assess the user's operating behavior. For example, when the user sees a single rectangular pattern, they should concentrate on observing the experimental phenomenon while recording it; and when the pattern is projected near the scale cursor of the balance, the system automatically reads the scale.
TABLE 1 several projection pattern semantics in embodiments of the invention
[Table 1 is reproduced as images in the original publication; its contents are not available in the text.]
In a seventh embodiment of the invention, preparing oxygen in the laboratory, the experimental operation examination using the method of the invention mainly comprises the following steps:
First step: check the airtightness of the oxygen generator.
A red "·" projection pattern is projected onto the experiment table top by the head-mounted micro projection device. When the user inserts the rubber stopper into the mouth of the test tube, the pattern is projected near the stopper and the tube mouth; the miniature camera bound to the micro projector captures a scene image centered on the pattern, and the YOLOv5 network then recognizes the rubber stopper, the test tube, and the human hand in the scene. When the user holds the test tube in one hand, presses the delivery tube into the water with the other, and watches for bubbles, the "·" pattern falls in turn on the hand and the water surface, and the miniature camera acquires images around the projection point and recognizes the bubbles.
During this step, if the user's head shakes abnormally while the stopper is being inserted, the red "·" pattern will not be near the tube mouth, and the stopper-insertion operation will fall outside the miniature camera's shooting range. In this case, the system obtains the position of "·" in the image, and the position and size of the experiment table top, through the YOLOv5 network. If the network cannot detect "·", or the detected "·" is not on the table top, the system suspends input from all vision devices, prompts the user to observe the experimental process and keep head movements standard, and then prompts the user to restart the airtightness check.
Likewise, while watching for bubbles at the water surface, a user who is not actually looking at the water will usually fail to make the correct observation. The user is therefore required to adjust the head pose so that the "·" pattern is projected onto the water surface above the submerged delivery tube, ensuring that a user who simply follows the natural course of the experiment is neither wrongly penalized nor arbitrarily scored by the system.
Second step: weigh 3 g of potassium permanganate. First, the scoring point "object on the left, weights on the right" is evaluated. In this step the micro projector projects a rectangular pattern. When the rectangle frames the objects in the balance pans, the system first recognizes, via the YOLOv5 network, the positions and dimensions of the rectangular pattern, the balance pans, and so on, and then recognizes the objects inside the rectangle, thereby identifying the potassium permanganate, the weights, and their placement.
While the weights are being placed, if the rectangular pattern does not frame the balance pan, the system prompts the user to adjust the head pose; if this is not corrected within the set time, the micro projector projects a rapidly flashing "X" pattern on the table top and announces by voice that the experiment is invalid and must be restarted.
Similarly, during the subsequent leveling of the balance, the user must watch the rider (the sliding weight on the beam) so that it stays within the rectangular pattern, while the YOLOv5 network recognizes the beam-scale position within the rectangle in real time. If rapid head shaking moves the rectangular pattern off the experiment table, or off the scale region where the rider sits, the system prompts the user to observe the leveling process; if this is not corrected within the prescribed time, the system may judge the operation non-standard and deduct the points for this knowledge point.
Third step: pour the potassium permanganate into a test tube. The knowledge point examined here is "lay flat first, load second, raise third": the test tube is first laid horizontally, the potassium permanganate is then placed into it, and the tube is then slowly raised upright. A circular projection pattern is used in this step: the system works normally only when the circle encloses both the test tube and the chemical; otherwise, it repeatedly reminds the user to adjust the head's distance and orientation relative to the test tube until both the tube and the chemical lie within the circular pattern.
The other steps are monitored and recognized with similar methods.
Finally, tidy the experiment table. In this step several projection patterns are projected onto the table top, marking an operation area and an instrument area. The YOLOv5 network first recognizes the positions and sizes of the different areas; the objects in each projection area are then recognized and compared with the initial state. If they match, the step is scored according to the scoring standard; otherwise, points are deducted according to the scoring standard.
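A sketch of this final comparison follows, assuming per-area object lists from the detector; the flat per-area deduction is an assumed stand-in, since the actual scoring standard is not given in the text.

```python
# Sketch of the tidy-up check: compare the objects detected in each projection
# area with the recorded initial state and deduct per mismatching area.

def tidy_up_deduction(initial_state, final_state, per_area_penalty=2):
    deduction = 0
    for area, objects in initial_state.items():
        if set(final_state.get(area, [])) != set(objects):
            deduction += per_area_penalty
    return deduction

initial = {"instrument_area": ["alcohol_lamp", "beaker"], "operation_area": []}
final   = {"instrument_area": ["alcohol_lamp", "beaker"], "operation_area": ["beaker"]}
print(tidy_up_deduction(initial, final))   # 2: the operation area was not cleared
```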
The invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the wearable camera pose monitoring and recognition method.
The invention further provides a wearable camera pose monitoring and recognition device adapted for use with the wearable camera pose monitoring and recognition system.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solution of the invention. Although the invention has been described in detail with reference to these embodiments, those of ordinary skill in the art will understand that modifications and equivalents may be made to specific embodiments of the invention without departing from its spirit and scope, and such modifications and equivalents are intended to be covered by the claims.

Claims (11)

1. A wearable camera pose monitoring and recognition system, comprising a head-mounted wearing frame and a camera, characterized in that the system further comprises:
a head-mounted miniature camera, fixedly connected to the head-mounted wearing frame and used to acquire scene video information;
a projection device, which projects into the same scene captured by the miniature camera and is used to augment it with projected information, the optical axis of the projection device being adjusted so that the center of the projection pattern lies on the optical axis of the head-mounted miniature camera;
a prompt module, used to issue a prompt to the user when the system detects an anomaly; and
an angle sensing module, used to acquire the angular pose information of the head-mounted miniature camera, the angular pose information including the viewing angle θ and the elevation angle ω.
2. The wearable camera pose monitoring and recognition system according to claim 1, wherein the shooting area of the head-mounted miniature camera is no smaller than the coverage of the projection pattern of the projection device, ensuring that the head-mounted miniature camera captures the experimental equipment, objects, and operating processes located within the projection pattern.
3. A wearable camera pose monitoring and recognition method, characterized by comprising the following steps:
S0, establishing a database storing the size R and style S of the projection pattern;
S1, querying the database for the projection pattern size R and style S corresponding to each experimental operation step;
S2, having the projection device project onto the experiment table top, generating N projection patterns or projection areas;
S3, judging whether the projection pattern of the head-mounted miniature camera is on the experiment table top; if so, receiving input video data from the head-mounted miniature camera; if not, executing step S4;
S4, judging whether the projection pattern can be detected by the head-mounted miniature camera; if so, receiving input video data from the head-mounted miniature camera; if not, executing step S5;
S5, judging whether the wearer's head pose is standard; if so, receiving input video data from the head-mounted miniature camera; if not, executing step S6;
S6, judging whether the wearer's head is shaking normally; if so, receiving input video data from the head-mounted miniature camera and recognizing and processing the objects within each projection pattern area respectively; otherwise, performing exception handling.
4. The wearable camera pose monitoring and recognition method according to claim 3, wherein when a system anomaly occurs the method further comprises automatic exception handling by the system, specifically:
the system's prompt module prompts the user to pay attention to the experiment table top;
the system stops receiving or processing input data from the head-mounted miniature camera, and the prompt module prompts the user to pause the experiment or redo the current operation.
5. The wearable camera pose monitoring and recognition method according to claim 3, wherein judging whether the projected pattern of the head-mounted miniature camera is on the experiment table top is specifically:
acquiring projection pattern samples and experiment table top samples from the camera's image frames, and labeling the samples;
training a neural network model on the labeled samples to obtain a recognition model M1 for the projection pattern and a recognition model M2 for the experiment table top;
using M1 to recognize the projection pattern captured by the camera, using M2 to recognize the experiment table top, and obtaining the positions of the detection boxes of the projection pattern and of the experiment table top;
obtaining the center point P(Px, Py) of the projection pattern and the center point Q(Qx, Qy) of the experiment table top;
if it meets
The value of Px-Qx is less than or equal to a-c/2 and the value of Py-Qy is less than or equal to b-d/2
The projection of the projection device is positioned on the experiment table top, otherwise, the projection of the projection device is not positioned on the experiment table top, and the system is abnormal; wherein a and b are the length and width of the desktop detection frame image, and c and d are the length and width of the projection pattern detection frame, respectively.
6. The wearable camera pose monitoring and recognition method according to claim 3, wherein determining whether the projected pattern can be detected by the head-mounted miniature camera is specifically:
collecting projection pattern samples from image frames acquired by the head-mounted miniature camera, and labeling the samples;
training a neural network on the labeled samples to obtain a recognition model M3 for the projection pattern;
using M3 to recognize the projection pattern in the head-mounted miniature camera's image frames;
if a label for the projection pattern is returned, detection succeeds; otherwise, detection fails.
7. The wearable camera pose monitoring and recognition method according to claim 3, wherein judging whether the wearer's head pose is standard is specifically:
presetting thresholds θ0 and ω0 for the viewing angle θ and the elevation angle ω;
if θ ≥ θ0 or ω ≥ ω0, the wearer's head pose is non-standard; otherwise, the wearer's head pose is standard.
8. The wearable camera pose monitoring and recognition method according to claim 3, wherein judging whether the wearer's head is shaking normally is specifically:
presetting head angular-velocity thresholds v_θ0 and v_ω0;
if the velocities satisfy v_θ ≥ v_θ0 or v_ω ≥ v_ω0, the wearer's head is shaking rapidly; otherwise, the wearer's head is shaking normally;
the velocities v_θ and v_ω are calculated as:
v_θ = θ(t+1) − θ(t)
v_ω = ω(t+1) − ω(t)
where the angle sensing module provides the data θ(t+1), θ(t), ω(t+1), and ω(t).
9. The wearable camera pose monitoring and recognition method according to claim 3, wherein recognizing and processing the objects within each projection pattern area respectively comprises: scoring and evaluating the user's experimental operation based on the perception and recognition results for the operating scene and the scoring rules.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the wearable camera pose monitoring and recognition method according to any one of claims 3-9.
11. A wearable camera pose monitoring and recognition device, characterized in that the device is adapted to the wearable camera pose monitoring and recognition system according to claim 1 or 2.
CN202310032255.5A (filed 2023-01-10, priority 2023-01-10) Wearable camera gesture monitoring and recognition system and method, Pending, published as CN116156310A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310032255.5A CN116156310A (en) 2023-01-10 2023-01-10 Wearable camera gesture monitoring and recognition system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310032255.5A CN116156310A (en) 2023-01-10 2023-01-10 Wearable camera gesture monitoring and recognition system and method

Publications (1)

Publication Number Publication Date
CN116156310A 2023-05-23

Family

ID=86357726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310032255.5A Pending CN116156310A (en) 2023-01-10 2023-01-10 Wearable camera gesture monitoring and recognition system and method

Country Status (1)

Country Link
CN (1) CN116156310A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102410832A (en) * 2010-08-06 2012-04-11 佳能株式会社 Position and orientation measurement apparatus and position and orientation measurement method
CN108874030A (en) * 2018-04-27 2018-11-23 努比亚技术有限公司 Wearable device operating method, wearable device and computer readable storage medium
CN110855976A (en) * 2019-10-08 2020-02-28 南京云计趟信息技术有限公司 Camera abnormity detection method and device and terminal equipment
CN111121774A (en) * 2020-01-14 2020-05-08 上海曼恒数字技术股份有限公司 Infrared positioning camera capable of detecting self posture in real time

Similar Documents

Publication Publication Date Title
AU2018337036B2 (en) Augmented reality devices for hazardous contaminant testing
EP3402384B1 (en) Systems and methods for determining distance from an object
US11579904B2 (en) Learning data collection device, learning data collection system, and learning data collection method
US9357966B1 (en) Drug screening device for monitoring pupil reactivity and voluntary and involuntary eye muscle function
US9489574B2 (en) Apparatus and method for enhancing user recognition
WO2021082662A1 (en) Method and apparatus for assisting user in shooting vehicle video
JP2005250990A (en) Operation support apparatus
US20160073017A1 (en) Electronic apparatus
CN110880188A (en) Calibration method, calibration device and calibration system for near-eye display optical system
CN111783640A (en) Detection method, device, equipment and storage medium
JP2016036390A (en) Information processing unit, focal point detection method and focal point detection program
JP2019215688A (en) Visual line measuring device, visual line measurement method and visual line measurement program for performing automatic calibration
KR20200035003A (en) Information processing apparatus, information processing method, and program
KR101535801B1 (en) Process inspection device, method and system for assembling process in product manufacturing using depth map sensors
CN113729619B (en) Portable fundus camera and method of locking/unlocking the same
US20210153794A1 (en) Evaluation apparatus, evaluation method, and evaluation program
CN116156310A (en) Wearable camera gesture monitoring and recognition system and method
US10823964B2 (en) Work assistance apparatus, work assistance method, and computer-readable, non-transitory recording medium recording work assistance program executed by computer
JP2002215321A (en) Indicating image control device
CN114445759A (en) Data sharing method and device for remote sampling, electronic equipment and readable storage medium
WO2021200002A1 (en) Microscope system, projection unit, and egg testing assistance method
EP3884873B1 (en) Evaluation device, evaluation method, and evaluation program
CN217938189U (en) Vision detection device
EP4117270A1 (en) Information processing device, information processing method, and information processing system
Han et al. FPLP3D: security robot for face recognition in the workplace environment using face pose detection assisted controlled FACE++ tool position: a three-dimensional robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination