CN118155272A - Detection method and system for preset behaviors and electronic equipment - Google Patents

Detection method and system for preset behaviors and electronic equipment

Info

Publication number
CN118155272A
Authority
CN
China
Prior art keywords
frame
image
target
feature points
camera
Prior art date
Legal status
Pending
Application number
CN202311432241.9A
Other languages
Chinese (zh)
Inventor
黄宁波
柏林
刘彪
舒海燕
袁添厦
祝涛剑
沈创芸
王恒华
方映峰
Current Assignee
Guangzhou Gosuncn Robot Co Ltd
Original Assignee
Guangzhou Gosuncn Robot Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Gosuncn Robot Co Ltd filed Critical Guangzhou Gosuncn Robot Co Ltd
Priority to CN202311432241.9A
Publication of CN118155272A

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a detection method and system for preset behaviors and electronic equipment. The detection method comprises the following steps: acquiring a previous synchronized frame, inputting it into a dual-detection-head target detection network, and detecting a preset-behavior target box; making the robot enter a tracking mode and executing a deep-learning-based pedestrian re-identification algorithm for tracking; executing an ORB feature detection algorithm and calculating the feature points of the target agent's target box; using a pyramid optical flow tracking method to find feature points matching those calculated from the first camera device; calculating the depths of all feature points of the first camera device in the subsequent-frame coordinate system; and calculating their three-dimensional coordinates in the first camera device's subsequent-frame coordinate system, converting them into the navigation map coordinate system, obtaining the tracked target agent's position in the navigation map, and sending push information and warning information.

Description

Detection method and system for preset behaviors and electronic equipment
Technical Field
The present invention relates to the field of robot detection technologies, and in particular, to a method, a system, and an electronic device for detecting a preset behavior.
Background
Uncivilized public urination often occurs in parks, streets, and other public places. Such behavior may be excused when a young child is involved, but not when the offender is an adult male. Managers of parks, streets, and other public places therefore want to discover adult males urinating in public in time and criticize and educate them in person, while also keeping records, so as to effectively deter those who urinate in public.
At present, existing technology for detecting public urination by adult males is mainly based on the following scheme:
The first scheme uses deep-learning-based target detection and keypoint detection: an image is first acquired from a fixed camera and, after preprocessing, input into a two-class target detection network to detect target boxes of adult males and females (females are detected to reduce false alarms). The detected adult-male region is then cropped from the input image and fed into a keypoint detection network to obtain the human body keypoints. Finally, the keypoints are analyzed; if the pose they describe matches that of an adult male urinating while standing, public urination is judged to be occurring, and a report and alarm are issued.
This solution requires cameras to be installed in advance wherever public urination may occur. When a picture is taken from the front and not far from the camera, the scheme can accurately detect public urination by adult males. However, when the image is captured from the side or from behind, the arm keypoints are occluded and the keypoints usually cannot be detected, analyzed, and judged correctly, causing missed detections. Moreover, when the image is captured far from the camera, the scheme usually cannot correctly distinguish males from females, causing false detections; even when the distinction is correct, the target is too small for the human body keypoints to be detected correctly, again causing false and missed detections.
In addition, the scheme suffers from an alarm timeliness problem: because the camera is fixed, the detected offender cannot be tracked. By the time the relevant security personnel reach the place where the urination occurred, the offender may already have left, making on-the-spot criticism and education impossible. Furthermore, since the camera may not capture the offender's facial information, his identity remains unknown and the public urination cannot be put on record.
Disclosure of Invention
The invention aims to provide a new technical solution for a detection method and system for preset behaviors and electronic equipment, which can at least solve the problems of the prior art, such as its susceptibility to missed and false detections and its poor alarm timeliness.
In a first aspect of the present invention, a method for detecting a preset behavior is provided, including:
Based on a first camera device and a second camera device on a robot, acquiring a previous synchronized frame through the first camera device, inputting it into a dual-detection-head target detection network, and detecting a preset-behavior target box;
acquiring a subsequent synchronized frame through the first camera device, making the robot enter a tracking mode, and executing a deep-learning-based pedestrian re-identification algorithm for tracking according to the detected preset-behavior target box, so as to detect a target agent;
cropping the tracked target agent's target box out of the subsequent synchronized frame acquired by the first camera device, executing an ORB feature detection algorithm on the target box, and calculating its feature points;
using a pyramid optical flow tracking method to find, in the subsequent synchronized frame acquired by the second camera device, feature points matching those calculated from the first camera device;
calculating the depths of all feature points of the first camera device in the subsequent-frame coordinate system according to the matched feature points, the intrinsic parameters of the first and second camera devices, and triangulation;
calculating the three-dimensional coordinates of the feature points in the first camera device's subsequent-frame coordinate system from their depths, converting them into the navigation map coordinate system, obtaining the tracked target agent's position in the navigation map, and sending push information and warning information;
wherein the preset behavior refers to an adult male urinating in public, and the target agent refers to the person urinating in public.
Optionally, the first camera device and the second camera device are arranged on the head of the robot, and the step of acquiring a previous synchronized frame through the first camera device comprises:
setting a robot patrol route;
and controlling the robot to patrol according to the set route.
Optionally, the interval between acquiring the previous synchronized frame and the subsequent synchronized frame is 400-600 ms.
Optionally, the step of executing the deep-learning-based pedestrian re-identification algorithm comprises:
preprocessing the subsequent synchronized frame acquired by the first camera device;
inputting the preprocessed subsequent synchronized frame into a YOLOv5 single-class human body detection network and detecting all human bodies appearing in the image;
cropping, from the previous synchronized frame acquired by the first camera device, the image inside the behavior target box detected by the preset-behavior target box detection module;
inputting that cropped and preprocessed image into a pedestrian re-identification tracking algorithm and outputting one 512-dimensional feature vector;
cropping all detected human bodies out of the subsequent synchronized frame, preprocessing them (dividing each pixel value by 255), inputting them into the same pedestrian re-identification tracking algorithm, and outputting N 512-dimensional feature vectors;
and calculating the Euclidean distance between each of the N 512-dimensional feature vectors and the single 512-dimensional feature vector, thereby obtaining the tracked target agent.
Optionally, preprocessing the subsequent synchronized frame acquired by the first camera device divides each of its pixel values by 255.
Optionally, assume there are N pairs of matched feature points. Let the pixel coordinates of the i-th feature point of the first camera device be (u_lx^i, u_ly^i), and the pixel coordinates of the matching i-th feature point of the second camera device be (u_rx^i, u_ry^i). Let the intrinsic parameters of the first camera device be (f_lx, f_ly, c_lx, c_ly) and those of the second camera device be (f_rx, f_ry, c_rx, c_ry). Calculating the depth of a feature point of the first camera device by triangulation comprises:
converting the feature point pixel coordinates onto the camera normalization plane,
x_lx = (u_lx - c_lx) / f_lx
x_ly = (u_ly - c_ly) / f_ly
x_rx = (u_rx - c_rx) / f_rx
x_ry = (u_ry - c_ry) / f_ry, where x_l = (x_lx, x_ly, 1) and x_r = (x_rx, x_ry, 1),
wherein x_l and x_r are the coordinates of the feature point of the subsequent synchronized frame on the normalization plane; substituting them into the triangulation formula
s_l * x_r^ * R * x_l = -x_r^ * t,
wherein x_r^ is the antisymmetric matrix of x_r, R and t represent the extrinsic transformation from the first camera device to the second camera device, and s_l is the depth, to be solved, of the feature point of the first camera device in the subsequent-frame coordinate system.
Optionally, the mean of the three-dimensional coordinates is calculated as
X̄_l = (1/N) * Σ_{i=1}^{N} X_l^i,
and the three-dimensional coordinates of the target agent in the navigation map coordinate system are
(x_m, y_m, z_m, 1)^T = T_lcw * (x̄_l, ȳ_l, z̄_l, 1)^T,
wherein x_m, y_m, and z_m represent the three axes of the map coordinate system, and T_lcw is the pose of the first camera device.
In a second aspect of the present invention, a detection system for the preset behavior is provided, applied to the detection method for the preset behavior described in the foregoing embodiments, the detection system comprising:
an acquisition module for acquiring a previous synchronized frame and a subsequent synchronized frame;
an input module for inputting the acquired previous synchronized frame into a dual-detection-head target detection network and detecting a preset-behavior target box;
a tracking module for executing a deep-learning-based pedestrian re-identification algorithm for tracking according to the detected preset-behavior target box, so as to detect a target agent;
a first calculation module for cropping the tracked target agent's target box out of the subsequent synchronized frame acquired by the first camera device, executing an ORB feature detection algorithm, and calculating the feature points of the target box;
a second calculation module for finding, in the subsequent synchronized frame acquired by the second camera device and using a pyramid optical flow tracking method, feature points matching those calculated from the first camera device;
a third calculation module for calculating the depths of all feature points of the first camera device in the subsequent-frame coordinate system according to the matched feature points, the intrinsic parameters of the first and second camera devices, and triangulation;
a fourth calculation module for calculating the three-dimensional coordinates of the feature points in the first camera device's subsequent-frame coordinate system from their depths, converting them into the navigation map coordinate system, obtaining the tracked target agent's position in the navigation map, and sending push information and warning information;
wherein the preset behavior refers to an adult male urinating in public, and the target agent refers to the person urinating in public.
In a third aspect of the present invention, there is provided an electronic apparatus comprising: a processor and a memory, in which computer program instructions are stored, wherein the computer program instructions, when executed by the processor, cause the processor to perform the steps of the method for detecting a preset behavior described in the above embodiments.
In a fourth aspect of the present invention, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the detection method of preset behavior described in the above embodiments.
According to the detection method of the preset behavior of the present invention, public places such as parks and streets are inspected by a patrol robot. A patrol robot is convenient and flexible to deploy and, compared with a fixed camera, can solve the alarm timeliness problem through its tracking and navigation map functions, effectively deterring uncivilized adult males who urinate in public while standing. The invention treats an adult male standing and urinating in public as a single target, and adopts dual detection heads together with the first and second camera devices to detect such behavior targets at both close and long range, greatly improving the detection accuracy of the target box across shooting angles and distances and avoiding false and missed detections.
Meanwhile, a deep-learning-based pedestrian re-identification algorithm tracks the detected adult male, the images captured at different angles by the first and second camera devices carried by the robot are used to calculate his position in the navigation map, and that position is then transmitted to the manager's mobile phone client over a 5G network. Even if the offender leaves the place where he urinated, the manager can still easily find him, because the robot keeps tracking him and reports his position in the navigation map in real time.
Other features of the present invention and its advantages will become apparent from the following detailed description of exemplary embodiments of the invention, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a block flow diagram of a method for detecting preset behavior according to an embodiment of the present invention;
FIG. 2 is another flow chart of a method for detecting preset behavior according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the operation of an electronic device according to an embodiment of the invention.
Reference numerals:
A processor 201;
A memory 202; an operating system 2021; an application 2022;
A network interface 203;
An input device 204;
A hard disk 205;
A display device 206.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
The following describes a method for detecting public urination by adult males according to an embodiment of the present invention with reference to the accompanying drawings.
As shown in fig. 1 and fig. 2, a method for detecting preset behavior according to an embodiment of the present invention includes:
s1, acquiring a previous frame of synchronous image through a first camera device based on a first camera device and a second camera device on a robot, inputting a target detection network of double detection heads, and detecting a target frame of adult male standing and anywhere urination behaviors;
S2, acquiring a subsequent synchronized frame through the first camera device, making the robot enter a tracking mode, and executing a deep-learning-based pedestrian re-identification algorithm for tracking according to the detected target box, so as to detect the person urinating in public (hereinafter, the offender);
S3, cropping the tracked offender's target box out of the subsequent synchronized frame acquired by the first camera device, executing an ORB feature detection algorithm on it, and calculating the feature points of the target box;
S4, using a pyramid optical flow tracking method to find, in the subsequent synchronized frame acquired by the second camera device, feature points matching those calculated from the first camera device;
S5, calculating the depths of all feature points of the first camera device in the subsequent-frame coordinate system according to the matched feature points, the intrinsic parameters of the first and second camera devices, and triangulation;
S6, calculating the three-dimensional coordinates of the feature points in the first camera device's subsequent-frame coordinate system from their depths, converting them into the navigation map coordinate system, obtaining the tracked offender's position in the navigation map, and sending push information and warning information.
In other words, referring to fig. 1 and fig. 2, in the detection method of the preset behavior of the embodiment of the present invention, a previous synchronized frame is first acquired through the first camera device, based on the first and second camera devices on the robot. The two camera devices feed the dual detection heads: the first camera device may be the left camera and the second camera device the right camera. They can be treated as a left-right binocular pair shooting from two different angles, with non-parallel imaging planes, covering both close-range and long-range views. The previous synchronized frame acquired by the first camera device (left camera) is then input into the dual-detection-head target detection network, and the target box of an adult male standing and urinating in public is detected.
In the present invention, YOLOv5 is modified to detect the "behavior target" of an adult male urinating in public. The YOLOv5 single detection head is changed into dual detection heads. One head detects close-range, multi-angle targets in the image, i.e. close-range, multi-angle "standing urination targets" and "wrong-class targets". The wrong-class targets here refer to close-range, multi-angle standing with both hands hanging down, standing with both hands clenched, standing with both hands in trouser pockets, and the like. The other head detects long-range, multi-angle targets in the image, i.e. long-range, multi-angle "standing urination targets" and "wrong-class targets", where the wrong-class targets refer to the same postures at long range and multiple angles.
A subsequent synchronized frame is acquired through the first camera device, the robot enters a tracking mode, and a deep-learning-based pedestrian re-identification algorithm is executed for tracking according to the detected target box, so as to detect the offender.
Then, the tracked offender's target box can be cropped out of the subsequent synchronized frame acquired by the first camera device, and an ORB (Oriented FAST and Rotated BRIEF) feature detection algorithm is executed on the target box to calculate its feature points. A pyramid optical flow tracking method is then used to find, in the subsequent synchronized frame acquired by the second camera device (right camera), feature points matching those calculated from the first camera device.
Next, the depths of all feature points of the first camera device in the subsequent-frame coordinate system can be calculated according to the matched feature points, the intrinsic parameters of the first and second camera devices, and triangulation. Finally, the three-dimensional coordinates of the feature points in the first camera device's subsequent-frame coordinate system are calculated from their depths and converted into the navigation map coordinate system, yielding the tracked offender's position in the navigation map, whereupon push information and warning information are sent.
In the present invention, a synchronized frame (one frame comprises a left image and a right image, both of size 1920x1080) is first acquired from the two cameras at different angles (understandable as a left-right binocular pair, but with non-parallel imaging planes); the left camera image is denoted left_img and the right camera image right_img. After preprocessing left_img (dividing each pixel value by 255), it is fed into the dual-detection-head YOLOv5 deep learning network to detect the target box of an adult male standing and urinating in public, which effectively alleviates the false and missed detections caused by unfavorable shooting angles and excessive distance.
One detection head detects close-range, multi-angle "standing urination targets" and "wrong-class targets" in the image, where the wrong-class targets refer to close-range, multi-angle standing with both hands hanging down, standing with both hands clenched, standing with both hands in trouser pockets, and the like. The output format of this head is [x1, y1, x2, y2, id], where x1, y1 are the coordinates of the box's upper-left corner, x2, y2 are the coordinates of its lower-right corner, and id is the class of the target in the box, taking the value 0 or 1: 0 denotes a close-range, multi-angle "standing urination target" and 1 a close-range, multi-angle "wrong-class target".
The other detection head detects long-range, multi-angle "standing urination targets" and "wrong-class targets" in the image, where the wrong-class targets refer to long-range, multi-angle standing with both hands hanging down, standing with both hands clenched, standing with both hands in trouser pockets, and the like. The output format of this head is also [x1, y1, x2, y2, id], with id taking the value 2 or 3: 2 denotes a long-range, multi-angle "standing urination target" and 3 a long-range, multi-angle "wrong-class target". The target boxes with id 0 and id 2 are finally selected and input into the next module, as sketched below.
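Purely as an illustration, this final selection step can be written in a few lines of Python; the function name and the list-based detection format are assumptions derived from the [x1, y1, x2, y2, id] description above, not code from the patent.

```python
def select_urination_boxes(detections):
    """Keep only the 'standing urination' boxes from both detection heads.

    Each detection is assumed to be [x1, y1, x2, y2, id]: id 0/1 are the
    close-range urination / wrong-class targets, id 2/3 the long-range ones.
    Only id 0 and id 2 are passed on to the tracking module.
    """
    return [det for det in detections if det[4] in (0, 2)]

# One close-range urination target and one long-range wrong-class target:
detections = [[100, 50, 220, 400, 0], [900, 300, 940, 420, 3]]
print(select_urination_boxes(detections))  # -> [[100, 50, 220, 400, 0]]
```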
During tracking of the target box of an adult male standing and urinating in public, a synchronized frame (one frame comprising left and right images of size 1920x1080) is first acquired from the two cameras at different angles, 500 ms after the frame acquired by the previous module (the target box detection module). To distinguish the two, this description calls the frame acquired by the detection module the previous frame, and the frame acquired by this tracking module the next frame. The "next frame" left camera image is still denoted left_img and the right camera image right_img.
To solve the alarm timeliness problem, the offender's image, his coordinates in the navigation map, and the robot's (two-dimensional) coordinates in the navigation map are pushed to the manager's mobile phone client, on which the navigation map is deployed, so that the manager can know the offender's position in real time. The robot's coordinates in the navigation map are provided in real time by the robot navigation module, and the offender's image is obtained by the target box detection module; this module therefore needs to calculate the offender's coordinates in the navigation map.
Therefore, according to the detection method of the preset behavior of the present invention, public places such as parks and streets are inspected by a patrol robot. A patrol robot is convenient and flexible to deploy and, compared with a fixed camera, can solve the alarm timeliness problem through its tracking and navigation map functions, effectively deterring uncivilized adult males who urinate in public while standing. The invention treats an adult male standing and urinating in public as a single target, and adopts dual detection heads together with the first and second camera devices to detect such behavior targets at both close and long range, greatly improving the detection accuracy of the target box across shooting angles and distances and avoiding false and missed detections.
Meanwhile, a deep-learning-based pedestrian re-identification algorithm tracks the detected adult male, the images captured at different angles by the first and second camera devices carried by the robot are used to calculate his position in the navigation map, and that position is then transmitted to the manager's mobile phone client over a 5G network. Even if the offender leaves the place where he urinated, the manager can still easily find him, because the robot keeps tracking him and reports his position in the navigation map in real time.
In some embodiments of the present invention, the interval between the previous synchronized frame and the subsequent synchronized frame is 400-600 ms. The first and second camera devices are arranged on the head of the robot, and the step of acquiring the previous synchronized frame through the first camera device comprises:
setting a robot patrol route;
And controlling the robot to patrol according to the set patrol route.
That is, during tracking of the target box, a synchronized frame (one frame comprising left and right images of size 1920x1080) is first acquired from the two cameras at different angles, 400-600 ms, preferably 500 ms, after the frame acquired by the detection module (see fig. 2). To distinguish the two, the frame acquired by the detection module is called the previous frame, and the frame acquired by the tracking module the next frame. The "next frame" left camera image is still denoted left_img and the right camera image right_img.
The first and second camera devices are arranged on the head of the robot. Before acquiring the previous synchronized frame through the first camera device, a patrol route is set for the robot, and the robot is then controlled to patrol along the set route. By continuously patrolling fixed routes in public places such as parks and streets and synchronously acquiring images with the two differently angled cameras, the robot detects adult males standing and urinating in public, tracks them, and raises timely alarms, effectively deterring such uncivilized behavior.
In some embodiments of the present invention, the step of executing the deep-learning-based pedestrian re-identification algorithm for tracking comprises:
preprocessing the subsequent synchronized frame acquired by the first camera device;
inputting the preprocessed next frame into a YOLOv5 single-class human body detection network and detecting all human bodies appearing in the image;
cropping, from the previous synchronized frame acquired by the first camera device, the image inside the behavior target box detected by the detection module;
inputting that cropped and preprocessed image into a pedestrian re-identification tracking algorithm and outputting one 512-dimensional feature vector (preprocessing the frames acquired by the first camera device divides each pixel value by 255);
cropping all detected human bodies out of the next frame, preprocessing them (dividing each pixel value by 255), inputting them into the same pedestrian re-identification tracking algorithm, and outputting N 512-dimensional feature vectors;
and calculating the Euclidean distance between each of the N 512-dimensional feature vectors and the single 512-dimensional feature vector, thereby obtaining the tracked uncivilized adult male standing and urinating in public.
In other words, the invention uses a deep-learning-based pedestrian re-identification network for tracking. The specific steps are as follows:
a. The next-frame left image left_img is preprocessed (each pixel value divided by 255) and input into the YOLOv5 single-class human body detection network to detect all human bodies appearing in the image.
b. The image inside the behavior target box detected by the detection module is cropped out of the previous-frame left image, preprocessed (each pixel value divided by 255), and input into this module's pedestrian re-identification tracking algorithm, which finally outputs one 512-dimensional feature vector.
c. All human bodies detected in (a) are cropped out of the next-frame left image, preprocessed (each pixel value divided by 255), and input into this module's pedestrian re-identification tracking algorithm. Assuming N human bodies are detected in total, the tracking algorithm finally outputs N 512-dimensional feature vectors.
d. The Euclidean distance between each of the N 512-dimensional feature vectors output in (c) and the feature vector output in (b) is calculated. The human body represented by the vector in (c) with the smallest Euclidean distance to the vector in (b) is the tracked uncivilized adult male standing and urinating in public. A sketch of this matching step follows.
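Step (d) is a nearest-neighbor search in feature space. The following NumPy sketch shows one possible form; the random vectors merely stand in for the outputs of the re-identification network, and all names are illustrative, not the patent's.

```python
import numpy as np

def match_tracked_target(query_vec, gallery_vecs):
    """Return the index of the human body whose 512-dim re-ID feature vector
    has the smallest Euclidean distance to the tracked target's vector.

    query_vec:    (512,) vector from the previous-frame target crop, step (b).
    gallery_vecs: (N, 512) vectors from the N next-frame human crops, step (c).
    """
    distances = np.linalg.norm(gallery_vecs - query_vec, axis=1)
    return int(np.argmin(distances))

# Toy stand-ins for the network outputs:
rng = np.random.default_rng(seed=0)
query = rng.normal(size=512)
gallery = rng.normal(size=(5, 512))
print(f"tracked target is detected human #{match_tracked_target(query, gallery)}")
```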
It should be noted that the invention makes no modification to the YOLOv5 used to detect human bodies in (a); the unmodified YOLOv5 is used directly. During training of the dual-head detector, each head still uses the YOLOv5 loss function, but each head's loss value is multiplied by a factor of 0.5 before the two are summed and back-propagated. Layer 26 of the pedestrian re-identification network outputs the 512-dimensional feature vector, and a triplet loss function is used during its training. The data for training the pedestrian re-identification network was collected and annotated in-house, and the training procedure follows that of YOLOv5.
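A minimal PyTorch sketch of the described loss weighting, under the assumption that each detection head already yields a YOLOv5-style scalar loss (the two tensors below are placeholders for those values):

```python
import torch

# Placeholders for the per-head YOLOv5 losses computed during training.
loss_close_head = torch.tensor(1.8, requires_grad=True)
loss_far_head = torch.tensor(2.4, requires_grad=True)

# Each head's loss is multiplied by 0.5, summed, and back-propagated together.
total_loss = 0.5 * loss_close_head + 0.5 * loss_far_head
total_loss.backward()
```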
In some embodiments of the present invention, assume there are N pairs of matched feature points. Let the pixel coordinates of the i-th feature point of the first camera device be (u_lx^i, u_ly^i), and the pixel coordinates of the matching i-th feature point of the second camera device be (u_rx^i, u_ry^i). Let the intrinsic parameters of the first camera device be (f_lx, f_ly, c_lx, c_ly) and those of the second camera device be (f_rx, f_ry, c_rx, c_ry). Calculating the depth of a feature point of the first camera device by triangulation comprises:
converting the feature point pixel coordinates onto the camera normalization plane,
x_lx = (u_lx - c_lx) / f_lx
x_ly = (u_ly - c_ly) / f_ly
x_rx = (u_rx - c_rx) / f_rx
x_ry = (u_ry - c_ry) / f_ry, where x_l = (x_lx, x_ly, 1) and x_r = (x_rx, x_ry, 1),
wherein x_l and x_r are the coordinates of the feature point of the subsequent synchronized frame on the normalization plane; substituting them into the triangulation formula
s_l * x_r^ * R * x_l = -x_r^ * t,
wherein x_r^ is the antisymmetric matrix of x_r, R and t represent the extrinsic transformation from the first camera device to the second camera device, and s_l is the depth, to be solved, of the feature point of the first camera device in the subsequent-frame coordinate system.
In the present invention, the mean of the three-dimensional coordinates is calculated as
X̄_l = (1/N) * Σ_{i=1}^{N} X_l^i,
and the three-dimensional coordinates of the offender in the navigation map coordinate system are
(x_m, y_m, z_m, 1)^T = T_lcw * (x̄_l, ȳ_l, z̄_l, 1)^T,
where x_m, y_m, and z_m represent the three axes of the map coordinate system, and T_lcw is the pose of the first (left) camera.
That is, to solve the alarm timeliness problem, the offender's image, his coordinates in the navigation map, and the robot's (two-dimensional) coordinates in the navigation map are pushed to the manager's mobile phone client, on which the navigation map is deployed, so that the manager can know the offender's position in real time.
The robot's coordinates in the navigation map are provided in real time by the robot navigation module, and the offender's image is obtained by the target box detection module; this module therefore needs to calculate the offender's coordinates in the navigation map. The specific calculation method comprises the following steps:
(a) The image inside the tracked offender's target box is cropped out of the next-frame left camera image left_img, and an ORB feature detection algorithm is executed on it to calculate the feature points inside the target box.
(b) A pyramid optical flow tracking method is used to find, in the next-frame right camera image right_img, the feature points matching those calculated in (a), i.e. to find where the feature points inside the offender's target box in left_img appear in right_img. A sketch of these two steps follows.
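Steps (a) and (b) map naturally onto OpenCV's ORB detector and pyramid Lucas-Kanade tracker. The sketch below is an assumed implementation, not the patent's code; the function name and the (x1, y1, x2, y2) box convention are ours.

```python
import cv2
import numpy as np

def match_target_features(left_img, right_img, box):
    """ORB feature points inside the target box of the left image, matched
    into the right image by pyramid Lucas-Kanade optical flow."""
    x1, y1, x2, y2 = box
    gray_l = cv2.cvtColor(left_img, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right_img, cv2.COLOR_BGR2GRAY)

    # (a) Detect ORB keypoints, restricted to the target box by a mask.
    mask = np.zeros_like(gray_l)
    mask[y1:y2, x1:x2] = 255
    orb = cv2.ORB_create(nfeatures=200)
    keypoints = orb.detect(gray_l, mask)
    if not keypoints:
        return np.empty((0, 2)), np.empty((0, 2))
    pts_l = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)

    # (b) Track each keypoint from the left image into the right image.
    pts_r, status, _err = cv2.calcOpticalFlowPyrLK(
        gray_l, gray_r, pts_l, None, winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return pts_l[good].reshape(-1, 2), pts_r[good].reshape(-1, 2)
```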
(c) The depths of all feature points from (a) in the next-frame left camera coordinate system are calculated from the matched feature points, the intrinsic parameters of the left and right cameras, and triangulation; the three-dimensional coordinates of those feature points in that coordinate system are then calculated from the depths. The specific calculation is as follows:
Assume there are N pairs of matched feature points. Let the pixel coordinates of the i-th left camera feature point be (u_lx^i, u_ly^i) and the pixel coordinates of the matching i-th right camera feature point be (u_rx^i, u_ry^i). Let the left camera intrinsics be (f_lx, f_ly, c_lx, c_ly) and the right camera intrinsics be (f_rx, f_ry, c_rx, c_ry). The depth of a left camera feature point is triangulated as follows.
First, the feature point pixel coordinates are converted onto the camera normalization plane (the index i is omitted below):
x_lx = (u_lx - c_lx) / f_lx
x_ly = (u_ly - c_ly) / f_ly
x_rx = (u_rx - c_rx) / f_rx
x_ry = (u_ry - c_ry) / f_ry, where x_l = (x_lx, x_ly, 1) and x_r = (x_rx, x_ry, 1).
x_l and x_r are the coordinates of the next-frame left and right camera feature points on the normalization plane; substituting them into the triangulation formula gives
s_l * x_r^ * R * x_l = -x_r^ * t,
where x_r^ is the antisymmetric matrix of x_r, R and t are the extrinsic transformation from the left camera to the right camera, calibrated in advance by a calibration program, and s_l is the depth, to be solved, of the next-frame left camera feature point in the camera coordinate system.
After s_l is calculated, the three-dimensional coordinates of the feature point in the next-frame left camera coordinate system follow as X_l = (x_lx * s_l, x_ly * s_l, s_l).
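This triangulation can be sketched in NumPy as follows, solving s_l from s_l * x_r^ * R * x_l = -x_r^ * t in the least-squares sense; the function names are illustrative, and K_l, K_r, R, and t are assumed to come from the prior calibration.

```python
import numpy as np

def skew(v):
    """Antisymmetric matrix v^ such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def triangulate_left_point(pt_l, pt_r, K_l, K_r, R, t):
    """3-D coordinates of one matched feature point in the next-frame left
    camera coordinate system.

    pt_l, pt_r: matched pixel coordinates (u, v) in the left / right image.
    K_l, K_r:   3x3 intrinsic matrices of the left / right camera.
    R, t:       extrinsic rotation / translation from left to right camera.
    """
    # Normalization plane: x = ((u - cx) / fx, (v - cy) / fy, 1).
    x_l = np.array([(pt_l[0] - K_l[0, 2]) / K_l[0, 0],
                    (pt_l[1] - K_l[1, 2]) / K_l[1, 1], 1.0])
    x_r = np.array([(pt_r[0] - K_r[0, 2]) / K_r[0, 0],
                    (pt_r[1] - K_r[1, 2]) / K_r[1, 1], 1.0])
    # s_l * (x_r^ R x_l) = -(x_r^ t): three equations, one unknown.
    a = skew(x_r) @ R @ x_l
    b = -skew(x_r) @ t
    s_l = float(a @ b) / float(a @ a)   # least-squares solution
    return x_l * s_l                    # X_l = (x_lx*s_l, x_ly*s_l, s_l)
```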
(d) Combining the robot pose, the mean of the three-dimensional coordinates calculated in (c) is converted into the navigation map coordinate system.
The robot pose, i.e. the extrinsic transformation matrix from the (left) camera coordinate system to the world coordinate system, is denoted T_lcw; the left and right camera poses of the current frame are acquired synchronously with the previous-frame and next-frame images. The specific calculation is as follows:
First, the mean of the three-dimensional coordinates calculated in (c) is computed:
X̄_l = (1/N) * Σ_{i=1}^{N} X_l^i.
The three-dimensional coordinates of the offender in the navigation map coordinate system are then
(x_m, y_m, z_m, 1)^T = T_lcw * (x̄_l, ȳ_l, z̄_l, 1)^T,
where x_m, y_m, and z_m represent the three axes of the map coordinate system and T_lcw is the pose of the left (first) camera. Since the navigation map is a two-dimensional map, (x_m, y_m) is the final return value of this step.
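Step (d) could then look as follows, assuming T_lcw is given as a 4x4 homogeneous camera-to-map transform; the function name is illustrative.

```python
import numpy as np

def target_map_position(points_cam, T_lcw):
    """Mean of the triangulated 3-D points (left camera frame), converted
    into the navigation map frame; returns the 2-D coordinates (x_m, y_m).

    points_cam: (N, 3) array of feature point coordinates from step (c).
    T_lcw:      4x4 homogeneous transform from the left camera to the map.
    """
    mean_cam = points_cam.mean(axis=0)                 # (x̄_l, ȳ_l, z̄_l)
    x_m, y_m, _z_m, _ = T_lcw @ np.append(mean_cam, 1.0)
    return x_m, y_m
```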
(e) The offender's image, his coordinates in the navigation map (i.e. the (x_m, y_m) calculated in the previous step), and the robot's coordinates in the navigation map are pushed to the manager's mobile phone client.
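Step (e) might assemble a push message as sketched below. The patent specifies only what is pushed (the offender's image and the two sets of map coordinates), not any transport or format, so the JSON layout and every field name here are pure assumptions for illustration.

```python
import json
import time

def build_alert_payload(image_file, target_xy, robot_xy):
    """Assemble one push message for the manager's mobile client. The patent
    specifies only WHAT is pushed (offender image, offender map coordinates,
    robot map coordinates); this JSON layout and all field names are assumed."""
    return json.dumps({
        "event": "standing_public_urination",
        "timestamp": time.time(),
        "offender_image": image_file,
        "offender_map_xy": list(target_xy),  # (x_m, y_m) from step (d)
        "robot_map_xy": list(robot_xy),      # from the robot navigation module
    })

print(build_alert_payload("offender_0001.jpg", (12.4, 7.9), (11.8, 6.5)))
```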
In summary, the method for detecting the preset behavior of the embodiment of the present invention avoids the false and missed detections caused by unfavorable shooting angles and excessive distance. The invention discards the idea of keypoint detection followed by pose analysis and instead treats an adult male standing and urinating in public as a single "target", detected with the deep-learning target detection network YOLOv5. Treating the behavior as a single "behavior target" alone, however, cannot solve the false and missed detections caused by shooting angle and distance. The invention therefore modifies the YOLOv5 used to detect these "behavior targets" to have dual detection heads: one head detects close-range, multi-angle targets in the image, i.e. close-range, multi-angle "standing urination targets" and "wrong-class targets" (standing with both hands hanging down, standing with both hands clenched, standing with both hands in trouser pockets, and the like, at close range and multiple angles); the other head detects long-range, multi-angle targets, i.e. long-range, multi-angle "standing urination targets" and "wrong-class targets" (the same postures at long range and multiple angles).
Meanwhile, to solve the alarm timeliness problem, the invention uses a deep-learning-based pedestrian re-identification algorithm to track the detected adult male, uses the two differently angled cameras carried by the robot (understandable as a left-right binocular pair, but with non-parallel imaging planes; the left image is used for detecting and tracking the offender) to calculate the offender's position in the navigation map, and then transmits that position over a 5G network to the manager's mobile phone client, on which the (two-dimensional) navigation map is deployed. Even if the offender leaves the place where he urinated, the manager can easily find him, because the robot keeps tracking him and reports his position in the navigation map in real time.
According to a second aspect of the present invention, there is provided a detection system for the preset behavior, applied to the detection method of the foregoing embodiments. The detection system includes an acquisition module, an input module, a tracking module, a first calculation module, a second calculation module, a third calculation module, and a fourth calculation module. The acquisition module is used to acquire the previous and subsequent synchronized frames. The input module is used to input the acquired previous synchronized frame into the dual-detection-head target detection network and detect a preset-behavior target box. The tracking module is used to execute a deep-learning-based pedestrian re-identification algorithm for tracking according to the detected preset-behavior target box, so as to detect a target agent.
The first calculation module is used to crop the tracked target agent's target box out of the subsequent synchronized frame acquired by the first camera device, execute an ORB feature detection algorithm, and calculate the feature points of the target box. The second calculation module finds, in the subsequent synchronized frame acquired by the second camera device and using a pyramid optical flow tracking method, feature points matching those calculated from the first camera device. The third calculation module calculates the depths of all feature points of the first camera device in the subsequent-frame coordinate system according to the matched feature points, the intrinsic parameters of the first and second camera devices, and triangulation. The fourth calculation module calculates the three-dimensional coordinates of the feature points in the first camera device's subsequent-frame coordinate system from their depths, converts them into the navigation map coordinate system, obtains the tracked target agent's position in the navigation map, and sends push information and warning information.
According to a third aspect of the present invention, there is also provided an electronic apparatus comprising: a processor 201 and a memory 202, wherein computer program instructions are stored in the memory 202, wherein the computer program instructions, when executed by the processor 201, cause the processor 201 to perform the steps of the detection method of preset behaviour in the above-described embodiments.
Further, as shown in fig. 3, the electronic device further comprises a network interface 203, an input device 204, a hard disk 205, and a display device 206.
The interfaces and devices described above may be interconnected by a bus architecture. The bus architecture may include any number of interconnected buses and bridges, linking together various circuits of one or more processors, represented by the processor 201, and of one or more memories, represented by the memory 202. The bus architecture may also connect various other circuits, such as peripheral devices, voltage regulators, and power management circuits. It is understood that the bus architecture enables communication among these components. In addition to a data bus, it includes a power bus, a control bus, and a status signal bus, all of which are well known in the art and therefore not described in detail herein.
The network interface 203 may be connected to a network (e.g., the internet, a local area network, etc.), and may obtain relevant data from the network and store the relevant data in the hard disk 205.
Input device 204 may receive various instructions entered by an operator and send to processor 201 for execution. The input device 204 may include a keyboard or pointing device (e.g., a mouse, a trackball, a touch pad, or a touch screen, among others).
A display device 206 may display results obtained by the execution of instructions by the processor 201.
The memory 202 is used for storing programs and data necessary for the operation of the operating system 2021, and data such as intermediate results in the calculation process of the processor 201.
It will be appreciated that the memory 202 in embodiments of the invention can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The nonvolatile memory may be Read Only Memory (ROM), programmable Read Only Memory (PROM), erasable Programmable Read Only Memory (EPROM), electrically Erasable Programmable Read Only Memory (EEPROM), or flash memory, among others. Volatile memory can be Random Access Memory (RAM), which acts as external cache memory. The memory 202 of the apparatus and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory 202.
In some implementations, the memory 202 stores the following elements, executable modules or data structures, or a subset thereof, or an extended set thereof: an operating system 2021 and application programs 2022.
The operating system 2021 contains various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application programs 2022 include various application programs 2022, such as a Browser (Browser), for implementing various application services. The program implementing the method of the embodiment of the present invention may be contained in the application program 2022.
The above-described processor 201 executes the steps of the detection method of the preset behavior according to the above-described embodiment when calling and executing the application program 2022 and data stored in the memory 202, specifically, the program or instructions stored in the application program 2022.
The method disclosed in the above embodiment of the present invention may be applied to the processor 201 or implemented by the processor 201. The processor 201 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 201 or by instructions in the form of software. The processor 201 may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention. A general purpose processor may be a microprocessor or the processor 201 may be any conventional processor 201 or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in the execution of a hardware decoding processor, or in the execution of a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in the memory 202, and the processor 201 reads the information in the memory 202 and, in combination with its hardware, performs the steps of the method described above.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions of the application, or a combination thereof.
For a software implementation, the techniques herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions herein. The software codes may be stored in the memory 202 and executed by the processor 201. The memory 202 may be implemented within the processor 201 or external to the processor 201.
Specifically, the processor 201 is further configured to read the computer program and execute the steps of the method for detecting the preset behavior described above.
In a fourth aspect of the present invention, there is also provided a computer-readable storage medium storing a computer program, which when executed by the processor 201, causes the processor 201 to perform the steps of the preset behavior detection method of the above embodiment.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may be physically included separately, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
While certain specific embodiments of the invention have been described in detail by way of example, it will be appreciated by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the invention. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the invention. The scope of the invention is defined by the appended claims.

Claims (10)

1. A method for detecting a preset behavior, comprising:
Based on a first camera device and a second camera device on the robot, acquiring a previous frame synchronous image through the first camera device, inputting it into a target detection network with double detection heads, and detecting a preset behavior target frame;
Acquiring a later frame of synchronous image through the first camera device, enabling the robot to enter a tracking mode, and executing a pedestrian re-identification algorithm based on deep learning to track according to the detected preset behavior target frame so as to detect a target agent;
Matting out the tracked target agent target frame from a later frame of synchronous image acquired by the first camera device, executing an ORB feature detection algorithm on the target agent target frame, and calculating feature points of the target agent target frame;
Using a pyramid optical flow tracking method to find out feature points matched with the feature points of the target frame of the target agent calculated according to the first camera device from a later frame of synchronous image acquired by the second camera device;
According to the calculated mutually matched feature points, the intrinsic parameters of the first image pickup device and the second image pickup device, and a triangulation method, calculating the depth of all feature points of the first image pickup device under the later frame coordinate system;
Calculating the three-dimensional coordinates of the feature points of the first camera device under its later frame coordinate system according to the depths of the feature points, converting the three-dimensional coordinates into a navigation map coordinate system, obtaining the tracked position of the target agent in the navigation map, and sending push information and warning information; wherein the preset behavior refers to the behavior of an adult male urinating anywhere, and the target agent refers to the person urinating anywhere.
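For illustration only (not part of the claims), the following minimal Python/OpenCV sketch shows one way the ORB-plus-pyramid-optical-flow step of claim 1 could be realized; the function name, box format, and parameter values are assumptions made for this example, not taken from the patent.

```python
import cv2
import numpy as np

def match_target_features(left_img, right_img, box):
    """Detect ORB feature points inside the tracked target-agent box of
    the first camera's image and track them into the second camera's
    synchronized image with pyramid Lucas-Kanade optical flow.
    left_img / right_img: 8-bit grayscale frames; box = (x, y, w, h).
    """
    x, y, w, h = box
    roi = left_img[y:y + h, x:x + w]

    # ORB feature points of the target-agent target frame.
    orb = cv2.ORB_create(nfeatures=500)
    keypoints = orb.detect(roi, None)
    if not keypoints:
        return np.empty((0, 2), np.float32), np.empty((0, 2), np.float32)

    # Shift ROI keypoints back into full-image pixel coordinates.
    pts_l = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)
    pts_l += np.float32([x, y])

    # Pyramid optical flow from the first to the second camera's image.
    pts_r, status, _err = cv2.calcOpticalFlowPyrLK(
        left_img, right_img, pts_l, None, winSize=(21, 21), maxLevel=3)

    ok = status.ravel() == 1
    return pts_l.reshape(-1, 2)[ok], pts_r.reshape(-1, 2)[ok]
```

The matched point pairs returned here are the inputs to the triangulation of claim 6 below.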
2. The method for detecting a preset behavior according to claim 1, wherein the first image capturing device and the second image capturing device are provided on a head of the robot, and the step of acquiring a previous frame synchronization image by the first image capturing device based on the first image capturing device and the second image capturing device on the robot comprises:
setting a robot patrol route;
and controlling the robot patrol according to the set patrol route.
3. The method according to claim 1, wherein the interval time between the previous frame synchronization image and the next frame synchronization image is 400–600 ms.
4. The method for detecting preset behavior according to claim 1, wherein the step of performing a pedestrian re-recognition algorithm based on deep learning for tracking includes:
Preprocessing a later frame of synchronous image acquired by the first camera device;
Inputting the preprocessed later frame of synchronous image into a YOLOV single-classification human body target detection network, and detecting all human bodies appearing in the image;
Matting out, from the previous frame synchronous image acquired by the first camera device, the image within the preset behavior target frame detected by the preset behavior target frame detection module;
Preprocessing the matted-out image and inputting it into a pedestrian re-identification tracking algorithm, and outputting one 512-dimensional feature vector;
Matting out all detected human bodies from the later frame of synchronous image, preprocessing them (dividing each pixel value by 255), inputting them into the pedestrian re-recognition tracking algorithm of the module, and outputting N 512-dimensional feature vectors;
And respectively calculating the Euclidean distances between the N 512-dimensional feature vectors and the one 512-dimensional feature vector output by the pedestrian re-recognition tracking algorithm, and taking the human body with the smallest distance as the tracked target agent.
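For illustration only (not part of the claims), a minimal Python sketch of the claim-4 matching step, together with the divide-by-255 preprocessing described in claim 5 below; the helper names are assumptions of this example, and the smallest-Euclidean-distance criterion follows the claim text.

```python
import numpy as np

def preprocess(img_u8):
    # Claim-5 preprocessing: divide every pixel value by 255.
    return img_u8.astype(np.float32) / 255.0

def pick_target_agent(query_vec, gallery_vecs):
    """query_vec: the single 512-d vector of the preset-behavior box from
    the previous frame; gallery_vecs: (N, 512) vectors of all human
    bodies detected in the later frame. Returns the gallery index whose
    Euclidean distance to the query is smallest, taken here as the
    tracked target agent, together with all N distances."""
    dists = np.linalg.norm(gallery_vecs - query_vec, axis=1)
    return int(np.argmin(dists)), dists
```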
5. The method according to claim 4, wherein, when preprocessing the later frame synchronization image acquired by the first image capturing device, each pixel of the image is divided by 255.
6. The method for detecting a preset behavior according to claim 5, wherein the pixel coordinates of the i-th feature point of the first imaging device are denoted $p_l = (p_{lx}, p_{ly})$, the pixel coordinates of the matching i-th feature point of the second imaging device are denoted $p_r = (p_{rx}, p_{ry})$, the intrinsic parameters of the first image capturing device are $(f_{lx}, f_{ly}, c_{lx}, c_{ly})$, and those of the second image capturing device are $(f_{rx}, f_{ry}, c_{rx}, c_{ry})$; and calculating the depth of the feature points of the first image capturing device by triangulation comprises:
converting the feature point pixel coordinates to the camera normalization planes,
$$x_{lx} = \frac{p_{lx} - c_{lx}}{f_{lx}},\quad x_{ly} = \frac{p_{ly} - c_{ly}}{f_{ly}},\quad x_{rx} = \frac{p_{rx} - c_{rx}}{f_{rx}},\quad x_{ry} = \frac{p_{ry} - c_{ry}}{f_{ry}},$$
wherein $x_l = (x_{lx}, x_{ly}, 1)$ and $x_r = (x_{rx}, x_{ry}, 1)$;
wherein $x_l, x_r$ are the coordinates of the matched feature point of the later frame synchronous image on the normalization planes, which are substituted into the triangulation formula
$$s_l\, x_r^{\wedge} R\, x_l = -\,x_r^{\wedge} t,$$
wherein $x_r^{\wedge}$ is the antisymmetric matrix of $x_r$;
$R, t$ represent the extrinsic transformation matrix from the first camera to the second camera;
and $s_l$ is the depth, to be calculated, of the feature point of the first image capturing device under the later frame coordinate system.
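For illustration only (not part of the claims), a NumPy sketch of the claim-6 triangulation; variable names mirror the claim, while the scalar least-squares solve of the overdetermined system is an assumption of this example.

```python
import numpy as np

def skew(v):
    # Antisymmetric matrix v^ such that skew(v) @ u == np.cross(v, u).
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def triangulate_depth(p_l, p_r, K_l, K_r, R, t):
    """Recover the depth s_l of one matched feature point in the first
    camera's later-frame coordinate system.
    p_l, p_r: matched pixel coordinates; K_l = (f_lx, f_ly, c_lx, c_ly),
    K_r likewise; R (3x3), t (3,): extrinsics from first to second camera.
    """
    f_lx, f_ly, c_lx, c_ly = K_l
    f_rx, f_ry, c_rx, c_ry = K_r

    # Project pixel coordinates onto each camera's normalization plane.
    x_l = np.array([(p_l[0] - c_lx) / f_lx, (p_l[1] - c_ly) / f_ly, 1.0])
    x_r = np.array([(p_r[0] - c_rx) / f_rx, (p_r[1] - c_ry) / f_ry, 1.0])

    # From s_r * x_r = s_l * R @ x_l + t, left-multiplying by x_r^
    # eliminates s_r:  s_l * (x_r^ @ R @ x_l) = -(x_r^ @ t).
    A = skew(x_r) @ R @ x_l
    b = -skew(x_r) @ t
    return float(A @ b) / float(A @ A)   # scalar least-squares solution
```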
7. The method for detecting a preset behavior according to claim 6, wherein the formula for calculating the mean of the three-dimensional coordinates is
$$\bar{P}_c = \frac{1}{N}\sum_{i=1}^{N} s_l^i\, x_l^i,$$
and the three-dimensional coordinates of the target agent in the navigation map coordinate system are
$$(x_m, y_m, z_m, 1)^{T} = T_{lcw}\,(\bar{P}_c, 1)^{T},$$
wherein $x_m$, $y_m$ and $z_m$ respectively denote the three coordinate axes of the map, and $T_{lcw}$ is the pose of the first camera in the map coordinate system.
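For illustration only (not part of the claims), a NumPy sketch of claim 7; reading $T_{lcw}$ as the 4x4 homogeneous pose of the first camera in the map (world) frame is an assumption of this example.

```python
import numpy as np

def target_position_in_map(points_cam, T_lcw):
    """points_cam: (N, 3) feature-point coordinates in the first camera's
    later-frame coordinate system (the s_l^i * x_l^i of claim 6);
    T_lcw: assumed 4x4 homogeneous pose of the first camera in the map
    (world) frame. Returns (x_m, y_m, z_m) in the navigation map."""
    p_mean = np.asarray(points_cam, dtype=np.float64).mean(axis=0)
    p_h = np.append(p_mean, 1.0)        # homogeneous coordinates
    x_m, y_m, z_m, _ = T_lcw @ p_h      # transform into the map frame
    return np.array([x_m, y_m, z_m])
```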
8. A detection system of preset behavior, applied to the detection method of preset behavior according to any one of claims 1 to 7, characterized in that the detection system comprises:
The acquisition module is used for acquiring a previous frame synchronous image and a next frame synchronous image;
The input module is used for inputting the acquired previous frame synchronous image into a target detection network of the double detection heads and detecting a preset behavior target frame;
The tracking module is used for executing a pedestrian re-recognition algorithm based on deep learning to track according to the detected preset behavior target frame so as to detect a target agent;
The first calculation module is used for matting out the tracked target agent target frame from the later frame of synchronous image acquired by the first camera device, executing an ORB feature detection algorithm and calculating feature points of the target agent target frame;
The second calculation module is used for finding out feature points matched with the feature points of the target frame of the target agent calculated according to the first image pickup device from a later frame of synchronous image obtained by a second image pickup device according to a pyramid optical flow tracking method;
The third calculation module is used for calculating, according to the calculated mutually matched feature points, the intrinsic parameters of the first image pickup device and the second image pickup device, and a triangulation method, the depth of all feature points of the first image pickup device under the later frame coordinate system;
The fourth calculation module is used for calculating the three-dimensional coordinates of the feature points of the first camera device under its later frame coordinate system according to the depths of the feature points, converting the three-dimensional coordinates into the navigation map coordinate system, obtaining the tracked position of the target agent in the navigation map, and sending push information and warning information; wherein the preset behavior refers to the behavior of an adult male urinating anywhere, and the target agent refers to the person urinating anywhere.
9. An electronic device, comprising: a processor and a memory in which computer program instructions are stored, wherein the computer program instructions, when executed by the processor, cause the processor to perform the steps of the method of detecting a preset behaviour as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the steps of the method for detecting a preset behaviour according to any one of claims 1 to 7.
CN202311432241.9A 2023-10-31 2023-10-31 Detection method and system for preset behaviors and electronic equipment Pending CN118155272A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311432241.9A CN118155272A (en) 2023-10-31 2023-10-31 Detection method and system for preset behaviors and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311432241.9A CN118155272A (en) 2023-10-31 2023-10-31 Detection method and system for preset behaviors and electronic equipment

Publications (1)

Publication Number Publication Date
CN118155272A true CN118155272A (en) 2024-06-07

Family

ID=91293735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311432241.9A Pending CN118155272A (en) 2023-10-31 2023-10-31 Detection method and system for preset behaviors and electronic equipment

Country Status (1)

Country Link
CN (1) CN118155272A (en)

Similar Documents

Publication Publication Date Title
WO2020114528A2 (en) Method, device, and system for tracking persons potentially infected at public places during epidemic
US20220180534A1 (en) Pedestrian tracking method, computing device, pedestrian tracking system and storage medium
WO2021082112A1 (en) Neural network training method, skeleton diagram construction method, and abnormal behavior monitoring method and system
US8995714B2 (en) Information creation device for estimating object position and information creation method and program for estimating object position
CN111045000A (en) Monitoring system and method
CN111368615B (en) Illegal building early warning method and device and electronic equipment
JP2023516502A (en) Systems and methods for image-based location determination and parking monitoring
CN101167086A (en) Human detection and tracking for security applications
CN111241913A (en) Method, device and system for detecting falling of personnel
CN113192646B (en) Target detection model construction method and device for monitoring distance between different targets
CN112634369A (en) Space and or graph model generation method and device, electronic equipment and storage medium
CN111079536B (en) Behavior analysis method, storage medium and device based on human body key point time sequence
CN112541403B (en) Indoor personnel falling detection method by utilizing infrared camera
CN112613668A (en) Scenic spot dangerous area management and control method based on artificial intelligence
CN114677633A (en) Multi-component feature fusion-based pedestrian detection multi-target tracking system and method
CN107610224A (en) It is a kind of that algorithm is represented based on the Weakly supervised 3D automotive subjects class with clear and definite occlusion modeling
CN113505643B (en) Method and related device for detecting violation target
CN114220063A (en) Target detection method and device
CN112150508A (en) Target tracking method, device and related equipment
CN111144260A (en) Detection method, device and system of crossing gate
CN116912517A (en) Method and device for detecting camera view field boundary
CN118155272A (en) Detection method and system for preset behaviors and electronic equipment
Tang Development of a multiple-camera tracking system for accurate traffic performance measurements at intersections
WO2022107548A1 (en) Three-dimensional skeleton detection method and three-dimensional skeleton detection device
CN113824880B (en) Vehicle tracking method based on target detection and UWB positioning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination