CN112950668A - Intelligent monitoring method and system based on mold position measurement - Google Patents


Info

Publication number
CN112950668A
CN112950668A (application number CN202110217177.7A)
Authority
CN
China
Prior art keywords
joint point
color image
human body
image
joint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110217177.7A
Other languages
Chinese (zh)
Inventor
陈小忠
高桢
姚东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beidou Jingtrace Technology Shandong Co ltd
Original Assignee
Beidou Jingtrace Technology Shandong Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beidou Jingtrace Technology Shandong Co ltd filed Critical Beidou Jingtrace Technology Shandong Co ltd
Priority to CN202110217177.7A priority Critical patent/CN112950668A/en
Publication of CN112950668A publication Critical patent/CN112950668A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/207: Analysis of motion for motion estimation over a hierarchy of resolutions
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10024: Color image
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30196: Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an intelligent monitoring method and system based on mold position measurement. The method comprises the following steps: acquiring a color image and a depth image of a monitored scene and aligning their pixel positions; obtaining, from the color image, the pixel coordinates of each set joint point of a target person by means of a human body posture estimation algorithm; mapping the joint point coordinates to the depth image to obtain the corresponding depth values; obtaining, through coordinate conversion based on the depth values, the actual three-dimensional space coordinates of the joint points in the scene; and determining the pixel range of the target person, determining the person's motion trajectory, and performing intrusion detection. The invention no longer treats the target person as a mass point, but as an entity consisting of multiple spatial points: the three-dimensional space coordinates of each joint point of the human body are obtained through human body posture estimation and spatial coordinate conversion, yielding a more refined positioning result.

Description

Intelligent monitoring method and system based on mold position measurement
Technical Field
The invention relates to the technical field of intelligent monitoring, in particular to an intelligent monitoring method and system based on mold position measurement.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
The key to intelligent monitoring of personnel intrusion into restricted areas is detecting and positioning the personnel. In the prior art, the target person is generally regarded as a mass point, and the coordinates of the centroid (center of gravity) are taken as the person's position. However, in some restricted-area intrusion monitoring scenarios, positioning the target as a mass point cannot meet the required detection accuracy. For example, around exhibits or areas with radioactive sources, it is required that no part of a person's body crosses the boundary. With traditional mass-point detection, if a person's hand intrudes into the restricted area while the trunk remains outside, the centroid is still outside the area and no intrusion is reported. Although some current infrared technologies can detect the intrusion of arbitrary targets, they cannot discriminate what the target is, which easily leads to a high false-alarm rate.
Disclosure of Invention
In order to solve the above problems, the present invention provides an intelligent monitoring method and system based on mold position measurement, which do not regard the target as a mass point but calculate the spatial positions of multiple points on the target's surface. Through mold position measurement, the spatial positions of the points on the surface of the target person can be obtained, the person's motion state and spatial position can be effectively monitored, the granularity of personnel positioning is improved, and the false-alarm rate is reduced.
In some embodiments, the following technical scheme is adopted:
an intelligent monitoring method based on mold position measurement comprises the following steps:
acquiring a color image and a depth image of a monitoring scene, and realizing the alignment of pixel positions of the color image and the depth image;
based on the color image, obtaining pixel coordinates of each set joint point of the target person in the color image by utilizing a human body posture estimation algorithm;
mapping the joint point coordinates to a depth image to obtain a corresponding depth value;
obtaining the actual space three-dimensional coordinates of the joint points in the scene through coordinate conversion based on the depth values;
and determining the pixel range of the target personnel, determining the motion track of the target personnel, and carrying out intrusion detection on the personnel.
In other embodiments, the following technical solutions are adopted:
an intelligent monitoring system based on mold position measurement, comprising:
the image acquisition module is used for acquiring a color image and a depth image of a monitored scene and realizing the alignment of pixel positions of the color image and the depth image;
the joint point acquisition module is used for acquiring pixel coordinates of each set joint point of the target person in the color image by utilizing a human body posture estimation algorithm based on the color image;
the joint point mapping module is used for mapping the joint point coordinates to a depth image to obtain a corresponding depth value;
the coordinate conversion module is used for obtaining the actual space three-dimensional coordinates of the joint points in the scene through coordinate conversion based on the depth values;
and the monitoring analysis module is used for determining the pixel range of the target personnel, determining the motion track of the target personnel and carrying out intrusion detection on the personnel.
In other embodiments, the following technical solutions are adopted:
a terminal device comprising a processor and a computer-readable storage medium, the processor being configured to implement instructions; the computer-readable storage medium is used for storing a plurality of instructions, the instructions being adapted to be loaded by the processor to execute the above intelligent monitoring method based on mold position measurement.
In other embodiments, the following technical solutions are adopted:
a computer-readable storage medium, in which a plurality of instructions are stored, said instructions being adapted to be loaded by a processor of a terminal device to perform the above intelligent monitoring method based on mold position measurement.
Compared with the prior art, the invention has the beneficial effects that:
(1) The present invention no longer treats the target person as a mass point, but as an entity consisting of multiple spatial points; the three-dimensional space coordinates of each joint point of the human body are obtained through human body posture estimation and spatial coordinate conversion. Compared with traditional mass-point positioning, a more refined positioning result can be obtained.
(2) Mold position measurement is realized through visual detection. Unlike current position measurement methods that rely on a GPS receiver, a UWB tag or a Bluetooth tag, it requires no active participation from the user and enables imperceptible, device-free position measurement.
(3) The invention provides target-person trajectory tracking: each target person in the scene is assigned a unique, fixed ID during monitoring, so different targets can be distinguished by their IDs and the movement of different persons can be monitored in a targeted manner.
Additional features and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is a flow chart of the intelligent monitoring method based on mold position measurement according to an embodiment of the present invention;
FIG. 2 is a schematic view of a human joint according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a coordinate transformation process in an embodiment of the invention;
FIG. 4 is a schematic diagram of a detection plane in an embodiment of the invention.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Example one
In one or more embodiments, an intelligent monitoring method based on mold position measurement is disclosed, referring to fig. 1, comprising the following steps:
step (1): acquiring a color image and a depth image of a monitoring scene, and realizing the alignment of pixel positions of the color image and the depth image;
specifically, a color RGB image and a depth image of a scene are simultaneously acquired through an RGB-D camera, and the alignment of pixel positions of the color image and the depth image is realized, namely the information of four channels of RGB-D at each pixel position can be obtained.
Step (2): based on the color image, obtaining pixel coordinates of each set joint point of the target person in the color image by utilizing a human body posture estimation algorithm;
specifically, each joint point of the target person is obtained by using the acquired color RGB image based on a human body posture estimation algorithm.
In this embodiment, 18 key points are used to represent human body joint points, as shown in fig. 2;
the specific method for realizing human body posture estimation based on the RGB image in the embodiment comprises the following steps:
the method of deep learning is adopted, firstly, the collected color images are input into the first 10 convolutional layers of the network structure of VGG19 to obtain the characteristics of the input images, and then joint detection is carried out by using a confidence map. After the joint points are obtained, the joints are formed by using a 'position affinity domain' method, the optimal connection mode of every two joint points is determined by a Hungarian algorithm, the posture of the human body is formed, and the pixel coordinates of the joint points of the human body in a color image are output finally.
Of course, other methods may be used, such as the OpenPose method.
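The pairwise-connection step above can be viewed as an assignment problem: given part-affinity scores between the candidates of two adjacent joint types, pick the one-to-one matching with the highest total score. Production pipelines use the Hungarian algorithm for this; the brute-force sketch below (the `best_connection` helper is hypothetical) conveys the same idea for small candidate sets:

```python
from itertools import permutations

def best_connection(affinity):
    """Exhaustively find the one-to-one assignment of joint candidates
    (rows) to candidates of the adjacent joint type (columns) that
    maximizes the total part-affinity score. This is the role the
    Hungarian algorithm plays, shown by brute force for clarity;
    assumes a square score matrix."""
    n = len(affinity)
    best, best_score = None, float("-inf")
    for perm in permutations(range(n)):
        score = sum(affinity[i][j] for i, j in enumerate(perm))
        if score > best_score:
            best, best_score = list(enumerate(perm)), score
    return best, best_score
```

For n candidates this sketch is O(n!), which is why real implementations replace it with the O(n^3) Hungarian algorithm.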
And (3): mapping the joint point coordinates to a depth image to obtain a corresponding depth value;
and (4): obtaining the actual space three-dimensional coordinates of the joint points in the scene through coordinate conversion based on the depth values;
specifically, after the joint point pixel coordinates of the target person in the color image are obtained, the joint point coordinates are mapped to the depth image to obtain corresponding depth values. Based on the pixel coordinates and corresponding depth values of each joint point, the actual spatial three-dimensional coordinates of each joint point in the scene can be obtained by the following coordinate conversion steps. The specific implementation process is shown in fig. 3.
Firstly, based on the pixel coordinates of each joint point in the color image, the depth value corresponding to each joint point is determined from the corresponding depth image. The measured depth usually fluctuates, and at some pixel positions no depth value can be obtained because of target reflections and the like. Therefore, to obtain a stable depth value, a neighborhood window is used to estimate the depth at each pixel position; that is, for a target at coordinates (i, j), the depth value D(i, j) is calculated as:

$$D(i,j)=\frac{\sum_{x=i-\lfloor S/2\rfloor}^{i+\lfloor S/2\rfloor}\ \sum_{y=j-\lfloor S/2\rfloor}^{j+\lfloor S/2\rfloor}\psi(x,y)\,D(x,y)}{\sum_{x=i-\lfloor S/2\rfloor}^{i+\lfloor S/2\rfloor}\ \sum_{y=j-\lfloor S/2\rfloor}^{j+\lfloor S/2\rfloor}\psi(x,y)}$$

where D(x, y) is the depth value at coordinates (x, y) in the depth map, and S is the size of the neighborhood window, set to 15 in the present invention. ψ(x, y) is an indicator function:

$$\psi(x,y)=\begin{cases}1,&D(x,y)>0\\[2pt]0,&\text{otherwise}\end{cases}$$
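A minimal NumPy sketch of this neighborhood estimate, assuming (as in the sketches above) that a depth value of zero marks an invalid pixel and that (i, j) index row and column:

```python
import numpy as np

def stable_depth(depth, i, j, S=15):
    """Average the valid (non-zero) depth values in an S x S window
    centred on pixel (i, j); returns 0.0 when no valid neighbour exists.
    Window edges are clipped at the image border."""
    h = S // 2
    win = depth[max(0, i - h):i + h + 1, max(0, j - h):j + h + 1]
    valid = win > 0
    return float(win[valid].mean()) if valid.any() else 0.0
```

The averaging suppresses sensor noise at the joint pixel and tolerates isolated dropouts caused by reflective surfaces.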
secondly, solving the internal reference matrix of the camera by a checkerboard method. Collecting a plurality of checkerboard images shot at different angles, and obtaining an internal reference matrix of the camera by using a Zhang-Yongyou calibration method.
Thirdly, four groups of reference points are pre-selected in the field and accurately measured to obtain their actual three-dimensional space coordinates (X, Y, Z) and corresponding pixel coordinates (i, j).
Fourthly, the camera pose, i.e., the camera rotation matrix and translation matrix, is solved through a PnP algorithm based on the four groups of reference points.
Fifthly, the actual three-dimensional space coordinates of each joint point of the human body are solved from the pixel coordinates and depth value of each joint point, combined with the intrinsic and extrinsic matrices of the camera.
The specific solving formula is as follows:

$$Z_c\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f_x&0&u_0\\ 0&f_y&v_0\\ 0&0&1\end{bmatrix}\begin{bmatrix}R&T\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}$$

where (u, v) are the pixel coordinates of the joint point, $Z_c$ is the depth value of the joint point, the first matrix on the right is the intrinsic matrix of the camera, $[R\ \ T]$ is the extrinsic matrix (R and T being the rotation matrix and translation matrix, respectively), and $(X_w, Y_w, Z_w)$ are the actual three-dimensional space coordinates of the joint point.
Through the steps, the actual space three-dimensional position measurement of each target person joint point can be realized.
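The final step amounts to inverting the projection model: scale the normalized pixel ray by the depth to get camera-frame coordinates, then undo the extrinsic transform. The helper below is an illustrative sketch assuming the extrinsics map world to camera coordinates (X_c = R X_w + T); the function name is hypothetical:

```python
import numpy as np

def joint_world_coords(u, v, z_c, K, R, T):
    """Recover the world coordinates (Xw, Yw, Zw) of a joint from its
    pixel coordinates (u, v) and depth z_c, by inverting
    Zc * [u, v, 1]^T = K (R * Xw + T)."""
    # Camera-frame point: depth times the normalized pixel ray.
    p_cam = z_c * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Undo the extrinsic transform (solve R * Xw = p_cam - T).
    return np.linalg.solve(R, p_cam - np.asarray(T, dtype=float))
```

Applying this to all 18 joint pixel coordinates and their stabilized depth values yields the set of spatial points that represents the person.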
And (5): and determining the pixel range of the target personnel, determining the motion track of the target personnel, and carrying out intrusion detection on the personnel.
Specifically, after the joint points of the target person are obtained, the occupancy of the target in the color image can be determined, i.e., the pixel range of the person can be delimited by a rectangular frame.
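A minimal sketch of delimiting that pixel range: take the bounding rectangle of the detected joint pixel coordinates, padded by a hypothetical margin since the joints lie inside the body outline:

```python
def person_pixel_range(joints, margin=10):
    """Bounding rectangle (u_min, v_min, u_max, v_max) around the
    detected joint pixel coordinates, padded by `margin` pixels
    (an illustrative value) to cover the body outline."""
    us = [u for u, v in joints]
    vs = [v for u, v in joints]
    return (min(us) - margin, min(vs) - margin,
            max(us) + margin, max(vs) + margin)
```

The resulting rectangle is what a tracker such as DeepSORT consumes as the per-frame detection box.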
For each identified person, the motion trajectory of the target can be obtained through a target tracking algorithm. In this embodiment, DeepSORT is used as the target tracking algorithm to track personnel and keep a unique ID for each target person in the scene.
Of course, other target tracking algorithms may be selected by those skilled in the art, such as traditional feature-based methods (e.g., the Boosting tracker and the KCF tracker) or recent deep-learning-based end-to-end multi-target tracking methods such as SORT and DeepSORT.
For personnel intrusion detection, some mature methods already exist, such as UWB-based electronic fences and infrared-based intrusion detection. However, these methods have limitations: a UWB-based electronic fence still treats the target as a mass point, and infrared-based intrusion detection cannot effectively discriminate the target.
In this embodiment, an RGB-D camera is used, so semantic information about the target person is obtained at the same time as the position measurement, giving better target specificity. For personnel intrusion detection in this embodiment, two detection planes are arranged around the restricted area and graded by severity as a reminder detection plane and an alarm detection plane, as shown in fig. 4.
The reminder detection plane is placed on the outermost side of the no-entry area; the alarm detection plane is placed a certain distance further inward, while still keeping some distance from the restricted no-entry area itself.
When a human body joint point is detected crossing the reminder detection plane, an out-of-range reminder is issued; when a human body joint point is detected crossing the alarm detection plane, an alarm is immediately sent to the administrator, a snapshot image of the scene is captured and stored, and the person's historical motion trajectory is saved for subsequent analysis.
The distance between the two detection surfaces is set by combining the intrusion detection requirement of the application scene and the effective distance of the depth camera.
The depth camera used in this embodiment has an effective range of 20 m. For scenes with the strictest intrusion detection requirements, such as military exclusion zones and nuclear power stations, the reminder detection plane should be at least 10 m outside the forbidden area and the alarm detection plane at least 5 m outside it, to give the administrator time to respond. For scenes with stricter requirements, such as banks and museums, the reminder detection plane should be at least 5 m outside the forbidden area and the alarm detection plane at least 2 m outside it. For other applications with general intrusion detection requirements, the reminder detection plane can be set 3 m and the alarm detection plane 1 m outside the forbidden area, as summarized in Table 1.
TABLE 1  Monitoring distance design requirements

Detection requirement                  Strict    Stricter    General
Reminder detection plane distance      10 m      5 m         3 m
Alarm detection plane distance         5 m       2 m         1 m
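The two-plane grading can be sketched as a threshold test on the joints' 3D coordinates. Purely for illustration, the restricted area is assumed to lie beyond a plane of constant x (the real geometry and plane placement come from the deployment); the function name and the distances passed in are hypothetical:

```python
def intrusion_level(joint_positions, boundary_x, remind_dist, alarm_dist):
    """Classify intrusion severity from joint 3D positions (x, y, z).
    The restricted area is assumed to lie beyond x = boundary_x; the
    reminder plane sits remind_dist before it and the alarm plane
    alarm_dist before it (alarm_dist < remind_dist). Because every
    joint is checked, a single hand crossing a plane triggers the
    response even when the body's centroid is still outside."""
    closest = max(x for x, y, z in joint_positions)
    if closest >= boundary_x - alarm_dist:
        return "alarm"
    if closest >= boundary_x - remind_dist:
        return "remind"
    return "clear"
```

With the "General" distances of Table 1 (remind 3 m, alarm 1 m), a joint 2 m from the boundary would already trigger a reminder.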
Example two
In one or more embodiments, disclosed is an intelligent monitoring system based on mold position measurement, comprising:
the image acquisition module is used for acquiring a color image and a depth image of a monitored scene and realizing the alignment of pixel positions of the color image and the depth image;
the joint point acquisition module is used for acquiring pixel coordinates of each set joint point of the target person in the color image by utilizing a human body posture estimation algorithm based on the color image;
the joint point mapping module is used for mapping the joint point coordinates to a depth image to obtain a corresponding depth value;
the coordinate conversion module is used for obtaining the actual space three-dimensional coordinates of the joint points in the scene through coordinate conversion based on the depth values;
and the monitoring analysis module is used for determining the pixel range of the target personnel, determining the motion track of the target personnel and carrying out intrusion detection on the personnel.
As a more specific implementation, this embodiment adopts an RGB-D camera as the scene information collecting device; of course, scene collection can also be realized in other ways, such as a color camera combined with a lidar or a millimeter-wave radar.
The data communication device may be a wired transmission device such as a USB cable or a network cable, or a wireless transmission device such as Wi-Fi, Bluetooth, 4G or 5G.
The data processing is performed by a processor, which includes processing units such as an ARM, an FPGA or a GPU, and can complete the computation and command execution involved in the invention.
Data can be stored in RAM, ROM, a hard disk or a USB flash drive, covering the collected scene information as well as the data and results that need to be stored during computation.
Scene changes can be displayed in real time on a liquid crystal display screen, an LCD monitor, an LED display, or the like.
EXAMPLE III
In one or more embodiments, a terminal device is disclosed, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the intelligent monitoring method based on mold position measurement of the first embodiment. For brevity, it is not described in detail here.
It should be understood that in this embodiment, the processor may be a central processing unit CPU, and the processor may also be other general purpose processors, digital signal processors DSP, application specific integrated circuits ASIC, off-the-shelf programmable gate arrays FPGA or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and so on. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory may include both read-only memory and random access memory, and may provide instructions and data to the processor, and a portion of the memory may also include non-volatile random access memory. For example, the memory may also store device type information.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software.
The intelligent monitoring method based on mold position measurement in the first embodiment may be directly implemented by a hardware processor, or by a combination of hardware and software modules in the processor. The software modules may be located in RAM, flash memory, ROM, PROM or EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, details are not described here.
Those of ordinary skill in the art will appreciate that the various illustrative elements, i.e., algorithm steps, described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Example four
In one or more embodiments, a computer-readable storage medium is disclosed, in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor of a terminal device to execute the intelligent monitoring method based on mold position measurement described in the first embodiment.
Although the embodiments of the present invention have been described above with reference to the accompanying drawings, they do not limit the scope of the invention; those skilled in the art should understand that various modifications and variations made on the basis of the technical solution of the invention without inventive effort still fall within its scope.

Claims (9)

1. An intelligent monitoring method based on mold position measurement, characterized by comprising the following steps:
acquiring a color image and a depth image of a monitoring scene, and realizing the alignment of pixel positions of the color image and the depth image;
based on the color image, obtaining pixel coordinates of each set joint point of the target person in the color image by utilizing a human body posture estimation algorithm;
mapping the joint point coordinates to a depth image to obtain a corresponding depth value;
obtaining the actual space three-dimensional coordinates of the joint points in the scene through coordinate conversion based on the depth values;
and determining the pixel range of the target personnel, determining the motion track of the target personnel, and carrying out intrusion detection on the personnel.
2. The intelligent monitoring method based on mold position measurement as claimed in claim 1, wherein the pixel coordinates of each set joint point of the target person in the color image are obtained, based on the color image, by using a human body posture estimation algorithm, the specific process comprising:
inputting the obtained color image into a trained neural network model to extract features of the input image, and then performing joint point detection using confidence maps; after the joint points are obtained, assembling each limb using the part affinity fields method, determining the optimal pairwise connection between joint points through the Hungarian algorithm to form the human posture, and finally outputting the pixel coordinates of each joint point of the human body in the color image.
3. The intelligent monitoring method based on mold position measurement as claimed in claim 1, wherein the actual three-dimensional space coordinates of the joint points in the scene are obtained through coordinate conversion based on the depth values, the specific process comprising:
estimating the depth value of each pixel position by adopting a neighborhood window;
solving an internal reference matrix of the camera by a checkerboard method;
solving the camera pose, i.e., the camera rotation matrix and translation matrix, through a PnP algorithm based on four groups of pre-selected reference points;
and solving the actual space three-dimensional coordinates of each joint point of the human body based on the pixel coordinates and the depth values of each joint point and by combining the internal and external parameter matrixes of the camera.
4. The intelligent monitoring method based on mold position measurement as claimed in claim 3, wherein the actual three-dimensional space coordinates of each joint point of the human body are specifically solved by:

$$Z_c\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f_x&0&u_0\\ 0&f_y&v_0\\ 0&0&1\end{bmatrix}\begin{bmatrix}R&T\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}$$

wherein (u, v) are the pixel coordinates of the joint point, $Z_c$ is the depth value of the joint point, the first matrix on the right is the intrinsic matrix of the camera, $[R\ \ T]$ is the extrinsic matrix of the camera, R and T being the rotation matrix and translation matrix respectively, and $(X_w, Y_w, Z_w)$ are the actual three-dimensional space coordinates of the joint point.
5. The intelligent monitoring method based on mold position measurement as claimed in claim 1, wherein DeepSORT is adopted as the target tracking algorithm to track personnel and keep a unique ID for each target person in the scene.
6. The intelligent monitoring method based on mold position measurement as claimed in claim 1, wherein the specific process of performing intrusion detection on personnel comprises: setting a reminder detection plane and an alarm detection plane at set intervals;
the reminder detection plane is configured to trigger an out-of-range reminder when a human body joint point is detected crossing it;
the alarm detection plane is configured to immediately alarm the administrator, capture an image, and save the historical motion trajectory of the target person when a human body joint point is detected crossing it.
7. An intelligent monitoring system based on mold position measurement, comprising:
the image acquisition module is used for acquiring a color image and a depth image of a monitored scene and realizing the alignment of pixel positions of the color image and the depth image;
the joint point acquisition module is used for acquiring pixel coordinates of each set joint point of the target person in the color image by utilizing a human body posture estimation algorithm based on the color image;
the joint point mapping module is used for mapping the joint point coordinates to a depth image to obtain a corresponding depth value;
the coordinate conversion module is used for obtaining the actual space three-dimensional coordinates of the joint points in the scene through coordinate conversion based on the depth values;
and the monitoring analysis module is used for determining the pixel range of the target personnel, determining the motion track of the target personnel and carrying out intrusion detection on the personnel.
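The modules above can be read as one per-frame pipeline: estimate joints on the color image, look up depths in the aligned depth image, and convert to 3-D. A minimal sketch follows, with the pose estimator and tracker injected as callables whose interfaces are assumptions, not the patent's; extrinsics are omitted, so the result is camera-frame coordinates:

```python
import numpy as np

class MonitorPipeline:
    def __init__(self, estimate_pose, track, K):
        self.estimate_pose = estimate_pose  # color image -> per-person joint lists [(u, v), ...]
        self.track = track                  # joint lists -> {person_id: [(u, v), ...]}
        self.K_inv = np.linalg.inv(K)       # camera intrinsics, inverted once

    def process(self, color, depth):
        """color and depth must be pixel-aligned images of the same scene."""
        people = self.track(self.estimate_pose(color))
        world = {}
        for pid, joints in people.items():
            pts = []
            for u, v in joints:
                z = float(depth[v, u])                              # aligned depth lookup
                pts.append(z * self.K_inv @ np.array([u, v, 1.0]))  # 3-D point, camera frame
            world[pid] = pts
        return world
```

Inverting K once in the constructor rather than per joint is the only optimization worth making here; everything else is a direct transcription of the module chain.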
8. A terminal device, comprising a processor and a computer-readable storage medium, wherein the processor is configured to implement instructions, and the computer-readable storage medium stores a plurality of instructions adapted to be loaded by the processor to perform the intelligent monitoring method based on pose measurement according to any one of claims 1-6.
9. A computer-readable storage medium having stored thereon a plurality of instructions, wherein the instructions are adapted to be loaded by a processor of a terminal device to perform the intelligent monitoring method based on pose measurement according to any one of claims 1-6.
CN202110217177.7A 2021-02-26 2021-02-26 Intelligent monitoring method and system based on mold position measurement Pending CN112950668A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110217177.7A CN112950668A (en) 2021-02-26 2021-02-26 Intelligent monitoring method and system based on mold position measurement

Publications (1)

Publication Number Publication Date
CN112950668A true CN112950668A (en) 2021-06-11

Family

ID=76246428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110217177.7A Pending CN112950668A (en) 2021-02-26 2021-02-26 Intelligent monitoring method and system based on mold position measurement

Country Status (1)

Country Link
CN (1) CN112950668A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113486777A (en) * 2021-07-02 2021-10-08 北京一维大成科技有限公司 Behavior analysis method and device for target object, electronic equipment and storage medium
CN114140832A (en) * 2022-01-30 2022-03-04 西安华创马科智能控制***有限公司 Method and device for detecting pedestrian boundary crossing risk in well, electronic equipment and storage medium
CN114842372A (en) * 2022-03-31 2022-08-02 北京的卢深视科技有限公司 Contact type foul detection method and device, electronic equipment and storage medium
WO2023273093A1 (en) * 2021-06-30 2023-01-05 奥比中光科技集团股份有限公司 Human body three-dimensional model acquisition method and apparatus, intelligent terminal, and storage medium
WO2023015938A1 (en) * 2021-08-13 2023-02-16 上海商汤智能科技有限公司 Three-dimensional point detection method and apparatus, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN112950668A (en) Intelligent monitoring method and system based on mold position measurement
CN110660186B (en) Method and device for identifying target object in video image based on radar signal
CN104680555B (en) Cross the border detection method and out-of-range monitoring system based on video monitoring
WO2022012158A1 (en) Target determination method and target determination device
CN106558181B (en) Fire monitoring method and apparatus
US12008794B2 (en) Systems and methods for intelligent video surveillance
WO2019129255A1 (en) Target tracking method and device
WO2021170030A1 (en) Method, device, and system for target tracking
CN106559749B (en) Multi-target passive positioning method based on radio frequency tomography
CN103810717B (en) A kind of human body behavioral value method and device
CN105894529B (en) Parking space state detection method and apparatus and system
CN104935893A (en) Monitoring method and device
JP2004531842A (en) Method for surveillance and monitoring systems
JP2004534315A (en) Method and system for monitoring moving objects
CN109448326B (en) Geological disaster intelligent group defense monitoring system based on rapid image recognition
CN108234927A (en) Video frequency tracking method and system
CN105785989A (en) System for calibrating distributed network camera by use of travelling robot, and correlation methods
CN110929584A (en) Network training method, monitoring method, system, storage medium and computer equipment
Santo et al. Device-free and privacy preserving indoor positioning using infrared retro-reflection imaging
CN111666821A (en) Personnel gathering detection method, device and equipment
CN110703760A (en) Newly-increased suspicious object detection method for security inspection robot
CN112396804A (en) Point cloud-based data processing method, device, equipment and medium
CN113673319B (en) Abnormal gesture detection method, device, electronic device and storage medium
CN105469054A (en) Model construction method of normal behaviors and detection method of abnormal behaviors
CN105844756B (en) A kind of number method of counting based on radio frequency back-scattered signal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination