CN112002039A - Automatic control method for file cabinet door based on artificial intelligence and human body perception - Google Patents

Automatic control method for file cabinet door based on artificial intelligence and human body perception

Info

Publication number
CN112002039A
CN112002039A
Authority
CN
China
Prior art keywords
visitor
file cabinet
human body
key point
key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010852646.8A
Other languages
Chinese (zh)
Inventor
王冬井
黄莎莎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202010852646.8A
Publication of CN112002039A
Current legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 - Individual registration on entry or exit
    • G07C9/00174 - Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C9/00896 - Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys specially adapted for particular uses
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 - Individual registration on entry or exit
    • G07C9/20 - Individual registration on entry or exit involving the use of a pass
    • G07C9/22 - Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder
    • G07C9/25 - Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G07C9/257 - Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition electronically

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an automatic control method for a file cabinet door based on artificial intelligence and human body perception, comprising the following steps: the authority information of a visitor is first acquired; a camera then captures an image of the visitor, which is processed by a neural network to obtain a key point heat map; the visitor's stay time is derived from the key point heat map; when the stay time exceeds a preset value, the heat map is further processed to obtain the visitor's orientation; it is then judged whether a file cabinet lies within the visitor's orientation range, and if so, the cabinet door of that file cabinet is controlled automatically in combination with the visitor's authority information. The method uses neural network technology to control the file cabinet doors automatically and needs no sensor in each file cabinet, which reduces cost.

Description

Automatic control method for file cabinet door based on artificial intelligence and human body perception
Technical Field
The invention relates to the field of artificial intelligence and automatic control of file cabinets, in particular to an automatic control method for a cabinet door of a file cabinet based on artificial intelligence and human body perception.
Background
In the existing technology for automatically opening and closing a file cabinet, an infrared sensor detects the human body so that the cabinet opens automatically when a person approaches. However, this approach requires a separate infrared sensor in every cabinet body, and since an archive hall contains a large number of file cabinets it inevitably incurs a high cost.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides an automatic control method of a file cabinet door based on artificial intelligence and human body perception, which comprises the following steps:
step one, performing identity verification on a visitor to acquire authority information;
step two, capturing a first image with a camera and processing the captured first image with a human body detection network to obtain a human body bounding box;
step three, cropping the first image with the human body bounding box to obtain a second image, and extracting the two-foot key points from the second image with a key point detection network to obtain a key point heat map, each frame of which has an initial mark value of 0;
step four, processing the key point heat maps to obtain the visitor's stay time, computed as follows:
starting from the second frame of the key point heat map, thermal stacking is performed: the overlap area of the hot spots in the current frame and the previous frame is calculated, and when this area exceeds a preset first threshold the person is considered not to have moved between the two frames and the mark value of the current frame is changed from 0 to 1; computing this overlap over multiple frames yields a mark value sequence, the stay time is obtained from the number of 1s in the sequence and the sampling rate of the camera, and when the stay time exceeds a preset second threshold the person is judged to be staying; when the stay time exceeds the second threshold, n frames of key point heat maps have been obtained;
step five, when the visitor's stay time exceeds the second threshold, obtaining the visitor's orientation, specifically:
using projection transformation, the center point of the file cabinet and the visitor's two-foot key points in the nth frame of the key point heat map are projected onto a building information model (BIM) of the area where the file cabinet is located, built in advance, and the visitor's position and orientation are obtained from the left and right foot key points;
step six, detecting whether a file cabinet center point lies within the visitor's orientation range and, if so, controlling the cabinet door of that file cabinet automatically in combination with the visitor's authority information; the BIM is visualized using Web GIS technology.
Identity verification may be performed by means of a radio-frequency card, face recognition, or fingerprint recognition.
The training process of the human body detection network is as follows: a training data set is built from the collected first images with labels x, y, w and h, where x and y are the coordinates of the center point of the human body bounding box, w is its width and h is its height; the network is trained with a mean square error loss function.
The training process of the two-foot key point detection network is as follows: a training data set is built from the obtained second images, the labels are the left and right foot key points of the human body, and the network is trained with a mean square error loss function. The labels are annotated as follows: the key points comprise a left foot key point and a right foot key point, each key point corresponds to a separate channel, and after the pixel position of each key point is annotated in its channel, Gaussian blur is applied to generate the key point hot spot.
The visitor's position is the midpoint of the line connecting the visitor's left and right foot key points in the BIM.
The visitor's orientation is obtained as follows: in the BIM, a first vector is obtained by pointing from the right foot key point to the left foot key point, the first vector is rotated 90 degrees clockwise to obtain a second vector, and the direction of the second vector is the visitor's orientation.
The orientation range is a sector area obtained as follows: with the visitor's position as the center, r as the radius, and the second vector as the reference, offsets of θ to the left and to the right give the visitor's orientation range, where θ is an angle value.
Whether a file cabinet center point lies within the visitor's orientation range is detected as follows: the midpoint of the line connecting the left and right foot key points is connected to the file cabinet center point to obtain a straight line, the included angle between this straight line and the second vector is calculated, and the distance between the midpoint and the file cabinet center point is calculated; when the distance is less than r and the included angle is less than θ, a file cabinet center point is judged to lie within the visitor's orientation range.
The invention has the beneficial effects that:
1. The invention is based on computer vision detection; in addition to detecting the human body, it also incorporates functions such as a radio-frequency card and face recognition to identify visitors and bind their authority, so that confidential files can be protected.
2. By processing the images captured by the camera, the invention obtains information such as the stay time and orientation of personnel, so no sensor needs to be installed in each file cabinet, which reduces cost.
3. The invention opens and closes the file cabinet door automatically, which prevents files from being exposed when personnel forget to close the door; incorporating the person's orientation information effectively reduces the probability of misjudgment and makes the result more accurate.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a diagram of the projection results of the file cabinet center points and the two-foot key points.
Detailed Description
In order that those skilled in the art will better understand the present invention, the following further description is provided in conjunction with the embodiments and the accompanying drawings.
The main purpose of the invention is to capture images with a camera, detect the human body with a deep neural network (DNN), open the cabinet door automatically when a person approaches the cabinet body, and close it automatically when the person moves away.
Computer vision detection has the notable advantages of being non-contact, efficient and economical, and has broad application prospects in detection and management tasks. The invention therefore combines BIM with computer vision, which effectively improves supervision efficiency.
The implementation flow of the invention is shown in FIG. 1. A camera collects images of the file cabinet area, and a deep neural network processes the collected images to identify the human body target and obtain the two-foot key points of the human body. The stay time and movement track of the person are obtained by thermal stacking, and after the two-foot key points are projected onto the ground plane of the BIM, the person's orientation is obtained. When the person is close enough to a cabinet body and has faced it for a certain time, the file cabinet needs to be opened, so the door is unlocked and opened; when the person leaves, the file cabinet is closed. The invention also incorporates identity verification, so the authority for all file cabinets can be judged with a single verification.
Example:
The invention is further described below by taking a visitor in an archive hall as an example:
A BIM of the archive hall is built, containing camera perception information, the corresponding geographic position information, information on the current environment, and so on.
Verification devices such as radio-frequency card readers, face recognition and fingerprint recognition are arranged at the entrance or exit of the archive hall to acquire the authority information of visitors.
A first image of the file cabinet area is collected by a camera in the archive hall and fed into the human body detection network to obtain a human body bounding box. The human body detection network follows the CenterNet approach: a DNN regresses the center point of the bounding box together with its width and height. The training of the human body detection network is as follows:
The data set consists of indoor images that contain human bodies.
The data labels are x, y, w and h, where x and y are the coordinates of the center point of the human body bounding box, w is its width and h is its height. During annotation, partially occluded human bodies should still be annotated with a bounding box. The images and the labels x, y, w, h in the data set need to be normalized.
The data set and label data are used as the input of the human body detection network, and the human body detection encoder and decoder are trained end to end. The encoder extracts features from the input image: its input is the normalized image data and its output is a first feature map. The decoder up-samples the first feature map and generates the human body bounding box.
The loss function used is the mean square error loss function.
The human body bounding box is thus obtained.
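For illustration only, the following Python sketch shows how a CenterNet-style output (a center-point confidence map plus a per-cell width/height regression) can be decoded into bounding boxes. It is not the trained network described above; the array shapes, the score threshold, and all names are assumptions.

```python
import numpy as np

def decode_center_boxes(center_heatmap, wh_map, score_thresh=0.5):
    """Decode a CenterNet-style output into normalized boxes (x, y, w, h).

    center_heatmap: (H, W) array of center-point confidences.
    wh_map:         (H, W, 2) array holding a predicted (width, height) per cell.
    Shapes and the threshold are illustrative assumptions.
    """
    height, width = center_heatmap.shape
    boxes = []
    for row in range(height):
        for col in range(width):
            if center_heatmap[row, col] >= score_thresh:
                w, h = wh_map[row, col]
                boxes.append((col / width, row / height, float(w), float(h)))
    return boxes

# Toy usage: one confident center cell yields one box.
heat = np.zeros((4, 4)); heat[2, 1] = 0.9
wh = np.full((4, 4, 2), 0.3)
print(decode_center_boxes(heat, wh))   # [(0.25, 0.5, 0.3, 0.3)]
```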
When several people are present in the archive hall, multi-target tracking is performed by computing the IOU of bounding boxes in adjacent frames, which avoids detecting the same target repeatedly in different frames. The IOU is the intersection-over-union of two bounding boxes; when it exceeds 0.7, the two boxes are judged to belong to the same target. This part is well known and is not discussed in detail; a minimal sketch is given below.
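The sketch below illustrates the intersection-over-union comparison used for this frame-to-frame association; the corner-format boxes and the 0.7 threshold follow the description above, and everything else is illustrative.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Adjacent-frame boxes are treated as the same target when the IOU exceeds 0.7.
same_target = iou((0.10, 0.10, 0.40, 0.80), (0.12, 0.11, 0.42, 0.81)) > 0.7
print(same_target)   # True for these nearly identical boxes
```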
The first image is cropped with the human body bounding box to obtain a second image that contains only the human body, and the two-foot key point detection network extracts the two-foot key points from the second image to obtain a key point heat map. The training of the two-foot key point detection network is as follows:
the data set adopts cut human body images and should contain human body images with both feet shielded.
The labels are the left and right foot key points of the human body, annotated as follows: the key points are divided into two classes, a left foot key point and a right foot key point; each class corresponds to a separate channel; the pixel position of each key point is annotated in its channel, and Gaussian blur is then applied so that a hot spot forms at the annotated point. Because two classes of key points are used, the label image has two channels. Occluded key points of the human body should still be annotated. A minimal sketch of this label construction is given after the training description below.
The key point encoder and decoder are trained end to end with the data set and labels. The key point encoder extracts features from the input image: its input is the normalized image data and its output is a second feature map. The key point decoder up-samples the second feature map, performs further feature extraction, and finally generates the key point heat map.
The loss function used is the mean square error loss function.
The key point heat map is thus obtained.
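As an illustration of the label construction described above, the following sketch places one Gaussian hot spot per foot into a two-channel label image; the image size, the value of sigma, and the dictionary format of the annotated points are assumptions rather than part of the invention.

```python
import numpy as np

def keypoint_label(height, width, feet, sigma=2.0):
    """Two-channel label image: channel 0 = left foot, channel 1 = right foot.

    'feet' maps "left"/"right" to annotated pixel coordinates (x, y); each point
    is spread into a Gaussian hot spot, standing in for the Gaussian blur step.
    """
    label = np.zeros((2, height, width), dtype=np.float32)
    ys, xs = np.mgrid[0:height, 0:width]
    for channel, side in enumerate(("left", "right")):
        x0, y0 = feet[side]
        label[channel] = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
    return label

# Toy usage on a 64x48 crop with both feet annotated.
example = keypoint_label(64, 48, {"left": (20, 50), "right": (28, 50)})
print(example.shape)   # (2, 64, 48)
```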
The two-foot key points are chosen because the human body is three-dimensional in space and the projection converts an oblique view into a top view referenced to the ground. Other human body key points lie at some height above the ground and therefore always produce a large error after projection, possibly even causing the projected point to fall into another region and affect the judgment result; the two-foot key points minimize the error between the projected position and the actual position.
Each frame of the key point heat map is assigned an initial mark value of 0, and the key point heat maps are processed to obtain the visitor's stay time, orientation, and distance from the file cabinet, as follows:
The stay time of a person is computed as follows:
The overlap area of the hot spots in the current frame and the previous frame of the key point heat map is calculated; when this area exceeds a preset first threshold (0.6 in this embodiment), the person is considered not to have moved between the two frames, and the mark value of the current frame is changed from 0 to 1. Computing this overlap over multiple frames yields a mark value sequence, for example 0000111111; the stay time is obtained from the number of 1s in the sequence, and when the stay time exceeds a preset second threshold the person is judged to be staying. When the stay time exceeds the second threshold, n frames of key point heat maps have been obtained.
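A minimal sketch of this thermal stacking step is given below. It assumes that the overlapping hot-spot area is compared as a fraction of the current frame's hot-spot area (one way to read a threshold of 0.6), that hot spots are pixels above a fixed binarization level, and that the camera runs at 25 frames per second; these assumptions and every name are illustrative.

```python
import numpy as np

def mark_sequence(heatmaps, first_threshold=0.6, hot_level=0.5):
    """Mark value per frame: 1 when the hot spots of the current and previous
    key point heat maps overlap by more than first_threshold, else 0."""
    marks = [0]                                   # the first frame keeps its initial mark of 0
    for prev, cur in zip(heatmaps, heatmaps[1:]):
        prev_hot, cur_hot = prev > hot_level, cur > hot_level
        overlap = np.logical_and(prev_hot, cur_hot).sum()
        area = max(int(cur_hot.sum()), 1)
        marks.append(1 if overlap / area > first_threshold else 0)
    return marks

def stay_time(marks, fps=25.0):
    """Stay time in seconds from the number of 1s and the camera sampling rate."""
    return sum(marks) / fps

# Toy usage: ten identical frames give marks [0, 1, 1, ...] and 0.36 s at 25 fps.
frames = [np.ones((8, 8)) * 0.9 for _ in range(10)]
print(mark_sequence(frames), stay_time(mark_sequence(frames)))
```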
When the visitor's stay time exceeds the second threshold, the visitor's orientation is obtained, specifically:
According to the imaging principle, the center point of the file cabinet and the two-foot key points in the nth frame of the key point heat map are projected, by projection transformation, onto the BIM of the area where the file cabinet is located, built in advance. The projection result is shown in FIG. 2: the hollow points are the file cabinet center points and the solid black points are the detected visitor's two-foot key points. Projection transformation is well known and is not discussed here.
The visitor's position and orientation are obtained from the left and right foot key points: the midpoint of the line connecting the visitor's left and right foot key points in the BIM represents the visitor's position; in the BIM, a first vector is obtained by pointing from the right foot key point to the left foot key point, the first vector is rotated 90 degrees clockwise to obtain a second vector, and the direction of the second vector is the visitor's orientation. In FIG. 2, the arrow indicates the visitor's orientation.
Thus, the orientation of the visitor is obtained.
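The position and orientation computation on the BIM ground plane can be sketched as follows. The sketch assumes a coordinate frame with the y-axis pointing up, so the sign of the 90-degree clockwise rotation may need to be flipped for an image-style frame with y pointing down; all names are illustrative.

```python
import math

def visitor_pose(left_foot, right_foot):
    """Position = midpoint of the projected foot key points.
    Orientation = first vector (right foot -> left foot) rotated 90 degrees clockwise."""
    (xa, ya), (xb, yb) = left_foot, right_foot
    position = ((xa + xb) / 2.0, (ya + yb) / 2.0)
    first_vector = (xa - xb, ya - yb)                      # right foot -> left foot
    second_vector = (first_vector[1], -first_vector[0])    # clockwise rotation with y up
    heading = math.atan2(second_vector[1], second_vector[0])
    return position, second_vector, heading

# Toy usage: left foot at (0, 0) and right foot at (1, 0) give facing (0, 1), i.e. towards +y.
print(visitor_pose((0.0, 0.0), (1.0, 0.0)))
```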
Whether a file cabinet center point lies within the visitor's orientation range is detected as follows:
The orientation range is a sector area, as shown in FIG. 2, obtained as follows:
With the visitor's position as the center, r as the radius and the second vector as the reference, offsets of θ to the left and to the right give the visitor's orientation range; θ is 45 degrees in this embodiment, so the sector spans 90 degrees. Here r is an empirical length whose value the practitioner may choose according to the actual situation.
The coordinates of the left foot key point in the BIM are (Xa, Ya) and those of the right foot key point are (Xb, Yb). The midpoint (X0, Y0) of the line connecting the left and right foot key points is obtained from these coordinates:

X0 = (Xa + Xb) / 2,  Y0 = (Ya + Yb) / 2

The coordinates of the file cabinet center point are (X, Y), and the distance between the midpoint and the file cabinet center point is:

L = sqrt((X - X0)² + (Y - Y0)²)
The file cabinet center point is connected to the midpoint of the foot key point line to obtain a straight line, and the included angle between this straight line and the direction of the second vector is calculated.
When the distance L is less than r and the included angle is less than 45 degrees, the file cabinet is considered to lie within the visitor's orientation range; in FIG. 2, file cabinet A lies within the visitor's orientation range and file cabinet B does not.
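A minimal sketch of this check follows. It measures the included angle between the midpoint-to-cabinet line and the second vector (the axis of the sector); the values of r and θ, as well as all names, are assumptions chosen to match the 45-degree example above.

```python
import math

def cabinet_in_view(foot_midpoint, second_vector, cabinet_center, r, theta_deg=45.0):
    """True when the cabinet center lies inside the visitor's sector: distance to the
    foot midpoint below r and included angle with the orientation below theta_deg."""
    dx = cabinet_center[0] - foot_midpoint[0]
    dy = cabinet_center[1] - foot_midpoint[1]
    distance = math.hypot(dx, dy)
    if distance == 0.0:
        return True
    dot = dx * second_vector[0] + dy * second_vector[1]
    cos_angle = dot / (distance * math.hypot(*second_vector))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return distance < r and angle < theta_deg

# Toy usage: a cabinet 1 m straight ahead is in view, one behind the visitor is not.
print(cabinet_in_view((0.0, 0.0), (0.0, 1.0), (0.0, 1.0), r=2.0))    # True
print(cabinet_in_view((0.0, 0.0), (0.0, 1.0), (0.0, -1.0), r=2.0))   # False
```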
When a file cabinet center point lies within the visitor's orientation range, the visitor's authority information is used to judge whether the visitor is authorized to open the file cabinet within that range; if so, the cabinet door of the file cabinet is opened automatically.
The orientation is obtained to make the judgment more accurate. If a visitor merely leans against a file cabinet for some time, that cabinet should not be opened, so a judgment based only on the distance between the visitor and the file cabinet and on the stay time would be unreliable.
After the file cabinet door has been opened, the visitor's key point heat maps are superimposed to obtain the visitor's movement track; when the movement track leads away from the opened file cabinet and the distance between the visitor and the opened file cabinet exceeds a certain value, the cabinet door is closed automatically.
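The closing condition can be sketched as follows, assuming the movement track is a list of projected positions in the BIM and that "leads away" is read as the latest displacement having a negative component toward the cabinet; the distance limit and all names are illustrative.

```python
import math

def should_close(track, cabinet_center, leave_distance):
    """True when the latest movement points away from the opened cabinet and the
    visitor is farther from it than leave_distance."""
    if len(track) < 2:
        return False
    (x_prev, y_prev), (x_cur, y_cur) = track[-2], track[-1]
    move = (x_cur - x_prev, y_cur - y_prev)
    to_cabinet = (cabinet_center[0] - x_cur, cabinet_center[1] - y_cur)
    moving_away = move[0] * to_cabinet[0] + move[1] * to_cabinet[1] < 0.0
    far_enough = math.hypot(*to_cabinet) > leave_distance
    return moving_away and far_enough

# Toy usage: the visitor has walked from (1, 0) to (3, 0), away from a cabinet at (0, 0).
print(should_close([(1.0, 0.0), (3.0, 0.0)], (0.0, 0.0), leave_distance=2.0))  # True
```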
The state of the file cabinet door is obtained from a sensor inside the cabinet and is either open or closed. The automatic opening and closing of the cabinet door is carried out by mechanical equipment inside the cabinet body. This part is not the focus of the invention and is not discussed.
When several visitors are present in the archive hall, the stay time, orientation and orientation range of each visitor are computed separately, and the cabinet doors of the file cabinets are controlled automatically in combination with each visitor's authority information.
The BIM is visualized using Web GIS technology, so that related personnel can query and analyze it on the Web and supervisors can monitor in real time how visitors move through the archive hall and how files are taken out and put back.
The foregoing is intended to provide those skilled in the art with a better understanding of the invention, and is not to be construed as limiting the invention, since modifications may be made within the spirit and scope of the invention.

Claims (8)

1. A method for automatically controlling a cabinet door of a file cabinet based on artificial intelligence and human body perception is characterized by comprising the following steps:
step one, performing identity verification on a visitor to acquire authority information;
step two, capturing a first image with a camera and processing the captured first image with a human body detection network to obtain a human body bounding box;
step three, cropping the first image with the human body bounding box to obtain a second image, and extracting the two-foot key points from the second image with a key point detection network to obtain a key point heat map, each frame of which has an initial mark value of 0;
step four, processing the key point heat maps to obtain the visitor's stay time, computed as follows:
starting from the second frame of the key point heat map, thermal stacking is performed: the overlap area of the hot spots in the current frame and the previous frame is calculated, and when this area exceeds a preset first threshold the person is considered not to have moved between the two frames and the mark value of the current frame is changed from 0 to 1; computing this overlap over multiple frames yields a mark value sequence, the stay time is obtained from the number of 1s in the sequence and the sampling rate of the camera, and when the stay time exceeds a preset second threshold the person is judged to be staying; when the stay time exceeds the second threshold, n frames of key point heat maps have been obtained;
step five, when the visitor's stay time exceeds the second threshold, obtaining the visitor's orientation, specifically:
using projection transformation, the center point of the file cabinet and the visitor's two-foot key points in the nth frame of the key point heat map are projected onto a building information model (BIM) of the area where the file cabinet is located, built in advance, and the visitor's position and orientation are obtained from the left and right foot key points;
step six, detecting whether a file cabinet center point lies within the visitor's orientation range and, if so, controlling the cabinet door of that file cabinet automatically in combination with the visitor's authority information; the BIM is visualized using Web GIS technology.
2. The method of claim 1, wherein identity verification is performed by means of a radio-frequency card, face recognition, or fingerprint recognition.
3. The method of claim 1, wherein the training process of the human body detection network is as follows: a training data set is built from the collected first images with labels x, y, w and h, where x and y are the coordinates of the center point of the human body bounding box, w is its width and h is its height, and the network is trained with a mean square error loss function.
4. The method of claim 1, wherein the training process of the two-foot key point detection network is as follows: a training data set is built from the obtained second images, the labels are the left and right foot key points of the human body, and the network is trained with a mean square error loss function; the labels are annotated as follows: the key points comprise a left foot key point and a right foot key point, each key point corresponds to a separate channel, and after the pixel position of each key point is annotated in its channel, Gaussian blur is applied to generate the key point hot spot.
5. The method of claim 1, wherein the visitor's position is the midpoint of the line connecting the visitor's left and right foot key points in the BIM.
6. The method of claim 5, wherein the visitor's orientation is obtained as follows: in the BIM, a first vector is obtained by pointing from the right foot key point to the left foot key point, the first vector is rotated 90 degrees clockwise to obtain a second vector, and the direction of the second vector is the visitor's orientation.
7. The method of claim 6, wherein the orientation range is a sector area obtained as follows: with the visitor's position as the center, r as the radius, and the second vector as the reference, offsets of θ to the left and to the right give the visitor's orientation range, where θ is an angle value.
8. The method of claim 7, wherein whether a file cabinet center point lies within the visitor's orientation range is detected as follows: the midpoint of the line connecting the left and right foot key points is connected to the file cabinet center point to obtain a straight line, the included angle between this straight line and the second vector is calculated, and the distance between the midpoint and the file cabinet center point is calculated; when the distance is less than r and the included angle is less than θ, a file cabinet center point is judged to lie within the visitor's orientation range.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010852646.8A CN112002039A (en) 2020-08-22 2020-08-22 Automatic control method for file cabinet door based on artificial intelligence and human body perception


Publications (1)

Publication Number Publication Date
CN112002039A true CN112002039A (en) 2020-11-27

Family

ID=73473182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010852646.8A Withdrawn CN112002039A (en) 2020-08-22 2020-08-22 Automatic control method for file cabinet door based on artificial intelligence and human body perception

Country Status (1)

Country Link
CN (1) CN112002039A (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07249138A (en) * 1994-03-09 1995-09-26 Nippon Telegr & Teleph Corp <Ntt> Residence time measuring method
JP2012221382A (en) * 2011-04-12 2012-11-12 Mitsubishi Electric Building Techno Service Co Ltd Locker room management system
CN104574437A (en) * 2013-10-29 2015-04-29 松下电器产业株式会社 Staying state analysis device, staying state analysis system and staying state analysis method
US20150123794A1 (en) * 2013-11-06 2015-05-07 Jari Hämäläinen Method and apparatus for recording location specific activity of a user and uses thereof
CN104636745A (en) * 2013-11-08 2015-05-20 株式会社理光 Method and device for extracting scale-invariant features and method and device for recognizing objects
CN104912432A (en) * 2014-03-13 2015-09-16 欧姆龙株式会社 Automatic door control device and automatic door control method
CN104954736A (en) * 2014-03-26 2015-09-30 松下知识产权经营株式会社 Stay condition analyzing apparatus, stay condition analyzing system, and stay condition analyzing method
US20190303677A1 (en) * 2018-03-30 2019-10-03 Naver Corporation System and method for training a convolutional neural network and classifying an action performed by a subject in a video using the trained convolutional neural network
CN108647242A (en) * 2018-04-10 2018-10-12 北京天正聚合科技有限公司 A kind of generation method and system of thermodynamic chart
CN108898109A (en) * 2018-06-29 2018-11-27 北京旷视科技有限公司 The determination methods, devices and systems of article attention rate
CN110858295A (en) * 2018-08-24 2020-03-03 广州汽车集团股份有限公司 Traffic police gesture recognition method and device, vehicle control unit and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
仲越 (Zhong Yue): "Human Pose Recognition Combining Depth Information", Information Science and Technology Series *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115983802A (en) * 2023-03-03 2023-04-18 泰州市人民医院 Centralized file management terminal and file management method

Similar Documents

Publication Publication Date Title
Steder et al. Robust place recognition for 3D range data based on point features
Siagian et al. Biologically inspired mobile robot vision localization
Cui et al. Multi-modal tracking of people using laser scanners and video camera
Anati et al. Robot localization using soft object detection
Blanco et al. A robust, multi-hypothesis approach to matching occupancy grid maps
CN103310442B (en) Based on intelligent positioning system and the localization method thereof of multifrequency information fusion
Liu et al. A contrario comparison of local descriptors for change detection in very high spatial resolution satellite images of urban areas
Ji et al. RGB-D SLAM using vanishing point and door plate information in corridor environment
CN112002039A (en) Automatic control method for file cabinet door based on artificial intelligence and human body perception
Kang et al. Continuous multi-views tracking using tensor voting
Kirchner et al. A robust people detection, tracking, and counting system
Shi et al. Feature selection for reliable data association in visual SLAM
Trahanias et al. Visual recognition of workspace landmarks for topological navigation
Cicirelli et al. Target recognition by components for mobile robot navigation
Ohno et al. Privacy-preserving pedestrian tracking with path image inpainting and 3D point cloud features
Hua et al. Circular coding: A technique for visual localization in urban areas
Colios et al. A framework for visual landmark identification based on projective and point-permutation invariant vectors
Gong et al. ROS-based object localization using RFID and laser scan
Xu et al. Indoor localization using region-based convolutional neural network
KR100647285B1 (en) A method for constructing an artificial mark for autonomous driving of an intelligent system, an apparatus and method for determining the position of an intelligent system using the artificial mark, and an intelligent system employing the same
Xu et al. Vision-IMU based obstacle detection method
Lin et al. Site model supported monitoring of aerial images
Hsieh et al. Face Mole Detection, Classification and Application.
Mata et al. Learning visual landmarks for mobile robot navigation
Hung et al. Real-time counting people in crowded areas by using local empirical templates and density ratios

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20201127)