CN111126133A - Intelligent refrigerator access action recognition method based on deep learning - Google Patents
- Publication number
- CN111126133A (application CN201911089592.8A)
- Authority
- CN
- China
- Prior art keywords
- hand
- food material
- state
- information
- access
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
Abstract
The invention discloses a deep-learning-based method for recognizing storage and retrieval (access) actions on an intelligent refrigerator, comprising the following steps: S1, video data are acquired through a camera as input; S2, the input video is split into frames and hand information is detected in each frame with a state-of-the-art object detector; S3, when the detected hand lies in the cropping area, the hand image is cropped and sent to a food material classification network to obtain a food material classification result; S4, when the detected hand lies in the tracking area, the hand information is fed to a simple target tracking algorithm to obtain the trajectory of the hand movement; S5, the hand states along the trajectory are judged with an access-state determination rule to obtain the user's access state; S6, the user's access state is combined with the food material classification result to output an access action result; and S7, the system is reinitialized to await the next action recognition.
Description
The invention relates to the field of computer vision, in particular to an intelligent refrigerator access action recognition method based on deep learning.
Background
Action recognition is the premise and basis of interaction between people and intelligent devices, and it is becoming ever more important as the Internet of Things spreads. Mature non-deep-learning action recognition methods exist, such as spatio-temporal methods, sequence methods and hierarchical methods. However, because variations in motion, occlusion and differences between videos make the starting point of an action hard to determine, it is difficult to extract effective and stable features to describe behaviour, and hard to achieve both a high recognition rate and real-time performance. In recent years, video action recognition based on deep learning has become mainstream, e.g. single-frame methods, CNN-extension methods, two-stream CNN methods, LSTM-based methods and 3D-convolution methods; but these algorithms are complex, hard to deploy and difficult to run in real time, and therefore cannot meet practical needs.
Disclosure of Invention
The invention aims to solve the problems and provides an intelligent refrigerator access action recognition method based on deep learning, which effectively improves the recognition effect.
In order to achieve the purpose, the technical scheme of the invention is as follows:
an intelligent refrigerator access action recognition method based on deep learning comprises the following steps:
S1, acquiring video data through a camera as input;
S2, splitting the input video into frames and detecting hand information in each frame with a state-of-the-art object detector;
S3, when the detected hand lies in the cropping area, cropping the hand image and sending it to the food material classification network to obtain a food material classification result;
S4, when the detected hand lies in the tracking area, feeding the hand information to a simple target tracking algorithm to obtain the trajectory of the hand movement;
S5, judging the hand states along the trajectory with an access-state determination rule to obtain the user's access state;
S6, combining the user's access state with the food material classification result and outputting an access action result;
S7, reinitializing and waiting for the next action recognition.
Further, the target detector in step S2 employs Caffe-SSD-MobileNetV1.
Further, the hand information in step S2 includes hand food material information and hand position information; the hand information is detected as follows:
S21, obtaining candidate windows and bounding-box regression vectors for the hand region from the target detector, calibrating the candidate windows through bounding-box regression, and merging highly overlapping boxes by non-maximum suppression;
S22, removing false-detection regions to obtain a refined hand box, from which the hand position information is obtained;
S23, obtaining a hand-holding category vector from the hand box and running a classification task on it to obtain the hand food material information.
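The candidate-window merging in step S21 is standard non-maximum suppression. A minimal illustrative sketch follows; this is not the patented implementation, and the (x1, y1, x2, y2) box format and the 0.5 IoU threshold are assumptions:

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop candidates overlapping it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep  # indices of surviving boxes
```

In the patent's pipeline the surviving box would then be refined by bounding-box regression before the hand position is read off.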
Further, the hand image in step S3 is cropped using OpenCV.
Further, the food material classification network in step S3 adopts a multi-classification model SqueezeNet.
Further, the access-state determination rule in step S5 is: if, over one tracking trajectory, the hand holds food material at the beginning and none at the end, the user's access state is 'store'; if the hand holds no food material at the beginning and holds food material at the end, the access state is 'fetch'; if the hand holds food material both at the beginning and at the end, the access state is 'both store and fetch'; if the hand holds no food material at either the beginning or the end, the access state is 'hesitate'.
Furthermore, in the access-state determination rule, the hand state at the beginning and the hand state at the end are each decided from multiple detections by majority vote.
Compared with the prior art, the invention has the following advantages and positive effects:
The invention provides a brand-new action recognition scheme for the intelligent-refrigerator scenario. It adopts a MobileNetV1 model as the target detector combined with a Staple-style tracking algorithm; because the MobileNetV1 model has few parameters and the tracker is not a neural network, large memory overhead is avoided. On the other hand, SqueezeNet is adopted as the classification network for the food material held by the hand; SqueezeNet is fast, real-time, memory-efficient, accurate and easy to deploy on embedded hardware, and hand images are cropped and classified only for a few frames before and after the access action, which greatly reduces time and memory overhead and effectively raises the processing rate. The scheme has low algorithmic complexity, can be deployed on embedded devices, is robust and meets real-time requirements; it can serve not only intelligent refrigerators but also other smart-home devices.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a block diagram of the framework of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived from the embodiments of the present invention by a person skilled in the art without any creative effort, should be included in the protection scope of the present invention.
As shown in fig. 1, the present invention proposes an access action recognition scheme that utilizes a combination of target detection and target tracking to determine an access status and utilizes a classification network to recognize food materials.
First, access action recognition
Recognizing access actions in an intelligent refrigerator is a key step in realizing interaction between the user and the refrigerator. The action recognition module recognizes four access actions: store, fetch, hesitate, and both store and fetch. Recognizing these four simple actions greatly improves the usability of the refrigerator while keeping recognition accurate. The acting subject of all four actions is the hand, i.e. the part of the arm beyond the wrist joint, so the camera only needs to detect hand information. A state-of-the-art object detector is used to obtain the hand state and the hand position. The hand state indicates whether food material is being carried, while the initial hand position is handed to the target tracker so the whole access action can be tracked. By analysing the hand states before and after the whole action in real time, action recognition is achieved.
The detector uses a CNN (convolutional neural network) to detect hands, specifically the target detector Caffe-SSD-MobileNetV1, which combines low network complexity with good detection quality and real-time speed. Behaviour recognition cannot be achieved from a single static frame: one frame only reveals the state at that instant and says nothing about where the action starts or ends, so the whole video must be considered. Moreover, running detection on every frame would sharply reduce the frame rate, so the CNN alone is not enough; a target tracking algorithm is needed to follow the whole motion. The tracker records the hand-state information across the multi-frame action, and by judging the hand information before and after the action, the behaviour in a video segment can be determined accurately.
The module selects the Staple target tracking algorithm, which is robust to colour change and motion deformation and runs in real time. It fuses HOG features (sensitive to deformation and motion blur, but robust to colour change) with colour features (sensitive to colour and illumination change, but robust to deformation and motion blur), and can thus handle most difficulties encountered during tracking.
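The complementarity described above comes down to merging two per-pixel response maps and taking the peak. The toy sketch below illustrates only that fusion idea, not the actual Staple correlation-filter machinery; the response maps and the merge weight `alpha` are invented for illustration:

```python
def fuse_responses(hog_resp, color_resp, alpha=0.3):
    """Linearly blend a HOG-template response map with a colour-histogram
    response map (Staple-style) and return the location of the fused peak."""
    rows, cols = len(hog_resp), len(hog_resp[0])
    best, best_pos = float("-inf"), (0, 0)
    for r in range(rows):
        for c in range(cols):
            score = (1 - alpha) * hog_resp[r][c] + alpha * color_resp[r][c]
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best  # predicted target location and fused confidence
```

When one cue degrades (e.g. the colour response flattens under an illumination change), the other cue still dominates the fused map, which is the robustness the paragraph above describes.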
Because the MobileNetV1 model has few parameters and the Staple tracker is not a neural network, large memory overhead is avoided, which is friendlier to embedded devices.
We do not track the hand across every region of every video frame; the scheme tracks only within the central region of the image.
Each video frame is processed in real time. Every two frames, MobileNetV1 detects the hand information: hand food material information (whether food is held) and hand position information. If the hand is within the tracking range, the hand information is appended to a list and the hand position is supplied to the Staple target tracking algorithm to track the motion. This is repeated until the action ends.
When the action ends, the access state is judged with the following rules.
1. The hand information at positions 3, 8 and 13 of the list is examined and a majority vote is taken; e.g. if position 3 says 'no food material' but positions 8 and 13 say 'food material', the hand state at the start of the action is taken to be 'holding food material'. This guards against occasional detection errors and increases the robustness of the algorithm.
2. The hand information at positions 3, 8 and 13 counted from the end of the list is examined in the same way, again by majority vote.
3. If, within one action, the hand holds food material at the start and none at the end, the access state is 'store'.
4. If the hand holds no food material at the start and holds food material at the end, the access state is 'fetch'.
5. If the hand holds food material at both the start and the end, the access state is 'both store and fetch'.
6. If the hand holds no food material at either the start or the end, the access state is 'hesitate'.
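The six rules above condense into a small decision function. The probe positions (3rd, 8th, 13th from each end) follow the description; the boolean encoding (True = hand holds food material) is an assumption of this sketch:

```python
def majority_has_food(states):
    """Majority vote over several detections, for a noise-robust start/end state."""
    return sum(states) > len(states) // 2

def judge_access(hand_states, probe=(2, 7, 12)):
    """hand_states: per-detection booleans (True = holding food) for one action.
    Samples the 3rd, 8th and 13th entries from each end (0-based indices 2, 7, 12)."""
    start = majority_has_food([hand_states[i] for i in probe])
    end = majority_has_food([hand_states[-(i + 1)] for i in probe])
    if start and not end:
        return "store"
    if not start and end:
        return "fetch"
    if start and end:
        return "both store and fetch"
    return "hesitate"
```

A single mis-detection at one probe position is outvoted by the other two, which is exactly the robustness rule 1 aims for.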
Secondly, food material identification
Food materials are the main objects of observation inside the refrigerator, and accurately identifying each food material is the primary task of an intelligent refrigerator. In the present task, target detection performs excellently at hand localization, i.e. it detects the hand position efficiently and accurately. However, inter-class differences between targets can be small (e.g. bitter gourd versus cucumber) while intra-class differences can be large (e.g. green apples versus red apples), so the classification output of the target detector is often unsatisfactory. Furthermore, the classification task and the anchor-box coordinate regression task are naturally in tension: classification is insensitive to positional deviation while position regression is extremely sensitive to it, so when a target detector excels at coordinate regression, its classification performance is weakened to some extent by the regression task.
To resolve this, the target detection network (MobileNetV1) is used only to detect the hand position; the detected hand region is cropped out and sent to a classifier for food material classification. This turns recognition into a two-stage task and therefore adds time and memory overhead, so the following strategy is taken: first, a lightweight neural network with low complexity and good classification performance is adopted; second, hand images are cropped and classified only for a few frames before and after the access action. This strategy greatly reduces the overhead in time and memory.
The module adopts SqueezeNet as the classification network; SqueezeNet is fast, real-time, memory-efficient, accurate and easy to deploy on embedded hardware. The scheme classifies 150 kinds of food material and runs in real time.
The invention is mainly divided into 5 modules:
1. Image acquisition: a colour camera acquires image data at 1280 × 720 resolution as input.
2. Hand information and hand localization: a MobileNet network judges whether the hand holds food material and obtains the hand position, in preparation for subsequent cropping and tracking.
3. Food material identification: when the hand is within the cropping range, the hand image is cropped and sent to the SqueezeNet network for food material classification.
4. Target tracking: when the hand is within the tracking range, it is tracked and the hand information along the trajectory is stored.
5. Action recognition: the hand states along the trajectory are judged, the access state is obtained with the specified rules, and finally, combined with the returned food material identification, the user's complete access information is obtained.
The specific action recognition steps of the invention are shown in fig. 1:
(1) OpenCV opens the colour camera and obtains video data at 1280 × 720 resolution as input.
(2) Starting from the first frame, target detection is run every two frames to obtain a hand image (rectangular box); false-detection regions are removed through bounding-box regression and non-maximum suppression to obtain the hand food material information (whether food is held) and the hand position information, making subsequent tracking more accurate and robust.
(3) When the lower-right corner of the detected hand box lies within the specified cropping range and the hand food material information is 'food material present', the box is cropped with OpenCV and sent to the trained multi-class SqueezeNet model; the judged food material is stored in list1.
(4) When the lower-right corner of the detected hand box lies within the specified tracking range, the Staple tracker, working with the detector, yields the tracking box of highest confidence, which is added to the trajectory. The hand food material information along the trajectory is stored in list2.
(5) Steps (1)–(4) are repeated until the trajectory disappears, i.e. the action ends.
(6) The hand food material information at the three designated positions near the front and near the back of list2 is taken, and the action determination rules yield the access action, one of four: store, fetch, hesitate, both store and fetch.
(7) The food material category from step (3) is combined with the action information from step (6), time-stamped and returned to the backend, so the user's food access at a given time is known.
(8) list1, list2 and the trajectory are cleared in preparation for the next action detection.
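Steps (1)–(8) can be sketched as a single loop over one access action. The detector, classifier and zone tests below are stand-in stubs, not the real MobileNetV1-SSD, SqueezeNet or Staple components; the frame stride, return types and the simplified start/end judgment (majority vote omitted) are all illustrative assumptions:

```python
from collections import Counter

def run_action_recognition(frames, detect_hand, classify_food,
                           in_crop_zone, in_track_zone):
    """One access action: detect every 2nd frame, classify food in the crop
    zone, record hand states in the track zone, then judge store/fetch."""
    food_labels, hand_states = [], []          # list1 and list2 of the description
    for idx, frame in enumerate(frames):
        if idx % 2:                            # target detection every two frames
            continue
        det = detect_hand(frame)               # stub -> (box, has_food) or None
        if det is None:
            continue
        box, has_food = det
        if has_food and in_crop_zone(box):
            food_labels.append(classify_food(frame, box))
        if in_track_zone(box):
            hand_states.append(has_food)
    start, end = hand_states[0], hand_states[-1]   # majority vote omitted here
    action = {(True, False): "store", (False, True): "fetch",
              (True, True): "both store and fetch",
              (False, False): "hesitate"}[(start, end)]
    food = Counter(food_labels).most_common(1)[0][0] if food_labels else None
    return action, food
```

In the real system `detect_hand` would wrap the Caffe-SSD-MobileNetV1 forward pass and `classify_food` the SqueezeNet crop-and-classify step; the stub structure only shows how the modules hand data to one another.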
Claims (7)
1. A deep-learning-based intelligent refrigerator access action recognition method, characterized by comprising the following steps:
S1, acquiring video data through a camera as input;
S2, splitting the input video into frames and detecting hand information in each frame with a state-of-the-art object detector;
S3, when the detected hand lies in the cropping area, cropping the hand image and sending it to the food material classification network to obtain a food material classification result;
S4, when the detected hand lies in the tracking area, feeding the hand information to a simple target tracking algorithm to obtain the trajectory of the hand movement;
S5, judging the hand states along the trajectory with an access-state determination rule to obtain the user's access state;
S6, combining the user's access state with the food material classification result and outputting an access action result;
S7, reinitializing and waiting for the next action recognition.
2. The intelligent refrigerator access action recognition method based on deep learning of claim 1, wherein: the target detector in step S2 employs Caffe-SSD-MobileNetV1.
3. The intelligent refrigerator access action recognition method based on deep learning of claim 2, wherein: the hand information in step S2 includes hand food material information and hand position information; the hand information is detected as follows:
S21, obtaining candidate windows and bounding-box regression vectors for the hand region from the target detector, calibrating the candidate windows through bounding-box regression, and merging highly overlapping boxes by non-maximum suppression;
S22, removing false-detection regions to obtain a refined hand box, from which the hand position information is obtained;
S23, obtaining a hand-holding category vector from the hand box and running a classification task on it to obtain the hand food material information.
4. The intelligent refrigerator access action recognition method based on deep learning of claim 1, wherein: the hand image in step S3 is cropped using OpenCV.
5. The intelligent refrigerator access action recognition method based on deep learning of claim 1, wherein: the food material classification network in the step S3 adopts a multi-classification model SqueezeNet.
6. The intelligent refrigerator access action recognition method based on deep learning of claim 1, wherein: the access-state determination rule in step S5 is: if, over one tracking trajectory, the hand holds food material at the beginning and none at the end, the user's access state is 'store'; if the hand holds no food material at the beginning and holds food material at the end, the access state is 'fetch'; if the hand holds food material both at the beginning and at the end, the access state is 'both store and fetch'; if the hand holds no food material at either the beginning or the end, the access state is 'hesitate'.
7. The intelligent refrigerator access action recognition method based on deep learning of claim 6, wherein: in the access-state determination rule, the hand state at the beginning and the hand state at the end are each decided from multiple detections by majority vote.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911089592.8A CN111126133A (en) | 2019-11-08 | 2019-11-08 | Intelligent refrigerator access action recognition method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911089592.8A CN111126133A (en) | 2019-11-08 | 2019-11-08 | Intelligent refrigerator access action recognition method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111126133A true CN111126133A (en) | 2020-05-08 |
Family
ID=70495703
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911089592.8A Pending CN111126133A (en) | 2019-11-08 | 2019-11-08 | Intelligent refrigerator access action recognition method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111126133A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111860509A (en) * | 2020-07-28 | 2020-10-30 | 湖北九感科技有限公司 | Coarse-to-fine two-stage non-constrained license plate region accurate extraction method |
CN113239789A (en) * | 2021-05-11 | 2021-08-10 | 上海汉时信息科技有限公司 | Shopping behavior analysis method and device |
CN113468359A (en) * | 2020-07-14 | 2021-10-01 | 青岛海信电子产业控股股份有限公司 | Intelligent refrigerator and food material identification method |
CN113496245A (en) * | 2020-06-23 | 2021-10-12 | 青岛海信电子产业控股股份有限公司 | Intelligent refrigerator and method for identifying food material storing and taking |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107665336A (en) * | 2017-09-20 | 2018-02-06 | 厦门理工学院 | Multi-target detection method based on Faster RCNN in intelligent refrigerator |
CN108921007A (en) * | 2018-05-08 | 2018-11-30 | 河海大学常州校区 | A kind of Handwritten Numeral Recognition Method based on SqueezeNet |
CN109343701A (en) * | 2018-09-03 | 2019-02-15 | 电子科技大学 | A kind of intelligent human-machine interaction method based on dynamic hand gesture recognition |
CN109559331A (en) * | 2017-09-27 | 2019-04-02 | 九阳股份有限公司 | A kind of food management method based on video image |
CN109558775A (en) * | 2017-09-27 | 2019-04-02 | 九阳股份有限公司 | A kind of refrigerator food management method |
CN109840504A (en) * | 2019-02-01 | 2019-06-04 | 腾讯科技(深圳)有限公司 | Article picks and places Activity recognition method, apparatus, storage medium and equipment |
CN110019938A (en) * | 2017-11-29 | 2019-07-16 | 深圳Tcl新技术有限公司 | Video Information Retrieval Techniquess method, apparatus and storage medium based on RGB classification |
CN110348505A (en) * | 2019-07-02 | 2019-10-18 | 高新兴科技集团股份有限公司 | Vehicle color disaggregated model training method, device and vehicle color identification method |
- 2019-11-08: CN application CN201911089592.8A filed; patent CN111126133A/en, status Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107665336A (en) * | 2017-09-20 | 2018-02-06 | 厦门理工学院 | Multi-target detection method based on Faster RCNN in intelligent refrigerator |
CN109559331A (en) * | 2017-09-27 | 2019-04-02 | 九阳股份有限公司 | A kind of food management method based on video image |
CN109558775A (en) * | 2017-09-27 | 2019-04-02 | 九阳股份有限公司 | A kind of refrigerator food management method |
CN110019938A (en) * | 2017-11-29 | 2019-07-16 | 深圳Tcl新技术有限公司 | Video Information Retrieval Techniquess method, apparatus and storage medium based on RGB classification |
CN108921007A (en) * | 2018-05-08 | 2018-11-30 | 河海大学常州校区 | A kind of Handwritten Numeral Recognition Method based on SqueezeNet |
CN109343701A (en) * | 2018-09-03 | 2019-02-15 | 电子科技大学 | A kind of intelligent human-machine interaction method based on dynamic hand gesture recognition |
CN109840504A (en) * | 2019-02-01 | 2019-06-04 | 腾讯科技(深圳)有限公司 | Article picks and places Activity recognition method, apparatus, storage medium and equipment |
CN110348505A (en) * | 2019-07-02 | 2019-10-18 | 高新兴科技集团股份有限公司 | Vehicle color disaggregated model training method, device and vehicle color identification method |
Non-Patent Citations (1)
Title |
---|
Huang Shangke (ed.): "Principles and Applications of Artificial Intelligence and Data Mining", Yanbian University Press *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113496245A (en) * | 2020-06-23 | 2021-10-12 | 青岛海信电子产业控股股份有限公司 | Intelligent refrigerator and method for identifying food material storing and taking |
CN113468359A (en) * | 2020-07-14 | 2021-10-01 | 青岛海信电子产业控股股份有限公司 | Intelligent refrigerator and food material identification method |
CN111860509A (en) * | 2020-07-28 | 2020-10-30 | 湖北九感科技有限公司 | Coarse-to-fine two-stage non-constrained license plate region accurate extraction method |
CN113239789A (en) * | 2021-05-11 | 2021-08-10 | 上海汉时信息科技有限公司 | Shopping behavior analysis method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111126133A (en) | Intelligent refrigerator access action recognition method based on deep learning | |
US10417503B2 (en) | Image processing apparatus and image processing method | |
US20220417590A1 (en) | Electronic device, contents searching system and searching method thereof | |
US11335092B2 (en) | Item identification method, system and electronic device | |
CN110796051B (en) | Real-time access behavior detection method and system based on container scene | |
CN109299703B (en) | Method and device for carrying out statistics on mouse conditions and image acquisition equipment | |
KR100860988B1 (en) | Method and apparatus for object detection in sequences | |
US7460689B1 (en) | System and method of detecting, recognizing, and tracking moving targets | |
US20080013837A1 (en) | Image Comparison | |
Ko et al. | Background subtraction on distributions | |
CN106682619B (en) | Object tracking method and device | |
US20120243733A1 (en) | Moving object detecting device, moving object detecting method, moving object detection program, moving object tracking device, moving object tracking method, and moving object tracking program | |
JP6157165B2 (en) | Gaze detection device and imaging device | |
GB2414615A (en) | Object detection, scanning and labelling | |
García-Martín et al. | Robust real time moving people detection in surveillance scenarios | |
CN110569770A (en) | Human body intrusion behavior recognition method and device, storage medium and electronic equipment | |
US9947106B2 (en) | Method and electronic device for object tracking in a light-field capture | |
WO2023025010A1 (en) | Stroboscopic banding information recognition method and apparatus, and electronic device | |
CN110651274A (en) | Movable platform control method and device and movable platform | |
CN114463781A (en) | Method, device and equipment for determining trigger gesture | |
CN109986553B (en) | Active interaction robot, system, method and storage device | |
CN117152807A (en) | Human head positioning method, device and storage medium | |
Ma et al. | Depth assisted occlusion handling in video object tracking | |
CN108428241A (en) | The movement locus catching method of mobile target in HD video | |
CN115116136A (en) | Abnormal behavior detection method, device and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200508 |