CN117612245A - Automatic counting method for conventional rope skipping test - Google Patents

Automatic counting method for conventional rope skipping test

Info

Publication number
CN117612245A
CN117612245A
Authority
CN
China
Prior art keywords
rope
sequence
skipping
rope skipping
feature vectors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311246652.9A
Other languages
Chinese (zh)
Inventor
周茂林
李杨杨
***
叶聪
邓学杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Leti Technology Co ltd
Original Assignee
Feixiang Technology Guangzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Feixiang Technology Guangzhou Co ltd filed Critical Feixiang Technology Guangzhou Co ltd
Priority to CN202311246652.9A
Publication of CN117612245A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B5/00 Apparatus for jumping
    • A63B5/16 Training devices for jumping; Devices for balloon-jumping; Jumping aids
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 Indicating or scoring devices for games or players, or for other sports activities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B2220/00 Measuring of physical parameters relating to sporting activity
    • A63B2220/17 Counting, e.g. counting periodical movements, revolutions or cycles, or including further data processing to determine distances or speed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30221 Sports video; Sports image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

An automatic counting method for conventional rope skipping tests is disclosed. First, a rope skipping monitoring video of a monitored object is obtained; next, a sequence of optimized weighted rope skipping state feature vectors is extracted from the video; finally, a rope skipping count value is determined based on that sequence. In this way, the number of skips performed by a rope skipper can be obtained accurately and conveniently.

Description

Automatic counting method for conventional rope skipping test
Technical Field
The present disclosure relates to the field of automatic counting, and more particularly, to an automatic counting method for conventional rope skipping testing.
Background
Rope skipping is a common physical activity that improves a person's physical fitness and coordination. In scenarios such as physical training, fitness activities, and physical fitness evaluation, the rope skipping count is one of the important indexes for evaluating the activity. By counting the number of skips, a person's endurance and coordination can be measured. Accurately counting skips is therefore important for assessing the effectiveness of rope skipping activities, monitoring fitness progress, and making appropriate training plans.
Conventional rope skipping counting methods typically rely on manual observation or hand-held devices, which are prone to error, inconvenient, or intrusive to the rope skipper. An optimized counting scheme for conventional rope skipping tests is therefore desired.
Disclosure of Invention
In view of this, the present disclosure proposes an automatic counting method for conventional rope skipping test, which can accurately and conveniently obtain the rope skipping times of rope-skipping persons.
According to an aspect of the present disclosure, there is provided an automatic counting method for a conventional rope-skipping test, including: acquiring a rope skipping monitoring video of a monitored object; extracting a sequence of optimized weighted rope skipping state feature vectors from the rope skipping monitoring video; and determining a rope skipping count value based on the sequence of optimized weighted rope skipping state feature vectors.
According to an embodiment of the present disclosure, a rope skipping monitoring video of a monitored object is first obtained; a sequence of optimized weighted rope skipping state feature vectors is then extracted from the video; and finally a rope skipping count value is determined based on that sequence. In this way, the number of skips performed by a rope skipper can be obtained accurately and conveniently.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of an automatic counting method for a conventional rope jump test, according to an embodiment of the present disclosure.
Fig. 2 shows an architectural diagram of an auto-counting method for conventional rope skipping testing, according to an embodiment of the present disclosure.
Fig. 3 shows a flowchart of substep S120 of an automatic counting method for a conventional rope-skipping test, according to an embodiment of the present disclosure.
Fig. 4 shows a flowchart of sub-step S122 of an automatic counting method for a conventional rope-skipping test, according to an embodiment of the present disclosure.
Fig. 5 shows a flowchart of sub-step S1222 of an automatic counting method for a conventional rope jump test, according to an embodiment of the present disclosure.
Fig. 6 shows a block diagram of an automatic counting system for conventional rope skipping testing, according to an embodiment of the present disclosure.
Fig. 7 illustrates an application scenario diagram of an automatic counting method for a conventional rope skipping test according to an embodiment of the present disclosure.
Detailed Description
The following description of the embodiments of the present disclosure will be made clearly and fully with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some, but not all embodiments of the disclosure. All other embodiments, which can be made by one of ordinary skill in the art without undue burden based on the embodiments of the present disclosure, are also within the scope of the present disclosure.
As used in this disclosure and in the claims, the terms "a," "an," and/or "the" are not specific to the singular, but may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the steps and elements are explicitly identified; they do not constitute an exclusive list, as other steps or elements may be included in a method or apparatus.
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
In conventional rope skipping, the tester grips one end of the rope in each hand. The rope starts from an initial position, swings under the force of the hands, passes under the soles of the feet, and returns over the top of the head to the initial position, completing one skipping action; continuously completing a plurality of such actions constitutes a conventional rope skipping process. Counting methods for conventional rope skipping tests typically rely on manual visual inspection or on sensors at the rope handles. Manual visual counting is simple and requires no additional equipment, but it is easily affected by observer fatigue, external environmental interference, and the subjective judgment of different personnel, leading to inconsistent counts, especially during high-frequency skipping. Handle-mounted sensor counting is a substantial improvement over manual inspection and avoids subjective factors and environmental interference, but it cannot effectively handle abnormal skipping behaviors such as false skips, alternating forward and reverse skips, and trips.
In view of these technical problems, the technical concept of the present disclosure is to analyze the rope skipping monitoring video using deep-learning-based computer vision, so as to realize automatic rope skipping counting and thereby evaluate the effect of rope skipping activities.
Fig. 1 shows a flowchart of an automatic counting method for a conventional rope jump test, according to an embodiment of the present disclosure. Fig. 2 shows an architectural diagram of an auto-counting method for conventional rope skipping testing, according to an embodiment of the present disclosure. As shown in fig. 1 and 2, an automatic counting method for a conventional rope-skipping test according to an embodiment of the present disclosure includes the steps of: s110, acquiring a rope skipping monitoring video of a monitored object; s120, extracting a sequence of optimized weighted rope skipping state feature vectors from the rope skipping monitoring video; and S130, determining a rope skipping count value based on the sequence of the optimized weighted rope skipping state characteristic vectors.
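The three steps S110-S130 can be sketched as a minimal Python skeleton. This is illustrative only: the function names are hypothetical, the CNN/attention stages of step S120 are stubbed out as a pass-through, and simple threshold-based edge counting stands in for the decoding described later in the disclosure.

```python
def extract_weighted_features(frames):
    """Placeholder for S120: CNN feature extraction + temporal attention + homogenization.

    Here each frame is assumed to have already been reduced to a scalar
    jump-state value (e.g. vertical displacement of the skipper).
    """
    return [float(f) for f in frames]

def count_jumps(features, threshold=0.5):
    """Placeholder for S130: count ground-to-airborne transitions as completed skips."""
    count = 0
    airborne = False
    for v in features:
        if v > threshold and not airborne:
            count += 1          # rising edge: the skipper leaves the ground
            airborne = True
        elif v <= threshold:
            airborne = False    # back on the ground, ready for the next skip
    return count

def automatic_count(frames):
    # S110: frames acquired from the monitoring video (stubbed as numbers here)
    features = extract_weighted_features(frames)   # S120
    return count_jumps(features)                   # S130
```

For example, `automatic_count([0, 1, 1, 0, 1, 0])` counts two skips, since the signal rises above the threshold twice.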
More specifically, in step S110, a rope skipping monitoring video of the monitored object is acquired. It should be appreciated that conventional counting methods typically rely on manual observation or hand-held devices, which are prone to error, inconvenient, or intrusive to the rope skipper. By acquiring a monitoring video, the actions and behavior of the rope skipper can be captured without interfering with normal performance. The video data contains state information about the skipper, including skipping actions and skip counts, and serves as an important data source for the subsequent deep learning model.
In an embodiment of the present disclosure, the rope skipping monitoring video of the monitored object may be captured by a camera, a smartphone, or a network camera. Taking a camera as an example: during the rope skipping activity, one or more cameras are installed at suitable positions so that the skipping action of the monitored object is captured comprehensively, without omission or occlusion. During installation, the resolution, frame rate, viewing angle, and position of the camera all need to be considered. The frame rate is the number of images captured per second; a higher frame rate provides a smoother video stream, ensuring that the skipping action is captured completely and avoiding motion blur.
More specifically, in step S120, a sequence of optimized weighted rope-skipping state feature vectors is extracted from the rope-skipping surveillance video. Accordingly, in one possible implementation, as shown in fig. 3, extracting the sequence of optimized weighted rope-skipping state feature vectors from the rope-skipping monitoring video includes: s121, extracting rope skipping state characteristics of the rope skipping monitoring video to obtain a sequence of rope skipping state characteristic vectors; s122, passing the sequence of the rope-skipping state feature vectors through a time attention mechanism module to obtain a sequence of weighted rope-skipping state feature vectors; and S123, carrying out semantic information homogenization activation of feature rank expression on the sequence of the weighted rope skipping state feature vector to obtain the sequence of the optimized weighted rope skipping state feature vector.
A convolutional neural network (Convolutional Neural Network, CNN) can automatically learn feature representations of images. In the technical solution of the present disclosure, a convolutional neural network is therefore used to extract the image feature information of each image frame in the rope skipping monitoring video. Specifically, each image frame in the video is passed through a rope skipping state feature extractor based on a convolutional neural network model to obtain the sequence of rope skipping state feature vectors.
Accordingly, in one possible implementation manner, the rope skipping monitoring video is subjected to rope skipping state feature extraction to obtain a sequence of rope skipping state feature vectors, which includes: and respectively passing each image frame in the rope skipping monitoring video through a rope skipping state feature extractor based on a convolutional neural network model to obtain a sequence of the rope skipping state feature vector.
Specifically, a convolutional neural network performs feature extraction on an input image through a series of convolution and pooling layers. Each convolution layer convolves the image with a set of learnable filters to capture local features at different locations. Each pooling layer downsamples the feature map output by the convolution layer, reducing the data dimension while retaining important features. Through this alternation of convolution and pooling, the network progressively extracts deeper and deeper features. In other words, the features learned by the convolutional neural network can reflect the key actions, postures, and other information related to the skipping state of the rope skipper.
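The convolution-then-pooling behavior described above can be illustrated with a toy 1-D analogue in plain Python (the patent's extractor operates on 2-D image frames; the edge-detecting kernel here is a hypothetical example):

```python
def conv1d(signal, kernel):
    """Valid convolution: slide a learnable filter over the signal (stride 1)."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(n)]

def relu(xs):
    """Activation: keep positive responses, zero out the rest."""
    return [max(0.0, x) for x in xs]

def max_pool1d(xs, size=2):
    """Non-overlapping max pooling: keep the strongest response per window."""
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

signal = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0]
edge_kernel = [-1.0, 0.0, 1.0]   # responds to local rising edges
features = max_pool1d(relu(conv1d(signal, edge_kernel)))
```

The pooled output is shorter than the input but retains the strongest local responses, which is exactly the dimension-reduction-with-feature-retention trade-off the text describes.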
In an embodiment of the present disclosure, the network structure of the convolutional neural network model includes an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a flattening layer, a full connection layer, and an output layer. Wherein the first convolution layer uses 64 convolution kernels of size 3x3, using a ReLU function as an activation function; the second convolution layer uses 128 convolution kernels of 3x3 in size, using the ReLU function as an activation function; the first pooling layer and the second pooling layer both adopt the maximum pooling operation with the pooling core size of 2x 2.
It should be appreciated that, in one specific example of this convolutional neural network model, the different layers have different functions and roles. The input layer receives the input data and passes it to the next layer for processing. The first convolution layer convolves the input with 64 kernels of size 3x3 to extract features of the input data; the ReLU activation function adds non-linear capability to the network. The first pooling layer applies a 2x2 max pooling operation to downsample the output of the convolution layer, reducing the spatial dimension of the data while retaining important features. The second convolution layer convolves the output of the first pooling layer with 128 kernels of size 3x3 to extract further features, again using ReLU as the activation function. The second pooling layer again applies 2x2 max pooling to downsample the output of the second convolution layer. The flattening layer flattens the pooling output into a one-dimensional vector in preparation for the fully connected layer. The fully connected layer multiplies the flattened feature vector by a weight matrix and applies a non-linear activation function, learning a higher-level feature representation. The output layer applies a suitable transformation to the output of the last fully connected layer to obtain the final result; in classification problems, the output layer typically converts the output into a probability distribution using a softmax function. Through this network structure, the convolutional neural network can effectively extract features from the input data and perform classification or regression tasks through the fully connected layer. The combination of layers and the parameter settings can be adjusted for the specific problem to obtain better performance.
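As a sanity check on the listed layer parameters, the feature-map dimensions can be traced through the network in a few lines. The 64x64 single-channel input, 'valid' 3x3 convolutions with stride 1, and non-overlapping 2x2 pooling are assumptions for illustration; the disclosure does not state the input resolution or padding scheme.

```python
def conv_out(size, kernel=3):
    """Spatial size after a 'valid' convolution with stride 1."""
    return size - kernel + 1

def pool_out(size, window=2):
    """Spatial size after non-overlapping max pooling."""
    return size // window

h = w = 64                                  # hypothetical input resolution
h, w, c = conv_out(h), conv_out(w), 64      # first conv: 64 kernels of 3x3
h, w = pool_out(h), pool_out(w)             # first 2x2 max pool
h, w, c = conv_out(h), conv_out(w), 128     # second conv: 128 kernels of 3x3
h, w = pool_out(h), pool_out(w)             # second 2x2 max pool
flat = h * w * c                            # length of the flattened vector
```

Under these assumptions the flattening layer would feed a 25088-dimensional vector (14x14x128) into the fully connected layer.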
Then, the sequence of rope skipping state feature vectors is passed through a temporal attention mechanism module to obtain a sequence of weighted rope skipping state feature vectors. Here, each feature vector can be given a different weight, so that different time periods of the rope skipping state are weighted differently. The purpose is to highlight information in the state sequence that is important for counting and to attenuate disturbances or noise unrelated to the count.
Accordingly, in one possible implementation, as shown in fig. 4, passing the sequence of rope skipping state feature vectors through the temporal attention mechanism module to obtain the sequence of weighted rope skipping state feature vectors includes: S1221, mapping each rope skipping state feature vector in the sequence into a mapped feature vector through a linear transformation to obtain a sequence of mapped feature vectors; and S1222, obtaining the sequence of weighted rope skipping state feature vectors based on the similarity between the mapped feature vectors in the sequence of mapped feature vectors.
It should be appreciated that a temporal attention mechanism is an attention mechanism for processing time-series data: it assigns a different weight to each element in the series so as to highlight important time periods or elements and mitigate the effect of unrelated ones. In this implementation, the temporal attention mechanism module weights the sequence of rope skipping state feature vectors to highlight information important for counting and to attenuate unrelated disturbances or noise. Specifically, the module comprises two steps. First, the sequence of rope skipping state feature vectors is mapped into a sequence of mapped feature vectors by a linear transformation; this transformation may be a fully connected layer that maps each input feature vector into a new feature space. Second, the sequence of weighted rope skipping state feature vectors is computed based on the similarity between the mapped feature vectors; the similarity may be obtained by computing a distance or similarity measure between them, commonly cosine similarity or Euclidean distance. In this way the module assigns different weights across the sequence, highlighting count-relevant information, which helps improve the performance and robustness of the model on the rope skipping counting task.
Accordingly, in one possible implementation, as shown in fig. 5, the obtaining the sequence of weighted rope-jump status feature vectors based on the similarity between the mapping feature vectors in the sequence of mapping feature vectors includes: s12221, respectively calculating a plurality of similarities between the query vector and other mapping feature vectors in the sequence of mapping feature vectors by taking each mapping feature vector in the sequence of mapping feature vectors as a query vector, and carrying out weighted summation on the plurality of similarities corresponding to each query vector to obtain a plurality of attention weight values; s12222, carrying out normalization processing on the attention weight values to obtain normalized attention weight values; and S12223, respectively weighting each mapping feature vector in the sequence of mapping feature vectors based on the normalized attention weight values to obtain the sequence of weighted rope skipping state feature vectors.
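Substeps S12221-S12223 can be sketched in plain Python. This is one illustrative reading: dot products serve as the similarity measure and the per-query similarities are summed with unit weights before softmax normalization; the disclosure leaves the exact similarity measure and summation weights open.

```python
import math

def softmax(xs):
    """Normalize scores into weights that are in (0, 1) and sum to 1."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def temporal_attention(vectors):
    """S12221: each vector acts as a query; its similarities to the other
    vectors are aggregated into an attention score.
    S12222: scores are normalized into attention weights.
    S12223: each vector is scaled by its normalized weight."""
    scores = []
    for i, q in enumerate(vectors):
        sims = [dot(q, k) for j, k in enumerate(vectors) if j != i]
        scores.append(sum(sims))           # weighted sum (unit weights assumed)
    weights = softmax(scores)
    return [[w * x for x in v] for w, v in zip(weights, vectors)]
```

With this choice, vectors that resemble many others in the sequence (e.g. the recurring airborne pose of a skip cycle) receive larger weights, while outlier frames are attenuated.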
It should be appreciated that normalization is a common data processing technique for converting data to values within a specific range. In one example of the present disclosure, normalization processes the attention weight values so that each lies between 0 and 1 and their sum equals 1, which better represents their relative importance across the whole sequence. Each normalized attention weight value then expresses the importance of the corresponding mapped feature vector, and these values can be used to weight the mapped feature vectors to obtain the sequence of weighted rope skipping state feature vectors. This highlights important feature vectors and mitigates the effect of unimportant ones.
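The normalization step can be as simple as dividing each attention weight by the total, so that the weights form a distribution over the sequence. This is a minimal sketch under that assumption; softmax is another common choice and the disclosure does not fix one.

```python
def normalize(weights):
    """Scale non-negative attention weights so they sum to 1."""
    total = sum(weights)
    if total == 0:
        # Degenerate case: no signal at all, fall back to uniform weights.
        return [1.0 / len(weights)] * len(weights)
    return [w / total for w in weights]

raw = [3.0, 1.0, 1.0]
norm = normalize(raw)   # each value now expresses relative importance
```

Here the first time step ends up with weight 0.6 and the other two with 0.2 each, so the weighted sum of feature vectors is dominated by the most important period.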
In the technical solution of the present disclosure, each rope skipping state feature vector in the sequence expresses the image semantic features of the corresponding image frame in the rope skipping monitoring video. After the temporal attention mechanism module enhances the image semantic features under the local temporal distribution, the differences among the feature distributions of the individual weighted rope skipping state feature vectors increase, which weakens the overall correlation across the feature-vector dimensions of the sequence.
Here, since the sequence of weighted rope skipping state feature vectors is composed of the individual weighted vectors, it can be regarded as an overall feature set built from the local feature sets of each weighted vector, and therefore has a scale-heuristic correlation relationship in the feature-set dimension. Moreover, because the weighted vectors follow the temporally enhanced image semantic distribution of the individual image frames, they also carry the semantic association corresponding to the timing-related image semantic distribution of the whole rope skipping monitoring video.
Based on this, semantic information homogenization activation of feature rank expression is performed herein on each of the weighted rope-skipping state feature vectors.
Accordingly, in one possible implementation, performing semantic information homogenization activation of feature rank expression on the sequence of weighted rope-skipping state feature vectors to obtain the sequence of optimized weighted rope-skipping state feature vectors includes: carrying out multi-source information fusion pre-verification distribution evaluation optimization on each weighted rope-skipping state feature vector in the sequence of weighted rope-skipping state feature vectors using a homogenization activation formula, to obtain the sequence of optimized weighted rope-skipping state feature vectors. In the homogenization activation formula, v_i is the i-th feature value of the weighted rope-skipping state feature vector V, ‖V‖ represents the norm of the weighted rope-skipping state feature vector V, the logarithm is taken base 2, α is a weight hyperparameter, and v_i' is the i-th feature value of the optimized weighted rope-skipping state feature vector.
Here, it is considered that the feature distribution of the weighted rope-skipping state feature vector V in the high-dimensional feature space, mapped from local to whole, can present different mapping modes at different feature distribution levels based on the mixed-scale semantic features in the feature space. The rank-expression semantic information homogenization therefore follows a scale-heuristic mapping strategy based on the feature vector norm, performs feature matching in combination with the feature distribution scale, activates similar feature rank expressions in a similar way, and reduces the correlation between feature rank expressions with larger differences. This addresses the insufficient association of the feature distributions of the weighted rope-skipping state feature vector under different spatial rank expressions, improves the overall associative expression of the sequence of weighted rope-skipping state feature vectors across the plurality of weighted rope-skipping state feature vectors, and thereby improves the accuracy of decoding feature extraction by the recurrent neural network model and, in turn, the accuracy of the decoded values.
More specifically, in step S130, a rope-skipping count value is determined based on the sequence of optimized weighted rope-skipping state feature vectors. Accordingly, in one possible implementation, determining the rope-skipping count value based on the sequence of optimized weighted rope-skipping state feature vectors includes: passing the sequence of optimized weighted rope-skipping state feature vectors through a decoder based on a recurrent neural network model to obtain a decoded value, wherein the decoded value is used to represent the rope-skipping count value.
Further, the weighted sequence of rope-skipping state feature vectors is passed through a decoder based on a recurrent neural network model to obtain a decoded value, and the decoded value is used to represent the rope-skipping count value. That is, by inputting the weighted rope-skipping state feature vector sequence into a decoder based on a recurrent neural network (Recurrent Neural Network, RNN) model, a corresponding decoded value may be obtained to represent the rope-skipping count. A recurrent neural network is a neural network model capable of processing sequence data: it can memorize and exploit context, model sequence information, and output corresponding results. In the rope-skipping counting problem, the weighted rope-skipping state feature vector sequence is input into the decoder, and the decoder infers and outputs the rope-skipping count value, i.e., the decoded value, from the feature information in the sequence. In practical applications, the decoded value can serve as the rope-skipping count, helping to evaluate the effect of rope-skipping activity, monitor fitness progress, and formulate training plans.
It should be appreciated that the recurrent neural network (Recurrent Neural Network, RNN) is a neural network model with recurrent connections. The recurrent neural network is capable of taking into account contextual information and has memory capabilities when processing the sequence data. An important feature of the recurrent neural network is that it can accept variable length input sequences and use the same weight parameters at each time step. In this implementation, the recurrent neural network model is used as a decoder to convert the optimally weighted rope-jump state feature vector sequence into decoded values, which are used to represent the rope-jump count values. The decoder functions to map an input sequence to an output sequence. In this case, the decoder receives as input the optimized weighted rope jump state feature vector sequence and processes it through the recurrent neural network model. The recurrent neural network model takes into account the contextual information in the sequence and gradually generates an output sequence. At each time step, the recurrent neural network model generates an output based on the current input and the hidden state of the previous time step, and takes the output as the input of the next time step. This process will continue until a complete decoded value is generated. The decoded value is used to represent a rope skipping count value, which can be predicted or generated by a recurrent neural network model by inputting an optimally weighted rope skipping state feature vector sequence into a decoder.
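The decoding step can be illustrated with a minimal numerical sketch. The Elman-style recurrence, the random weight initialization, and the class name `RNNCountDecoder` below are illustrative assumptions; the disclosure does not specify the decoder's architecture, and in practice the weights would be learned during training rather than drawn at random.

```python
import numpy as np

class RNNCountDecoder:
    """Minimal Elman-RNN decoder sketch: maps a (T, D) feature-vector
    sequence to a single scalar, standing in for the rope-skip count.
    Weight shapes and values are illustrative, not a trained model."""

    def __init__(self, input_dim: int, hidden_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W_xh = rng.normal(0.0, 0.1, (input_dim, hidden_dim))
        self.W_hh = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))
        self.b_h = np.zeros(hidden_dim)
        self.W_hy = rng.normal(0.0, 0.1, hidden_dim)

    def decode(self, sequence: np.ndarray) -> float:
        # The hidden state carries context from earlier time steps forward,
        # so each step's output depends on the current input and the
        # previous hidden state, as described above.
        h = np.zeros(self.W_hh.shape[0])
        for x in sequence:                       # one step per frame vector
            h = np.tanh(x @ self.W_xh + h @ self.W_hh + self.b_h)
        # Read the count estimate off the final hidden state.
        return float(h @ self.W_hy)
```

The same weight matrices are reused at every time step, which is what lets the decoder accept variable-length input sequences.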
Further, the present disclosure also provides optimization of the human-shape detection model and optimization of the rope detection model. The input end of the backbone network uses convolutional downsampling instead of simple image scaling to reduce the input image to the specified 320 resolution while keeping the number of image channels unchanged; compared with simple image scaling, this retains more feature information while scaling the image and simultaneously obtains higher-level abstract image features. Specifically, the first convolution layer downsamples the input 1280-resolution image to 640 resolution with a 3×3 convolution kernel; the second convolution layer downsamples the 640-resolution image to 320 resolution with a 3×3 convolution kernel, and the 320-resolution image serves as the input of the next feature extraction layer of the network. When detecting and outputting the human shape, a branch is added to output five coordinate points: the top of the head and the palm and sole coordinates of the hands and feet. During training, in addition to the regression loss of the human-shape frame, the loss function also computes the loss for the positions of these five coordinate points. Compared with computing only the regression loss of the human-shape frame, this allows the integrity of the detected human shape to be checked and avoids truncated-limb images, such as when part of the body leaves the camera view.
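The two stride-2 downsampling layers can be sketched as follows. This is a minimal single-kernel sketch assuming one shared 3×3 kernel applied per channel with stride 2 and zero padding of 1; a real backbone layer would use learned, multi-channel kernels, and the function name `conv_downsample` is illustrative.

```python
import numpy as np

def conv_downsample(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Stride-2 3x3 convolution applied per channel: halves the spatial
    resolution while keeping the channel count, unlike plain scaling.
    image: (H, W, C); kernel: (3, 3). Zero padding of 1 gives H/2 x W/2."""
    h, w, c = image.shape
    padded = np.pad(image, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((h // 2, w // 2, c))
    for i in range(h // 2):
        for j in range(w // 2):
            # 3x3 neighborhood at stride 2, contracted against the kernel.
            patch = padded[2 * i:2 * i + 3, 2 * j:2 * j + 3, :]
            out[i, j] = np.tensordot(kernel, patch, axes=([0, 1], [0, 1]))
    return out
```

Applying the function twice mirrors the described pipeline: a 1280-resolution input goes to 640 and then to 320 resolution, with the channel count unchanged throughout.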
Regarding the optimized part of the rope detection model, an improved filtering algorithm is adopted to denoise and enhance the image for the case of a blurred rope input image. The method is as follows: first, conventional mean filtering and median filtering are applied to the rope image O to obtain images A and B respectively; subtracting O from A and from B yields images A1 and B1, which represent the particle-noise and blur-noise components of O respectively; C = A1 + B1 represents the net noise of the image, and O − C is the final filtered image. To reduce the false-detection rate of the rope when many rope-like objects are present in the natural environment, field-associated feature labeling is adopted in data processing: for example, a hand-held rope is an effective label, and otherwise the label is ineffective. In data preprocessing, random RGB-channel exchange plus HSV color adjustment is adopted instead of pure HSV color adjustment alone. Under the condition of fewer colored-rope samples, this method can augment rope samples of various colors and improve the detection capability of the model.
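The combined-filter denoising step can be sketched in NumPy. This is a minimal sketch assuming 3×3 filter windows with edge padding; the window size and border handling are not specified in the disclosure, and the helper names `_filter3` and `denoise` are illustrative.

```python
import numpy as np

def _filter3(img: np.ndarray, reduce_fn) -> np.ndarray:
    """Apply a 3x3 neighborhood statistic (mean or median) to a 2-D image."""
    padded = np.pad(img, 1, mode="edge")
    stacked = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(3) for j in range(3)])
    return reduce_fn(stacked, axis=0)

def denoise(O: np.ndarray) -> np.ndarray:
    """Sketch of the combined-filter denoising described above:
    A = mean-filtered O, B = median-filtered O; A1 = A - O and B1 = B - O
    estimate the particle-noise and blur-noise components; C = A1 + B1
    is the net noise, and O - C is the final filtered image."""
    A = _filter3(O, np.mean)      # conventional mean filtering
    B = _filter3(O, np.median)    # conventional median filtering
    A1 = A - O                    # particle-noise component
    B1 = B - O                    # blur-noise component
    C = A1 + B1                   # net noise estimate
    return O - C
```

On a constant image the noise estimate is zero and the input passes through unchanged, which is a quick sanity check on the sign conventions.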
In summary, according to the automatic counting method for the conventional rope skipping test disclosed by the embodiment of the disclosure, the rope skipping times of a rope-skipping person can be accurately and conveniently obtained.
Fig. 6 shows a block diagram of an automatic counting system 100 for conventional rope skipping testing, according to an embodiment of the present disclosure. As shown in fig. 6, an automatic counting system 100 for a conventional rope skipping test according to an embodiment of the present disclosure includes: the monitoring video acquisition module 110 is used for acquiring a rope skipping monitoring video of the monitored object; the feature vector extraction module 120 is configured to extract a sequence of feature vectors of the optimized and weighted rope skipping state from the rope skipping monitoring video; and a rope-skipping count value determining module 130, configured to determine a rope-skipping count value based on the sequence of the optimized weighted rope-skipping state feature vectors.
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the automatic counting system 100 for a conventional rope skipping test described above have been described in detail in the above description of the automatic counting method for a conventional rope skipping test with reference to fig. 1 to 5, and thus, repetitive descriptions thereof will be omitted.
As described above, the automatic counting system 100 for a conventional rope-skipping test according to the embodiment of the present disclosure may be implemented in various wireless terminals, such as a server or the like having an automatic counting algorithm for a conventional rope-skipping test. In one possible implementation, the automatic counting system 100 for conventional rope skipping testing according to embodiments of the present disclosure may be integrated into the wireless terminal as one software module and/or hardware module. For example, the automatic counting system 100 for conventional rope skipping tests may be a software module in the operating system of the wireless terminal, or may be an application developed for the wireless terminal; of course, the automatic counting system 100 for conventional rope skipping tests may equally be one of the many hardware modules of the wireless terminal.
Alternatively, in another example, the automatic counting system 100 for the conventional rope-skipping test and the wireless terminal may be separate devices, and the automatic counting system 100 for the conventional rope-skipping test may be connected to the wireless terminal through a wired and/or wireless network and transmit interactive information in a contracted data format.
Fig. 7 illustrates an application scenario diagram of an automatic counting method for a conventional rope skipping test according to an embodiment of the present disclosure. As shown in fig. 7, in this application scenario, first, a rope-skipping monitoring video (e.g., D shown in fig. 7) of a monitored object is acquired by a camera (e.g., C shown in fig. 7), and then, the rope-skipping monitoring video is input to a server (e.g., S shown in fig. 7) in which an automatic counting algorithm for a conventional rope-skipping test is deployed, wherein the server can process the rope-skipping monitoring video using the automatic counting algorithm for a conventional rope-skipping test to obtain a decoded value representing a rope-skipping count value.
Further, there is also provided an automatic counting method including: Step 1: A dedicated rope-skipping test camera is set up at the test site; the tester stands in the middle of the camera's field of view, ready to skip rope, waiting for the rope-skipping instruction to be issued. After the instruction is issued, the camera starts shooting, capturing a complete human-shape image and rope image of the tester. Step 2: During online operation of the rope-skipping test equipment, the camera works continuously, capturing every frame of the tester's rope-skipping action; each frame is analyzed and processed by the algorithm, and the key information of each frame is obtained and stored. Step 3: All pedestrian position coordinates P0 in the current frame are detected by a pedestrian positioning detection algorithm; the tester's coordinate position is screened out by a region target search algorithm, eliminating interference from bystanders' coordinate information, and the tester's position coordinates P1 are locked. The pedestrian positioning detection algorithm may adopt a yolov5_person target detection algorithm, whose positioning detection can be briefly described as: sample labeling, model training and updating, and inference detection. Pedestrian position information is collected and labeled to produce a pedestrian labeling data set, the pedestrian detection model is continuously and iteratively updated, and the model with minimum loss is selected for pedestrian position detection. The region target search algorithm may adopt an iterative intersection-over-union (IoU) maximization algorithm, briefly described as: preset a test area Z, iterate over the IoU of each box in P0 with Z, and select as P1 the pedestrian position coordinates whose IoU with Z is greater than 0 and maximal.
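The region target search of Step 3 can be sketched as follows; the (x1, y1, x2, y2) box format and the function names `iou` and `lock_tester` are illustrative assumptions, not taken from the disclosure.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / max(area_a + area_b - inter, 1e-9)

def lock_tester(P0, Z):
    """Region target search: among all detected pedestrian boxes P0,
    pick the one whose IoU with the preset test area Z is greatest,
    requiring IoU > 0; returns None if nobody overlaps the test area."""
    best = max(P0, key=lambda p: iou(p, Z), default=None)
    if best is None or iou(best, Z) <= 0:
        return None
    return best   # this box is P1
```

Requiring a strictly positive IoU is what discards bystanders standing entirely outside the preset test area Z.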
Step 4: in the operation process of the algorithm, cutting a tester image I according to the input P1, and detecting the position Kp of the human body key point required by rope skipping counting through a rope skipping human body key point detection algorithm; the cutting tester image I needs to contain complete human body images and complete rope image information; the rope skipping human body key point detection algorithm can adopt an optimized alpha phase_lite human body posture estimation algorithm, and the algorithm can be briefly described as three steps of key point labeling data set making, model optimization and training and key point detection. The key point labeling data set is manufactured again according to key point information required by rope skipping, an alpha algorithm model is modified to be suitable for training of the data set, and a loss minimum model is obtained through iterative optimization training again and used for detecting key points of a rope skipping human body; the key points Kp of the rope skipping human body mainly comprise wrist joint information W1 and W2 of two hands, ankle joint information A1 and A2 of two feet and knee joint information K1 and K2 of two feet. Step 5: in the algorithm running process, detecting the position information L1 and L2 of the rope through a rope key information positioning detection algorithm according to the input image I; the rope key information positioning detection algorithm can adopt a yolov5_rope algorithm, and the algorithm can be briefly described into three steps of making a rope key information data set, optimizing and compressing a yolov5 algorithm model, retraining and rope detection. 
Position information containing the key parts at the two ends of the rope is collected and labeled to produce a rope key information labeling data set; the rope key information positioning detection model is continuously and iteratively updated, and the model with minimum loss is selected for rope key information positioning detection. Step 6: The ascending or descending state of the rope is solved using the geometric position relationship between the two-hand wrist joint information W1 and W2 and the rope position information L1 and L2. The geometric position relationship refers to straight-line included angles, corner-point distances, point-line distances, and the like. More specifically, the geometric position relationship between the wrist joint information W1 and W2 and the rope position information L1 and L2 refers to the pixel distance and pixel coordinate position relationship between the center points of W1 and L1 and of W2 and L2, respectively. The pixel distances between W1 and L1 and between W2 and L2 are calculated as Euclidean distances, denoted D1 and D2 respectively. The pixel coordinate position relationship between W1 and L1 refers to the relationship between their pixel ordinates: if the pixel ordinate of W1 is greater than that of L1, the rope is above, denoted Up1; otherwise it is below, denoted Down1. Similarly, the pixel ordinate relationship between W2 and L2 can be expressed as Up2 or Down2. Step 7: During online operation of the rope-skipping test equipment, a period in which Down1 to Up1 to Down1 is detected while Down2 to Up2 to Down2 is simultaneously satisfied is an effective rope-throwing process, and the effective rope-skipping process records the ankle joint information {A1} and {A2} and the knee joint information {K1} and {K2} of all frames.
Step 8: calculating the jump height according to the geometric position relation of the ankle joint information A1 and A2 and the knee joint information K1 and K2; wherein the calculation method of the jump height J1 of A1 is the extreme value in the { A1} sequence, and the calculation method of the jump height J2 of A2 is the extreme value in the { A2} sequence. Wherein J1 is greater than delta1 or J2 is greater than delta2 is a valid jump, otherwise it is considered an invalid jump; wherein the delta1 is an empirical value, the method of calculation is 0.1 times the value of the difference in the { K1} sequence, and similarly, the method of calculation of the delta2 is 0.1 times the value of the difference in the { K2} sequence. Step 9: if the effective rope throwing process comprises effective jump, the number of the rope jumps is increased by one automatically, and meanwhile, in the shooting process of the camera, the algorithm is continuously monitored, so that the real-time performance of the whole rope jump process counting is finished.
It can be understood that the method improves the accuracy and the robustness of rope skipping counting and realizes the automation of rope skipping counting by detecting the key position information of the human body and the position information of the rope.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (7)

1. An automatic counting method for a conventional rope skipping test, comprising: acquiring a rope skipping monitoring video of a monitored object; extracting a sequence of optimized weighted rope skipping state feature vectors from the rope skipping monitoring video; and determining a rope skipping count value based on the sequence of optimized weighted rope skipping state feature vectors.
2. The automatic counting method for conventional rope-skipping testing according to claim 1, wherein extracting the sequence of optimized weighted rope-skipping state feature vectors from the rope-skipping monitoring video comprises: extracting rope skipping state characteristics of the rope skipping monitoring video to obtain a sequence of rope skipping state characteristic vectors; passing the sequence of the rope skipping state feature vectors through a time attention mechanism module to obtain a weighted sequence of the rope skipping state feature vectors; and carrying out semantic information homogenization activation of feature rank expression on the sequence of the weighted rope skipping state feature vector to obtain the sequence of the optimized weighted rope skipping state feature vector.
3. The automatic counting method for conventional rope skipping test according to claim 2, wherein the rope skipping monitoring video is subjected to rope skipping state feature extraction to obtain a sequence of rope skipping state feature vectors, comprising: and respectively passing each image frame in the rope skipping monitoring video through a rope skipping state feature extractor based on a convolutional neural network model to obtain a sequence of the rope skipping state feature vector.
4. An automatic counting method for a conventional rope-jump test according to claim 3, wherein passing the sequence of rope-jump status feature vectors through a time attention mechanism module to obtain a sequence of weighted rope-jump status feature vectors comprises: mapping each rope skipping state feature vector in the sequence of the rope skipping state feature vectors into a mapping feature vector through linear transformation to obtain a sequence of the mapping feature vector; and obtaining the sequence of weighted rope-jump status feature vectors based on the similarity between each mapping feature vector in the sequence of mapping feature vectors.
5. The automatic counting method for a conventional rope-skipping test of claim 4, wherein obtaining the sequence of weighted rope-skipping state feature vectors based on similarities between individual ones of the sequence of mapped feature vectors comprises: taking each mapping feature vector in the sequence of mapping feature vectors as a query vector, respectively calculating a plurality of similarities between the query vector and other mapping feature vectors in the sequence of mapping feature vectors, and carrying out weighted summation on the plurality of similarities corresponding to each query vector to obtain a plurality of attention weight values; normalizing the plurality of attention weight values to obtain a plurality of normalized attention weight values; and weighting each mapping feature vector in the sequence of mapping feature vectors based on the plurality of normalized attention weight values respectively to obtain the sequence of weighted rope skipping state feature vectors.
6. The automatic counting method for conventional rope-skipping testing of claim 5, wherein performing semantic information uniformity activation of feature rank expression on the sequence of weighted rope-skipping state feature vectors to obtain the sequence of optimized weighted rope-skipping state feature vectors comprises: carrying out multi-source information fusion pre-verification distribution evaluation optimization on each weighted rope-skipping state feature vector in the sequence of weighted rope-skipping state feature vectors using a homogenization activation formula, to obtain the sequence of optimized weighted rope-skipping state feature vectors; wherein, in the homogenization activation formula, v_i is the i-th feature value of the weighted rope-skipping state feature vector V, ‖V‖ represents the norm of the weighted rope-skipping state feature vector V, the logarithm is taken base 2, α is a weight hyperparameter, and v_i' is the i-th feature value of the optimized weighted rope-skipping state feature vector.
7. The automatic counting method for a conventional rope-skipping test of claim 6, wherein determining a rope-skipping count value based on the sequence of optimized weighted rope-skipping state feature vectors comprises: passing the sequence of optimized weighted rope-skipping state feature vectors through a decoder based on a recurrent neural network model to obtain a decoded value, wherein the decoded value is used to represent a rope-skipping count value.
CN202311246652.9A 2023-09-26 2023-09-26 Automatic counting method for conventional rope skipping test Pending CN117612245A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311246652.9A CN117612245A (en) 2023-09-26 2023-09-26 Automatic counting method for conventional rope skipping test


Publications (1)

Publication Number Publication Date
CN117612245A true CN117612245A (en) 2024-02-27

Family

ID=89956757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311246652.9A Pending CN117612245A (en) 2023-09-26 2023-09-26 Automatic counting method for conventional rope skipping test

Country Status (1)

Country Link
CN (1) CN117612245A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110956139A (en) * 2019-12-02 2020-04-03 郑州大学 Human motion action analysis method based on time series regression prediction
KR20200118660A (en) * 2019-04-08 2020-10-16 한국전자통신연구원 Apparatus and method for motion recognition of a jump rope
CN112464808A (en) * 2020-11-26 2021-03-09 成都睿码科技有限责任公司 Rope skipping posture and number identification method based on computer vision
CN115346149A (en) * 2022-06-21 2022-11-15 浙江大沩人工智能科技有限公司 Rope skipping counting method and system based on space-time diagram convolution network
CN116484874A (en) * 2023-03-15 2023-07-25 北京智美源素科技有限公司 Video generation method, device, storage medium and computer equipment
CN116492634A (en) * 2023-06-26 2023-07-28 广州思林杰科技股份有限公司 Standing long jump testing method based on image visual positioning



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240510

Address after: Room 619, No. 2 Tengfei 1st Street, Zhongxin Guangzhou Knowledge City, Huangpu District, Guangzhou City, Guangdong Province, 510700, Room B504, Zhongke Chuanggu Zhongchuang Space

Applicant after: Guangzhou Yuedong Artificial Intelligence Technology Co.,Ltd.

Country or region after: China

Address before: Room 1503, No. 266 Kefeng Road, Huangpu District, Guangzhou City, Guangdong Province, 510700

Applicant before: Feixiang Technology (Guangzhou) Co.,Ltd.

Country or region before: China

TA01 Transfer of patent application right

Effective date of registration: 20240524

Address after: Room 3040-2, South Software Building, Guangzhou Nansha Information Technology Park, No. 2 Huanshi Avenue South, Nansha District, Guangzhou, Guangdong Province, 511400

Applicant after: Guangzhou Leti Technology Co.,Ltd.

Country or region after: China

Address before: Room 619, No. 2 Tengfei 1st Street, Zhongxin Guangzhou Knowledge City, Huangpu District, Guangzhou City, Guangdong Province, 510700, Room B504, Zhongke Chuanggu Zhongchuang Space

Applicant before: Guangzhou Yuedong Artificial Intelligence Technology Co.,Ltd.

Country or region before: China