CN112381118A - Method and device for evaluating a university dance test - Google Patents

Method and device for evaluating a university dance test

Info

Publication number
CN112381118A
CN112381118A (application CN202011147588.5A; granted as CN112381118B)
Authority
CN
China
Prior art keywords
dance
space
test
evaluation
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011147588.5A
Other languages
Chinese (zh)
Other versions
CN112381118B
Inventor
邵进圆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baise University
Original Assignee
Baise University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baise University
Priority to CN202011147588.5A
Publication of CN112381118A
Application granted granted Critical
Publication of CN112381118B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 - Services
    • G06Q 50/20 - Education
    • G06Q 50/205 - Education administration or guidance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Business, Economics & Management (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Educational Administration (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Educational Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Psychiatry (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the application disclose a method, a device, a computer device and a storage medium for evaluating a university dance test, belonging to the technical field of dance teaching. The method comprises: collecting a live video of a student's dance test; constructing a dance picture set; feeding the dance picture set into a preset TLSVM model and identifying the different action types; acquiring the space-time regions corresponding to the different action types and constructing the space-time region set corresponding to the training set; obtaining the test space-time region corresponding to each action in the live video; determining, from the training space-time region set, the space-time regions corresponding one to one to the test space-time regions, and identifying the dance actions in the live video on that basis; and comparing the dance actions identified in the live video with the preset test dance actions, determining the similarity between them, and taking the similarity as the evaluation result to complete the university dance test evaluation. The application avoids the influence of the invigilating teacher's subjective factors and allows dance evaluation to be carried out fairly.

Description

Method and device for evaluating a university dance test
Technical Field
The application relates to the technical field of dance teaching, and in particular to a method, a device, a computer device and a storage medium for evaluating a university dance test.
Background
With the continuous development of the times, the internet has come into wide use, and the traditional dance teaching mode has developed and innovated along with it. Reforming and innovating traditional dance teaching through internet information technology fosters students' individual development, independent learning and creativity, lets students study independently over the internet, and cultivates students into outstanding, well-rounded dance talents. The development of new media brings new ways and modes for innovating dance teaching and applying the internet, and creates opportunities for the reform and innovation of dance teaching; the development of science, technology and the internet breaks through traditional dance teaching and promotes its reform and innovation.
The existing dance evaluation method mainly follows the traditional mode: the student being tested performs a dance in an examination room and a dance teacher scores the performance on site. This is time-consuming, and the evaluation result is easily affected by the personal subjectivity of the evaluating teacher, so it is not fair enough. The prior art therefore has the problems that dance evaluation is time-consuming and that the evaluating teacher easily introduces personal subjective factors.
Disclosure of Invention
The embodiments of the application aim to provide a method, a device, a computer device and a storage medium for evaluating a university dance test, so as to solve the prior-art problems that dance evaluation is time-consuming and that the evaluating teacher easily introduces personal subjective factors.
In order to solve the technical problem, an embodiment of the application provides a dance test assessment method for a university, which adopts the following technical scheme:
a university dance test evaluation method, comprising:
collecting a live video of a dance test of a student;
acquiring a large number of dance action pictures only labeled with action types on a network, and constructing a dance picture set;
taking the dance picture set as a training set, transmitting the dance picture set into a preset TLSVM model, and identifying different action types in the training set;
acquiring image feature spaces corresponding to different action types, acquiring video feature spaces mapped by the image feature spaces based on a linear transformation method of random clustering forests, determining a space-time region corresponding to each video feature space, and constructing a space-time region set corresponding to the training set;
taking the field video as a test set, acquiring a video feature space corresponding to each action in the test set, and determining a space-time area corresponding to the video feature space as a test space-time area;
determining a space-time region corresponding to the test space-time region based on the space-time region set, and identifying different dance actions in the field video based on the space-time region;
and comparing the dance actions identified in the field video with preset dance testing actions, determining the similarity between the dance actions and the preset dance testing actions based on a preset algorithm, and taking the similarity as an evaluation result to finish the university dance test evaluation.
Further, the collecting of the live video of the student's dance test includes:
recording the dance video by shooting during the student's dance examination, and taking the recorded content as the collected live video.
Further, the obtaining of a large number of dance motion pictures labeled only with motion categories on the network includes:
taking the dance action pictures as keywords, searching on the Internet based on a big data searching mode, and downloading a large number of dance action pictures.
Further, the method for recognizing the different action types in the training set by the preset TLSVM model comprises the following steps:
acquiring figure images of different dance pictures in the training set, and determining different limb positions in the figure images;
constructing a set of limb positions based on the different limb positions, wherein the set of limb positions comprises: a left hand set, a right hand set, a left leg set, a right leg set, a head set and a human body set;
determining space-time regions corresponding to the different limb positions based on the limb position set;
and identifying different action types in the training set by judging the space-time areas corresponding to the different limb positions.
Further, the acquiring the image feature space corresponding to the different motion types includes:
acquiring different limb positions corresponding to the different action types, and identifying extreme values of the spaces corresponding to the different limbs based on a Hessian matrix mode;
constructing scale spaces corresponding to different limbs based on the extreme values;
acquiring feature points in the scale space, filtering and accurately positioning the feature points;
and acquiring the main direction of the different characteristic points and the characteristic values of the different characteristic points, constructing corresponding shape characteristics of the different characteristic points based on the main direction and the characteristic values, and taking the shape characteristics as image characteristic spaces corresponding to the different action types.
Further, the obtaining a video feature space corresponding to each action in the test set includes:
performing video segmentation processing on the test set, and segmenting the test set into coherent images;
acquiring color features, texture features and figure shape features of the image;
and respectively obtaining image feature spaces of the images, and obtaining video feature spaces mapped by the image feature spaces based on a linear transformation method of random clustering forests.
Further, the determining a spatiotemporal region corresponding to the video feature space as a test spatiotemporal region includes:
acquiring a two-dimensional image block corresponding to a person in a video;
constructing three-dimensional image blocks of the two-dimensional image blocks based on the video time sequence;
and taking the cube block corresponding to the three-dimensional image block as a test space-time area corresponding to the video feature space.
Further, the determining the spatiotemporal region corresponding to the test spatiotemporal region based on the set of spatiotemporal regions includes:
and judging the space-time region corresponding to the test space-time region by using a comparison mode.
Further, the determining of the similarity between the two based on a preset algorithm and the taking of the similarity as the evaluation result to complete the university dance test evaluation include:
if the student is evaluated individually, directly comparing the dance actions identified in the live video with the preset test dance actions and judging the action similarity; if the similarity exceeds a preset qualification threshold, judging the student's evaluation result to be qualified, otherwise judging it to be unqualified;
if the students are evaluated as a group, comparing the dance actions identified in the live video with the preset test dance actions, judging the action similarity for every student in the group and obtaining the average similarity; if the average similarity exceeds the preset qualification threshold, the group's evaluation result is qualified, otherwise it is unqualified.
In order to solve the technical problem, an embodiment of the application further provides a device for evaluating a dance test of a university, which adopts the following technical scheme:
a university dance test evaluation device, comprising:
the video acquisition module is used for acquiring the field video of the dance test of the student;
the dance picture set building module is used for obtaining a large number of dance action pictures only labeled with action types on a network and building a dance picture set;
the action type recognition module is used for taking the dance picture set as a training set, transmitting the dance picture set into a preset TLSVM model, and recognizing different action types in the training set;
the training spatiotemporal region set generation module is used for acquiring image feature spaces corresponding to different action types, acquiring video feature spaces mapped by the image feature spaces based on a linear transformation method of random clustering forests, determining a spatiotemporal region corresponding to each video feature space, and constructing a spatiotemporal region set corresponding to the training set;
the testing space-time region determining module is used for taking the field video as a testing set, acquiring a video feature space corresponding to each action in the testing set, and determining a space-time region corresponding to the video feature space as a testing space-time region;
the dance action recognition module is used for determining a space-time region corresponding to the test space-time region based on the space-time region set and recognizing different dance actions in the field video based on the space-time region;
and the dance test evaluation module is used for comparing the dance motions recognized in the field video with preset test dance motions, determining the similarity between the dance motions and the preset test dance motions based on a preset algorithm, and taking the similarity as an evaluation result to finish the dance test evaluation of the university.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which adopts the following technical solutions:
a computer device comprising a memory in which a computer program is stored and a processor, the processor implementing the steps of a university dance test evaluation method set forth in an embodiment of the present application when executing the computer program.
In order to solve the above technical problem, an embodiment of the present application further provides a nonvolatile computer-readable storage medium, which adopts the following technical solutions:
a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a university dance test evaluation method set forth in an embodiment of the present application.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
the embodiment of the application discloses a method, a device, equipment and a storage medium for testing and evaluating a dance test of a university, wherein the method comprises the steps of collecting a field video of the dance test of a student; constructing a dance picture set; transmitting the dance picture set into a preset TLSVM model, and identifying different action types; acquiring space-time regions corresponding to different action types, and constructing a space-time region set corresponding to the training set; obtaining a test space-time region corresponding to each action in a field video; determining space-time regions corresponding to the test space-time regions one by one from the training space-time region set, and identifying dance actions in the field video based on the space-time regions; and comparing the dance motions recognized in the live video with preset dance testing motions, determining the similarity between the dance motions and the preset dance testing motions, and taking the similarity as an evaluation result to finish the evaluation of the university dance test. This application avoids invigilating mr's subjective factor influence, accomplishes fairly to carry out dance evaluation.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings needed for describing the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
FIG. 1 is a diagram of an exemplary system architecture to which embodiments of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a university dance test evaluation method in an embodiment of the present application;
FIG. 3 is a logic diagram illustrating the implementation of one embodiment of the method for assessing a university dance test according to the embodiment of the present application;
FIG. 4 is a schematic structural diagram of an embodiment of the university dance test evaluation apparatus according to the embodiment of the present application;
FIG. 5 is a schematic structural diagram of an embodiment of a computer device in an embodiment of the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III, mpeg compression standard Audio Layer 3), MP4 players (Moving Picture Experts Group Audio Layer IV, mpeg compression standard Audio Layer 4), laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that the method for evaluating the university dance test provided in the embodiment of the present application is generally performed by a server/terminal device, and accordingly, the device for evaluating the university dance test is generally disposed in the server/terminal device.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow chart of one embodiment of a university dance test evaluation method of the present application is shown, the method comprising the steps of:
step 201, collecting a live video of a dance test of a student.
In this embodiment of the application, the collecting of the live video of the student's dance test includes: recording the dance video by shooting during the student's dance examination and taking the recorded content as the collected live video.
And 202, acquiring a large number of dance motion pictures only labeled with motion types on the network, and constructing a dance picture set.
In this embodiment of the application, the obtaining of a large number of dance motion pictures only labeled with motion categories on a network includes: taking the dance action pictures as keywords, searching on the Internet based on a big data searching mode, and downloading a large number of dance action pictures.
And 203, taking the dance picture set as a training set, transmitting the dance picture set into a preset TLSVM model, and recognizing different action types in the training set.
In the embodiment of the present application, the recognizing, by the preset TLSVM model, different action types in the training set includes the following steps: acquiring figure images of different dance pictures in the training set, and determining different limb positions in the figure images; constructing a set of limb positions based on the different limb positions, wherein the set of limb positions comprises: a left hand set, a right hand set, a left leg set, a right leg set, a head set and a human body set; determining space-time regions corresponding to the different limb positions based on the limb position set; and identifying different action types in the training set by judging the space-time areas corresponding to the different limb positions.
Explanation: the figure images are acquired from the different dance pictures in the training set and the different limb positions in the figure images are determined as follows: the elements of the dance picture set are compressed proportionally; each picture is processed with the GrabCut algorithm to obtain a mask image of the person; edge detection is performed on the person mask image, the absolute background area is separated off, the absolute background area is colour-sampled, and the RGB value of the absolute background is calculated; morphological erosion and dilation are applied separately to the mask image that still contains background residue, and the two results are subtracted to obtain an edge band, which is the unknown edge area; each pixel of the unknown edge area is re-assigned to the background or the foreground by comparing the RGB value of the absolute background with the colour value of the pixel (the closer the distance, the more likely the pixel belongs to the background); the edge is smoothed and the segmented image, i.e. the figure image in the dance picture, is output; finally, the figure image is segmented separately to determine the different limb positions in it.
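The following Python sketch illustrates this segmentation step under stated assumptions: the OpenCV GrabCut and morphology calls are real, but the rough person rectangle `rect`, the colour-distance threshold of 30 and all function names are illustrative choices rather than values taken from the patent.

```python
# Minimal sketch of the GrabCut-based person segmentation described above.
import cv2
import numpy as np

def extract_person_mask(image_path, rect, scale=0.5):
    """rect: (x, y, w, h) rough box around the person (assumed input).
    Returns a binary person mask and the mean background colour (BGR)."""
    img = cv2.imread(image_path)
    img = cv2.resize(img, None, fx=scale, fy=scale)  # proportional compression

    mask = np.zeros(img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

    # Sure/probable foreground pixels form the person mask.
    fg_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)

    # Sample the absolute background to get its reference colour (BGR as loaded by OpenCV).
    bg_pixels = img[fg_mask == 0]
    bg_colour = bg_pixels.mean(axis=0) if len(bg_pixels) else np.zeros(3)

    # Erode and dilate separately, then subtract to obtain the unknown edge band.
    kernel = np.ones((5, 5), np.uint8)
    edge_band = cv2.subtract(cv2.dilate(fg_mask, kernel), cv2.erode(fg_mask, kernel))

    # Re-assign edge pixels by colour distance to the background reference.
    ys, xs = np.where(edge_band > 0)
    for y, x in zip(ys, xs):
        dist = np.linalg.norm(img[y, x].astype(float) - bg_colour)
        fg_mask[y, x] = 0 if dist < 30 else 255  # 30 is an assumed threshold

    fg_mask = cv2.medianBlur(fg_mask, 5)  # smooth the edge
    return fg_mask, bg_colour
```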
Explanation: the limb position set is constructed from the different limb positions and comprises a left-hand set, a right-hand set, a left-leg set, a right-leg set, a head set and a human torso set. The concrete approach is as follows: before the figure images of the different dance pictures in the training set are acquired, the elements of the dance picture set are given distinguishing identifiers; after a figure image is extracted, its identifier is attached to it and then to its different limb positions, and the same limb under different identifiers is added to one set, thereby constructing the limb position set.
For example: suppose the dance picture set contains 10,000 dance pictures, which are named with the Arabic numerals 1 to 10000 in order to distinguish them. After the figure images in a dance picture are extracted, the numbers 1 to 10000 are likewise used as the names of the figure images; if three figures appear in dance picture 300, they are distinguished as 300a, 300b and 300c. After the limb positions of dance picture 300 are determined, they are labelled left hand_300a, left hand_300b, left hand_300c, right hand_300a, right hand_300b, right hand_300c, left leg_300a, left leg_300b, left leg_300c, right leg_300a, right leg_300b, right leg_300c, head_300a, head_300b, head_300c, torso_300a, torso_300b and torso_300c. The pictures left hand_300a, left hand_300b and left hand_300c are put into the same set as the left-hand pictures from the figure images of the other dance pictures to construct the left-hand set, and the right-hand set, left-leg set, right-leg set, head set and human torso set are constructed in the same way.
Explanation: the space-time regions corresponding to the different limb positions are determined from the limb position set as follows: the edge points of each limb position are determined and their two-dimensional coordinates are acquired; the minimum bounding rectangle of each limb position is determined; the six minimum bounding rectangles that form the limb's cuboid are obtained, generating the minimum bounding cuboid of the limb position, and this minimum bounding cuboid is taken as the corresponding space-time region.
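A minimal sketch of this bounding step is given below. It assumes the limb's edge points are available per frame and that the cuboid is obtained by stacking the per-frame rectangles along the time axis; this stacking interpretation, like every name in the snippet, is an assumption rather than something stated in the patent.

```python
# Minimum bounding rectangle per frame, then the enclosing space-time cuboid.
import numpy as np

def limb_bounding_rect(edge_points):
    """edge_points: (N, 2) array of (x, y) limb edge coordinates.
    Returns the minimum bounding rectangle (x_min, y_min, x_max, y_max)."""
    x_min, y_min = edge_points.min(axis=0)
    x_max, y_max = edge_points.max(axis=0)
    return float(x_min), float(y_min), float(x_max), float(y_max)

def limb_space_time_region(edge_points_per_frame):
    """Stack the per-frame rectangles along the time axis and return the
    minimum enclosing cuboid (x_min, y_min, t_min, x_max, y_max, t_max)."""
    rects = np.array([limb_bounding_rect(p) for p in edge_points_per_frame])
    return (rects[:, 0].min(), rects[:, 1].min(), 0.0,
            rects[:, 2].max(), rects[:, 3].max(), float(len(rects) - 1))
```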
Explanation: the different action types in the training set are identified by judging the space-time regions corresponding to the different limb positions as follows: the space-time region corresponding to the human torso is taken as the reference region, and the positions of the space-time regions corresponding to the other limb positions are judged relative to this reference region, thereby identifying the types to which the different actions in the training set belong.
For example: suppose the figure image corresponding to a certain element of the dance picture set is numbered 31. In the step above, the space-time regions corresponding to left hand_31, right hand_31, left leg_31, right leg_31, head_31 and torso_31 are obtained respectively. The space-time region corresponding to torso_31 is then taken as the reference region; if the space-time region corresponding to left hand_31 lies to the upper right of the reference region, the action is judged to belong to the left-hand type. The other space-time regions can be identified in the same way, and the types to which the different actions belong are thus determined.
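The relative-position judgement can be sketched as follows; the rectangle format, the centre-based comparison and the label strings are illustrative assumptions that the patent does not specify.

```python
# Coarse relative position of a limb region with respect to the torso region.
def relative_position(limb_region, torso_region):
    """Each region is (x_min, y_min, x_max, y_max). Returns a label such as
    "upper-right" describing where the limb sits relative to the torso."""
    lx = (limb_region[0] + limb_region[2]) / 2.0
    ly = (limb_region[1] + limb_region[3]) / 2.0
    tx = (torso_region[0] + torso_region[2]) / 2.0
    ty = (torso_region[1] + torso_region[3]) / 2.0

    horizontal = "right" if lx > tx else "left"
    vertical = "upper" if ly < ty else "lower"   # image y grows downwards
    return f"{vertical}-{horizontal}"

# e.g. a left-hand region centred above and to the right of torso_31 gives
# "upper-right", which the method reads as the corresponding hand action type.
```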
And 204, acquiring image feature spaces corresponding to different action types, acquiring video feature spaces mapped by the image feature spaces based on a linear transformation method of random clustering forests, determining a space-time region corresponding to each video feature space, and constructing a space-time region set corresponding to the training set.
In this embodiment of the application, the obtaining of the image feature spaces corresponding to the different motion types includes: acquiring different limb positions corresponding to the different action types, and identifying extreme values of the spaces corresponding to the different limbs based on a Hessian matrix mode; constructing scale spaces corresponding to different limbs based on the extreme values; acquiring feature points in the scale space, filtering and accurately positioning the feature points; and acquiring the main direction of the different characteristic points and the characteristic values of the different characteristic points, constructing corresponding shape characteristics of the different characteristic points based on the main direction and the characteristic values, and taking the shape characteristics as image characteristic spaces corresponding to the different action types.
Explanation: the different limb positions corresponding to the different action types are acquired, and the extreme values of the spaces corresponding to the different limbs are identified by means of the Hessian matrix, as follows: the positions of the different limbs in the two-dimensional image are determined; for each pixel point contained in a limb, the second-order partial derivatives of the image in the X and Y directions and the mixed derivative in the XY direction are calculated; the Hessian matrix of the limb at a critical point C(x, y) is H(C). If H(C) is positive definite, the critical point C is a local minimum; if H(C) is negative definite, the critical point C is a local maximum; if H(C) is indefinite, the critical point C is not an extremum. All the identified maximum and minimum points together form the extreme values of the spaces corresponding to the different limbs.
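A hedged sketch of this Hessian test, using finite differences over a greyscale limb region, is shown below; the determinant/trace form of the definiteness check and all names are assumptions consistent with the description rather than quotations from it.

```python
# Hessian-based extremum candidates for a 2-D greyscale limb region.
import numpy as np

def hessian_extrema(gray):
    """gray: 2-D float image of one limb region. Returns boolean maps of
    local-minimum and local-maximum candidates under the Hessian test."""
    # Second-order partial derivatives in x, y and the mixed xy direction.
    gy, gx = np.gradient(gray)
    gxy, gxx = np.gradient(gx)
    gyy, _ = np.gradient(gy)

    det = gxx * gyy - gxy ** 2          # determinant of H(C)
    trace = gxx + gyy

    is_min = (det > 0) & (trace > 0)    # H positive definite -> local minimum
    is_max = (det > 0) & (trace < 0)    # H negative definite -> local maximum
    # det <= 0: indefinite Hessian, the point is not treated as an extremum.
    return is_min, is_max
```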
Explanation: constructing the scale space corresponding to different limbs based on the extreme value, wherein the specific mode is as follows: and obtaining a plurality of extreme points, and determining the scale spaces corresponding to different limbs through the extreme point sets.
Explanation: the method comprises the following specific implementation modes of acquiring the feature points in the scale space, filtering the feature points and accurately positioning the feature points: and taking the points in the extreme point set as the characteristic points in the scale space, taking the points of the maximum value as the maximum characteristic points, taking the points of the minimum value as the minimum characteristic points, distinguishing, and respectively obtaining a point set corresponding to the maximum characteristic points and a point set corresponding to the minimum characteristic points.
Explanation: the main directions of the different feature points and the feature values of the different feature points are acquired, the corresponding shape features of the different feature points are constructed from the main directions and feature values, and the shape features are taken as the image feature spaces corresponding to the different action types, as follows: adjacent maximum points and adjacent minimum points are connected respectively to form point lines; if a point line composed of minimum points lies on each side of a point line composed of maximum points, the direction of that maximum-point line is the main direction of the feature points. The point lines formed by all the maximum points constitute the shape lines of the image, and the two-dimensional spatial area composed of these shape lines is taken as the image feature space corresponding to the different action types.
Explanation: the video feature space onto which the image feature space is mapped is obtained by a linear transformation method based on random clustering forests, as follows: samples are drawn with replacement from the shape lines of the image to construct sub-line sets, each containing the same number of lines as the shape lines of the image; lines may be repeated across different sub-line sets, and elements within the same sub-line set may also be repeated. A sub decision tree is built from each sub-line set; all the lines corresponding to the image feature space form a space line set, which is put into every sub decision tree, and each sub decision tree outputs a space construction result. If the outputs of most of the sub decision trees agree, the space construction result with the largest share is taken as the random clustering forest result. If a matrix transformation of the random clustering forest result changes the basis of its component vectors while leaving their linear combination with respect to that basis unchanged, the random clustering forest result is taken as the video feature space mapped from the image feature space.
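As a rough, non-authoritative sketch of this mapping, the snippet below stands in scikit-learn's RandomTreesEmbedding for the random clustering forest and a truncated SVD for the unspecified change-of-basis matrix transformation; the input shape, the output dimension and every name are assumptions.

```python
# Map image-feature descriptors into a lower-dimensional "video feature space".
from sklearn.ensemble import RandomTreesEmbedding
from sklearn.decomposition import TruncatedSVD

def map_image_to_video_space(shape_features, n_estimators=10, target_dim=64, seed=0):
    """shape_features: (N, D) array of shape-line descriptors.
    Returns an (N, target_dim) matrix used here as the video feature space."""
    # Each tree partitions the descriptors; leaf indices give a sparse one-hot
    # "forest" code per sample, playing the role of the clustering-forest result.
    forest = RandomTreesEmbedding(n_estimators=n_estimators, random_state=seed)
    leaf_code = forest.fit_transform(shape_features)       # sparse (N, total_leaves)

    # A change of basis that keeps linear combinations intact: a truncated SVD
    # stands in for the patent's unspecified matrix transformation.
    svd = TruncatedSVD(n_components=min(target_dim, leaf_code.shape[1] - 1),
                       random_state=seed)
    return svd.fit_transform(leaf_code)
```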
Step 205, using the field video as a test set, obtaining a video feature space corresponding to each action in the test set, and determining a space-time region corresponding to the video feature space as a test space-time region.
In this embodiment of the application, the obtaining a video feature space corresponding to each action in the test set includes: performing video segmentation processing on the test set, and segmenting the test set into coherent images; acquiring color features, texture features and figure shape features of the image; and respectively obtaining image feature spaces of the images, and obtaining video feature spaces mapped by the image feature spaces based on a linear transformation method of random clustering forests.
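A small sketch of this frame-splitting and feature step is given below; the OpenCV calls are real, while the frame-sampling stride and the use of an HSV colour histogram as a stand-in for the colour/texture/shape features are assumptions.

```python
# Split the live test video into frames and compute simple colour features.
import cv2
import numpy as np

def video_to_frames_and_colour_features(video_path, step=5):
    """Returns (frames, features): every `step`-th frame and its normalised
    HSV colour histogram (a placeholder for the full feature set)."""
    cap = cv2.VideoCapture(video_path)
    frames, features = [], []
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            frames.append(frame)
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
            features.append(cv2.normalize(hist, hist).flatten())
        i += 1
    cap.release()
    return frames, np.array(features)
```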
In this embodiment of the present application, the determining a spatio-temporal region corresponding to the video feature space as a test spatio-temporal region includes: acquiring a two-dimensional image block corresponding to a person in a video; constructing three-dimensional image blocks of the two-dimensional image blocks based on the video time sequence; and taking the cube block corresponding to the three-dimensional image block as a test space-time area corresponding to the video feature space.
Explanation: after the test set has been segmented into consecutive images in the step above, the two-dimensional spaces occupied by the person in the different images are obtained respectively and superimposed along the video time sequence to construct the three-dimensional image block; the cube block corresponding to this three-dimensional image block, i.e. the space cube occupied by the person in the dance video, is taken as the test space-time region corresponding to the video feature space.
And step 206, determining a space-time region corresponding to the test space-time region based on the space-time region set, and identifying different dance actions in the field video based on the space-time region.
In an embodiment of the present application, the determining, based on the spatiotemporal region set, a spatiotemporal region corresponding to the test spatiotemporal region includes: and judging the space-time region corresponding to the test space-time region by using a comparison mode.
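One plausible way to realise this comparison mode is to match the test region to the training region with the greatest three-dimensional overlap; the IoU criterion and the cuboid tuple format below are assumptions, since the patent does not name a specific comparison measure.

```python
# Match a test space-time region to the most similar training region by 3-D IoU.
def cuboid_iou(a, b):
    """a, b: cuboids (x_min, y_min, t_min, x_max, y_max, t_max)."""
    ix = max(0.0, min(a[3], b[3]) - max(a[0], b[0]))
    iy = max(0.0, min(a[4], b[4]) - max(a[1], b[1]))
    it = max(0.0, min(a[5], b[5]) - max(a[2], b[2]))
    inter = ix * iy * it
    vol = lambda c: (c[3] - c[0]) * (c[4] - c[1]) * (c[5] - c[2])
    union = vol(a) + vol(b) - inter
    return inter / union if union > 0 else 0.0

def match_test_region(test_region, training_regions):
    """Return the training space-time region with the highest overlap."""
    return max(training_regions, key=lambda r: cuboid_iou(test_region, r))
```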
And step 207, comparing the dance motions recognized in the field video with preset dance testing motions, determining the similarity between the dance motions and the preset dance testing motions based on a preset algorithm, and taking the similarity as an evaluation result to finish the university dance test evaluation.
In an embodiment of the application, the determining of the similarity between the two based on a preset algorithm and the taking of the similarity as the evaluation result to complete the university dance test evaluation include: if the student is evaluated individually, the dance actions identified in the live video are directly compared with the preset test dance actions and the action similarity is judged; if the similarity exceeds the preset qualification threshold, the student's evaluation result is judged to be qualified, otherwise it is judged to be unqualified. If the students are evaluated as a group, the dance actions identified in the live video are compared with the preset test dance actions, the action similarity is judged for every student in the group, and the average similarity is obtained; if the average similarity exceeds the preset qualification threshold, the group's evaluation result is qualified, otherwise it is unqualified.
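The qualification rule can be summarised by the sketch below; the similarity scores are assumed to be pre-computed in [0, 1], and the threshold value is an illustrative parameter rather than one given in the patent.

```python
# Individual vs. group qualification rule from the step above.
def evaluate(similarities, pass_threshold=0.8):
    """similarities: list of per-student similarity scores in [0, 1].
    A single student is judged on their own score; a group is judged on the
    average score of all its members."""
    if len(similarities) == 1:
        score = similarities[0]                         # individual evaluation
    else:
        score = sum(similarities) / len(similarities)   # group average
    return "qualified" if score >= pass_threshold else "unqualified"
```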
With continuing reference to fig. 3, fig. 3 is a logic diagram of an implementation of the method for assessing a dance test of a university according to an embodiment of the present application, and the implementation steps are as follows: collecting a live video of a dance test of a student; acquiring a large number of dance action pictures only labeled with action types on a network, and constructing a dance picture set; taking the dance picture set as a training set, transmitting the dance picture set into a preset TLSVM model, and identifying different action types in the training set; acquiring image feature spaces corresponding to different action types, acquiring video feature spaces mapped by the image feature spaces based on a linear transformation method of random clustering forests, determining a space-time region corresponding to each video feature space, and constructing a space-time region set corresponding to the training set; taking the field video as a test set, acquiring a video feature space corresponding to each action in the test set, and determining a space-time area corresponding to the video feature space as a test space-time area; determining a space-time region corresponding to the test space-time region based on the space-time region set, and identifying different dance actions in the field video based on the space-time region; and comparing the dance actions identified in the field video with preset dance testing actions, determining the similarity between the dance actions and the preset dance testing actions based on a preset algorithm, and taking the similarity as an evaluation result to finish the university dance test evaluation.
The university dance test evaluation method described above collects a live video of the student's dance test; constructs a dance picture set; feeds the dance picture set into a preset TLSVM model and identifies the different action types; acquires the space-time regions corresponding to the different action types and constructs the space-time region set corresponding to the training set; obtains the test space-time region corresponding to each action in the live video; determines, from the training space-time region set, the space-time regions corresponding one to one to the test space-time regions, and identifies the dance actions in the live video on that basis; and compares the dance actions identified in the live video with the preset test dance actions, determines the similarity between them, and takes the similarity as the evaluation result to complete the university dance test evaluation. The application avoids the influence of the invigilating teacher's subjective factors and allows dance evaluation to be carried out fairly.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and may be performed in other orders unless explicitly stated herein. Moreover, at least a portion of the steps in the flow chart of the figure may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, which are not necessarily performed in sequence, but may be performed alternately or alternately with other steps or at least a portion of the sub-steps or stages of other steps.
With further reference to fig. 4, as an implementation of the method shown in fig. 2, the present application provides an embodiment of a university dance test evaluation apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 4, the university dance test evaluation device 4 according to the present embodiment includes: the dance test system comprises a video acquisition module 401, a dance picture set construction module 402, an action type identification module 403, a training spatiotemporal region set generation module 404, a test spatiotemporal region determination module 405, a dance action identification module 406 and a dance test evaluation module 407. Wherein:
the video acquisition module 401 is used for acquiring a live video of a dance test of a student;
a dance picture set construction module 402, configured to obtain a large number of dance action pictures only labeled with action categories on a network, and construct a dance picture set;
the action type recognition module 403 is configured to transmit the dance picture set as a training set to a preset TLSVM model, and recognize different action types in the training set;
a training spatiotemporal region set generating module 404, configured to obtain image feature spaces corresponding to the different motion types, obtain video feature spaces mapped by the image feature spaces based on a linear transformation method of a random clustering forest, determine a spatiotemporal region corresponding to each of the video feature spaces, and construct a spatiotemporal region set corresponding to the training set;
a test spatiotemporal region determining module 405, configured to use the field video as a test set, obtain a video feature space corresponding to each action in the test set, and determine a spatiotemporal region corresponding to the video feature space as a test spatiotemporal region;
a dance action recognition module 406, configured to determine a spatio-temporal region corresponding to the test spatio-temporal region based on the spatio-temporal region set, and recognize different dance actions in the live video based on the spatio-temporal region;
and the dance test evaluation module 407 is used for comparing the dance motions recognized in the field video with preset dance test motions, determining the similarity between the dance motions and the preset dance test motions based on a preset algorithm, and taking the similarity as an evaluation result to finish the dance test evaluation of the university.
The university dance test evaluation device described above collects a live video of the student's dance test; constructs a dance picture set; feeds the dance picture set into a preset TLSVM model and identifies the different action types; acquires the space-time regions corresponding to the different action types and constructs the space-time region set corresponding to the training set; obtains the test space-time region corresponding to each action in the live video; determines, from the training space-time region set, the space-time regions corresponding one to one to the test space-time regions, and identifies the dance actions in the live video on that basis; and compares the dance actions identified in the live video with the preset test dance actions, determines the similarity between them, and takes the similarity as the evaluation result to complete the university dance test evaluation. The application avoids the influence of the invigilating teacher's subjective factors and allows dance evaluation to be carried out fairly.
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 5, fig. 5 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 5 comprises a memory 5a, a processor 5b and a network interface 5c, which are communicatively connected to each other via a system bus. It is noted that only a computer device 5 having the components 5a-5c is shown in the figure, but it should be understood that not all of the shown components need be implemented, and more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory 5a includes at least one type of readable storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the storage 5a may be an internal storage unit of the computer device 5, such as a hard disk or a memory of the computer device 5. In other embodiments, the memory 5a may also be an external storage device of the computer device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the computer device 5. Of course, the memory 5a may also comprise both an internal storage unit of the computer device 5 and an external storage device thereof. In this embodiment, the memory 5a is generally used for storing an operating system and various types of application software installed on the computer device 5, such as program codes of a university dance test evaluation method. In addition, the memory 5a may also be used to temporarily store various types of data that have been output or are to be output.
The processor 5b may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or other data Processing chip in some embodiments. The processor 5b is typically used to control the overall operation of the computer device 5. In this embodiment, the processor 5b is configured to run the program code stored in the memory 5a or process data, for example, the program code for the university dance test evaluation method.
The network interface 5c may comprise a wireless network interface or a wired network interface, and the network interface 5c is typically used for establishing a communication connection between the computer device 5 and other electronic devices.
The present application further provides another embodiment of a non-transitory computer-readable storage medium storing a university dance test evaluation program executable by at least one processor to cause the at least one processor to perform the steps of the university dance test evaluation method as described above.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
It is to be understood that the above-described embodiments are merely illustrative of some, but not restrictive, of the broad invention, and that the appended drawings illustrate preferred embodiments of the invention and do not limit the scope of the invention. This application is capable of embodiments in many different forms and is provided for the purpose of enabling a thorough understanding of the disclosure of the application. Although the present application has been described in detail with reference to the foregoing embodiments, it will be apparent to one skilled in the art that the present application may be practiced without modification or with equivalents of some of the features described in the foregoing embodiments. All equivalent structures made by using the contents of the specification and the drawings of the present application are directly or indirectly applied to other related technical fields and are within the protection scope of the present application.

Claims (10)

1. A university dance test evaluation method is characterized by comprising the following steps:
collecting a live video of a dance test of a student;
acquiring a large number of dance action pictures only labeled with action types on a network, and constructing a dance picture set;
taking the dance picture set as a training set, transmitting the dance picture set into a preset TLSVM model, and identifying different action types in the training set;
acquiring image feature spaces corresponding to different action types, acquiring video feature spaces mapped by the image feature spaces based on a linear transformation method of random clustering forests, determining a space-time region corresponding to each video feature space, and constructing a space-time region set corresponding to the training set;
taking the field video as a test set, acquiring a video feature space corresponding to each action in the test set, and determining a space-time area corresponding to the video feature space as a test space-time area;
determining a space-time region corresponding to the test space-time region based on the space-time region set, and identifying different dance actions in the field video based on the space-time region;
and comparing the dance actions identified in the field video with preset dance testing actions, determining the similarity between the dance actions and the preset dance testing actions based on a preset algorithm, and taking the similarity as an evaluation result to finish the university dance test evaluation.
2. The university dance test evaluation method according to claim 1, wherein the obtaining of the plurality of dance motion pictures labeled only with motion categories on the network comprises:
taking the dance action pictures as keywords, searching on the Internet based on a big data searching mode, and downloading a large number of dance action pictures.
3. The university dance test evaluation method according to claim 2, wherein the preset TLSVM model identifies different types of actions in the training set, and the method comprises the following steps:
acquiring figure images of different dance pictures in the training set, and determining different limb positions in the figure images;
constructing a set of limb positions based on the different limb positions, wherein the set of limb positions comprises: a left hand set, a right hand set, a left leg set, a right leg set, a head set and a human body set;
determining space-time regions corresponding to the different limb positions based on the limb position set;
and identifying different action types in the training set by judging the space-time areas corresponding to the different limb positions.
4. The university dance test evaluation method according to any one of claims 1 to 3, wherein the obtaining of the image feature space corresponding to the different action categories includes:
acquiring different limb positions corresponding to the different action types, and identifying extreme values of the spaces corresponding to the different limbs based on a Hessian matrix mode;
constructing scale spaces corresponding to different limbs based on the extreme values;
acquiring feature points in the scale space, filtering and accurately positioning the feature points;
and acquiring the main direction of the different characteristic points and the characteristic values of the different characteristic points, constructing corresponding shape characteristics of the different characteristic points based on the main direction and the characteristic values, and taking the shape characteristics as image characteristic spaces corresponding to the different action types.
5. The university dance test evaluation method according to claim 4, wherein the obtaining of the video feature space corresponding to each action in the test set comprises:
performing video segmentation processing on the test set, and segmenting the test set into consecutive frame images;
acquiring color features, texture features and figure shape features of the images;
and respectively obtaining image feature spaces of the images, and obtaining video feature spaces mapped by the image feature spaces based on a linear transformation method of random clustering forests.
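Claim 5 does not spell out the "linear transformation method of random clustering forests". The sketch below uses scikit-learn's RandomTreesEmbedding as a stand-in that maps per-frame colour and texture descriptors into a sparse leaf-index code; the function names, the HSV histogram binning and the Sobel-based texture statistics are illustrative choices.

```python
import numpy as np
import cv2
from sklearn.ensemble import RandomTreesEmbedding

def frame_features(frame):
    """Per-frame descriptor: HSV colour histogram plus simple texture stats."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [16, 8], [0, 180, 0, 256])
    hist = cv2.normalize(hist, None).flatten()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    texture = np.array([gx.mean(), gx.std(), gy.mean(), gy.std()])
    return np.concatenate([hist, texture])

def video_feature_space(path, embedder=None):
    """Split a video into frames and map frame descriptors into a sparse
    code with a random-forest embedding (a stand-in for the patent's
    'random clustering forest' transformation)."""
    cap = cv2.VideoCapture(path)
    feats = []
    ok, frame = cap.read()
    while ok:
        feats.append(frame_features(frame))
        ok, frame = cap.read()
    cap.release()
    X = np.vstack(feats)
    if embedder is None:
        embedder = RandomTreesEmbedding(n_estimators=50, max_depth=5, random_state=0)
        embedder.fit(X)
    return embedder.transform(X), embedder   # sparse (n_frames, n_leaves) code
```

In practice the embedding would be fitted once on the training pictures and then reused unchanged for the test video, which is why the fitted embedder is returned.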
6. The university dance test evaluation method according to claim 5, wherein the determining of a space-time region corresponding to the video feature space as a test space-time region comprises:
acquiring the two-dimensional image blocks corresponding to the person in the video;
constructing a three-dimensional image block from the two-dimensional image blocks based on the video time sequence;
and taking the cube block corresponding to the three-dimensional image block as the test space-time region corresponding to the video feature space.
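A minimal sketch of claim 6's cube construction, assuming per-frame person bounding boxes are already available from an upstream detector; the patch size and grayscale conversion are illustrative.

```python
import numpy as np
import cv2

def build_spacetime_block(frames, boxes, size=(64, 64)):
    """Stack the per-frame 2-D person patches into one 3-D (T, H, W) cube
    that serves as the test space-time region for an action.
    `frames` is a list of BGR images, `boxes` the matching (x, y, w, h)
    person bounding boxes (assumed to come from an upstream detector)."""
    patches = []
    for frame, (x, y, w, h) in zip(frames, boxes):
        crop = frame[y:y + h, x:x + w]
        gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
        patches.append(cv2.resize(gray, size))
    return np.stack(patches)   # shape: (num_frames, size[1], size[0])
```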
7. The university dance test evaluation method according to claim 5 or 6, wherein the determining of the similarity between the two based on a preset algorithm and the taking of the similarity as an evaluation result to complete the university dance test evaluation comprises:
if a single student is being evaluated, directly comparing the dance actions identified in the field video with the preset test dance actions and judging the action similarity; if the similarity exceeds a preset evaluation qualification threshold, judging that the evaluation result of the student is qualified, and otherwise judging that the evaluation result of the student is unqualified;
if a group of students is being evaluated, comparing the dance actions identified in the field video with the preset test dance actions, judging the action similarity of every student in the group, and obtaining the average similarity; if the average similarity exceeds the preset evaluation qualification threshold, the group evaluation result is qualified, and otherwise the group evaluation result is unqualified.
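The qualification logic of claim 7 reduces to a threshold test on either a single similarity score or the group mean; the threshold value in the sketch below is purely illustrative, the patent only calls it a preset qualification threshold.

```python
def evaluate(similarities, pass_threshold=0.8):
    """Qualified/unqualified decision for a single student or a group.
    `similarities` holds one score per evaluated student."""
    if len(similarities) == 1:            # single-student evaluation
        score = similarities[0]
    else:                                 # group evaluation: use the mean
        score = sum(similarities) / len(similarities)
    return "qualified" if score > pass_threshold else "unqualified"
```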
8. A university dance examination evaluation device, comprising:
the video acquisition module is used for acquiring the field video of the dance test of the student;
the dance picture set building module is used for acquiring, from a network, a large number of dance action pictures labeled only with action types and constructing a dance picture set;
the action type recognition module is used for taking the dance picture set as a training set, inputting the dance picture set into a preset TLSVM model, and recognizing different action types in the training set;
the training space-time region set generation module is used for acquiring image feature spaces corresponding to different action types, acquiring video feature spaces mapped by the image feature spaces based on a linear transformation method of random clustering forests, determining a space-time region corresponding to each video feature space, and constructing a space-time region set corresponding to the training set;
the test space-time region determining module is used for taking the field video as a test set, acquiring a video feature space corresponding to each action in the test set, and determining a space-time region corresponding to the video feature space as a test space-time region;
the dance action recognition module is used for determining, from the space-time region set, the space-time region corresponding to the test space-time region and recognizing different dance actions in the field video based on that space-time region;
and the dance test evaluation module is used for comparing the dance actions recognized in the field video with preset test dance actions, determining the similarity between the two based on a preset algorithm, and taking the similarity as an evaluation result to complete the university dance test evaluation.
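Purely as an architectural illustration of claim 8, the modules can be wired together as injected callables; none of the names or type signatures below are prescribed by the patent.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class DanceTestEvaluationDevice:
    """Sketch of the claimed device: each field is one module, injected as a
    callable so the wiring mirrors claim 8 without fixing any implementation."""
    acquire_video: Callable[[], object]
    build_picture_set: Callable[[], object]
    recognize_action_types: Callable[[object], object]
    build_training_regions: Callable[[object], object]
    build_test_regions: Callable[[object], object]
    recognize_dance_actions: Callable[[object, object], Sequence]
    evaluate: Callable[[Sequence], float]

    def run(self) -> float:
        video = self.acquire_video()
        pictures = self.build_picture_set()
        model = self.recognize_action_types(pictures)
        train_regions = self.build_training_regions(model)
        test_regions = self.build_test_regions(video)
        actions = self.recognize_dance_actions(test_regions, train_regions)
        return self.evaluate(actions)
```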
9. A computer device comprising a memory and a processor, the memory having a computer program stored therein, wherein the processor, when executing the computer program, implements the steps of the university dance test evaluation method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having stored thereon a computer program that, when executed by a processor, performs the steps of the university dance test evaluation method according to any one of claims 1 to 7.
CN202011147588.5A 2020-10-23 2020-10-23 College dance examination evaluation method and device Active CN112381118B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011147588.5A CN112381118B (en) 2020-10-23 2020-10-23 College dance examination evaluation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011147588.5A CN112381118B (en) 2020-10-23 2020-10-23 College dance examination evaluation method and device

Publications (2)

Publication Number Publication Date
CN112381118A (en) 2021-02-19
CN112381118B (en) 2024-05-17

Family

ID=74580808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011147588.5A Active CN112381118B (en) 2020-10-23 2020-10-23 College dance examination evaluation method and device

Country Status (1)

Country Link
CN (1) CN112381118B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799873A (en) * 2012-07-23 2012-11-28 青岛科技大学 Human body abnormal behavior recognition method
CN103310233A (en) * 2013-06-28 2013-09-18 青岛科技大学 Similarity mining method of similar behaviors between multiple views and behavior recognition method
CN108830252A (en) * 2018-06-26 2018-11-16 哈尔滨工业大学 A kind of convolutional neural networks human motion recognition method of amalgamation of global space-time characteristic
CN109508656A (en) * 2018-10-29 2019-03-22 重庆中科云丛科技有限公司 A kind of dancing grading automatic distinguishing method, system and computer readable storage medium
KR20200119042A (en) * 2019-04-09 2020-10-19 유진기술 주식회사 Method and system for providing dance evaluation service
CN111563487A (en) * 2020-07-14 2020-08-21 平安国际智慧城市科技股份有限公司 Dance scoring method based on gesture recognition model and related equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YU Haiyan: "Correction Method for Incorrect Sports Dance Movements Based on Multi-view Stereo Vision", Journal of Chifeng University (Natural Science Edition), no. 07, 25 July 2020 (2020-07-25) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116612310A (en) * 2023-07-17 2023-08-18 长春医学高等专科学校(长春职工医科大学长春市医学情报所) Multimedia dance action based image decomposition processing method
CN116612310B (en) * 2023-07-17 2023-09-26 长春医学高等专科学校(长春职工医科大学长春市医学情报所) Multimedia dance action based image decomposition processing method
CN117077084A (en) * 2023-10-16 2023-11-17 南京栢拓视觉科技有限公司 Dance scoring method based on space-time heterogeneous double-flow convolutional network
CN117077084B (en) * 2023-10-16 2024-01-26 南京栢拓视觉科技有限公司 Dance scoring method based on space-time heterogeneous double-flow convolutional network

Also Published As

Publication number Publication date
CN112381118B (en) 2024-05-17

Similar Documents

Publication Publication Date Title
CN111754541B (en) Target tracking method, device, equipment and readable storage medium
WO2020107847A1 (en) Bone point-based fall detection method and fall detection device therefor
CN111832468B (en) Gesture recognition method and device based on biological recognition, computer equipment and medium
CN112396613B (en) Image segmentation method, device, computer equipment and storage medium
CN112863683B (en) Medical record quality control method and device based on artificial intelligence, computer equipment and storage medium
CN112784778B (en) Method, apparatus, device and medium for generating model and identifying age and sex
CN111814620A (en) Face image quality evaluation model establishing method, optimization method, medium and device
CN113255557B (en) Deep learning-based video crowd emotion analysis method and system
WO2021223738A1 (en) Method, apparatus and device for updating model parameter, and storage medium
CN113343898B (en) Mask shielding face recognition method, device and equipment based on knowledge distillation network
CN112381118B (en) College dance examination evaluation method and device
WO2022111387A1 (en) Data processing method and related apparatus
CN111507285A (en) Face attribute recognition method and device, computer equipment and storage medium
CN114359582B (en) Small sample feature extraction method based on neural network and related equipment
CN111709346B (en) Historical building identification and detection method based on deep learning and high-resolution images
CN116033259B (en) Method, device, computer equipment and storage medium for generating short video
CN115760886B (en) Land parcel dividing method and device based on unmanned aerial vehicle aerial view and related equipment
CN116611491A (en) Training method and device of target detection model, electronic equipment and storage medium
CN113361519B (en) Target processing method, training method of target processing model and device thereof
CN113139490B (en) Image feature matching method and device, computer equipment and storage medium
CN116052225A (en) Palmprint recognition method, electronic device, storage medium and computer program product
CN112309181A (en) Dance teaching auxiliary method and device
CN111814865A (en) Image identification method, device, equipment and storage medium
CN112395450A (en) Picture character detection method and device, computer equipment and storage medium
CN112699263B (en) AI-based two-dimensional art image dynamic display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant