CN113705483A - Icon generation method, device, equipment and storage medium - Google Patents

Icon generation method, device, equipment and storage medium

Info

Publication number
CN113705483A
CN113705483A (application CN202111011849.5A)
Authority
CN
China
Prior art keywords
human body
icon
graph
key point
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111011849.5A
Other languages
Chinese (zh)
Inventor
李慧峰
冷子昂
谢东明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Technology Development Co Ltd
Original Assignee
Shanghai Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Technology Development Co Ltd filed Critical Shanghai Sensetime Technology Development Co Ltd
Priority to CN202111011849.5A priority Critical patent/CN113705483A/en
Publication of CN113705483A publication Critical patent/CN113705483A/en
Withdrawn legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses an icon generation method, apparatus, device and storage medium, wherein the method comprises the following steps: acquiring an image to be detected whose picture content includes a human body; determining human body key points of the human body in the image to be detected; determining attribute information of the human body key points; and generating an action icon of the human body based on the attribute information.

Description

Icon generation method, device, equipment and storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to an icon generation method, apparatus, device, and storage medium.
Background
With the continuous development of society, various icons have gradually become part of everyday life; for example, the sports icons of the Olympic Games clearly present the various competition events. However, in the related art, a professional designer is usually required to operate icon design software to generate an icon, so the threshold for designing icons of human body actions is high.
Disclosure of Invention
The embodiment of the application provides an icon generation technical scheme.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an icon generation method, which comprises the following steps:
acquiring an image to be detected whose picture content includes a human body;
determining human body key points of the human body in the image to be detected;
determining attribute information of the human body key points;
and generating an action icon of the human body based on the attribute information. Therefore, the action icon corresponding to the human body in the image to be detected is determined based on the attribute information of the human body key point in the image to be detected, and the action icon matched with the human body in the image to be detected can be generated efficiently and conveniently.
In some embodiments, the determining key points of the human body in the image to be detected includes: determining a human body detection frame of the human body in the image to be detected; identifying a plurality of human body key points in the human body detection frame; the determining of the attribute information of the human body key points includes: and determining the position information of the plurality of human key points in the human detection frame. Therefore, the human key points with higher precision and the position information of the human key points can be efficiently and conveniently provided.
In some embodiments, the attribute information includes: a body part to which the human body key point belongs and position information of the human body key point. The generating of the action icon of the human body based on the attribute information includes: determining a graph matched with the human body key points based on the body parts to which the human body key points belong; and generating the action icon of the human body based on the graph matched with the human body key points and the position information of the human body key points. Therefore, the action icon corresponding to the human body in the image to be detected is generated through the graph corresponding to each human body key point and the position information of each human body key point in the image to be detected, and the generated action icon of the human body can better match the posture information presented by the human body in the image to be detected.
In some embodiments, the determining, based on the body part to which the human body key point belongs, a graph matching the human body key point includes: determining the graphic parameters matched with the body part to which the key points of the human body belong; and determining a graph matched with the key points of the human body based on the graph parameters. Therefore, the corresponding graph is determined through the body part to which the key point belongs, the graph corresponding to each human body key point can be further refined, and the accuracy of determining the action icon of the human body based on the graph matched with the human body key point can be further improved.
In some embodiments, in a case that the human body key points include a head key point, an upper limb key point, and a lower limb key point, the generating an action icon of the human body based on a graph matching the human body key points and position information of the human body key points includes: generating a head icon of the human body based on the graph matched with the head key point and the position information of the head key point; generating an upper limb icon of the human body based on the graph matched with the upper limb key point and the position information of the upper limb key point; generating a lower limb icon of the human body based on the graph matched with the lower limb key points and the position information of the lower limb key points; and generating an action icon of the human body based on the head icon, the upper limb icon and the lower limb icon. Therefore, the action icon matched with the human body posture information in the image to be detected can be efficiently and accurately generated.
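The decomposition above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: all function names, the dictionary layout, and the example coordinates are assumptions.

```python
# Illustrative sketch of step S104's decomposition: each body section is
# turned into its own icon fragment, then the fragments are merged into
# one action icon.

def make_part_icon(part_name, keypoints):
    """Pair each key point of one body section with a placeholder mark."""
    return [{"part": part_name, "point": name, "xy": xy}
            for name, xy in keypoints.items()]

def make_action_icon(head_kps, upper_kps, lower_kps):
    head = make_part_icon("head", head_kps)
    upper = make_part_icon("upper_limb", upper_kps)
    lower = make_part_icon("lower_limb", lower_kps)
    return head + upper + lower  # the combined action icon

icon = make_action_icon(
    {"vertex": (50, 10), "neck": (50, 30)},
    {"l_shoulder": (40, 35), "l_elbow": (30, 55), "l_wrist": (25, 75)},
    {"l_hip": (45, 70), "l_knee": (44, 95), "l_ankle": (43, 120)},
)
```

In a real system each fragment would carry the shape and size parameters discussed later, but the per-section-then-merge structure stays the same.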
In some embodiments, in a case that the head key points include a vertex key point and a neck key point, the generating the head icon of the human body based on the graphics matching the head key points and the position information of the head key points includes: determining an identification graph based on the position information of the vertex key point, the position information of the neck key point and the graph matched with the head key point; and filling the identification graph to obtain the head icon. In this way, the determined head icon can be more matched with the head region of the human body in the image to be detected.
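One plausible reading of the head-icon step can be written out directly; the circular construction below is an assumption, since the patent does not spell out the identification graph's geometry.

```python
import math

def head_icon(vertex_xy, neck_xy):
    """Place a filled circle between the vertex and neck key points;
    its diameter is taken as the vertex-neck distance (an assumption)."""
    cx = (vertex_xy[0] + neck_xy[0]) / 2.0
    cy = (vertex_xy[1] + neck_xy[1]) / 2.0
    d = math.dist(vertex_xy, neck_xy)
    return {"shape": "circle", "center": (cx, cy), "radius": d / 2.0,
            "filled": True}

icon = head_icon((50.0, 10.0), (50.0, 30.0))
```

Scaling the circle with the vertex-neck distance keeps the head icon proportional to the head region actually seen in the image to be detected.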
In some embodiments, in a case where the upper limb key points include a shoulder key point, an elbow key point, and a wrist key point, the generating an upper limb icon of the human body based on the graphics matching the upper limb key points and the position information of the upper limb key points includes: determining a first graph based on the position information of the shoulder key points and the graph matched with the shoulder key points; determining a second graph based on the position information of the elbow key point and the graph matched with the elbow key point; determining a third graph based on the position information of the wrist key point and the graph matched with the wrist key point; and splicing the first graph, the second graph and the third graph to generate the upper limb icon. Therefore, the upper limb icon of the human body is determined simultaneously based on the position information corresponding to the shoulder key point, the elbow key point and the wrist key point and the graph matched with the position information, and the generated upper limb icon can be matched with the upper limb posture information of the human body in the image to be detected.
In some embodiments, the stitching the first graphic, the second graphic, and the third graphic to generate the upper limb icon includes: determining a first connection mode according to a human upper limb structure model; connecting the first graph, the second graph and the third graph based on the first connection mode to obtain a first polygon; and filling the first polygon to obtain the upper limb icon. Therefore, the determined upper limb icon can be matched with the upper limb area of the human body in the image to be detected.
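A toy version of the "first connection mode" makes the stitching concrete: the upper limb structure model fixes the order shoulder → elbow → wrist, the matched graphs are connected in that order, and the resulting outline is what would later be filled. The offsetting scheme and all names are illustrative assumptions.

```python
# The structure model is simply an ordered list of joints.
UPPER_LIMB_ORDER = ["shoulder", "elbow", "wrist"]

def connect_upper_limb(shapes):
    """shapes: {joint: {"center": (x, y), "radius": r}} -> ordered centers."""
    return [shapes[j]["center"] for j in UPPER_LIMB_ORDER]

def polygon_from_outline(centers, width=8.0):
    """Thicken the joint polyline into a closed polygon by offsetting each
    point sideways by half the limb width (a crude stand-in for splicing)."""
    half = width / 2.0
    left = [(x - half, y) for x, y in centers]
    right = [(x + half, y) for x, y in reversed(centers)]
    return left + right  # closed polygon vertex list

outline = connect_upper_limb({
    "shoulder": {"center": (40.0, 35.0), "radius": 6.0},
    "elbow": {"center": (30.0, 55.0), "radius": 5.0},
    "wrist": {"center": (25.0, 75.0), "radius": 4.0},
})
poly = polygon_from_outline(outline)
```

Filling `poly` with a scanline or any standard polygon-fill routine would then yield the upper limb icon.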
In some embodiments, in a case that the lower limb key points include a hip key point, a knee key point, and an ankle key point, the generating the lower limb icon of the human body based on the graph matched with the lower limb key point and the position information of the lower limb key point includes: determining a fourth graph based on the position information of the hip key points and the graph matched with the hip key points; determining a fifth graph based on the position information of the knee key points and the graph matched with the knee key points; determining a sixth graph based on the position information of the ankle key point and the graph matched with the ankle key point; and splicing the fourth graph, the fifth graph and the sixth graph to generate the lower limb icon. Therefore, the lower limb icon of the human body is determined simultaneously based on the position information corresponding to the hip key point, the knee key point and the ankle key point and the graph matched with the position information, and the generated lower limb icon can be matched with the lower limb posture information of the human body in the image to be detected.
In some embodiments, in a case where the hip key points include a left hip key point and a right hip key point, the determining a fourth graph based on the position information of the hip key points and the graph matching the hip key points includes: determining a middle point of the left hip key point and the right hip key point based on the position information of the left hip key point and the position information of the right hip key point; determining the middle point as a waist key point of the human body; determining the position information of the middle point as the position information of the waist key point of the human body; determining a graph matched with the waist key point based on the graph matched with the left hip key point and the graph matched with the right hip key point; and determining the fourth graph based on the position information of the waist key point and the graph matched with the waist key point. Therefore, the relevant information of the waist key point is determined based on the left hip key point and the right hip key point, and then the action icon relevant to the waist posture of the human body in the image to be detected can be conveniently generated based on the relevant information of the waist key point.
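The waist key point described here is just the midpoint of the two hip key points, which transcribes directly to code (the coordinate layout is an assumption):

```python
def waist_keypoint(left_hip, right_hip):
    """Return the waist key point as the midpoint of the left and right
    hip key points, as described in the patent's embodiment."""
    return ((left_hip[0] + right_hip[0]) / 2.0,
            (left_hip[1] + right_hip[1]) / 2.0)

waist = waist_keypoint((45.0, 70.0), (55.0, 70.0))  # -> (50.0, 70.0)
```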
In some embodiments, the stitching the fourth graphic, the fifth graphic, and the sixth graphic to generate the lower limb icon includes: determining a second connection mode according to the human body lower limb structure model; connecting the fourth graph, the fifth graph and the sixth graph based on the second connection mode to obtain a second polygon; and filling the second polygon to obtain the lower limb icon. Therefore, the determined lower limb icon can be matched with the lower limb area of the human body in the image to be detected better.
In some embodiments, the generating an action icon for the human body based on the head icon, the upper limb icon, and the lower limb icon comprises: rendering the head icon, the upper limb icon and the lower limb icon in the image to be detected by adopting a first color parameter to obtain an intermediate image; rendering the background area of the intermediate image by adopting a second color parameter to obtain an action icon of the human body; wherein the background region is a region of the intermediate image that is occupied except for the head icon, the upper limb icon, and the lower limb icon. In this way, the motion icon of the human body with higher recognition can be generated.
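The two-pass rendering above can be sketched as follows: pixels covered by the head, upper limb and lower limb icons receive the first color parameter, and every remaining pixel (the background region) receives the second. The tiny 2x2 "image" and the color values are illustrative only.

```python
def render_action_icon(size, icon_pixels, fg_color, bg_color):
    """size: (width, height); icon_pixels: set of (x, y) covered by the
    three icons. Background is painted first, then the icon pixels."""
    w, h = size
    image = [[bg_color] * w for _ in range(h)]   # second color: background
    for x, y in icon_pixels:                     # first color: the icons
        image[y][x] = fg_color
    return image

img = render_action_icon((2, 2), {(0, 0), (1, 1)},
                         fg_color=(255, 0, 0), bg_color=(255, 255, 255))
```

Choosing strongly contrasting color parameters is what gives the action icon the high recognizability the embodiment aims for.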
In some embodiments, in the case that the image to be detected is at least two consecutive frames of images to be processed, the method further comprises: acquiring a motion icon of a human body in each frame of image to be processed; and generating the human body action animation of the human body based on the time sequence information of the at least two frames of images to be processed and the action icon of the human body in each frame of image to be processed. In this way, a human body motion animation that matches the change in the motion posture of the human body can be generated based on the motion icon of the human body generated in the continuous multi-frame image.
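The animation step reduces to ordering the per-frame action icons by the frames' time-sequence information; the sketch below assumes simple numeric timestamps, which the patent does not fix.

```python
def build_animation(frames):
    """frames: list of (timestamp, action_icon) pairs in arbitrary order.
    Returns the icons sorted by timestamp, i.e. the animation sequence."""
    return [icon for _, icon in sorted(frames, key=lambda f: f[0])]

anim = build_animation([(2, "icon_t2"), (0, "icon_t0"), (1, "icon_t1")])
```

Playing the sorted icons back at the source frame rate yields a human body action animation that tracks the change in the human body's motion posture.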
In some embodiments, the method further comprises: determining the action type of a human body in the image to be processed; and marking the action icon of the human body in the image to be processed based on the action name matched with the action type, to obtain a target action icon of the human body in the image to be processed. Thus, the information expressed by the generated action icon can be enriched.
An embodiment of the present application provides an icon generating apparatus, including:
the acquisition module is used for acquiring an image to be detected whose picture content includes a human body;
the first determining module is used for determining human body key points of the human body in the image to be detected;
the second determining module is used for determining the attribute information of the human key points;
and the generating module is used for generating the action icon of the human body based on the attribute information.
The embodiment of the application provides a computer device comprising a memory and a processor, wherein computer-executable instructions are stored in the memory, and the processor implements the above icon generation method when executing the computer-executable instructions in the memory.
The embodiment of the application provides a computer storage medium, wherein computer-executable instructions are stored on the computer storage medium, and after being executed, the icon generation method can be realized.
The embodiment of the application provides an icon generation method, apparatus, device and storage medium. Firstly, an image to be detected whose picture content includes a human body is acquired; secondly, human body key points of the human body in the image to be detected are determined; then, attribute information of the human body key points is determined; and finally, an action icon of the human body is generated based on the attribute information. Therefore, a professional designer is not required to design the icon: the action icon corresponding to the human body in the image to be detected is determined based on the attribute information of the human body key points, the action icon matched with the human body action in the image to be detected can be conveniently generated, and the threshold of icon design is reduced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art without inventive effort, wherein:
fig. 1 is a schematic flowchart of a first implementation of an icon generation method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a second implementation of the icon generating method according to the embodiment of the present application;
fig. 3 is a schematic flowchart of a third implementation of the icon generating method according to the embodiment of the present application;
fig. 4 is a schematic diagram of an action icon of a human body according to an embodiment of the present disclosure;
fig. 5A is a scene schematic diagram of determining an action icon of a human body according to an embodiment of the present application;
fig. 5B is a scene schematic diagram of a graph corresponding to a determined human body key point according to the embodiment of the present application;
fig. 6A is a schematic view of a scenario for determining an upper limb icon and a lower limb icon according to an embodiment of the present application;
fig. 6B is a schematic diagram of another human body action icon according to the embodiment of the present disclosure;
fig. 7 is a schematic structural component diagram of an icon generating apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural component diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, specific technical solutions of the present invention will be described in further detail below with reference to the accompanying drawings in the embodiments of the present application. The following examples are intended to illustrate the examples of the present application, but are not intended to limit the scope of the examples of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, references to the terms "first \ second \ third" are only used to distinguish similar objects and do not denote a particular order; it is to be understood that "first \ second \ third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the embodiments of this application belong. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of embodiments of the present application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Computer Vision (CV) is a science that studies how to make machines "see": it uses cameras and computers, instead of human eyes, to recognize, track and measure objects, and further processes the resulting images.
2) Deep Learning (DL) is a new research direction in the field of Machine Learning (ML), introduced to bring machine learning closer to its original goal: Artificial Intelligence (AI). DL learns the intrinsic laws and representation hierarchies of sample data, and the information obtained in the learning process is very helpful for interpreting data such as text, images and sound; its ultimate aim is to enable machines to analyse and learn like humans, and to recognize data such as text, images and sound.
3) Human body key point Detection (HKD), also known as Human body posture estimation, is a relatively basic task in CV and is a pre-task for Human body action recognition, behavior analysis, Human-computer interaction, and the like; in general, detection of key points of a human body can be subdivided into single/multi-person key point detection and two-dimensional/three-dimensional key point detection, and meanwhile, an algorithm can also track key points after the key point detection is finished, and the algorithm is also called as human body posture tracking.
An exemplary application of the computer device provided in the embodiments of the present application is described below, and the device provided in the embodiments of the present application may be implemented as various types of user terminals such as a notebook computer with an image capturing function, a tablet computer, a desktop computer, a camera, a mobile device (e.g., a personal digital assistant, a dedicated messaging device, and a portable game device), and may also be implemented as a server. In the following, an exemplary application will be explained when the device is implemented as a terminal or a server.
The method can be applied to a computer device, and the functions realized by the method can be implemented by a processor in the computer device calling program code; the program code can be stored in a computer storage medium, so the computer device comprises at least a processor and a storage medium.
An embodiment of the present application provides an icon generating method, as shown in fig. 1, which is described with reference to the steps shown in fig. 1:
step S101, acquiring a picture content including an image to be detected of a human body.
In some embodiments, the image to be detected may be acquired by the icon generating device through an internal image acquisition module, or may be an image to be detected which is received by the icon generating device and sent by a device or equipment capable of performing information interaction with the icon generating device; correspondingly, the image to be detected can be a color image or a gray image, the human body can be located in a foreground region, a middle scene region and a background region of the image to be detected, and the region where the human body is located in the image to be detected and the like is not limited in any way in the embodiment of the application.
In some embodiments, the human body in the image to be detected refers to a human body capable of presenting all body information in the image to be detected, or a human body presenting part of body information; in the case where the human body presents all body information in the image to be detected, such as: a head, two arms, a torso, and two legs, etc.; it may also be that partial body information is presented, such as: head, one arm and two legs, etc. Meanwhile, the posture information presented by the human body in the image to be detected can be standing, lying or half-squatting.
In some embodiments, the pose information of the human body presented in the image to be detected may be: standing, walking, sitting, etc. Meanwhile, the picture content of the image to be detected comprises but is not limited to a human body; the picture content of the image to be detected can be that the human body is in any scene, such as: classrooms, parks, offices or gaming venues, etc.; meanwhile, the number of the picture contents including human bodies in the image to be detected can be one, two or more; when the number of the human bodies included in the picture content of the image to be detected is two or more, the relative position relationship between the human bodies in the picture content may be: left-right, front-back, up-down, etc.
And S102, determining human body key points of the human body in the image to be detected.
In some embodiments, human keypoints may refer to at least some of the following keypoints of the human body: head, neck, left and right shoulders, left and right elbows, left and right hands (left and right wrists), left and right hips, left and right knees, left and right feet (left and right ankles), etc.; correspondingly, a human body detection algorithm can be adopted to identify the image to be detected to obtain human body information in the image to be detected, and the human body information is detected based on a posture estimation algorithm to obtain human body key points.
In some possible implementation manners, after the human body detection is performed on the image to be detected and the human body detection frame in the image to be detected is determined, the human body detection frame is identified to obtain a plurality of human body key points, that is, the step S102 may be implemented by the following processes:
firstly, determining a human body detection frame of the human body in the image to be detected.
In some embodiments, the human body detection frame is a detection frame capable of enclosing the body parts of a human body in the image to be detected. Exemplarily, assuming that all body parts of a human body are presented in the image to be detected, the human body detection frame is the minimum rectangular frame that can surround the head, two arms, torso, two legs and so on of the human body; meanwhile, the interior of the human body detection frame encloses the human body key points, such as: the head, neck, shoulders, elbows, hands, hips, knees and feet. Exemplarily, assuming that only part of the body of a human body is presented in the image to be detected, the human body detection frame is the minimum rectangular frame that can surround the head, an arm, part of the torso and so on presented in the image.
In some embodiments, the human body detection frame is determined by detecting the human body in the image to be detected through the trained detection model. Therefore, the human body detection frame with higher precision can be quickly and conveniently provided.
And secondly, identifying a plurality of human key points in the human detection frame.
In some embodiments, a posture estimation algorithm is adopted to identify the human body information in the human body detection frame to obtain a plurality of human body key points. The human body key points can be 21 key points of the human body, namely key points of the main joint parts such as the top of the head, the facial features, the neck and the four limbs; or they may be 14 human body key points, namely key points of the main joint parts including the top of the head, the neck and the four limbs. Illustratively, the human body key points include: a vertex key point, a neck key point, left and right shoulder key points, left and right elbow key points, left and right wrist key points, left and right hip key points, left and right knee key points, left and right ankle key points, and the like.
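The 14-point scheme named above can be written out as an index map; the exact ordering below is an assumption, since keypoint detectors differ in their conventions.

```python
# One possible ordering of the 14 human body key points (illustrative).
KEYPOINTS_14 = [
    "vertex", "neck",
    "l_shoulder", "r_shoulder",
    "l_elbow", "r_elbow",
    "l_wrist", "r_wrist",
    "l_hip", "r_hip",
    "l_knee", "r_knee",
    "l_ankle", "r_ankle",
]
INDEX = {name: i for i, name in enumerate(KEYPOINTS_14)}
```

A detector's raw output is then typically a flat array indexed by this table, which the later steps read through names rather than magic numbers.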
And step S103, determining attribute information of the human body key points.
In some embodiments, the attribute information of a human body key point includes, but is not limited to: the body part to which the human body key point belongs, the position information of the human body key point in the image to be detected, the position information of the human body key point in the human body detection frame, the confidence that the human body key point is a true point, and so on. The body part to which each human body key point belongs can be determined through pattern recognition, and the position information of each human body key point can be obtained by locating the key points in the image to be detected. The position information of a human body key point can be represented by two-dimensional coordinates, and may refer to coordinate parameters either in the image physical coordinate system or in the image pixel coordinate system.
In some possible implementation manners, based on the above-mentioned human body detection algorithm, human body identification is performed on the image to be detected to obtain a human body detection frame, and then human body key points are identified in the human body detection frame; similarly, the position information of each human body key point can also be determined in the human body detection frame, that is, the position information of the human body key points can be determined through the following processes:
and determining the position information of the plurality of human key points in the human detection frame.
In some embodiments, the human body key points of the preset part in the human body detection frame are identified, and the position information of each human body key point is determined based on the position information of the human body detection frame in the image to be detected.
In some embodiments, a human body detection frame or an image to be detected can be input into a trained human body key point detection neural network model, and human body key points are identified to obtain a plurality of human body key points; and simultaneously, positioning the plurality of human key points in the human body detection frame to obtain a positioning result, and further determining the position information of the plurality of human key points based on the position information of the positioning result in the image to be detected.
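The localisation step above amounts to mapping key points predicted relative to the human body detection frame back into image coordinates using the frame's position in the image to be detected. The coordinate conventions (top-left origin, pixel units) are assumptions.

```python
def to_image_coords(box_xy, box_points):
    """box_xy: (x, y) of the detection frame's top-left corner in the
    image to be detected. box_points: {name: (x, y)} relative to that
    corner. Returns the key points in image coordinates."""
    bx, by = box_xy
    return {name: (bx + x, by + y) for name, (x, y) in box_points.items()}

points = to_image_coords((100, 40), {"neck": (20, 15), "l_hip": (18, 90)})
```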
In some embodiments, each human key point and the position information of each human key point are sequentially determined based on the human detection frame in the image to be detected. Therefore, the human key points with higher precision and the position information of the human key points can be efficiently and conveniently provided.
And step S104, generating the action icon of the human body based on the attribute information.
In some embodiments, the action icon of the human body may be generated based on the position information of the key points of the human body and the graph matched with the key points of the human body at the same time. For example, the action icon of the human body may be generated based on a graphic matched with a body part to which each human body key point belongs and position information of each human body key point. Such as: the corresponding body part can be determined based on the position of each human body key point, then based on the corresponding body part, the position information of each human body key point is marked by graphs with different shapes and/or different sizes so as to generate a marked graph corresponding to each human body key point, and the marked graphs are spliced according to a human body model so as to generate the action icon of the human body.
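The body-part-to-graph lookup described above can be sketched as a small table; the particular shapes and sizes are assumptions, as the patent only says the marks may differ in shape and/or size per body part.

```python
# Illustrative graph parameters per body part (all values are assumed).
GRAPH_PARAMS = {
    "head": {"shape": "circle", "size": 12},
    "upper_limb": {"shape": "circle", "size": 6},
    "lower_limb": {"shape": "circle", "size": 7},
    "torso": {"shape": "rect", "size": 10},
}

def marker_for(body_part, position):
    """Build the marked graph for one key point from its body part and
    position information."""
    params = GRAPH_PARAMS[body_part]
    return {"position": position, **params}

m = marker_for("head", (50, 10))
```

Splicing the markers produced this way, according to a human body model, is what the later steps assemble into the action icon.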
In some embodiments, the action icon of the human body is used for representing the posture information presented by the human body in the image to be detected; exemplarily, if the human body in the image to be detected is standing, the action icon of the human body is a standing human body icon; and if the human body in the image to be detected is in a motion state, the action icon of the human body is a motion human body icon. In addition, the upper and lower limb parts, the head part and the body part of the human body in the action icon of the human body can be marked by different color areas.
According to the icon generation method provided by the embodiment of the application, firstly, the picture content including an image to be detected of a human body is obtained; secondly, determining human body key points of the human body in the image to be detected; determining attribute information of the human body key points; and finally, generating the action icon of the human body based on the attribute information. Therefore, the action icon corresponding to the human body in the image to be detected is determined based on the attribute information of the human body key point in the image to be detected, so that a professional designer and icon design software are not needed to generate the relevant action icon, and the action icon matched with the human body action in the image to be detected can be conveniently generated in real time.
In some embodiments, in the case where the attribute information includes the body part to which the human body key point belongs and the position information, first, a figure matching the human body key point is determined based on the body part to which the human body key point belongs, and then, an action icon of the human body is generated based on the figure matching the human body key point and the position information of the human body key point. Therefore, the action icon matched with the human body posture information in the image to be detected can be conveniently generated. As shown in fig. 2, fig. 2 is a schematic flow chart of a second implementation of the icon generating method according to the embodiment of the present application; the following is described in conjunction with the steps shown in fig. 1 and 2:
step S201, determining a graph matched with the human body key points based on the body parts of the human body key points.
In some embodiments, the graphics matching the human keypoints are determined based on the body part to which each human keypoint belongs. The body part to which each key point belongs may be a left shoulder, a right shoulder, a left elbow, a right wrist of an upper limb, or a left knee, a right knee, a left ankle, or the like of a lower limb. Meanwhile, the figure matched with the key points of the human body can be a circle, a square, a star figure or a hexagon.
In some embodiments, the size of the graphics, the shape of the graphics, and the color parameters filled in the interior of the graphics between each two graphics matching the key points of the human body may be completely different, or completely or partially the same.
In some possible implementations, the graph matching the human body key points is determined by a graph parameter matching a body part to which the human body key points belong. That is, the above step S201 can be realized by the following steps S211 and S212:
and step S211, determining the graphic parameters matched with the body part to which the key points of the human body belong.
In some embodiments, after the body part to which the key points of the human body belong is identified, the graphic parameters matched with the key points of the human body are determined according to a preset mapping relation table based on the identification result. Wherein, the mapping relation table includes the matched graphic parameters of each human body key point, such as: pattern size, pattern shape, and pattern or color parameters filled inside the pattern.
In some embodiments, the graphical parameters that match the body part to which the human keypoints belong include, but are not limited to: pattern size, pattern shape, pattern or color parameters filled in the pattern, and the like. Meanwhile, the two patterns matched with the key points of the human body can have the same shape and different sizes, can also have the same shape and different sizes, and can also have the same shape and size. Illustratively, the graphic parameters that match the shoulder keypoints are: the radius is a circle with a first radius, the graphic filling color parameter is a first parameter, and the graphic parameter matched with the elbow key point is as follows: the radius is the circle of second radius, the figure fills the color parameter and is the second parameter, and the figure parameter of matching with ankle key point is: the side length is the positive direction of the first side length, and the graphic filling color parameter is a third parameter; the first radius is larger than the second radius, the first side length is larger than the second radius and smaller than the first radius, and the first parameter and the second parameter are the same and different from the third parameter. Illustratively, the first radius is 2 centimeters, the second radius is 1 centimeter and the first side length is 1.5 centimeters, while the first parameter is white, the second parameter is red and the third parameter is blue.
And step S212, determining a graph matched with the human body key points based on the graph parameters.
In some embodiments, the graphics matching the human body keypoints are determined based on graphics parameters matching the body part to which the human body keypoints belong. Illustratively, determining that the graph matched with the knee key points is a circle with the radius of the first size based on the graph size of the graph matched with the knee key points being the first size and the graph shape being the circle; and determining that the graph matched with the shoulder key points is a square with the side length of the second size based on that the graph size is the second size and the graph shape is the square in the graph parameters matched with the shoulder key points. Therefore, the corresponding graph is determined through the body part to which the key point belongs, the graph corresponding to each human body key point can be refined, and the accuracy of determining the action icon of the human body based on the graph matched with the human body key point can be improved.
And step S202, generating an action icon of the human body based on the graph matched with the human body key point and the position information of the human body key point.
In some embodiments, first, the position information of each human body key point may be marked based on a graph matching with each human body key point, for example, a graph mark is generated by taking the human body key point as a graph center to respectively determine a head icon, a shoulder icon, an elbow icon, a wrist icon, a hip icon, a knee icon, and an ankle icon of the human body; secondly, splicing the icons to determine a head icon and a limb icon of the human body, and finally generating a motion icon of the human body based on the head icon and the limb icon of the human body.
In some embodiments, the action icon corresponding to the human body in the image to be detected is generated through the graph corresponding to each human body key point and the position information of each human body key point in the image to be detected. Therefore, the generated action icon of the human body can be more matched with the posture information presented by the human body in the image to be detected.
In some embodiments, in a case that the key points of the human body include a head key point, an upper limb key point, and a lower limb key point, based on a graph matched with the head key point, the upper limb key point, and the lower limb key point and corresponding position information, a head icon, an upper limb icon, and a lower limb icon of the human body are respectively generated, and then an action icon of the human body is generated based on the head icon, the upper limb icon, and the lower limb icon. Therefore, the action icon matched with the human body posture information in the image to be detected can be efficiently and accurately generated. As shown in fig. 3, fig. 3 is a schematic flow chart of a third implementation of the icon generating method according to the embodiment of the present application; the following description is made in conjunction with the steps shown in fig. 1 to 3:
step S301, generating a head icon of the human body based on the graph matched with the head key point and the position information of the head key point.
In some embodiments, a graph matched with the head key points can be adopted to mark the position information of the head key points in the image to be detected so as to generate a head icon of the human body; or covering the graph matched with the head key point to the area corresponding to the position information of the head key point in the image to be detected, and filling the graph matched with the head key point to generate the head icon of the human body.
In some embodiments, a first region of an image or canvas to be marked may be marked with a graphic matching the head key point, thereby generating an icon that is the same as the head action in the image to be detected.
In some embodiments, in a case that the head key points include a head key point and a neck key point, the head icon may be determined based on the position information corresponding to the neck key point and the head key point, respectively, and the graphics matched with the head key point, that is, the step S301 may be implemented by the following steps S311 and S312:
step S311, determining an identification graph based on the position information of the vertex key point, the position information of the neck key point, and the graph matched with the head key point.
In some embodiments, a graph, such as a circle or a square, matching the head key point (which may be referred to as a vertex key point and/or a neck key point) may be used to cover the position areas of the head key point and the neck key point in the image to be detected, and the graph matching the head key point is adjusted based on the distance between the position information of the vertex key point and the position information of the neck key point to obtain the identification graph.
Step S312, the identification graph is filled to obtain the head icon.
In some embodiments, the identification graphics may be color-filled or rendered according to preset color parameters to obtain a head icon; the head icon may be obtained by filling the inside area of the logo pattern according to a preset pattern. In this way, the determined head icon can be more matched with the head region of the human body in the image to be detected.
Step S302, generating an upper limb icon of the human body based on the graph matched with the upper limb key point and the position information of the upper limb key point.
In some embodiments, a graph matched with the upper limb key point can be adopted to mark the position information of the upper limb key point in the image to be detected so as to generate an upper limb icon of the human body; or covering the graph matched with the upper limb key point to the area corresponding to the position information of the upper limb key point in the image to be detected, and filling the graph matched with the upper limb key point to generate an upper limb icon of the human body; wherein the upper limb icon may be an icon represented by a polygon. In some embodiments, the upper extremity keypoints include left and right upper extremity keypoints, i.e., including left and right shoulder keypoints, left and right elbow keypoints, and left and right wrist keypoints; namely, the position information corresponding to each upper limb key point in the image to be detected can be marked by adopting the graphs respectively matched with the key points of the left and right shoulders, the key points of the left and right elbows and the key points of the left and right wrists, then the plurality of graphs are connected according to the upper limb structural model of the human body so as to generate the graph matched with the upper limb of the human body, and finally the graph matched with the upper limb of the human body is filled so as to obtain the upper limb icon of the human body. The human upper limb structure model represents the sequence of the sequential connection of shoulder-arm-elbow-forearm-wrist-hand in the human body and corresponds to the key points of the human body, namely represents the sequential connection of the shoulder key points-elbow key points-wrist key points.
Step S303, generating a lower limb icon of the human body based on the graph matched with the lower limb key points, the position information of the lower limb key points and the position information of the waist key points.
In some embodiments, a graph matched with the lower limb key points can be adopted to mark the position information of the lower limb key points in the image to be detected so as to generate a lower limb icon of the human body; or covering the graph matched with the lower limb key point to an area corresponding to the position information of the lower limb key point in the image to be detected, and filling the graph matched with the lower limb key point to generate a lower limb icon of the human body; the lower limb icon may be an icon represented by a polygon.
In some embodiments, the lower limb keypoints include left and right lower limb keypoints, i.e., including left and right hip keypoints, left and right knee keypoints, and left and right ankle keypoints; the method comprises the steps of marking position information corresponding to each lower limb key point in an image to be detected by adopting graphs respectively matched with left and right hip key points, left and right knee key points and left and right ankle key points, connecting the graphs according to a human lower limb structure model to generate graphs matched with the lower limbs of a human body, and filling the graphs matched with the lower limbs of the human body to obtain a lower limb icon of the human body. The human body lower limb structure model represents the sequence of sequentially connecting hip, thigh, knee, shin and foot in a human body and corresponds to key points of the human body, namely represents the sequence of sequentially connecting the hip key points, knee key points and ankle key points.
Step S304, generating an action icon of the human body based on the head icon, the upper limb icon and the lower limb icon.
In some embodiments, the head icon, the upper limb icon and the lower limb icon may be combined to generate an action icon of the human body, or the head icon, the upper limb icon and the lower limb icon in the image to be detected may be uniformly filled according to the first color parameter, and meanwhile, the region except the head icon, the upper limb icon and the lower limb icon in the image to be detected is filled according to the second color parameter to generate an action icon of the human body; wherein the first color parameter is different from the second color parameter. Therefore, the action icon matched with the human body posture information in the image to be detected can be efficiently and accurately generated.
In some embodiments, a graph matched with the head key point, a graph matched with the upper limb key point, and a graph matched with the lower limb key point may be adopted, and according to the position information of the head key point, the position information of the upper limb key point, and the position information of the lower limb key point, the image to be marked or the canvas to be marked is marked, so as to generate a human body action icon which is the same as or similar to the posture information of the human body in the image to be detected.
In some possible implementation manners, the head icon, the upper limb icon, and the lower limb icon in the image to be detected may be rendered in a unified manner, so that the head icon, the upper limb icon, and the lower limb icon of the human body are distinguished from other regions in the image to be detected, and then the action icon of the human body with higher recognition degree is generated, that is, the step S304 may be implemented through the following processes:
the method comprises the following steps of firstly, rendering the head icon, the upper limb icon and the lower limb icon in the image to be detected by adopting a first color parameter to obtain an intermediate image.
In some embodiments, the first color parameter may be a preset parameter. The intermediate image is obtained by rendering the head icon, the upper limb icon and the lower limb icon by adopting the first color parameter. The picture of the intermediate image includes: the method comprises the steps of rendering a head icon, an upper limb icon and a lower limb icon by adopting first color parameters, and a background area except the icons. The background area may be a black and white area or a color area.
And secondly, rendering the background area of the intermediate image by adopting a second color parameter to obtain the action icon of the human body.
In some embodiments, the background area is an area of the screen content of the intermediate image occupied except for the head icon, the upper limb icon, and the lower limb icon. No matter the color of the background area in the intermediate image is black and white or color, the background area can be rendered by adopting the second color parameter according to the user requirement, so that the color of the background area is rendered into the color corresponding to the second color parameter, and thus, the user requirement can be better met. Fig. 4 is a schematic diagram of an action icon of a human body according to an embodiment of the present disclosure; in the image corresponding to the action icon of the human body, the area occupied by the action icon and the background area except the action icon are displayed in different display modes.
In some embodiments, the head icon, the upper limb icon and the lower limb icon in the image to be detected are rendered in a unified manner by adopting the first color parameter, and the background area except the head icon, the upper limb icon and the lower limb icon in the image to be detected is rendered by adopting the second color parameter, so that the action icon of a human body can be more highlighted in the image to be detected, and meanwhile, the color parameter of the generated action icon can be determined as required, and the flexibility and the usability of the generated action icon can be further improved.
In some embodiments, in the case that the upper limb key points include a shoulder key point, an elbow key point and a wrist key point, the upper limb icon of the human body is generated based on the relevant information of the shoulder key point, the elbow key point and the wrist key point, such as corresponding graphic and position information, at the same time, that is, the above step S302 may be implemented by the following steps S321 to S324:
step S321, determining a first graph based on the position information of the shoulder key points and the graph matched with the shoulder key points.
In some embodiments, a graph matched with the shoulder key points can be adopted to mark the position information of the shoulder key points in the image to be detected so as to generate a first graph representing the shoulder key points; or covering the graph matched with the shoulder key points to the area corresponding to the position information of the shoulder key points in the image to be detected, and filling the graph matched with the shoulder key points to generate a first graph representing the shoulder key points.
In some embodiments, a graphic matching with the shoulder key point may be used to mark a second area in an image to be marked or a canvas to be marked, so as to generate an icon identical to the shoulder action in the image to be detected. And the first position relation between the second area and the first area is in one-to-one correspondence with the second position relation between the position information of the head key point and the shoulder key point in the image to be detected.
Step S322, based on the position information of the elbow key point and the graph matched with the elbow key point, determining a second graph.
Step S323, determining a third graph based on the position information of the wrist key point and the graph matching the wrist key point.
In some possible implementations, the implementation process through step S322 and step S323 is similar to the implementation process of step S321, that is, the second graph and the third graph are determined based on the location information of the relevant keypoints and the corresponding graphs at the same time.
In some embodiments, the shoulder keypoints, elbow keypoints, and wrist keypoints comprise left and right shoulder keypoints, left and right elbow keypoints, and left and right wrist keypoints, respectively. Wherein, the shapes of the graphs matched with the shoulder key points, the elbow key points and the wrist key points can be the same or different; meanwhile, the first area corresponding to the graph matched with the shoulder key point is larger than the second area corresponding to the graph matched with the elbow key point, and the second area corresponding to the graph matched with the elbow key point is larger than the third area corresponding to the wrist key point. Illustratively, where the graphics matching the shoulder keypoints, the elbow keypoints, and the wrist keypoints are identical in shape and are all circular, a first radius of a first circle matching the shoulder keypoints is greater than a second radius of a second circle matching the elbow keypoints, a second radius of the second circle matching the elbow keypoints is greater than a third radius of a third circle matching the wrist keypoints.
In some embodiments, because the human body posture in the image to be detected represents standing, moving, half-squatting, or the like, the relative positions of the shoulder key point, the elbow key point, and the wrist key point in the image to be detected may be up-down, left-right, or the like, that is, in the image to be detected, the relative positions of the first graph, the second graph, and the third graph may be up-down, left-right, or the like.
Step S324, the first graph, the second graph, and the third graph are spliced to generate the upper limb icon.
In some embodiments, the first graph, the second graph and the third graph are sequentially spliced to generate an upper limb icon of the human body; wherein the upper limb icons of the human body comprise a left upper limb icon and a right upper limb icon; meanwhile, the shapes of the corresponding graphs of the left upper limb icon and the right upper limb icon can be the same or different, and the relative positions of the left upper limb icon and the right upper limb icon in the image to be detected can be partially overlapped, adjacent or far away. Therefore, the upper limb icon of the human body is determined simultaneously based on the position information corresponding to the shoulder key point, the elbow key point and the wrist key point and the graph matched with the position information, and the generated upper limb icon can be matched with the upper limb posture information of the human body in the image to be detected.
In some possible implementation manners, the first graph, the second graph, and the third graph may be connected by a connection manner corresponding to the upper limb structure model of the human body, so as to obtain corresponding upper limb icons of the human body, that is, the step S324 may be implemented by:
firstly, a first connection mode is determined according to a structural model of an upper limb of a human body.
In some embodiments, the human upper limb structure model represents the sequential connection order of shoulder-arm-elbow-forearm-wrist-hand in the human body, corresponding to the human body key points, i.e. representing the sequential connection order of shoulder key points-elbow key points-wrist key points. The first connection mode determined according to the upper limb structure model can refer to connecting a shoulder key point and an elbow key point in the key points of the human body, and connecting the elbow key point and a wrist key point; and the key points corresponding to the left upper limb are connected according to a first connection mode, and the key points corresponding to the right upper limb are connected according to the first connection mode.
And secondly, connecting the first graph, the second graph and the third graph based on the first connection mode to obtain a first polygon.
In some embodiments, based on the determined first connection manner, the first graph corresponding to the shoulder key point and the second graph corresponding to the elbow key point are connected, and the second graph corresponding to the elbow key point and the third graph corresponding to the wrist key point are connected to obtain the first polygon. Wherein, the first polygon comprises a polygon corresponding to the left upper limb and a polygon corresponding to the right upper limb.
In some embodiments, it is assumed that the first graph, the second graph and the third graph are circles with successively decreasing radius sizes (here, the left upper limb may be taken as an example, and the right upper limb may be taken as an example); firstly, connecting a first circle center of a first graph with a second circle center of a second graph to obtain a first connecting line, and connecting the second circle center of the second graph with a third circle center of a third graph to obtain a second connecting line; secondly, respectively determining a first vertical line which is perpendicular to the first connecting line and passes through a first circle center, a second vertical line which is perpendicular to the first connecting line and passes through a second circle center, a third vertical line which is perpendicular to the second connecting line and passes through the second circle center, and a fourth vertical line which is perpendicular to the second connecting line and passes through the third circle center; then, two intersections of the first vertical line and the first graph, two intersections of the second vertical line and the second graph, two intersections of the third vertical line and the second graph, and two intersections of the fourth vertical line and the third graph are obtained; and finally, connecting the eight cross points according to a first connection mode to obtain a first polygon.
The eight intersections may be connected according to a first connection manner by first connecting two intersections of the first vertical line and the first pattern, correspondingly connecting two intersections of the second vertical line and the second pattern to obtain two first parallel lines, and connecting two intersections of the third vertical line and the second pattern, correspondingly connecting two intersections of the fourth vertical line and the third pattern to obtain two second parallel lines; connecting two cross points of the second vertical line and the second graph and correspondingly connecting two cross points of the third vertical line and the second graph to obtain two third parallel connecting lines; then, connecting two intersections of the first vertical line and the first graph by adopting a curve with a preset radian to obtain a first arc line, and connecting two intersections of the fourth vertical line and the third graph to obtain a second arc line; finally, a first polygon is obtained based on the first parallel line, the third parallel line, the second parallel line, the first arc line and the second arc line.
In some embodiments, the polygon corresponding to the left upper limb and the polygon corresponding to the right upper limb may be connected to obtain the upper limb icon. The connection mode of the polygon corresponding to the left upper limb and the polygon corresponding to the right upper limb can refer to the first connection mode.
In some embodiments, first, based on the position information of the left shoulder key point and the position information of the right shoulder key point, determining a middle point of the left shoulder key point and the right shoulder key point, and determining the middle point as a first upper limb key point of the human body, and simultaneously determining the position information of the middle point as the position information of the first upper limb key point of the human body; and determining a graph matched with the first upper limb key point based on the graph matched with the left shoulder key point and the graph matched with the right shoulder key point, wherein the implementation mode of the determined graph is similar to the implementation mode of determining the graph matched with the waist key point through the graph matched with the left hip key point and the graph matched with the right hip key point. Second, a first graph may be determined based on the position information of the first upper limb keypoints and the graph matching the first upper limb keypoints. Then, the upper limb icon of the human body can be determined together according to the second graph and the third graph determined in the above embodiment; and the upper limb icon is a left upper limb icon and a right upper limb icon which are connected based on the first upper limb key point.
Meanwhile, the determined upper limb icons of the human body are respectively based on the left upper limb icon corresponding to the left shoulder key point and the right upper limb icon corresponding to the right shoulder key point based on the position information corresponding to the shoulder key point, the elbow key point and the wrist key point and the graphs matched with the position information.
And filling the first polygon to obtain the upper limb icon.
In some embodiments, the first polygon may be color-filled or rendered according to a preset color parameter to obtain an upper limb icon; the first polygon inner area may be filled in according to a preset pattern to obtain the upper limb icon. Therefore, the determined upper limb icon can be matched with the upper limb area of the human body in the image to be detected.
In some possible implementations, in the case that the lower limb key points include a hip key point, a knee key point, and an ankle key point, the lower limb icon of the human body is generated based on the relevant information of the hip key point, the knee key point, and the ankle key point, such as corresponding graphics and position information, at the same time, that is, the above step S303 may be implemented by the following steps S331 to S334:
in step S331, a fourth graph is determined based on the position information with the hip key points and the graph matching the hip key points.
In some embodiments, a graph matched with the hip key points can be adopted to mark the position information of the hip key points in the image to be detected so as to generate a fourth graph representing the hip key points; or covering the graph matched with the hip key points to the area corresponding to the position information of the hip key points in the image to be detected, and filling the graph matched with the hip key points to generate a fourth graph representing the hip key points.
In some embodiments, a third area in an image to be marked or a canvas to be marked may be marked with the graph matching the hip key points, thereby generating an icon matching the hip pose in the image to be detected. The third position relation between the third area and the first area corresponds to the fourth position relation between the position information of the head key point and that of the hip key point in the image to be detected.
In some possible implementation manners, in a case that the hip key points include a left hip key point and a right hip key point, the position information of the waist key point and the graph matching with the waist key point may be determined by using the graph and the position information corresponding to the left hip key point and the right hip key point, and then the fourth graph is determined based on the position information of the waist key point and the graph matching with the waist key point, that is, the step S331 may be implemented by:
the method comprises the steps of firstly, determining a middle point of a left hip key point and a right hip key point based on position information of the left hip key point and position information of the right hip key point.
In some embodiments, the left hip key point and the right hip key point identified in the image to be detected may be connected to determine a middle point therebetween, that is, the middle point located on a connection line between the left hip key point and the right hip key point may be determined according to the position information of the left hip key point and the position information of the right hip key point.
And secondly, determining the middle point as a waist key point of the human body.
In some embodiments, the middle point of the image to be detected, which is located between the key point of the left hip and the key point of the right hip, is determined as the key point of the waist of the human body.
And thirdly, determining the position information of the middle point as the position information of the waist key point of the human body.
In some embodiments, the middle point may be identified by an image identification algorithm to obtain its position information; alternatively, the position information of the middle point, that is, the position information of the waist key point, may be computed from the position information of the left hip key point and the position information of the right hip key point.
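The midpoint computation in the first and third steps is elementary; a sketch with hypothetical hip coordinates:

```python
def waist_keypoint(left_hip, right_hip):
    """Waist key point = midpoint of the left and right hip key points."""
    return ((left_hip[0] + right_hip[0]) / 2.0,
            (left_hip[1] + right_hip[1]) / 2.0)

# hypothetical pixel coordinates of the detected hip key points
waist = waist_keypoint((100, 200), (140, 204))  # -> (120.0, 202.0)
```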
And fourthly, determining a graph matched with the waist key point based on the graph matched with the left hip key point and the graph matched with the right hip key point.
In some embodiments, graphics matching the left hip keypoints and graphics matching the right hip keypoints may be fused to obtain graphics matching the waist keypoints. For example, in the case that the size and shape of the graph matching the left hip key point and the graph matching the right hip key point are the same, any one of the graph matching the left hip key point and the graph matching the right hip key point may be adjusted according to the preset graph size to obtain the graph matching the waist key point.
In some embodiments, the graph matching the waist keypoints may also be determined based on a mapping relation table including graph parameters matching each human keypoint.
And fifthly, determining the fourth graph based on the position information of the waist key points and the graph matched with the waist key points.
In some embodiments, the graph matched with the waist key point can be adopted to mark the position information of the waist key point in the image to be detected so as to generate a fourth graph representing the waist key point; or covering the graph matched with the waist key point to the region corresponding to the position information of the waist key point in the image to be detected, and filling the graph matched with the waist key point to generate a fourth graph representing the waist key point.
In some embodiments, a fourth area in an image to be marked or a canvas to be marked may be marked with the graph matching the waist key point, thereby generating an icon matching the waist pose in the image to be detected. The fifth position relation between the fourth area and the first area corresponds to the sixth position relation between the position information of the head key point and that of the waist key point in the image to be detected. In this way, an action icon related to the waist posture of the human body in the image to be detected can be conveniently generated.
Step S332, determining a fifth graph based on the position information of the knee key point and the graph matched with the knee key point.
Step S333, determining a sixth graph based on the position information of the ankle key point and the graph matched with the ankle key point.
In some possible implementations, the implementation processes of step S332 and step S333 are similar to that of step S331, that is, the fifth graph and the sixth graph are determined based on the position information of the relevant key points and the corresponding graphs.
In some embodiments, the hip key points, knee key points and ankle key points comprise left and right hip key points, left and right knee key points, and left and right ankle key points, respectively. The shapes of the graphs matched with the hip key point, the knee key point and the ankle key point can be the same or different; meanwhile, the fourth area corresponding to the graph matched with the hip key point is larger than the fifth area corresponding to the graph matched with the knee key point, and the fifth area is larger than the sixth area corresponding to the graph matched with the ankle key point. Illustratively, in the case where the graphs matching the hip key point, the knee key point and the ankle key point are all circles, the fourth radius of the fourth circle matching the hip key point is greater than the fifth radius of the fifth circle matching the knee key point, which in turn is greater than the sixth radius of the sixth circle matching the ankle key point.
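The size hierarchy (hip > knee > ankle) can be sketched by rasterizing circles of decreasing radius, as cv2.circle with a negative thickness would; the concrete radii and positions below are hypothetical, since the text only constrains their order:

```python
import numpy as np

# hypothetical radii; the text only requires hip > knee > ankle
RADII = {"hip": 18, "knee": 12, "ankle": 7}

def draw_circle(mask, center, r):
    """Rasterize a filled circle (cf. cv2.circle with thickness=-1)."""
    yy, xx = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    mask[np.hypot(xx - center[0], yy - center[1]) <= r] = 255
    return mask

mask = np.zeros((100, 100), np.uint8)
for part, cy in [("hip", 20), ("knee", 55), ("ankle", 85)]:
    draw_circle(mask, (50, cy), RADII[part])  # one leg, stacked vertically
```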
In some embodiments, because the human body posture in the image to be detected shows standing, moving, half-squatting, and the like, the relative positions of the hip key point, the knee key point, and the ankle key point in the image to be detected may be up-down, left-right, and the like, that is, in the image to be detected, the relative positions of the fourth graph, the fifth graph, and the sixth graph may be up-down, left-right, and the like.
And step 334, splicing the fourth graph, the fifth graph and the sixth graph to generate the lower limb icon.
In some embodiments, the fourth graph, the fifth graph and the sixth graph are sequentially spliced to generate a lower limb icon of the human body; the lower limb icons of the human body comprise a left lower limb icon and a right lower limb icon; meanwhile, the shapes of the corresponding graphs of the left lower limb icon and the right lower limb icon can be the same or different, and the relative positions of the left lower limb icon and the right lower limb icon in the image to be detected can be partially overlapped, adjacent or far away. Therefore, the lower limb icon of the human body is determined simultaneously based on the position information corresponding to the hip key point (or the waist key point), the knee key point and the ankle key point respectively and the graph matched with the position information, and the generated lower limb icon can be matched with the lower limb posture information of the human body in the image to be detected.
In some embodiments, the lower limb icons of the human body are determined based on the position information corresponding to the hip key points, the knee key points and the ankle key points and the graphs matched with the position information; in this case the lower limb icons are a left lower limb icon corresponding to the left hip key point and a right lower limb icon corresponding to the right hip key point. Alternatively, the lower limb icons of the human body may be determined based on the position information corresponding to the waist key point, the knee key points and the ankle key points and the graphs matched with the position information; in this case the lower limb icons are a left lower limb icon and a right lower limb icon connected through the waist key point.
In some embodiments, the lower limb icons are left and right lower limb icons connected based on the fourth graphic corresponding to the left hip key point and the fourth graphic corresponding to the right hip key point.
In some embodiments, the lower limb icon is an icon where the left lower limb icon and the right lower limb icon are unconnected.
In some possible implementation manners, the fourth graph, the fifth graph, and the sixth graph may be connected by a connection manner corresponding to the lower limb structure model of the human body, so as to obtain corresponding lower limb icons of the human body, that is, the step S334 may be implemented by:
firstly, determining a second connection mode according to a lower limb structure model of a human body.
In some embodiments, the human lower limb structure model represents the sequential hip-thigh-knee-shin-foot connection in the human body, corresponding to the sequential hip-knee-ankle connection among the human body key points. The second connection mode determined according to the human lower limb structure model specifies that a hip key point (or the waist key point) is connected with the corresponding knee key point, and the knee key point is connected with the corresponding ankle key point; the key points corresponding to the left lower limb are connected in the second connection mode, and so are the key points corresponding to the right lower limb.
And secondly, connecting the fourth graph, the fifth graph and the sixth graph based on the second connection mode to obtain a second polygon.
In some embodiments, based on the determined second connection manner, the fifth graph corresponding to the knee key point and the sixth graph corresponding to the ankle key point are connected, and the fifth graph corresponding to the knee key point and the fourth graph corresponding to the hip key point (or the waist key point) are connected to obtain the second polygon. Wherein, the second polygon comprises a polygon corresponding to the left lower limb and a polygon corresponding to the right lower limb.
In some embodiments, it is assumed that the fourth graph, the fifth graph and the sixth graph are circles with successively decreasing radii (the left lower limb is taken as an example here; the right lower limb is handled in the same way). Firstly, the fourth circle center of the fourth graph is connected with the fifth circle center of the fifth graph to obtain a third connecting line, and the fifth circle center of the fifth graph is connected with the sixth circle center of the sixth graph to obtain a fourth connecting line. Secondly, a fifth vertical line perpendicular to the third connecting line and passing through the fourth circle center, a sixth vertical line perpendicular to the third connecting line and passing through the fifth circle center, a seventh vertical line perpendicular to the fourth connecting line and passing through the fifth circle center, and an eighth vertical line perpendicular to the fourth connecting line and passing through the sixth circle center are determined respectively. Then, the two intersection points of the fifth vertical line and the fourth graph, the two intersection points of the sixth vertical line and the fifth graph, the two intersection points of the seventh vertical line and the fifth graph, and the two intersection points of the eighth vertical line and the sixth graph are obtained. Finally, the eight intersection points are connected according to the second connection mode to obtain the second polygon.
The eight intersection points may be connected according to the second connection mode as follows. First, the two intersection points of the fifth vertical line and the fourth graph are correspondingly connected with the two intersection points of the sixth vertical line and the fifth graph to obtain two fourth parallel lines, and the two intersection points of the seventh vertical line and the fifth graph are correspondingly connected with the two intersection points of the eighth vertical line and the sixth graph to obtain two fifth parallel lines; the two intersection points of the sixth vertical line and the fifth graph are correspondingly connected with the two intersection points of the seventh vertical line and the fifth graph to obtain two sixth parallel connecting lines. Then, the two intersection points of the fifth vertical line and the fourth graph are connected with a curve of a preset radian to obtain a third arc line, and the two intersection points of the eighth vertical line and the sixth graph are connected to obtain a fourth arc line. Finally, the second polygon is obtained based on the fourth parallel lines, the fifth parallel lines, the sixth parallel lines, the third arc line and the fourth arc line.
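For each pair of adjacent circles, the perpendicular-line construction reduces to offsetting each circle center by its own radius along the unit perpendicular of the center line; a numpy sketch with hypothetical circle data:

```python
import numpy as np

def splice_corners(c0, r0, c1, r1):
    """Intersections of each circle with the perpendicular to the center
    line through its own center - the four corners joining two circles."""
    c0, c1 = np.asarray(c0, float), np.asarray(c1, float)
    d = (c1 - c0) / np.linalg.norm(c1 - c0)  # unit vector along the center line
    perp = np.array([-d[1], d[0]])           # unit perpendicular
    return c0 + r0 * perp, c0 - r0 * perp, c1 - r1 * perp, c1 + r1 * perp

# hypothetical hip and knee circles of one leg
corners = splice_corners((0, 0), 2, (10, 0), 1)
```

Applying this to the hip-knee pair and to the knee-ankle pair yields the eight intersection points described above.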
And filling the second polygon to obtain the lower limb icon.
In some embodiments, the second polygon may be color-filled or rendered according to a preset color parameter to obtain the lower limb icon; alternatively, the inner area of the second polygon may be filled according to a preset pattern to obtain the lower limb icon. In this way, the determined lower limb icon can better match the lower limb area of the human body in the image to be detected.
In the icon generating method provided in the embodiment of the present application, in the case where the image to be detected comprises at least two continuous frames of images to be processed, the icon generating device may further perform the following steps A1 and A2:
and step A1, acquiring the action icon of the human body in each frame of image to be processed.
In some embodiments, through the implementation process of the above embodiments, the action icon of the human body in each frame of the image to be processed is acquired; the action icons of the human body in each two frames of images to be processed can be the same or different.
Step A2, generating the human body action animation of the human body based on the time sequence information of the at least two frames of images to be processed and the action icon of the human body in each frame of images to be processed.
In some embodiments, the motion icons of the human body detected in the time-sequence continuous images to be processed are sorted and combined according to the time sequence information of the images to be processed to generate the human body motion animation of the human body. In this way, a human body motion animation that matches the change in the motion posture of the human body can be generated based on the motion icon of the human body generated in the continuous multi-frame image.
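Steps A1 and A2 amount to ordering the per-frame action icons by their timestamps; a minimal sketch, where the frame data and labels are hypothetical:

```python
def build_animation(frames):
    """frames: iterable of (timestamp, action_icon) pairs, one per processed
    frame; returns the icons ordered by time, i.e. the animation sequence."""
    return [icon for _, icon in sorted(frames, key=lambda f: f[0])]

# hypothetical per-frame icons labelled by capture time
animation = build_animation([(0.2, "icon_b"), (0.1, "icon_a"), (0.3, "icon_c")])
```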
In some embodiments, the action icons of the human body may be determined, at preset frame intervals, for part of the images to be processed according to the attribute information of the human body key points as described above; for the images within each preset frame interval, the position changes of the human body key points may be predicted by a human body motion prediction algorithm so as to generate the action icons of the human body for those frames. The generated human body action icons are then combined and sorted according to the time sequence information corresponding to the partial images to be processed so as to generate the human body action animation of the human body.
In some embodiments, while determining the action icon of the human body in at least part of the images to be processed, the following steps can further be performed to obtain an action icon presenting richer and more specific information:
firstly, the motion type of the human body in the image to be processed is determined.
In some embodiments, a plurality of human body key points are determined by detecting human body key points of a human body in an image to be processed, posture information of the human body in the image to be processed is determined based on relative position relations among the human body key points, and then the action type of the human body is determined based on the posture information of the human body; wherein, the action types of the human body comprise: weight lifting, running, shooting, etc.
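A toy illustration of classifying the action type from relative key point positions; the rule, the key names, and the thresholds below are hypothetical (the patent does not fix a concrete classifier), and image y is assumed to grow downward:

```python
def action_type(keypoints):
    """Toy rule: both wrists above the crown suggests a weight-lifting pose;
    anything else is left unclassified here. Purely illustrative."""
    crown_y = keypoints["crown"][1]
    if (keypoints["left_wrist"][1] < crown_y
            and keypoints["right_wrist"][1] < crown_y):
        return "weight lifting"
    return "other"

# hypothetical (x, y) pixel positions of three key points
pose = {"crown": (50, 40), "left_wrist": (30, 20), "right_wrist": (70, 25)}
label = action_type(pose)  # -> "weight lifting"
```

A real system would use more key points and relations (e.g. knee angles for running) as the text describes.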
Secondly, marking the action icon of the human body in the image to be processed based on the action name matched with the action type to obtain a target action icon of the human body in the image to be processed.
In some embodiments, the action icon of the human body in the image to be processed may be marked with an action name matching the action type to obtain a target action icon whose picture content includes the action name. The action name can be represented by characters in any form, and can be marked above, below, to the left of, to the right of, or in the middle of the action icon of the human body.
In some embodiments, in the case that the action type of the human body is determined to be weight lifting, 'weight lifting' is marked in the action icon of the human body to obtain a target action icon whose picture content includes 'weight lifting'; in the case that the action type of the human body is determined to be running, 'running' may be marked in the action icon of the human body to obtain a target action icon whose picture content includes 'running'. In this way, the information expressed by the generated action icon can be enriched.
The icon generating method is described below with reference to a specific embodiment, but it should be noted that the specific embodiment is only for better describing the embodiments of the present application, and should not be construed as an inappropriate limitation to the embodiments of the present application.
In the related art, action icons are mainly produced with professional design software. However, professional software generalizes poorly and cannot be used directly by most users, i.e., the threshold is high; meanwhile, an icon designed with professional software has poor reusability and is not convenient enough: each action change needs to be redesigned, which costs a large amount of time and is not flexible. Therefore, a more flexible method for generating action icons is needed.
Based on this, an embodiment of the present application provides an icon generating method, which may generate a human body motion animation corresponding to human body pose information in consecutive video frames, and may specifically be implemented through the following processes:
Firstly, the whole human body is displayed in the camera picture; secondly, the human body in each video frame is recognized by a human body detection algorithm, and 14 human body key points of the human body are detected by a posture estimation algorithm. The 14 human body key points respectively comprise: the crown, the neck, the left and right shoulders, the left and right elbows, the left and right wrists, the left and right hips, the left and right knees, and the left and right ankles.
In the first step, for a currently input video frame, a human body detection frame and its position information in the video frame are detected by a human body detection algorithm; then, within the detected human body detection frame, the human body key points are detected by a human body key point detection method, and the position information of each human body key point is returned.
In the second step, the attribute information of the crown key point and the neck key point, such as their position information and the body part to which they belong, may be input into the minEnclosingCircle function of a related code library (OpenCV) to find the center point and radius of a circle enclosing the crown key point and the neck key point, and the head icon of the human body is generated through the circle function in OpenCV; then the upper limb icons of the human body are generated from the left and right shoulder key points, the left and right elbow key points and the left and right wrist key points, and the lower limb icons of the human body are generated from the left and right hip key points, the left and right knee key points and the left and right ankle key points. Fig. 5A is a scene diagram of determining an action icon of a human body according to an embodiment of the present application; fig. 5A shows two video frames with different postures of the human body to be detected. In 501, the two video frames with different human posture information are input into a relevant detection model to realize human body detection, that is, the human body detection frames are determined; then, human body key point recognition is performed on the human body detection frames, and the human body key point information corresponding to the two different human body postures is obtained, shown as black points in 502; corresponding human body action icons, i.e., 503, are then respectively generated based on the recognized human body key point information.
In the second step, based on the attribute information of the key points of the human body, an action icon of the human body is generated, and the method can be divided into the following four small steps:
firstly, determining the position information of the waist key point based on the position information of the key points of the left and right hips, namely determining the central point of the two key points of the left and right hips as the waist key point of the human body, and simultaneously determining the position information of the central point as the position information of the waist key point.
Secondly, the information corresponding to the crown key point and the neck key point (the position information and the body part to which each belongs) is input into the minEnclosingCircle function in OpenCV to output the center coordinate and the radius of a circle containing the two key points; the center coordinate of the circle is taken as the position of the head of the human body to be detected, and the head icon of the human body to be detected is drawn based on the circle function in OpenCV.
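For exactly two points, the minimum enclosing circle is simply the circle whose diameter is the segment between them, which is what minEnclosingCircle computes in this case; a dependency-free sketch with hypothetical key point coordinates:

```python
import math

def min_circle_two_points(p, q):
    """Smallest circle containing two points: centered at their midpoint,
    with radius half their distance (cf. cv2.minEnclosingCircle)."""
    center = ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    return center, math.dist(p, q) / 2.0

# hypothetical crown and neck key point positions
head_center, head_radius = min_circle_two_points((0, 0), (6, 8))  # -> (3.0, 4.0), 5.0
```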
Then, circles of different sizes are drawn, using the circle function in OpenCV, at the positions corresponding to the left and right shoulder key points, the left and right elbow key points and the left and right wrist key points, and at the positions corresponding to the waist key point, the left and right knee key points and the left and right ankle key points (the size of each circle is determined by the body part to which the key point belongs). Fig. 5B is a scene schematic diagram of determining the graphs corresponding to the human body key points provided in the embodiment of the present application; corresponding to fig. 5A, it shows the two video frames with different postures of the human body to be detected. Reference numeral 504 denotes a schematic diagram of the identified human body key points, 505 denotes a schematic diagram of generating the circles corresponding to the head and upper limb key points (the left and right shoulder, left and right elbow, and left and right wrist key points), and 506 denotes a schematic diagram of adding, on the basis of 505, the circles corresponding to the waist, left and right knee, and left and right ankle key points.
Finally, the corresponding splicing points on the graphs of the upper limb key points (the left and right shoulder key points, the left and right elbow key points, and the left and right wrist key points), the lower limb key points (the left and right knee key points and the left and right ankle key points) and the waist key point are obtained through the following formula (1). The splicing points obtained for each limb area are connected, and the area enclosed by the splicing points is filled with the fillConvexPoly function in OpenCV to generate the upper limb icons and the lower limb icons of the action icon.
x1 = x0 + r·cos(rad), y1 = y0 + r·sin(rad)  (1)
Wherein (x0, y0) is the coordinate of the circle center corresponding to a key point of the upper limbs, the lower limbs or the waist, r is the radius of the corresponding circle, rad is the radian parameter corresponding to the splicing point, and (x1, y1) is the coordinate of the corresponding splicing point. Fig. 6A is a scene diagram of determining the upper limb icons and the lower limb icons provided in the embodiment of the present application; 601 shows the graphs of the determined key points, 602 shows the splicing points corresponding to the graph of each upper limb key point and each lower limb key point, and 603 shows the upper limb icons and lower limb icons determined based on the splicing points.
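Assuming formula (1) is the standard point-on-circle parameterization implied by the symbol descriptions (center, radius, angle in radians), it can be sketched directly; the shoulder-circle numbers below are hypothetical:

```python
import math

def splice_point(x0, y0, r, rad):
    """Formula (1): the splicing point on the circle of center (x0, y0)
    and radius r at angle rad (radians)."""
    return x0 + r * math.cos(rad), y0 + r * math.sin(rad)

# splicing point on a hypothetical shoulder circle, at angle 0
pt = splice_point(100.0, 200.0, 6.0, 0.0)  # -> (106.0, 200.0)
```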
Meanwhile, as shown in fig. 6B, it is a schematic diagram of another human body action icon provided in the embodiment of the present application; and the action icon of the human body can also present an action type name matched with the action type of the human body. As shown in fig. 6B, the gesture of the human body in the image to be detected may be recognized first to determine the motion type of the human body: softball sports; second, action names associated with softball sports may be used: softball (including Chinese description and English description), marking the action icon of the human body recognized in the image to be detected to obtain a target action icon with a character identifier of an action type; wherein, the mark of the character can be set according to the requirement.
In addition, for the image corresponding to the target action icon, different display modes can be adopted, such as: different color parameters or different filling patterns are displayed on the limb area and other areas of the human body. As shown in fig. 6B, a minimum circle capable of including the limb area of the human body, and the limb area of the human body in the minimum circle are displayed in two different display manners.
According to the icon generation method provided by the embodiment of the application, firstly, an image to be detected whose picture content includes a human body is obtained; secondly, the human body key points of the human body in the image to be detected are determined; then, the attribute information of the human body key points is determined; and finally, the action icon of the human body is generated based on the attribute information. In this way, the action icon corresponding to the human body in the image to be detected is determined based on the attribute information of the human body key points, and an action icon matching the human body in the image to be detected can be conveniently generated in real time. Action icon generation can be realized for image data whose picture content includes a human body, and human body motion in a video stream can be detected to realize real-time motion capture. In addition, the icon background and the virtual character color can be specified according to related requirements, which increases the richness of the determined action icon. Meanwhile, by adopting a human body posture estimation method, the generation of the human body action icon is realized with simple OpenCV functions, which reduces the excessive dependence on professional designers and design software, and enables convenient motion capture and the generation of dynamic icons.
Based on this, the icon generation method provided by the embodiment of the application can be used for making the sports icon; the method can also be applied to the special effect of the camera shooting function of the electronic equipment to realize the generation of the real-time special effect; the icon generation method provided by the embodiment of the application can be used for dance and/or sports motion teaching, namely motion correction can be performed according to the icon style, so that the relevant dance motions or sports motions are more consistent with the icon display content.
Based on the foregoing embodiments, an icon generating apparatus is provided in an embodiment of the present application, and fig. 7 is a schematic structural diagram of the icon generating apparatus provided in the embodiment of the present application, as shown in fig. 7, the icon generating apparatus 700 includes:
an obtaining module 710, configured to obtain a to-be-detected image with picture content including a human body;
a first determining module 720, configured to determine human body key points of the human body in the image to be detected;
a second determining module 730, configured to determine attribute information of the human body key point;
a generating module 740, configured to generate an action icon of the human body based on the attribute information.
In some embodiments, the first determining module 720 is further configured to determine a human body detection frame of the human body in the image to be detected; identifying a plurality of human body key points in the human body detection frame; the second determining module 730 is further configured to determine, in the human body detection frame, position information of the plurality of human body key points.
In some embodiments, the attribute information includes: the body part to which the human body key point belongs and the position information of the human body key point; the generating module 740 includes: a graph determining submodule, configured to determine graphs matched with the human body key points based on the body parts to which the human body key points belong; and an icon generating submodule, configured to generate the action icon of the human body based on the graphs matched with the human body key points and the position information of the human body key points.
In some embodiments, the graph determining sub-module is further configured to determine a graph parameter matching the body part to which the human body key point belongs; and determining a graph matched with the key points of the human body based on the graph parameters.
In some embodiments, in a case where the human body keypoints include head keypoints, upper limb keypoints, and lower limb keypoints, the icon generation sub-module includes: a head icon generating subunit, configured to generate a head icon of the human body based on the graph matched with the head key point and the position information of the head key point; an upper limb icon generating subunit, configured to generate an upper limb icon of the human body based on the graph matched with the upper limb key point and the position information of the upper limb key point; the lower limb icon generating subunit is used for generating a lower limb icon of the human body based on the graph matched with the lower limb key point and the position information of the lower limb key point; and the action icon generating subunit is used for generating an action icon of the human body based on the head icon, the upper limb icon and the lower limb icon.
In some embodiments, in the case that the head key points include a crown key point and a neck key point, the head icon generating subunit is further configured to determine an identification graph based on the position information of the crown key point, the position information of the neck key point, and the graph matching the crown key point; and fill the identification graph to obtain the head icon.
In some embodiments, in a case that the upper limb key points include a shoulder key point, an elbow key point, and a wrist key point, the upper limb icon generating subunit is further configured to determine a first graphic based on the position information of the shoulder key point and a graphic matching the shoulder key point; determining a second graph based on the position information of the elbow key point and the graph matched with the elbow key point; determining a third graph based on the position information of the wrist key point and the graph matched with the wrist key point; and splicing the first graph, the second graph and the third graph to generate the upper limb icon.
In some embodiments, the upper limb icon generating subunit is further configured to determine the first connection mode according to a human upper limb structure model; connecting the first graph, the second graph and the third graph based on the first connection mode to obtain a first polygon; and filling the first polygon to obtain the upper limb icon.
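The "connect then fill" step can be sketched by thickening the shoulder→elbow→wrist polyline into a closed polygon: offset each key point perpendicular to the local limb direction by half the limb width. The offset rule is a simplifying assumption; the patent only says the graphics are connected according to the upper limb structure model:

```python
import math

def limb_polygon(points, half_width):
    """Sketch of connecting joint graphics into one polygon: offset the
    shoulder->elbow->wrist polyline perpendicularly by half_width on each
    side, then close the outline (a simplifying assumption)."""
    left, right = [], []
    for i, (x, y) in enumerate(points):
        # local direction estimated from the previous and next points
        x0, y0 = points[max(i - 1, 0)]
        x1, y1 = points[min(i + 1, len(points) - 1)]
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / norm, dx / norm        # unit normal to the limb
        left.append((x + nx * half_width, y + ny * half_width))
        right.append((x - nx * half_width, y - ny * half_width))
    return left + right[::-1]                 # closed polygon outline
```

Filling the returned polygon (e.g. with a scan-line or `fillPoly`-style routine) would then produce the upper limb icon.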
In some embodiments, in the case that the lower limb key points include a hip key point, a knee key point, and an ankle key point, the lower limb icon generating subunit is further configured to determine a fourth graph based on the position information of the hip key point and a graph matching the hip key point; determine a fifth graph based on the position information of the knee key point and the graph matched with the knee key point; determine a sixth graph based on the position information of the ankle key point and the graph matched with the ankle key point; and splice the fourth graph, the fifth graph and the sixth graph to generate the lower limb icon.
In some embodiments, in a case where the hip key points include a left hip key point and a right hip key point, the lower limb icon generating subunit is further configured to determine the midpoint of the left hip key point and the right hip key point based on the position information of the left hip key point and the position information of the right hip key point; determine the midpoint as a waist key point of the human body; determine the position information of the midpoint as the position information of the waist key point; determine a graph matched with the waist key point based on the graph matched with the left hip key point and the graph matched with the right hip key point; and determine the fourth graph based on the position information of the waist key point and the graph matched with the waist key point.
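The derivation of the waist key point described in the embodiment above is a plain midpoint computation:

```python
def waist_keypoint(left_hip, right_hip):
    """Derive the waist key point as the midpoint of the left and right
    hip key points, as the embodiment describes."""
    return ((left_hip[0] + right_hip[0]) / 2,
            (left_hip[1] + right_hip[1]) / 2)
```

The fourth graph is then placed at this midpoint using the graph matched with the waist key point.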
In some embodiments, the lower limb icon generating subunit is further configured to determine a second connection mode according to the human lower limb structure model; connecting the fourth graph, the fifth graph and the sixth graph based on the second connection mode to obtain a second polygon; and filling the second polygon to obtain the lower limb icon.
In some embodiments, the action icon generating subunit is further configured to render the head icon, the upper limb icon, and the lower limb icon in the image to be detected by using a first color parameter, so as to obtain an intermediate image; and render the background area of the intermediate image by adopting a second color parameter to obtain an action icon of the human body; wherein the background region is the region of the intermediate image other than the regions occupied by the head icon, the upper limb icon, and the lower limb icon.
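The two-pass rendering above (first colour parameter for the body icons, second for everything else) can be sketched on a plain pixel grid. The colour tuples and the pixel-set representation of the icon regions are illustrative assumptions:

```python
def render_action_icon(width, height, body_pixels,
                       fg=(255, 255, 255), bg=(0, 0, 0)):
    """Minimal sketch of the two-colour rendering: pixels covered by the
    head/limb icons get the first colour parameter (fg), the remaining
    background region gets the second (bg). body_pixels is an assumed
    set of (x, y) coordinates; colours are illustrative RGB tuples."""
    return [[fg if (x, y) in body_pixels else bg
             for x in range(width)]
            for y in range(height)]
```

The result is a silhouette-style action icon: a uniformly coloured figure on a uniformly coloured background.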
In some embodiments, the obtaining module is further configured to obtain an action icon of a human body in each frame of image to be processed, when the image to be detected is at least two continuous frames of images to be processed; and the generating module is further used for generating the human body action animation of the human body based on the time sequence information of the at least two frames of images to be processed and the action icon of the human body in each frame of image to be processed.
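Assembling the action animation from per-frame icons amounts to ordering the icons by their timing information. The `(timestamp, icon)` pair representation below is an assumed data shape, not one prescribed by the patent:

```python
def build_action_animation(frames):
    """Sketch of assembling the human-body action animation: sort the
    per-frame action icons by their timing information and return them
    as an ordered sequence. frames is an assumed list of
    (timestamp, action_icon) pairs."""
    return [icon for _, icon in sorted(frames, key=lambda f: f[0])]
```

Playing the returned sequence back at the original frame rate reproduces the human body's motion as an animation of icons.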
In some embodiments, the second determining module 730 is further configured to determine a motion type of a human body in the image to be processed; and marking the action icon of the human body in the image to be processed based on the action name matched with the action type to obtain a target action icon of the human body in the image to be processed.
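Marking the icon with the action name matched to the detected action type can be sketched as a dictionary lookup plus labelling step. The `ACTION_NAMES` mapping and the returned label structure are hypothetical, introduced only for illustration:

```python
# Hypothetical mapping from detected action type to display name;
# the patent does not enumerate the supported action types.
ACTION_NAMES = {"jump": "Jumping", "run": "Running"}

def label_action_icon(action_icon, action_type):
    """Sketch of marking the action icon with the action name matched
    to the action type, yielding the target action icon."""
    return {"icon": action_icon,
            "label": ACTION_NAMES.get(action_type, "Unknown")}
```

The resulting target action icon pairs the rendered silhouette with a human-readable action name.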
It should be noted that the above description of the apparatus embodiments is similar to the above description of the method embodiments, and the apparatus embodiments have beneficial effects similar to those of the method embodiments. For technical details not disclosed in the apparatus embodiments of the present application, reference is made to the description of the method embodiments of the present application.
It should be noted that, in the embodiments of the present application, if the icon generation method is implemented in the form of a software functional module and is sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may essentially be embodied, or the portions thereof contributing to the prior art may be embodied, in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a terminal, a server, etc.) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a hard disk drive, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application further provides a computer program product, where the computer program product includes computer-executable instructions, and after the computer-executable instructions are executed, the icon generation method provided in the embodiment of the present application can be implemented.
Correspondingly, an embodiment of the present application provides a computer device. Fig. 8 is a schematic structural diagram of the computer device in the embodiment of the present application; as shown in fig. 8, the device 800 includes: a processor 801, at least one communication bus, a communication interface 802, at least one external communication interface, and a memory 803, wherein the communication interface 802 is configured to enable connection and communication between these components. The communication interface 802 may include a display screen, and the external communication interface may include a standard wired interface and a wireless interface. The processor 801 is configured to execute an image processing program in the memory to implement the icon generation method provided in the above embodiments.
Accordingly, an embodiment of the present application further provides a computer storage medium, where computer-executable instructions are stored on the computer storage medium, and when the computer-executable instructions are executed by a processor, the icon generation method provided in the foregoing embodiment is implemented.
The descriptions of the embodiments of the icon generating apparatus, the computer device and the storage medium are similar to the descriptions of the above method embodiments, and have technical descriptions and beneficial effects similar to those of the corresponding method embodiments, which are not repeated here for brevity. For technical details not disclosed in the embodiments of the icon generating apparatus, the computer device and the storage medium of the present application, reference is made to the description of the method embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

It should be understood that, in various embodiments of the present application, the sequence numbers of the above-mentioned processes do not imply an order of execution; the order of execution of the processes should be determined by their functions and inherent logic, and should not limit the implementation of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the embodiments of the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may separately serve as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in the form of hardware, or in the form of hardware plus a software functional unit. Those of ordinary skill in the art will understand that all or part of the steps for realizing the method embodiments can be performed by hardware related to program instructions; the program can be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media that can store program code, such as a removable memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated unit in the embodiment of the present application may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a stand-alone product. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof that contribute to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code. The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present application, and all the changes or substitutions should be covered by the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. An icon generation method, characterized in that the method comprises:
acquiring an image to be detected, the picture content of which includes a human body;
determining human body key points of the human body in the image to be detected;
determining attribute information of the human body key points;
and generating an action icon of the human body based on the attribute information.
2. The method according to claim 1, wherein the determining human body key points of the human body in the image to be detected comprises:
determining a human body detection frame of the human body in the image to be detected;
identifying a plurality of human body key points in the human body detection frame;
the determining of the attribute information of the human body key points includes:
and determining the position information of the plurality of human key points in the human detection frame.
3. The method of claim 1 or 2, wherein the attribute information comprises the body part to which the human body key point belongs and position information of the human body key point, and the generating of the action icon of the human body based on the attribute information comprises:
determining a graph matched with the human body key points based on the body parts of the human body key points;
and generating the action icon of the human body based on the graph matched with the human body key point and the position information of the human body key point.
4. The method of claim 3, wherein the determining a graph matching the human keypoints based on the body part to which the human keypoints belong comprises:
determining the graphic parameters matched with the body part to which the key points of the human body belong;
and determining a graph matched with the key points of the human body based on the graph parameters.
5. The method according to claim 3 or 4, wherein in a case where the human body key points include a head key point, an upper limb key point, and a lower limb key point, the generating an action icon of the human body based on the graphics matched with the human body key points and the position information of the human body key points includes:
generating a head icon of the human body based on the graph matched with the head key point and the position information of the head key point;
generating an upper limb icon of the human body based on the graph matched with the upper limb key point and the position information of the upper limb key point;
generating a lower limb icon of the human body based on the graph matched with the lower limb key points and the position information of the lower limb key points;
and generating an action icon of the human body based on the head icon, the upper limb icon and the lower limb icon.
6. The method according to claim 5, wherein in a case where the head key points include a top-of-head key point and a neck key point, the generating the head icon of the human body based on the graphic matched with the head key points and the position information of the head key points comprises:
determining an identification graph based on the position information of the top-of-head key point, the position information of the neck key point and the graph matched with the head key points;
and filling the identification graph to obtain the head icon.
7. The method of claim 5 or 6, wherein in a case where the upper extremity keypoints include shoulder keypoints, elbow keypoints, and wrist keypoints, the generating an upper extremity icon of the human body based on the graphics matching the upper extremity keypoints and the position information of the upper extremity keypoints comprises:
determining a first graph based on the position information of the shoulder key points and the graph matched with the shoulder key points;
determining a second graph based on the position information of the elbow key point and the graph matched with the elbow key point;
determining a third graph based on the position information of the wrist key point and the graph matched with the wrist key point;
and splicing the first graph, the second graph and the third graph to generate the upper limb icon.
8. The method of claim 7, wherein said stitching said first graphic, said second graphic, and said third graphic to generate said upper extremity icon comprises:
determining a first connection mode according to a human upper limb structure model;
connecting the first graph, the second graph and the third graph based on the first connection mode to obtain a first polygon;
and filling the first polygon to obtain the upper limb icon.
9. The method of any one of claims 5 to 8, wherein in a case where the lower limb key points include a hip key point, a knee key point, and an ankle key point, the generating the lower limb icon of the human body based on the graphic matched with the lower limb key point and the position information of the lower limb key point comprises:
determining a fourth graph based on the position information of the hip key points and the graph matched with the hip key points;
determining a fifth graph based on the position information of the knee key points and the graph matched with the knee key points;
determining a sixth graph based on the position information of the ankle key point and the graph matched with the ankle key point;
and splicing the fourth graph, the fifth graph and the sixth graph to generate the lower limb icon.
10. The method of claim 9, wherein in the case that the hip key points include a left hip key point and a right hip key point, the determining a fourth graph based on the position information of the hip key points and the graph matching the hip key points comprises:
determining a midpoint between the left hip key point and the right hip key point based on the position information of the left hip key point and the position information of the right hip key point;
determining the midpoint as a waist key point of the human body;
determining the position information of the midpoint as the position information of the waist key point;
determining a graph matched with the waist key point based on the graph matched with the left hip key point and the graph matched with the right hip key point;
and determining the fourth graph based on the position information of the waist key points and the graph matched with the waist key points.
11. The method of claim 9 or 10, wherein the stitching the fourth graphic, the fifth graphic, and the sixth graphic to generate the lower limb icon comprises:
determining a second connection mode according to the human body lower limb structure model;
connecting the fourth graph, the fifth graph and the sixth graph based on the second connection mode to obtain a second polygon;
and filling the second polygon to obtain the lower limb icon.
12. The method of any of claims 6 to 11, wherein said generating an action icon for the human body based on the head icon, the upper limb icon, and the lower limb icon comprises:
rendering the head icon, the upper limb icon and the lower limb icon in the image to be detected by adopting a first color parameter to obtain an intermediate image;
rendering the background area of the intermediate image by adopting a second color parameter to obtain an action icon of the human body; wherein the background region is the region of the intermediate image other than the regions occupied by the head icon, the upper limb icon and the lower limb icon.
13. The method according to any one of claims 1 to 12, wherein in case the image to be detected is at least two consecutive frames of images to be processed, the method further comprises:
acquiring a motion icon of a human body in each frame of image to be processed;
and generating the human body action animation of the human body based on the time sequence information of the at least two frames of images to be processed and the action icon of the human body in each frame of image to be processed.
14. The method of any one of claims 1-13, wherein the method further comprises:
determining the action type of a human body in the image to be processed;
and marking the action icon of the human body in the image to be processed based on the action name matched with the action type to obtain a target action icon of the human body in the image to be processed.
15. An icon generating apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring an image to be detected, the picture content of which includes a human body;
the first determining module is used for determining human body key points of the human body in the image to be detected;
the second determining module is used for determining the attribute information of the human key points;
and the generating module is used for generating the action icon of the human body based on the attribute information.
16. A computer device comprising a memory having computer-executable instructions stored thereon and a processor capable of implementing the icon generation method of any one of claims 1 to 14 when the processor executes the computer-executable instructions on the memory.
17. A computer storage medium having computer-executable instructions stored thereon that, when executed, are capable of implementing the icon generation method of any one of claims 1 to 14.
CN202111011849.5A 2021-08-31 2021-08-31 Icon generation method, device, equipment and storage medium Withdrawn CN113705483A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111011849.5A CN113705483A (en) 2021-08-31 2021-08-31 Icon generation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111011849.5A CN113705483A (en) 2021-08-31 2021-08-31 Icon generation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113705483A true CN113705483A (en) 2021-11-26

Family

ID=78657939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111011849.5A Withdrawn CN113705483A (en) 2021-08-31 2021-08-31 Icon generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113705483A (en)

Similar Documents

Publication Publication Date Title
US11468612B2 (en) Controlling display of a model based on captured images and determined information
CN108665492B (en) Dance teaching data processing method and system based on virtual human
KR101519775B1 (en) Method and apparatus for generating animation based on object motion
JP2019510297A (en) Virtual try-on to the user's true human body model
CN107018336A (en) The method and apparatus of image procossing and the method and apparatus of Video processing
CN106355153A (en) Virtual object display method, device and system based on augmented reality
CN110457414A (en) Offline map processing, virtual objects display methods, device, medium and equipment
JP7126812B2 (en) Detection device, detection system, image processing device, detection method, image processing program, image display method, and image display system
CN104376309B (en) A kind of gesture motion basic-element model structural method based on gesture identification
CN110717391A (en) Height measuring method, system, device and medium based on video image
KR20180118669A (en) Intelligent chat based on digital communication network
WO2020104990A1 (en) Virtually trying cloths & accessories on body model
CN113761966A (en) Motion adaptive synchronization method and electronic equipment
CN112348942A (en) Body-building interaction method and system
CN110314344A (en) Move based reminding method, apparatus and system
Goncalves et al. Reach out and touch space (motion learning)
CN111639612A (en) Posture correction method and device, electronic equipment and storage medium
CN113705483A (en) Icon generation method, device, equipment and storage medium
CN112837339B (en) Track drawing method and device based on motion capture technology
CN116266408A (en) Body type estimating method, body type estimating device, storage medium and electronic equipment
CN114860060A (en) Method for hand mapping mouse pointer, electronic device and readable medium thereof
CN115068940A (en) Control method of virtual object in virtual scene, computer device and storage medium
KR20200052812A (en) Activity character creating method in virtual environment
Peng Design and Application of Cultural Digital Display System based on Interactive Technology
CN113593016A (en) Method and device for generating sticker

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20211126)