CN108227931A - Method, device, system, program and storage medium for controlling a virtual character - Google Patents

Method, device, system, program and storage medium for controlling a virtual character

Info

Publication number
CN108227931A
CN108227931A (application CN201810064291.9A)
Authority
CN
China
Prior art keywords
human body
device
information
human body key point
multiple human body key points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810064291.9A
Other languages
Chinese (zh)
Inventor
吴再
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201810064291.9A priority Critical patent/CN108227931A/en
Publication of CN108227931A publication Critical patent/CN108227931A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01: Indexing scheme relating to G06F3/01
    • G06F 2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure provide a method, device, system, computer program and storage medium for controlling a virtual character. The method includes: a first device obtains a captured current image; the first device performs feature point extraction on the current image to obtain information on multiple human body key points of a person contained in the current image, where the information on the multiple human body key points is used to control the pose of a virtual character in a current virtual scene image. Embodiments of the present disclosure enable real-time control of a virtual character.

Description

Method, device, system, program and storage medium for controlling a virtual character
Technical field
The present disclosure relates to the field of computer vision, and in particular to a method, device, system, computer program and storage medium for controlling a virtual character.
Background
A virtual character is a three-dimensional model of a real human body simulated by digital and computer technology; it has the geometric properties and character features of a real person. Motion control of a virtual character refers to controlling the motion state of a three-dimensional virtual character based on virtual reality technology. In recent years, with the wide application of virtual characters in fields such as entertainment, tourism, culture, medical care, the military and aviation, the motion control of virtual characters has become a research direction of great interest in virtual reality technology.
Summary
Embodiments of the present disclosure provide a technical solution for controlling a virtual character.
According to one aspect of the embodiments of the present disclosure, a method for controlling a virtual character is provided, including:
a first device obtains a captured current image;
the first device performs feature point extraction on the current image to obtain information on multiple human body key points of a person contained in the current image, where the information on the multiple human body key points is used to control the pose of a virtual character in a current virtual scene image.
Optionally, in any of the above method embodiments of the present invention, the information on the multiple human body key points includes: position information on each of the multiple human body key points and/or angle information on the multiple human body key points.
Optionally, in any of the above method embodiments of the present invention, the first device performing feature point extraction on the current image to obtain the information on the multiple human body key points of the person contained in the current image includes:
performing feature point extraction on the current image to obtain position information on each of the multiple human body key points of the person contained in the current image;
determining angle information on the multiple human body key points according to the position information on each of the multiple human body key points.
Optionally, in any of the above method embodiments of the present invention, performing feature point extraction on the current image to obtain the position information on each of the multiple human body key points of the person contained in the current image includes:
performing feature point extraction on the current image using a neural network to obtain the position information on each of the multiple human body key points of the person contained in the current image.
Optionally, in any of the above method embodiments of the present invention, determining the angle information on the multiple human body key points according to the position information on each of the multiple human body key points includes:
determining a first vector of a first key point relative to a second key point according to the position information on the adjacent first and second key points among the multiple human body key points;
determining a second vector of a third key point relative to the second key point according to the position information on the adjacent second and third key points among the multiple human body key points;
determining the angle between the first vector and the second vector according to the first vector and the second vector.
Optionally, in any of the above method embodiments of the present invention, the first device obtaining the captured current image includes:
the first device obtaining a current image captured by a remote camera; or
the first device obtaining a current image captured by a local camera.
Optionally, in any of the above method embodiments of the present invention, the method further includes:
the first device controlling the pose of the virtual character in the current virtual scene image according to the information on the multiple human body key points.
Optionally, in any of the above method embodiments of the present invention, the first device controlling the pose of the virtual character in the current virtual scene image according to the information on the multiple human body key points includes:
obtaining, according to the information on the multiple human body key points, target position information of multiple virtual key points of the virtual character in the current virtual scene image;
rendering the current virtual scene image according to the target position information of the multiple virtual key points in the virtual scene image.
Optionally, in any of the above method embodiments of the present invention, the method further includes:
the first device sending the information on the multiple human body key points to a second device, the information on the multiple human body key points being used by the second device to control the pose of the virtual character in the current virtual scene image.
Optionally, in any of the above method embodiments of the present invention, the first device sending the information on the multiple human body key points to the second device includes:
the first device sending the information on the multiple human body key points to the second device over a persistent connection with the second device.
Optionally, in any of the above method embodiments of the present invention, before the first device sends the information on the multiple human body key points to the second device, the method further includes:
the first device encoding the information on the multiple human body key points to obtain encoded information on the multiple human body key points;
the first device sending the information on the multiple human body key points to the second device includes:
the first device sending the encoded information on the multiple human body key points to the second device.
According to another aspect of the embodiments of the present disclosure, a method for controlling a virtual character is provided, including:
a second device receives information on multiple human body key points sent by a first device;
the second device controls the pose of a virtual character in a current virtual scene image according to the information on the multiple human body key points.
Optionally, in any of the above method embodiments of the present invention, the information on the multiple human body key points includes: position information on each of the multiple human body key points and/or angle information on the multiple human body key points.
Optionally, in any of the above method embodiments of the present invention, the second device controlling the pose of the virtual character in the current virtual scene image according to the information on the multiple human body key points includes:
obtaining, according to the information on the multiple human body key points, target position information of multiple virtual key points of the virtual character in the current virtual scene image;
rendering the current virtual scene image according to the target position information of the multiple virtual key points in the virtual scene image.
Optionally, in any of the above method embodiments of the present invention, obtaining, according to the information on the multiple human body key points, the target position information of the multiple virtual key points of the virtual character in the current virtual scene image includes:
determining the target position information of the multiple virtual key points of the virtual character in the current virtual scene image according to the angle information on the multiple human body key points.
Optionally, in any of the above method embodiments of the present invention, determining the target position information of the multiple virtual key points of the virtual character in the current virtual scene image according to the angle information on the multiple human body key points includes:
obtaining target angle information on the multiple virtual key points of the virtual character according to the angle information on the multiple human body key points;
determining the target position information of the multiple virtual key points of the virtual character in the current virtual scene image according to the target angle information on the multiple virtual key points.
Optionally, in any of the above method embodiments of the present invention, the target angle information on the multiple virtual key points includes at least one virtual vector angle of the multiple virtual key points;
obtaining the target angle information on the multiple virtual key points of the virtual character according to the angle information on the multiple human body key points includes:
determining, according to key point information corresponding to each vector angle of the multiple human body key points, a correspondence between at least one vector angle and the at least one virtual vector angle of the multiple virtual key points;
setting the target value of each virtual vector angle in the at least one virtual vector angle to the value of the corresponding vector angle.
Optionally, in any of the above method embodiments of the present invention, the second device receiving the information on the multiple human body key points sent by the first device includes:
the second device receiving, over a persistent connection with the first device, the information on the multiple human body key points sent by the first device.
Optionally, in any of the above method embodiments of the present invention, before the second device receives the information on the multiple human body key points sent by the first device, the method further includes:
the second device obtaining a current image captured by a local camera;
the second device sending the current image to the first device, where the information on the multiple human body key points is obtained by the first device performing feature point extraction on the current image.
Optionally, in any of the above method embodiments of the present invention, the second device receiving the information on the multiple human body key points sent by the first device includes: the second device receiving encoded information on the multiple human body key points sent by the first device;
after the second device receives the information on the multiple human body key points sent by the first device, the method further includes: the second device decoding the encoded information on the multiple human body key points to obtain the information on the multiple human body key points.
According to yet another aspect of the embodiments of the present disclosure, a device for controlling a virtual character is provided, including:
a receiving unit, configured to obtain a captured current image;
a processing unit, configured to perform feature point extraction on the current image to obtain information on multiple human body key points of a person contained in the current image, where the information on the multiple human body key points is used to control the pose of a virtual character in a current virtual scene image.
Optionally, in any of the above device embodiments of the present invention, the information on the multiple human body key points includes: position information on each of the multiple human body key points and/or angle information on the multiple human body key points.
Optionally, in any of the above device embodiments of the present invention, the processing unit includes:
a feature extraction module, configured to perform feature point extraction on the current image to obtain position information on each of the multiple human body key points of the person contained in the current image;
an angle determination module, configured to determine angle information on the multiple human body key points according to the position information on each of the multiple human body key points.
Optionally, in any of the above device embodiments of the present invention, the feature extraction module is specifically configured to perform feature point extraction on the current image using a neural network to obtain the position information on each of the multiple human body key points of the person contained in the current image.
Optionally, in any of the above device embodiments of the present invention, the angle determination module is specifically configured to:
determine a first vector of a first key point relative to a second key point according to the position information on the adjacent first and second key points among the multiple human body key points;
determine a second vector of a third key point relative to the second key point according to the position information on the adjacent second and third key points among the multiple human body key points;
determine the angle between the first vector and the second vector according to the first vector and the second vector.
Optionally, in any of the above device embodiments of the present invention, the receiving unit is specifically configured to obtain the current image captured by a remote camera; or
to obtain the current image captured by a local camera.
Optionally, in any of the above device embodiments of the present invention, the device further includes:
an execution unit, configured to control the pose of the virtual character in the current virtual scene image according to the information on the multiple human body key points.
Optionally, in any of the above device embodiments of the present invention, the execution unit includes:
a position determination module, configured to obtain, according to the information on the multiple human body key points, target position information of multiple virtual key points of the virtual character in the current virtual scene image;
an image rendering module, configured to render the current virtual scene image according to the target position information of the multiple virtual key points in the virtual scene image.
Optionally, in any of the above device embodiments of the present invention, the device further includes:
a sending unit, configured to send the information on the multiple human body key points to a second device, the information on the multiple human body key points being used by the second device to control the pose of the virtual character in the current virtual scene image.
Optionally, in any of the above device embodiments of the present invention, the sending unit is specifically configured to send the information on the multiple human body key points to the second device over a persistent connection with the second device.
Optionally, in any of the above device embodiments of the present invention, the device further includes:
an encoding unit, configured to encode the information on the multiple human body key points to obtain encoded information on the multiple human body key points;
the sending unit is specifically configured to send the encoded information on the multiple human body key points to the second device.
According to still another aspect of the embodiments of the present disclosure, a device for controlling a virtual character is provided, including:
a receiving unit, configured to receive information on multiple human body key points sent by a first device;
an execution unit, configured to control the pose of a virtual character in a current virtual scene image according to the information on the multiple human body key points.
Optionally, in any of the above device embodiments of the present invention, the information on the multiple human body key points includes: position information on each of the multiple human body key points and/or angle information on the multiple human body key points.
Optionally, in any of the above device embodiments of the present invention, the execution unit includes:
a position determination module, configured to obtain, according to the information on the multiple human body key points, target position information of multiple virtual key points of the virtual character in the current virtual scene image;
an image rendering module, configured to render the current virtual scene image according to the target position information of the multiple virtual key points in the virtual scene image.
Optionally, in any of the above device embodiments of the present invention, the position determination module is specifically configured to determine the target position information of the multiple virtual key points of the virtual character in the current virtual scene image according to the angle information on the multiple human body key points.
Optionally, in any of the above device embodiments of the present invention, the position determination module is specifically configured to:
obtain target angle information on the multiple virtual key points of the virtual character according to the angle information on the multiple human body key points;
determine the target position information of the multiple virtual key points of the virtual character in the current virtual scene image according to the target angle information on the multiple virtual key points.
Optionally, in any of the above device embodiments of the present invention, the target angle information on the multiple virtual key points includes at least one virtual vector angle of the multiple virtual key points;
the position determination module is specifically configured to:
determine, according to key point information corresponding to each vector angle of the multiple human body key points, a correspondence between at least one vector angle and the at least one virtual vector angle of the multiple virtual key points;
set the target value of each virtual vector angle in the at least one virtual vector angle to the value of the corresponding vector angle.
Optionally, in any of the above device embodiments of the present invention, the receiving unit is specifically configured to receive, over a persistent connection with the first device, the information on the multiple human body key points sent by the first device.
Optionally, in any of the above device embodiments of the present invention, the receiving unit is further configured to obtain a current image captured by a local camera;
the device further includes:
a sending unit, configured to send the current image to the first device, where the information on the multiple human body key points is obtained by the first device performing feature point extraction on the current image.
Optionally, in any of the above device embodiments of the present invention, the receiving unit is specifically configured to receive encoded information on the multiple human body key points sent by the first device;
the device further includes:
a decoding unit, configured to decode the encoded information on the multiple human body key points to obtain the information on the multiple human body key points.
According to a further aspect of the embodiments of the present disclosure, a system for controlling a virtual character is provided, including the first device for controlling a virtual character according to any of the above embodiments and the second device for controlling a virtual character according to any of the above embodiments.
According to a further aspect of the embodiments of the present disclosure, a computer program is provided, including computer-readable code, where when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the steps of the method according to any of the above embodiments.
According to a further aspect of the embodiments of the present disclosure, a computer storage medium is provided for storing computer-readable instructions, where the instructions, when executed, perform the operations of the method according to any of the above embodiments.
Based on the method, device, system, computer program and storage medium for controlling a virtual character provided by the above embodiments of the present disclosure, a captured current image is obtained, feature point extraction is performed on the current image to obtain information on multiple human body key points of the person contained in the current image, and the information on the multiple human body key points is used to control the pose of a virtual character in a current virtual scene image. In this way, the actions of the virtual character in the virtual scene can be controlled according to the actions of a person in the real scene, which facilitates real-time control of the pose of the virtual character in the virtual scene image.
Brief Description of the Drawings
The accompanying drawings, which constitute a part of the specification, describe embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
The present disclosure can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a method for controlling a virtual character provided by some embodiments of the present disclosure.
Fig. 2 is a schematic flowchart of a method for controlling a virtual character provided by other embodiments of the present disclosure.
Fig. 3 is a schematic flowchart of a method for controlling a virtual character provided by further embodiments of the present disclosure.
Fig. 4 is a schematic flowchart of a method for controlling a virtual character provided by still further embodiments of the present disclosure.
Fig. 5 is a schematic structural diagram of a device for controlling a virtual character provided by some embodiments of the present disclosure.
Fig. 6 is a schematic structural diagram of a device for controlling a virtual character provided by other embodiments of the present disclosure.
Fig. 7 is a schematic structural diagram of a device for controlling a virtual character provided by further embodiments of the present disclosure.
Fig. 8 is a schematic structural diagram of a device for controlling a virtual character provided by still further embodiments of the present disclosure.
Fig. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of components, the numerical expressions and the numerical values set forth in these embodiments do not limit the scope of the present disclosure.
It should also be understood that, for ease of description, the sizes of the various parts shown in the drawings are not drawn according to their actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the present disclosure or its application or use.
Techniques, methods and apparatus known to a person of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods and apparatus should be regarded as part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
Embodiments of the present disclosure can be applied to a computer system/server that can operate together with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known computing systems, environments and/or configurations suitable for use with a computer system/server include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above systems, and the like.
A computer system/server can be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules can include routines, programs, target programs, components, logic, data structures and the like, which perform particular tasks or implement particular abstract data types. The computer system/server can be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules can be located on local or remote computing system storage media that include storage devices.
Fig. 1 is a schematic flowchart of a method for controlling a virtual character provided by some embodiments of the present disclosure. It should be understood that the example shown in Fig. 1 is merely intended to help a person skilled in the art better understand the technical solution of the present disclosure and should not be construed as limiting the present disclosure. A person skilled in the art can make various transformations on the basis of Fig. 1, and such transformations should also be understood as part of the technical solution of the present disclosure.
As shown in Fig. 1, the method includes:
102: a first device obtains a captured current image.
In one or more optional examples, the current image is obtained by shooting/image acquisition of a real scene.
In an optional example, the current image may be an image shot by a local camera, for example, one frame among the multiple frames shot by the local camera. Optionally, in the case where the current image is obtained from the images captured by the local camera, a call function (for example, a function in the computer vision library OpenCV) may be used to successively obtain each frame among the multiple frames shot by the local camera as the current image, but the embodiments of the present disclosure are not limited thereto.
In another optional example, the current image may also be an image shot by a remote camera, for example, one frame among the multiple frames shot by the remote camera. Optionally, in the case where the current image is obtained from the images captured by the remote camera, each frame among the images shot by the remote camera may be received successively as the current image, but the embodiments of the present disclosure are not limited thereto.
104: the first device performs feature point extraction on the current image to obtain information on multiple human body key points of the person contained in the current image, where the information on the multiple human body key points is used to control the pose of the virtual character in the current virtual scene image.
In one or more optional examples, the multiple human body key points include, but are not limited to, 14 human joint points: the top of the head, the neck, the left shoulder, the right shoulder, the left elbow, the right elbow, the left hand, the right hand, the left hip, the right hip, the left knee, the right knee, the left foot and the right foot. Optionally, the multiple human body key points may also include more or fewer human joint points, and the embodiments of the present disclosure do not limit the number and type of the multiple human body key points.
In an optional example, the information on the multiple human body key points may include position information on each of the multiple human body key points, for example, the coordinates of each human body key point in the current image. In another optional example, the information on the multiple human body key points may also include angle information on the multiple human body key points, where the angle information on the multiple human body key points may include, for some or at least two of the human body key points, the angle between the vectors formed by a human body key point and its different adjacent human body key points. In yet another optional example, the information on the multiple human body key points may include both the position information on each of the multiple human body key points and the angle information on the multiple human body key points. Alternatively, the information on the multiple human body key points may further include other information, which is not limited by the embodiments of the present disclosure.
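For illustration only, the following is a minimal TypeScript sketch of how such key point information could be organized; the interface and field names are assumptions for this example, not part of the original disclosure.

```typescript
// Minimal sketch (assumed names): per-key-point image coordinates,
// plus optional angles between a key point and its adjacent key points.
interface HumanKeyPoint {
  name: string;   // e.g. "left_elbow"; 14 joints are mentioned above as one example
  x: number;      // position in the current image, in pixels
  y: number;
}

interface KeyPointInfo {
  points: HumanKeyPoint[];           // position information on each key point
  angles?: Record<string, number>;   // optional angle information, e.g. { left_elbow: 1.2 } in radians
}
```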
Optionally, the first device may perform feature point extraction on the current image using a machine learning method. As an example, the first device may perform feature point extraction on the current image using a neural network to obtain the position information on each of the multiple human body key points of the person contained in the current image, where the embodiments of the present disclosure do not limit the specific implementation of the neural network.
In some optional embodiments, the first device may perform feature point extraction on the current image to obtain the position information on each of the multiple human body key points of the person contained in the current image, and then determine the angle information on the multiple human body key points according to the position information on each of the multiple human body key points.
Assume that the information on the multiple human body key points includes the angle information on the multiple human body key points, where the angle information includes the angles between the vectors formed by a key point and its different adjacent human body key points, and assume here that the human body key points adjacent to the second key point are specifically the first key point and the third key point. Then, in some embodiments, the first device may determine a first vector of the first key point relative to the second key point according to the position information on the adjacent first and second key points, determine a second vector of the third key point relative to the second key point according to the position information on the adjacent second and third key points, and then determine the angle between the first vector and the second vector according to the first vector and the second vector.
In one or more optional examples, the dot product may be used to determine the angle between different vectors. For example, a coordinate system may be established with the upper-left corner of the image as the origin to obtain the coordinates of each human body key point in the image, where three adjacent human body key points form two vectors a and b, and the angle θ between the two vectors can be obtained using the dot-product formula cos θ = (a · b) / (|a| |b|).
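As a concrete illustration of this computation, here is a minimal TypeScript sketch (function and variable names are assumed) that computes the angle at a middle key point from the coordinates of three adjacent key points.

```typescript
// Minimal sketch (assumed names): angle at key point b formed by vectors b->a and b->c.
type Point = { x: number; y: number };

function angleAt(a: Point, b: Point, c: Point): number {
  const v1 = { x: a.x - b.x, y: a.y - b.y };   // first vector: first key point relative to second
  const v2 = { x: c.x - b.x, y: c.y - b.y };   // second vector: third key point relative to second
  const dot = v1.x * v2.x + v1.y * v2.y;
  const norm = Math.hypot(v1.x, v1.y) * Math.hypot(v2.x, v2.y);
  return Math.acos(dot / norm);                // cos θ = (a · b) / (|a||b|), result in radians
}

// Example: angle at the elbow given shoulder, elbow and wrist coordinates in the image.
const elbowAngle = angleAt({ x: 120, y: 80 }, { x: 140, y: 130 }, { x: 180, y: 150 });
```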
Alternatively, the angle information on the multiple human body key points may also be determined by other methods, which is not limited by the embodiments of the present disclosure.
Based on the method for controlling a virtual character provided by the above embodiments of the present disclosure, the first device obtains the captured current image, performs feature point extraction on the current image to obtain the information on the multiple human body key points of the person contained in the current image, and uses the information on the multiple human body key points to control the pose of the virtual character in the current virtual scene image. By performing feature point extraction on an image to obtain information on multiple human body key points that reflects the pose of the person in the image, the pose of the virtual character in the virtual scene image can be controlled in real time according to the information on the multiple human body key points of the person in the images captured in real time, so that the actions of the virtual character in the virtual scene are controlled according to the actions of the person in the real scene.
Fig. 2 is a schematic flowchart of a method for controlling a virtual character provided by other embodiments of the present disclosure. It should be understood that the example shown in Fig. 2 is merely intended to help a person skilled in the art better understand the technical solution of the present disclosure and should not be construed as limiting the present disclosure. A person skilled in the art can make various transformations on the basis of Fig. 2, and such transformations should also be understood as part of the technical solution of the present disclosure.
As shown in Fig. 2, compared with the method shown in Fig. 1, the difference is that the method further includes:
206: the first device controls the pose of the virtual character in the current virtual scene image according to the information on the multiple human body key points.
Optionally, the first device may obtain, according to the information on the multiple human body key points, target position information of the multiple virtual key points of the virtual character in the current virtual scene image, and then render the current virtual scene image according to the target position information of the multiple virtual key points in the virtual scene image.
In one or more optional examples, the target position information of the multiple virtual key points of the virtual character in the current virtual scene image may be determined according to the angle information on the multiple human body key points.
Optionally, the target angle information on the multiple virtual key points of the virtual character may be obtained according to the angle information on the multiple human body key points, and then the target position information of the multiple virtual key points of the virtual character in the current virtual scene image may be determined according to the target angle information on the multiple virtual key points.
In some embodiments, the target angle information on the multiple virtual key points may include at least one virtual vector angle of the multiple virtual key points. A correspondence between at least one vector angle and the at least one virtual vector angle of the multiple virtual key points may be determined according to the key point information corresponding to each vector angle of the multiple human body key points, and the target value of each virtual vector angle in the at least one virtual vector angle is then set to the value of the corresponding vector angle. For example, the correspondence between at least one vector angle formed by three adjacent key points and at least one virtual vector angle formed by three adjacent virtual key points may be determined according to the information on the center key point of the three key points corresponding to each vector angle; alternatively, the correspondence between vector angles and virtual vector angles may also be determined by other methods, which is not limited by the embodiments of the present disclosure.
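A minimal TypeScript sketch of this angle-copying step follows, under the assumption that real and virtual key points can be matched by the name of the center key point of each joint; all names are illustrative, not part of the original disclosure.

```typescript
// Minimal sketch (assumed names): copy each measured vector angle to the corresponding
// virtual vector angle, matching them by the name of the center key point of the joint.
function targetVirtualAngles(
  humanAngles: Record<string, number>,   // e.g. { left_elbow: 1.2, right_knee: 2.6 }
  virtualJointNames: string[],           // virtual key points that have a vector angle
): Record<string, number> {
  const targets: Record<string, number> = {};
  for (const joint of virtualJointNames) {
    if (joint in humanAngles) {
      targets[joint] = humanAngles[joint];   // target value = value of the corresponding vector angle
    }
  }
  return targets;
}
```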
Since angle information is relative position information, when the angle information on the human body key points is used as the information reflecting the pose of the person, operations such as coordinate transformation between different coordinate systems can be omitted when controlling the pose of the virtual character according to the pose of the actual person, compared with using the position information on the human body key points as the information reflecting the pose of the person, which simplifies the control operation.
In a specific example, when controlling a virtual character in a browser canvas, the first device may call the interface function drawImage of the browser canvas according to the information on the multiple human body key points of the person in each captured frame, render each frame of the browser canvas image in real time, and make the virtual character in the canvas perform actions corresponding to the actions of the person in each captured frame.
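A minimal sketch of such per-frame canvas rendering is shown below, assuming the virtual character is drawn from a pre-rendered sprite image anchored at one virtual key point; the element ID, sprite and anchor choice are assumptions for illustration.

```typescript
// Minimal sketch (assumed names): re-render the browser canvas for each frame
// using the canvas 2D API's drawImage, driven by the latest target positions.
const canvas = document.getElementById("scene") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

function renderFrame(characterSprite: HTMLImageElement, targets: Record<string, { x: number; y: number }>) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  // Draw the character so that an anchor point (assumed here to be the neck) follows
  // the target position computed from the human body key points of the current frame.
  const anchor = targets["neck"] ?? { x: canvas.width / 2, y: canvas.height / 2 };
  ctx.drawImage(characterSprite, anchor.x - characterSprite.width / 2, anchor.y);
}
```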
Based on the method for controlling a virtual character provided by the above embodiments of the present disclosure, the first device obtains the captured current image, performs feature point extraction on the current image to obtain the information on the multiple human body key points of the person contained in the current image, and controls the pose of the virtual character in the current virtual scene image according to the information on the multiple human body key points. By performing feature point extraction on an image to obtain information on multiple human body key points that reflects the pose of the person in the image, the pose of the virtual character in the virtual scene image can be controlled in real time according to the information on the multiple human body key points of the person in the images captured in real time, so that the actions of the virtual character in the virtual scene are controlled according to the actions of the person in the real scene. Since the extraction of the feature points of the person in the image and the control of the virtual character are performed by the same device, the composition of the system can be simplified, errors and interference caused by information transmission between different devices are avoided, and the accuracy of virtual character control is improved.
Fig. 3 is a schematic flowchart of a method for controlling a virtual character provided by further embodiments of the present disclosure. It should be understood that the example shown in Fig. 3 is merely intended to help a person skilled in the art better understand the technical solution of the present disclosure and should not be construed as limiting the present disclosure. A person skilled in the art can make various transformations on the basis of Fig. 3, and such transformations should also be understood as part of the technical solution of the present disclosure.
As shown in Fig. 3, compared with the method shown in Fig. 1, the difference is that the method further includes:
306: the first device sends the information on the multiple human body key points to the second device, the information on the multiple human body key points being used by the second device to control the pose of the virtual character in the current virtual scene image.
Optionally, before the first device sends the information on the multiple human body key points to the second device, a persistent connection between the first device and the second device may also be established, so that the first device can send the information on the multiple human body key points to the second device over the persistent connection with the second device. For example, the persistent connection is a WebSocket protocol connection. Since a persistent connection only needs to be established once to keep the first device and the second device connected, real-time and efficient transmission of the information on the multiple human body key points can be ensured.
Alternatively, the persistent connection between the first device and the second device may also be implemented in other ways, which is not limited by the embodiments of the present disclosure. Optionally, the first device may also send the information on the multiple human body key points to the second device in other ways, which is not limited by the embodiments of the present disclosure.
In a specific example, the current image is one frame of a video. Before the first device performs feature point extraction on the current image, the first device may also send to the second device the information on the multiple human body key points of the person contained in the previous frame of the current image; the information on the multiple human body key points of the person contained in the previous frame is then used by the second device to control the pose of the virtual character in the virtual scene frame preceding the current virtual scene image, so as to obtain the current virtual scene image. In this way, a cyclic process of image acquisition, information extraction, information transmission and image rendering can be carried out for every frame of the video, achieving real-time rendering of the virtual scene image.
Optionally, before the first device sends the information on the multiple human body key points to the second device, the first device may also encode the information on the multiple human body key points to obtain encoded information on the multiple human body key points; the information sent by the first device to the second device is then the encoded information on the multiple human body key points. For example, the information on the multiple human body key points may be encoded with JSON or protobuf.
Alternatively, the information on the multiple human body key points may also be encoded with other encoding schemes, which is not limited by the embodiments of the present disclosure.
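A minimal sketch of how the first device might push JSON-encoded key point information over a persistent WebSocket connection follows; the endpoint URL and the message shape are assumptions, and protobuf (mentioned above) would be an alternative encoding.

```typescript
// Minimal sketch (assumed URL and message shape): send JSON-encoded key point
// information to the second device over a persistent WebSocket connection.
type KeyPointMessage = { points: { name: string; x: number; y: number }[] };

const socket = new WebSocket("ws://second-device.example:8080/keypoints");

function sendKeyPoints(info: KeyPointMessage): void {
  if (socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify(info));   // encode once per frame, reuse the same connection
  }
}
```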
Based on the method for controlling a virtual character provided by the above embodiments of the present disclosure, the first device obtains the captured current image, performs feature point extraction on the current image to obtain the information on the multiple human body key points of the person contained in the current image, and sends the information on the multiple human body key points to the second device, the information on the multiple human body key points being used by the second device to control the pose of the virtual character in the current virtual scene image. By performing feature point extraction on an image to obtain information on multiple human body key points that reflects the pose of the person in the image, the pose of the virtual character in the virtual scene image can be controlled in real time according to the information on the multiple human body key points of the person in the images captured in real time, so that the actions of the virtual character in the virtual scene are controlled according to the actions of the person in the real scene. Since the extraction of the feature points of the person in the image and the control of the virtual character are performed by different devices, the structure of each device can be simplified, the execution efficiency of the devices is improved, and the range of application of virtual character control is extended.
Fig. 4 is a schematic flowchart of a method for controlling a virtual character provided by still further embodiments of the present disclosure. It should be understood that the example shown in Fig. 4 is merely intended to help a person skilled in the art better understand the technical solution of the present disclosure and should not be construed as limiting the present disclosure. A person skilled in the art can make various transformations on the basis of Fig. 4, and such transformations should also be understood as part of the technical solution of the present disclosure.
As shown in Fig. 4, the method includes:
402: the second device receives the information on the multiple human body key points sent by the first device.
In one or more optional examples, the multiple human body key points include, but are not limited to, 14 human joint points: the top of the head, the neck, the left shoulder, the right shoulder, the left elbow, the right elbow, the left hand, the right hand, the left hip, the right hip, the left knee, the right knee, the left foot and the right foot. Optionally, the multiple human body key points may also include more or fewer human joint points, and the embodiments of the present disclosure do not limit the number and type of the multiple human body key points.
In an optional example, the information on the multiple human body key points may include position information on each of the multiple human body key points, for example, the coordinates of each human body key point in the current image. In another optional example, the information on the multiple human body key points may also include angle information on the multiple human body key points, where the angle information on the multiple human body key points may include, for some or at least two of the human body key points, the angle between the vectors formed by a human body key point and its different adjacent human body key points. In yet another optional example, the information on the multiple human body key points may include both the position information on each of the multiple human body key points and the angle information on the multiple human body key points. Alternatively, the information on the multiple human body key points may further include other information, which is not limited by the embodiments of the present disclosure.
Optionally, before the second device receives the information on the multiple human body key points sent by the first device, a persistent connection between the first device and the second device may also be established, so that the second device can receive, over the persistent connection with the first device, the information on the multiple human body key points sent by the first device. For example, the persistent connection is a WebSocket protocol connection. Since a persistent connection only needs to be established once to keep the first device and the second device connected, real-time and efficient transmission of the information on the multiple human body key points can be ensured.
Alternatively, the persistent connection between the first device and the second device may also be implemented in other ways, which is not limited by the embodiments of the present disclosure. Optionally, the second device may also receive the information on the multiple human body key points sent by the first device in other ways, which is not limited by the embodiments of the present disclosure.
Optionally, the information on the multiple human body key points sent by the first device and received by the second device is the encoded information on the multiple human body key points. After the second device receives the information on the multiple human body key points sent by the first device, the second device may also decode the encoded information on the multiple human body key points to obtain the information on the multiple human body key points. For example, the information on the multiple human body key points may be encoded with JSON or protobuf and then decoded accordingly according to the JSON or protobuf encoding scheme.
Alternatively, the information on the multiple human body key points may also be encoded with other encoding schemes and then decoded according to the principle of the corresponding encoding scheme, which is not limited by the embodiments of the present disclosure.
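A minimal sketch of the receiving side follows, assuming the same WebSocket endpoint and JSON message shape as in the sending sketch above; all names are illustrative, not part of the original disclosure.

```typescript
// Minimal sketch (assumed URL and message shape): receive and decode key point
// information on the second device over the persistent WebSocket connection.
type KeyPointMessage = { points: { name: string; x: number; y: number }[] };

const socket = new WebSocket("ws://second-device.example:8080/keypoints");

socket.onmessage = (event: MessageEvent<string>) => {
  const info: KeyPointMessage = JSON.parse(event.data);  // decode the JSON-encoded information
  // ...use `info` to control the pose of the virtual character in the current virtual scene image
};
```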
In one or more optional examples, before the second device receives the information on the multiple human body key points sent by the first device, the second device may also obtain a current image captured by a local camera and then send the current image to the first device, where the information on the multiple human body key points is obtained by the first device performing feature point extraction on the current image. For example, each frame of the images shot by the local camera is obtained from the camera interface of the browser and used as the current image.
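A minimal sketch of grabbing frames from the local camera through the browser camera interface (getUserMedia) is shown below, assuming a browser environment; how each frame is then sent to the first device is left to the caller, and all names are illustrative.

```typescript
// Minimal sketch (assumed names): capture frames from the local camera via the
// browser camera interface and hand each frame to a callback (e.g. send to the first device).
async function startCapture(onFrame: (frame: Blob) => void): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d")!;

  const grab = () => {
    ctx.drawImage(video, 0, 0);                       // copy the current camera frame
    canvas.toBlob((blob) => blob && onFrame(blob));   // turn the frame into an image for the caller
    requestAnimationFrame(grab);                      // repeat for each successive frame
  };
  requestAnimationFrame(grab);
}
```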
404: the second device controls the pose of the virtual character in the current virtual scene image according to the information on the multiple human body key points.
Optionally, the second device may obtain, according to the information on the multiple human body key points, target position information of the multiple virtual key points of the virtual character in the current virtual scene image, and then render the current virtual scene image according to the target position information of the multiple virtual key points in the virtual scene image.
In one or more optional examples, the target position information of the multiple virtual key points of the virtual character in the current virtual scene image may be determined according to the angle information on the multiple human body key points.
Optionally, the target angle information on the multiple virtual key points of the virtual character may be obtained according to the angle information on the multiple human body key points, and then the target position information of the multiple virtual key points of the virtual character in the current virtual scene image may be determined according to the target angle information on the multiple virtual key points.
In some embodiments, at least one vector angle of the multiple human body key points may be put into correspondence with at least one virtual vector angle of the multiple virtual key points, and each virtual vector angle in the at least one virtual vector angle is controlled to be equal to the corresponding vector angle, so as to obtain the target angle information on the multiple virtual key points. For example, the angle between the vectors formed by three adjacent virtual key points among the multiple virtual key points may be made equal to the angle between the vectors formed by the corresponding three adjacent key points among the multiple human body key points and used as the target angle information on the multiple virtual key points, but the embodiments of the present disclosure are not limited thereto. Since angle information is relative position information, when the angle information on the human body key points is used as the information reflecting the pose of the person, operations such as coordinate transformation between different coordinate systems can be omitted when controlling the pose of the virtual character according to the pose of the actual person, compared with using the position information on the human body key points as the information reflecting the pose of the person, which simplifies the control operation.
In a specific example, when controlling a virtual character in a browser canvas, the second device may call the interface function drawImage of the browser canvas according to the information on the multiple human body key points of the person in each captured frame, render each frame of the browser canvas image in real time, and make the virtual character in the canvas perform actions corresponding to the actions of the person in each captured frame.
Based on the method for controlling a virtual character provided by the above embodiments of the present disclosure, the second device receives the information on the multiple human body key points sent by the first device and controls the pose of the virtual character in the current virtual scene image according to the information on the multiple human body key points. By performing feature point extraction on an image to obtain information on multiple human body key points that reflects the pose of the person in the image, the pose of the virtual character in the virtual scene image can be controlled in real time according to the information on the multiple human body key points of the person in the images captured in real time, so that the actions of the virtual character in the virtual scene are controlled according to the actions of the person in the real scene. Since the extraction of the feature points of the person in the image and the control of the virtual character are performed by different devices, the structure of each device can be simplified, the execution efficiency of the devices is improved, and the range of application of virtual character control is extended.
Fig. 5 is a schematic structural diagram of a device for controlling a virtual character provided by some embodiments of the present disclosure. It should be understood that the example shown in Fig. 5 is intended only to help those skilled in the art better understand the technical solution of the present disclosure and should not be construed as limiting the present disclosure. Those skilled in the art may make various modifications on the basis of Fig. 5, and such modifications should also be understood as part of the technical solution of the present disclosure.
As shown in Fig. 5, the device includes a receiving unit 510 and a processing unit 520.
The receiving unit 510 is configured to obtain a captured current image.
In one or more optional examples, the current image is obtained by photographing or otherwise capturing an image of a real scene.
In an optional example, the current image may be an image captured by a local camera, for example one frame of a sequence of frames captured by the local camera. Optionally, when the receiving unit 510 obtains the current image from the images captured by the local camera, each frame of the sequence may be obtained in turn as the current image through a suitable call function (for example, a function of the computer vision library OpenCV), but the embodiments of the present disclosure are not limited thereto.
In another optional example, the current image may instead be an image captured by a remote camera, for example one frame of a sequence of frames captured by the remote camera. Optionally, when the receiving unit 510 obtains the current image from the images captured by the remote camera, each frame of the captured sequence may be received in turn as the current image, but the embodiments of the present disclosure are not limited thereto.
The processing unit 520 is configured to perform feature point extraction processing on the current image to obtain the information of the multiple human body key points of the person included in the current image, where the information of the multiple human body key points is used to control the posture of the virtual character in the current virtual scene image.
In one or more optional examples, the multiple human body key points include, but are not limited to, 14 human joint points: the crown of the head, the neck, the left shoulder, the right shoulder, the left elbow, the right elbow, the left hand, the right hand, the left hip, the right hip, the left knee, the right knee, the left foot and the right foot. Optionally, the multiple human body key points may include more or fewer human joint points; the embodiments of the present disclosure do not limit the number or type of the multiple human body key points.
In an optional example, the information of the multiple human body key points may include the position information of each of the multiple human body key points, for example the coordinates of each human body key point in the current image. In another optional example, the information of the multiple human body key points may include the angle information of the multiple human body key points, where the angle information may include, for some or at least two of the human body key points, the angle between the vectors formed by that key point and different adjacent human body key points. In yet another optional example, the information of the multiple human body key points may include both the position information of each human body key point and the angle information of the multiple human body key points. Alternatively, the information of the multiple human body key points may further include other information; the embodiments of the present disclosure do not limit this.
Optionally, the processing unit 520 may perform the feature point extraction processing on the current image with a machine learning method. As an example, the processing unit 520 may perform the feature point extraction processing on the current image with a neural network to obtain the position information of each of the multiple human body key points of the person included in the current image; the embodiments of the present disclosure do not limit the specific implementation of the neural network.
In some optional embodiments, as shown in Fig. 5, the processing unit 520 may further include a feature extraction module 522 and an angle determination module 524. The feature extraction module 522 may perform the feature point extraction processing on the current image to obtain the position information of each of the multiple human body key points of the person included in the current image; the angle determination module 524 may determine the angle information of the multiple human body key points according to the position information of each of the multiple human body key points.
Suppose the information of the multiple human body key points includes the angle information of the multiple human body key points, that this angle information includes the angle between the vectors formed by a first key point and different adjacent human body key points, and that the human body key points adjacent to a second key point are the first key point and a third key point. In some embodiments, the angle determination module 524 may then determine, according to the position information of the adjacent first and second key points, a primary vector of the first key point relative to the second key point; determine, according to the position information of the adjacent second and third key points, a secondary vector of the third key point relative to the second key point; and then determine, from the primary vector and the secondary vector, the angle between them.
In one or more optional examples, the angle between the primary vector and the secondary vector may be determined with a dot product operation.
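The dot product computation just described can be sketched as follows; it is an editorial illustration, and the 2-D point type and function name are assumptions made for the example (the sketch also assumes the three key points do not coincide).

```typescript
// Angle at the second key point, formed with the first and third key points,
// computed from the primary and secondary vectors via a dot product.
interface Point { x: number; y: number; }

function angleAtSecondKeypoint(first: Point, second: Point, third: Point): number {
  // Primary vector: the first key point relative to the second key point.
  const v1 = { x: first.x - second.x, y: first.y - second.y };
  // Secondary vector: the third key point relative to the second key point.
  const v2 = { x: third.x - second.x, y: third.y - second.y };
  const dot = v1.x * v2.x + v1.y * v2.y;
  const norms = Math.hypot(v1.x, v1.y) * Math.hypot(v2.x, v2.y);
  // Clamp to [-1, 1] to guard against floating point error before acos.
  return Math.acos(Math.max(-1, Math.min(1, dot / norms)));
}
```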
Alternatively, the angle information of the multiple human body key points may be determined by other methods; the embodiments of the present disclosure do not limit this.
Based on the device for controlling a virtual character provided by the above embodiments of the present disclosure, a captured current image is obtained, feature point extraction processing is performed on the current image to obtain the information of the multiple human body key points of the person included in the current image, and this information is used to control the posture of the virtual character in the current virtual scene image. Because feature point extraction is performed on the image to obtain the information of the multiple human body key points that reflect the person's posture, the posture of the virtual character in the virtual scene image can be controlled in real time according to the key point information of the person in the images captured in real time, so that the actions of the virtual character in the virtual scene are controlled by the actions of the person in the real scene.
Fig. 6 is a schematic structural diagram of a device for controlling a virtual character provided by other embodiments of the present disclosure. It should be understood that the example shown in Fig. 6 is intended only to help those skilled in the art better understand the technical solution of the present disclosure and should not be construed as limiting the present disclosure. Those skilled in the art may make various modifications on the basis of Fig. 6, and such modifications should also be understood as part of the technical solution of the present disclosure.
As shown in Fig. 6, this device differs from the device shown in Fig. 5 in that it further includes an execution unit 630.
The execution unit 630 is configured to control, according to the information of the multiple human body key points, the posture of the virtual character in the current virtual scene image.
Optionally, as shown in Fig. 6, the execution unit 630 may further include a coordinate determination module 632 and an image rendering module 634. The coordinate determination module 632 may obtain, according to the information of the multiple human body key points, the target position information of the multiple virtual key points of the virtual character in the current virtual scene image; the image rendering module 634 may perform rendering processing on the current virtual scene image according to the target position information of the multiple virtual key points in the virtual scene image.
In one or more optional examples, the coordinate determination module 632 may determine the target position information of the multiple virtual key points of the virtual character in the current virtual scene image according to the angle information of the multiple human body key points.
Optionally, the coordinate determination module 632 may obtain the target angle information of the multiple virtual key points of the virtual character according to the angle information of the multiple human body key points, and then determine the target position information of the multiple virtual key points in the current virtual scene image according to that target angle information.
In some embodiments, the target angle information of the multiple virtual key points may include at least one virtual vector angle of the multiple virtual key points. The coordinate determination module 632 may determine, according to the key point information corresponding to each vector angle of the multiple human body key points, the correspondence between the at least one vector angle and the at least one virtual vector angle of the multiple virtual key points, and then set the target value of each virtual vector angle to the value of its corresponding vector angle. For example, the coordinate determination module 632 may determine, according to the information of the central key point of the three key points corresponding to each vector angle, the correspondence between at least one vector angle formed by three adjacent key points and at least one virtual vector angle formed by three adjacent virtual key points, or the correspondence between vector angles and virtual vector angles may be determined by other methods; the embodiments of the present disclosure do not limit this.
Because angle information is relative position information, describing the person's posture with the angle information of the human body key points, rather than with their position information, allows operations such as coordinate transformation between different coordinate systems to be omitted when the posture of the virtual character is controlled according to the posture of the real person, which simplifies the control operation.
Based on the device for controlling a virtual character provided by the above embodiments of the present disclosure, a captured current image is obtained, feature point extraction processing is performed on the current image to obtain the information of the multiple human body key points of the person included in the current image, and the posture of the virtual character in the current virtual scene image is controlled according to that information. Because feature point extraction is performed on the image to obtain the information of the multiple human body key points that reflect the person's posture, the posture of the virtual character in the virtual scene image can be controlled in real time according to the key point information of the person in the images captured in real time, so that the actions of the virtual character in the virtual scene are controlled by the actions of the person in the real scene. Since the extraction of the person's feature points from the image and the control of the virtual character are performed by the same device, the structure of the system can be simplified, errors and interference caused by transmitting information between different devices can be avoided, and the accuracy of virtual character control can be improved.
Fig. 7 is a schematic structural diagram of a device for controlling a virtual character provided by still other embodiments of the present disclosure. It should be understood that the example shown in Fig. 7 is intended only to help those skilled in the art better understand the technical solution of the present disclosure and should not be construed as limiting the present disclosure. Those skilled in the art may make various modifications on the basis of Fig. 7, and such modifications should also be understood as part of the technical solution of the present disclosure.
As shown in Fig. 7, this device differs from the device shown in Fig. 5 in that it further includes a sending unit 740.
The sending unit 740 is configured to send the information of the multiple human body key points to the second device, where the information of the multiple human body key points is used by the second device to control the posture of the virtual character in the current virtual scene image.
Optionally, before the sending unit 740 sends the information of the multiple human body key points to the second device, a persistent connection may be established between this device and the second device, so that the sending unit 740 can send the information of the multiple human body key points to the second device over that persistent connection. For example, the persistent connection is a WebSocket protocol connection. Because a persistent connection only needs to be established once to keep this device and the second device connected, it ensures that the information of the multiple human body key points is transmitted efficiently and in real time.
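A minimal first-device sketch of such a persistent connection is given below, purely for illustration. The WebSocket constructor, readyState, send, JSON.stringify and WebSocket.OPEN are standard browser APIs; the endpoint URL and the message shape are assumptions made for the example.

```typescript
// First device: open one persistent WebSocket connection and reuse it to send
// the key point information for every frame.
const socket = new WebSocket("ws://second-device.example:8080/keypoints"); // assumed URL

interface KeypointInfo {
  frameId: number;
  keypoints: Array<{ name: string; x: number; y: number }>; // assumed payload shape
}

function sendKeypoints(info: KeypointInfo): void {
  if (socket.readyState === WebSocket.OPEN) {
    // The connection is established once and kept alive, so each frame only
    // needs a single send call over the existing connection.
    socket.send(JSON.stringify(info));
  }
}
```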
Alternatively, the persistent connection between this device and the second device may be implemented in other ways; the embodiments of the present disclosure do not limit this. Optionally, the device may also send the information of the multiple human body key points to the second device by other means; the embodiments of the present disclosure do not limit this either.
In a specific example, the current image is one frame of a video. Before the processing unit 720 performs the feature point extraction processing on the current image, the sending unit 740 may send to the second device the information of the multiple human body key points of the person included in the previous frame of the current image; the second device then uses that information to control the posture of the virtual character in the virtual scene image preceding the current virtual scene image, thereby obtaining the current virtual scene image. In this way, the cycle of image capture, information extraction, information transmission and image rendering can be performed for every frame of the video, achieving real-time rendering of the virtual scene image.
Optionally, as shown in Fig. 7, the device may further include an encoding unit 750 configured to encode the information of the multiple human body key points to obtain the encoded information of the multiple human body key points; the information of the multiple human body key points that the sending unit 740 sends to the second device may then be this encoded information. For example, the information of the multiple human body key points may be encoded with JSON or protobuf.
Alternatively, the information of the multiple human body key points may be encoded with other encoding schemes; the embodiments of the present disclosure do not limit this.
Based on the device for controlling a virtual character provided by the above embodiments of the present disclosure, a captured current image is obtained, feature point extraction processing is performed on the current image to obtain the information of the multiple human body key points of the person included in the current image, and this information is sent to the second device, which uses it to control the posture of the virtual character in the current virtual scene image. Because feature point extraction is performed on the image to obtain the information of the multiple human body key points that reflect the person's posture, the posture of the virtual character in the virtual scene image can be controlled in real time according to the key point information of the person in the images captured in real time, so that the actions of the virtual character in the virtual scene are controlled by the actions of the person in the real scene. Since the extraction of the person's feature points from the image and the control of the virtual character are performed by different devices, the structure of each device can be simplified, the execution efficiency of the devices can be improved, and the applicable scope of virtual character control can be extended.
Fig. 8 is a schematic structural diagram of a device for controlling a virtual character provided by some embodiments of the present disclosure. It should be understood that the example shown in Fig. 8 is intended only to help those skilled in the art better understand the technical solution of the present disclosure and should not be construed as limiting the present disclosure. Those skilled in the art may make various modifications on the basis of Fig. 8, and such modifications should also be understood as part of the technical solution of the present disclosure.
As shown in Fig. 8, the device includes a receiving unit 810 and an execution unit 820.
The receiving unit 810 is configured to receive the information of the multiple human body key points sent by the first device.
In one or more optional examples, the multiple human body key points include, but are not limited to, 14 human joint points: the crown of the head, the neck, the left shoulder, the right shoulder, the left elbow, the right elbow, the left hand, the right hand, the left hip, the right hip, the left knee, the right knee, the left foot and the right foot. Optionally, the multiple human body key points may include more or fewer human joint points; the embodiments of the present disclosure do not limit the number or type of the multiple human body key points.
In an optional example, the information of the multiple human body key points may include the position information of each of the multiple human body key points, for example the coordinates of each human body key point in the current image. In another optional example, the information of the multiple human body key points may include the angle information of the multiple human body key points, where the angle information may include, for some or at least two of the human body key points, the angle between the vectors formed by that key point and different adjacent human body key points. In yet another optional example, the information of the multiple human body key points may include both the position information of each human body key point and the angle information of the multiple human body key points. Alternatively, the information of the multiple human body key points may further include other information; the embodiments of the present disclosure do not limit this.
Optionally, before the receiving unit 810 receives the information of the multiple human body key points sent by the first device, a persistent connection may be established between the first device and this device, so that the receiving unit 810 can receive the information of the multiple human body key points sent by the first device over that persistent connection. For example, the persistent connection is a WebSocket protocol connection. Because a persistent connection only needs to be established once to keep the first device and this device connected, it ensures that the information of the multiple human body key points is transmitted efficiently and in real time.
Alternatively, the persistent connection between the first device and this device may be implemented in other ways; the embodiments of the present disclosure do not limit this. Optionally, the device may also receive the information of the multiple human body key points sent by the first device by other means; the embodiments of the present disclosure do not limit this either.
Optionally, as shown in Fig. 8, the information of the multiple human body key points sent by the first device and received by the receiving unit 810 is the encoded information of the multiple human body key points; the device may then further include a decoding unit 830 configured to decode the encoded information to obtain the information of the multiple human body key points. For example, if the information of the multiple human body key points was encoded with JSON or protobuf, it can be decoded according to the corresponding JSON or protobuf encoding rules.
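A matching second-device sketch, receiving each message over the persistent connection and decoding it, is shown below for illustration. JSON is used because it is one of the two formats mentioned; the endpoint URL, the message shape and the updatePose placeholder are assumptions made for the example.

```typescript
// Second device: receive the encoded key point information sent by the first
// device and decode it before it is used to control the virtual character.
interface KeypointInfo {
  frameId: number;
  keypoints: Array<{ name: string; x: number; y: number }>; // assumed payload shape
}

const socket = new WebSocket("ws://second-device.example:8080/keypoints"); // assumed URL

socket.onmessage = (event: MessageEvent<string>) => {
  // JSON decoding, matching the JSON encoding applied on the first device.
  const info: KeypointInfo = JSON.parse(event.data);
  updatePose(info); // hand the decoded key points to the execution unit
};

// Illustrative placeholder for the execution unit's entry point.
function updatePose(info: KeypointInfo): void {
  console.log(`received ${info.keypoints.length} key points for frame ${info.frameId}`);
}
```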
Alternatively, the information of the multiple human body key points may be encoded with other encoding schemes and then decoded according to the rules of the corresponding encoding scheme; the embodiments of the present disclosure do not limit this.
In one or more optional examples, before the receiving unit 810 receives the information of the multiple human body key points sent by the first device, the receiving unit 810 may also obtain the current image captured by a local camera; the device may then further include a sending unit configured to send the current image to the first device, and the information of the multiple human body key points received by the receiving unit 810 may be obtained by the first device performing feature point extraction processing on that current image. For example, each frame of the images captured by the local camera may be obtained as the current image through the browser's camera interface.
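As an illustration of obtaining frames through the browser's camera interface, the sketch below uses the standard getUserMedia, canvas drawImage and fetch APIs; the upload endpoint, the JPEG encoding and the frame rate are assumptions made for the example.

```typescript
// Second device: grab frames from the local camera and send each frame to the
// first device as the current image.
async function streamCameraFrames(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await new Promise((resolve) => (video.onloadedmetadata = resolve));
  await video.play();

  const grab = document.createElement("canvas");
  grab.width = video.videoWidth;
  grab.height = video.videoHeight;
  const ctx = grab.getContext("2d")!;

  setInterval(() => {
    ctx.drawImage(video, 0, 0, grab.width, grab.height); // copy the current frame
    const frame = grab.toDataURL("image/jpeg");
    // Assumed endpoint on the first device that accepts the current image.
    fetch("http://first-device.example/current-image", { method: "POST", body: frame });
  }, 66); // roughly 15 frames per second; the rate is an arbitrary choice
}
```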
The execution unit 820 is configured to control, according to the information of the multiple human body key points, the posture of the virtual character in the current virtual scene image.
Optionally, as shown in Fig. 8, the execution unit 820 may further include a coordinate determination module 822 and an image rendering module 824. The coordinate determination module 822 may obtain, according to the information of the multiple human body key points, the target position information of the multiple virtual key points of the virtual character in the current virtual scene image; the image rendering module 824 may perform rendering processing on the current virtual scene image according to the target position information of the multiple virtual key points in the virtual scene image.
In one or more optional examples, the coordinate determination module 822 may determine the target position information of the multiple virtual key points of the virtual character in the current virtual scene image according to the angle information of the multiple human body key points.
Optionally, the coordinate determination module 822 may obtain the target angle information of the multiple virtual key points of the virtual character according to the angle information of the multiple human body key points, and then determine the target position information of the multiple virtual key points in the current virtual scene image according to that target angle information.
In some embodiments, the coordinate determination module 822 may put at least one vector angle of the multiple human body key points in correspondence with at least one virtual vector angle of the multiple virtual key points, and make each virtual vector angle equal to its corresponding vector angle, thereby obtaining the target angle information of the multiple virtual key points. For example, the coordinate determination module 822 may make the angle between the vectors formed by three adjacent virtual key points equal to the angle between the vectors formed by the corresponding three adjacent human body key points, and take the resulting angles as the target angle information of the multiple virtual key points; the embodiments of the present disclosure are not limited thereto. Because angle information is relative position information, describing the person's posture with the angle information of the human body key points, rather than with their position information, allows operations such as coordinate transformation between different coordinate systems to be omitted when the posture of the virtual character is controlled according to the posture of the real person, which simplifies the control operation.
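The disclosure does not spell out how the target positions are derived from the target angles. One simple possibility, sketched below purely for illustration, assumes a 2-D skeleton with fixed bone lengths and a fixed root joint, and walks outward from the root, placing each virtual key point at the end of a bone whose direction follows from the target angle. The bone lengths, the root position, the direction convention and all identifiers are assumptions made for the example.

```typescript
// Simplified illustration: derive target positions for one arm of the virtual
// character from a target angle, assuming fixed bone lengths in canvas pixels.
interface Point { x: number; y: number; }

const shoulder: Point = { x: 200, y: 120 };   // root joint, fixed in the scene (assumed)
const upperArmLength = 60;                    // assumed bone lengths
const forearmLength = 50;
const upperArmDirection = Math.PI / 2;        // assumed: the upper arm points downward

// `elbowAngle` is the target vector angle at the virtual elbow, i.e. the angle
// copied from the corresponding human body key point.
function armTargetPositions(elbowAngle: number): { elbow: Point; hand: Point } {
  const elbow: Point = {
    x: shoulder.x + upperArmLength * Math.cos(upperArmDirection),
    y: shoulder.y + upperArmLength * Math.sin(upperArmDirection),
  };
  // The forearm direction differs from the upper-arm direction by the interior
  // angle at the elbow; the measuring convention is an illustrative choice.
  const forearmDirection = upperArmDirection + (Math.PI - elbowAngle);
  const hand: Point = {
    x: elbow.x + forearmLength * Math.cos(forearmDirection),
    y: elbow.y + forearmLength * Math.sin(forearmDirection),
  };
  return { elbow, hand };
}
```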
Based on the device for controlling a virtual character provided by the above embodiments of the present disclosure, the information of the multiple human body key points sent by the first device is received, and the posture of the virtual character in the current virtual scene image is controlled according to that information. Because feature point extraction is performed on the image to obtain the information of the multiple human body key points that reflect the person's posture, the posture of the virtual character in the virtual scene image can be controlled in real time according to the key point information of the person in the images captured in real time, so that the actions of the virtual character in the virtual scene are controlled by the actions of the person in the real scene. Since the extraction of the person's feature points from the image and the control of the virtual character are performed by different devices, the structure of each device can be simplified, the execution efficiency of the devices can be improved, and the applicable scope of virtual character control can be extended.
The embodiments of the present disclosure further provide a system for controlling a virtual character, which is equipped with the device for controlling a virtual character of any of the embodiments of Fig. 7 above (for example, as the first device) and the device for controlling a virtual character of any of the embodiments of Fig. 8 above (for example, as the second device).
The device for controlling a virtual character provided by the embodiments of the present disclosure may be, for example, a mobile terminal, a personal computer (PC), a tablet computer or a server. Reference is now made to Fig. 9, which shows a schematic structural diagram of an electronic device 900 suitable for implementing the device for controlling a virtual character of the embodiments of the present application. As shown in Fig. 9, the computer system 900 includes one or more processors, a communication part and the like. The one or more processors are, for example, one or more central processing units (CPUs) 901 and/or one or more graphics processing units (GPUs) 913, and the processors may perform various appropriate actions and processing according to executable instructions stored in a read-only memory (ROM) 902 or executable instructions loaded from a storage section 908 into a random access memory (RAM) 903. The communication part 912 may include, but is not limited to, a network interface card, which may include, but is not limited to, an IB (InfiniBand) network interface card.
The processors may communicate with the read-only memory 902 and/or the random access memory 903 to execute the executable instructions, are connected to the communication part 912 through a bus 904, and communicate with other target devices through the communication part 912, so as to complete the operations corresponding to any method provided by the embodiments of the present application, for example: the first device obtains a captured current image; the first device performs feature point extraction processing on the current image to obtain the information of the multiple human body key points of the person included in the current image, where the information of the multiple human body key points is used to control the posture of the virtual character in the current virtual scene image; or, the second device receives the information of the multiple human body key points sent by the first device, and the second device controls, according to the information of the multiple human body key points, the posture of the virtual character in the current virtual scene image.
In addition, the RAM 903 may also store various programs and data required for the operation of the device. The CPU 901, the ROM 902 and the RAM 903 are connected to one another through the bus 904. When the RAM 903 is present, the ROM 902 is an optional module. The RAM 903 stores executable instructions, or writes executable instructions into the ROM 902 at runtime, and the executable instructions cause the processor 901 to perform the operations corresponding to the method described above. An input/output (I/O) interface 905 is also connected to the bus 904. The communication part 912 may be integrated, or may be provided as multiple sub-modules (for example, multiple IB network interface cards) linked to the bus.
The following components are connected to the I/O interface 905: an input section 906 including a keyboard, a mouse and the like; an output section 907 including a cathode ray tube (CRT), a liquid crystal display (LCD), a loudspeaker and the like; a storage section 908 including a hard disk and the like; and a communication section 909 including a network interface card such as a LAN card or a modem. The communication section 909 performs communication processing via a network such as the Internet. A drive 910 is also connected to the I/O interface 905 as needed. A removable medium 911, such as a magnetic disk, an optical disc, a magneto-optical disc or a semiconductor memory, is mounted on the drive 910 as needed, so that a computer program read from it can be installed into the storage section 908 as needed.
It should be noted that the architecture shown in Fig. 9 is only one optional implementation. In practice, the number and types of the components in Fig. 9 may be selected, deleted, added or replaced according to actual needs. Different functional components may also be arranged separately or integrated; for example, the GPU and the CPU may be arranged separately, or the GPU may be integrated on the CPU, and the communication part may be arranged separately or integrated on the CPU or the GPU. All of these alternative implementations fall within the scope of protection of the present disclosure.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program tangibly embodied on a machine-readable medium; the computer program contains program code for performing the method shown in the flowchart, and the program code may include instructions corresponding to the steps of the method provided by the embodiments of the present application, for example: the first device obtains a captured current image; the first device performs feature point extraction processing on the current image to obtain the information of the multiple human body key points of the person included in the current image, where the information of the multiple human body key points is used to control the posture of the virtual character in the current virtual scene image; or, the second device receives the information of the multiple human body key points sent by the first device, and the second device controls, according to the information of the multiple human body key points, the posture of the virtual character in the current virtual scene image. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 909 and/or installed from the removable medium 911. When the computer program is executed by the central processing unit (CPU) 901, the above functions defined in the method of the present application are performed.
In one or more optional embodiments, the embodiments of the present disclosure further provide a computer program product for storing computer-readable instructions which, when executed, cause a computer to perform the method for controlling a virtual character in any of the above possible implementations.
The computer program product may be implemented by hardware, software or a combination thereof. In an optional example, the computer program product is embodied as a computer storage medium; in another optional example, the computer program product is embodied as a software product, such as a software development kit (SDK).
In one or more optional embodiments, the embodiments of the present disclosure further provide another method for controlling a virtual character, together with a corresponding device, system, computer storage medium, computer program and computer program product, where the method includes: a first apparatus sends a virtual character control instruction to a second apparatus, the instruction causing the second apparatus to perform the method for controlling a virtual character in any of the above possible embodiments; and the first apparatus receives the information of the posture of the virtual character sent by the second apparatus.
In some embodiments, the virtual character control instruction may specifically be a call instruction. The first apparatus may instruct the second apparatus to perform virtual character control by means of a call; accordingly, in response to receiving the call instruction, the second apparatus may perform the steps and/or flows in any of the embodiments of the above method for controlling a virtual character.
It should be understood that terms such as "first" and "second" in the embodiments of the present disclosure are used only for distinction and should not be construed as limiting the embodiments of the present disclosure.
It should also be understood that, in the present disclosure, "multiple" may refer to two or more, and "at least one" may refer to one, two or more.
It should also be understood that any component, data or structure mentioned in the present disclosure may generally be understood as one or more of that component, data or structure, unless it is explicitly defined otherwise or the context clearly indicates the contrary.
It should also be understood that the description of the embodiments of the present disclosure emphasizes the differences between the embodiments; for the same or similar parts, the embodiments may be referred to one another, and for brevity these parts are not repeated.
The methods, devices and systems of the present disclosure may be implemented in many ways, for example by software, hardware, firmware, or any combination of software, hardware and firmware. The above order of the steps of the methods is given only for illustration, and the steps of the methods of the present disclosure are not limited to the order described above unless otherwise specified. In addition, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for performing the methods according to the present disclosure.
The description of the present disclosure is provided for the sake of illustration and description and is not intended to be exhaustive or to limit the present disclosure to the disclosed form. Many modifications and variations are obvious to those of ordinary skill in the art. The embodiments were chosen and described in order to better explain the principles and practical applications of the present disclosure, and to enable those of ordinary skill in the art to understand the present disclosure and thus design various embodiments with various modifications suited to particular uses.

Claims (10)

  1. A method for controlling a virtual character, characterized in that the method comprises:
    a first device obtaining a captured current image;
    the first device performing feature point extraction processing on the current image to obtain information of multiple human body key points of a person included in the current image, wherein the information of the multiple human body key points is used to control a posture of a virtual character in a current virtual scene image.
  2. The method according to claim 1, characterized in that the method further comprises:
    the first device sending the information of the multiple human body key points to a second device, the information of the multiple human body key points being used by the second device to control the posture of the virtual character in the current virtual scene image.
  3. A method for controlling a virtual character, characterized in that the method comprises:
    a second device receiving information of multiple human body key points sent by a first device;
    the second device controlling, according to the information of the multiple human body key points, a posture of a virtual character in a current virtual scene image.
  4. A device for controlling a virtual character, characterized by comprising:
    a receiving unit, configured to obtain a captured current image;
    a processing unit, configured to perform feature point extraction processing on the current image to obtain information of multiple human body key points of a person included in the current image, wherein the information of the multiple human body key points is used to control a posture of a virtual character in a current virtual scene image.
  5. A device for controlling a virtual character, characterized by comprising:
    a receiving unit, configured to receive information of multiple human body key points sent by a first device;
    an execution unit, configured to control, according to the information of the multiple human body key points, a posture of a virtual character in a current virtual scene image.
  6. A system for controlling a virtual character, characterized by comprising: the device for controlling a virtual character according to claim 2, and the device for controlling a virtual character according to claim 3.
  7. A computer program, comprising computer-readable code, characterized in that, when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the steps of the method according to claim 1 or 2.
  8. A computer program, comprising computer-readable code, characterized in that, when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the steps of the method according to claim 3.
  9. A computer storage medium for storing computer-readable instructions, characterized in that, when the instructions are executed, the operations of the method according to claim 1 or 2 are performed.
  10. A computer storage medium for storing computer-readable instructions, characterized in that, when the instructions are executed, the operations of the method according to claim 3 are performed.
CN201810064291.9A 2018-01-23 2018-01-23 For controlling the method for virtual portrait, equipment, system, program and storage medium Pending CN108227931A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810064291.9A CN108227931A (en) 2018-01-23 2018-01-23 For controlling the method for virtual portrait, equipment, system, program and storage medium

Publications (1)

Publication Number Publication Date
CN108227931A true CN108227931A (en) 2018-06-29

Family

ID=62668549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810064291.9A Pending CN108227931A (en) 2018-01-23 2018-01-23 For controlling the method for virtual portrait, equipment, system, program and storage medium

Country Status (1)

Country Link
CN (1) CN108227931A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160202770A1 (en) * 2012-10-12 2016-07-14 Microsoft Technology Licensing, Llc Touchless input
CN105637531A (en) * 2013-10-19 2016-06-01 德尔格制造股份两合公司 Recognition of gestures of a human body
CN105487660A (en) * 2015-11-25 2016-04-13 北京理工大学 Immersion type stage performance interaction method and system based on virtual reality technology
CN105551059A (en) * 2015-12-08 2016-05-04 国网山西省电力公司技能培训中心 Power transformation simulation human body motion capturing method based on optical and inertial body feeling data fusion
CN106020440A (en) * 2016-05-05 2016-10-12 西安电子科技大学 Emotion interaction based Peking Opera teaching system
KR101686585B1 (en) * 2016-06-07 2016-12-14 연합정밀주식회사 A hand motion tracking system for a operating of rotary knob in virtual reality flighting simulator
CN106774820A (en) * 2016-11-08 2017-05-31 北京暴风魔镜科技有限公司 The methods, devices and systems that human body attitude is superimposed with virtual scene
CN107272882A (en) * 2017-05-03 2017-10-20 江苏大学 The holographic long-range presentation implementation method of one species
CN107247874A (en) * 2017-06-06 2017-10-13 陕西科技大学 A kind of physical examination robot system based on Kinect

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020062998A1 (en) * 2018-09-25 2020-04-02 上海瑾盛通信科技有限公司 Image processing method, storage medium, and electronic device
CN109753150A (en) * 2018-12-11 2019-05-14 北京字节跳动网络技术有限公司 Figure action control method, device, storage medium and electronic equipment
CN111460872B (en) * 2019-01-18 2024-04-16 北京市商汤科技开发有限公司 Image processing method and device, image equipment and storage medium
CN111460873B (en) * 2019-01-18 2024-06-11 北京市商汤科技开发有限公司 Image processing method and device, image equipment and storage medium
US11741629B2 (en) 2019-01-18 2023-08-29 Beijing Sensetime Technology Development Co., Ltd. Controlling display of model derived from captured image
WO2020147791A1 (en) * 2019-01-18 2020-07-23 北京市商汤科技开发有限公司 Image processing method and device, image apparatus, and storage medium
CN111460873A (en) * 2019-01-18 2020-07-28 北京市商汤科技开发有限公司 Image processing method and apparatus, image device, and storage medium
CN111460872A (en) * 2019-01-18 2020-07-28 北京市商汤科技开发有限公司 Image processing method and apparatus, image device, and storage medium
US11538207B2 (en) 2019-01-18 2022-12-27 Beijing Sensetime Technology Development Co., Ltd. Image processing method and apparatus, image device, and storage medium
CN111460875A (en) * 2019-01-18 2020-07-28 北京市商汤科技开发有限公司 Image processing method and apparatus, image device, and storage medium
US11468612B2 (en) 2019-01-18 2022-10-11 Beijing Sensetime Technology Development Co., Ltd. Controlling display of a model based on captured images and determined information
CN110058685B (en) * 2019-03-20 2021-07-09 北京字节跳动网络技术有限公司 Virtual object display method and device, electronic equipment and computer-readable storage medium
CN110058685A (en) * 2019-03-20 2019-07-26 北京字节跳动网络技术有限公司 Display methods, device, electronic equipment and the computer readable storage medium of virtual objects
CN110349081B (en) * 2019-06-17 2023-04-07 达闼科技(北京)有限公司 Image generation method and device, storage medium and electronic equipment
CN110349081A (en) * 2019-06-17 2019-10-18 达闼科技(北京)有限公司 Generation method, device, storage medium and the electronic equipment of image
CN110390291B (en) * 2019-07-18 2021-10-08 北京字节跳动网络技术有限公司 Data processing method and device and electronic equipment
CN110390291A (en) * 2019-07-18 2019-10-29 北京字节跳动网络技术有限公司 Data processing method, device and electronic equipment
WO2021103613A1 (en) * 2019-11-28 2021-06-03 北京市商汤科技开发有限公司 Method and apparatus for driving interactive object, device, and storage medium
CN110991327A (en) * 2019-11-29 2020-04-10 深圳市商汤科技有限公司 Interaction method and device, electronic equipment and storage medium
CN111368667A (en) * 2020-02-25 2020-07-03 达闼科技(北京)有限公司 Data acquisition method, electronic equipment and storage medium
CN111368667B (en) * 2020-02-25 2024-03-26 达闼科技(北京)有限公司 Data acquisition method, electronic equipment and storage medium
CN111462337B (en) * 2020-03-27 2023-08-18 咪咕文化科技有限公司 Image processing method, device and computer readable storage medium
CN111462337A (en) * 2020-03-27 2020-07-28 咪咕文化科技有限公司 Image processing method, device and computer readable storage medium
CN111638791A (en) * 2020-06-03 2020-09-08 北京字节跳动网络技术有限公司 Virtual character generation method and device, electronic equipment and storage medium
CN111638791B (en) * 2020-06-03 2021-11-09 北京火山引擎科技有限公司 Virtual character generation method and device, electronic equipment and storage medium
CN112714337A (en) * 2020-12-22 2021-04-27 北京百度网讯科技有限公司 Video processing method and device, electronic equipment and storage medium
WO2023071964A1 (en) * 2021-10-27 2023-05-04 腾讯科技(深圳)有限公司 Data processing method and apparatus, and electronic device and computer-readable storage medium
CN114115528B (en) * 2021-11-02 2024-01-19 深圳市雷鸟网络传媒有限公司 Virtual object control method, device, computer equipment and storage medium
CN114115528A (en) * 2021-11-02 2022-03-01 深圳市雷鸟网络传媒有限公司 Virtual object control method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108227931A (en) For controlling the method for virtual portrait, equipment, system, program and storage medium
CN110349081B (en) Image generation method and device, storage medium and electronic equipment
CN109712234B (en) Three-dimensional human body model generation method, device, equipment and storage medium
EP1899793B1 (en) Control device for information display, corresponding system, method and program product
US11455765B2 (en) Method and apparatus for generating virtual avatar
WO2016110199A1 (en) Expression migration method, electronic device and system
CN109671141B (en) Image rendering method and device, storage medium and electronic device
CN111208783B (en) Action simulation method, device, terminal and computer storage medium
CN107222468A (en) Augmented reality processing method, terminal, cloud server and edge server
CN109375764A (en) A kind of head-mounted display, cloud server, VR system and data processing method
CN105678702B (en) A kind of the human face image sequence generation method and device of feature based tracking
CN107850947A (en) The Social Interaction of telecommunication
CN112837406B (en) Three-dimensional reconstruction method, device and system
US20170192734A1 (en) Multi-interface unified displaying system and method based on virtual reality
CN109584168B (en) Image processing method and apparatus, electronic device, and computer storage medium
KR20130016318A (en) A method of real-time cropping of a real entity recorded in a video sequence
Capin et al. Realistic avatars and autonomous virtual humans in: VLNET networked virtual environments
CN109821239A (en) Implementation method, device, equipment and the storage medium of somatic sensation television game
CN109144252B (en) Object determination method, device, equipment and storage medium
CN112581635B (en) Universal quick face changing method and device, electronic equipment and storage medium
CN110298306A (en) The determination method, device and equipment of target object motion information
CN107609946A (en) A kind of display control method and computing device
CN116546149A (en) Dance teaching interaction method, device, equipment and medium based on virtual digital person
CN114616536A (en) Artificial reality system for transmitting surface data using superframe
CN116342782A (en) Method and apparatus for generating avatar rendering model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180629