CN106710003B - OpenGL ES-based three-dimensional photographing method and system - Google Patents

OpenGL ES-based three-dimensional photographing method and system

Info

Publication number
CN106710003B
CN106710003B (application CN201710013995.9A)
Authority
CN
China
Prior art keywords
data
dimensional model
dimensional
opengl
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710013995.9A
Other languages
Chinese (zh)
Other versions
CN106710003A (en)
Inventor
黄超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Pinguo Technology Co Ltd
Original Assignee
Chengdu Pinguo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Pinguo Technology Co Ltd filed Critical Chengdu Pinguo Technology Co Ltd
Priority to CN201710013995.9A priority Critical patent/CN106710003B/en
Publication of CN106710003A publication Critical patent/CN106710003A/en
Application granted granted Critical
Publication of CN106710003B publication Critical patent/CN106710003B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a three-dimensional photographing method based on OpenGL ES, comprising the steps of: S01, building a three-dimensional model geometry with modeling software and storing it in the memory of a mobile device; S02, obtaining a video image with the camera of the mobile device and locating the position of a human face by a face recognition and tracking method to obtain video image data; S03, parsing the three-dimensional model geometry with the central processing unit of the mobile device to obtain parsed data; S04, drawing the three-dimensional model with OpenGL ES using the parsed data; S05, blending the video image data and the three-dimensional model with an OpenGL ES blending mode and rendering the final scene containing the three-dimensional model; and S06, displaying the final scene to the user on the display device of the mobile device.

Description

OpenGL ES-based three-dimensional photographing method and system
Technical Field
The invention belongs to the technical field of graphics processing, and in particular relates to a three-dimensional photographing method and system based on OpenGL ES.
Background
At present, photographing with a mobile phone is the mainstream: most people who go out take photos with nothing but their phone. Making mobile phone photography more fun and engaging has therefore become a basic user demand.
People currently rely on various photographing applications to beautify photos, the most common approach being to add a foreground image to the picture. Existing foreground beautification mostly combines the person in the video image with a two-dimensional image, but a two-dimensional foreground is rigid and of limited appeal; moreover, existing methods for rendering three-dimensional geometry are slow, so rendering efficiency is low.
Disclosure of Invention
In order to solve the above problems, the invention provides a three-dimensional photographing method and system based on OpenGL ES that put the person in the video and a virtual three-dimensional object on camera together, increasing the fun of photographing, with fast response and high rendering efficiency.
In order to achieve the purpose, the invention adopts the technical scheme that:
a three-dimensional photographing method based on OpenG L ES comprises the steps of,
S01, building a three-dimensional model geometry with modeling software, and storing it in the memory of the mobile device;
S02, acquiring a video image with the camera of the mobile device, and locating the face position by a face recognition and tracking method to obtain video image data;
S03, parsing the three-dimensional model geometry with the central processing unit of the mobile device to obtain parsed data;
S04, drawing the three-dimensional model with OpenGL ES using the parsed data;
S05, blending the video image data with the three-dimensional model using an OpenGL ES blending mode, and rendering the final scene containing the three-dimensional model;
and S06, displaying the final scene to the user on the display device of the mobile device.
Further, the modeling software in step S01 is Maya, a mainstream and highly general three-dimensional modeling package; during modeling, a texture map is applied to the three-dimensional model geometry, and when the model is animated, skeletal skinning animation is added so that the geometry is more lifelike.
Further, the face recognition and tracking method in step S02 comprises analyzing the camera video image data, recognizing a face and tracking it to obtain face data, the face data including eye position data, mouth position data and three-dimensional pose data of the face; the video image and the face data are combined into image data in preparation for three-dimensional model synthesis.
Further, in step S03 the parsed data comprises vertex data, index data, texture coordinates, skeletal animation data and skinning weights, providing the basic data for OpenGL ES rendering.
Further, rendering the three-dimensional geometry with OpenGL ES in step S04 comprises the steps of:
uploading the vertex data to the graphics processor with the vertex-data upload interface of OpenGL ES;
uploading the index data to the graphics processor, the index data selecting the corresponding vertex data to assemble the corresponding three-dimensional model; because the data volume of a three-dimensional model is usually huge, this step mainly saves memory and accelerates rendering;
uploading the texture coordinates to the graphics processor and mapping the texture image with them to obtain a textured three-dimensional model; this step makes the model more realistic;
if the three-dimensional model is animated, updating the vertex data in real time from the skeletal skinning animation data and repeating the steps above to obtain the animated model; this step makes the model more lifelike and keeps it updated in real time.
Further, blending the image data with the three-dimensional geometry in step S05 comprises the steps of: setting an OpenGL ES blending mode and enabling the depth test; rendering the video image data with one layer of graphics, the layer being a primitive unit consisting of 4 vertices and 4 texture coordinates; rendering the three-dimensional model from the parsed model data, the model consisting of several meshes that are each rendered in turn; and rendering in Alpha blending mode to obtain the final scene containing the three-dimensional model.
Further, after step S06 the method comprises the step of adjusting the three-dimensional pose of the three-dimensional geometry in real time according to the video image data, i.e. adjusting the state of the whole three-dimensional model in real time according to the face position.
Further, adjusting the three-dimensional pose of the three-dimensional model in real time comprises the steps of: generating a transformation matrix from the three-dimensional pose data; applying the transformation matrix to each vertex of the three-dimensional model to obtain its final form; and, if the model is animated, combining that final form with the skeletal skinning animation to obtain the adjusted final scene containing the three-dimensional model.
On the other hand, the invention also provides an OpenGL ES-based three-dimensional photographing system comprising a mobile device and a modeling computer in communication with each other; the mobile device comprises a camera, a memory, a graphics processor, a central processing unit and a display device, with the camera, memory, graphics processor and display device each connected to the central processing unit.
The beneficial effects of the technical scheme are as follows:
the method can combine the characters and the virtual three-dimensional objects in the live broadcast or later-period video, so that the characters and the three-dimensional objects are taken out of the mirror together, and the characters and the three-dimensional objects have higher fusion performance, thereby increasing the interest of photographing; the method can position and track the images in the video in real time, and has high tracking precision and high response speed; the method provided by the invention has the advantages of high calculation response speed and high rendering efficiency.
Drawings
FIG. 1 is a schematic flow chart of the OpenGL ES-based three-dimensional photographing method of the present invention;
fig. 2 is a schematic structural diagram of an OpenGL ES-based three-dimensional photographing system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described with reference to the accompanying drawings.
In this embodiment, referring to fig. 1, the invention provides a three-dimensional photographing method based on OpenGL ES, comprising steps S01-S06.
Wherein:
and S01, constructing a three-dimensional model geometric body by utilizing modeling software, and storing the three-dimensional model geometric body into a memory of the mobile device.
The modeling software in step S01 is Maya, a mainstream and highly general three-dimensional modeling package; during the modeling of step S01, texture maps are added to the three-dimensional model geometry, and if the model is animated, skeletal skinning animation is added to make it more lifelike.
A three-dimensional model geometry is created with three-dimensional modeling software; it can be animated and textured, and a suitable dimensional scale is chosen so that it renders at the right size in the program. Once created, the model is exported to a common three-dimensional interchange format such as FBX or DAE.
A concrete modeling workflow is illustrated here with mainstream software, for example Maya: open Maya and create a new plane; the modeler creates vertex data to obtain a model, attaches a texture map to it, and lays out UV texture coordinates so that the texture displays correctly; finally a plug-in exports the required data, including the individual texture images.
And S02, acquiring a video image by using a camera of the mobile equipment, and positioning the face position by using a face recognition and tracking method to acquire video image data.
The face recognition and tracking method of step S02 comprises analyzing the camera video image data, recognizing a face and tracking it to obtain face data, the face data including eye position data, mouth position data and three-dimensional pose data of the face; the video image and the face data are combined into image data.
The face is recognized and located with the mobile device's own face recognition and tracking system, or with a third-party face tracking system, yielding face data such as the coordinate position of the face in the output picture and its 3D pose, in preparation for the subsequent three-dimensional model synthesis.
Taking the iPhone as an example: first the face detection API provided by the system is used to obtain the position and size of the face; depending on the scene, the system returns information such as position and angle for one or more faces in the image, usually as the rectangular region each face occupies. The face data is then fed into a face tracking system, which uses models trained on tens of thousands of faces to produce 66 or more tracking points, giving real-time tracking of the face position while also estimating the 3D pose of the face, i.e. its yaw, roll and pitch angles.
S03, parse the three-dimensional model geometry with the central processing unit of the mobile device to obtain parsed data.
In step S03 the parsed data comprises vertex data, index data, texture coordinates, skeletal animation data and skinning weights, providing the basic data for OpenGL ES rendering.
The DAE format illustrates the concrete process. DAE is a standard XML format, currently maintained by Khronos, that contains all the data needed for rendering, such as vertex data nodes, animation data nodes, texture coordinate nodes, index data nodes and skeletal animation data nodes; one only needs to write a parser to extract all of the data mentioned above.
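The innermost step of such a parser can be sketched in C. This is an illustration under the assumption that an XML library has already isolated the text content of a DAE float_array node; the function name is ours, not from the patent.

```c
#include <stdlib.h>
#include <stddef.h>

/* Pull whitespace-separated floats out of the text content of a DAE
 * <float_array> node (e.g. vertex positions or texture coordinates).
 * Returns the number of floats written to out, at most max. */
static size_t parse_float_array(const char *text, float *out, size_t max)
{
    size_t n = 0;
    const char *p = text;
    char *end;
    while (n < max) {
        float v = strtof(p, &end);
        if (end == p) break;   /* no further number found */
        out[n++] = v;
        p = end;
    }
    return n;
}
```

The same loop applies to index lists with strtol; a full parser would walk the XML tree to find each node first.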
S04, OpenGL ES draws the three-dimensional model from the parsed data.
In step S04, rendering the three-dimensional geometry with OpenGL ES comprises the steps of:
Upload the vertex data, i.e. the geometric shape data of the three-dimensional model, to the graphics processor for rendering, using the vertex-data upload interface of OpenGL ES; for performance, the data can be kept resident on the GPU with a VAO and VBO.
Upload the index data to the graphics processor; the index data selects the corresponding vertex data to assemble the corresponding three-dimensional model. Because the data volume of a three-dimensional model is usually huge, this step mainly saves memory and accelerates rendering. With vertex and index data in place the model can in principle be rendered, but its appearance will still be poor, because no lighting, texture or animation has been added.
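The memory saving that motivates index data can be made concrete with a small C sketch (illustrative; the vertex layout is an assumption) comparing an indexed quad, as uploaded via a GL_ELEMENT_ARRAY_BUFFER and drawn with glDrawElements, against the same quad expanded into independent triangle corners.

```c
#include <stddef.h>
#include <stdint.h>

/* One position-only vertex; real models also carry normals and UVs,
 * which makes the saving from sharing vertices even larger. */
typedef struct { float x, y, z; } Vertex;

/* Size of an indexed mesh: unique vertices plus 16-bit indices. */
static size_t indexed_bytes(size_t unique_verts, size_t index_count)
{
    return unique_verts * sizeof(Vertex) + index_count * sizeof(uint16_t);
}

/* Size without indices: every triangle corner is a full vertex. */
static size_t expanded_bytes(size_t corner_count)
{
    return corner_count * sizeof(Vertex);
}
```

A quad drawn as two triangles has 4 unique vertices referenced by 6 indices; indexing it takes 60 bytes versus 72 bytes expanded, and the gap grows with mesh size and vertex attribute count.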
Upload the texture coordinates and the texture image to the graphics processor, and map the texture image with the coordinates to obtain a textured three-dimensional model; this step makes the model more realistic.
If the three-dimensional model is animated, update the vertex data in real time from the skeletal skinning animation data and repeat the steps above to obtain the animated model; this step makes the model more lifelike and keeps it updated in real time. The updated vertex data comes from the skeletal skinning animation data; for simplicity this computation is performed on the CPU, and the result is submitted directly to the GPU once obtained.
Specifically, the texture data must be exported together with the DAE or FBX file; the texture picture gives the model a skin so that it looks more lifelike. The texture image data is read and a texture is generated, yielding the TextureId of the texture in OpenGL ES; the TextureId is then bound to a uniform variable in the shader and assigned in the usual way when the shader is compiled. At this point the GPU holds vertex data, index data and texture data, but still lacks the data that maps the texture onto the model, namely the texture coordinates; these are parsed for the whole model and uploaded to the GPU through the corresponding OpenGL ES upload interface. Once this series of actions is complete, calling the corresponding OpenGL ES render interface and displaying on the GLView shows the static model.
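What the texture coordinates buy can be sketched on the CPU: the lookup below mimics, in simplified nearest-neighbor form, the sampling the GPU performs once the texture image has been uploaded with glTexImage2D and bound to a sampler uniform. Illustrative code only; the names and the RGBA-in-uint32 layout are our assumptions.

```c
#include <stddef.h>
#include <stdint.h>

/* Map a texture coordinate (u, v) in [0,1] to the nearest texel of a
 * w-by-h image stored row by row, one packed 32-bit pixel per texel.
 * The GPU does this (usually with bilinear filtering) for every
 * fragment of the textured three-dimensional model. */
static uint32_t sample_nearest(const uint32_t *pixels, int w, int h,
                               float u, float v)
{
    int x = (int)(u * (float)(w - 1) + 0.5f);
    int y = (int)(v * (float)(h - 1) + 0.5f);
    if (x < 0) x = 0;
    if (x >= w) x = w - 1;  /* clamp, matching GL_CLAMP_TO_EDGE */
    if (y < 0) y = 0;
    if (y >= h) y = h - 1;
    return pixels[y * w + x];
}
```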
If the three-dimensional model is animated, the animation must be rendered as well. The principle of animation is to update the vertex data in real time; the update can be computed on the GPU or the CPU, and in this embodiment it is computed on the CPU.
The concrete flow interpolates the final vertex data from the parsed animation, skeleton and skinning data. If an animation spans the two key times T0 and T1, the animation data at those two times defines a transformation, usually stored as a transformation matrix; intermediate times are filled in by linear interpolation, or by quaternion spherical linear interpolation for rotations. Once the transformation matrix for the current time is obtained, multiplying the bone data by it moves the skeleton toward its T1 pose. Because a three-dimensional model usually has many nodes standing in parent-child relationships, each node's own matrix must also be cascaded with its parent's, so that child nodes move with their parents. After each node's world-space transformation matrix is obtained, note that a bone or joint is essentially a local coordinate system: since all vertex motion is driven by the skeleton, the vertices must be transformed from the world coordinate system into the bone's local coordinate system, so that when the bone moves, all affected vertices move with it.
Each bone carries a datum called the offset matrix, and multiplying a vertex by it performs this conversion; the transformed vertices then exhibit the animation. But such a rigid transformation produces "cracks" at the joints, so a skinning effect is introduced: with skinning, each vertex is influenced by several bones whose weights sum to 1. Because one vertex responds to several bones, the joints deform smoothly and the cracks disappear. With the whole effect in place, functions such as blinking and opening or closing the mouth can be realized, so the three-dimensional model can animate following the person's facial motion.
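The interpolation and skinning just described can be sketched in C. This is a simplified illustration, not the patent's code: only bone translations are interpolated (rotations would use quaternion spherical interpolation, as the text notes), and each vertex is a weighted sum over its influencing bones, with weights summing to 1.

```c
/* Minimal CPU skinning sketch: interpolate bone transforms between
 * key times, then blend each vertex across its bones by weight. */
typedef struct { float x, y, z; } Vec3;

static float lerpf(float a, float b, float t) { return a + (b - a) * t; }

/* Interpolate one bone's translation between key times T0 and T1;
 * t in [0,1] is the normalized current time. */
static Vec3 lerp_translation(Vec3 t0, Vec3 t1, float t)
{
    Vec3 r = { lerpf(t0.x, t1.x, t),
               lerpf(t0.y, t1.y, t),
               lerpf(t0.z, t1.z, t) };
    return r;
}

/* Weighted blend of per-bone transformed positions for one vertex;
 * the weights over nbones bones sum to 1, which is what smooths the
 * joints and removes the "cracks". */
static Vec3 skin_vertex(Vec3 v, const Vec3 bone_translate[],
                        const float weight[], int nbones)
{
    Vec3 out = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < nbones; ++i) {
        out.x += weight[i] * (v.x + bone_translate[i].x);
        out.y += weight[i] * (v.y + bone_translate[i].y);
        out.z += weight[i] * (v.z + bone_translate[i].z);
    }
    return out;
}
```

The resulting positions are the per-frame vertex data that the embodiment computes on the CPU and re-submits to the GPU.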
And S05, blend the video image data and the three-dimensional model using an OpenGL ES blending mode, and render the final scene containing the three-dimensional model.
Blending the video image data with the three-dimensional geometry in step S05 comprises the steps of: setting an OpenGL ES blending mode and enabling the depth test; rendering the video image data with one layer of graphics; rendering the three-dimensional model from the parsed model data, the model consisting of several meshes that are each rendered in turn; and rendering in Alpha blending mode to obtain the final scene containing the three-dimensional model.
After the preparation of setting the blending mode and enabling the depth test, the video data input from the camera, i.e. the whole image data, is rendered with one layer of graphics; at this point everything is ready, but the three-dimensional model has not yet been drawn over the full camera image. When the video image has been rendered and the three-dimensional model is rendered on top of it, the depth test must be enabled, because the model may consist of several meshes whose vertices lie at different depths; care must also be taken over whether the depth buffer is writable while rendering the video image versus the model, to avoid incorrect results. The desired effect is then achieved easily by rendering in an Alpha-based blending mode.
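The Alpha blend that composites the model over the video frame can be written out per channel. Below is a CPU sketch of the standard source-over formula; the patent names only an "Alpha blending mode", so the specific factors, corresponding to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) after glEnable(GL_BLEND), are our assumption.

```c
/* Source-over Alpha blend for one color channel in [0,1]:
 * src is the model fragment, dst the already-rendered video fragment,
 * src_alpha the model fragment's opacity. */
static float blend_channel(float src, float dst, float src_alpha)
{
    return src * src_alpha + dst * (1.0f - src_alpha);
}
```

A fully opaque model fragment (alpha 1) replaces the video pixel; a fully transparent one (alpha 0) leaves it untouched, which is how the virtual object appears embedded in the camera picture.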
And S06, displaying the final scene to the user through a display device of the mobile equipment.
As an optimization of the above embodiment, after S06 the method further comprises the step of adjusting the three-dimensional pose of the three-dimensional geometry in real time according to the video image data, i.e. adjusting the state of the whole three-dimensional model in real time according to the face position.
Adjusting the three-dimensional pose of the three-dimensional model in real time comprises the steps of: generating a transformation matrix from the three-dimensional pose data; applying the transformation matrix to each vertex of the three-dimensional model to obtain its final form; and, if the model is animated, combining that final form with the skeletal skinning animation to obtain the adjusted final scene containing the three-dimensional model.
Specifically, the face position data obtained in S02 is used to adjust the state of the whole three-dimensional model in real time. When the final vertex positions are submitted to the GPU, one extra step applies the face's three-dimensional pose data, i.e. one more matrix multiplication: the model-view and projection matrices derived from the face's three-dimensional pose are applied, the model's vertex data being multiplied by this final transformation in the vertex shader; the final matrix can be computed on the CPU as the product of the model-view matrix and the projection matrix.
Care must be taken to keep the data convention (column vectors or row vectors) consistent, i.e. whether matrices multiply on the left or the right. The final matrix obtained this way is multiplied with the vertex data to which the animation has already been applied, yielding the final vertex positions. One detail deserves attention: a three-dimensional model has several meshes, offset matrices exist between them, and they usually stand in parent-child relationships, so the final matrix of each mesh must also be computed by cascading, and is finally multiplied by the model-view and projection matrices mentioned above to obtain the final result containing the three-dimensional model.
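The matrix cascade can be sketched in C with column-major storage, as OpenGL ES uniforms expect (illustrative code; the function names are ours): the final matrix is the product of the projection and model-view matrices under the column-vector convention, and each vertex is multiplied by it, exactly what the vertex shader would compute as final * vertex.

```c
/* Column-major 4x4 product: out = a * b (column-vector convention,
 * so b is applied to the vertex first). */
static void mat4_mul(const float a[16], const float b[16], float out[16])
{
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k) s += a[k * 4 + r] * b[c * 4 + k];
            out[c * 4 + r] = s;
        }
}

/* Apply a column-major 4x4 matrix to a homogeneous vertex. */
static void mat4_xform(const float m[16], const float v[4], float out[4])
{
    for (int r = 0; r < 4; ++r)
        out[r] = m[0*4+r]*v[0] + m[1*4+r]*v[1]
               + m[2*4+r]*v[2] + m[3*4+r]*v[3];
}
```

Per-mesh offset matrices and parent transforms cascade through the same mat4_mul before the projection * model-view product is applied; mixing left- and right-multiplication conventions here is exactly the inconsistency the text warns about.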
Meanwhile, a 2D dynamic rendering layer can be added on the basis of three dimensions, so that the content can be richer.
To support the implementation of the method of the present invention, and based on the same inventive concept, as shown in fig. 2 the invention further provides an OpenGL ES-based three-dimensional photographing system comprising a mobile device and a modeling computer in communication with each other; the mobile device comprises a camera, a memory, a graphics processor, a central processing unit and a display device, with the camera, memory, graphics processor and display device each connected to the central processing unit.
The foregoing shows and describes the general principles and main features of the present invention and its advantages. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which are given in the specification and drawings only to illustrate its principle; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes fall within the scope claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (6)

1. A three-dimensional photographing method based on OpenGL ES, characterized by comprising the following steps:
S01, building a three-dimensional model geometry with modeling software, and storing it in the memory of the mobile device;
S02, acquiring a video image with the camera of the mobile device, and locating the face position by a face recognition and tracking method to obtain video image data;
S03, parsing the three-dimensional model geometry with the central processing unit of the mobile device to obtain vertex data, index data, texture coordinates, skeletal animation data and skinning weights;
S04, uploading the vertex data to a graphics processor for rendering with the vertex-data upload interface of OpenGL ES; uploading the index data to the graphics processor, the index data selecting the corresponding vertex data to assemble the corresponding three-dimensional model; uploading the texture coordinates to the graphics processor and mapping the texture image with them to obtain a textured three-dimensional model; and, if the model is animated, updating the vertex data in real time from the skeletal animation data and repeating these steps to obtain the animated three-dimensional model;
S05, setting an OpenGL ES blending mode and enabling the depth test; rendering the video image data with one layer of graphics, the layer being a primitive unit consisting of 4 vertices and 4 texture coordinates; rendering the three-dimensional model from the parsed model data, the model consisting of several meshes that are each rendered in turn; and rendering in Alpha blending mode to obtain the final scene containing the three-dimensional model;
and S06, displaying the final scene to the user through the display device of the mobile device.
2. The OpenGL ES-based three-dimensional photographing method according to claim 1, wherein during the modeling of step S01, texture maps are added to the three-dimensional model geometry.
3. The OpenGL ES-based three-dimensional photographing method according to claim 2, wherein the face recognition and tracking method in step S02 comprises analyzing the camera video image data, recognizing a face and tracking it to obtain face data, the face data comprising eye position data, mouth position data and three-dimensional pose data of the face, and the video image and the face data are combined into image data.
4. The OpenGL ES-based three-dimensional photographing method according to claim 3, further comprising, after S06, the step of adjusting the three-dimensional pose of the three-dimensional geometry in real time according to the video image data.
5. The OpenGL ES-based three-dimensional photographing method according to claim 4, wherein adjusting the three-dimensional pose of the three-dimensional model in real time comprises the steps of: generating a transformation matrix from the three-dimensional pose data; applying the transformation matrix to each vertex of the three-dimensional model to obtain its final form; and, if the model is animated, combining that final form with the skeletal skinning animation to obtain the adjusted final scene containing the three-dimensional model.
6. A three-dimensional photographing system for implementing the OpenGL ES-based three-dimensional photographing method according to any one of claims 1-5.
CN201710013995.9A 2017-01-09 2017-01-09 OpenGL ES-based three-dimensional photographing method and system Active CN106710003B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710013995.9A CN106710003B (en) 2017-01-09 2017-01-09 OpenGL ES-based three-dimensional photographing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710013995.9A CN106710003B (en) 2017-01-09 2017-01-09 OpenGL ES-based three-dimensional photographing method and system

Publications (2)

Publication Number Publication Date
CN106710003A CN106710003A (en) 2017-05-24
CN106710003B true CN106710003B (en) 2020-07-10

Family

ID=58908089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710013995.9A Active CN106710003B (en) 2017-01-09 2017-01-09 OpenGL ES-based three-dimensional photographing method and system

Country Status (1)

Country Link
CN (1) CN106710003B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145688A (en) * 2017-06-28 2019-01-04 武汉斗鱼网络科技有限公司 The processing method and processing device of video image
CN108122266B (en) * 2017-12-20 2021-07-27 成都卓杭网络科技股份有限公司 Method, device and storage medium for caching rendering textures of skeleton animation
CN108765529A (en) * 2018-05-04 2018-11-06 北京比特智学科技有限公司 Video generation method and device
CN108648252A (en) * 2018-05-17 2018-10-12 成都明镜视觉科技有限公司 A kind of skeleton cartoon compatibility processing method
CN108765539B (en) * 2018-05-24 2022-05-13 武汉斗鱼网络科技有限公司 OpenGLES-based image rendering method, device, equipment and storage medium
CN108921778B (en) * 2018-07-06 2022-12-30 成都品果科技有限公司 Method for generating star effect map
CN109636893B (en) * 2019-01-03 2023-04-21 华南理工大学 Analysis and rendering method of three-dimensional OBJ model and MTL material in iPhone
CN111476834B (en) * 2019-01-24 2023-08-11 北京地平线机器人技术研发有限公司 Method and device for generating image and electronic equipment
CN110347462A (en) * 2019-06-21 2019-10-18 秦皇岛尼特智能科技有限公司 WMF fire-fighting graph processing method and device based on OPENGL
CN110298918A (en) * 2019-08-02 2019-10-01 湖南海诚宇信信息技术有限公司 One kind is based on GPU real-time three-dimensional modeling display device and three-dimensional modeling display methods
CN110992460B (en) * 2019-11-26 2023-05-16 深圳市毕美科技有限公司 Model fluency display method, system, device and storage medium for mobile equipment
CN112929750B (en) * 2020-08-21 2022-10-28 海信视像科技股份有限公司 Camera adjusting method and display device
CN113487708B (en) * 2021-06-25 2023-11-03 山东齐鲁数通科技有限公司 Flow animation implementation method based on graphics, storage medium and terminal equipment
CN116433821B (en) * 2023-04-17 2024-01-23 上海臻图信息技术有限公司 Three-dimensional model rendering method, medium and device for pre-generating view point index

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021952A (en) * 2007-03-23 2007-08-22 北京中星微电子有限公司 Method and apparatus for realizing three-dimensional video special efficiency
CN101452582A (en) * 2008-12-18 2009-06-10 北京中星微电子有限公司 Method and device for implementing three-dimensional video specific action
CN102572391A (en) * 2011-12-09 2012-07-11 深圳市万兴软件有限公司 Method and device for genius-based processing of video frame of camera
US9043515B1 (en) * 2010-12-22 2015-05-26 Google Inc. Vertex array access bounds checking
CN105046740A (en) * 2015-06-25 2015-11-11 上海卓悠网络科技有限公司 3D graph processing method based on OpenGL ES and device thereof
CN105338370A (en) * 2015-10-28 2016-02-17 北京七维视觉科技有限公司 Method and apparatus for synthetizing animations in videos in real time

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101605211B (en) * 2009-07-23 2011-01-05 杭州镭星科技有限公司 Method for seamlessly composing virtual three-dimensional building and real-scene video of real environment
CN102917174A (en) * 2011-08-04 2013-02-06 深圳光启高等理工研究院 Video synthesis method and system applied to electronic equipment
CN103037165A (en) * 2012-12-21 2013-04-10 厦门美图网科技有限公司 Photographing method of immediate-collaging and real-time filter
CN104834897A (en) * 2015-04-09 2015-08-12 东南大学 System and method for enhancing reality based on mobile platform
CN105491365A (en) * 2015-11-25 2016-04-13 罗军 Image processing method, device and system based on mobile terminal

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021952A (en) * 2007-03-23 2007-08-22 北京中星微电子有限公司 Method and apparatus for realizing three-dimensional video special efficiency
CN101452582A (en) * 2008-12-18 2009-06-10 北京中星微电子有限公司 Method and device for implementing three-dimensional video specific action
US9043515B1 (en) * 2010-12-22 2015-05-26 Google Inc. Vertex array access bounds checking
CN102572391A (en) * 2011-12-09 2012-07-11 深圳市万兴软件有限公司 Method and device for genius-based processing of video frame of camera
CN105046740A (en) * 2015-06-25 2015-11-11 上海卓悠网络科技有限公司 3D graph processing method based on OpenGL ES and device thereof
CN105338370A (en) * 2015-10-28 2016-02-17 北京七维视觉科技有限公司 Method and apparatus for synthetizing animations in videos in real time

Also Published As

Publication number Publication date
CN106710003A (en) 2017-05-24

Similar Documents

Publication Publication Date Title
CN106710003B (en) OpenGL ES-based three-dimensional photographing method and system
WO2021093453A1 (en) Method for generating 3d expression base, voice interactive method, apparatus and medium
US8933928B2 (en) Multiview face content creation
WO2022205760A1 (en) Three-dimensional human body reconstruction method and apparatus, and device and storage medium
CN109325990B (en) Image processing method, image processing apparatus, and storage medium
CN107452049B (en) Three-dimensional head modeling method and device
CN101916454A (en) Method for reconstructing high-resolution human face based on grid deformation and continuous optimization
KR20210113948A (en) Method and apparatus for generating virtual avatar
CN112530005B (en) Three-dimensional model linear structure recognition and automatic restoration method
CN113744374A (en) Expression-driven 3D virtual image generation method
CN111710035B (en) Face reconstruction method, device, computer equipment and storage medium
CN112837406A (en) Three-dimensional reconstruction method, device and system
CN111382618B (en) Illumination detection method, device, equipment and storage medium for face image
US11328466B2 (en) Method and user interface for generating tangent vector fields usable for generating computer generated imagery
CN115984447B (en) Image rendering method, device, equipment and medium
CN116342782A (en) Method and apparatus for generating avatar rendering model
CN117115398A (en) Virtual-real fusion digital twin fluid phenomenon simulation method
US11682156B2 (en) Method for controlling digital feather growth between two manifolds in a computer simulated creature
EP3980975B1 (en) Method of inferring microdetail on skin animation
CN114119821A (en) Hair rendering method, device and equipment of virtual object
CN110689616B (en) Water delivery channel parametric modeling method based on three-dimensional digital earth
CA3169005A1 (en) Face mesh deformation with detailed wrinkles
US11783516B2 (en) Method for controlling digital feather generations through a user interface in a computer modeling system
CN117671110B (en) Real-time rendering system and method based on artificial intelligence
JPH1027268A (en) Image processing method and image processor

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant