CN117649475A - Eyeball rendering method, device, electronic equipment and computer readable storage medium - Google Patents

Eyeball rendering method, device, electronic equipment and computer readable storage medium

Info

Publication number
CN117649475A
CN117649475A
Authority
CN
China
Prior art keywords
iris
eyeball
texture map
layer
texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311675523.1A
Other languages
Chinese (zh)
Inventor
邵佳仪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202311675523.1A priority Critical patent/CN117649475A/en
Publication of CN117649475A publication Critical patent/CN117649475A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/10: Geometric effects
    • G06T 15/20: Perspective computation
    • G06T 15/205: Image-based rendering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/04: Texture mapping
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/50: Lighting effects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the application disclose an eyeball rendering method and device, an electronic device, and a computer readable storage medium. An eyeball model to be rendered and an initial texture map corresponding to the eyeball model are obtained, where the initial texture map comprises texture maps of a multi-layer eyeball structure of the eyeball model, and the multi-layer eyeball structure comprises an iris layer. A camera direction vector for shooting the eyeball model and iris parameters of the eyeball model are acquired, and a target refraction direction of ambient light under the iris parameters of the eyeball model is determined based on the camera direction vector and the iris parameters. Offset processing is performed on the texture map of the iris layer based on the target refraction direction to obtain an iris texture map after refraction offset. The eyeball model is then rendered based on the iris texture map and the texture maps of the eyeball structure layers other than the iris layer in the initial texture map, yielding a rendered eyeball model. In this way, the embodiments of the application can improve the eyeball rendering efficiency.

Description

Eyeball rendering method, device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of rendering technologies, and in particular, to an eyeball rendering method, an eyeball rendering device, an electronic device, and a computer readable storage medium.
Background
Under the wave of the internet, computer graphics is used increasingly widely, and visual rendering, an important branch of computer graphics, simulates visual effects of the real world by means of computer programs.
Traditional visual rendering techniques are based primarily on geometric models and illumination models, simulating the interaction of light rays with objects to generate images. When visual rendering is applied to a virtual character with eyeballs, rendering the eyeballs well can improve the character's sense of realism and vitality.
Existing eyeball rendering methods generally generate a fixed texture map for texture mapping and render based on that fixed map. However, when the eyeball changes, because of the complexity of the eyeball structure and because the eyeball varies dynamically in different ways under different illumination conditions, a great deal of time is required to redraw a new texture map for the expected eyeball change effect before rendering can proceed from the redrawn map, which results in low eyeball rendering efficiency.
Disclosure of Invention
The embodiment of the application provides an eyeball rendering method, an eyeball rendering device, electronic equipment and a computer readable storage medium, which can improve the eyeball rendering efficiency.
In a first aspect, an embodiment of the present application provides an eyeball rendering method, where the method includes:
acquiring an eyeball model to be rendered and an initial texture map corresponding to the eyeball model, wherein the initial texture map comprises texture maps of a multi-layer eyeball structure of the eyeball model, and the multi-layer eyeball structure comprises an iris layer;
acquiring a camera direction vector for shooting the eyeball model and iris parameters of the eyeball model, and determining a target refraction direction of ambient light under the iris parameters of the eyeball model based on the camera direction vector and the iris parameters;
performing offset processing on the texture map of the iris layer based on the target refraction direction to obtain an iris texture map after refraction offset;
and rendering the eyeball model based on the iris texture mapping and the texture mapping of other eyeball structural layers except the iris layer in the initial texture mapping to obtain the rendered eyeball model.
In a second aspect, an embodiment of the present application further provides an eyeball rendering device, where the device includes:
the device comprises an acquisition module, a rendering module and a rendering module, wherein the acquisition module is used for acquiring an eyeball model to be rendered and an initial texture map corresponding to the eyeball model, the initial texture map comprises texture maps of a multi-layer eyeball structure of the eyeball model, and the multi-layer eyeball structure comprises an iris layer;
the direction determining module is used for acquiring a camera direction vector for shooting the eyeball model and iris parameters of the eyeball model, and determining a target refraction direction of ambient light under the iris parameters of the eyeball model based on the camera direction vector and the iris parameters;
the deviation module is used for carrying out deviation processing on the texture map of the iris layer based on the target refraction direction to obtain an iris texture map after refraction deviation;
and the rendering module is used for rendering the eyeball model based on the iris texture mapping and the texture mapping of other eyeball structural layers except the iris layer in the initial texture mapping to obtain the rendered eyeball model.
In a third aspect, embodiments of the present application further provide an electronic device, including a memory storing a plurality of instructions and a processor that loads the instructions from the memory to execute any of the eyeball rendering methods provided by the embodiments of the present application.
In a fourth aspect, embodiments of the present application further provide a computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform any one of the methods for rendering an eyeball provided in the embodiments of the present application.
In the embodiments of the present application, an eyeball model to be rendered and an initial texture map corresponding to the eyeball model are obtained, where the initial texture map includes texture maps of a multi-layer eyeball structure of the eyeball model and the multi-layer eyeball structure includes an iris layer, so that the texture map of the layer to be adjusted can be adjusted among the texture maps of the multi-layer eyeball structure. Specifically, a camera direction vector for shooting the eyeball model and iris parameters of the eyeball model are obtained, so that a target refraction direction of ambient light under the iris parameters of the eyeball model is determined based on the camera direction vector and the iris parameters. Then, offset processing is performed on the texture map of the iris layer based on the target refraction direction to obtain the iris texture map after refraction offset. Finally, the eyeball model is rendered based on the iris texture map and the texture maps of the eyeball structure layers other than the iris layer in the initial texture map to obtain a rendered eyeball model. In this way, the iris layer texture map to be adjusted in the initial texture map is adjusted based on parameters to generate a new texture map, and rendering proceeds from the adjusted iris texture map, avoiding the time spent redrawing the texture map and greatly improving the eyeball rendering efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an eyeball structure of an eyeball model according to an embodiment of the present application;
fig. 2 is a schematic view of a scene of an eyeball rendering method according to an embodiment of the present application;
fig. 3 is a flowchart of an embodiment of an eyeball rendering method provided in an embodiment of the present application;
FIG. 4 is an iris normal map provided in an embodiment of the application;
FIG. 5 is an iris depth map provided in an embodiment of the application;
FIG. 6 is a schematic diagram of an iris depth contrast scenario provided in an embodiment of the present application;
FIG. 7 is a schematic illustration of a comparison between a limbus and iris layer provided in an embodiment of the present application;
FIG. 8 is a texture map of the scleral layer provided in an embodiment of the present application;
FIG. 9 is a schematic illustration of limbal information provided by an embodiment of the present application;
FIG. 10 is a schematic view of a rendering effect of an eyeball model according to an embodiment of the present application;
FIG. 11 is a schematic view of another rendering effect of an eyeball model according to an embodiment of the present application;
FIG. 12 is a wetting normal graph provided by an embodiment of the present application;
FIG. 13 is a schematic view of the wetting effect of the eyeball model provided in an embodiment of the present application;
FIG. 14 is a schematic illustration of the shading effect of the eyeball model provided in an embodiment of the present application;
FIG. 15 is a schematic view of the focus and dispersion effect of the eyeball model provided in an embodiment of the present application;
fig. 16 is a schematic structural view of an eyeball rendering device provided in an embodiment of the present application;
fig. 17 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Before explaining the embodiments of the present application in detail, some terms related to the embodiments of the present application are explained.
The eyeball structure of the eyeball model includes, but is not limited to, a cornea layer (denoted Cornea), a limbus ring (denoted Limbus), an iris layer (denoted Iris), a pupil layer (denoted Pupil), a sclera layer (denoted Sclera) and the like, and the eyeball structures of the different layers can be spliced or arranged front to back to construct the eyeball model. Each eyeball structure has its own influence on the reflection, refraction, scattering and other changes of light in the eyeball; for example, the cornea layer refracts the light entering the eyeball, and the iris layer controls the amount of incoming light by adjusting the size of the pupil layer.
The iris layer is positioned behind the cornea layer and is responsible for controlling the size of the pupil layer, so as to control the quantity of light entering the eyes.
Wherein, the sclera layer is the outer layer structure of the eyeball and appears mainly white. The scleral region contains red blood filaments (denoted Red Veins), red vein-like textures whose effect is usually simulated by texture mapping.
Wherein, the pupil is the black part at the center of the eye and is the passage through which light enters the eye. Pupil dilation (denoted Pupil Dilation) adjusts the size of the pupil according to the intensity of incoming light: when the light is weak, the pupil dilates to allow more light into the eye.
The limbus is located between the iris layer and the sclera layer and is the edge area where the iris layer transitions into the sclera layer, forming the edge of the iris layer. This transition region is of great significance in rendering and simulating the eyeball's response to light refraction, scattering and the like.
Illustratively, as shown in fig. 1, in terms of relief structure the eyeball model is a sphere. Viewed from the side of the eyeball model, the outline of the whole model is the cornea layer, an outward protruding structure that wraps the iris layer; the iris layer is a plane; the pupil layer is the black part of the inwardly concave iris layer; and a certain distance exists between the cornea layer and the iris layer inside.
Wherein in the description of embodiments of the present application, the terms "first," "second," and the like may be used herein to describe various concepts, but such concepts are not limited by these terms unless otherwise specified. These terms are only used to distinguish one concept from another. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the application provides an eyeball rendering method, an eyeball rendering device, electronic equipment and a computer readable storage medium. Specifically, the eyeball rendering method of the embodiment of the application may be performed by an electronic device, where the electronic device may be a device such as a terminal or a server. The terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer (PC, Personal Computer), or a personal digital assistant (Personal Digital Assistant, PDA), and the terminal may further include a client, which may be a game application client, a browser client carrying a game program, an instant messaging client, or the like. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data and artificial intelligence platforms, and the like.
For example, as shown in fig. 2, the electronic device is illustrated by taking a terminal 20 as an example, and the terminal may obtain an eyeball model to be rendered and an initial texture map corresponding to the eyeball model, where the initial texture map includes a texture map of a multi-layer eyeball structure of the eyeball model, and the multi-layer eyeball structure includes an iris layer; acquiring a camera direction vector for shooting the eyeball model and iris parameters of the eyeball model, and determining a target refraction direction of ambient light under the iris parameters of the eyeball model based on the camera direction vector and the iris parameters; performing offset processing on the texture map of the iris layer based on the target refraction direction to obtain an iris texture map after refraction offset; and rendering the eyeball model based on the iris texture mapping and the texture mapping of other eyeball structural layers except the iris layer in the initial texture mapping to obtain the rendered eyeball model.
Based on the above problems, embodiments of the present application provide an eyeball rendering method, apparatus, electronic device, and computer readable storage medium, which can improve the eyeball rendering efficiency.
The following detailed description is provided with reference to the accompanying drawings. The following description of the embodiments is not intended to limit the preferred embodiments. Although a logical order is depicted in the flowchart, in some cases the steps shown or described may be performed in an order different than depicted in the figures.
In this embodiment, a terminal is taken as an example for illustration, and this embodiment provides an eyeball rendering method, as shown in fig. 3, a specific flow of the eyeball rendering method may be as follows:
301. and obtaining an eyeball model to be rendered and an initial texture map corresponding to the eyeball model, wherein the initial texture map comprises texture maps of a multi-layer eyeball structure of the eyeball model, and the multi-layer eyeball structure comprises an iris layer.
The eyeball model is a model to be rendered currently, and has a plurality of eyeball structures, such as a cornea layer, a limbus, an iris layer, a pupil layer, a sclera layer and the like, and correspondingly, the initial texture map corresponding to the eyeball model also comprises a texture map corresponding to each eyeball structure.
In this embodiment, the terminal obtains the eyeball model and the initial texture map, so as to adjust the initial texture map of the eyeball model, thereby rendering the eyeball model more truly by the adjusted initial texture map.
302. And acquiring a camera direction vector for shooting the eyeball model and iris parameters of the eyeball model, and determining a target refraction direction of ambient light under the iris parameters of the eyeball model based on the camera direction vector and the iris parameters.
The camera direction vector indicates the direction in which the camera shoots the eyeball model. Because the apparent orientation of the eyeball model changes as the camera direction changes, and the position of the refracted scene seen differs between viewing angles, this embodiment needs to acquire the direction in which the camera currently shoots the eyeball model, so that the refraction direction of the eyeball model under the actual parameters can be determined based on the relative relationship between the orientations of the eyeball model and the camera.
The iris parameters indicate parameters related to the iris layer in the eyeball model. Because the depth between the iris layer and the cornea layer varies, and the change of light in the eyeball model appears different from different viewing angles, in this embodiment the terminal needs to determine the target refraction direction of the ambient light under the iris parameters of the eyeball model based on the camera direction vector and the iris parameters.
Refraction refers to the phenomenon in which a light ray changes its propagation direction when passing through an interface between different media. The target refraction direction is the refraction direction, at the current viewing angle, of light travelling from the air through the cornea layer toward the iris layer, and it may be represented as a vector.
In some embodiments, the terminal may determine an initial refraction direction of light entering the eyeball model through the cornea layer, so as to obtain a refraction direction of the ambient light in the eyeball model under a certain iris parameter by performing corresponding processing on the initial refraction direction through the camera direction vector and the iris parameter.
In this embodiment, the determining, based on the camera direction vector and the iris parameter, the target refraction direction of the ambient light under the iris parameter of the eyeball model may include: the terminal needs to determine the initial refraction direction of the ambient light in the eyeball model. And correcting the initial refraction direction based on the camera direction vector and the iris parameter to obtain a target refraction direction of the ambient light under the iris parameter of the eyeball model.
Specifically, the determining the initial refraction direction of the ambient light in the eyeball model includes: acquiring an eyeball refractive index, an air refractive index and a vertex normal of the eyeball model; and processing the camera direction vector, the eyeball refractive index, the air refractive index and the vertex normal based on a preset refraction calculation rule to obtain an initial refraction direction of the ambient light in the eyeball model.
It will be appreciated that since the vertex normals at different locations of the model eye are different, the initial refraction directions corresponding to the different locations are different.
Wherein the refractive index is a physical parameter describing the degree of change in velocity of light as it passes through a medium. The refractive index of the eyeball refers to the degree of change of the speed of the light in the eyeball, and the refractive index of the air refers to the degree of change of the speed of the light in the air.
A normal (Normal) is a vector perpendicular to a plane or surface and is usually used in illumination and reflection calculations. A vertex normal is the outward normal of each vertex constituting the eyeball model; since the cornea layer is the external structure of the eyeball model, the vertex normal is also equivalent to the normal of the model's cornea structure. Vertex normals are recorded in the attribute information of the eyeball model, and the terminal can read that attribute information directly when needed.
Specifically, the code corresponding to the refraction calculation rule is as follows:
float eye_ior = 1.336;        // refractive index of the eyeball
float air_ior = 1.00029;      // refractive index of air
float n = air_ior / eye_ior;  // relative refractive index from air into the eye
float w = n * dot(world_normal, camera_vector);
float k = sqrt(1 + (w - n) * (w + n));
float3 refraction_direction = -normalize((w - k) * world_normal - n * camera_vector);
Here, eye_ior indicates the eyeball refractive index, air_ior indicates the air refractive index, world_normal indicates the vertex normal, camera_vector indicates the camera direction vector, and refraction_direction indicates the initial refraction direction.
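As a cross-check, the refraction calculation rule above can be mirrored in plain Python. This is an illustrative sketch only, assuming camera_vector points from the surface point toward the camera and both input vectors are unit length; the patent itself gives only the shader snippet.

```python
import math

def refract_initial(world_normal, camera_vector,
                    eye_ior=1.336, air_ior=1.00029):
    """Mirror of the refraction calculation rule above: the initial
    refraction direction of ambient light entering the eyeball.
    Inputs are assumed to be unit-length 3-vectors."""
    n = air_ior / eye_ior
    w = n * sum(a * b for a, b in zip(world_normal, camera_vector))
    k = math.sqrt(1 + (w - n) * (w + n))
    d = [-((w - k) * nrm - n * cam)
         for nrm, cam in zip(world_normal, camera_vector)]
    length = math.sqrt(sum(c * c for c in d))
    return [c / length for c in d]

# Viewing the surface head-on, the refracted ray stays along the normal.
head_on = refract_initial([0.0, 0.0, 1.0], [0.0, 0.0, 1.0])
```

Because air and eye refractive indices differ only moderately, oblique views bend the ray toward the normal, which is what produces the apparent shift of the iris handled in the later steps.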
In some embodiments, the camera direction vector indicates the direction from the screen pixel to the camera shooting the eyeball model. The iris parameters include an iris depth and an iris normal map, where the iris depth indicates the distance between the iris layer and the cornea layer in the eyeball model.
It can be understood that, at the model level, the eyeball model has an outward protruding structure corresponding to the cornea, while under illumination the iris needs to appear concave. An iris normal map is therefore introduced, as shown in fig. 4, from which the concave effect of the iris normals can be seen.
It will be appreciated that the iris depth indicates how far the iris layer is recessed within the eyeball model. Because the iris depth is the distance between the iris layer and the cornea layer, the depth difference between positions of the two layers determines how the iris layer appears recessed, and from it the refraction direction of light passing from the air through the cornea layer toward the iris layer at the current viewing angle can be determined.
Illustratively, fig. 5 is an iris depth map indicating the iris depth. A white area in fig. 5 indicates that the distance between the cornea layer and the iris layer is maximal, that is, the iris depth is greatest, and a black area indicates that the distance is 0. Seen from the side of the eyeball model, the iris layer is a plane and the cornea layer is a dome surrounding it: the white area is the region of the cornea layer farthest from the iris layer, and the black area is the region where the cornea layer adheres closely to the iris layer, corresponding to the sclera layer. The gray transition area in fig. 5 is the limbus.
The iris depth map and the iris normal map may be stored in one texture to save package size and sampling: for example, the iris normal map is stored in the RG channels of a texture, and the iris depth map is stored in the B channel of the same texture. Correspondingly, when sampling, the UV coordinates in the initial texture map of the eyeball model can be used to sample the iris normal map, so that the corresponding normal map is attached to the eyeball model and the concave iris normals of the eyeball model are obtained. Since the normal map is in tangent space, it can be converted into world space through the world matrix to facilitate the subsequent calculations.
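The channel-packing scheme described above can be sketched as follows; this is a minimal illustration with an assumed 8-bit quantization, since the patent does not specify the encoding:

```python
def pack_iris_texel(normal_xy, depth):
    """Pack the iris normal's XY components (each in [-1, 1]) into the
    R and G channels and the iris depth (in [0, 1]) into the B channel
    of one texel, as 8-bit values. The quantization is an assumption."""
    to_u8 = lambda v: int(round((v * 0.5 + 0.5) * 255))  # [-1, 1] -> [0, 255]
    return (to_u8(normal_xy[0]), to_u8(normal_xy[1]), int(round(depth * 255)))

def unpack_iris_texel(texel):
    """Recover the (approximate) normal XY and depth from a packed texel."""
    r, g, b = texel
    return ((r / 255) * 2 - 1, (g / 255) * 2 - 1), b / 255

# Flat normal at maximum depth round-trips to roughly the same values.
texel = pack_iris_texel((0.0, 0.0), 1.0)
(nx, ny), depth = unpack_iris_texel(texel)
```

Packing both maps into one texture means a single sample fetches the normal and the depth together, which is the saving the paragraph above refers to.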
Correspondingly, correcting the initial refraction direction based on the camera direction vector and the iris parameters to obtain the target refraction direction of ambient light under the iris parameters of the eyeball model includes: the terminal performs a dot product of the camera direction vector with the iris normal map to obtain a direction indication parameter for the iris normal at each position in the iris normal map, where the direction indication parameter indicates the relative relationship between that iris normal and the camera direction vector. The initial refraction direction is then corrected based on the iris depth at each position in the iris depth map and the direction indication parameter at each position in the iris normal map, determining the target refraction direction of ambient light at each position in the texture map of the iris layer.
Specifically, the code for determining the target refraction direction is as follows:
float cos_alpha = dot(camera_vector, bottom_world_normal);  // how directly the view faces the iris normal
float3 scaled_refracted_offset_direction = refraction_direction * (iris_depth / lerp(0.325, 1.0, cos_alpha * cos_alpha));
Here, refraction_direction indicates the initial refraction direction, iris_depth indicates the iris depth, cos_alpha is the direction indication parameter, and bottom_world_normal indicates the iris normal in world space.
It will be appreciated that cos_alpha indicates the relative relationship between the iris normal and the camera direction vector in the iris normal map, that is, how directly the iris normal faces the camera. If the iris normal is exactly opposite the camera direction, the result of the lerp function is 1; if the iris normal deviates most from the camera direction, the result approaches 0.325. A correction coefficient is thus determined for each iris normal according to its relationship with the camera direction vector, and the initial refraction direction is corrected by the ratio of the iris depth to this coefficient: an iris position whose normal directly faces the camera is offset by the iris depth itself, while an iris position whose normal deviates from the camera direction is offset by a larger value.
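The view-dependent correction can be checked numerically with a small Python mirror of the snippet above (illustrative only; lerp here is the standard linear interpolation used in shader languages):

```python
def lerp(a, b, t):
    """Standard linear interpolation, as in shader languages."""
    return a + (b - a) * t

def scaled_refraction_offset(refraction_direction, iris_depth, cos_alpha):
    """Mirror of the correction above: the initial refraction direction
    is scaled by the iris depth divided by a view-dependent coefficient
    lerp(0.325, 1.0, cos_alpha^2)."""
    coeff = lerp(0.325, 1.0, cos_alpha * cos_alpha)
    return [c * (iris_depth / coeff) for c in refraction_direction]

direction = [0.0, 0.0, 1.0]
# Normal exactly facing the camera (cos_alpha = 1): offset equals the iris depth.
head_on = scaled_refraction_offset(direction, 0.2, 1.0)
# Normal deviating most from the camera (cos_alpha = 0): offset grows to iris_depth / 0.325.
grazing = scaled_refraction_offset(direction, 0.2, 0.0)
```

The two extreme cases confirm the behaviour stated in the paragraph above: the offset is the iris depth itself head-on and roughly three times larger at the most deviated angle.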
In some embodiments, the iris depth may be set directly by a worker, or may be derived from attribute information related to the iris. For example, an initial iris depth map containing the iris depth at each iris position may be set; the initial iris depth map is then scaled based on the iris size used in the actual application to obtain an adjusted iris depth map, and the final iris depth map is determined from it, for example by calculating the difference between the initial and adjusted iris depth maps and taking the product of that difference and a preset depth coefficient as the iris depth in the final iris depth map.
It will be appreciated that u_iris_scale+0.5 and 0.5 above indicate the UV position used when sampling the initial iris depth map, and half is a performance optimization. Since the iris depth is largest in the middle and gradually decreases toward the periphery, in this embodiment the degree of iris concavity should differ with the size of the eyeball model, and the corresponding iris depth differs as well.
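One possible reading of this depth-scaling step, sketched in Python: the nearest-neighbour sampling helper, the scaling of UVs about the map center, and all names other than those quoted above are assumptions for illustration, not the patent's exact formula.

```python
def sample_nearest(depth_map, u, v):
    """Nearest-neighbour sample of a depth map (list of rows) at UV in [0, 1]."""
    h, w = len(depth_map), len(depth_map[0])
    x = min(w - 1, max(0, int(u * w)))
    y = min(h - 1, max(0, int(v * h)))
    return depth_map[y][x]

def final_iris_depth(depth_map, u, v, iris_scale, depth_coefficient):
    """Hypothetical sketch of the described procedure: sample the initial
    iris depth map, sample it again with UVs scaled about the center
    (0.5, 0.5) by the iris size, and take the difference times a preset
    depth coefficient as the final iris depth."""
    initial = sample_nearest(depth_map, u, v)
    su = (u - 0.5) * iris_scale + 0.5
    sv = (v - 0.5) * iris_scale + 0.5
    adjusted = sample_nearest(depth_map, su, sv)
    return (initial - adjusted) * depth_coefficient
```

Scaling the sampling UVs about the center is what lets one authored depth map serve eyeball models of different sizes, as the paragraph above requires.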
Illustratively, as shown in fig. 6, the left area of fig. 6 is a schematic view of a display scene of an eyeball model without iris depth, and the right area of fig. 6 is a schematic view of a display scene of an eyeball model with iris depth. As can be seen from fig. 6, the eyeball model with iris depth appears more realistic than the eyeball model without iris depth.
303. And performing offset processing on the texture map of the iris layer based on the target refraction direction to obtain the iris texture map after refraction offset.
It will be appreciated that the perceived position of the iris is shifted by corneal parallax, that is, by the refraction of light through eye structures such as the corneal layer; parallax here refers to the apparent change in the position of a scene caused by a change in the observation point. Therefore, to make the rendering result of the iris layer more realistic, in this embodiment the terminal needs to perform offset processing on the texture map of the iris layer in the initial texture map based on the actually obtained target refraction direction, so as to obtain a refraction-offset iris texture map that accords with the actual situation.
Specifically, performing offset processing on the texture map of the iris layer based on the target refraction direction to obtain the refraction-offset iris texture map may include: the terminal obtains an eyeball forward vector, and determines the offset of the texture map of the iris layer based on the eyeball forward vector, the iris normal map and the target refraction direction, that is, clarifies the specific amount by which the iris appears shifted under the current viewing angle. The terminal then performs offset processing on the texture map of the iris layer based on that offset to obtain the refraction-offset iris texture map.
It will be appreciated that the iris texture map changes with the viewing angle. For example, when the eyeball model is viewed from the front, the iris layer does not shift and the iris texture map does not need to change, whereas when the model is viewed from the side, the iris layer shifts backward and the iris texture map changes accordingly. As shown in fig. 6, when the model is viewed from the side, the rendering effect is poor if the iris layer neither shifts backward nor has depth, while the rendering effect is good and the realism is high if the iris layer shifts backward.
Therefore, in order to render the eyeball model more finely, in this embodiment, the texture position in the two-dimensional texture map is adjusted in advance based on the determined offset to obtain an adjusted texture map that meets the expected effect, so that rendering is performed based on the adjusted texture map.
Wherein the eyeball forward vector is used for indicating the visual direction of eyes.
Illustratively, the code for determining the eyeball forward vector is as follows:
float3 tangent_basis=mul(float3x3(tangent_to_world),float3(1.0,0.0,0.0))
Wherein tangent_basis is used to indicate the eyeball forward vector, and tangent_to_world is used to indicate the world matrix.
In some embodiments, the offset includes an offset in the tangential direction, and determining the offset of the texture map of the iris layer based on the eyeball forward vector, the iris normal map and the target refraction direction may include: the terminal obtains a tangent vector corresponding to the iris normal map based on the eyeball forward vector and the iris normal map, where the tangent vector is perpendicular to the iris normal in the iris normal map. The terminal then performs dot-product processing on the tangent vector corresponding to the iris normal map and the target refraction direction to obtain the offset of the texture map of the iris layer in the tangential direction.
Specifically, the code for determining the tangent vector corresponding to the iris normal map is as follows:
float3 dirve_tangents=normalize(tangent_basis-(dot(tangent_basis,bottom_world_normal)*bottom_world_normal))
Wherein tangent_basis is used to indicate the eyeball forward vector, bottom_world_normal is used to indicate the iris normal map, and dirve_tangents is used to indicate the tangent vector corresponding to the iris normal map.
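The Gram-Schmidt projection performed by the shader line above can be sketched in plain Python (the vector helpers are this author's, added for illustration only):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = dot(v, v) ** 0.5
    return [x / n for x in v]

def derive_tangent(tangent_basis, normal):
    # Subtract the component of the eyeball forward vector that lies
    # along the iris normal, then renormalize; the result is a unit
    # tangent perpendicular to the normal, like dirve_tangents above.
    d = dot(tangent_basis, normal)
    return normalize([t - d * n for t, n in zip(tangent_basis, normal)])
```

For any forward vector not parallel to the normal, the derived tangent is unit length and orthogonal to the normal, which is exactly the property the dot-product offset computation relies on.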
In some embodiments, the offset further includes an offset in the secondary tangential direction, and determining the offset of the texture map of the iris layer based on the eyeball forward vector, the iris normal map and the target refraction direction may include: the terminal performs cross-product processing on the tangent vector corresponding to the iris normal map and the iris normal in the iris normal map to obtain a secondary tangent vector corresponding to the iris normal map, where the secondary tangent vector is perpendicular to both the iris normal and the tangent vector; and the terminal performs dot-product processing on the secondary tangent vector corresponding to the iris normal map and the target refraction direction to obtain the offset of the texture map of the iris layer in the secondary tangential direction.
Specifically, the code for determining the offset of the texture map of the iris layer in the tangential direction and the secondary tangential direction is as follows:
float2 refracted_uv_offset=float2(dot(dirve_tangents,scale_refracted_offset_dierction),dot(cross(dirve_tangents,bottom_world_normal),scale_refracted_offset_dierction))
Wherein refracted_uv_offset is used to indicate the offset, scale_refracted_offset_dierction is used to indicate the target refraction direction, dirve_tangents is used to indicate the tangent vector corresponding to the iris normal map, dot(dirve_tangents,scale_refracted_offset_dierction) is used to indicate the offset of the texture map of the iris layer in the tangential direction, cross(dirve_tangents,bottom_world_normal) is used to indicate the secondary tangent vector corresponding to the iris normal map, and the dot product of the secondary tangent vector and the target refraction direction is used to indicate the offset of the texture map of the iris layer in the secondary tangential direction.
In some embodiments, the offset may include an offset in the tangential direction and an offset in the secondary tangential direction, and performing offset processing on the texture map of the iris layer based on the offset of the texture map of the iris layer to obtain the refraction-offset iris texture map may include: the terminal performs offset processing on the texture map of the iris layer based on the offset in the tangential direction and the offset in the secondary tangential direction respectively, to obtain the refraction-offset iris texture map.
Specifically, the code for determining the iris texture map after refractive offset is as follows:
float2 refracted_uv=uv0-float2(u_iris_scale)*refracted_uv_offset
Wherein uv0 is used to indicate the initial texture map, u_iris_scale is used to indicate the iris size, and refracted_uv_offset is used to indicate the offset. It will be appreciated that the texture map of the iris layer can be identified from uv0 and u_iris_scale.
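The UV shift performed by the shader line above can be sketched as a minimal Python analogue (the numeric values in the usage are illustrative assumptions):

```python
def refract_uv(uv0, iris_scale, offset):
    # refracted_uv = uv0 - u_iris_scale * refracted_uv_offset,
    # applied per component: one component of `offset` is the
    # tangential offset, the other the secondary tangential offset.
    return tuple(u - iris_scale * o for u, o in zip(uv0, offset))
```

For example, with uv0 = (0.5, 0.5), an iris size of 0.4 and an offset of (0.1, -0.05), the refracted UV is approximately (0.46, 0.52): the sample point moves against the tangential offset and with the negative secondary offset.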
304. And rendering the eyeball model based on the iris texture mapping and the texture mapping of other eyeball structural layers except the iris layer in the initial texture mapping to obtain the rendered eyeball model.
It can be understood that, during rendering, the two-dimensional texture map needs to be mapped onto the three-dimensional eyeball model based on the UV texture coordinates of the eyeball model to obtain the rendering result of the eyeball model, thereby realizing surface detail rendering of the model. Therefore, in this embodiment, the terminal needs to render the eyeball model comprehensively based on the iris layer texture map and the texture maps of the other eyeball structural layers except the iris layer to obtain the rendered eyeball model.
In some embodiments, the rendering the eyeball model based on the iris texture map and the texture maps of other eyeball structural layers except the iris layer in the initial texture map to obtain a rendered eyeball model includes steps S3041 to S3043:
S3041, determining an eyeball mask of the eyeball model, wherein the eyeball mask is used for indicating texture mask information corresponding to at least one layer of eyeball structure of the eyeball model.
In computer graphics, a mask is used to define which areas of an image should be displayed or hidden, or to control the color or transparency of certain areas of an image. The eyeball mask accordingly indicates which portions of the texture map corresponding to at least one layer of eyeball structure of the eyeball model should be displayed or hidden, or their colors or transparency. For example, an eyeball mask containing texture mask information corresponding to the iris layer is used to indicate the position at which the iris layer is displayed.
It can be appreciated that, since the eyeball mask already indicates the positions corresponding to the eyeball structures of different layers, the eyeball mask is also used for fusing the normal maps corresponding to the eyeball structures of different layers, so as to obtain the fused normal map required for final rendering.
In some embodiments, the determining the eyeball mask of the eyeball model may include: the terminal obtains preset iris size parameters and structure layer size parameters of the other eyeball structure layers, and then determines texture mask information of at least one layer of eyeball structure on the initial texture map based on the iris size parameters, the structure layer size parameters and the relative position relationship between the iris layer and the other eyeball structure layers so as to generate an eyeball mask of the eyeball model based on the texture mask information of at least one layer of eyeball structure on the initial texture map.
Illustratively, the structural layer size parameter of the other eyeball structural layer includes a structural layer size parameter of a limbus, and the code for generating the eyeball mask of the eyeball model is as follows:
float2 mask_uv=uv0-float2(0.5);
float2 r=(length(mask_uv)-(u_iris_scale-limbus_uv_width))/limbus_uv_width;
float2 m=saturate(1.0-r);
float2 iris_mask=smoothstep(0.0,1.0,m)
Wherein uv0 is used to indicate the initial texture map, u_iris_scale is used to indicate the iris size, and limbus_uv_width is used to indicate the structural layer size parameter of the limbus. The smoothstep is used to soften the transition between the limbus and the sclera, resulting in a mask with a white iris layer, a black sclera layer and a gray limbus.
It will be appreciated that, since the iris and the limbus are concentric circles and the limbus is located between the iris and the sclera, a mask that distinguishes the iris layer, the limbus and the sclera layer can be obtained based on the iris size parameter and the limbus structural layer size parameter.
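The mask computation above can be sketched in Python, with saturate and smoothstep re-implemented to match their HLSL definitions (function names outside the snippet's own identifiers are this author's):

```python
def saturate(x):
    return max(0.0, min(1.0, x))

def smoothstep(e0, e1, x):
    t = saturate((x - e0) / (e1 - e0))
    return t * t * (3.0 - 2.0 * t)

def iris_mask(uv, iris_scale, limbus_uv_width):
    # Radial distance from the UV centre (0.5, 0.5): positions inside
    # the iris yield 1.0 (white), the sclera yields 0.0 (black), and
    # the limbus band transitions smoothly between them.
    du, dv = uv[0] - 0.5, uv[1] - 0.5
    length = (du * du + dv * dv) ** 0.5
    r = (length - (iris_scale - limbus_uv_width)) / limbus_uv_width
    return smoothstep(0.0, 1.0, saturate(1.0 - r))
```

With an iris size of 0.2 and a limbus width of 0.04, the UV centre maps to 1.0, a point well outside the iris maps to 0.0, and a point in the middle of the limbus band maps to roughly 0.5, matching the white/black/gray description above.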
Specifically, the iris size parameter and the structural layer size parameter may be in a preset ratio, so that after the iris size parameter is determined by the terminal, the structural layer size parameter of at least one other eyeball structural layer is determined based on the preset ratio.
Or, the terminal presets the size parameters of the structural layers of other eyeball structural layers so as to directly process the eyeball mask based on the preset size parameters of the structural layers.
It will be appreciated that, since the limbus is the portion connecting the iris and the sclera, the limbus structural layer size parameter is proportional to the iris size parameter. A preset duty cycle parameter may be set; after the iris size parameter is determined, the product of the iris size parameter and the preset duty cycle parameter is calculated as the limbus structural layer size parameter, and the preset duty cycle parameter determines the limbus blurring ratio. For example, as shown in fig. 7, a graphical comparison between the limbus and the iris layer at preset duty cycle parameters of 0.1, 0.2 and 0.5, respectively, is presented.
Illustratively, the code for determining structural layer size parameters of the limbus described above is as follows:
float2 limbus_uv_width=float2(u_iris_scale*0.2,u_iris_scale*0.5)
wherein, limbus_uv_width is used to indicate the structural layer size parameter of the limbus, and u_iris_scale is used to indicate the iris size.
S3042, fusing the iris texture map and the texture maps of the other eyeball structural layers based on the eyeball mask to obtain a fused target texture map.
In this embodiment, since the eyeball mask is used to indicate texture mask information corresponding to at least one layer of eyeball structure of the eyeball model, based on the eyeball mask and the texture map corresponding to the eyeball structure of the eyeball model, the texture maps corresponding to different eyeball structures can be fused to obtain the fused target texture map.
In some embodiments, when fusing the texture maps, the texture map corresponding to at least one layer of eyeball structure needs to be adjusted to achieve the expected rendering effect. Therefore, before fusing the iris texture map and the texture maps of the other eyeball structural layers based on the eyeball mask to obtain the fused target texture map, the method further includes: acquiring eyeball setting parameters corresponding to the other eyeball structural layers; and generating texture maps of the other eyeball structural layers based on the eyeball setting parameters to obtain the structure texture maps of the other eyeball structural layers. The eyeball setting parameter may be a parameter for adjusting the texture map of an eyeball structural layer in the initial texture map, or a parameter for regenerating a new texture map of the eyeball structural layer.
Correspondingly, after generating the texture map of at least one other eyeball structure layer, the fusing the iris texture map and the texture map of the other eyeball structure layer based on the eyeball mask to obtain a fused target texture map may include: and fusing the iris texture map and the structure texture map based on the eyeball mask to obtain a fused target texture map.
In some embodiments, the eyeball setting parameter includes a sclera scaling instruction parameter, and the generating the texture map of the other eyeball structural layer based on the eyeball setting parameter to obtain the texture map of the other eyeball structural layer includes: and scaling the texture map of the sclera layer in the initial texture map based on the sclera scaling instruction parameter to generate a structure texture map of the sclera layer.
It will be appreciated that, as shown in fig. 8, the center of the texture map of the sclera layer is white and the peripheral edge region contains red veins; the amount of visible red veins is adjusted by scaling the texture map of the sclera layer.
Correspondingly, the fusing the iris texture map and the structure texture map based on the eyeball mask to obtain a fused target texture map comprises the following steps: and fusing the iris texture map and the scleral structure texture map based on the eyeball mask to obtain a fused target texture map.
Illustratively, the code for determining the fused target texture map is as follows:
half3 base_color=lerp(sclera_color,iris_color,iris_mask)
Wherein sclera_color is used to indicate the structure texture map of the sclera layer, iris_color is used to indicate the iris texture map, iris_mask is used to indicate the eyeball mask distinguishing the iris layer from the sclera layer, and base_color is used to indicate the fused target texture map.
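The per-channel blend performed by the lerp above can be sketched as follows (the colour values in the usage are illustrative assumptions, not from the patent):

```python
def lerp_color(sclera_color, iris_color, mask):
    # base_color = lerp(sclera_color, iris_color, iris_mask):
    # mask = 0 keeps the sclera colour, mask = 1 keeps the iris
    # colour, and intermediate mask values blend across the limbus.
    return tuple(s + (i - s) * mask
                 for s, i in zip(sclera_color, iris_color))
```

For example, with an off-white sclera (0.95, 0.93, 0.92) and a blue iris (0.2, 0.4, 0.6), a mask of 0.0 returns the sclera colour, a mask of 1.0 returns the iris colour, and a mask of 0.5 returns the midpoint, which is how the gray limbus band of the mask produces a soft colour transition.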
In some embodiments, the eyeball setting parameters include a limbal size and a limbal color intensity, and the generating the texture map of the other eyeball structural layer based on the eyeball setting parameters to obtain a structural texture map of the other eyeball structural layer includes: and generating the limbal texture information in the iris texture map based on the limbal size and the limbal color intensity, thereby obtaining a limbal texture map.
It will be appreciated that the limbus is located at the edge of the iris layer, connecting the iris layer and the sclera layer. Therefore, after the limbus size and limbus color intensity are set, a structure texture map of the limbus is generated based on the iris size and position in the iris texture map together with the limbus size and limbus color intensity. The limbus effect is represented by displaying the limbus, for example as black, and the blurring and darkening effect of the limbus is highlighted by this black-limbus manner.
Wherein the terminal may set the limbal size and the limbal color intensity based on the camera direction vector and the ray direction of the ambient light.
As shown in fig. 9, the limbus size indicates the width of the limbal black border in fig. 9, and the limbus color intensity indicates the intensity of the limbal black border. It can be seen from fig. 9 that the limbus is visible when the width of the limbal black border is 2 and absent when the width is 0, and that the color at an intensity of 1 is lighter than the color at an intensity of 5.
Correspondingly, the fusing the iris texture map and the structure texture map based on the eyeball mask to obtain a fused target texture map comprises the following steps: and fusing the iris texture map, the scleral layer texture map and the limbus texture map based on the eyeball mask to obtain a fused target texture map.
For example, if the eyeball mask is the same as the iris depth map in fig. 5, the region corresponding to the white region in the eyeball mask in the iris texture map is to be displayed, the region corresponding to the black region in the eyeball mask in the structure texture map of the sclera layer is to be displayed, and the region corresponding to the gray region in the eyeball mask in the structure texture map of the limbus is to be displayed.
In some embodiments, the eyeball setting parameter includes a pupil scaling indication parameter, and generating the texture map of the other eyeball structural layer based on the eyeball setting parameter includes: scaling the texture map of the pupil layer in the initial texture map based on the pupil scaling indication parameter to generate a structure texture map of the pupil layer; or directly generating a texture map of the pupil layer based on the pupil scaling indication parameter to obtain the structure texture map of the pupil layer; or, since the pupil layer is located at the center of the iris layer, determining the pupil layer position directly in the iris texture map and then scaling the pupil layer texture within the iris texture map to generate the structure texture map of the pupil layer, or generating an iris texture map that includes the pupil layer.
Correspondingly, the fusing the iris texture map and the structure texture map based on the eyeball mask to obtain a fused target texture map comprises the following steps: and fusing the iris texture map and the structure texture map of the pupil layer based on the eyeball mask to obtain a fused target texture map.
And S3043, rendering the eyeball model based on the target texture map to obtain a rendered eyeball model.
In this embodiment, after the fused target texture map is obtained, the eyeball model needs to be rendered based on the fused target texture map, so as to obtain a relatively real rendered eyeball model.
In some embodiments, since rendering the eyeball model also requires rendering based on normals, rendering the eyeball model based on the target texture map may include: the terminal may obtain an iris normal map of the iris layer and a vertex normal map of the eyeball model, where the iris normal map is used to represent the concave effect of the iris in the eyeball model, so that the concave structure of the iris is simulated based on the iris normal map. Then, based on the eyeball mask, the iris normal map and the vertex normal map are fused to obtain a fused target normal map. Finally, the eyeball model is rendered based on the target texture map and the target normal map to obtain the rendered eyeball model. By simulating the refraction and scattering of light within the eyeball more realistically through textures and normals, the realism and fineness of the rendering can be improved.
The vertex normal map of the eyeball model is used to represent the outward-bulging effect of the eyeball model, that is, an outward normal is constructed for each vertex of the eyeball model.
Illustratively, the code for obtaining the fused target normal map is as follows:
bottom_world_normal=lerp(world_normal.xyz,bottom_world_normal,iris_mask)
Wherein world_normal.xyz is used to indicate the vertex normal map of the eyeball model, bottom_world_normal is used to indicate the iris normal map, and iris_mask is used to indicate the eyeball mask distinguishing the iris layer from the sclera layer. In this way, the concave effect of the iris layer is represented by the iris normal map and the outward-bulging effect of the eyeball model is represented through the sclera layer, and only the fused target normal map is finally used for the illumination rendering of the eyeball model; the display effect of the rendered eyeball model is shown in fig. 10.
Correspondingly, based on the example in fig. 10, a target texture map is further introduced to perform texture rendering on the eyeball, so that a display effect of the eyeball model after rendering is shown in fig. 11.
In some embodiments, in order to simulate the wet feeling of the eyeball and the lacrimal gland so that the surface of the eyeball appears more realistic, a wetting normal map may be introduced in this embodiment, as shown in fig. 12. That is, fusing the iris normal map and the vertex normal map based on the eyeball mask to obtain the fused target normal map may include: the terminal obtains a wetting normal map, where the wetting normal map is used to represent the wetting effect of at least part of the other eyeball structural layers; the terminal fuses the wetting normal map with the vertex normal map to obtain a target vertex normal map with a wetting effect; and finally, the terminal fuses the iris normal map and the target vertex normal map based on the eyeball mask to obtain the fused target normal map. The eyeball model is rendered using the target normal map fused from the target vertex normal map, and the display effect of the rendered eyeball model is shown in fig. 13.
It is understood that, since the region of the eyeball where the moisturizing effect is required is an outer region of the eyeball, such as a scleral layer, at least a portion of the other eyeball structural layer may be the scleral layer.
In some embodiments, in order to better enhance the volume sense of the eyeball model, shadows cast to the eyeball by eyelid can be added to the eyeball model in the embodiment.
Specifically, before the eyeball model is rendered based on the target texture map to obtain a rendered eyeball model, the method may further include: the terminal acquires a shadow position indication parameter and a shadow boundary indication parameter, wherein the shadow position indication parameter is used for indicating the position of a shadow cast by an eyelid on the eyeball model, and the shadow boundary indication parameter is used for representing the edge transition effect of the shadow cast by the eyelid on the eyeball model. Then, a shadow mask of the eye model is generated based on the shadow position indication parameter and the shadow boundary indication parameter, wherein the shadow mask is used for indicating mask information corresponding to shadows at different positions of the eye model. And finally, acquiring shadow texture information, and fusing the target texture mapping and the shadow texture information based on the shadow mask to obtain the target texture mapping with a shadow effect.
It will be appreciated that the shadow mask is essentially a set of concentric circles with a black center and a white surround: the black portion is the shadow region, where the shadow texture information is superimposed, while the target texture map is retained in the white region.
Wherein the shadow position indication parameters include, but are not limited to, an upper shadow rotation parameter for adjusting an x-axis of the shadow uv and an upper shadow position for adjusting a y-axis of the shadow uv.
The shadow boundary indication parameters include, but are not limited to, a shadow boundary curvature, a shadow boundary maximum value, and a shadow boundary minimum value, wherein the shadow boundary curvature is used for adjusting the concentric circle diameter size, and the shadow boundary maximum value and the shadow boundary minimum value are used for making a blurring transition on the adjusted boundary.
Illustratively, the code for obtaining a target texture map with shadow effect is as follows:
float2 central_uv=uv0*2.0-1.0;
float2 shadow_uv=float2(central_uv.x+u_shadow_top_boundary_rotation,central_uv.y-u_shadow_top_boundary_offset);
float length=saturate(distance(shadow_uv,0.0)-u_shadow_boundary_curvature);
float shadow_alpha=smoothstep(u_shadow_softness_min,u_shadow_softness_max,length);
base_color=lerp(base_color,base_color*u_fake_shadow_color.rgb,shadow_alpha)
Wherein uv0 is used to indicate the initial texture map, u_shadow_top_boundary_rotation is used to indicate the upper shadow rotation parameter, u_shadow_top_boundary_offset is used to indicate the upper shadow position, u_shadow_boundary_curvature is used to indicate the shadow boundary curvature, u_shadow_softness_min is used to indicate the shadow boundary minimum value, and u_shadow_softness_max is used to indicate the shadow boundary maximum value.
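The shadow computation above can be sketched in Python (the HLSL intrinsics are re-implemented below; the parameter values used in the usage are illustrative assumptions):

```python
def saturate(x):
    return max(0.0, min(1.0, x))

def smoothstep(e0, e1, x):
    t = saturate((x - e0) / (e1 - e0))
    return t * t * (3.0 - 2.0 * t)

def eyelid_shadow_alpha(uv0, rotation, offset, curvature,
                        soft_min, soft_max):
    # Remap UV to [-1, 1], shift it by the shadow position
    # parameters, take the radial distance minus the boundary
    # curvature, and soften the edge between the two boundary values.
    cx, cy = uv0[0] * 2.0 - 1.0, uv0[1] * 2.0 - 1.0
    sx, sy = cx + rotation, cy - offset
    length = saturate((sx * sx + sy * sy) ** 0.5 - curvature)
    return smoothstep(soft_min, soft_max, length)
```

For example, with no rotation or offset, a curvature of 0.3 and boundary values of 0.1 and 0.5, the UV centre yields 0.0 (fully shadowed toward the base colour blend of 0) and the UV edge yields 1.0, so the shadow colour is applied only toward the rim, with a smooth transition between.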
Illustratively, as shown in fig. 14, the left area in fig. 14 is a schematic view of the rendering effect of the model eye with shadows, and the right area in fig. 14 is a schematic view of the rendering effect of the model eye without shadows.
In some embodiments, after the iris texture map is obtained, the distance between the iris and the cornea may be calculated, and a caustic effect may be computed based on that distance, so that light refracted through the cornea is projected onto the iris as a semicircular light spot, improving the rendering effect of the eyeball. As shown in fig. 15, the left area of fig. 15 is a schematic view of the rendering effect of an eyeball model without caustics, and the right area of fig. 15 is a schematic view of the rendering effect of an eyeball model with caustics.
In optics, a caustic is formed when light refracted or reflected by a transparent object is concentrated onto a point or line, producing a bright pattern; in computer graphics, caustics are used to simulate phenomena such as the light patterns formed by light reflected from a water surface.
Specifically, the code for determining the distance between the iris and the cornea is as follows:
float2 center_refracted_uv=refracted_uv-0.5;
half uv_distance=distance(center_refracted_uv,0.0)/u_iris_scale;
half iris_distance=pow(uv_distance*u_iris_concavity_scale,u_iris_concavity_power)
Wherein refracted_uv is used to indicate the iris texture map, u_iris_scale is used to indicate the iris size, u_iris_concavity_scale is used to indicate the iris concavity range, u_iris_concavity_power is used to indicate the iris concavity strength (both are parameters for softening the concentric-circle boundary), and iris_distance is used to indicate the distance between the iris and the cornea.
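The distance term can be sketched as a direct Python analogue of the snippet above (function name and sample values are this author's assumptions):

```python
def iris_cornea_distance(refracted_uv, iris_scale,
                         concavity_scale, concavity_power):
    # Normalized radial distance of the refracted UV from the iris
    # centre, shaped by the concavity range and strength parameters;
    # the result drives the caustic highlight on the iris.
    cx, cy = refracted_uv[0] - 0.5, refracted_uv[1] - 0.5
    uv_distance = (cx * cx + cy * cy) ** 0.5 / iris_scale
    return (uv_distance * concavity_scale) ** concavity_power
```

At the iris centre the distance is 0, and it grows toward the iris rim; raising concavity_power sharpens the falloff, which is consistent with its role as a boundary-softening parameter.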
From the above, it can be seen that an eyeball model to be rendered and an initial texture map corresponding to the eyeball model are obtained, where the initial texture map includes texture maps of a multi-layer eyeball structure of the eyeball model and the multi-layer eyeball structure includes an iris layer, so that the texture map of the eyeball structure of the layer to be adjusted can be adjusted in a targeted manner. Specifically, a camera direction vector for shooting the eyeball model and iris parameters of the eyeball model are obtained, and a target refraction direction of ambient light under the iris parameters of the eyeball model is determined based on the camera direction vector and the iris parameters. Then, offset processing is performed on the texture map of the iris layer based on the target refraction direction to obtain the refraction-offset iris texture map. Finally, the eyeball model is rendered based on the iris texture map and the texture maps of the other eyeball structural layers except the iris layer in the initial texture map to obtain the rendered eyeball model. In this way, the iris layer texture map to be adjusted in the initial texture map is adjusted based on parameters rather than redrawn, and rendering proceeds from the adjusted iris texture map, which avoids the time spent redrawing texture maps and greatly improves eyeball rendering efficiency.
In order to better implement the above method, the embodiment of the application also provides an eyeball rendering device, which may be specifically integrated in an electronic device, for example, a computer device, where the computer device may be a terminal, a server, or other devices.
The terminal can be a mobile phone, a tablet personal computer, an intelligent Bluetooth device, a notebook computer, a personal computer and other devices; the server may be a single server or a server cluster composed of a plurality of servers.
For example, in this embodiment, the method of the embodiment of the present application will be described in detail by taking as an example an eyeball rendering device specifically integrated in a terminal. This embodiment provides an eyeball rendering device, as shown in fig. 16, which may include:
an obtaining module 1601, configured to obtain an eye model to be rendered, and an initial texture map corresponding to the eye model, where the initial texture map includes a texture map of a multi-layer eye structure of the eye model, and the multi-layer eye structure includes an iris layer;
a direction determining module 1602, configured to obtain a camera direction vector for capturing the eyeball model and an iris parameter of the eyeball model, and determine a target refraction direction of ambient light under the iris parameter of the eyeball model based on the camera direction vector and the iris parameter;
The offset module 1603 is configured to perform offset processing on the texture map of the iris layer based on the target refraction direction, so as to obtain an iris texture map after refraction offset;
the rendering module 1604 is configured to render the eyeball model based on the iris texture map and the texture maps of other eyeball structural layers in the initial texture map except for the iris layer, so as to obtain a rendered eyeball model.
In some embodiments, the rendering module 1604 is specifically configured to:
determining an eyeball mask of the eyeball model, wherein the eyeball mask is used for indicating texture mask information corresponding to at least one layer of eyeball structure of the eyeball model;
fusing the iris texture map and the texture maps of the other eyeball structural layers based on the eyeball mask to obtain a fused target texture map;
and rendering the eyeball model based on the target texture map to obtain a rendered eyeball model.
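The mask-based fusion described above amounts to a per-texel linear blend between layers. Below is a minimal Python sketch of that idea; the function names and the convention that a mask value of 1.0 selects the iris layer are illustrative assumptions, not details taken from the patent.

```python
def lerp(a, b, t):
    """Linear interpolation between scalars a and b by factor t."""
    return a + (b - a) * t

def fuse_textures(iris_px, other_px, mask_val):
    """Blend one RGB texel of the iris texture map with the corresponding
    texel of another eyeball-structure texture, weighted by the mask.
    mask_val = 1.0 selects the iris layer, 0.0 the other layer."""
    return tuple(lerp(o, i, mask_val) for o, i in zip(other_px, iris_px))

# Example: a texel halfway inside the feathered edge of the iris mask
blended = fuse_textures((0.2, 0.4, 0.6), (0.9, 0.9, 0.9), 0.5)
```

The same blend, applied texel by texel with the per-position mask value, yields the fused target texture map.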
In some embodiments, the rendering module 1604 is specifically configured to:
acquiring preset iris size parameters and structural layer size parameters of the other eyeball structural layers;
determining texture mask information of at least one layer of eyeball structure on the initial texture map based on the iris size parameter, the structure layer size parameter and the relative position relationship between the iris layer and the other eyeball structure layers;
and generating an eyeball mask of the eyeball model based on the texture mask information of at least one layer of eyeball structure on the initial texture map.
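One plausible way to derive such a mask from an iris size parameter and the layers' relative positions is a radial falloff in UV space. The sketch below assumes a circular iris centred in the texture with a narrow feathered edge; the function name and all parameter values are hypothetical, not taken from the patent.

```python
import math

def eyeball_mask(u, v, iris_radius=0.22, center=(0.5, 0.5), feather=0.02):
    """Mask weight at UV coordinate (u, v): 1.0 inside the iris disc,
    0.0 on the sclera, with a smoothstep-style transition band of
    width 2 * feather around the iris radius."""
    d = math.hypot(u - center[0], v - center[1])
    t = (d - (iris_radius - feather)) / (2.0 * feather)
    t = min(max(t, 0.0), 1.0)
    # smoothstep falloff: 1 inside, 0 outside, smooth in between
    return 1.0 - (t * t * (3.0 - 2.0 * t))
```

Masks for further structural layers (limbus, sclera) could be built the same way from their size parameters and offsets relative to the iris.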
In some embodiments, the rendering module 1604 is specifically configured to:
acquiring an iris normal map of the iris layer and a vertex normal map of the eyeball model, wherein the iris normal map is used for representing the concave effect of the iris in the eyeball model;
fusing the iris normal map and the vertex normal map based on the eyeball mask to obtain a fused target normal map;
and rendering the eyeball model based on the target texture map and the target normal map to obtain a rendered eyeball model.
In some embodiments, the rendering module 1604 is specifically configured to:
acquiring a wetting normal map, wherein the wetting normal map is used for expressing the wetting effect of at least part of the other eyeball structural layers;
fusing normals at the same position in the wetting normal map and the vertex normal map to obtain a target vertex normal map with a wetting effect;
and fusing the iris normal map and the target vertex normal map based on the eyeball mask to obtain the fused target normal map.
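Fusing normals "at the same position" can be approximated by a normalized weighted sum of the two normal vectors. The patent does not specify the blend operator, so the sketch below is only one common approximation, with hypothetical names.

```python
import math

def normalize(v):
    """Return the unit-length version of 3-vector v."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def blend_normals(vertex_n, wet_n, strength=0.5):
    """Fuse a wetting-detail normal into the vertex normal at one texel
    by a normalized weighted sum; strength scales the detail influence."""
    return normalize(tuple(a + strength * b for a, b in zip(vertex_n, wet_n)))

# Example: a detail normal tilted fully toward +X bends the vertex normal
wet = blend_normals((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), strength=1.0)
```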
In some embodiments, the eyeball rendering device further includes a map generation module, where the map generation module is specifically configured to:
acquiring eyeball setting parameters corresponding to the other eyeball structural layers;
generating texture maps of the other eyeball structural layers based on the eyeball setting parameters to obtain the structural texture maps of the other eyeball structural layers;
the rendering module 1604 is specifically configured to:
and fusing the iris texture map and the structure texture map based on the eyeball mask to obtain a fused target texture map.
In some embodiments, the eyeball setting parameter includes a sclera scaling indication parameter, and the map generation module is specifically configured to:
scaling the texture map of the sclera layer in the initial texture map based on the sclera scaling indication parameter to generate a structure texture map of the sclera layer;
the rendering module 1604 is specifically configured to:
and fusing the iris texture map and the scleral structure texture map based on the eyeball mask to obtain a fused target texture map.
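Scaling a layer's texture is typically implemented by rescaling the UV coordinates about the texture centre rather than resampling the image itself. A sketch under that assumption (function name and centre convention are illustrative, not the claimed implementation):

```python
def scale_sclera_uv(u, v, scale, center=(0.5, 0.5)):
    """Rescale a UV coordinate about the texture centre; sampling the
    sclera-layer texture with the returned coordinates enlarges its
    visible content by the given scale factor."""
    return (center[0] + (u - center[0]) / scale,
            center[1] + (v - center[1]) / scale)

# Example: doubling the sclera scale pulls a sample at u=0.7 toward the centre
u2, v2 = scale_sclera_uv(0.7, 0.5, 2.0)
```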
In some embodiments, the eyeball setting parameters include limbal size and limbal color intensity, and the map generation module is specifically configured to:
generating texture information of the limbus in the iris texture map based on the limbus size and the limbus color intensity to obtain a structural texture map of the limbus;
the rendering module 1604 is specifically configured to:
and fusing the iris texture map, the scleral layer texture map and the limbus texture map based on the eyeball mask to obtain a fused target texture map.
In some embodiments, the eyeball rendering device further includes an information fusion module, where the information fusion module is specifically configured to:
acquiring a shadow position indication parameter and a shadow boundary indication parameter, wherein the shadow position indication parameter is used for indicating the position of a shadow cast by an eyelid on the eyeball model, and the shadow boundary indication parameter is used for representing the edge transition effect of the shadow cast by the eyelid on the eyeball model;
generating a shadow mask of the eyeball model based on the shadow position indication parameter and the shadow boundary indication parameter, wherein the shadow mask is used for indicating mask information corresponding to shadows at different positions of the eyeball model;
and acquiring shadow texture information, and merging the target texture map and the shadow texture information based on the shadow mask to obtain the target texture map with a shadow effect.
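A shadow mask with a soft boundary can be modelled as a smoothstep falloff controlled by the two indication parameters. The sketch below assumes the eyelid shadow falls from the top of the eyeball and uses placeholder parameter values; it is one plausible construction, not the claimed formula.

```python
def shadow_mask(y, shadow_pos=0.8, boundary_soft=0.1):
    """Shadow weight at normalized vertical position y on the eyeball
    (0 = bottom, 1 = top): full shadow above shadow_pos, fading out
    across a band of width boundary_soft (smoothstep edge transition)."""
    t = (y - (shadow_pos - boundary_soft)) / boundary_soft
    t = min(max(t, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)
```

The resulting weight would then blend the shadow texture information into the target texture map, darkest where the mask is 1.0.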
In some embodiments, the direction determining module 1602 is specifically configured to:
determining an initial refraction direction of the ambient light in the eyeball model;
and correcting the initial refraction direction based on the camera direction vector and the iris parameter to obtain a target refraction direction of the ambient light under the iris parameter of the eyeball model.
In some embodiments, the direction determining module 1602 is specifically configured to:
acquiring an eyeball refractive index, an air refractive index and a vertex normal of the eyeball model;
and processing the camera direction vector, the eyeball refractive index, the air refractive index and the vertex normal based on a preset refraction calculation rule to obtain an initial refraction direction of the ambient light in the eyeball model.
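A "preset refraction calculation rule" combining the camera direction vector, the two refractive indices and the vertex normal is conventionally Snell's law, in the form of the GLSL refract() built-in. The sketch below follows that convention; the eye refractive index of 1.376 (human cornea) is an illustrative value, not one stated in the patent.

```python
import math

def refract(incident, normal, eta_air=1.0, eta_eye=1.376):
    """Snell's-law refraction in the GLSL refract() convention:
    'incident' is a unit vector pointing toward the surface and 'normal'
    a unit surface normal facing the incoming ray. Returns None on
    total internal reflection."""
    eta = eta_air / eta_eye
    cos_i = -sum(i * n for i, n in zip(incident, normal))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    f = eta * cos_i - math.sqrt(k)
    return tuple(eta * i + f * n for i, n in zip(incident, normal))

# Head-on ray passes straight through; an oblique ray bends toward the normal
r = refract((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))
r2 = refract((0.6, 0.0, -0.8), (0.0, 0.0, 1.0))
```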
In some embodiments, the camera direction vector is used to indicate the direction from the screen pixel to the camera that shoots the eyeball model; the iris parameters include an iris depth and an iris normal map, where the iris depth is used to indicate the distance between the iris layer and the cornea layer in the eyeball model;
the direction determining module 1602 is specifically configured to:
performing dot product processing on the camera direction vector and the iris normal map to obtain a direction indication parameter corresponding to the iris normal at each position in the iris normal map, wherein the direction indication parameter is used for indicating the relative orientation between the iris normal and the camera direction vector;
and correcting the initial refraction direction based on the iris depth at each position and the direction indication parameter corresponding to each position in the iris normal map, so as to determine the target refraction direction of the ambient light at each position in the texture map of the iris layer.
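One plausible reading of this correction step, sketched below with hypothetical names: the dot product of the camera direction and the iris normal gives the direction indication parameter, and dividing the iris depth by that view-angle-dependent value scales how far the refracted ray travels, and hence how far the sampled texel shifts. This is an interpretation for illustration, not the claimed formula.

```python
def direction_indicator(view_dir, iris_normal):
    """Dot product of the camera direction vector and the iris normal:
    the 'direction indication parameter' of the text."""
    return sum(a * b for a, b in zip(view_dir, iris_normal))

def corrected_offset_scale(iris_depth, indicator, eps=1e-4):
    """Scale the refraction offset: a deeper iris point, or a grazing
    view angle (small |indicator|), shifts the sampled texel further.
    eps guards against division by zero at exactly grazing angles."""
    return iris_depth / max(abs(indicator), eps)
```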
In some embodiments, the offset module 1603 is specifically configured to:
acquiring an eyeball forward vector, and determining the offset of the texture map of the iris layer based on the eyeball forward vector, the iris normal map and the target refraction direction;
and performing offset processing on the texture map of the iris layer based on the offset of the texture map of the iris layer to obtain the iris texture map after refraction offset.
In some embodiments, the offset includes an offset in a tangential direction, and the offset module 1603 is specifically configured to:
obtaining a tangent vector corresponding to the iris normal map based on the eyeball forward vector and the iris normal map, wherein the tangent vector is a vector perpendicular to the iris normal;
and performing dot product processing on the tangent vector corresponding to the iris normal map and the target refraction direction to obtain the offset of the texture map of the iris layer in the tangent direction.
In some embodiments, the offset further includes an offset in a bitangent direction, and the offset module 1603 is specifically configured to:
performing cross product processing on the tangent vector corresponding to the iris normal map and the iris normal to obtain a bitangent vector corresponding to the iris normal map, wherein the bitangent vector is a vector perpendicular to both the iris normal and the tangent vector;
and performing dot product processing on the bitangent vector corresponding to the iris normal map and the target refraction direction to obtain the offset of the texture map of the iris layer in the bitangent direction.
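Put together, the two offsets are the projections of the target refraction direction onto a tangent vector and a bitangent (the "secondary tangent" above) built from the iris normal. A minimal sketch with the vector helpers written out in plain Python; the sign/handedness of the bitangent is a convention choice, not specified by the patent.

```python
def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def uv_offset(refract_dir, tangent, iris_normal):
    """Project the target refraction direction onto the tangent and the
    bitangent (cross of tangent and normal) to get the UV-space offset
    of the iris texture map in the two tangent-frame directions."""
    bitangent = cross(tangent, iris_normal)  # handedness is a convention
    return dot(refract_dir, tangent), dot(refract_dir, bitangent)

# Example frame: iris normal +Z and tangent +X give bitangent -Y
du, dv = uv_offset((0.3, -0.2, -0.9), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```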
As can be seen from the above, the eyeball rendering device of the present embodiment obtains an eyeball model to be rendered and an initial texture map corresponding to the eyeball model, where the initial texture map includes texture maps of a multi-layer eyeball structure of the eyeball model, and the multi-layer eyeball structure includes an iris layer, so that the texture map of the eyeball structure layer to be adjusted can be modified in a targeted manner. Specifically, the device obtains a camera direction vector for shooting the eyeball model and iris parameters of the eyeball model, and determines a target refraction direction of ambient light under the iris parameters based on the camera direction vector and the iris parameters. Then, the device performs offset processing on the texture map of the iris layer based on the target refraction direction to obtain a refraction-offset iris texture map. Finally, the device renders the eyeball model based on the iris texture map and the texture maps of the other eyeball structural layers in the initial texture map except the iris layer, yielding the rendered eyeball model. Because the iris-layer texture map in the initial texture map is adjusted parametrically and the adjusted map is used directly for rendering, the time otherwise spent redrawing the texture map is avoided, and eyeball rendering efficiency is greatly improved.
Correspondingly, the embodiment of the application also provides an electronic device, which may be a terminal, such as a smart phone, a tablet personal computer, a notebook computer, a touch screen, a game machine, a personal computer (PC, Personal Computer), or a personal digital assistant (PDA, Personal Digital Assistant). As shown in fig. 17, fig. 17 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 1700 includes a processor 1701 having one or more processing cores, a memory 1702 having one or more computer-readable storage media, and a computer program stored on the memory 1702 and executable on the processor. The processor 1701 is electrically connected to the memory 1702. It will be appreciated by those skilled in the art that the electronic device structure shown in the figures does not limit the electronic device, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The processor 1701 is a control center of the electronic device 1700, connects various portions of the entire electronic device 1700 using various interfaces and lines, and performs various functions of the electronic device 1700 and processes data by running or loading software programs and/or modules stored in the memory 1702 and invoking data stored in the memory 1702, thereby performing overall monitoring of the electronic device 1700.
In the embodiment of the present application, the processor 1701 in the electronic device 1700 loads the instructions corresponding to the processes of one or more application programs into the memory 1702, and the processor 1701 executes the application programs stored in the memory 1702, so as to implement various functions as follows:
acquiring an eyeball model to be rendered and an initial texture map corresponding to the eyeball model, wherein the initial texture map comprises texture maps of a multi-layer eyeball structure of the eyeball model, and the multi-layer eyeball structure comprises an iris layer;
acquiring a camera direction vector for shooting the eyeball model and iris parameters of the eyeball model, and determining a target refraction direction of ambient light under the iris parameters of the eyeball model based on the camera direction vector and the iris parameters;
performing offset processing on the texture map of the iris layer based on the target refraction direction to obtain an iris texture map after refraction offset;
and rendering the eyeball model based on the iris texture map and the texture maps of the other eyeball structural layers except the iris layer in the initial texture map to obtain the rendered eyeball model.
In some embodiments, the rendering the eyeball model based on the iris texture map and the texture maps of other eyeball structural layers except the iris layer in the initial texture map to obtain a rendered eyeball model includes:
determining an eyeball mask of the eyeball model, wherein the eyeball mask is used for indicating texture mask information corresponding to at least one layer of eyeball structure of the eyeball model;
fusing the iris texture map and the texture maps of the other eyeball structural layers based on the eyeball mask to obtain a fused target texture map;
and rendering the eyeball model based on the target texture map to obtain a rendered eyeball model.
In some embodiments, determining an eyeball mask of the above eyeball model includes:
acquiring preset iris size parameters and structural layer size parameters of the other eyeball structural layers;
determining texture mask information of at least one layer of eyeball structure on the initial texture map based on the iris size parameter, the structure layer size parameter and the relative position relationship between the iris layer and the other eyeball structure layers;
and generating an eyeball mask of the eyeball model based on the texture mask information of at least one layer of eyeball structure on the initial texture map.
In some embodiments, the rendering the eyeball model based on the target texture map to obtain a rendered eyeball model includes:
acquiring an iris normal map of the iris layer and a vertex normal map of the eyeball model, wherein the iris normal map is used for representing the concave effect of the iris in the eyeball model;
fusing the iris normal map and the vertex normal map based on the eyeball mask to obtain a fused target normal map;
and rendering the eyeball model based on the target texture map and the target normal map to obtain a rendered eyeball model.
In some embodiments, fusing the iris normal map and the vertex normal map based on the eyeball mask to obtain a fused target normal map includes:
acquiring a wetting normal map, wherein the wetting normal map is used for expressing the wetting effect of at least part of the other eyeball structural layers;
fusing normals at the same position in the wetting normal map and the vertex normal map to obtain a target vertex normal map with a wetting effect;
and fusing the iris normal map and the target vertex normal map based on the eyeball mask to obtain the fused target normal map.
In some embodiments, before fusing the iris texture map and the texture maps of the other eyeball structural layers based on the eyeball mask to obtain a fused target texture map, the method further includes:
acquiring eyeball setting parameters corresponding to the other eyeball structural layers;
generating texture maps of the other eyeball structural layers based on the eyeball setting parameters to obtain the structural texture maps of the other eyeball structural layers;
the above-mentioned fusing the above-mentioned iris texture map and texture maps of the above-mentioned other eyeball structural layers based on the above-mentioned eyeball mask, get the goal texture map after fusion, including:
and fusing the iris texture map and the structure texture map based on the eyeball mask to obtain a fused target texture map.
In some embodiments, the eyeball setting parameter includes a sclera scaling indication parameter, and the generating the texture maps of the other eyeball structural layers based on the eyeball setting parameters to obtain the structure texture maps of the other eyeball structural layers includes:
scaling the texture map of the sclera layer in the initial texture map based on the sclera scaling indication parameter to generate a structure texture map of the sclera layer;
the above-mentioned fusing the above-mentioned iris texture map and above-mentioned structure texture map based on above-mentioned eyeball mask, get the goal texture map after fusing, including:
and fusing the iris texture map and the scleral structure texture map based on the eyeball mask to obtain a fused target texture map.
In some embodiments, the eyeball setting parameters include a limbal size and a limbal color intensity, and the generating the texture map of the other eyeball structural layer based on the eyeball setting parameters to obtain a structural texture map of the other eyeball structural layer includes:
generating texture information of the limbus in the iris texture map based on the limbus size and the limbus color intensity to obtain a structural texture map of the limbus;
the above-mentioned fusing the above-mentioned iris texture map and above-mentioned structure texture map based on above-mentioned eyeball mask, get the goal texture map after fusing, including:
and fusing the iris texture map, the scleral layer texture map and the limbus texture map based on the eyeball mask to obtain a fused target texture map.
In some embodiments, before rendering the eyeball model based on the target texture map to obtain a rendered eyeball model, the method further includes:
acquiring a shadow position indication parameter and a shadow boundary indication parameter, wherein the shadow position indication parameter is used for indicating the position of a shadow cast by an eyelid on the eyeball model, and the shadow boundary indication parameter is used for representing the edge transition effect of the shadow cast by the eyelid on the eyeball model;
generating a shadow mask of the eyeball model based on the shadow position indication parameter and the shadow boundary indication parameter, wherein the shadow mask is used for indicating mask information corresponding to shadows at different positions of the eyeball model;
and acquiring shadow texture information, and merging the target texture map and the shadow texture information based on the shadow mask to obtain the target texture map with a shadow effect.
In some embodiments, the determining the target refraction direction of the ambient light under the iris parameter of the eyeball model based on the camera direction vector and the iris parameter includes:
determining an initial refraction direction of the ambient light in the eyeball model;
and correcting the initial refraction direction based on the camera direction vector and the iris parameter to obtain a target refraction direction of the ambient light under the iris parameter of the eyeball model.
In some embodiments, the determining the initial refraction direction of the ambient light in the eyeball model includes:
acquiring an eyeball refractive index, an air refractive index and a vertex normal of the eyeball model;
and processing the camera direction vector, the eyeball refractive index, the air refractive index and the vertex normal based on a preset refraction calculation rule to obtain an initial refraction direction of the ambient light in the eyeball model.
In some embodiments, the camera direction vector is used to indicate the direction from the screen pixel to the camera that shoots the eyeball model; the iris parameters include an iris depth and an iris normal map, where the iris depth is used to indicate the distance between the iris layer and the cornea layer in the eyeball model;
the correcting the initial refractive direction based on the camera direction vector and the iris parameter to obtain a target refractive direction of the ambient light under the iris parameter of the eyeball model includes:
performing dot product processing on the camera direction vector and the iris normal map to obtain a direction indication parameter corresponding to the iris normal at each position in the iris normal map, wherein the direction indication parameter is used for indicating the relative orientation between the iris normal and the camera direction vector;
and correcting the initial refraction direction based on the iris depth at each position and the direction indication parameter corresponding to each position in the iris normal map, so as to determine the target refraction direction of the ambient light at each position in the texture map of the iris layer.
In some embodiments, the performing offset processing on the texture map of the iris layer based on the target refraction direction to obtain an iris texture map after refraction offset includes:
acquiring an eyeball forward vector, and determining the offset of the texture map of the iris layer based on the eyeball forward vector, the iris normal map and the target refraction direction;
and performing offset processing on the texture map of the iris layer based on the offset of the texture map of the iris layer to obtain the iris texture map after refraction offset.
In some embodiments, the offset includes an offset in a tangent direction, and the determining the offset of the texture map of the iris layer based on the eyeball forward vector, the iris normal map, and the target refraction direction includes:
obtaining a tangent vector corresponding to the iris normal map based on the eyeball forward vector and the iris normal map, wherein the tangent vector is a vector perpendicular to the iris normal;
and performing dot product processing on the tangent vector corresponding to the iris normal map and the target refraction direction to obtain the offset of the texture map of the iris layer in the tangent direction.
In some embodiments, the offset further includes an offset in a bitangent direction, and the determining the offset of the texture map of the iris layer based on the eyeball forward vector, the iris normal map, and the target refraction direction includes:
performing cross product processing on the tangent vector corresponding to the iris normal map and the iris normal to obtain a bitangent vector corresponding to the iris normal map, wherein the bitangent vector is a vector perpendicular to both the iris normal and the tangent vector;
and performing dot product processing on the bitangent vector corresponding to the iris normal map and the target refraction direction to obtain the offset of the texture map of the iris layer in the bitangent direction.
Thus, the electronic device 1700 provided in this embodiment may bring the following technical effects: the eyeball rendering efficiency is improved.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Optionally, as shown in fig. 17, the electronic device 1700 further includes: a touch display 1703, a radio frequency circuit 1704, an audio circuit 1705, an input unit 1706, and a power supply 1707. The processor 1701 is electrically connected to the touch display 1703, the radio frequency circuit 1704, the audio circuit 1705, the input unit 1706 and the power supply 1707, respectively. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 17 is not limiting of the electronic device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The touch display 1703 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display 1703 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as various graphical user interfaces of the electronic device, which may be composed of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an organic light-emitting diode (OLED, Organic Light-Emitting Diode), or the like. The touch panel may be used to collect touch operations by the user on or near it (such as operations performed on or near the touch panel using a finger, a stylus, or any other suitable object or accessory) and generate corresponding operation instructions that trigger the corresponding programs. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 1701; it can also receive commands from the processor 1701 and execute them. The touch panel may overlay the display panel; upon detecting a touch operation on or near it, the touch panel passes the operation to the processor 1701 to determine the type of touch event, and the processor 1701 then provides a corresponding visual output on the display panel according to the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display 1703 to implement input and output functions.
In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions; that is, the touch display 1703 may also implement an input function as part of the input unit 1706.
The radio frequency circuit 1704 may be configured to receive and transmit radio frequency signals, so as to establish wireless communication with a network device or other electronic devices and exchange signals with them.
The audio circuit 1705 may be used to provide an audio interface between the user and the electronic device through a speaker and a microphone. The audio circuit 1705 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; conversely, the microphone converts collected sound signals into electrical signals, which the audio circuit 1705 receives and converts into audio data; the audio data are then processed by the processor 1701 and sent, for example, to another electronic device via the radio frequency circuit 1704, or output to the memory 1702 for further processing. The audio circuit 1705 may also include an earbud jack to provide communication between peripheral headphones and the electronic device.
The input unit 1706 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
A power supply 1707 is used to power the various components of the electronic device 1700. Optionally, the power supply 1707 may be logically connected to the processor 1701 through a power management system, so as to implement functions of managing charging, discharging, and power consumption management through the power management system. The power supply 1707 may also include one or more of any components, such as a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown in fig. 17, the electronic device 1700 may also include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., and will not be described in detail herein.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, the embodiments of the present application provide a computer readable storage medium in which a plurality of computer programs are stored, the computer programs being capable of being loaded by a processor to perform any one of the eyeball rendering methods provided by the embodiments of the present application. For example, the computer program may perform the steps of:
acquiring an eyeball model to be rendered and an initial texture map corresponding to the eyeball model, wherein the initial texture map comprises texture maps of a multi-layer eyeball structure of the eyeball model, and the multi-layer eyeball structure comprises an iris layer;
acquiring a camera direction vector for shooting the eyeball model and iris parameters of the eyeball model, and determining a target refraction direction of ambient light under the iris parameters of the eyeball model based on the camera direction vector and the iris parameters;
performing offset processing on the texture map of the iris layer based on the target refraction direction to obtain an iris texture map after refraction offset;
and rendering the eyeball model based on the iris texture map and the texture maps of the other eyeball structural layers except the iris layer in the initial texture map to obtain the rendered eyeball model.
In some embodiments, the rendering the eyeball model based on the iris texture map and the texture maps of other eyeball structural layers except the iris layer in the initial texture map to obtain a rendered eyeball model includes:
determining an eyeball mask of the eyeball model, wherein the eyeball mask is used for indicating texture mask information corresponding to at least one layer of eyeball structure of the eyeball model;
fusing the iris texture map and the texture maps of the other eyeball structural layers based on the eyeball mask to obtain a fused target texture map;
and rendering the eyeball model based on the target texture map to obtain a rendered eyeball model.
In some embodiments, determining an eyeball mask of the above eyeball model includes:
acquiring preset iris size parameters and structural layer size parameters of the other eyeball structural layers;
determining texture mask information of at least one layer of eyeball structure on the initial texture map based on the iris size parameter, the structure layer size parameter and the relative position relationship between the iris layer and the other eyeball structure layers;
and generating an eyeball mask of the eyeball model based on the texture mask information of at least one layer of eyeball structure on the initial texture map.
In some embodiments, the rendering the eyeball model based on the target texture map to obtain a rendered eyeball model includes:
Acquiring an iris normal map of the iris layer and a vertex normal map of the eyeball model, wherein the iris normal map is used for representing the concave effect of the iris in the eyeball model;
based on the eyeball mask, fusing the iris normal map and the vertex normal map to obtain a fused target normal map;
and rendering the eyeball model based on the target texture map and the target normal map to obtain a rendered eyeball model.
In some embodiments, based on the eyeball mask, fusing the iris normal map and the vertex normal map to obtain a fused target normal map includes:
acquiring a wetting normal map, wherein the wetting normal map is used for representing the wet-surface effect of at least some of the other eyeball structural layers;
fusing normals at the same position in the wetting normal map and the vertex normal map to obtain a target vertex normal map with a wetting effect;
and fusing the iris normal map and the target vertex normal map based on the eyeball mask to obtain a fused target normal map.
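The patent does not name a formula for fusing two normals at the same position. One common choice, shown here purely as an assumed illustration, is a UDN-style blend: add the tangent-plane x/y perturbations, keep the base normal's z, and renormalise.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def blend_normals(base, detail):
    """UDN-style blend of two tangent-space normals (an assumption, not the
    patent's method): the detail map's x/y perturbation is added onto the
    base normal, the base z is kept, and the result is renormalised."""
    return normalize((base[0] + detail[0], base[1] + detail[1], base[2]))
```

Blending a flat detail normal leaves the base unchanged, which is the property one wants when the wetting map is neutral over most of the eye.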
In some embodiments, before fusing the iris texture map and the texture maps of the other eyeball structural layers based on the eyeball mask to obtain a fused target texture map, the method further includes:
Acquiring eyeball setting parameters corresponding to the other eyeball structural layers;
generating texture maps of the other eyeball structural layers based on the eyeball setting parameters to obtain the structural texture maps of the other eyeball structural layers;
the fusing the iris texture map and the texture maps of the other eyeball structural layers based on the eyeball mask to obtain a fused target texture map includes:
and fusing the iris texture map and the structure texture map based on the eyeball mask to obtain a fused target texture map.
In some embodiments, the eyeball setting parameters include a sclera scaling indication parameter, and the generating the texture maps of the other eyeball structural layers based on the eyeball setting parameters to obtain the structure texture maps of the other eyeball structural layers includes:
scaling the texture map of the sclera layer in the initial texture map based on the sclera scaling indication parameter to generate a structure texture map of the sclera layer;
the fusing the iris texture map and the structure texture map based on the eyeball mask to obtain a fused target texture map includes:
and fusing the iris texture map and the structure texture map of the sclera layer based on the eyeball mask to obtain a fused target texture map.
In some embodiments, the eyeball setting parameters include a limbal size and a limbal color intensity, and the generating the texture map of the other eyeball structural layer based on the eyeball setting parameters to obtain a structural texture map of the other eyeball structural layer includes:
generating texture information of the limbus in the iris texture map based on the limbus size and the limbus color intensity to obtain a structural texture map of the limbus;
the fusing the iris texture map and the structure texture map based on the eyeball mask to obtain a fused target texture map includes:
and fusing the iris texture map, the structure texture map of the sclera layer and the structure texture map of the limbus based on the eyeball mask to obtain a fused target texture map.
In some embodiments, before rendering the eyeball model based on the target texture map to obtain a rendered eyeball model, the method further includes:
A shadow position indication parameter and a shadow boundary indication parameter are obtained, wherein the shadow position indication parameter is used for indicating the position of a shadow cast by an eyelid on the eyeball model, and the shadow boundary indication parameter is used for representing the edge transition effect of the shadow cast by the eyelid on the eyeball model;
generating a shadow mask of the eyeball model based on the shadow position indication parameter and the shadow boundary indication parameter, wherein the shadow mask is used for indicating mask information corresponding to shadows at different positions of the eyeball model;
and acquiring shadow texture information, and fusing the target texture map and the shadow texture information based on the shadow mask to obtain a target texture map with a shadow effect.
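As an illustrative sketch (the patent gives no formula), the shadow position and shadow boundary parameters can be read as the center and half-width of a smoothstep transition band along the vertical texture axis: texels near the upper eyelid get a shadow weight near 1, texels below the band get 0. The 1-D form and the parameter names are assumptions.

```python
def shadow_mask(v, shadow_pos=0.8, shadow_soft=0.1):
    """Hypothetical 1-D eyelid-shadow mask: 0 below the shadow band,
    rising smoothly to 1 across a band of half-width `shadow_soft`
    centered at `shadow_pos` on the vertical UV axis."""
    t = max(0.0, min(1.0, (v - (shadow_pos - shadow_soft)) / (2.0 * shadow_soft)))
    return t * t * (3.0 - 2.0 * t)

def apply_shadow(texel_rgb, shadow_rgb, mask):
    """Fuse the target texture and the shadow texture using the mask weight."""
    return tuple(c + (s - c) * mask for c, s in zip(texel_rgb, shadow_rgb))
```

Widening `shadow_soft` softens the shadow's edge transition, which matches the stated role of the shadow boundary indication parameter.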
In some embodiments, the determining the target refraction direction of the ambient light under the iris parameter of the eyeball model based on the camera direction vector and the iris parameter includes:
determining an initial refraction direction of the ambient light in the eyeball model;
and correcting the initial refraction direction based on the camera direction vector and the iris parameter to obtain a target refraction direction of the ambient light under the iris parameter of the eyeball model.
In some embodiments, the determining the initial refraction direction of the ambient light in the eyeball model comprises:
acquiring an eyeball refractive index, an air refractive index and a vertex normal of the eyeball model;
and processing the camera direction vector, the eyeball refractive index, the air refractive index and the vertex normal based on a preset refraction calculation rule to obtain an initial refraction direction of the ambient light in the eyeball model.
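The "preset refraction calculation rule" is not spelled out in the text; the standard choice is Snell's law in the form of the GLSL/HLSL `refract` intrinsic, sketched below. The refractive-index values used in practice (about 1.0 for air, about 1.336 for the eye's aqueous humour) are illustrative assumptions, not values taken from the patent.

```python
import math

def refract(incident, normal, eta):
    """GLSL-style refract: `incident` points toward the surface and `normal`
    away from it, both unit 3-vectors; eta = n_air / n_eye (index ratio).
    Returns the refracted direction, or None on total internal reflection."""
    cos_i = -sum(i * n for i, n in zip(incident, normal))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    coef = eta * cos_i - math.sqrt(k)
    return tuple(eta * i + coef * n for i, n in zip(incident, normal))
```

A head-on ray passes through undeviated, while an oblique ray bends toward the normal on entering the denser medium, which is exactly the behavior the iris-offset step exploits.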
In some embodiments, the camera direction vector is used to indicate the direction from a screen pixel to the camera shooting the eyeball model; the iris parameters include an iris depth and an iris normal map, wherein the iris depth is used for indicating the distance between the iris layer and the cornea layer in the eyeball model;
the correcting the initial refractive direction based on the camera direction vector and the iris parameter to obtain a target refractive direction of the ambient light under the iris parameter of the eyeball model includes:
performing a dot-product operation on the camera direction vector and the iris normal map to obtain a direction indication parameter corresponding to the iris normal at each position in the iris normal map, wherein the direction indication parameter is used for indicating the relative orientation between the iris normal in the iris normal map and the camera direction vector;
and correcting the initial refraction direction based on the iris depth at each position in the iris depth map and the direction indication parameter corresponding to each position in the iris normal map, so as to determine the target refraction direction of the ambient light corresponding to each position in the texture map of the iris layer.
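One plausible reading of this correction, offered as an interpretation rather than the patent's formula (a similar depth-over-facing scale appears in published physically based eye shaders): the direction indication parameter dot(view, normal) measures how directly the camera faces the iris, and the magnitude of the refraction offset grows as the iris depth divided by that facing term, so grazing views shift the iris texture more than head-on views.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def refraction_offset_scale(camera_dir, iris_normal, iris_depth, min_facing=0.2):
    """Hypothetical correction factor (an assumption about the patent's
    'correcting' step): offset magnitude = iris_depth / facing, where the
    facing term is clamped so the scale stays bounded at grazing angles."""
    facing = max(min_facing, dot(camera_dir, iris_normal))
    return iris_depth / facing
```

The clamp is a practical guard: without it the scale diverges as the view becomes tangent to the iris surface.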
In some embodiments, the performing offset processing on the texture map of the iris layer based on the target refraction direction to obtain an iris texture map after refraction offset includes:
acquiring an eyeball forward vector, and determining the offset of the texture map of the iris layer based on the eyeball forward vector, the iris normal map and the target refraction direction;
and performing offset processing on the texture map of the iris layer based on the offset of the texture map of the iris layer to obtain the iris texture map after refraction offset.
In some embodiments, the offset includes an offset in a tangential direction, and the determining the offset of the texture map of the iris layer based on the eyeball forward vector, the iris normal map and the target refraction direction includes:
obtaining, based on the eyeball forward vector and the iris normal map, a tangent vector corresponding to the iris normal map, wherein the tangent vector is a vector perpendicular to the iris normal in the iris normal map;
and performing a dot-product operation on the tangent vector corresponding to the iris normal map and the target refraction direction to obtain the offset of the texture map of the iris layer in the tangential direction.
In some embodiments, the offset further includes an offset in a bitangential direction, and the determining the offset of the texture map of the iris layer based on the eyeball forward vector, the iris normal map and the target refraction direction includes:
performing a cross-product operation on the tangent vector corresponding to the iris normal map and the iris normal in the iris normal map to obtain a bitangent vector corresponding to the iris normal map, wherein the bitangent vector is a vector perpendicular to both the iris normal and the tangent vector in the iris normal map;
and performing a dot-product operation on the bitangent vector corresponding to the iris normal map and the target refraction direction to obtain the offset of the texture map of the iris layer in the bitangential direction.
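Putting the tangential and bitangential steps together, the UV offset of the iris texture can be sketched as a projection of the refracted ray onto a tangent frame. This is an illustrative sketch: building the tangent from the cross product of the eyeball forward vector and the iris normal is one reasonable choice consistent with the text, not the only one, and the depth scaling is assumed.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def iris_uv_offset(refract_dir, iris_normal, eye_forward, iris_depth):
    """Offset of the iris texture map: build a tangent perpendicular to the
    iris normal (here, assumed to come from the eyeball forward vector),
    take the bitangent as cross(normal, tangent), then project the refracted
    ray onto both axes (two dot products) and scale by the iris depth."""
    tangent = normalize(cross(eye_forward, iris_normal))
    bitangent = cross(iris_normal, tangent)  # unit length: normal and tangent are orthonormal
    du = iris_depth * dot(tangent, refract_dir)
    dv = iris_depth * dot(bitangent, refract_dir)
    return (du, dv)
```

The resulting (du, dv) pair is then added to the iris layer's UV coordinates before sampling, which produces the depth-dependent parallax that makes the iris appear recessed behind the cornea.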
It can be seen that the computer program can be loaded by the processor to perform any of the eyeball rendering methods provided in the embodiments of the present application, thereby achieving the technical effect of improved eyeball rendering efficiency.
For the specific implementation of each of the above operations, refer to the previous embodiments; details are not repeated here.
The computer-readable storage medium may include: read-only memory (ROM), random access memory (RAM), magnetic disk, optical disk, and the like.
Since the computer program stored in the computer-readable storage medium can execute any of the eyeball rendering methods provided in the embodiments of the present application, it can achieve the beneficial effects of any of those methods; for details, refer to the previous embodiments, which are not repeated here.
The foregoing describes in detail the eyeball rendering method, apparatus, electronic device and computer-readable storage medium provided in the embodiments of the present application. Specific examples are used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is only intended to help understand the method and its core ideas. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope in light of the ideas of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (18)

1. An eyeball rendering method, characterized in that the method comprises:
acquiring an eyeball model to be rendered and an initial texture map corresponding to the eyeball model, wherein the initial texture map comprises texture maps of a multi-layer eyeball structure of the eyeball model, and the multi-layer eyeball structure comprises an iris layer;
acquiring a camera direction vector for shooting the eyeball model and iris parameters of the eyeball model, and determining a target refraction direction of ambient light under the iris parameters of the eyeball model based on the camera direction vector and the iris parameters;
performing offset processing on the texture map of the iris layer based on the target refraction direction to obtain an iris texture map after refraction offset;
and rendering the eyeball model based on the iris texture mapping and the texture mapping of other eyeball structural layers except the iris layer in the initial texture mapping to obtain a rendered eyeball model.
2. The method for rendering an eyeball according to claim 1, wherein the rendering the eyeball model based on the iris texture map and the texture maps of other eyeball structural layers except the iris layer in the initial texture map to obtain a rendered eyeball model includes:
Determining an eyeball mask of the eyeball model, wherein the eyeball mask is used for indicating texture mask information corresponding to at least one layer of eyeball structure of the eyeball model;
based on the eyeball mask, fusing the iris texture map with the texture maps of the other eyeball structural layers to obtain a fused target texture map;
and rendering the eyeball model based on the target texture map to obtain a rendered eyeball model.
3. The eyeball rendering method of claim 2, wherein determining an eyeball mask of the eyeball model comprises:
acquiring preset iris size parameters and structural layer size parameters of the other eyeball structural layers;
determining texture mask information of at least one layer of eyeball structure on the initial texture map based on the iris size parameter, the structure layer size parameter and the relative position relationship between the iris layer and the other eyeball structure layers;
and generating an eyeball mask of the eyeball model based on the texture mask information of at least one layer of eyeball structure on the initial texture map.
4. The method for rendering an eyeball as set forth in claim 2, wherein the rendering the eyeball model based on the target texture map to obtain a rendered eyeball model includes:
Acquiring an iris normal map of the iris layer and a vertex normal map of the eyeball model, wherein the iris normal map is used for representing the concave effect of the iris in the eyeball model;
based on the eyeball mask, fusing the iris normal map and the vertex normal map to obtain a fused target normal map;
and rendering the eyeball model based on the target texture map and the target normal map to obtain a rendered eyeball model.
5. The eyeball rendering method of claim 4, wherein fusing the iris normal map and the vertex normal map based on the eyeball mask to obtain a fused target normal map comprises:
acquiring a wetting normal map, wherein the wetting normal map is used for representing the wet-surface effect of at least some of the other eyeball structural layers;
fusing normals at the same position in the wetting normal map and the vertex normal map to obtain a target vertex normal map with a wetting effect;
and based on the eyeball mask, fusing the iris normal map and the target vertex normal map to obtain a fused target normal map.
6. The eyeball rendering method of claim 2 further comprising, prior to fusing the iris texture map and the texture maps of the other eyeball structural layers based on the eyeball mask to obtain a fused target texture map:
acquiring eyeball setting parameters corresponding to the other eyeball structural layers;
generating texture maps of the other eyeball structural layers based on the eyeball setting parameters to obtain the structural texture maps of the other eyeball structural layers;
the fusing the iris texture map and the texture maps of the other eyeball structural layers based on the eyeball mask to obtain a fused target texture map includes:
and fusing the iris texture map and the structure texture map based on the eyeball mask to obtain a fused target texture map.
7. The method of claim 6, wherein the eyeball setting parameters include a sclera scaling indication parameter, and the generating the texture maps of the other eyeball structural layers based on the eyeball setting parameters to obtain the structure texture maps of the other eyeball structural layers includes:
Scaling the texture map of the sclera layer in the initial texture map based on the sclera scaling indication parameter to generate a structural texture map of the sclera layer;
the fusing the iris texture map and the structure texture map based on the eyeball mask to obtain a fused target texture map includes:
and fusing the iris texture map and the structure texture map of the sclera layer based on the eyeball mask to obtain a fused target texture map.
8. The method for rendering an eyeball according to claim 7, wherein the eyeball setting parameters include a limbal size and a limbal color intensity, and the generating the texture map of the other eyeball structural layer based on the eyeball setting parameters to obtain the structure texture map of the other eyeball structural layer includes:
generating texture information of the limbus in the iris texture map based on the limbus size and the limbus color intensity to obtain a structural texture map of the limbus;
the fusing the iris texture map and the structure texture map based on the eyeball mask to obtain a fused target texture map includes:
And fusing the iris texture map, the structure texture map of the sclera layer and the structure texture map of the limbus based on the eyeball mask to obtain a fused target texture map.
9. The eyeball rendering method according to claim 2, wherein before rendering the eyeball model based on the target texture map, the method further comprises:
a shadow position indication parameter and a shadow boundary indication parameter are obtained, wherein the shadow position indication parameter is used for indicating the position of a shadow cast by an eyelid on the eyeball model, and the shadow boundary indication parameter is used for representing the edge transition effect of the shadow cast by the eyelid on the eyeball model;
generating a shadow mask of the eyeball model based on the shadow position indication parameter and the shadow boundary indication parameter, wherein the shadow mask is used for indicating mask information corresponding to shadows at different positions of the eyeball model;
and acquiring shadow texture information, and fusing the target texture map and the shadow texture information based on the shadow mask to obtain the target texture map with a shadow effect.
10. The eyeball rendering method according to any one of claims 1 to 9, wherein the determining a target refraction direction of ambient light under iris parameters of the eyeball model based on the camera direction vector and the iris parameters comprises:
determining an initial refraction direction of ambient light in the eyeball model;
and correcting the initial refraction direction based on the camera direction vector and the iris parameter to obtain a target refraction direction of the ambient light under the iris parameter of the eyeball model.
11. The eyeball rendering method of claim 10, wherein said determining an initial refraction direction of ambient light in the eyeball model comprises:
acquiring an eyeball refractive index, an air refractive index and a vertex normal of the eyeball model;
and processing the camera direction vector, the eyeball refractive index, the air refractive index and the vertex normal based on a preset refraction calculation rule to obtain an initial refraction direction of the ambient light in the eyeball model.
12. The eyeball rendering method of claim 10, wherein the camera direction vector is used to indicate the direction from a screen pixel to the camera shooting the eyeball model; the iris parameters include an iris depth and an iris normal map, wherein the iris depth is used for indicating the distance between the iris layer and the cornea layer in the eyeball model;
The correcting the initial refraction direction based on the camera direction vector and the iris parameter to obtain a target refraction direction of the ambient light under the iris parameter of the eyeball model includes:
performing a dot-product operation on the camera direction vector and the iris normal map to obtain a direction indication parameter corresponding to the iris normal at each position in the iris normal map, wherein the direction indication parameter is used for indicating the relative orientation between the iris normal in the iris normal map and the camera direction vector;
and correcting the initial refraction direction based on the iris depth at each position in the iris depth map and the direction indication parameter corresponding to each position in the iris normal map, so as to determine the target refraction direction of the ambient light corresponding to each position in the texture map of the iris layer.
13. The eyeball rendering method of claim 12 wherein the shifting the texture map of the iris layer based on the target refraction direction results in a refraction-shifted iris texture map comprising:
acquiring an eyeball forward vector, and determining the offset of the texture map of the iris layer based on the eyeball forward vector, the iris normal map and the target refraction direction;
And performing offset processing on the texture map of the iris layer based on the offset of the texture map of the iris layer to obtain the iris texture map after refraction offset.
14. The eyeball rendering method of claim 13, wherein the offset comprises an offset in a tangential direction, and the determining an offset of the texture map of the iris layer based on the eyeball forward vector, the iris normal map and the target refraction direction comprises:
obtaining, based on the eyeball forward vector and the iris normal map, a tangent vector corresponding to the iris normal map, wherein the tangent vector is a vector perpendicular to the iris normal in the iris normal map;
and performing a dot-product operation on the tangent vector corresponding to the iris normal map and the target refraction direction to obtain the offset of the texture map of the iris layer in the tangential direction.
15. The eyeball rendering method of claim 14, wherein the offset further comprises an offset in a bitangential direction, and the determining an offset of the texture map of the iris layer based on the eyeball forward vector, the iris normal map and the target refraction direction comprises:
performing a cross-product operation on the tangent vector corresponding to the iris normal map and the iris normal in the iris normal map to obtain a bitangent vector corresponding to the iris normal map, wherein the bitangent vector is a vector perpendicular to both the iris normal and the tangent vector in the iris normal map;
and performing a dot-product operation on the bitangent vector corresponding to the iris normal map and the target refraction direction to obtain the offset of the texture map of the iris layer in the bitangential direction.
16. An eyeball rendering device, characterized in that the device comprises:
an acquisition module, configured to acquire an eyeball model to be rendered and an initial texture map corresponding to the eyeball model, wherein the initial texture map comprises texture maps of a multi-layer eyeball structure of the eyeball model, and the multi-layer eyeball structure comprises an iris layer;
a direction determining module, configured to acquire a camera direction vector for shooting the eyeball model and iris parameters of the eyeball model, and determine a target refraction direction of ambient light under the iris parameters of the eyeball model based on the camera direction vector and the iris parameters;
an offset module, configured to perform offset processing on the texture map of the iris layer based on the target refraction direction to obtain an iris texture map after refraction offset;
and a rendering module, configured to render the eyeball model based on the iris texture map and the texture maps of other eyeball structural layers except the iris layer in the initial texture map to obtain a rendered eyeball model.
17. An electronic device comprising a processor and a memory, the memory storing a plurality of instructions; the processor loads instructions from the memory to perform the eyeball rendering method of any one of claims 1 to 15.
18. A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the eyeball rendering method of any one of claims 1 to 15.
CN202311675523.1A 2023-12-07 2023-12-07 Eyeball rendering method, device, electronic equipment and computer readable storage medium Pending CN117649475A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311675523.1A CN117649475A (en) 2023-12-07 2023-12-07 Eyeball rendering method, device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN117649475A true CN117649475A (en) 2024-03-05

Family

ID=90047589



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination