CN113763546A - Card preview method and device and electronic equipment - Google Patents

Card preview method and device and electronic equipment

Info

Publication number
CN113763546A
CN113763546A (Application No. CN202111316720.5A)
Authority
CN
China
Prior art keywords
card
dimensional model
data
target
preview
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111316720.5A
Other languages
Chinese (zh)
Inventor
杨雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiede China Technology Co ltd
Original Assignee
Jiede China Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiede China Technology Co ltd filed Critical Jiede China Technology Co ltd
Priority to CN202111316720.5A priority Critical patent/CN113763546A/en
Publication of CN113763546A publication Critical patent/CN113763546A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three-dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three-dimensional] image rendering
    • G06T 15/50 Lighting effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a card preview method, a card preview apparatus, and an electronic device. Element data is acquired, a three-dimensional model of a first card is created in a preset three-dimensional scene, and the three-dimensional model is rendered according to the element data to generate a target three-dimensional model of the first card. A preview screen of the target three-dimensional model is then displayed, so that a user can preview the three-dimensional visual effect of the card while designing it, which makes it easier to produce a card that meets the user's requirements and improves efficiency.

Description

Card preview method and device and electronic equipment
Technical Field
The application belongs to the technical field of card drawing, and particularly relates to a card previewing method and device and electronic equipment.
Background
In typical card-making applications, a user can customize elements such as the shape, size, and cover of a card to produce a personalized card. However, with current technology, after the user defines these elements, only a flat image of the card model viewed from the front can be previewed. Because the preview is limited to this single view, the finished card often fails to achieve the expected visual effect.
Disclosure of Invention
The embodiments of the application provide a card preview method, a card preview apparatus, and an electronic device that allow the three-dimensional effect of a card model to be previewed.
In one aspect, an embodiment of the present application provides a card preview method, where the method includes:
acquiring element data;
creating a three-dimensional model of a first card in a preset three-dimensional scene;
rendering the three-dimensional model of the first card according to the element data to generate a target three-dimensional model of the first card;
and displaying a preview picture of the target three-dimensional model.
In some embodiments, the element data includes card body shape data of the first card;
creating a three-dimensional model of a first card in a preset three-dimensional scene, comprising:
generating a card body plane figure of the first card, according to the card body shape data, in a field-of-view space of the preset three-dimensional scene, where the field-of-view space is a view frustum determined by the coordinates, field-of-view angle, and aspect ratio of a preset viewpoint in the scene, and the center of the card body plane figure is associated with the coordinates of the viewpoint;
and performing thickness stretching on the card body plane graph to generate a three-dimensional model of the first card.
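As a rough illustration, the dimensions of such a field-of-view space can be computed from the viewpoint's field-of-view angle and aspect ratio. This is a minimal sketch under the assumption of a standard perspective frustum; the patent does not give the projection math, and the function name and parameters are hypothetical:

```python
import math

def view_plane_size(fov_deg: float, aspect: float, distance: float):
    """Width and height of the plane visible at `distance` from the
    viewpoint, given a vertical field-of-view angle and a width:height
    aspect ratio (a standard perspective view frustum)."""
    height = 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
    return height * aspect, height

# Size the card body plane figure so it fits the view at the chosen depth.
card_w, card_h = view_plane_size(60.0, 1051 / 661, 5.0)
```

The card body plane figure generated at the viewpoint's depth can then be sized against this visible plane before being extruded into a three-dimensional model.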
In some embodiments, the element data includes one or more of the following types:
logo element data, component element data, cover element data, text element data, material data, and lighting-effect data.
In some embodiments, the element data includes first element data and second element data, the first element data being cover element data;
in the rendering the three-dimensional model of the first card according to the element data to generate a corresponding target three-dimensional model, the method includes:
taking the first element data as base map texture, and overlaying the second element data on the upper layer of the first element data to form first rendering texture data;
rendering the first rendering texture data into element textures of a plane corresponding to the three-dimensional model of the first card to obtain a target three-dimensional model.
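As an illustration of how a cover base map and a transparent overlay might be combined into one piece of rendering texture data, the sketch below uses standard source-over alpha compositing. The patent does not name a blending formula, so this is an assumption; `composite` and its pixel layout are hypothetical:

```python
def composite(base, overlay):
    """Source-over alpha compositing of `overlay` onto `base`.
    Both are equal-length lists of (r, g, b, a) pixels with channels
    in 0..1; fully transparent overlay pixels leave the base-map
    texture visible underneath."""
    out = []
    for (br, bg, bb, ba), (fr, fg, fb, fa) in zip(base, overlay):
        a = fa + ba * (1.0 - fa)  # resulting alpha
        if a == 0.0:
            out.append((0.0, 0.0, 0.0, 0.0))
            continue
        def over(f, b):
            return (f * fa + b * ba * (1.0 - fa)) / a
        out.append((over(fr, br), over(fg, bg), over(fb, bb), a))
    return out
```

A transparent region in the second element data thus shows the first element data (the base-map texture) through it, matching the layering described above.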
In some embodiments, after the first rendering texture data is formed, the method includes:
generating, from preset ambient light data and the first rendering texture data, first rendering texture data with the preset ambient light superimposed;
and rendering the first rendering texture data into the element texture of the corresponding plane of the three-dimensional model of the first card to obtain the target three-dimensional model includes:
rendering the first rendering texture data with the preset ambient light superimposed into the element texture of the corresponding plane of the three-dimensional model of the first card to obtain the target three-dimensional model.
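Superimposing the preset ambient light on the first rendering texture data might look like the following sketch, assuming a simple multiplicative ambient term; the patent does not specify the lighting model, and the function name is hypothetical:

```python
def apply_ambient(texture, ambient, intensity=1.0):
    """Superimpose a preset ambient light on a rendering texture by
    modulating the RGB channels with the light colour scaled by
    intensity; the alpha channel is left unchanged."""
    lr, lg, lb = (c * intensity for c in ambient)
    return [(r * lr, g * lg, b * lb, a) for (r, g, b, a) in texture]
```
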
In some embodiments, the first element data and the second element data are both element pictures, the first element region on the element picture corresponding to the second element data presents a corresponding element graphic, and the region of the element picture except the first element region is transparent;
taking the first element data as base map texture, and superposing the second element data on the first element data to form first rendering texture data, wherein the first rendering texture data comprises:
aligning the center of the second element data with the center of the first element data and then superposing the second element data on the upper layer of the first element data to form first rendering texture data; and
the center of the first rendered texture data is aligned with the center of the corresponding plane of the three-dimensional model.
In some embodiments, the second elemental data is an elemental picture, the elemental picture including an elemental graphic,
taking the first element data as base map texture, and superposing the second element data on the first element data to form first rendering texture data, wherein the first rendering texture data comprises:
taking the first element data as base map texture, and overlapping the second element data on the upper layer of the first element data according to a preset coordinate position to form first rendering texture data; and
and taking the center of the first element data as the center of the first rendering texture data, and aligning the center of the first rendering texture data with the center of the corresponding plane of the three-dimensional model.
In some embodiments, the element data comprises a first element picture comprising a first element graphic therein; the target three-dimensional model is a model corresponding to a first angle; after displaying a preview picture of the target three-dimensional model, the method comprises the following steps:
determining an initial coordinate and an initial form of a first element graph at a first angle of a corresponding plane of the target three-dimensional model according to the target three-dimensional model;
determining a second coordinate and a second form of the first element graph at a second angle of the corresponding plane of the target three-dimensional model;
updating and rendering the three-dimensional model according to the first element graphic within a preset update time to obtain an updated target three-dimensional model corresponding to the second angle, where the element texture corresponding to the first element graphic in the updated target three-dimensional model is located at the second coordinate of the plane and is rendered in the second form;
and dynamically updating and displaying the preview picture of the target three-dimensional model into a preview picture of the updated target three-dimensional model.
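The change of an element graphic's coordinate between two viewing angles can be illustrated with a simple projection. This sketch assumes an orthographic view and rotation about the card's vertical axis, neither of which is stated in the patent:

```python
import math

def element_position_at_angle(x, y, angle_deg):
    """Apparent (x', y') of a point on the card face after the card
    rotates `angle_deg` about its vertical axis, viewed orthographically:
    the horizontal coordinate is foreshortened by cos(angle), while the
    vertical coordinate is unchanged."""
    return x * math.cos(math.radians(angle_deg)), y
```

At the first angle (0 degrees) the initial coordinate is recovered; as the second angle grows, the element graphic's second coordinate shifts toward the rotation axis and its form narrows accordingly.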
In some embodiments, displaying a preview screen of the target three-dimensional model includes:
a first input by a user is received,
and responding to the first input, and dynamically updating a preview picture for displaying the corresponding angle of the target three-dimensional model.
In some embodiments, the target three-dimensional model is a model corresponding to a first angle; responding to the first input, dynamically updating a preview picture for displaying the corresponding angle of the target three-dimensional model, and comprising the following steps:
in response to the first input, determining update data for updating the rendering of the three-dimensional model of the first card, the update data including a second angle of the three-dimensional model, a second field-of-view space, or a second coordinate and second form of the element data;
according to the updating data, updating and rendering the three-dimensional model by using the element data to obtain an updated target three-dimensional model;
and displaying a preview picture of the updated target three-dimensional model.
In some embodiments, the element data includes N consecutive frames of static element pictures, each element picture including a corresponding element graphic,
rendering the three-dimensional model of the first card according to the element data to generate a target three-dimensional model of the first card, comprising:
respectively aligning the centers of N continuous frame static element pictures with the plane centers of the three-dimensional models corresponding to N angles, wherein N is an integer greater than 1;
respectively rendering the element textures of the three-dimensional model of the first card according to the element graphics corresponding to the N continuous frame static element pictures to obtain N target three-dimensional models, wherein the N target three-dimensional models respectively correspond to N angles;
and displaying a preview screen of the target three-dimensional model includes:
acquiring a second input, and determining a third angle, wherein the third angle is one of N angles;
and responding to the second input, and displaying a preview screen of the target three-dimensional model corresponding to the third angle.
In some embodiments, after the N target three-dimensional models are obtained, the method includes:
performing interpolation between the Mth preview picture and the (M+1)th preview picture among the preview pictures of the N target three-dimensional models to obtain U interpolated pictures;
and inserting the U interpolated pictures between the Mth preview picture and the (M+1)th preview picture to obtain N+U target preview pictures.
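The frame insertion above amounts to blending between adjacent preview frames. A minimal sketch assuming simple linear interpolation (the patent does not specify the interpolation method, and the function name is hypothetical):

```python
def interpolate_frames(frame_m, frame_m1, u):
    """Produce `u` linearly blended frames between two preview frames,
    given as equal-length lists of greyscale pixel values. Frame k of u
    uses blend factor t = k / (u + 1)."""
    frames = []
    for k in range(1, u + 1):
        t = k / (u + 1)
        frames.append([a * (1.0 - t) + b * t
                       for a, b in zip(frame_m, frame_m1)])
    return frames
```

Inserting the resulting U frames between the Mth and (M+1)th preview pictures smooths the transition when the preview rotates between adjacent angles.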
In some embodiments, after presenting the preview screen of the target three-dimensional model, the method comprises:
and recording interface changes in the process of displaying the preview picture of the target three-dimensional model to generate a first recorded video.
In some embodiments, after presenting the preview screen of the target three-dimensional model, the method comprises:
and intercepting the interface content in the process of displaying the preview picture of the target three-dimensional model, and generating a first recording picture.
In some embodiments, the element data comprises a first element picture comprising a first element graphic therein and a second element picture comprising a second element graphic therein;
rendering the three-dimensional model of the first card according to the element data to generate a target three-dimensional model of the first card, comprising:
creating a graphic three-dimensional model of the first element graphic according to the first element graphic;
rendering the second element graph into element textures of the three-dimensional model of the first card to obtain the three-dimensional model of the card;
and superposing the graphic three-dimensional model to a corresponding plane of the three-dimensional model of the first card to generate a target three-dimensional model.
In another aspect, an embodiment of the present application provides a card preview apparatus, including:
the acquisition module is used for acquiring element data;
the creating module is used for creating a three-dimensional model of the first card in a preset three-dimensional scene;
the generating module is used for rendering the three-dimensional model of the first card according to the element data to generate a target three-dimensional model of the first card;
and the display module is used for displaying the target three-dimensional model.
In another aspect, an embodiment of the present application provides an electronic device, where the electronic device includes:
a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the steps in the card preview method as in the first aspect.
In yet another aspect, an embodiment of the present application provides a computer storage medium, on which computer program instructions are stored, and the computer program instructions, when executed by a processor, implement the steps in the card preview method according to the first aspect.
The card preview method, card preview apparatus, and electronic device described above can acquire element data of a first card, create a three-dimensional model of the first card in a preset three-dimensional scene, and render the three-dimensional model according to the element data to generate a target three-dimensional model of the first card. A preview screen of the target three-dimensional model is then displayed, so that the user can preview the three-dimensional visual effect of the card while designing it, making it easier to produce a card that meets the user's requirements and improving design efficiency.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram illustrating a card preview method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a dimensional relationship between an element picture and a first card in an example of the present application;
FIG. 3 is a schematic diagram illustrating the relationship between the element picture and an irregular-shaped card in another embodiment of the present application;
FIG. 4 is a schematic view of a view frustum in one embodiment of the present application;
FIG. 5 is a diagram illustrating dynamic updating of the preview screen of the target three-dimensional model in another specific example of the present application;
fig. 6 is a schematic structural diagram of a card preview device according to another embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to still another embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application will be described in detail below, and in order to make objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative only and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by illustrating examples thereof.
It is noted that relational terms such as first and second may be used herein solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. The terms "comprises," "comprising," and any variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
In the prior art, when cards such as smart cards are produced through a card-making system, the user generally uploads a cover picture for the front of the card, the system generates a two-dimensional image of the card front for the user to preview, and after the user confirms it, corresponding card production data is generated for the downstream production system to manufacture the card.
Because such a system cannot preview the card from other angles, the user cannot visually inspect the design of the other parts of the card. Even if the design effect is poor, the user cannot detect or correct it in time, so the appearance of the finished card may fall short of the user's expectations.
In order to solve the problem of the prior art, embodiments of the present application provide a card preview method, apparatus, device, and computer storage medium. First, a card preview method provided in an embodiment of the present application is described below.
Fig. 1 shows a flowchart of a card preview method according to an embodiment of the present application. As shown in FIG. 1, the method includes steps S101 to S104:
s101, acquiring element data.
The element data is data for subsequent rendering of an element texture on the first card.
Illustratively, the element data may be an element picture on which a corresponding element graphic is contained.
For example, the element picture may be a picture uploaded by a user, or a picture obtained by matching in a preset element database, which is not limited in the embodiment of the present application.
For example, the first Card may be an IC Card (Integrated Circuit Card), an ID Card (Identification Card), or a CPU Card.
The first card can include card-surface elements such as a logo, a chip, a card organization mark, a front cover, a back cover, and text, and can also include lighting-effect elements with a reflective effect. Therefore, in the embodiment of the present application, the acquired element data may include one or more of the following types:
logo element data, color element data, component element data, card organization element data, cover element data (including front cover element data and back cover element data), text element data, material data, and lighting-effect data.
The component element data is used to render a component texture on the smart card, such as the chip element of an IC card, or the display-device texture of a smart card that has a display device.
The elemental data may also include card body shape data for determining a first card shape and size.
S102, creating a three-dimensional model of the first card in a preset three-dimensional scene.
The three-dimensional model of the first card created in the preset three-dimensional scene is a model before rendering, and may be a three-dimensional model with a preset size or a three-dimensional model created according to a self-defined size of the first card.
S103, rendering the three-dimensional model of the first card according to the element data to generate a target three-dimensional model of the first card.
When the three-dimensional model of the first card is rendered through the element data, the content in the element picture corresponding to the element data may be rendered as the texture of the three-dimensional model of the first card, and the rendered target three-dimensional model of the first card is generated.
For example, when there is a single element picture, its content can be rendered as the texture of the three-dimensional model of the first card.
When there are multiple element pictures, they are stacked in order according to their element types; the stacking order should, as far as possible, keep the element graphic on each picture from being occluded (except for the picture serving as the base-map texture). All the stacked element pictures are then rendered together as the texture of the three-dimensional model of the first card.
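The stacking of multiple element pictures can be sketched as a fold over the ordered layers. This is an illustrative assumption (binary alpha, hypothetical function names), not the patent's actual rendering pipeline:

```python
def stack_layers(layers):
    """Composite a bottom-to-top list of equal-size element pictures
    into one texture; the first entry is the base-map texture. Pixels
    are (r, g, b, a) tuples, and a pixel with a == 0 is treated as
    fully transparent (binary alpha for simplicity)."""
    def over(base, overlay):
        return [b if o[3] == 0 else o for b, o in zip(base, overlay)]
    texture = layers[0]
    for layer in layers[1:]:
        texture = over(texture, layer)
    return texture
```

Because each element picture is transparent outside its own element graphic, stacking in type order leaves every graphic visible in the final texture.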
And S104, displaying a preview picture of the target three-dimensional model.
With the card preview method described above, element data of the first card can be acquired, a three-dimensional model of the first card can be created in a preset three-dimensional scene, and the model can then be rendered according to the element data to generate a target three-dimensional model of the first card. Displaying a preview screen of the target three-dimensional model lets the user preview the three-dimensional visual effect of the card while designing it, making it easier to produce a card that meets the user's requirements and improving design efficiency.
For the convenience of recording or displaying the preview screen by the user, optionally, after the steps S101 to S104 are executed, the embodiment of the present application may further include step S105 or step S106:
s105, recording interface changes in the process of displaying a preview picture of the target three-dimensional model, and generating a first recording video; or
And S106, intercepting the interface content in the process of displaying the preview picture of the target three-dimensional model, and generating a first recording picture.
The first recorded video captures the changes of the on-screen content while the target three-dimensional model of the first card is previewed or updated, typically including changes to the preview screen and the mouse pointer, making it convenient for the user to record the preview process or to show and replay it for other users.
The first recorded picture can be a picture spliced together from multiple screenshots. While the target three-dimensional model or its updated preview screen is displayed, the interface content can be captured periodically or on demand to obtain one or more screenshots; when there are multiple screenshots, they are spliced in the chronological order in which they were captured to obtain the first recorded picture, again making it convenient for the user to record the preview process or to show and replay it for other users.
In order to enrich the material for making the card, optionally, before acquiring the element data corresponding to the card element in step S101, the method may further include steps S107 to S108:
and S107, constructing an element gallery.
The gallery may include a logo gallery, a text-content gallery, a component gallery, a custom gallery, and the like, one for each type of card element. Each picture in a gallery carries attribute information for the element it depicts, including the element type, name, and storage location.
And S108, obtaining card information, and selecting corresponding element data from the gallery according to the card information.
When a card is produced, the card information entered by the user can be obtained through an information-filling interface. The card information can include the name of the customizing party, the shape and size of the card, and the elements it contains together with their pictures, so matching can be performed against the galleries according to the card information and the corresponding element data selected. For example, if the card information includes the name of bank XX, the corresponding logo picture, chip component picture, text-content picture, cover picture, and so on can be matched from the gallery by that name.
In other examples, the user may also choose the element picture directly from the gallery.
In order to meet the requirement of making a personalized card for a user, optionally, in the embodiment of the present application, the obtaining of the element data in step S101 may include obtaining card body shape data and element pictures corresponding to different elements.
For example, the step S101 may specifically include steps S10101 to S10106:
s10101, card body shape data are obtained.
The card body shape data comprises shape and size data, the card shape can be rectangular, circular or other special shapes, and the size data can be used for describing the size of the card shape.
For example, the card shape and size data may be selected through a tab control on a data customization interface. If the user chooses to produce a bank card, the shape and size data of a standard bank card are loaded: the card body is rectangular and the card face measures 1051 x 661 pixel units.
Alternatively, the card shape and size data may be obtained by drawing: the outline of the card is drawn freely with graphics tools (vector two-dimensional primitives that a computer can express by formula, such as points, straight lines, arcs, and free curves) on a drawing interface, and the parameters of the drawn figure are taken as the card shape and size data. In this example, if the drawn figure has an irregular shape, the card shape and size data further include the size of the figure's bounding box.
The plane figure of the first card may then be determined from the selected or manually drawn card shape and size data.
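For a freely drawn outline, the bounding box mentioned above can be computed directly from the drawn vertices; a minimal sketch with a hypothetical function name:

```python
def bounding_box(points):
    """Axis-aligned bounding box (min_x, min_y, max_x, max_y) of a
    freely drawn card outline given as (x, y) vertex tuples."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)
```

The width and height of this box then serve as the size data recorded for an irregular-shaped card.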
Step S10102, obtaining first element data, where the first element data can be used for generating the logo and/or component graphics of the first card.
Because the elements of the first card may include a logo, components (a chip is used as the example below), covers, text, and so on, the corresponding element data may include a logo element picture, a chip element picture, a front cover picture, a text-content picture, and the like, where a text-content picture is a text element carried by a picture. The element pictures of the various elements are rendered onto the three-dimensional model according to their positions on the card body or in a preset stacking order.
To facilitate standardized production of the pictures and ensure that the graphics on the various element pictures do not occlude one another, the pictures in each set of element data can optionally share a uniform size, with each picture containing its element graphic and the rest of its area transparent. For example, if a bank card body is rectangular with a size of 1051 x 661 pixels, the logo element picture obtained is also 1051 x 661 pixels, contains the bank's logo image, and is transparent everywhere else; the chip element picture is likewise 1051 x 661 pixels and transparent outside the golden chip element graphic. Overlaying the logo element picture and the chip element picture directly at full size (the logo and chip graphics do not overlap) displays both the logo and the chip image of the card for use in producing the card face.
Alternatively, each element picture can be just the size of the element graphic it contains, with little transparent or blank background; for example, the logo element picture of a bank card can be the size of the bank's standard logo graphic. The element pictures are then combined according to their preset relative positions on the first card and rendered onto the three-dimensional model of the first card.
Illustratively, the format of each element picture may be png format, and may also be bmp, jpg, jpeg, heic, icon, gif, etc., where the picture in gif format may be cut into one or more continuous frame still pictures.
S10103, second element data is obtained, and the second element data can be used for rendering a back cover of the first card.
Taking a bank card as an example, the back cover usually includes a magnetic stripe pattern, bank description text, card instructions, and other text content. The magnetic stripe pattern and the text can therefore form a single back cover element picture; alternatively, the back cover element picture can be formed from the magnetic stripe pattern alone, with the text content forming a separate text-content element picture.
Optionally, the text content may also be input by a user, and the system automatically or according to a font selected by the user generates a png picture corresponding to the text.
S10104, third element data is obtained, and the third element data can be used for rendering the self-defined patterns of the first card.
The third element data may be a sticker chosen by the user from a sticker gallery (e.g., cartoon patterns) in the database, for rendering a personalized card pattern.
S10105, fourth element data is obtained; the fourth element data can be used for rendering the front cover of the first card.
The fourth element data can be a picture uploaded by the user from the electronic device, and whether the size of the uploaded picture falls within a preset range is detected:
if it is smaller than the preset range, the user is reminded to replace the picture;
if it is larger than the preset range, the picture can be cropped according to the card body shape data of the first card.
For example, consider cropping an uploaded picture for a standard bank card, as shown in fig. 2. After the front cover element picture 201 is uploaded, it will later be superimposed on the front surface of the first card 202. To keep the picture size consistent, the picture can be cropped with its geometric center aligned to the geometric center of the standard card, discarding the excess; or with one of its vertices aligned to the corresponding vertex of the standard card, discarding the excess; or by using AI (artificial intelligence) to identify the core part of the main subject in the picture, learn the region where such core parts usually lie, and place the identified core part in the corresponding region of the standard card.
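The center-aligned variant of this cropping can be sketched as a small coordinate computation. This Python sketch is illustrative (pixel sizes are assumptions, not patent requirements), returning the crop box whose geometric center coincides with the card's geometric center:

```python
def center_crop_box(pic_w, pic_h, card_w, card_h):
    """Return (left, top, right, bottom) of a card-sized crop box centered
    on the uploaded picture; everything outside the box is discarded."""
    if pic_w < card_w or pic_h < card_h:
        # Corresponds to "smaller than the preset range": ask for a new picture.
        raise ValueError("picture smaller than the preset range; replace it")
    left = (pic_w - card_w) // 2
    top = (pic_h - card_h) // 2
    return (left, top, left + card_w, top + card_h)

# A 2000x1200 upload cropped for a 1051x661 standard card body:
print(center_crop_box(2000, 1200, 1051, 661))  # → (474, 269, 1525, 930)
```

The vertex-aligned variant is the same idea with `left = top = 0`; the AI-placement variant would replace the center computation with a detected region.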
It should be understood that each of the above element data has, according to its type, a positional correspondence with the first card; for example, the logo element data and chip element data are associated with the front surface of the card body, and the magnetic stripe element data with the back surface.
For the special-shaped card, as shown in fig. 3, the center of the uploaded picture 301 may be aligned with the geometric center of the bounding box 302 (or a vertex of the special-shaped card) determined by the card shape data, an extra portion may be cut, and then the cut picture may be superimposed with the card body vector graph 303 determined by the card shape data, and the region 304 outside the card body vector graph 303 is set to be transparent.
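Setting the region outside the card body vector graph to transparent amounts to a point-in-polygon mask. The following Python sketch is illustrative (the ray-casting test and the grid representation are assumptions, not the patent's implementation):

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: is the point (x, y) inside the closed polygon?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def mask_outside(pixels, outline):
    """Set alpha to 0 for pixels whose center lies outside the card outline."""
    return [[px if point_in_polygon(c + 0.5, r + 0.5, outline) else (0, 0, 0, 0)
             for c, px in enumerate(row)]
            for r, row in enumerate(pixels)]

# A 3x2 picture masked by a 2x2 square card-body outline: the third column
# falls outside the outline and becomes fully transparent.
pixels = [[(9, 9, 9, 255)] * 3 for _ in range(2)]
outline = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(mask_outside(pixels, outline))
```

For an actual special-shaped card the outline would be the card body vector graph 303, scaled to the bounding box 302 determined by the card shape data.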
S10106, fifth element data is obtained, and the fifth element data can be used for rendering materials or light effect elements of the first card.
The material picture or light effect picture corresponding to the fifth element data can be selected from the database. For example, the database contains one or more material texture pictures of common materials such as metal and plastic for the various types of element graphics, as well as light effect pictures for the reflective effects of the various element types; the material pictures can be used mainly for rendering the front cover of the card, and the light effect pictures mainly for rendering the chip element graphic.
In order to ensure effective rendering of the first card, optionally, after the element data is acquired in step S101, the three-dimensional model of the first card may be determined through a preset three-dimensional scene in the embodiment of the present application. Step S102, creating the three-dimensional model of the first card in the preset three-dimensional scene, may specifically include steps S10201 to S10202:
S10201, generating a card body plane graph of the first card according to the card body shape data in a view field space in a preset three-dimensional scene, wherein the view field space is a view frustum space determined by the coordinates, view field angle, and view field ratio corresponding to a preset observation point in the preset three-dimensional scene, and the center of the card body plane graph is associated with the coordinates corresponding to the observation point.
In the preset three-dimensional scene, a camera (i.e., the observation point) is set, and the observation point can be determined by the following basic information:
Coordinates: the observation point is associated with the center of the front face of the first card, which is determined by the card body shape data. A coordinate system is established with its origin at the center of the first card, the long edge of the card defining the X axis, the wide edge the Y axis, and the thickness the Z axis. The observation point lies on the Z axis; as shown in the view field space plan view of (4 a) in fig. 4, the Z-axis coordinate of the observation point is A.
View field angle: represents the extent of the field of view, i.e., the view field angle determines the range that can be previewed in the preset three-dimensional scene; its value is in angular units and is denoted α.
View field ratio: represents the aspect ratio of the observation point's view frustum and determines the ratio of the horizontal to vertical dimensions of the three-dimensional model's rendering result. The default aspect ratio is 1, i.e., the previewed view is square; optionally, the view field ratio is equal or close to the card's aspect ratio.
As shown in the three-dimensional view of fig. 4 (4 b), the view field angle property of observation point 401 determines the lateral field of view, and the view field ratio property determines the longitudinal field of view. The front and back boundaries of the visible region are the near section 403 and the far section 404 of the view frustum 402, which is determined by the coordinates, view field angle, and view field ratio of observation point 401; only the elements between the two sections are visible, and parts of the model farther than the far section or closer than the near section will not be rendered into the scene.
Illustratively, the view frustum space may be determined by:
As shown in fig. 4 (4 a), let the center of the first card 202 be located at the coordinate origin and the observation point 401 on the Z axis with coordinate A; the view field angle is α; the view field ratio defaults to 1. The first card lies between the near section 403 and the far section 404, and the observation point 401, the center of the near section 403, the center of the first card 202, and the center of the far section 404 are distributed in sequence along the Z axis.
Let the distance between the near section 403 and the observation point 401 be B, and the distance between the far section 404 and the observation point 401 be C, with C > A > B > 0; the Z-axis coordinate of the center of the near section 403 is A - B, and that of the center of the far section 404 is A - C.
the card body dimensions of first card 202 are: length, width, height = L, W, T;
in order to ensure that the card body of the first card 202 is in the range of the visual field in the whole process of rotation, turning and moving, the following conditions are simultaneously satisfied for each parameter:
tan(α/2) · B > √(L² + W² + T²) / 2;
A - B > √(L² + W² + T²) / 2;
C - A > √(L² + W² + T²) / 2;
C - B > √(L² + W² + T²).
For the special-shaped card, if the farthest distance from the geometric center of the special-shaped card to all points on its surface is X, the above formulas must also be satisfied, with √(L² + W² + T²) / 2 replaced by X.
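These four conditions can be evaluated numerically. The following Python sketch is illustrative (the distances and the 50-degree view field angle are assumed example values, not patent requirements); the card dimensions approximate a standard ID-1 card body in millimeters:

```python
import math

def card_fits_frustum(A, B, C, alpha_deg, L, W, T):
    """Check the view-frustum conditions so the card body stays visible
    throughout rotation, flipping, and movement: the half-diagonal
    R = sqrt(L^2 + W^2 + T^2) / 2 must fit laterally at the near section
    and longitudinally between the near and far sections."""
    R = math.sqrt(L**2 + W**2 + T**2) / 2
    return (math.tan(math.radians(alpha_deg) / 2) * B > R
            and A - B > R
            and C - A > R
            and C - B > 2 * R)

# An 85.6 x 54 x 0.76 mm card body with assumed scene distances:
print(card_fits_frustum(A=200, B=120, C=300, alpha_deg=50,
                        L=85.6, W=54, T=0.76))  # → True
```

Narrowing the view field angle to 40 degrees makes the first inequality fail for the same distances, so the card would leave the visible region while rotating.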
After the view frustum space range in the preset three-dimensional scene is determined, the card body plane graph of the first card is determined according to the acquired card body shape data, and the center of the card body plane graph is associated with the coordinates corresponding to the observation point. After association, the center of the card body plane graph keeps a fixed relative distance from the observation point, so that the three-dimensional model constructed subsequently always remains within the view frustum space while rotating or flipping about its center.
In order to meet the personalized card drawing requirement of the user, optionally, after determining the card body planar graph of the first card, color rendering (i.e., coloring) may be performed based on the card body planar graph, and the color may be selected by the user or may be matched by the AI.
When the AI matches colors, color matching can be determined according to the dominant hue of the element picture. For example, if the obtained card body front cover image is recognized by the AI to have a red dominant hue, the other surfaces can be rendered into hues matched with the dominant hue; or when the material picture is determined to be a metal material, matching the lighting effect matched with the material; or identifying that the front cover picture of the card body is a landscape or a portrait, and if the front cover picture is the portrait, intercepting the outline of the portrait, stretching and tiling, and the like.
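The dominant-hue step can be approximated without a learned model. The following Python sketch is an illustrative stand-in for the AI recognition described: it simply takes the hue of the most frequent opaque color in the front cover picture, which the other faces could then be tinted to match:

```python
import colorsys
from collections import Counter

def dominant_hue(pixels):
    """Hue (0..1) of the most frequent opaque color; a crude stand-in for
    the AI dominant-hue recognition described in the text."""
    opaque = [(r, g, b) for r, g, b, a in pixels if a > 0]
    r, g, b = Counter(opaque).most_common(1)[0][0]
    return colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0]

# A mostly-red front cover yields a red-dominant hue of 0.0:
cover = [(200, 10, 10, 255)] * 8 + [(10, 10, 200, 255)] * 3
print(dominant_hue(cover))  # → 0.0
```

A production system would use a proper color-quantization or classification model; the sketch only fixes the interface: front cover in, matching hue out.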
S10202, performing thickness stretching on the card body plane graph to generate a three-dimensional model of the first card.
After the card body plane figure is subjected to thickness stretching, the plane figure becomes a three-dimensional shape with a certain thickness and is used as a three-dimensional model.
If the card body plane figure was colored before step S10202, the faces of the generated three-dimensional model (including the front face, the back face, and the four side faces) may have the same color. In some examples, one or more faces of the three-dimensional model may also be recolored according to user input after the model is generated.
When the card includes multiple types of element data, i.e., the element data includes at least first element data and second element data with the first element data being cover element data, then in order to ensure that the various types of element data can be effectively rendered on the three-dimensional model of the first card, optionally, after the three-dimensional model of the first card is obtained through steps S101 to S102 in the embodiment of the present application, step S103, rendering the three-dimensional model of the first card according to the element data and generating the corresponding target three-dimensional model, may specifically include steps S10301 to S10302:
S10301, taking the first element data as a base map texture, and superimposing the second element data on the upper layer of the first element data to form first rendering texture data;
S10302, rendering the first rendering texture data into the element texture of the plane corresponding to the three-dimensional model of the first card, and obtaining the target three-dimensional model.
The acquired element data includes at least cover element data (i.e., the first element data) and second element data, and the second element data may be one or more of logo element data, text element data, material data, light effect data, or other element data.
The first element data is used as the texture of the base image on the front side or the back side of the card, and the second element data is superposed on the upper layer of the first element data according to a preset superposed hierarchical relationship or a preset position relationship to form first rendering texture data. The predetermined positional relationship may include the face and coordinates of the second element data, such as at a certain coordinate on the front face of the card and at the upper left corner.
And rendering the first rendering texture data into the element texture of the plane corresponding to the first card three-dimensional model, so as to obtain the target three-dimensional model.
In order to ensure a good rendering effect in the case of multiple pieces of element data, optionally, in this embodiment of the application, if the acquired first element data is cover element data and the second element data is an element picture, a first element region on the element picture corresponding to the second element data presents the corresponding element graphic, and the region on the element picture outside the first element region is transparent. In this embodiment, step S10301 may specifically include steps S103011 to S103012:
S103011, aligning the center of the second element data with the center of the first element data and superimposing the second element data on the upper layer of the first element data to form first rendering texture data; and
S103012, aligning the center of the first rendering texture data with the center of the corresponding plane of the three-dimensional model.
Because the element data can include a plurality of element pictures, the pictures corresponding to the front cover element data, logo element data, card organization element data, chip element data, back cover element data, text content element data, light effect data, and material data can be superimposed in sequence and aligned to the corresponding plane of the three-dimensional model.
For example, the center point of the first picture corresponding to the acquired card front cover is aligned with the center points of the other element pictures. The first picture serves as the front base picture, and the other element pictures are superimposed on its upper layer in a preset order, such as the logo element picture, the card organization element picture, the chip element picture, and the material picture in sequence, ensuring that the element pictures superimposed on the first picture do not occlude one another. After all the front element pictures are superimposed, their common center point is aligned with the center point of the front plane of the three-dimensional model, any parts of the pictures extending beyond that plane are cropped, and the result is rendered as the front texture of the three-dimensional model. Similarly, if a back cover picture exists, the second picture corresponding to the back cover (serving as the back base picture) is superimposed in sequence with the magnetic stripe element picture, the text content element picture, the light effect picture, and so on; the center points of all superimposed element pictures are aligned with the center point of the back plane of the three-dimensional model, the excess is cropped, and the result is rendered onto the back of the model.
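The front-face stacking can be sketched as an ordered list of layers plus a center-aligned paste offset for each. In this illustrative Python sketch the layer names, sizes, and order are assumptions chosen to mirror the example above:

```python
# Bottom-to-top stacking order for front-face textures (illustrative names).
FRONT_ORDER = ["front_cover", "logo", "card_org", "chip", "material"]

def paste_offset(base_w, base_h, over_w, over_h):
    """Top-left offset that aligns an overlay's center with the base center."""
    return ((base_w - over_w) // 2, (base_h - over_h) // 2)

def stack(layers):
    """Return (name, offset) pairs in the preset bottom-to-top order,
    skipping element pictures that were not provided."""
    base_w, base_h = layers["front_cover"]
    return [(name, paste_offset(base_w, base_h, *layers[name]))
            for name in FRONT_ORDER if name in layers]

layers = {"front_cover": (1051, 661), "logo": (200, 100), "chip": (120, 90)}
print(stack(layers))
# → [('front_cover', (0, 0)), ('logo', (425, 280)), ('chip', (465, 285))]
```

An imaging library would then paste each overlay at its offset onto the base picture before the result is rendered as the front texture.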
It should be understood that when the element pictures are superimposed in sequence, they need to be superimposed in the preset order to ensure that each element graphic remains effectively visible; and if an element picture is larger than the shape and size determined by the card body shape data of the first card, the picture is cropped according to that card body shape data.
Rendering each element graphic as an element texture of the three-dimensional model of the first card may obtain a target three-dimensional model, and then the target three-dimensional model may be displayed on a page through step S104.
In some embodiments, to ensure a good rendering effect when there are multiple pieces of element data, optionally, the element data includes first element data and second element data, where the first element data is cover element data and the second element data is an element picture containing an element graphic. If the area occupied by the element graphic in the picture exceeds a preset area threshold, i.e., the element graphic occupies most of the picture and the background area (the transparent or blank area outside the element graphic) is small, then to ensure each element remains effectively visible, the element picture can be superimposed on the base map at a preset coordinate position and then aligned to the corresponding plane of the three-dimensional model for rendering. Specifically, in this embodiment of the application, step S10301, taking the first element data as a base map texture and superimposing the second element data on the upper layer of the first element data to form first rendering texture data, may specifically include S103013 to S103014:
S103013, taking the first element data as a base map texture, and superimposing the second element data on the upper layer of the first element data according to a preset coordinate position to form first rendering texture data; and
S103014, taking the center of the first element data as the center of the first rendering texture data, and aligning it with the center of the corresponding plane of the three-dimensional model.
The preset coordinate position may be an association relationship between preset element data and a three-dimensional model plane, for example, a logo element corresponds to the upper left corner of the front of the three-dimensional model, and a chip element corresponds to the middle of the left side of the front of the three-dimensional model.
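Such preset coordinate positions can be sketched as a mapping from element type to an anchor on the card front. The anchor names, the margin value, and the sizes in this Python sketch are illustrative assumptions, not values from the patent:

```python
def anchor_position(card_w, card_h, elem_w, elem_h, anchor, margin=40):
    """Top-left coordinates for an element placed at a preset anchor on the
    card front (illustrative anchors; margin is an assumed padding)."""
    if anchor == "top_left":
        return (margin, margin)
    if anchor == "middle_left":
        return (margin, (card_h - elem_h) // 2)
    raise ValueError("unknown anchor: " + anchor)

# Assumed association: logo at the upper-left corner of the front,
# chip at the middle of the left side, as in the example above.
PRESET_ANCHORS = {"logo": "top_left", "chip": "middle_left"}

print(anchor_position(1051, 661, 200, 100, PRESET_ANCHORS["logo"]))  # → (40, 40)
print(anchor_position(1051, 661, 120, 90, PRESET_ANCHORS["chip"]))   # → (40, 285)
```

The manual-dragging mode mentioned below would simply replace the preset anchor with user-supplied coordinates.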
Optionally, based on an element picture (front cover picture or back cover picture) in the element data as a base map, other element pictures are superimposed on an upper layer of the element picture of the base map according to a preset position relationship or a manual dragging mode, so as to form first rendering texture data.
The center of the first element data is the center of the first rendering texture data, and after the center is aligned with the center of the corresponding plane of the three-dimensional model, all the superimposed element data are rendered on the three-dimensional model through step S10302, so as to obtain the target three-dimensional model.
In order to achieve a more realistic visual effect, in this embodiment, optionally, after the first rendering texture data is formed in step S10301, step S103 in the method may further include step S10303:
S10303, generating, according to the preset ambient light data and the first rendering texture data, first rendering texture data on which the preset ambient light is superimposed.
The preset ambient light may include a parallel light source, a point light source, and a spotlight light source for the three-dimensional model.
The preset ambient light is used for simulating the reflection effect formed by the light source irradiating the first card, and visual pictures in different light effect states can be displayed conveniently.
Correspondingly, after the first rendering texture data on which the preset ambient light is superimposed is obtained in step S10303, step S10302 is executed to render the first rendering texture data into the element texture of the plane corresponding to the three-dimensional model of the first card, so as to obtain the target three-dimensional model. Specifically, in this embodiment of the present application, the step S10302 may include the step S103021:
S103021, rendering the first rendering texture data on which the preset ambient light is superimposed into the element texture of the plane corresponding to the three-dimensional model of the first card, and obtaining the target three-dimensional model.
After the environment light is superposed, the corresponding environment light effect is superposed while the first rendering texture data is rendered to the three-dimensional model, and a more vivid visual picture is displayed.
Because the rendered target three-dimensional model generally corresponds to a single angle facing the front or back of the card (e.g., 180°), in order to enable a user to observe the stereoscopic effect at other angles, optionally, in this embodiment of the application, the element data may include a first element picture, and the first element picture includes a first element graphic; the target three-dimensional model is the model corresponding to a first angle. After step S104 displays the preview screen of the target three-dimensional model, the method may include steps S109 to S112:
S109, determining, according to the target three-dimensional model, the initial coordinates and initial form of the first element graphic at the first angle of the plane corresponding to the target three-dimensional model;
S110, determining the second coordinates and second form of the first element graphic at a second angle of the plane corresponding to the target three-dimensional model;
S111, update-rendering the three-dimensional model according to the first element graphic within a preset update time to obtain an updated target three-dimensional model corresponding to the second angle, wherein in the updated target three-dimensional model the element texture corresponding to the first element graphic is located at the second coordinates of the plane and is rendered in the second form;
and S112, dynamically updating and displaying the preview picture of the target three-dimensional model into a preview picture of the updated target three-dimensional model.
In the embodiment of the application, after the target three-dimensional model corresponding to the first angle (e.g., 180 °) is rendered in steps S101 to S104, a preview screen of the angle is displayed on a preview interface, and then the target three-dimensional model preview screen of the first card at other angles is triggered and dynamically displayed according to system setting or user input.
Illustratively, the dynamic display may be a rotation, a flip or translation, a jump, and the like.
Optionally, in the embodiment of the present application, rendering of the three-dimensional model may be performed based on the WebGL (Web Graphics Library, a 3D drawing protocol) technology. Steps S109 to S112 can be executed to present a dynamic transition from one frame of the preview screen to the next on the web page. Because each element graphic on the target three-dimensional model has its own initial coordinates and initial form at the current plane angle, the coordinates and form of each element graphic may change as the display angle changes, in order to present the dynamic display effect.
Taking the first element graphic as an example: according to the target three-dimensional model, the initial coordinates and initial form of the first element graphic at the first angle of the corresponding plane can be determined; then, according to a system setting (e.g., how many degrees to rotate per second) or a user input (e.g., dragging through a certain angle with the mouse), the second coordinates and second form of the first element graphic at the second angle of the corresponding plane are determined. The three-dimensional model of the first card is then re-rendered with the first element graphic within a preset update time (e.g., 1 s) to obtain an updated target three-dimensional model corresponding to the second angle, which is displayed on the preview interface; in the updated model, the element texture corresponding to the first element graphic is located at the second coordinates of the plane and rendered in the second form.
For example, referring to fig. 5, after the logo element picture 501 is obtained, the logo element graphic in it is rendered as a texture of the front plane of the target three-dimensional model 502 according to the preset association relationship; this gives the logo element graphic its initial coordinates and initial form on the front plane of the target three-dimensional model 502. When a rotation operation of the target three-dimensional model is triggered, i.e., rotating to the second angle and displaying the next preview frame, the three-dimensional model of the first card is rendered again with the logo element picture 501 to obtain the updated three-dimensional model 502'. The logo element texture 501' in the updated three-dimensional model 502' has the second form and second coordinates, which visually appears as an animation of the logo element graphic changing from its initial coordinates and form to the second coordinates and form while the preview frames are displayed.
It can be understood that when the target three-dimensional model is dynamically updated, the update speed is related to the preset update time: the shorter the preset update time, i.e., the shorter the update interval, the faster the update and the faster the preview of the target three-dimensional model changes.
In the dynamic updating process, each element graphic is transformed from its initial coordinates through the second coordinates to its final coordinates, so the element graphic appears to move along a certain trajectory at a certain speed, for example 0.01 mm per second.
It should be understood that if a plurality of element data or a plurality of element pictures are included, before one frame of preview picture is dynamically displayed as the next frame of preview picture, all the element data or element pictures determine the coordinates and the forms of the respective element data or element pictures at the second angle of the corresponding plane, and then the three-dimensional model of the first card is re-rendered once, the re-rendered updated target three-dimensional model is obtained, and the preview picture of the model is displayed.
Therefore, the three-dimensional model is rendered once again when a preview picture at one angle is displayed, the preview pictures of the first card at different angles or visual angles are presented, an animation effect within a certain angle range (such as 180-360 degrees) is further formed, and a user can see the three-dimensional visual effect of the first card at different angles.
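The per-tick coordinate update behind this animation can be sketched as linear interpolation between the initial and second coordinates. This Python sketch is illustrative; the positions and step count are assumed example values:

```python
def interpolate(p0, p1, steps):
    """Coordinates of an element graphic at each update tick as it moves
    from its initial position p0 to its target position p1; each returned
    point corresponds to one re-render of the three-dimensional model."""
    (x0, y0), (x1, y1) = p0, p1
    return [(x0 + (x1 - x0) * i / steps, y0 + (y1 - y0) * i / steps)
            for i in range(1, steps + 1)]

# Four re-renders take the logo texture from (0, 0) to (8, 4):
print(interpolate((0, 0), (8, 4), 4))
# → [(2.0, 1.0), (4.0, 2.0), (6.0, 3.0), (8.0, 4.0)]
```

With a preset update time of t seconds per tick, the apparent speed is the step length divided by t, matching the "shorter interval, faster preview" behavior described above.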
In another embodiment, in order to enable the user to see the three-dimensional visual effect of the first card under different angles, the preview angle of the target three-dimensional model can be changed by changing the corresponding view field space of the three-dimensional model. Optionally, in this embodiment of the application, the target three-dimensional model is a model corresponding to the first field space; then after step S104, the method may include steps S113-S114:
S113, adjusting the coordinates, view field angle, and view field ratio of the observation point corresponding to the three-dimensional model of the first card to obtain a second view field space;
and S114, displaying a preview picture of the target three-dimensional model in the second view field space.
By changing the observation point and the field space relative to the three-dimensional model, the angle of the observation target three-dimensional model is changed, and a corresponding preview picture is presented. Further, steps S113 to S114 can be repeated to dynamically display the preview image of the target three-dimensional model at each viewing angle.
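Changing the observation point while keeping its distance to the model is an orbit around the model's center. The following Python sketch is illustrative (the orbit in the XZ plane and the distance value are assumptions):

```python
import math

def orbit_viewpoint(radius, angle_deg):
    """Observation-point coordinates on a circle of the given radius around
    the model's center (the origin), in the XZ plane; the distance to the
    model stays fixed while the viewing angle changes."""
    a = math.radians(angle_deg)
    return (radius * math.sin(a), 0.0, radius * math.cos(a))

# Sweeping the observation point from 0 to 90 degrees at distance A = 200:
for ang in (0, 90):
    x, y, z = orbit_viewpoint(200, ang)
    print(round(x, 6), y, round(z, 6))
# 0 degrees: the initial front view on the Z axis; 90 degrees: a side-on view.
```

Repeating this for successive angles and re-rendering after each step produces the dynamic multi-angle preview described above.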
In other embodiments, in order to facilitate the user to freely and flexibly view the preview screen of each angle of the target three-dimensional model obtained in steps S101 to S103, optionally, in this embodiment, step S104 displays the preview screen of the target three-dimensional model, which may specifically include steps S10401 to S10402:
S10401, receiving a first input of a user.
The first input may be a click input, or a voice instruction input by a user, or a specific gesture or an air-space gesture input by the user, which may be determined according to actual usage requirements, and this embodiment does not limit this.
The click input can be single click input, double click input or click input of any number of times, and can also be long-press input or short-press input. The specific gesture may be any one of a tap gesture, a double tap gesture, a swipe gesture, a drag gesture, a zoom gesture, and a rotate gesture.
S10402, responding to the first input, dynamically updating a preview picture of the corresponding angle of the displayed target three-dimensional model.
And the user rotates, turns over and moves the target three-dimensional model or changes the corresponding viewing cone space through clicking or sliding input of a screen by fingers or clicking or dragging input controlled by a mouse, and dynamically updates and displays a preview picture of the corresponding angle of the target three-dimensional model.
In the process of dynamically updating the preview picture of the corresponding angle of the displayed target three-dimensional model, when the target three-dimensional model is triggered to rotate, turn over and move or the corresponding view cone space is changed through the first input, the preview picture of the displayed target three-dimensional model can be dynamically updated according to the updating data determined by the first input. Therefore, the user can freely control the three-dimensional effect of each angle and visual angle after the first card is rendered, the user can conveniently determine whether the rendered element texture meets the expected requirement, and the card which does not meet the requirements of the user is prevented from being manufactured after being sent to a production system.
Optionally, in the process of dynamically displaying the preview picture of the target three-dimensional model according to the user input, step S10402 may specifically include steps S104021 to S104023:
S104021, in response to the first input, determining update data for update-rendering the three-dimensional model of the first card.
The update data may include a second angle corresponding to the three-dimensional model or a second coordinate and a second modality corresponding to the second field of view space or the element data.
Preview screens at different angles of the stereoscopic model may be produced by a change in the model itself (e.g., rotation or flipping), a change in the view frustum of the observation point relative to the model (e.g., a change in the observation point's coordinates or view field angle), a change in the coordinates and form of the element data on the model, or the like. Therefore, after receiving a first input of a user (for example, dragging the target three-dimensional model with the mouse to rotate it), the angle of the target three-dimensional model, the corresponding view frustum space, or the coordinates and form of the element data rendered on the model can be changed in response to the first input, so that animation effects such as rotation, flipping, zooming in, zooming out, and translation of the target three-dimensional model of the first card can be presented.
Thus, in response to the first input, a second angle of the three-dimensional model, a second coordinate of the second field of view space or corresponding element data, and a second modality are determined, i.e. the angle of the target three-dimensional model, the coordinates and modality of the field of view space and the element data in the next frame to be previewed are determined.
S104022, updating and rendering the three-dimensional model by using the corresponding element data according to the updating data to obtain an updated target three-dimensional model;
and S104023, displaying a preview picture of the updated target three-dimensional model.
After the second angle of the three-dimensional model, the second field-of-view space, or the second coordinate and second form of the element data are determined, the three-dimensional model of the first card is re-rendered with the corresponding element data according to the update data to obtain the rendered updated target three-dimensional model, and the corresponding preview picture is displayed.
The above steps are repeated to render the preview pictures of subsequent frames and present an animation of the three-dimensional model of the first card at different angles and viewing angles.
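As a minimal sketch of how a first input might map to update data, the rotation case can be expressed as a pure function of the drag distance. The pixel-to-degree sensitivity and the function name are assumptions for illustration; the application does not fix this mapping:

```python
def second_angle(first_angle_deg: float, drag_dx_px: float,
                 sensitivity: float = 0.25) -> float:
    """Map a horizontal mouse-drag distance (the first input) to the second
    angle of the target three-dimensional model for the next preview frame."""
    return (first_angle_deg + drag_dx_px * sensitivity) % 360.0

# Dragging 40 px to the right from a 0-degree pose yields a 10-degree pose.
angle = second_angle(0.0, 40.0)
```

The same pattern extends to the other update-data components (observation point coordinates, field angle, element coordinates and form), each updated from the corresponding input delta before re-rendering.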
In some embodiments, when the first card is a raster (lenticular) card, in addition to the rendering of each element of the card body, the designed first card needs to present different pictures at different angles. Therefore, to meet users' diversified card-making requirements, optionally, the element data acquired in step S101 may include a group of continuous frames of static element pictures, each frame containing one or more types of element graphics.
Illustratively, continuous frames of static element pictures are a group of element pictures whose content is related: they have a fixed arrangement order, and, viewed in that order, the content of the group exhibits a certain trend of picture change.
Illustratively, a group of continuous frames of static element pictures may be captured from a dynamic image.
After the element data is acquired, the three-dimensional model of the first card may be determined through step S102, and the continuous frames of static element pictures are rendered into the three-dimensional model through step S103, respectively, so as to present a raster visual effect. Specifically, the step S103 may include steps S10304 to S10305:
S10304, aligning the centers of the N continuous frames of static element pictures with the plane centers of the three-dimensional model corresponding to N angles respectively, where N is an integer greater than 1.
That is, the N continuous frames of static element pictures are aligned with the plane centers of the three-dimensional model at the N corresponding preset angles. For example, if 60 continuous frames of static element pictures need to be rendered as the front cover of the card body, a preset angle is determined every 3°, and the fixed-order frames are assigned to the preset angles in ascending order of angle.
S10305, respectively rendering the element textures of the three-dimensional model of the first card according to the element graphs corresponding to the N continuous frames of static element pictures to obtain N target three-dimensional models, wherein the N target three-dimensional models respectively correspond to N angles.
After the correspondence between the continuous frames of static element pictures and the preset angles is determined, the three-dimensional model of the first card is rendered accordingly to obtain the target three-dimensional models at the N angles.
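The frame-to-angle correspondence can be sketched as a simple ordered assignment, as in the 60-frame, 3° example above. The function name and return shape are illustrative assumptions:

```python
def frame_angle_map(n_frames: int, interval_deg: float) -> list[tuple[float, int]]:
    """Assign fixed-order frames to preset angles in ascending angle order:
    frame i is rendered as the cover of the model posed at i * interval_deg."""
    return [(i * interval_deg, i) for i in range(n_frames)]

# 60 frames, one preset angle every 3 degrees: frame 0 at 0°, frame 59 at 177°.
pairs = frame_angle_map(60, 3.0)
```

Each pair then drives one rendering pass of step S10305, producing one of the N target three-dimensional models.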
Correspondingly, in this embodiment, the step S104 may include steps S10403 to S10404:
S10403, acquiring a second input, and determining a third angle, where the third angle is one of the N angles;
S10404, in response to the second input, displaying a preview screen of the target three-dimensional model corresponding to the third angle.
The second input may be a click input, a voice instruction input by the user, or a specific gesture or mid-air gesture input by the user; it may be determined according to actual usage requirements, and this embodiment does not limit it.
After the target three-dimensional models at the N angles are obtained, the third angle to be previewed is determined according to the second input of the user, and the preview picture of the target three-dimensional model corresponding to the third angle is displayed in response to the second input. By switching the displayed preview pictures at different angles, a raster visual effect of a moving picture is formed visually.
In other embodiments, the content change between adjacent pictures is generally small: the backgrounds are substantially the same, and the element graphics change somewhat in angle and/or position. Therefore, if the number of continuous frames of static element pictures is small, interpolated pictures that follow the content-change trend between adjacent preview pictures can be calculated by interpolation and inserted between those adjacent preview pictures, improving the continuity and fluency of the preview transitions. For example, after the N target three-dimensional models are obtained through S10304 to S10305, the preview pictures of the N target three-dimensional models may be obtained, and step S103 in the method may further include steps S10306 to S10307:
S10306, performing interpolation calculation according to the Mth preview picture and the (M+1)th preview picture among the preview pictures of the N target three-dimensional models to obtain U interpolated pictures; and
S10307, inserting the U interpolated pictures between the Mth preview picture and the (M+1)th preview picture to obtain N+U target preview pictures, where M is a positive integer, M+1 ≤ N, and U is a positive integer.
For example, suppose that among the preview pictures of the N target three-dimensional models, the corresponding N angles fall between 0° and 180° at 30° intervals, and the user wants to preview the model rotating from 0° to 180°. To make the display smoother, interpolation is performed between the 30° preview picture and the 60° preview picture to obtain two interpolated pictures, for example a picture corresponding to 40° and a picture corresponding to 50°. After the N+U target preview pictures are obtained through steps S10306 to S10307, they can be displayed through steps S10403 to S10404.
For example, while displaying the model rotating from 0° to 180°, after the preview picture corresponding to 30° is displayed, the two interpolated pictures are displayed in sequence, followed by the preview picture corresponding to 60°, yielding a preview animation with more coherent content changes and a more vivid raster display effect.
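A minimal sketch of the interpolation in S10306 to S10307, assuming simple per-pixel linear blending between adjacent preview pictures (the application does not specify the interpolation; a real renderer might instead interpolate the model pose and re-render):

```python
import numpy as np

def interpolate_previews(pic_m, pic_m1, u: int):
    """Blend the Mth and (M+1)th preview pictures into U in-between pictures
    at evenly spaced blend factors t = 1/(U+1), ..., U/(U+1)."""
    a = np.asarray(pic_m, dtype=float)
    b = np.asarray(pic_m1, dtype=float)
    return [(1 - t) * a + t * b
            for t in (k / (u + 1) for k in range(1, u + 1))]

# Two interpolated pictures between the 30° and 60° previews correspond to
# blend factors 1/3 and 2/3, i.e. roughly the 40° and 50° poses.
```

Inserting the returned pictures between the two source frames yields the N+U target preview pictures described above.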
It should be understood that, in the foregoing embodiments, after the user previews the target three-dimensional model rendered from the element data, the element data, the target three-dimensional model, and the like may be packaged together into a data packet and sent to a subsequent production system for manufacturing.
For cards that change with the external environment or that display and change according to external input, such as fingerprint cards and display cards, optionally, the element data acquired in step S101 may include corresponding element pictures. For example, a fingerprint sensor element picture needs to be acquired when a fingerprint card is manufactured, and a display screen element picture needs to be acquired when a card body with a display screen is manufactured; these element pictures may be acquired in the same way as other element pictures such as logo elements.
Furthermore, material pictures and light-effect pictures can be obtained according to the temperature display elements, light display elements, display screen digit changes, and the like of the actual card body.
In order to obtain a more realistic preview effect, for a card with a display element, optionally, in this embodiment of the application, after the target three-dimensional model is obtained through rendering in steps S101 to S103, when previewing the model in step S104, the method may specifically include steps S10405 to S10407:
S10405, presetting a timer or an input interface, where the timer triggers a change of the corresponding display element picture at fixed intervals, and the input interface receives a virtual user instruction to trigger a change of the corresponding display element picture;
S10406, determining the element graphic corresponding to the display element picture according to the trigger instruction of the timer or the input interface;
S10407, updating the preview picture of the target three-dimensional model according to the element graphic of the display element picture.
For example, if the timer is set to change the digit (that is, the digit element graphic) on a display screen once every 1 s, the three-dimensional model is re-rendered every 1 s according to the timer setting, and a new target three-dimensional model is obtained and displayed.
Similarly, the preview picture of the target three-dimensional model can be dynamically updated according to the trigger instruction of the input interface for displaying.
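A minimal sketch of the timer-driven update in S10405 to S10407, with the frame-selection logic factored out of any real timer loop. The class name, the string frames, and the 1 s period are illustrative assumptions:

```python
class DisplayElementTimer:
    """Pick which digit element graphic a simulated card display shows,
    given a fixed re-render period (1 s in the example above)."""

    def __init__(self, digit_frames, period_s: float = 1.0):
        self.digit_frames = list(digit_frames)
        self.period_s = period_s

    def current_frame(self, elapsed_s: float):
        # One re-render per elapsed period, cycling through the digit graphics.
        return self.digit_frames[int(elapsed_s // self.period_s)
                                 % len(self.digit_frames)]

timer = DisplayElementTimer(["0", "1", "2"], period_s=1.0)
```

In a real preview, each returned frame would be rendered into the display-screen region of the target three-dimensional model before the preview picture is refreshed.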
To meet users' diversified requirements and obtain a target three-dimensional model with raised texture, optionally, if the element data includes a first element picture containing a first element graphic and a second element picture containing a second element graphic, a 3D card body with raised graphics and text can be generated based on the element data. In the embodiment of the present application, step S103 may specifically include S10308 to S10310:
S10308, creating a graphic three-dimensional model of the first element graphic according to the first element graphic;
S10309, rendering the second element graphic into an element texture of the three-dimensional model of the first card to obtain a card three-dimensional model;
S10310, superimposing the graphic three-dimensional model on the corresponding plane of the card three-dimensional model to generate the target three-dimensional model.
A graphic three-dimensional model of the first element graphic is created from the first element graphic and serves as the raised graphics and text in the final model. For example, the logo graphic in a logo element picture generates a corresponding graphic three-dimensional model; then the second element graphic (such as chip and character element graphics, or the graphic of a cover element picture serving as the base map) is overlaid in order and rendered into the texture of the three-dimensional model of the first card to obtain the card three-dimensional model. The graphic three-dimensional model is then superimposed on the corresponding plane of the card three-dimensional model to generate a target three-dimensional model with raised graphics and text on its surface.
When the graphic three-dimensional model is superimposed at the corresponding position on the corresponding plane of the first card, its lower surface is made to coincide with the front surface of the card body so that the two are integrated. When the data related to the target three-dimensional model is later transmitted to the production system, the vector data of the corresponding 3D printing layer is transmitted with it, so that the corresponding 3D graphics and text can be printed, meeting users' diversified card-design requirements.
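The placement in S10310 can be sketched as follows. The coordinate convention (z normal to the card front) and the helper name `place_raised_graphic` are assumptions for illustration:

```python
def place_raised_graphic(card_front_z: float, graphic_height: float,
                         center_xy: tuple[float, float]) -> dict:
    """Position an extruded graphic so its lower surface coincides with the
    card front, making the raised graphic and the card body one piece."""
    return {
        "center_xy": center_xy,
        "z_bottom": card_front_z,                # flush with the card front
        "z_top": card_front_z + graphic_height,  # raised above the surface
    }

# A 0.2-unit-high logo centered at (10, 20) on a card front at z = 0.38.
logo = place_raised_graphic(card_front_z=0.38, graphic_height=0.2,
                            center_xy=(10.0, 20.0))
```

The z-range produced here is what would feed the 3D-printing layer data mentioned above.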
Fig. 6 is a schematic structural diagram of a card preview device according to an embodiment of the present application. As shown in fig. 6, the apparatus includes:
an obtaining module 601, configured to obtain element data;
a creating module 602, configured to create a three-dimensional model of a first card in a preset three-dimensional scene;
the generating module 603 is configured to render the three-dimensional model of the first card according to the element data, and generate a target three-dimensional model of the first card;
and a display module 604, configured to display a preview screen of the target three-dimensional model.
For example, the obtaining module 601 may perform the step S101 shown in fig. 1, the creating module 602 may perform the step S102 shown in fig. 1, the generating module 603 may perform the step S103 shown in fig. 1, and the displaying module 604 may perform the step S104 shown in fig. 1.
It should be noted that, for all relevant content of each step in the above method embodiment, reference may be made to the functional description of the corresponding functional module, with the corresponding technical effects achieved; for brevity, details are not repeated here.
Optionally, in some embodiments, the apparatus may include:
the first recording module 605 is configured to record, after the preview screen of the target three-dimensional model is displayed, an interface change in a process of displaying the preview screen of the target three-dimensional model, and generate a first recorded video.
Optionally, the apparatus may comprise:
the second recording module 606 is configured to, after the preview screen of the target three-dimensional model is displayed, capture interface content in a process of displaying the preview screen of the target three-dimensional model, and generate a first recording picture.
Optionally, the element data includes card body shape data of the first card; the creation module 602 may include:
the first generation submodule 60201 is configured to generate a card body plane graph of the first card according to the card body shape data in a field space in a preset three-dimensional scene, where the field space is a viewing cone space determined by coordinates, a field angle and a field ratio corresponding to a preset observation point in the preset three-dimensional scene, and a center of the card body plane graph is associated with the coordinates corresponding to the observation point;
and the second generation submodule 60202 is used for performing thickness stretching on the card body plane graph to generate a three-dimensional model of the first card.
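The plane-then-stretch construction performed by submodules 60201 and 60202 can be sketched as follows. The ID-1 card dimensions are an assumed example; the actual card body shape data comes from the element data:

```python
def extrude_card(width: float, height: float, thickness: float):
    """Stretch a card-body plane rectangle along z to form the 8 corner
    vertices of the card's three-dimensional box, centered at the origin."""
    hw, hh, ht = width / 2, height / 2, thickness / 2
    return [(x, y, z)
            for x in (-hw, hw) for y in (-hh, hh) for z in (-ht, ht)]

# Standard ID-1 card size in millimetres (85.60 x 53.98 x 0.76).
corners = extrude_card(85.6, 53.98, 0.76)
```

A real implementation would build a mesh (faces and normals) rather than bare vertices, with the plane center tied to the observation point coordinates as described.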
Optionally, the element data includes one or more of the following types:
logo element data, chip element data, cover element data, text element data, material data, and light effect data.
Optionally, the element data includes first element data and second element data, and the first element data is cover element data; the generating module 603 may include:
a first overlay sub-module 60301, configured to take the first element data as a base map texture, and overlay the second element data on an upper layer of the first element data to form first rendering texture data;
the first rendering submodule 60302 is configured to render the first rendering texture data into an element texture of a plane corresponding to the three-dimensional model of the first card, so as to obtain the target three-dimensional model.
Optionally, in some embodiments, the element data includes first element data and second element data, the first element data is cover element data, the second element data is an element picture, a first element region on the element picture corresponding to the second element data represents a corresponding element graphic, and a region on the element picture except the first element region is transparent; the first superposition sub-module 60301 may include:
a first overlay grandchild module 603011, configured to align a center of the second element data with a center of the first element data and overlay the center on an upper layer of the first element data to form first rendering texture data; and
a first alignment grandchild module 603012, configured to align the center of the first rendering texture data with the center of the corresponding plane of the three-dimensional model.
Optionally, in some embodiments, the element data includes first element data and second element data, the first element data is cover element data, the second element data is an element picture, the element picture includes an element graphic, and the first overlay sub-module 60301 specifically includes:
a second stacking grandchild module 603013, configured to use the first element data as a base map texture, and stack the second element data on an upper layer of the first element data according to a preset coordinate position to form first rendering texture data; and
a second alignment grandchild module 603014, configured to take the center of the first element data as the center of the first rendering texture data and align it with the center of the corresponding plane of the three-dimensional model.
Optionally, the generating module 603 in the apparatus may further include:
a third generating sub-module 60303, configured to generate, after the first rendering texture data is formed, first rendering texture data on which the preset ambient light is superimposed according to the preset ambient light data and the first rendering texture data;
correspondingly, the first rendering sub-module 60302 may specifically be configured to: rendering the first rendering texture data after the preset ambient light is superposed into element textures of the plane corresponding to the three-dimensional model of the first card, and obtaining the target three-dimensional model.
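A minimal sketch of superimposing preset ambient light onto the first rendering texture data, assuming a simple multiplicative light model (the application does not specify the blend used; the function name is illustrative):

```python
import numpy as np

def superimpose_ambient(texture_rgb, ambient_rgb, intensity: float):
    """Multiply a preset ambient light colour into the first rendering
    texture data (all channel values assumed to lie in 0..1)."""
    tex = np.asarray(texture_rgb, dtype=float)
    amb = np.asarray(ambient_rgb, dtype=float) * intensity
    return np.clip(tex * amb, 0.0, 1.0)

# A white texel under half-intensity white ambient light becomes mid-grey.
lit = superimpose_ambient([[[1.0, 1.0, 1.0]]], [1.0, 1.0, 1.0], 0.5)
```

The lit texture is then what the first rendering sub-module applies to the corresponding plane of the three-dimensional model.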
Optionally, in some embodiments, the element data includes a first element picture including a first element graphic therein; the target three-dimensional model is a model corresponding to a first angle; after displaying the preview screen of the target three-dimensional model, the apparatus may further include:
the first determining module 609 is configured to determine, according to the target three-dimensional model, the initial coordinates and initial form of the first element graphic at the first angle on the corresponding plane of the target three-dimensional model;
a second determining module 610, configured to determine second coordinates and a second form of the first element graphic at a second angle on the corresponding plane of the target three-dimensional model;
a first updating module 611, configured to update and render the three-dimensional model according to the first element graphic at a preset update time to obtain an updated target three-dimensional model corresponding to the second angle, where the element texture corresponding to the first element graphic in the updated target three-dimensional model is located at the second coordinates of the plane and is rendered into the second form;
the first dynamic display module 612 is configured to dynamically update the displayed preview screen of the target three-dimensional model to the preview screen of the updated target three-dimensional model.
Optionally, in some embodiments, the target three-dimensional model is a model corresponding to the first field-of-view space; after displaying the preview screen of the target three-dimensional model, the apparatus may include:
an adjusting module 613, configured to adjust coordinates, a view field angle, and a view field ratio of an observation point corresponding to the three-dimensional model of the first card, to obtain a second view field space;
and a second display module 614, configured to display a preview screen of the target three-dimensional model in the second field space.
Optionally, in some embodiments, the display module 604 may specifically include:
a first receiving sub-module 60401 for receiving a first input from a user,
and a first dynamic update sub-module 60402 for dynamically updating the preview screen displaying the corresponding angle of the target three-dimensional model in response to the first input.
Optionally, in some embodiments, the target three-dimensional model is a model corresponding to a first angle; the first dynamic update submodule 60402 may include:
a first determining grandchild module 604021, configured to determine, in response to the first input, update data for updating and rendering the three-dimensional model of the first card, the update data including a second angle corresponding to the three-dimensional model, a second field-of-view space, or second coordinates and a second form corresponding to the element data;
a first rendering grandchild module 604022, configured to perform, according to the update data, update rendering on the three-dimensional model using the element data, to obtain an update target three-dimensional model;
and a first display grandchild module 604023, configured to display a preview screen of the updated target three-dimensional model.
Optionally, in some embodiments, the element data includes N continuous frame static element pictures, the element pictures include corresponding element graphics, and the generating module 603 may include:
a third alignment submodule 60304, configured to align centers of N consecutive frames of static element pictures with a plane center of the three-dimensional model corresponding to N angles, where N is an integer greater than 1;
a fourth rendering submodule 60305, configured to respectively render the element textures of the three-dimensional model of the first card according to the element graphics corresponding to the respective N consecutive frames of static element pictures, so as to obtain N target three-dimensional models, where the N target three-dimensional models respectively correspond to the N angles;
correspondingly, the display module 604 may specifically include:
the second obtaining submodule 60403 is configured to obtain a second input, and determine a third angle, where the third angle is one of N angles;
and a third display sub-module 60404 for displaying a preview of the three-dimensional model of the object corresponding to a third angle in response to the second input.
Optionally, in some embodiments, the generating module 603 in the apparatus may include:
a calculation module 60306, configured to perform interpolation calculation according to the Mth preview picture and the (M+1)th preview picture among the preview pictures of the N target three-dimensional models after the N target three-dimensional models are obtained, to obtain U interpolated pictures;
an inserting module 60307, configured to insert the U interpolated pictures between the Mth preview picture and the (M+1)th preview picture to obtain N+U target preview pictures.
Optionally, in some embodiments, the element data includes a first element picture and a second element picture, the first element picture includes a first element graphic, the second element picture includes a second element graphic, and the generating module 603 may include:
a creating sub-module 60308 for creating a graphical three-dimensional model of the first element graphic based on the first element graphic;
a fifth rendering submodule 60309, configured to render the second element graph into an element texture of the three-dimensional model of the first card, so as to obtain a card three-dimensional model;
and a third superimposing submodule 60310, configured to superimpose the graphical three-dimensional model onto a corresponding plane of the three-dimensional model of the first card, so as to generate a target three-dimensional model.
Fig. 7 shows a hardware structure diagram of an electronic device provided in an embodiment of the present application.
The electronic device may include a processor 701 and a memory 702 storing computer program instructions.
Specifically, the processor 701 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
The memory 702 may include mass storage for data or instructions. By way of example and not limitation, the memory 702 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory 702 may include removable or non-removable (or fixed) media, where appropriate. The memory 702 may be internal or external to the integrated gateway disaster recovery device, where appropriate. In a particular embodiment, the memory 702 is a non-volatile solid-state memory.
The memory may include read-only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, and electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions, and when the software is executed (e.g., by one or more processors), it is operable to perform the operations described with reference to the methods according to an aspect of the application.
The processor 701 implements any of the card preview methods in the above embodiments by reading and executing computer program instructions stored in the memory 702.
In one example, the electronic device may also include a communication interface 703 and a bus 710. As shown in fig. 7, the processor 701, the memory 702, and the communication interface 703 are connected by a bus 710 to complete mutual communication.
The communication interface 703 is mainly used for implementing communication between modules, apparatuses, units and/or devices in this embodiment of the application.
Bus 710 includes hardware, software, or both coupling the components of the electronic device to each other. By way of example and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of these. Bus 710 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
In addition, in combination with the card preview method in the foregoing embodiments, the embodiments of the present application may provide a computer storage medium to implement. The computer storage medium having computer program instructions stored thereon; the computer program instructions, when executed by a processor, implement any of the card preview methods in the above embodiments.
It is to be understood that the present application is not limited to the particular arrangements and instrumentality described above and shown in the attached drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions or change the order between the steps after comprehending the spirit of the present application.
The functional blocks shown in the above structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
Aspects of the present application are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware for performing the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As will be apparent to those skilled in the art, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered within the scope of the present application.

Claims (19)

1. A card preview method, the method comprising:
acquiring element data;
creating a three-dimensional model of a first card in a preset three-dimensional scene;
rendering the three-dimensional model of the first card according to the element data to generate a target three-dimensional model of the first card;
displaying a preview picture of the target three-dimensional model;
wherein the element data comprises card body shape data of the first card;
the creating of the three-dimensional model of the first card in the preset three-dimensional scene comprises:
generating a card body plane graph of the first card according to the card body shape data in a view field space in a preset three-dimensional scene, wherein the view field space is a view cone space determined by coordinates, view field angles and view field proportions corresponding to an observation point preset in the preset three-dimensional scene, and the center of the card body plane graph is associated with the coordinates corresponding to the observation point;
and performing thickness stretching on the card body plane graph to generate a three-dimensional model of the first card.
2. The card preview method of claim 1, wherein the element data includes one or more of the following types:
mark element data, element data, cover element data, text element data, material data and light effect data.
3. The card preview method of claim 1, wherein the element data is data uploaded by a user.
4. The card preview method according to claim 1, wherein the element data includes first element data and second element data, the first element data being cover element data;
the rendering the three-dimensional model of the first card according to the element data to generate the target three-dimensional model of the first card includes:
using the first element data as a base map texture, and overlaying the second element data on top of the first element data to form first rendering texture data; and
rendering the first rendering texture data as an element texture of a plane corresponding to the three-dimensional model of the first card to obtain the target three-dimensional model.
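The composition of a cover base map with an overlaid element picture in claim 4 can be sketched with the standard "over" compositing operator. The patent does not prescribe a blending rule, so the float-RGBA pixel layout and function names below are illustrative assumptions:

```python
def over(top, base):
    """Porter-Duff 'over': composite one RGBA pixel onto another.
    Channels are floats in [0, 1]; straight (non-premultiplied) alpha."""
    tr, tg, tb, ta = top
    br, bg, bb, ba = base
    a = ta + ba * (1.0 - ta)  # resulting alpha
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda t, b: (t * ta + b * ba * (1.0 - ta)) / a
    return (blend(tr, br), blend(tg, bg), blend(tb, bb), a)

def compose(base_tex, overlay_tex):
    """First rendering texture: composite each pixel of the element picture
    over the cover base map (equal dimensions are assumed)."""
    return [[over(o, b) for o, b in zip(orow, brow)]
            for orow, brow in zip(overlay_tex, base_tex)]
```

Transparent overlay pixels (claim 6's transparent regions outside the element graphic) leave the base map visible, while opaque pixels replace it.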
5. The card preview method of claim 4, wherein after the forming of the first rendering texture data, the method comprises:
generating, according to preset ambient light data and the first rendering texture data, first rendering texture data on which the preset ambient light is superimposed;
wherein the rendering the first rendering texture data as an element texture of the plane corresponding to the three-dimensional model of the first card to obtain the target three-dimensional model comprises:
rendering the first rendering texture data on which the preset ambient light is superimposed as the element texture of the plane corresponding to the three-dimensional model of the first card, to obtain the target three-dimensional model.
6. The card preview method according to claim 4, wherein the second element data is an element picture, a first element region on the element picture represents a corresponding element graphic, and a region on the element picture other than the first element region is transparent;
wherein the using the first element data as a base map texture and overlaying the second element data on top of the first element data to form first rendering texture data comprises:
aligning a center of the second element data with a center of the first element data, and then superimposing the second element data on top of the first element data to form the first rendering texture data; and
aligning a center of the first rendering texture data with a center of a corresponding plane of the three-dimensional model.
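The center alignment in claim 6 reduces to computing a paste offset so that the overlay's center coincides with the base map's center. A minimal sketch, where the function name and integer pixel sizes are assumptions:

```python
def centered_offset(base_size, overlay_size):
    """Top-left offset at which to paste the overlay so its center
    coincides with the base map's center; sizes are (width, height)
    in integer pixels."""
    bw, bh = base_size
    ow, oh = overlay_size
    return ((bw - ow) // 2, (bh - oh) // 2)
```

For example, a 200x100 element picture pasted onto an 800x600 base map would start at offset (300, 250).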
7. The card preview method of claim 4, wherein the second element data is an element picture including an element graphic therein,
wherein the using the first element data as a base map texture and overlaying the second element data on top of the first element data to form first rendering texture data comprises:
using the first element data as the base map texture, and overlaying the second element data on top of the first element data at a preset coordinate position to form the first rendering texture data; and
taking a center of the first element data as a center of the first rendering texture data, and aligning the center of the first rendering texture data with a center of the corresponding plane of the three-dimensional model.
8. The card preview method according to claim 1, wherein the element data includes a first element picture including a first element graphic therein; the target three-dimensional model is a model corresponding to a first angle; and after the displaying of the preview picture of the target three-dimensional model, the method comprises:
determining, according to the target three-dimensional model, an initial coordinate and an initial form of the first element graphic at the first angle on a plane corresponding to the target three-dimensional model;
determining a second coordinate and a second form of the first element graphic at a second angle on the corresponding plane of the target three-dimensional model;
updating and rendering the three-dimensional model according to the first element graphic at a preset update time to obtain an updated target three-dimensional model corresponding to the second angle, wherein an element texture corresponding to the first element graphic in the updated target three-dimensional model is located at the second coordinate of the plane and is rendered in the second form; and
dynamically updating the displayed preview picture of the target three-dimensional model to a preview picture of the updated target three-dimensional model.
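Claim 8's second coordinate and second form can be illustrated by a plain rotation about the card's vertical axis: the element's position follows the rotation, and its apparent width is foreshortened by the cosine of the angle. A sketch under those assumptions (the function names, and treating "form" as foreshortened width, are illustrative, not the patent's definition):

```python
import math

def rotate_y(point, angle_deg):
    """Rotate a 3D point about the vertical (y) axis by `angle_deg`."""
    x, y, z = point
    c = math.cos(math.radians(angle_deg))
    s = math.sin(math.radians(angle_deg))
    return (x * c + z * s, y, -x * s + z * c)

def element_at_angle(initial_xy, width, angle_deg):
    """Second coordinate and 'form' (here: foreshortened width) of an
    element graphic when the card turns from face-on (0 deg) to `angle_deg`.
    The element starts on the card plane at z = 0."""
    x, y = initial_xy
    x2, y2, _ = rotate_y((x, y, 0.0), angle_deg)
    return (x2, y2), abs(width * math.cos(math.radians(angle_deg)))
```

At 0 degrees the coordinate and width are unchanged; at 90 degrees (card edge-on) the element collapses onto the rotation axis with near-zero apparent width.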
9. The card preview method of claim 1, wherein the target three-dimensional model is a model corresponding to a first field-of-view space; and after the displaying of the preview picture of the target three-dimensional model, the method comprises:
adjusting the coordinates, the field-of-view angle and the field-of-view ratio of the observation point to obtain a second field-of-view space; and
displaying a preview picture of the target three-dimensional model in the second field-of-view space.
10. The card preview method according to claim 1, wherein the displaying a preview picture of the target three-dimensional model comprises:
receiving a first input from a user; and
in response to the first input, dynamically updating and displaying a preview picture of a corresponding angle of the target three-dimensional model.
11. The card preview method of claim 10, wherein the target three-dimensional model is a model corresponding to a first angle; and the dynamically updating and displaying, in response to the first input, a preview picture of the corresponding angle of the target three-dimensional model comprises:
in response to the first input, determining update data for updated rendering of the three-dimensional model of the first card, the update data including a second angle or a second field-of-view space corresponding to the three-dimensional model, or a second coordinate and a second form corresponding to the element data;
updating and rendering the three-dimensional model with the element data according to the update data to obtain an updated target three-dimensional model; and
displaying a preview picture of the updated target three-dimensional model.
12. The card preview method according to claim 1, wherein the element data comprises N consecutive frames of static element pictures, the static element pictures including corresponding element graphics therein, and N is an integer greater than 1;
the rendering the three-dimensional model of the first card according to the element data to generate a target three-dimensional model of the first card comprises:
aligning centers of the N consecutive frames of static element pictures respectively with plane centers of the three-dimensional model corresponding to N angles; and
rendering element textures of the three-dimensional model of the first card respectively according to the element graphics corresponding to the N consecutive frames of static element pictures to obtain N target three-dimensional models, wherein the N target three-dimensional models respectively correspond to the N angles; and
the displaying a preview picture of the target three-dimensional model comprises:
acquiring a second input, and determining a third angle, wherein the third angle is one of the N angles; and
in response to the second input, displaying a preview picture of the target three-dimensional model corresponding to the third angle.
13. The card preview method according to claim 12, wherein after the obtaining N target three-dimensional models, the method comprises:
performing interpolation calculation according to an Mth preview picture and an (M+1)th preview picture among preview pictures of the N target three-dimensional models to obtain U interpolation pictures; and
inserting the U interpolation pictures between the Mth preview picture and the (M+1)th preview picture to obtain N+U target preview pictures.
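The interpolation in claim 13 can be sketched as per-pixel linear blending between the Mth and (M+1)th preview pictures. The 0-based indexing, flat pixel lists and function name below are illustrative assumptions; the patent does not specify the interpolation method:

```python
def insert_interpolated(frames, m, u):
    """Insert `u` linearly interpolated frames between the m-th and
    (m+1)-th preview frames (0-based m); each frame is a flat list of
    float pixel values. Returns len(frames) + u frames, matching the
    claim's N + U target preview pictures."""
    a, b = frames[m], frames[m + 1]
    inserted = [[pa * (1.0 - t) + pb * t for pa, pb in zip(a, b)]
                for t in (k / (u + 1) for k in range(1, u + 1))]
    return frames[:m + 1] + inserted + frames[m + 1:]
```

With two one-pixel frames [0.0] and [1.0] and u = 3, this yields the five-frame sequence 0.0, 0.25, 0.5, 0.75, 1.0, smoothing the transition between adjacent preview angles.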
14. The card preview method according to claim 1, wherein after the displaying a preview picture of the target three-dimensional model, the method comprises:
recording interface changes in the process of displaying the preview picture of the target three-dimensional model to generate a first recorded video.
15. The card preview method according to claim 1, wherein after the displaying a preview picture of the target three-dimensional model, the method comprises:
capturing interface content in the process of displaying the preview picture of the target three-dimensional model to generate a first captured picture.
16. The card preview method according to claim 1, wherein the element data comprises a first element picture including a first element graphic therein and a second element picture including a second element graphic therein;
the rendering the three-dimensional model of the first card according to the element data to generate a target three-dimensional model of the first card includes:
creating a graphic three-dimensional model of the first element graphic according to the first element graphic;
rendering the second element graphic as an element texture of the three-dimensional model of the first card to obtain a card three-dimensional model; and
superimposing the graphic three-dimensional model on a corresponding plane of the three-dimensional model of the first card to generate the target three-dimensional model.
17. A card preview apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring element data;
the creating module is used for creating a three-dimensional model of the first card in a preset three-dimensional scene;
the generating module is used for rendering the three-dimensional model of the first card according to the element data to generate a target three-dimensional model of the first card;
the display module is used for displaying a preview picture of the target three-dimensional model;
wherein the element data comprises card body shape data of the first card;
the creation module includes:
the first generation submodule is used for generating a card body planar graphic of the first card according to the card body shape data in a field-of-view space in a preset three-dimensional scene, wherein the field-of-view space is a view frustum space determined by coordinates, a field-of-view angle and a field-of-view ratio corresponding to an observation point preset in the preset three-dimensional scene, and a center of the card body planar graphic is associated with the coordinates corresponding to the observation point; and
the second generation submodule is used for stretching the card body planar graphic in thickness to generate the three-dimensional model of the first card.
18. An electronic device, characterized in that the device comprises: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the card preview method according to any one of claims 1 to 16.
19. A computer storage medium having computer program instructions stored thereon which, when executed by a processor, implement the card preview method of any one of claims 1 to 16.
CN202111316720.5A 2021-11-09 2021-11-09 Card preview method and device and electronic equipment Pending CN113763546A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111316720.5A CN113763546A (en) 2021-11-09 2021-11-09 Card preview method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN113763546A true CN113763546A (en) 2021-12-07

Family

ID=78784628


Country Status (1)

Country Link
CN (1) CN113763546A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091671A (en) * 2022-12-21 2023-05-09 北京纳通医用机器人科技有限公司 Rendering method and device of surface drawing 3D and electronic equipment

Citations (2)

Publication number Priority date Publication date Assignee Title
US20190156567A1 (en) * 2016-04-06 2019-05-23 Beijing Xiaoxiaoniu Creative Technologies Ltd 3D Virtual Environment Generating Method and Device
CN112560158A (en) * 2020-12-23 2021-03-26 杭州群核信息技术有限公司 Table preview body generation method and table design system in home decoration design


Non-Patent Citations (1)

Title
Wang Lele et al. (王乐乐等): "三维模型展示与标注***的设计与实现" [Design and Implementation of a Three-Dimensional Model Display and Annotation ***], 《电脑迷》 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination