CN114758042A - Novel virtual simulation engine, virtual simulation method and device - Google Patents


Info

Publication number: CN114758042A (granted as CN114758042B)
Application number: CN202210664852.5A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: simulation, image, virtual simulation, data, user
Legal status: Active (granted)
Inventors: 邓鑫, 刘辰
Original and current assignee: Shenzhen Zhihua Technology Development Co., Ltd.
Application filed by Shenzhen Zhihua Technology Development Co., Ltd.; priority to CN202210664852.5A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/02 Non-photorealistic rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the technical field of virtual simulation and discloses a novel virtual simulation engine, a virtual simulation method and a device. The method comprises the following steps: carrying out a simulation test according to sample data and, when the test passes, determining to receive a user instruction; acquiring data to be drawn according to the user instruction; performing virtual simulation according to the data to be drawn to obtain a simulation image; acquiring sensor data; determining a simulated action of the user according to the sensor data; and rendering and synchronously displaying according to the simulated action and the simulation image. In this way, the data to be drawn are obtained, virtual simulation is performed on their basis to obtain the simulation image, the simulated action of the user is determined from the sensor data, and the simulation rendering is completed on the basis of the simulated action and the simulation image. Meanwhile, a simulation test is performed before the simulation, and user instructions are accepted only after the test passes, so that the subsequent steps proceed correctly, finally achieving rapid and accurate virtual simulation.

Description

Novel virtual simulation engine, virtual simulation method and device
Technical Field
The invention relates to the technical field of virtual simulation, in particular to a novel virtual simulation engine, a virtual simulation method and a device.
Background
With the development of computer graphics, computing has entered the three-dimensional era, and three-dimensional graphics are ubiquitous. Scientific visualization, computer animation and virtual reality have become three major hot topics of computer graphics in recent years, and research in the field is deepening at an increasing pace. Because graphics convey large amounts of information visually and conveniently, and better match the way people observe and understand the motion of objects, three-dimensional graphics technology has found wide practical application in building virtualization, city planning, scene roaming, effect-scene production, real-estate development, virtual education, exhibition-hall display, restoration of ancient writing, traffic-route design, 3D games and the like. In particular, the virtual simulation industry in China is currently in a period of rapid development, and the mainstream of virtual simulation is also developing in the 3D direction. At present, domestic three-dimensional graphics engines are relatively underdeveloped and cannot perform rapid virtual simulation.
The above is provided only to assist understanding of the technical solution of the present invention and does not constitute an admission that it is prior art.
Disclosure of Invention
The invention mainly aims to provide a novel virtual simulation engine and virtual simulation method, so as to solve the technical problem that the prior art cannot carry out virtual simulation quickly.
In order to achieve the above object, the present invention provides a virtual simulation method, including:
carrying out a simulation test according to sample data, and determining to receive a user instruction when the test passes;
acquiring data to be drawn according to the user instruction;
performing virtual simulation according to the data to be drawn to obtain a simulation image;
acquiring sensor data;
determining a simulated action of the user according to the sensor data;
and rendering and synchronously displaying according to the simulation action and the simulation image.
Optionally, the obtaining data to be drawn according to the user instruction includes:
when the user instruction is a drawing instruction, acquiring current image and audio data of a user according to the drawing instruction;
the virtual simulation is carried out according to the data to be drawn to obtain a simulation image, and the method comprises the following steps:
and performing virtual simulation according to the current image, the audio data and the drawing instruction to obtain a simulation image.
Optionally, the performing virtual simulation according to the current image, the audio data, and the drawing instruction to obtain a simulation image includes:
extracting facial features and accessory features according to the current image;
determining a display scene according to the drawing instruction;
and performing virtual simulation according to the facial features, the accessory features and the display scene to generate a simulation image.
Optionally, the extracting facial features and accessory features from the current image comprises:
performing image segmentation on the current image to obtain a skin color segmentation result and a gray segmentation result of the image;
organ positioning is carried out according to the gray segmentation result, and organs and corresponding positions of the organs are obtained;
extracting contour lines according to the texture features of the organs, the corresponding positions of the organs and the skin color segmentation results to obtain organ contour lines;
performing edge detection on the current image to obtain an edge detection result;
contour line extraction is carried out according to the edge detection result and the image texture characteristics of the current image, and a human face edge contour line is obtained;
obtaining facial features according to the skin color segmentation result, the gray segmentation result, the organ contour line and the face edge contour line;
determining an accessory characteristic according to the current image.
Optionally, the performing edge detection on the current image to obtain an edge detection result includes:
determining a target structural element according to a preset angle and a preset length;
determining an edge strength value according to the target structural element and the current image;
determining a target contour point according to the edge strength value, the target structural element and the current image;
and obtaining an edge detection result according to the target contour point.
Optionally, the obtaining data to be drawn according to the user instruction includes:
when the user instruction is a custom instruction, determining a target custom skin color, target custom facial features, a target custom face shape, a target custom accessory and a target custom tone in a preset configuration library according to the custom instruction;
the performing virtual simulation according to the data to be drawn to obtain a simulation image includes:
and performing virtual simulation according to the target custom skin color, the target custom facial features, the target custom face shape, the target custom accessory, the target custom tone and the custom instruction to obtain a simulation image.
Optionally, the determining a simulated action of the user from the sensor data includes:
determining the position information of the limbs of the user according to the sensor data;
matching the position information with a preset action to obtain a matching result;
and determining the simulated action of the user according to the matching result.
In addition, to achieve the above object, the present invention further provides a virtual simulation apparatus, including:
the test module is used for carrying out a simulation test according to the sample data and determining to receive a user instruction when the test passes;
the acquisition module is used for acquiring data to be drawn according to the user instruction;
the simulation module is used for carrying out virtual simulation according to the data to be drawn to obtain a simulation image;
the acquisition module is also used for acquiring sensor data;
a determination module for determining a simulated action of a user according to the sensor data;
and the rendering module is used for rendering and synchronously displaying according to the simulation action and the simulation image.
In addition, to achieve the above object, the present invention further provides a novel virtual simulation engine, including: a memory, a processor, and a virtual simulation program stored in the memory and executable on the processor, the virtual simulation program being configured to implement the virtual simulation method described above.
In addition, to achieve the above object, the present invention further provides a storage medium having a virtual simulation program stored thereon, wherein the virtual simulation program, when executed by a processor, implements the virtual simulation method as described above.
According to the method, a simulation test is carried out according to sample data, and when the test passes, it is determined that a user instruction is received; data to be drawn are acquired according to the user instruction; virtual simulation is performed according to the data to be drawn to obtain a simulation image; sensor data are acquired; the simulated action of the user is determined according to the sensor data; and rendering and synchronous display are performed according to the simulated action and the simulation image. In this way, the data to be drawn are obtained, virtual simulation is performed on their basis to obtain the simulation image, the simulated action of the user is determined from the sensor data, and the simulation rendering is completed on the basis of the simulated action and the simulation image. Meanwhile, a simulation test is performed before the simulation, and user instructions are accepted only after the test passes, which ensures that the subsequent steps proceed accurately and finally achieves rapid and efficient virtual simulation.
Drawings
FIG. 1 is a schematic structural diagram of a virtual simulation device of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a virtual simulation method according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of an embodiment of a virtual simulation method according to the present invention;
FIG. 4 is a flowchart illustrating a virtual simulation method according to a second embodiment of the present invention;
FIG. 5 is a block diagram of a virtual simulation apparatus according to a first embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a virtual simulation device of a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the virtual simulation device may include: a processor 1001 such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM) such as disk storage. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in FIG. 1 does not constitute a limitation of a virtual simulation device, and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and a virtual simulation program.
In the virtual simulation apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 in the virtual simulation apparatus of the present invention may be disposed in the virtual simulation apparatus, and the virtual simulation apparatus calls the virtual simulation program stored in the memory 1005 through the processor 1001 and executes the virtual simulation method provided in the embodiment of the present invention.
An embodiment of the present invention provides a virtual simulation method, and referring to fig. 2, fig. 2 is a schematic flow diagram of a first embodiment of a virtual simulation method according to the present invention.
The virtual simulation method comprises the following steps:
step S10: and carrying out simulation test according to the sample data, and determining to receive a user instruction when the test result is that the test is passed.
It should be noted that the execution subject of this embodiment is a new virtual simulation engine, a simulation test is performed in the new virtual simulation engine according to sample data, and when a test result is that the test is passed, it is determined that a user instruction is received; acquiring data to be drawn according to a user instruction; performing virtual simulation according to data to be drawn to obtain a simulation image; acquiring sensor data, determining the simulation action of a user according to the sensor data, rendering and synchronously displaying according to the simulation action and the simulation image.
It is understood that the sample data refers to the sample sensor data, sample drawing data, sample simulation images corresponding to the sample drawing data, and corresponding sample synchronous display data stored in a preset database.
In a specific implementation, an image simulation test is performed according to the sample drawing data and the sample simulation image corresponding to the sample drawing data, to determine whether the displayed test simulation image is consistent with the sample simulation image. An action synchronization test is performed according to the sample sensor data and the corresponding sample synchronous display data, to determine whether the synchronously displayed action is consistent with the sample synchronous display data. When both the action synchronization test and the image simulation test pass, the overall test passes, and at this moment it is determined that a user instruction is received.
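The self-test described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: `render` stands in for the engine's drawing routine, plain NumPy arrays stand in for images, and all names are hypothetical.

```python
import numpy as np

def run_self_test(sample_pairs, render, tolerance=0.0):
    """Compare each rendered test image with its stored sample image.

    sample_pairs: iterable of (drawing_data, sample_image) tuples from the
    preset database; render is the engine's drawing routine (hypothetical).
    Returns True only when every rendered image matches its sample.
    """
    for drawing_data, sample_image in sample_pairs:
        test_image = render(drawing_data)
        if test_image.shape != sample_image.shape:
            return False
        # Mean absolute pixel difference must stay within tolerance.
        diff = np.abs(test_image.astype(float) - sample_image.astype(float))
        if diff.mean() > tolerance:
            return False
    return True

# Only when the self-test passes does the engine start accepting user instructions.
```

The action synchronization test would follow the same pattern, replaying sample sensor data and comparing the synchronously displayed action against the stored sample display data.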
Step S20: and acquiring data to be drawn according to the user instruction.
It should be noted that the user instruction refers to an instruction sent by a user to construct and virtually display the avatar, and the data to be drawn is data used for constructing the avatar.
It can be understood that, since the simulation image can be obtained either by performing virtual simulation according to a user image or according to custom content, the instruction type needs to be determined from the user instruction; the construction mode of the simulation image is determined by the instruction type, and the corresponding data to be drawn are then obtained.
In a specific implementation, when the user instruction is a custom instruction, it is determined that the simulation image is obtained by performing virtual simulation according to custom content. Further, the acquiring data to be drawn according to the user instruction includes: when the user instruction is a custom instruction, determining a target custom skin color, target custom facial features, a target custom face shape, a target custom accessory and a target custom tone in a preset configuration library according to the custom instruction. The performing virtual simulation according to the data to be drawn to obtain a simulation image then includes: performing virtual simulation according to the target custom skin color, the target custom facial features, the target custom face shape, the target custom accessory, the target custom tone and the custom instruction to obtain the simulation image.
It should be noted that, when the instruction type corresponding to the user instruction is determined to be a custom instruction, the simulation image is constructed by performing virtual simulation according to custom content; at this moment, the skin color instruction, facial feature configuration instruction, face shape configuration instruction, accessory configuration instruction and tone configuration instruction contained in the custom instruction need to be extracted.
It can be understood that the preset configuration library includes a skin color library, a facial feature library, a face shape library, an accessory library and a tone library. The facial feature library stores various types of facial features; in this embodiment, the facial features include ears, eyes, eyebrows, nose and mouth. For example, the eyebrows include willow-leaf eyebrows, knife-shaped eyebrows, straight eyebrows and the like. The face shape library stores various types of face shapes, the accessory library stores necklaces, earrings, bracelets and clothes of various styles and colors, the tone library stores various types of tones, and the skin color library stores various types of skin colors.
In a specific implementation, the target custom skin color is determined in the skin color library according to the skin color instruction, the target custom facial features are determined in the facial feature library according to the facial feature configuration instruction, the target custom face shape is determined in the face shape library according to the face shape configuration instruction, the target custom accessory is determined in the accessory library according to the accessory configuration instruction, and the target custom tone is determined in the tone library according to the tone configuration instruction. The target custom tone refers to the speaking voice of the simulation image.
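The lookup in the preset configuration library might be sketched as follows. The library contents, field names and asset identifiers are all invented for illustration; the patent does not specify a storage format.

```python
# Hypothetical preset configuration library: each sub-library maps an
# instruction keyword to a drawable asset identifier.
CONFIG_LIBRARY = {
    "skin_color": {"fair": "skin_01", "tan": "skin_02"},
    "facial_feature": {"willow_eyebrow": "brow_01", "straight_eyebrow": "brow_02"},
    "face_shape": {"oval": "face_01", "round": "face_02"},
    "accessory": {"necklace": "acc_01", "earring": "acc_02"},
    "tone": {"soft": "voice_01", "bright": "voice_02"},
}

def resolve_custom_instruction(instruction):
    """Map each field of a custom instruction to its target asset.

    instruction: dict like {"skin_color": "fair", "face_shape": "oval", ...}.
    Raises KeyError when a requested option is absent from the library.
    """
    return {field: CONFIG_LIBRARY[field][choice]
            for field, choice in instruction.items()}
```

Virtual simulation would then assemble the resolved assets into the simulation image.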
It should be noted that the construction of the simulation image is then started based on the custom instruction: virtual simulation is performed according to the target custom skin color, the target custom facial features, the target custom face shape, the target custom accessory and the target custom tone, so that the simulation image under the custom instruction is obtained.
Step S30: and performing virtual simulation according to the data to be drawn to obtain a simulation image.
It should be noted that the construction of the simulation image, including the scene construction and the character construction, is performed according to the data to be drawn, so as to obtain the simulation image including the virtual image and the scene where the virtual image is located.
Step S40: sensor data is acquired.
It should be noted that a camera is connected to the novel virtual simulation engine, and the data acquired by the camera serves as the sensor data.
Step S50: and determining the simulation action of the user according to the sensor data.
It should be noted that the action currently performed by the user can be determined from the sensor data, and the simulation image is displayed according to this simulated action, so that the simulation image stays synchronized with the action of the user.
It is to be understood that, in order to obtain an accurate simulated action, further, the determining a simulated action of the user according to the sensor data includes: determining position information of the user's limb according to the sensor data; matching the position information with a preset action to obtain a matching result; and determining the simulation action of the user according to the matching result.
In a specific implementation, the position information of the user's limbs is determined according to the user image in the sensor data. The limb position information includes user body node information, local limb node information, skeleton position information and body position information. These are matched against the preset actions to obtain corresponding matching results, and the simulated action of the user is determined according to the matching results. For example, suppose the preset actions include actions one to five; if the combination of the user body node information, local limb node information, skeleton position information and body position information matches action three, the simulated action of the user is action three.
It should be noted that, as shown in fig. 3, the position information of limbs 1 to 11 of the user is acquired, where 1 is the head of the user; 2 and 7 are the right and left shoulders, respectively; 3 and 8 are the right and left elbow joints; 4 and 9 are the right and left hands; 5 and 10 are the right and left knees; and 6 and 11 are the right and left feet. The limb position information is matched against the preset actions; if the motion of the user matches preset action one, the simulated action is preset action one.
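A minimal sketch of the matching step, assuming each preset action is stored as a normalized array of the eleven keypoints of fig. 3; the preset values and the distance threshold are placeholders, not values from the patent.

```python
import numpy as np

# Eleven limb keypoints per fig. 3: head, shoulders, elbows, hands, knees, feet.
# Preset actions stored as normalized 11x2 keypoint arrays (illustrative values).
PRESET_ACTIONS = {
    "action_one": np.zeros((11, 2)),
    "action_two": np.ones((11, 2)),
}

def match_action(keypoints, presets=PRESET_ACTIONS, threshold=0.5):
    """Return the preset action whose keypoints lie closest to the observed pose.

    keypoints: 11x2 array of limb positions derived from the sensor data.
    A match requires the mean per-joint distance to fall below threshold;
    otherwise None is returned (no simulated action for this frame).
    """
    best_name, best_dist = None, float("inf")
    for name, preset in presets.items():
        dist = np.linalg.norm(keypoints - preset, axis=1).mean()
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None
```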
Step S60: and rendering and synchronously displaying according to the simulation action and the simulation image.
It should be noted that, after the simulation action is obtained, the simulation image is displayed according to the simulation action, so that the synchronization between the simulation image and the action of the user is achieved.
In this embodiment, a simulation test is carried out according to sample data, and when the test passes, it is determined that a user instruction is received; data to be drawn are acquired according to the user instruction; virtual simulation is performed according to the data to be drawn to obtain a simulation image; sensor data are acquired; the simulated action of the user is determined according to the sensor data; and rendering and synchronous display are performed according to the simulated action and the simulation image. In this way, the data to be drawn are obtained, virtual simulation is performed on their basis to obtain the simulation image, the simulated action of the user is determined from the sensor data, and the simulation rendering is completed on the basis of the simulated action and the simulation image. Meanwhile, a simulation test is performed before the simulation, and user instructions are accepted only after the test passes, which ensures that the subsequent steps proceed accurately and finally achieves rapid and efficient virtual simulation.
Referring to fig. 4, fig. 4 is a flowchart illustrating a virtual simulation method according to a second embodiment of the present invention.
Based on the first embodiment, the step S20 in the virtual simulation method of this embodiment includes:
step S21: and when the user instruction is a drawing instruction, acquiring the current image and audio data of the user according to the drawing instruction.
It should be noted that, when the instruction type corresponding to the user instruction is determined to be a drawing instruction, the simulation image is constructed by performing virtual simulation according to the user image. At this moment, the data collected by the camera and the audio data of the user collected by the recording device are obtained according to the drawing instruction. Face extraction is performed on the data acquired by the camera to obtain the current image of the user; the audio data is used to determine the timbre of the simulation image during its construction.
It is understood that, in order to ensure that the avatar meets the user' S requirements, after the current image and audio data of the user are obtained, the step S30 further includes:
step S31: and performing virtual simulation according to the current image, the audio data and the drawing instruction to obtain a simulation image.
It should be noted that, the construction of the simulation image is started based on the drawing instruction, and the virtual simulation is performed according to the current image and audio data, so that the simulation image under the drawing instruction is obtained and displayed.
It can be understood that, for accurate construction based on the current image of the user, further, the virtual simulation is performed according to the current image, the audio data and the drawing instruction to obtain a simulation image, including: extracting facial features and accessory features according to the current image; determining a display scene according to the drawing instruction; and performing virtual simulation according to the facial features, the accessory features and the display scene to generate a simulation image.
In a specific implementation, the facial features refer to the facial image features of the user, on the basis of which the face of the simulation image is constructed. The accessory features refer to the clothing features and jewelry features of the user; the clothing features include, but are not limited to, clothing shape features, clothing texture features and clothing color features. The worn accessories of the simulation image are constructed on the basis of the accessory features.
It should be noted that the facial features and the accessory features of the user are extracted according to the current image, the corresponding simulation image display scene is determined according to the drawing instruction, virtual simulation is performed according to the facial features and the accessory features, a simulation image is generated, and the selected display scene is set up, so that the simulation image is rendered and displayed in the display scene.
It is to be understood that, in order to extract an accurate facial feature based on a current image, further, the extracting facial features and accessory features from the current image includes: performing image segmentation on the current image to obtain a skin color segmentation result and a gray segmentation result of the image; organ positioning is carried out according to the gray segmentation result, and organs and corresponding positions of the organs are obtained; extracting contour lines according to the texture features of the organs, the corresponding positions of the organs and the skin color segmentation results to obtain organ contour lines; performing edge detection on the current image to obtain an edge detection result; contour line extraction is carried out according to the edge detection result and the image texture characteristics of the current image, and a human face edge contour line is obtained; obtaining facial features according to the skin color segmentation result, the gray segmentation result, the organ contour line and the face edge contour line; determining an accessory characteristic according to the current image.
In a specific implementation, a Gaussian model constructed in the TSL color space from sample training images is used to segment the current image: the similarity of each pixel of the current image to the skin color center is calculated and binarized, thereby obtaining the skin color segmentation result. Gray segmentation is performed on the current image according to prior knowledge to obtain the gray segmentation result.
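The skin color segmentation step can be sketched as follows, assuming a single Gaussian fitted to skin-tone samples; the TSL color conversion itself is omitted, and generic chrominance pairs stand in for the TSL components. Function names and the threshold are illustrative.

```python
import numpy as np

def fit_skin_gaussian(samples):
    """Fit a 2-D Gaussian (mean + inverse covariance) to skin-tone samples.

    samples: Nx2 array of chrominance pairs taken from sample training images.
    """
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    return mean, np.linalg.inv(cov)

def skin_likelihood(pixels, mean, inv_cov):
    """Gaussian similarity of each pixel to the skin color center."""
    d = pixels - mean
    # Squared Mahalanobis distance under the fitted model, per pixel.
    m2 = np.einsum("...i,ij,...j->...", d, inv_cov, d)
    return np.exp(-0.5 * m2)

def segment_skin(pixels, mean, inv_cov, threshold=0.5):
    """Binarize the similarity map to obtain the skin color segmentation result."""
    return skin_likelihood(pixels, mean, inv_cov) >= threshold
```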
It should be noted that, after the gray segmentation result and the skin color segmentation result are obtained, the organs and their corresponding positions can be determined according to the gray segmentation result and the geometric structure of the human face. The organ contour lines are then extracted according to the texture features of the organs, the corresponding positions of the organs and the skin color segmentation result, so that the facial organs can be accurately simulated in subsequent virtual simulation. For example, the corresponding position of an organ determines its approximate location, and the accurate contour line of the organ can be determined from the difference between the organ's texture features and the skin color in the skin color segmentation result.
It can be understood that edge detection is performed on the current image to obtain an edge detection result, and the chin contour line is extracted through point-by-point examination based on the edge detection result and the image texture features, so as to obtain the face edge contour line. Based on the face edge contour line, the face shape can be accurately simulated in subsequent virtual simulation.
In the specific implementation, after a skin color segmentation result, a gray segmentation result, an organ contour line and a face edge contour line are obtained, the skin color segmentation result, the gray segmentation result, the organ contour line and the face edge contour line are used as face features, and the accessory features are determined according to the current image.
It should be noted that, in order to obtain an accurate edge detection result, further, the performing edge detection on the current image to obtain an edge detection result includes: determining a target structural element according to a preset angle and a preset length; determining an edge strength value according to the target structural element and the current image; determining a target contour point according to the edge strength value, the target structural element and the current image; and obtaining an edge detection result according to the target contour point.
It can be understood that, on the basis of increasing texture constraints, an edge detection model constructed by a morphological edge detection operator containing direction information is used for edge detection, and the edge detection model determines a target structural element by using a preset angle and a preset length. For example, the preset directions may be 0 °, 45 °, 90 °, and 135 °, and the preset length may be 3, so as to determine the straight symmetric structural elements in four directions, where the straight symmetric structural elements in four directions are the target structural elements.
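A minimal sketch of building those directional elements, assuming flat (binary) line elements symmetric about their center; the function name and the offset representation are illustrative, not from the patent:

```python
import numpy as np

def line_structuring_element(angle_deg, length=3):
    """Build a flat, origin-symmetric line structuring element of the
    given length along one of the preset directions (0/45/90/135 deg).
    Returned as (row, col) offsets relative to the element's center,
    with image rows growing downward (so 45 deg steps up-and-right)."""
    step = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}[angle_deg]
    half = length // 2
    return [(step[0] * k, step[1] * k) for k in range(-half, half + 1)]
```

With the preset length 3, each of the four directions yields a three-pixel line through the origin; together these four lines are the target structural elements.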
In a specific implementation, after the target structural element is obtained, the difference between the dilation and the erosion of the grayscale image of the current image by the target structural element is taken as the edge intensity value. The dilation is calculated as:

$$(f \oplus B_q)(x, y) = \max_{(s, t) \in D_c} \{ f(x - s, y - t) + B_q(s, t) \}, \quad (x - s, y - t) \in D_g$$

the erosion is calculated as:

$$(f \ominus B_q)(x, y) = \min_{(s, t) \in D_c} \{ f(x + s, y + t) - B_q(s, t) \}, \quad (x + s, y + t) \in D_g$$

and the edge intensity value is:

$$E_q(x, y) = (f \oplus B_q)(x, y) - (f \ominus B_q)(x, y)$$

where $B$ is the target structural element, $q$ is the direction of the target structural element, $f$ is the grayscale image of the current image, $D_g$ and $D_c$ denote the domains of the current image and of the target structural element respectively, and $(x, y)$ are the coordinates of a point.
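The dilation-minus-erosion edge intensity can be sketched as follows for a flat directional element (so the $\pm B_q(s,t)$ terms vanish); the function name and the clip-to-border handling of image edges are assumptions, not details from the patent:

```python
import numpy as np

def edge_strength(f, offsets):
    """Morphological edge intensity under one flat directional structuring
    element: grayscale dilation (local max of f along the element) minus
    erosion (local min of f along the element), evaluated at every pixel.
    Pixels whose element reaches outside the image are clipped to the border."""
    h, w = f.shape
    shifted = []
    for dr, dc in offsets:
        rows = np.clip(np.arange(h) + dr, 0, h - 1)
        cols = np.clip(np.arange(w) + dc, 0, w - 1)
        shifted.append(f[np.ix_(rows, cols)])     # f sampled along the element
    shifted = np.stack(shifted)
    return shifted.max(axis=0) - shifted.min(axis=0)  # dilation - erosion
```

On a vertical step edge, the horizontal (0°) element responds strongly while the vertical (90°) element responds not at all, which is exactly the directional behavior the four target structural elements exploit.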
When the edge direction is calculated, two coordinate systems are constructed from the preset directions: for example, the 0° and 90° target structural elements form one coordinate system, and the 45° and 135° target structural elements form the other. The edge intensity value of each candidate edge point under each target structural element is regarded as the projection of the edge onto that element, and the direction of the contour line is determined in each coordinate system. The direction obtained in a rotated coordinate system must account for that rotation; for example, the direction found in the coordinate system formed by the 45° and 135° target structural elements involves a 45° rotation of the coordinate system. During edge detection, if the angles calculated in the two coordinate systems are the same, or their difference is smaller than a target value, the candidate point is taken as a target contour point.
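One way this agreement test might be realized (a sketch under assumptions: the function name, the use of `atan2` on the projection pairs, and the default tolerance are all illustrative):

```python
import math

def is_target_contour_point(e0, e90, e45, e135, tol_deg=10.0):
    """Treat the edge intensity values under each structural element as
    projections of the edge onto that element's axis. The 0/90 pair and
    the 45/135 pair each form a coordinate system (the second rotated by
    45 degrees); a point is kept as a target contour point only when the
    two systems agree on the contour direction within the tolerance."""
    angle_a = math.degrees(math.atan2(e90, e0)) % 180
    angle_b = (math.degrees(math.atan2(e135, e45)) + 45.0) % 180  # undo rotation
    diff = abs(angle_a - angle_b)
    diff = min(diff, 180 - diff)          # directions are modulo 180 degrees
    return diff <= tol_deg
```

When the two coordinate systems disagree (as happens at noise points with inconsistent responses), the candidate is rejected rather than kept as a contour point.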
It should be noted that after all target contour points are obtained, the set of target contour points constitutes the edge detection result.
In this embodiment, when the user instruction is a drawing instruction, the current image and audio data of the user are obtained according to the drawing instruction, and virtual simulation is performed according to the current image, the audio data and the drawing instruction to obtain a simulation image. Performing virtual simulation based on the current image and audio data under the drawing instruction yields a simulation image that fits the user's real appearance, improving simulation accuracy.
In addition, referring to fig. 5, an embodiment of the present invention further provides a virtual simulation apparatus, where the virtual simulation apparatus includes:
and the test module 10 is used for carrying out simulation test according to the sample data and determining to receive the user instruction when the test result is that the test is passed.
And an obtaining module 20, configured to obtain data to be drawn according to the user instruction.
And the simulation module 30 is used for performing virtual simulation according to the data to be drawn to obtain a simulation image.
The obtaining module 20 is further configured to obtain sensor data.
A determination module 40 for determining a simulated action of the user from the sensor data.
And the rendering module 50 is used for rendering and synchronously displaying according to the simulation action and the simulation image.
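The wiring of modules 10–50 could be sketched as follows; the class, its method, and the callable-per-module decomposition are an illustration of the described structure, not the patent's actual implementation:

```python
class VirtualSimulationDevice:
    """Sketch of the apparatus: each attribute mirrors one numbered module."""

    def __init__(self, test, acquire, simulate, determine, render):
        self.test_module = test                # 10: simulation test on sample data
        self.obtain_module = acquire           # 20: data to be drawn / sensor data
        self.simulation_module = simulate      # 30: virtual simulation -> image
        self.determination_module = determine  # 40: sensor data -> simulation action
        self.rendering_module = render         # 50: render + synchronous display

    def run(self, sample_data, user_instruction, sensor_feed):
        # User instructions are accepted only when the simulation test passes.
        if not self.test_module(sample_data):
            return None
        data = self.obtain_module(user_instruction)
        image = self.simulation_module(data)
        action = self.determination_module(self.obtain_module(sensor_feed))
        return self.rendering_module(action, image)
```

Note how the obtaining module 20 is reused for both the data to be drawn and the sensor data, matching the "further configured to" language of the embodiment.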
In this embodiment, a simulation test is performed according to sample data, and a user instruction is determined to be received when the test passes; data to be drawn is acquired according to the user instruction; virtual simulation is performed according to the data to be drawn to obtain a simulation image; sensor data is acquired; the simulation action of the user is determined according to the sensor data; and rendering and synchronous display are performed according to the simulation action and the simulation image. In this way, the data to be drawn is obtained, a simulation image is produced by virtual simulation based on that data, the user's simulation action is determined from the sensor data, and the simulation rendering is completed based on the simulation action and the simulation image. Meanwhile, a simulation test is performed before the simulation, and user instructions are accepted only after the test passes, ensuring that the subsequent steps proceed accurately and finally achieving fast and efficient virtual simulation.
In an embodiment, the obtaining module 20 is further configured to, when the user instruction is a drawing instruction, obtain a current image and audio data of the user according to the drawing instruction.
In an embodiment, the simulation module 30 is further configured to perform virtual simulation according to the current image, the audio data, and the drawing instruction to obtain a simulation image.
In an embodiment, the simulation module 30 is further configured to extract facial features and accessory features from the current image;
determining a display scene according to the drawing instruction;
and performing virtual simulation according to the facial features, the accessory features and the display scene to generate a simulation image.
In an embodiment, the simulation module 30 is further configured to perform image segmentation on the current image to obtain a skin color segmentation result and a gray scale segmentation result of the image;
organ positioning is carried out according to the gray segmentation result to obtain organs and corresponding positions of the organs;
contour line extraction is carried out according to the texture features of the organ, the corresponding position of the organ and the skin color segmentation result to obtain an organ contour line;
performing edge detection on the current image to obtain an edge detection result;
contour line extraction is carried out according to the edge detection result and the image texture characteristics of the current image, and a human face edge contour line is obtained;
Obtaining facial features according to the skin color segmentation result, the gray segmentation result, the organ contour line and the face edge contour line;
determining an accessory characteristic from the current image.
In an embodiment, the simulation module 30 is further configured to determine a target structural element according to a preset angle and a preset length;
determining an edge strength value according to the target structural element and the current image;
determining a target contour point according to the edge strength value, the target structural element and the current image;
and obtaining an edge detection result according to the target contour point.
In an embodiment, the obtaining module 20 is further configured to determine, according to the user instruction, a target customized facial shape, a target customized accessory and a target customized timbre in a preset configuration library when the user instruction is a customized instruction.
In an embodiment, the simulation module 30 is further configured to perform virtual simulation according to the target customized facial features, the target customized facial form, the target customized accessories, the target customized timbre and the customized instruction to obtain a simulation image.
In an embodiment, the determining module 40 is further configured to determine position information of the user's limb according to the sensor data;
Matching with a preset action according to the position information to obtain a matching result;
and determining the simulation action of the user according to the matching result.
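The matching step of the determination module might be sketched as below; the function name, the mean per-joint Euclidean distance metric, and the threshold are assumptions chosen for illustration, since the patent does not specify the matching criterion:

```python
import math

def match_action(limb_positions, preset_actions, max_distance=0.2):
    """Match sensed limb positions against a library of preset actions by
    mean per-joint Euclidean distance; return the best-matching action
    name, or None when no preset action falls within the threshold."""
    best_name, best_score = None, float('inf')
    for name, template in preset_actions.items():
        dists = [math.dist(p, q) for p, q in zip(limb_positions, template)]
        score = sum(dists) / len(dists)          # mean joint distance
        if score < best_score:
            best_name, best_score = name, score
    return best_name if best_score <= max_distance else None
```

Returning None for out-of-range poses leaves room for the device to fall back to a default simulation action rather than rendering a spurious match.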
Further, it is to be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or system in which the element is included.
The above-mentioned serial numbers of the embodiments of the present invention are only for description, and do not represent the advantages and disadvantages of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g. Read Only Memory (ROM)/RAM, magnetic disk, optical disk), and includes several instructions for enabling a terminal device (e.g. a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A virtual simulation method, characterized in that the virtual simulation method comprises:
carrying out simulation test according to the sample data, and determining to receive a user instruction when the test result is that the test is passed;
acquiring data to be drawn according to the user instruction;
performing virtual simulation according to the data to be drawn to obtain a simulation image;
acquiring sensor data;
determining a simulated action of the user according to the sensor data;
and rendering and synchronously displaying according to the simulation action and the simulation image.
2. The virtual simulation method according to claim 1, wherein the obtaining data to be drawn according to the user instruction comprises:
when the user instruction is a drawing instruction, acquiring current image and audio data of a user according to the drawing instruction;
the virtual simulation is carried out according to the data to be drawn to obtain a simulation image, and the method comprises the following steps:
And performing virtual simulation according to the current image, the audio data and the drawing instruction to obtain a simulation image.
3. The virtual simulation method according to claim 2, wherein the performing the virtual simulation based on the current image, the audio data and the drawing instruction to obtain the simulated image comprises:
extracting facial features and accessory features according to the current image;
determining a display scene according to the drawing instruction;
and performing virtual simulation according to the facial features, the accessory features and the display scene to generate a simulation image.
4. The virtual simulation method of claim 3, wherein said extracting facial features and accessory features from the current image comprises:
performing image segmentation on the current image to obtain a skin color segmentation result and a gray level segmentation result of the image;
organ positioning is carried out according to the gray segmentation result to obtain organs and corresponding positions of the organs;
contour line extraction is carried out according to the texture features of the organ, the corresponding position of the organ and the skin color segmentation result to obtain an organ contour line;
performing edge detection on the current image to obtain an edge detection result;
contour line extraction is carried out according to the edge detection result and the image texture characteristics of the current image, and a human face edge contour line is obtained;
Obtaining facial features according to the skin color segmentation result, the gray segmentation result, the organ contour line and the face edge contour line;
determining an accessory characteristic from the current image.
5. The virtual simulation method of claim 4, wherein the performing the edge detection on the current image to obtain an edge detection result comprises:
determining a target structural element according to a preset angle and a preset length;
determining an edge strength value according to the target structural element and the current image;
determining a target contour point according to the edge strength value, the target structural element and the current image;
and obtaining an edge detection result according to the target contour point.
6. The virtual simulation method according to claim 1, wherein the obtaining data to be drawn according to the user instruction comprises:
when the user instruction is a custom instruction, determining a target custom facial feature, a target custom accessory and a target custom tone in a preset configuration library according to the custom instruction;
the virtual simulation is carried out according to the data to be drawn to obtain a simulation image, and the method comprises the following steps:
and performing virtual simulation according to the target self-defined facial features, the target self-defined facial form, the target self-defined accessory, the target self-defined tone and the self-defined instruction to obtain a simulation image.
7. The virtual simulation method of claim 1, wherein said determining a simulated action of a user from said sensor data comprises:
determining the position information of the limbs of the user according to the sensor data;
matching with a preset action according to the position information to obtain a matching result;
and determining the simulation action of the user according to the matching result.
8. A virtual simulation apparatus, characterized in that the virtual simulation apparatus comprises:
the test module is used for carrying out simulation test according to the sample data and determining to receive a user instruction when the test result is that the test is passed;
the acquisition module is used for acquiring data to be drawn according to the user instruction;
the simulation module is used for carrying out virtual simulation according to the data to be drawn to obtain a simulation image;
the acquisition module is also used for acquiring sensor data;
a determination module for determining a simulated action of a user according to the sensor data;
and the rendering module is used for rendering and synchronously displaying according to the simulation action and the simulation image.
9. A new virtual simulation engine, comprising: a memory, a processor, and a virtual simulation program stored on the memory and executable on the processor, the virtual simulation program configured to implement the virtual simulation method of any of claims 1 to 7.
10. A storage medium having stored thereon a virtual simulation program which, when executed by a processor, implements the virtual simulation method according to any one of claims 1 to 7.
CN202210664852.5A 2022-06-14 2022-06-14 Novel virtual simulation engine, virtual simulation method and device Active CN114758042B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210664852.5A CN114758042B (en) 2022-06-14 2022-06-14 Novel virtual simulation engine, virtual simulation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210664852.5A CN114758042B (en) 2022-06-14 2022-06-14 Novel virtual simulation engine, virtual simulation method and device

Publications (2)

Publication Number Publication Date
CN114758042A true CN114758042A (en) 2022-07-15
CN114758042B CN114758042B (en) 2022-09-02

Family

ID=82336986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210664852.5A Active CN114758042B (en) 2022-06-14 2022-06-14 Novel virtual simulation engine, virtual simulation method and device

Country Status (1)

Country Link
CN (1) CN114758042B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108305325A (en) * 2017-01-25 2018-07-20 网易(杭州)网络有限公司 The display methods and device of virtual objects
US20190302259A1 (en) * 2018-03-27 2019-10-03 The Mathworks, Inc. Systems and methods for generating synthetic sensor data
CN111133409A (en) * 2017-10-19 2020-05-08 净睿存储股份有限公司 Ensuring reproducibility in artificial intelligence infrastructure
CN111408129A (en) * 2020-02-28 2020-07-14 苏州叠纸网络科技股份有限公司 Interaction method and device based on virtual character image and storage medium
CN111695471A (en) * 2020-06-02 2020-09-22 北京百度网讯科技有限公司 Virtual image generation method, device, equipment and storage medium
CN112651288A (en) * 2014-06-14 2021-04-13 奇跃公司 Method and system for generating virtual and augmented reality
CN112711458A (en) * 2021-01-15 2021-04-27 腾讯科技(深圳)有限公司 Method and device for displaying prop resources in virtual scene
CN112906126A (en) * 2021-01-15 2021-06-04 北京航空航天大学 Vehicle hardware in-loop simulation training system and method based on deep reinforcement learning
CN113808007A (en) * 2021-09-16 2021-12-17 北京百度网讯科技有限公司 Method and device for adjusting virtual face model, electronic equipment and storage medium
CN113822973A (en) * 2021-11-24 2021-12-21 支付宝(杭州)信息技术有限公司 Method, apparatus, electronic device, medium, and program for operating avatar
CN114332374A (en) * 2021-12-30 2022-04-12 深圳市慧鲤科技有限公司 Virtual display method, equipment and storage medium
CN114612643A (en) * 2022-03-07 2022-06-10 北京字跳网络技术有限公司 Image adjusting method and device for virtual object, electronic equipment and storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
GYÖRGY WERSENYI et al.: "Effect of Emulated Head-Tracking for Reducing Localization Errors in Virtual Audio Simulation", IEEE Transactions on Audio, Speech, and Language Processing *
LU Xiaojun: "Research on Virtual Human Action Modeling and Behavior Control Technology in Maintenance Simulation", China Doctoral Dissertations Full-text Database (electronic journal) *
LI Yongliang et al.: "Application Research on Forest Virtual Simulation *** Based on CAVE2", Forestry Resources Management *
WANG Yuli: "Research on 3D Visualization Based on OSG", China Masters' Theses Full-text Database (electronic journal) *

Also Published As

Publication number Publication date
CN114758042B (en) 2022-09-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant