CN110703916B - Three-dimensional modeling method and system thereof - Google Patents

Three-dimensional modeling method and system thereof

Info

Publication number
CN110703916B
CN110703916B (application CN201910938301.1A)
Authority
CN
China
Prior art keywords
sub
data
mode
region
calling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910938301.1A
Other languages
Chinese (zh)
Other versions
CN110703916A (en)
Inventor
李小波
甘健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hengxin Shambala Culture Co ltd
Original Assignee
Hengxin Shambala Culture Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hengxin Shambala Culture Co ltd filed Critical Hengxin Shambala Culture Co ltd
Priority to CN201910938301.1A priority Critical patent/CN110703916B/en
Publication of CN110703916A publication Critical patent/CN110703916A/en
Application granted granted Critical
Publication of CN110703916B publication Critical patent/CN110703916B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a three-dimensional modeling method and a system thereof. The three-dimensional modeling method comprises the following steps: creating a plurality of sub-modes matched with a virtual mold, and storage paths corresponding to the sub-modes; creating region call data corresponding to each sub-mode according to the sub-mode, and storing the region call data under the storage path of that sub-mode; and calling the corresponding sub-mode for use according to the accessed live-action mold. By combining three-dimensional virtual reality with real objects and dynamic operation gestures, the method achieves the technical effect of improving the user's sense of realistic use.

Description

Three-dimensional modeling method and system thereof
Technical Field
The present disclosure relates to the field of computers, and in particular, to a three-dimensional modeling method and system thereof.
Background
Accurate and efficient reconstruction of three-dimensional models of the real world is attracting increasing interest. Once a three-dimensional model is built, Virtual Reality (VR) technology is applied: VR is a virtual world generated by computer simulation that provides the user with simulated visual, auditory, tactile and other sensory feedback, so that the user can observe objects in the three-dimensional space in real time and without restriction, as if personally present in the scene.
However, existing three-dimensional virtual reality technology only lets the user observe and interact with a pre-constructed three-dimensional world. How to combine three-dimensional virtual reality with existing real objects and the user's operation gestures, so as to give the user a realistic sense of use, remains an open problem today.
Disclosure of Invention
The purpose of the application is to provide a three-dimensional modeling method and a system thereof that combine three-dimensional virtual reality with real objects and dynamic operation gestures, thereby achieving the technical effect of improving the user's sense of realistic use.
To achieve the above object, the present application provides a three-dimensional modeling method, including: creating a plurality of sub-modes matched with a virtual mold and storage paths corresponding to the sub-modes; creating region call data corresponding to each sub-mode according to the sub-mode, and storing the region call data under the storage path of that sub-mode; and calling the corresponding sub-mode for use according to the accessed live-action mold.
Preferably, the sub-steps of creating a plurality of sub-modes matched with the virtual mold are as follows: classifying the virtual molds, and creating a plurality of usage modes according to the categories of the virtual molds; creating a plurality of sub-modes in each usage mode, respectively; and creating a corresponding storage path for each sub-mode separately.
Preferably, the substeps of creating region call data are as follows: basic data of each sub-mode is obtained; processing the basic data to obtain regional data, and storing the regional data in a regional data comparison library; acquiring or simulating a plurality of operation gestures, acquiring coordinate data of the plurality of operation gestures, judging the coordinate data positions of the operation gestures through a region data comparison library, analyzing and presetting a virtual mold active state according to the coordinate data positions of the operation gestures, and creating a plurality of dynamic virtual molds corresponding to the virtual mold active state; creating a plurality of region calling files, and storing the dynamic virtual mould in the corresponding region calling files.
Preferably, the sub-steps of processing the basic data to obtain the region data and storing the region data in the region data comparison library are as follows: dividing the coordinate data of the virtual mold into a plurality of first use areas; dividing the virtual space coordinate data into a plurality of second use areas corresponding to the first use areas, and a third use area; and creating a region data comparison library, and storing the coordinate data of the first use areas, the coordinate data of the second use areas and the coordinate data of the third use area in the region data comparison library.
Preferably, the sub-steps of calling the corresponding sub-mode for use according to the accessed live-action mold are as follows: acquiring identification information of a live-action mold; and calling the corresponding sub-mode for use according to the identification information.
Preferably, the sub-steps of calling the corresponding sub-mode for use according to the identification information are as follows: judging the type of the virtual mould to be called according to the type of the mould in the identification information, and judging the use mode according to the type of the virtual mould; and judging a sub-mode from the use modes according to the specific types in the identification information, and calling the region calling data of the sub-mode for use.
The application also provides a three-dimensional modeling system, which comprises at least one live-action mold, an access device, a VR device and a somatosensory controller, wherein the access device is respectively connected with the live-action mold, the VR device and the somatosensory controller, and the access device is used for executing the three-dimensional modeling method described above.
Preferably, the access device comprises an identifier, a processor and a display, wherein the processor is respectively connected with the display and the identifier; the identifier is used for acquiring identification information of a live-action mold accessed to the access device according to the instruction of the processor and sending the identification information to the processor for processing; the processor is used for receiving the data sent by the identifier, processing the data, calling the sub-mode according to the processed data, and respectively sending the region calling data called according to the sub-mode to the display and the VR equipment; the display is used for receiving and displaying data sent by the processor and the VR device.
Preferably, the processor comprises a storage module, a three-dimensional modeling module, a processing module, a judging module and a calling module; the storage module is used for storing basic data, sub-modes and corresponding region calling files; the three-dimensional modeling module is used for acquiring basic data and sending the basic data to the processing module, and is also used for creating a dynamic virtual mold; the processing module is used for carrying out partition processing on the acquired basic data; the judging module is used for judging the sub-mode to be used according to the identification information, judging the region calling file to be called according to the coordinate data position of the operation gesture, and feeding the judging result back to the calling module for calling; and the calling module is used for calling the sub-mode and the region calling file according to the judging result.
Preferably, the live-action mold is provided with identification information; the identification information includes at least a mold category and a specific type of the live-action mold.
The beneficial effects realized by the application are as follows:
(1) The three-dimensional modeling method and the system thereof combine three-dimensional virtual reality with real objects and dynamic operation gestures, achieving the technical effect of improving the user's sense of realistic use.
(2) According to the three-dimensional modeling method and the system thereof, operators can use a plurality of live-action molds and the corresponding sub-modes for different kinds of learning and practice, with low learning and practice cost and a wide range of application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required by the embodiments or by the description of the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments described in the present application, and that a person of ordinary skill in the art may derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a three-dimensional modeling system in an embodiment;
FIG. 2 is a flow chart of one embodiment of a three-dimensional modeling method.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The three-dimensional modeling method and the system thereof combine three-dimensional virtual reality with real objects and dynamic operation gestures, achieving the technical effect of improving the user's sense of realistic use.
As shown in fig. 1, the present application provides a three-dimensional modeling system, which includes at least one live-action mold 1, an access device 2, a VR device 3 and a somatosensory controller 4, where the access device 2 is connected to the live-action mold 1, the VR device 3 and the somatosensory controller 4, and the access device 2 is used to execute a three-dimensional modeling method described below.
Further, the access device 2 comprises an identifier, a processor, and a display, wherein the processor is respectively connected with the display and the identifier.
The identifier is used for acquiring identification information of the live-action mold 1 accessed to the access device 2 according to the instruction of the processor, and sending the identification information to the processor for processing.
The processor is used for receiving the data sent by the identifier, processing the data, calling the sub-mode according to the processed data, and respectively sending the region calling data called according to the sub-mode to the display and the VR equipment.
The display is used for receiving and displaying data sent by the processor and the VR device.
Further, the processor comprises a storage module, a three-dimensional modeling module, a processing module, a judging module and a calling module.
The storage module is used for storing the basic data, the sub-modes and the corresponding region calling files.
The three-dimensional modeling module is used for acquiring basic data and sending the basic data to the processing module, and is also used for creating a dynamic virtual mold.
The processing module is used for carrying out partition processing on the acquired basic data.
The judging module is used for judging the sub-mode to be used according to the identification information, judging the region calling file to be called according to the coordinate data position of the operation gesture, and feeding the judging result back to the calling module for calling.
And the calling module is used for calling the sub-mode and the region calling file according to the judging result.
The three-dimensional modeling module is further connected with the storage module; it is also used for creating a virtual mold and a virtual three-dimensional space, and sending the created virtual mold and virtual three-dimensional space to the storage module for storage.
Further, the storage module comprises a regional data comparison library.
As shown in fig. 2, the present application provides a three-dimensional modeling method, including:
S1: creating a plurality of sub-modes matched with the virtual mold, and storage paths corresponding to the sub-modes.
Specifically, as an example, a plurality of live-action molds 1 are prefabricated; a live-action mold 1 may be, for example, a musical instrument mold, a writing-and-drawing mold, or a motion mold.
Further, as an example, the specific type of a musical instrument mold may be a piano, an electronic organ, a drum kit, or the like; the specific type of a writing-and-drawing mold may be a drawing board or the like; and the specific type of a motion mold may be a boxing glove, a sandbag, or the like. The three-dimensional modeling module creates a virtual mold matched with the live-action mold according to the live-action mold. The virtual mold may accordingly be a virtual musical instrument mold, a virtual writing-and-drawing mold, a virtual motion mold, or the like. Further, as an example, the specific type of a virtual musical instrument mold may be a virtual piano, a virtual electronic organ, a virtual drum kit, or the like; the specific type of a virtual writing-and-drawing mold may be a virtual drawing board or the like; and the specific type of a virtual motion mold may be a virtual boxing glove, a virtual sandbag, or the like.
Specifically, as another embodiment, a plurality of virtual molds are created in advance by a three-dimensional modeling module, and a live-action mold adapted to the virtual mold is manufactured according to the virtual mold.
Further, identification information is arranged on the live-action mold.
Specifically, as one embodiment, the identification information includes at least a mold category and a specific type of the live-action mold 1.
Wherein, the mold categories at least include: the striking class, the key class, the pure-gesture class, and the writing-and-drawing class. The specific types at least include the piano, electronic organ, drum kit, drawing board, boxing glove, sandbag, and so on.
Further, the sub-steps of creating a plurality of sub-modes matched with the virtual mold are as follows:
S110: the virtual molds are classified, and a plurality of usage modes are created according to the categories of the virtual molds.
Specifically, as an embodiment, the processing module discriminates the virtual molds and classifies them according to the discriminated type; the categories of virtual molds at least include: the striking class, the key class, the pure-gesture class, and the writing-and-drawing class.
The plurality of usage modes at least includes: a striking mode, a key mode, a pure gesture mode, and a writing-and-drawing mode.
Virtual molds of the striking class are stored under the striking mode, virtual molds of the key class under the key mode, virtual molds of the pure-gesture class under the pure gesture mode, virtual molds of the writing-and-drawing class under the writing-and-drawing mode, and so on.
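As an illustrative sketch (not part of the patent; all identifiers and category names are invented for illustration), the classification of step S110 can be pictured as a table that groups virtual molds by category into usage modes:

```python
# Hypothetical sketch of step S110: grouping virtual molds by category
# into usage modes. All names here are illustrative assumptions.
USAGE_MODES = {
    "striking": [],      # drum kits, gongs, boxing gloves, ...
    "key": [],           # pianos, electronic organs, ...
    "pure_gesture": [],  # paper folding, building blocks, ...
    "write_draw": [],    # drawing boards, ...
}

def store_mold(category: str, mold_name: str) -> None:
    """Store a virtual mold under the usage mode matching its category."""
    if category not in USAGE_MODES:
        raise ValueError(f"unknown mold category: {category}")
    USAGE_MODES[category].append(mold_name)

store_mold("key", "virtual_piano")
store_mold("striking", "virtual_drum_kit")
```

Any mold whose identification carries an unknown category is rejected, mirroring the discrimination step performed by the processing module.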
S120: creating a plurality of sub-modes in each usage mode, respectively;
specifically, for example, the striking mode includes a musical instrument mode and a striking mode; the musical instrument modes comprise a drum set mode, a bass drum mode, a high cymbal mode, a gong mode, a tambourine mode and the like. The movement mode in the striking mode includes a boxing mode and the like.
The key modes include a piano mode, a organ mode, and the like.
The pure gesture mode comprises a paper folding mode, a building block stacking mode and the like.
The writing and drawing modes include a writing mode, a drawing mode and the like.
S130: a corresponding storage path is created for each sub-mode separately.
Specifically, a corresponding storage path is created on the storage module for each sub-mode, and when data of a certain sub-mode needs to be called, the data can be directly obtained from the storage path.
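A minimal sketch of the per-sub-mode storage paths of step S130, assuming a simple directory layout (the root folder name and the path scheme are invented for illustration):

```python
from pathlib import Path

def create_storage_paths(usage_modes, root="molds"):
    """One storage path per sub-mode; the region call data created in
    step S2 is later saved beneath this path. The layout root/mode/sub
    is an assumption, not the patent's scheme."""
    paths = {}
    for mode, sub_modes in usage_modes.items():
        for sub in sub_modes:
            paths[sub] = Path(root) / mode / sub
    return paths

paths = create_storage_paths({"key": ["piano_mode", "organ_mode"],
                              "striking": ["drum_kit_mode"]})
```

When data of a certain sub-mode is needed, it can then be fetched directly from `paths[sub_mode]`.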
S2: region call data corresponding to the sub-mode is created according to the sub-mode and stored under the storage path of the sub-mode.
Specifically, the sub-steps of creating region call data are as follows:
t1: basic data of each sub-mode is acquired.
Wherein, the basic data at least comprises: a plurality of pieces of virtual space coordinate data, a plurality of pieces of virtual mold coordinate data, a plurality of pieces of audio data, and a plurality of pieces of text and drawing data.
Specifically, virtual mold coordinate data and current virtual space coordinate data which are arranged in a virtual space are derived from a three-dimensional modeling module, and the obtained virtual space coordinate data and the obtained virtual mold coordinate data are transmitted to a processing module; a plurality of audio data, a plurality of text and drawing data are acquired from an existing database.
T2: and processing the basic data to obtain regional data, and storing the regional data in a regional data comparison library.
Further, the sub-steps of processing the base data to obtain the region data and storing the region data in the region data comparison library are as follows:
specifically, the area data includes at least the coordinate data of the first use area, the coordinate data of the second use area, and the coordinate data of the third use area.
H1: the coordinate data of the virtual mold is divided into a plurality of first use areas.
Specifically, the virtual piano mold is taken as an example. After the virtual coordinate data of each key in the virtual piano mold is obtained from the three-dimensional modeling module, the area occupied by each key is marked or labeled by the three-dimensional modeling module or by a worker, and the area of each key is taken as a first use area.
H2: the virtual space coordinate data is divided into a plurality of second use areas corresponding to the first use areas, and a third use area.
Specifically, the virtual piano mold is again taken as an example. The coordinate data of the space occupied by the virtual piano mold in the virtual space is acquired according to the matching position of the virtual piano mold and the virtual space, and the area corresponding to each key of the virtual piano mold in the virtual space is set as a second use area.
Wherein, the part of the virtual space not covered by the virtual mold is a third use area.
H3: creating a region data comparison library, and storing the coordinate data of the first use areas, the coordinate data of the second use areas and the coordinate data of the third use area in the region data comparison library.
Specifically, a region data comparison library is created in the storage module, and the acquired coordinate data of the first use areas, the coordinate data of the second use areas and the coordinate data of the third use area are stored in the region data comparison library.
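The comparison library of steps H1 to H3 might be sketched as follows, under the assumption that each use area can be approximated by an axis-aligned box given by its minimum and maximum corners (the class and its layout are illustrative, not the patent's data structure):

```python
# Sketch of the region data comparison library (H1-H3), assuming every
# use area is an axis-aligned box (min corner, max corner).
def in_region(point, region):
    """True when a 3-D point lies inside the given box."""
    lo, hi = region
    return all(l <= p <= h for p, l, h in zip(point, lo, hi))

class RegionLibrary:
    def __init__(self, first, second, third):
        self.first = first    # per-key areas on the virtual mold (H1)
        self.second = second  # matching areas in the virtual space (H2)
        self.third = third    # space not covered by the mold (H2)

    def locate(self, point):
        """Return ('second', area_index) or ('third', None) for a coordinate."""
        for idx, region in enumerate(self.second):
            if in_region(point, region):
                return ("second", idx)
        return ("third", None)

lib = RegionLibrary(
    first=[((0, 0, 0), (1, 1, 1))],
    second=[((0, 0, 0), (1, 1, 1)), ((1, 0, 0), (2, 1, 1))],
    third=((2, 0, 0), (10, 10, 10)),
)
```

A gesture coordinate is looked up with `lib.locate(...)`; anything not inside a second use area falls through to the third use area, matching the dispatch described later in the text.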
T3: collecting or simulating a plurality of operation gestures, acquiring coordinate data of the operation gestures, judging the coordinate data positions of the operation gestures through a region data comparison library, analyzing and presetting a virtual mold active state according to the coordinate data positions of the operation gestures, and creating a plurality of dynamic virtual molds corresponding to the virtual mold active state.
Specifically, the virtual piano mold is taken as an example. The virtual piano mold has eighty-eight keys, and the area where each key is located is a first use area; that is, the virtual piano mold has eighty-eight first use areas. When the coordinate data of an operation gesture falls into one of the first use areas, the key of that first use area should be preset to the pressed state, and a dynamic virtual piano mold is created according to where the coordinate data of the operation gesture falls, in which that first use area is in the pressed state and the remaining eighty-seven first use areas are in the normal, un-pressed state.
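The preset dynamic molds described above can be sketched as one state vector per key; the state labels are assumptions:

```python
# Illustrative sketch: one preset dynamic virtual mold per first use
# area, as described for the 88-key virtual piano. The "pressed"/"up"
# state labels are invented for illustration.
def build_dynamic_molds(n_keys=88):
    """For each key, a mold state with that key pressed and the rest up."""
    molds = []
    for pressed in range(n_keys):
        state = ["up"] * n_keys
        state[pressed] = "pressed"
        molds.append(state)
    return molds

piano_molds = build_dynamic_molds()
```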
T4: creating a plurality of region calling files, and storing the dynamic virtual mould in the corresponding region calling files.
Specifically, each region calling file at least includes the coordinate data of one first use area among the plurality of first use areas, and the dynamic virtual mold corresponding to that first use area.
Further, for a sub-mode that requires audio, the region calling file also includes the audio data corresponding to the first use area.
Specifically, the virtual piano mold is taken as an example. The audio data required by the virtual piano mold ranges from A0 (27.5 Hz) to C8 (4186 Hz), where, following the layout of an existing piano, each pitch corresponds to one of the eighty-eight first use areas of the virtual piano mold.
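The quoted A0-to-C8 range follows from equal temperament, where each of the eighty-eight keys is a semitone apart; a short sketch (not from the patent) that derives one audio-table entry per first use area:

```python
# Equal-tempered pitch of the n-th piano key, counting A0 (27.5 Hz) as
# index 0. Key 87 is C8: 27.5 * 2**(87/12) is approximately 4186 Hz,
# matching the range quoted above.
def key_frequency(key_index):
    return 27.5 * 2 ** (key_index / 12)

audio_table = {k: key_frequency(k) for k in range(88)}
```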
Further, the region calling file is stored in the corresponding sub-mode.
S3: and calling a corresponding sub-mode for use according to the accessed live-action mold.
Further, the sub-steps of calling the corresponding sub-mode for use according to the accessed live-action mold are as follows:
p1: and obtaining the identification information of the live-action mold.
Specifically, the live-action mold to be used is connected to the access device, the identification information of the live-action mold is obtained through the identifier of the access device, and the identification information is sent to the processor for processing.
P2: and calling the corresponding sub-mode for use according to the identification information.
Further, the sub-steps of calling the corresponding sub-mode for use according to the identification information are as follows:
n1: judging the type of the virtual mould to be called according to the mould type in the identification information, and judging the use mode according to the type of the virtual mould.
Specifically, the judging module receives the identification information sent by the identifier, analyzes the identification information, judges the type of the virtual mold to be called according to the type of the mold in the identification information, and judges the use mode according to the type of the virtual mold.
N2: and judging a sub-mode from the use modes according to the specific types in the identification information, and calling the region calling data of the sub-mode for use.
Specifically, the judging module judges the sub-mode from the judged usage mode according to the specific type in the identification information and sends the judging result to the calling module, and the calling module calls the region calling data of the sub-mode from the storage module for use.
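Steps N1 and N2 amount to a two-stage lookup; a hedged sketch with an invented lookup table (all category, type, and sub-mode names are assumptions):

```python
# Hypothetical sketch of N1/N2: two-stage lookup from identification
# information to a sub-mode. The table contents are assumptions.
SUB_MODES = {
    "key": {"piano": "piano_mode", "electronic_organ": "organ_mode"},
    "striking": {"drum_kit": "drum_kit_mode", "boxing_glove": "boxing_mode"},
}

def select_sub_mode(identification):
    """Mold category -> usage mode (N1); specific type -> sub-mode (N2)."""
    category = identification["category"]
    specific = identification["type"]
    return SUB_MODES[category][specific]

mode = select_sub_mode({"category": "key", "type": "piano"})
```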
Specifically, the somatosensory controller collects the data of the user's operation gesture and sends the data to the processor for analysis. The processor judges the position of the coordinate data of the operation gesture against the region data comparison library: if the coordinate data of the operation gesture falls in the third use area, the virtual mold is in an unoperated state and no region calling data needs to be called; if the coordinate data of the operation gesture falls in a second use area, the processor judges which first use area the operation gesture corresponds to, and the calling module calls the corresponding region calling data for use according to the judged first use area.
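The runtime dispatch just described might look as follows in outline; the one-dimensional intervals standing in for the second use areas, and the call-file names, are illustrative assumptions:

```python
# Sketch of the runtime dispatch: third use area -> no call; second use
# area -> load the region call file of the matching first use area.
# The 1-D intervals and file names below are invented for illustration.
def handle_gesture(x, second_areas, call_files):
    """Return the region call file to load, or None when the gesture
    falls in the third use area (mold left unoperated)."""
    for idx, (lo, hi) in enumerate(second_areas):
        if lo <= x <= hi:
            return call_files[idx]
    return None

areas = [(0.0, 1.0), (1.5, 2.5)]
files = ["key_0.call", "key_1.call"]
# handle_gesture(0.5, areas, files) -> "key_0.call"
# handle_gesture(3.0, areas, files) -> None
```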
The beneficial effects realized by the application are as follows:
(1) The three-dimensional modeling method and the system thereof combine three-dimensional virtual reality with real objects and dynamic operation gestures, achieving the technical effect of improving the user's sense of realistic use.
(2) According to the three-dimensional modeling method and the system thereof, operators can use a plurality of live-action molds and the corresponding sub-modes for different kinds of learning and practice, with low learning and practice cost and a wide range of application.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the scope of the present application be interpreted as including the preferred embodiments and all alterations and modifications that fall within the scope of the present application. It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the protection of the present application and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (8)

1. A method of three-dimensional modeling, comprising:
creating a plurality of sub-modes matched with the virtual mould and storage paths corresponding to the sub-modes;
creating region call data corresponding to the sub-mode according to the sub-mode, and storing the region call data under a storage path of the sub-mode;
calling a corresponding sub-mode for use according to the accessed live-action mold;
wherein the sub-steps of creating region call data corresponding to a sub-pattern from the sub-pattern are as follows:
basic data of each sub-mode is obtained;
processing the basic data to obtain regional data, and storing the regional data in a regional data comparison library;
acquiring or simulating a plurality of operation gestures, acquiring coordinate data of the plurality of operation gestures, judging the coordinate data positions of the operation gestures through a region data comparison library, analyzing and presetting a virtual mold active state according to the coordinate data positions of the operation gestures, and creating a plurality of dynamic virtual molds corresponding to the virtual mold active state;
creating a plurality of region calling files, storing the dynamic virtual mould in the corresponding region calling files, and obtaining region calling data after finishing storage;
the sub-steps of processing the basic data to obtain the region data and storing the region data in the region data comparison library are as follows:
dividing coordinate data of the virtual mold into a plurality of first use areas;
dividing the virtual space coordinate data into a plurality of second use areas corresponding to the first use areas and a third use area;
and creating a region data comparison library, and storing the coordinate data of the first use region, the coordinate data of the second use region and the coordinate data of the third use region in the region data comparison library.
2. The three-dimensional modeling method of claim 1, wherein the sub-steps of creating a plurality of sub-modes matched with the virtual mold are as follows:
classifying the virtual molds, and creating a plurality of usage modes according to the categories of the virtual molds;
creating a plurality of sub-modes in each usage mode, respectively;
a corresponding storage path is created for each sub-mode separately.
3. The three-dimensional modeling method of claim 1, wherein the sub-steps of invoking the corresponding sub-mode for use according to the accessed live-action mold are as follows:
acquiring identification information of a live-action mold;
and calling the corresponding sub-mode for use according to the identification information.
4. A three-dimensional modeling method according to claim 3, characterized in that the sub-steps of invoking the corresponding sub-mode for use according to the identification information are as follows:
judging the type of the virtual mould to be called according to the type of the mould in the identification information, and judging the use mode according to the type of the virtual mould;
and judging a sub-mode from the use modes according to the specific types in the identification information, and calling the region calling data of the sub-mode for use.
5. A three-dimensional modeling system comprising at least one live-action mold, an access device, a VR device and a somatosensory controller, the access device being connected to the live-action mold, the VR device and the somatosensory controller, respectively, the access device being configured to perform the three-dimensional modeling method according to any one of claims 1-4.
6. The three-dimensional modeling system of claim 5, wherein the access device comprises an identifier, a processor, and a display, the processor being coupled to the display and the identifier, respectively;
the identifier is used for acquiring identification information of a live-action mold accessed to the access device according to the instruction of the processor and sending the identification information to the processor for processing;
the processor is used for receiving the data sent by the identifier, processing the data, calling the sub-mode according to the processed data, and respectively sending the region calling data called according to the sub-mode to the display and the VR equipment;
the display is used for receiving and displaying data sent by the processor and the VR device.
7. The three-dimensional modeling system of claim 6, wherein the processor comprises a storage module, a three-dimensional modeling module, a processing module, a judging module and a calling module;
the storage module is used for storing basic data, sub-modes and corresponding region calling files;
the three-dimensional modeling module is used for acquiring basic data and sending the basic data to the processing module, and is also used for creating a dynamic virtual mould;
the processing module is used for carrying out partition processing on the acquired basic data;
the judging module is used for judging the sub-mode to be used according to the identification information, judging the region calling file to be called according to the coordinate data position of the manipulation gesture, and feeding the judgment result back to the calling module for calling;
and the calling module is used for calling the sub-mode and the region calling file according to the judgment result.
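The division of labor between the judging and calling modules of claim 7 can be sketched as follows. The class layout, the coordinate threshold used to pick a region, and all names are assumptions for illustration; the patent does not specify how the gesture position maps to a region file.

```python
# Sketch of the claim-7 processor: a storage module holds sub-modes and
# region-calling files, a judging module picks which apply, and a calling
# module fetches the judged file. Names and logic are assumptions.

class Processor:
    def __init__(self, storage):
        self.storage = storage  # storage module: sub-modes + region files

    def judge(self, identification, gesture_xy):
        """Judging module: choose a sub-mode from the identification info
        and a region file from the gesture's coordinate position."""
        sub_mode = self.storage["sub_modes"][identification["specific_type"]]
        # assumed rule: split the normalized x-axis into two regions
        region = "left" if gesture_xy[0] < 0.5 else "right"
        return sub_mode, region

    def call(self, identification, gesture_xy):
        """Calling module: fetch the region-calling file judged above."""
        sub_mode, region = self.judge(identification, gesture_xy)
        return self.storage["region_files"][sub_mode][region]

storage = {
    "sub_modes": {"car": "vehicle/car"},
    "region_files": {"vehicle/car": {"left": "L.dat", "right": "R.dat"}},
}
processor = Processor(storage)
print(processor.call({"specific_type": "car"}, (0.2, 0.7)))
```

Feeding the judgment result back to a separate calling module, as the claim states, keeps the decision logic and the file retrieval independently replaceable.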
8. The three-dimensional modeling system of claim 5, wherein the live-action mold is provided with identification information; the identification information includes at least a mold category and a specific type of the live-action mold.
CN201910938301.1A 2019-09-30 2019-09-30 Three-dimensional modeling method and system thereof Active CN110703916B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910938301.1A CN110703916B (en) 2019-09-30 2019-09-30 Three-dimensional modeling method and system thereof

Publications (2)

Publication Number Publication Date
CN110703916A CN110703916A (en) 2020-01-17
CN110703916B (en) 2023-05-09

Family

ID=69197419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910938301.1A Active CN110703916B (en) 2019-09-30 2019-09-30 Three-dimensional modeling method and system thereof

Country Status (1)

Country Link
CN (1) CN110703916B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103399873B (en) * 2013-07-10 2017-09-29 中国大唐集团科学技术研究院有限公司 The Database Dynamic loading management method and device of virtual reality system
WO2017031089A1 (en) * 2015-08-15 2017-02-23 Eyefluence, Inc. Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects
CN109559370A (en) * 2017-09-26 2019-04-02 华为技术有限公司 A kind of three-dimensional modeling method and device
CN108320333B (en) * 2017-12-29 2022-01-11 ***股份有限公司 Scene adaptive virtual reality conversion equipment and virtual reality scene adaptive method


Similar Documents

Publication Publication Date Title
CN101159064B (en) Image generation system and method for generating image
US8170702B2 (en) Method for classifying audio data
CN108700940A (en) Scale of construction virtual reality keyboard method, user interface and interaction
US20130142417A1 (en) System and method for automatically defining and identifying a gesture
JP2020533654A (en) Holographic anti-counterfeit code inspection method and equipment
CN112669417A (en) Virtual image generation method and device, storage medium and electronic equipment
US11682206B2 (en) Methods and apparatus for projecting augmented reality enhancements to real objects in response to user gestures detected in a real environment
CN110189394A (en) Shape of the mouth as one speaks generation method, device and electronic equipment
CN108898181A (en) A kind of processing method, device and the storage medium of image classification model
JP2020046500A (en) Information processing apparatus, information processing method and information processing program
Santini Augmented Piano in Augmented Reality.
CN116528016A (en) Audio/video synthesis method, server and readable storage medium
CN109784140A (en) Driver attributes' recognition methods and Related product
CN113821296B (en) Visual interface generation method, electronic equipment and storage medium
CN110703916B (en) Three-dimensional modeling method and system thereof
Liu et al. Applying models of visual search to menu design
Bovermann et al. Tangible data scanning sonification model
US11282267B2 (en) System and method for providing automated data visualization and modification
CN111796709B (en) Method for reproducing image texture features on touch screen
KR20140078083A (en) Method of manufacturing cartoon contents for augemented reality and apparatus performing the same
Bering et al. Virtual Drum Simulator Using Computer Vision
Armitage et al. mConduct: a multi-sensor interface for the capture and analysis of conducting gesture
Adhikari et al. Computer Vision Based Virtual Musical Instruments
JP2008165098A (en) Electronic musical instrument
JP7474175B2 (en) Sound image drawing device and sound image drawing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant