CN108074286A - VR scene construction method and system - Google Patents

VR scene construction method and system

Info

Publication number
CN108074286A
Authority
CN
China
Prior art keywords
model
particle
update
physical model
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711367071.5A
Other languages
Chinese (zh)
Inventor
胡蔚萌
余贵飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Land Carving Technology Co Ltd
Original Assignee
Wuhan Land Carving Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Land Carving Technology Co Ltd filed Critical Wuhan Land Carving Technology Co Ltd
Priority to CN201711367071.5A
Publication of CN108074286A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/6009 Methods for processing data by generating or executing the game program for importing or creating game content, e.g. authoring tools during game development, adapting content to different platforms, use of a scripting language to create content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/61 Scene description

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a VR scene construction method and system. The method comprises the following steps. Step S1: shoot the scene built from physical models and receive the captured video information. Step S2: identify each physical model in the video information using image recognition technology, and determine the position information of each physical model. Step S3: retrieve from a database the standard models corresponding to the physical models and, using the position information of each physical model, assemble the standard models into a three-dimensional digital model consistent with the scene built from the physical models. Step S4: use the three-dimensional digital model as the VR scene, and display the VR scene. Through this VR scene construction method, the present invention realizes interaction between the virtual and the real, letting users better experience the fun that VR scene construction brings: hands-on ability is exercised while imagination and creativity are enhanced, so the invention combines entertainment with education.

Description

VR scene construction method and system
Technical field
The present invention relates to the field of computer technology, and more particularly to a VR scene construction method and system.
Background technology
VR games can tap our intelligence and increase our wisdom while we play, and they can also help us understand the world and cultivate spatial imagination. However, the VR games currently on the market focus mainly on education, training and entertainment, and lack genuine interaction between the virtual and the real. For example, VR scenes are completed in advance: the selection of the various models in the scene (such as buildings, flowers and trees), the positions where they are placed, and the functions and/or effects assigned to each model (such as rotating, moving or bouncing) are all finished beforehand, and only users with the relevant professional knowledge can create them. As a result, the available VR scenes are limited in variety and cannot satisfy growing and increasingly diverse user demand.
Summary of the invention
The technical problem to be solved by the present invention is to provide a VR scene construction method and system that overcome the above deficiencies in the prior art.
The technical solution adopted by the present invention to solve the above technical problem is as follows:
According to one aspect of the invention, a VR scene construction method is provided, comprising the following steps:
Step S1: Shoot the scene built from physical models, and receive the captured video information;
Step S2: Identify each physical model in the video information using image recognition technology, and determine the position information of each physical model;
Step S3: Retrieve from a database the standard models corresponding to the physical models, and, using the position information of each physical model, assemble the standard models into a three-dimensional digital model consistent with the scene built from the physical models;
Step S4: Use the three-dimensional digital model as the VR scene; display the VR scene.
Further, the identification of the physical models in step S2 is implemented as follows:
Step S21a: Select a physical model in the video information as the target physical model, and take the region where the target physical model is located as the target region; compute the first feature histogram of the target region from the color features of the target region; select N particles within the target region and initialize them, with the weights of all particles equal initially;
Step S22a: Update the particles as the target physical model moves in the video information, and update the position of the target region; find the edge information of the target physical model from the position of the target region, and thereby identify the target physical model;
Step S23a: Return to steps S21a and S22a until every physical model in the video information has been taken as the target physical model and its region as the target region.
The advantage of this further scheme is that the user selects the target physical model by hand and thus interacts directly with the construction of the VR scene, experiencing initiative within an intelligent system and gaining a more personal appreciation of the practicality and intelligence of machine vision.
Further, step S22a is implemented as follows:
Step S221a: Update the particles as the target physical model moves in the video information;
Step S222a: Compute the second feature histogram of each updated particle from the color features at the particle's location;
Step S223a: Compare the second feature histogram with the first feature histogram, compute their similarity, and adjust the weight of each updated particle according to the similarity;
Step S224a: Normalize the weights of the updated particles;
Step S225a: Resample according to the posterior probability distribution given by the updated particle weights;
Step S226a: Compute the mathematical expectation of the resampled particles, and take the mathematical expectation as the updated position of the target region;
Step S227a: Find the edge information of the target physical model from the position of the target region using the Canny edge detection algorithm, then extract and analyze the edge information, and thereby identify the target physical model.
The advantage of this further scheme is that, by using a particle filter tracking algorithm, the position and information of the target physical model in the video information can be tracked and captured in real time while the camera of the mobile terminal moves. This alleviates the difficulties that differing viewing angles, illumination and scales pose for machine-vision object recognition, and makes the identification of physical model positions and information more accurate.
Further, before the three-dimensional digital model is used as the VR scene in step S4, specific functions and/or effects are assigned to some or all of the standard models in the three-dimensional digital model, implemented as follows:
Step S41: Create a compiling interface with multiple function buttons and/or effect buttons, and display the compiling interface;
Step S42: Select a standard model in the three-dimensional digital model to which a function and/or effect is to be assigned, and click the corresponding function button and/or effect button in the compiling interface to assign the specific function and/or effect to that standard model;
Step S43: Repeat step S42 until functions and/or effects have been assigned to all standard models that require them.
The advantage of this further scheme is that, when assigning functions and/or effects to the standard models in the three-dimensional digital model, the user only needs to click the corresponding buttons. Compared with the past, when assigning a function to a model required writing a program, the difficulty is greatly reduced; users can build their own exclusive VR scenes by hand according to their own imagination and creativity, which increases user enjoyment and satisfies the needs of more users.
According to another aspect of the invention, a VR scene construction system is provided, comprising a mobile terminal and a server;
The mobile terminal is used to shoot the scene built from physical models and to display the captured video information in real time; the mobile terminal is also used to display the VR scene;
The server includes a control module and an information identification module;
The information identification module is used to identify each physical model in the video information using image recognition technology, and to determine the position information of each physical model;
The control module is used to retrieve from a database the standard models corresponding to the physical models, to assemble the standard models, using the position information of each physical model, into a three-dimensional digital model consistent with the scene built from the physical models, and to use the three-dimensional digital model as the VR scene; it also controls the mobile terminal to display the VR scene.
Further, the information identification module includes a selection unit, a recognition unit, a judgment unit and a computation unit;
The selection unit is used to select a physical model in the video information as the target physical model, and to take the region where the target physical model is located as the target region; the selection unit is also used to compute the first feature histogram of the target region from the color features of the target region, to select N particles within the target region, and to initialize the particles, with the weights of all particles equal initially;
The recognition unit is used to update the particles as the target physical model moves in the video information and to update the position of the target region; the recognition unit is also used to find the edge information of the target physical model from the position of the target region and thereby identify the target physical model;
The judgment unit is used to judge whether every physical model in the video information has been taken as the target physical model and its region as the target region; if so, the computation unit is driven to work, and if not, the selection unit is driven to work;
The computation unit is used to calculate the position information of the physical models in the video information using image recognition technology.
The advantage of this further scheme is that the user selects the target physical model by hand and thus interacts directly with the construction of the VR scene, experiencing initiative within an intelligent system and gaining a more personal appreciation of the practicality and intelligence of machine vision.
Further, the recognition unit includes a particle update subunit, a feature computation subunit, a weight update subunit, a normalization subunit, a resampling subunit, a position acquisition subunit and a first target identification subunit;
The particle update subunit is used to update the particles as the target physical model moves in the video information;
The feature computation subunit is used to compute the second feature histogram of each updated particle from the color features at the particle's location;
The weight update subunit is used to compare the second feature histogram with the first feature histogram, compute their similarity, and adjust the weight of each updated particle according to the similarity;
The normalization subunit is used to normalize the weights of the updated particles;
The resampling subunit is used to resample according to the posterior probability distribution given by the updated particle weights;
The position acquisition subunit is used to compute the mathematical expectation of the resampled particles and to take the mathematical expectation as the updated position of the target region;
The first target identification subunit is used to find the edge information of the target physical model from the position of the target region using the Canny edge detection algorithm, then to extract and analyze the edge information and thereby identify the target physical model.
The advantage of this further scheme is that, by using a particle filter tracking algorithm, the position and information of the target physical model in the video information can be tracked and captured in real time while the camera of the mobile terminal moves. This alleviates the difficulties that differing viewing angles, illumination and scales pose for machine-vision object recognition, and makes the identification of physical model positions and information more accurate.
Further, the server also includes a model function and/or effect assignment module, which is used to assign specific functions and/or effects to some or all of the standard models in the three-dimensional digital model; the model function and/or effect assignment module includes a compiling interface creation unit and a function and/or effect assignment unit;
The compiling interface creation unit is used to create a compiling interface with multiple function buttons and/or effect buttons, and to display the compiling interface;
The function and/or effect assignment unit is used to select a standard model in the three-dimensional digital model to which a function and/or effect is to be assigned, and to click the corresponding function button and/or effect button in the compiling interface to assign the specific function and/or effect to that standard model.
The advantage of this further scheme is that, when assigning functions and/or effects to the standard models in the three-dimensional digital model, the user only needs to click the corresponding buttons. Compared with the past, when assigning a function to a model required writing a program, the difficulty is greatly reduced; users can build their own exclusive VR scenes by hand according to their own imagination and creativity, which increases user enjoyment and satisfies the needs of more users.
The beneficial effects of the invention are: the user can build a physical model scene by hand, and a three-dimensional digital model consistent with the physical models that were built is generated and used as the VR scene, so that the construction of the VR scene realizes interaction between the virtual and the real. The user better experiences the fun that VR scene construction brings; hands-on ability is exercised while imagination and creativity are enhanced, so the invention combines entertainment with education.
Description of the drawings
Fig. 1 is a flowchart of the VR scene construction method of the present invention;
Fig. 2 is a structural diagram of the VR scene construction system of the present invention.
In the figures: 1, mobile terminal; 2, server; 21, control module; 22, information identification module; 221, selection unit; 222, recognition unit; 2221, particle update subunit; 2222, feature computation subunit; 2223, weight update subunit; 2224, normalization subunit; 2225, resampling subunit; 2226, position acquisition subunit; 2227, first target identification subunit; 223, judgment unit; 224, computation unit; 2241, target selection subunit; 2242, filtering subunit; 2243, second target identification subunit; 2244, third target identification subunit; 2245, distance computation subunit; 225, comparison unit; 23, model function and/or effect assignment module; 231, compiling interface creation unit; 232, function and/or effect assignment unit.
Specific embodiments
The principles and features of the present invention are described below with reference to the accompanying drawings. The examples given serve only to explain the present invention and are not intended to limit its scope.
Embodiment one. As shown in Fig. 1, a VR scene construction method comprises the following steps:
Step S1: Shoot the scene built from physical models, and receive the captured video information;
Step S2: Identify each physical model in the video information using image recognition technology, and determine the position information of each physical model;
Step S3: Retrieve from a database the standard models corresponding to the physical models, and, using the position information of each physical model, assemble the standard models into a three-dimensional digital model consistent with the scene built from the physical models;
Step S4: Use the three-dimensional digital model as the VR scene; display the VR scene.
The physical model scene is fully built by the user before the shooting of the scene is carried out.
Preferably, the identification of the physical models in step S2 is implemented as follows:
Step S21a: Select a physical model in the video information as the target physical model, and take the region where the target physical model is located as the target region; compute the first feature histogram of the target region from the color features of the target region; select N particles within the target region and initialize them, with the weights of all particles equal initially;
Step S22a: Update the particles as the target physical model moves in the video information, and update the position of the target region; find the edge information of the target physical model from the position of the target region, and thereby identify the target physical model;
Step S23a: Return to steps S21a and S22a until every physical model in the video information has been taken as the target physical model and its region as the target region.
In this way, the user selects the target physical model by hand and interacts directly with the construction of the VR scene, experiencing initiative within an intelligent system and gaining a more personal appreciation of the practicality and intelligence of machine vision.
Standard models corresponding to the physical models are pre-stored in the database. Before the particles are initialized from the target region in step S21a, the target physical model must also be compared, through image recognition technology, with the standard models pre-stored in the database, to judge whether a standard model corresponding to the target physical model is stored there. The specific comparison process is: compare the target physical model with the pre-stored standard models using image recognition technology and compute the degree of fit between them; when the degree of fit is greater than or equal to a preset fitting threshold, it is judged that a standard model corresponding to the target physical model is stored in the database.
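As a minimal sketch of this initialization and matching step, the following assumes OpenCV with HSV hue-saturation histograms as the color feature and histogram correlation as the degree-of-fit measure; the patent fixes neither the color space nor the exact measure, so these choices and the threshold are illustrative:

```python
import cv2
import numpy as np

def color_histogram(frame_hsv, box):
    # First feature histogram of the target region (hue-saturation, normalized).
    x, y, w, h = box
    roi = frame_hsv[y:y + h, x:x + w]
    hist = cv2.calcHist([roi], [0, 1], None, [16, 16], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def matches_standard_model(target_hist, standard_hists, fit_threshold=0.8):
    # Degree-of-fit check against the pre-stored standard models; histogram
    # correlation stands in for the patent's unspecified fitting measure.
    fits = [cv2.compareHist(target_hist.astype(np.float32), s.astype(np.float32),
                            cv2.HISTCMP_CORREL) for s in standard_hists]
    return max(fits) >= fit_threshold

def init_particles(box, n=100):
    # Step S21a: N particles spread over the target region, all with equal weight.
    x, y, w, h = box
    xs = np.random.uniform(x, x + w, n)
    ys = np.random.uniform(y, y + h, n)
    particles = np.stack([xs, ys, np.zeros(n), np.zeros(n)], axis=1)  # x, y, vx, vy
    weights = np.full(n, 1.0 / n)
    return particles, weights
```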
Preferably, step S22a is implemented as follows:
Step S221a: Update the particles as the target physical model moves in the video information. A second-order dynamic model is used to diffuse the particles randomly, each particle diffusing to a new position according to the second-order dynamic model; randomly diffusing the particles through the second-order dynamic model and then re-adjusting their weights increases the accuracy of identification;
Step S222a: Compute the second feature histogram of each updated particle from the color features at the particle's location;
Step S223a: Compare the second feature histogram with the first feature histogram, compute their similarity, and adjust the weights of the updated particles according to the similarity. The similarity between the second and first feature histograms is measured by the Bhattacharyya distance: the smaller the distance, the more similar the two histograms, and a Bhattacharyya distance of zero indicates that the region of the updated particle exactly matches the region of the target physical model;
Step S224a: Normalize the weights of the updated particles;
Step S225a: Resample according to the posterior probability distribution given by the updated particle weights. During resampling, an updated particle with a larger weight is more likely to be selected, while one with a smaller weight is less likely to be selected and may even be discarded outright; particles with larger weights are thus sampled multiple times, their position information is inherited, and new particles are generated. Steps S222a to S224a are then repeated to update the weights of the new particles. After several rounds of resampling, the new generation of particles concentrates near the target physical model;
Step S226a: Compute the mathematical expectation of the resampled particles, and take the mathematical expectation as the updated position of the target region;
Step S227a: Find the edge information of the target physical model from the position of the target region using the Canny edge detection algorithm, then extract and analyze the edge information, and thereby identify the target physical model.
In this way, using a particle filter tracking algorithm, the position and information of the target physical model in the video information of the mobile terminal 1 can be tracked and captured in real time while the camera of the mobile terminal 1 moves. This alleviates the difficulties that differing viewing angles, illumination and scales pose for machine-vision object recognition, and makes the identification of physical model positions and information more accurate.
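One tracking iteration (steps S221a to S226a) can be condensed as follows, continuing the sketch above. The Bhattacharyya distance named in step S223a, d = sqrt(1 - Σ_i sqrt(p_i·q_i)), is computed by OpenCV's HISTCMP_BHATTACHARYYA; the diffusion noise constants and the Gaussian kernel that turns distance into weight are illustrative assumptions, since the patent does not specify them:

```python
def track_step(frame_hsv, particles, weights, ref_hist, box_size, sigma=0.1):
    n = len(particles)
    img_h, img_w = frame_hsv.shape[:2]
    w, h = box_size

    # S221a: second-order random diffusion -- position advances by velocity,
    # and both position and velocity receive Gaussian noise (illustrative).
    particles[:, 0:2] += particles[:, 2:4] + np.random.normal(0, 5, (n, 2))
    particles[:, 2:4] += np.random.normal(0, 1, (n, 2))

    for i, p in enumerate(particles):
        # S222a: second feature histogram at the particle's position.
        x = int(np.clip(p[0] - w / 2, 0, img_w - w))
        y = int(np.clip(p[1] - h / 2, 0, img_h - h))
        hist = color_histogram(frame_hsv, (x, y, w, h))
        # S223a: Bhattacharyya distance to the first feature histogram;
        # a smaller distance yields a larger weight.
        d = cv2.compareHist(ref_hist.astype(np.float32), hist.astype(np.float32),
                            cv2.HISTCMP_BHATTACHARYYA)
        weights[i] = np.exp(-d * d / (2 * sigma * sigma))

    weights /= weights.sum()                      # S224a: normalization
    idx = np.random.choice(n, size=n, p=weights)  # S225a: resampling
    particles, weights = particles[idx], np.full(n, 1.0 / n)
    center = particles[:, 0:2].mean(axis=0)       # S226a: mathematical expectation
    return particles, weights, center
```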
Preferably, the determination of the position information of the physical models in step S2 is implemented as follows:
Step S21b: Select a physical model in one frame of the video information as the target physical model;
Step S22b: Apply mean filtering to the frame to suppress the noise caused by electrical equipment or environmental factors;
Step S23b: Find all possible edge information using the Canny edge detection algorithm, then extract and analyze the edge information to identify the target physical model;
Step S24b: Convert the edge information produced by the Canny edge detection algorithm into contour information using the findContours function provided by OpenCV, analyze and process the contour information, and find the target physical model;
Step S25b: From the known length of the target physical model and the length in pixels of the detected target physical model, derive the size of a pixel; then, using the pixel size and the pixel distances of the target physical model from a user-defined origin in the X and Y directions, derive the actual X and Y distances of the target physical model relative to the horizontally placed camera.
In this way, image processing algorithms such as those provided by OpenCV are used for preprocessing, identification, positioning and measurement, yielding the coordinate position and relative distance of the target physical model.
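A minimal sketch of steps S22b to S25b, assuming OpenCV 4 (whose findContours returns two values); the Canny thresholds, the choice of the largest contour as the target, and the caller-supplied known length and origin are illustrative assumptions:

```python
import cv2

def locate_model(frame_bgr, known_length_mm, origin_px):
    # S22b: mean filtering to suppress sensor/environment noise.
    gray = cv2.cvtColor(cv2.blur(frame_bgr, (5, 5)), cv2.COLOR_BGR2GRAY)

    # S23b: Canny edge detection (thresholds are illustrative).
    edges = cv2.Canny(gray, 50, 150)

    # S24b: edges -> contours; taking the largest contour stands in for the
    # analysis that singles out the target physical model.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    target = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(target)

    # S25b: pixel size from the known physical length, then the real X/Y
    # distance of the model's center from the user-defined origin.
    mm_per_px = known_length_mm / float(w)
    cx, cy = x + w / 2.0, y + h / 2.0
    dx_mm = (cx - origin_px[0]) * mm_per_px
    dy_mm = (cy - origin_px[1]) * mm_per_px
    return (cx, cy), (dx_mm, dy_mm)
```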
Preferably, before the three-dimensional digital model is used as the VR scene in step S4, specific functions and/or effects are assigned to some or all of the standard models in the three-dimensional digital model, implemented as follows:
Step S41: Create a compiling interface with multiple function buttons and/or effect buttons, and display the compiling interface; the programs corresponding to the function buttons and/or effect buttons have been compiled in advance and stored in the database;
Step S42: Select a standard model in the three-dimensional digital model to which a function and/or effect is to be assigned, and click the corresponding function button and/or effect button in the compiling interface to assign the specific function and/or effect to that standard model;
Step S43: Repeat step S42 until functions and/or effects have been assigned to all standard models that require them.
In this way, when assigning functions and/or effects to the standard models in the three-dimensional digital model, the user only needs to click the corresponding buttons. Compared with the past, when assigning a function to a model required writing a program, the difficulty is greatly reduced; users can build their own exclusive VR scenes by hand according to their own imagination and creativity, which increases user enjoyment and satisfies the needs of more users.
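One plausible realization of these pre-stored button-to-behavior bindings is a registry that maps button identifiers to stored behaviors; all names below are hypothetical, since the patent does not specify how the compiling interface is implemented:

```python
# Hypothetical registry of pre-stored behaviors keyed by button id (step S41).
EFFECTS = {
    "rotate": lambda model: model.update(rotation_speed=30.0),  # degrees/s
    "move":   lambda model: model.update(path="linear"),
    "bounce": lambda model: model.update(bounce_height=1.5),    # scene units
}

def assign_effect(model: dict, button_id: str) -> dict:
    # Step S42: clicking a button applies the stored behavior to the
    # currently selected standard model.
    EFFECTS[button_id](model)
    return model

# Step S43 amounts to repeating the call for each model that needs a behavior.
windmill = {"name": "windmill", "rotation_speed": 0.0}
assign_effect(windmill, "rotate")
```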
Preferably, the mobile terminal 1 is a mobile phone and/or a tablet computer.
In this way, users can choose a mobile phone and/or a tablet computer according to their own needs; the variety of choices makes the system convenient for more users.
Embodiment two. As shown in Fig. 2, a VR scene construction system includes a mobile terminal 1 and a server 2;
The mobile terminal 1 is used to shoot the scene built from physical models and to display the captured video information in real time; the mobile terminal 1 is also used to display the VR scene;
The server 2 includes a control module 21 and an information identification module 22;
The information identification module 22 is used to identify each physical model in the video information using image recognition technology, and to determine the position information of each physical model;
The control module 21 is used to retrieve from a database the standard models corresponding to the physical models, to assemble the standard models, using the position information of each physical model, into a three-dimensional digital model consistent with the scene built from the physical models, and to use the three-dimensional digital model as the VR scene; it also controls the mobile terminal 1 to display the VR scene.
A corresponding mobile application is installed on the mobile terminal 1. Before shooting the physical model scene with the mobile terminal 1, the user must first start the application on the mobile terminal 1 and enter its interface. If the application is not yet present on the mobile terminal 1, the user must first install it.
After the VR scene is displayed on the mobile terminal 1, the user fits the mobile terminal 1 into a VR glasses device and, viewing through the VR glasses, enters and plays in the constructed VR scene.
The physical model scene is fully built by the user before the shooting of the scene is carried out.
Preferably, the information identification module 22 includes a selection unit 221, a recognition unit 222, a judgment unit 223 and a computation unit 224;
The selection unit 221 is used to select a physical model in the video information as the target physical model, and to take the region where the target physical model is located as the target region; the selection unit 221 is also used to compute the first feature histogram of the target region from the color features of the target region, to select N particles within the target region, and to initialize the particles, with the weights of all particles equal initially;
The recognition unit 222 is used to update the particles as the target physical model moves in the video information and to update the position of the target region; the recognition unit 222 is also used to find the edge information of the target physical model from the position of the target region and thereby identify the target physical model;
The judgment unit 223 is used to judge whether every physical model in the video information has been taken as the target physical model and its region as the target region; if so, the computation unit 224 is driven to work, and if not, the selection unit 221 is driven to work;
The computation unit 224 is used to calculate the position information of the physical models in the video information using image recognition technology.
In this way, the user selects the target physical model by hand and interacts directly with the construction of the VR scene, experiencing initiative within an intelligent system and gaining a more personal appreciation of the practicality and intelligence of machine vision.
Standard models corresponding to the physical models are pre-stored in the database. The information identification module 22 further includes a comparison unit 225, which is used to compare the target physical model, through image recognition technology, with the standard models pre-stored in the database, and to judge whether a standard model corresponding to the target physical model is stored there. The specific comparison process is: compare the target physical model with the pre-stored standard models using image recognition technology and compute the degree of fit between them; when the degree of fit is greater than or equal to a preset fitting threshold, it is judged that a standard model corresponding to the target physical model is stored in the database.
Preferably, the recognition unit 222 includes a particle update subunit 2221, a feature computation subunit 2222, a weight update subunit 2223, a normalization subunit 2224, a resampling subunit 2225, a position acquisition subunit 2226 and a first target identification subunit 2227;
The particle update subunit 2221 is used to update the particles as the target physical model moves in the video information. A second-order dynamic model is used to diffuse the particles randomly, each particle diffusing to a new position according to the second-order dynamic model; randomly diffusing the particles through the second-order dynamic model and then re-adjusting their weights increases the accuracy of identification;
The feature computation subunit 2222 is used to compute the second feature histogram of each updated particle from the color features at the particle's location;
The weight update subunit 2223 is used to compare the second feature histogram with the first feature histogram, compute their similarity, and adjust the weight of each updated particle according to the similarity. The similarity between the second and first feature histograms is measured by the Bhattacharyya distance: the smaller the distance, the more similar the two histograms, and a Bhattacharyya distance of zero indicates that the region of the updated particle exactly matches the region of the target physical model;
The normalization subunit 2224 is used to normalize the weights of the updated particles;
The resampling subunit 2225 is used to resample according to the posterior probability distribution given by the updated particle weights. During resampling, an updated particle with a larger weight is more likely to be selected, while one with a smaller weight is less likely to be selected and may even be discarded outright; particles with larger weights are thus sampled multiple times, their position information is inherited, and new particles are generated. The feature computation subunit 2222, the weight update subunit 2223 and the normalization subunit 2224 are then invoked again to update the weights of the new particles. After several rounds of resampling, the new generation of particles concentrates near the target physical model;
The position acquisition subunit 2226 is used to compute the mathematical expectation of the resampled particles and to take the mathematical expectation as the updated position of the target region;
The first target identification subunit 2227 is used to find the edge information of the target physical model from the position of the target region using the Canny edge detection algorithm, then to extract and analyze the edge information and thereby identify the target physical model.
In this way, using a particle filter tracking algorithm, the position and information of the target physical model in the video information of the mobile terminal 1 can be tracked and captured in real time while the camera of the mobile terminal 1 moves. This alleviates the difficulties that differing viewing angles, illumination and scales pose for machine-vision object recognition, and makes the identification of physical model positions and information more accurate.
Preferably, the computation unit 224 includes a target selection subunit 2241, a filtering subunit 2242, a second target identification subunit 2243, a third target identification subunit 2244 and a distance computation subunit 2245:
The target selection subunit 2241 is used to select a physical model in one frame of the video information as the target physical model;
The filtering subunit 2242 is used to apply mean filtering to the frame to suppress the noise caused by electrical equipment or environmental factors;
The second target identification subunit 2243 is used to find all possible edge information using the Canny edge detection algorithm, and then to extract and analyze the edge information to identify the target physical model;
The third target identification subunit 2244 is used to convert the edge information produced by the Canny edge detection algorithm into contour information using the findContours function provided by OpenCV, to analyze and process the contour information, and to find the target physical model;
The distance computation subunit 2245 is used to derive the size of a pixel from the known length of the target physical model and the length in pixels of the detected target physical model, and then, using the pixel size and the pixel distances of the target physical model from a user-defined origin in the X and Y directions, to derive the actual X and Y distances of the target physical model relative to the horizontally placed camera.
In this way, image processing algorithms such as those provided by OpenCV are used for preprocessing, identification, positioning and measurement, yielding the coordinate position and relative distance of the target physical model.
Preferably, the server 2 also includes a model function and/or effect assignment module 23, which is used to assign specific functions and/or effects to some or all of the standard models in the three-dimensional digital model; the model function and/or effect assignment module 23 includes a compiling interface creation unit 231 and a function and/or effect assignment unit 232;
The compiling interface creation unit 231 is used to create a compiling interface with multiple function buttons and/or effect buttons, and to display the compiling interface;
The function and/or effect assignment unit 232 is used to select a standard model in the three-dimensional digital model to which a function and/or effect is to be assigned, and to click the corresponding function button and/or effect button in the compiling interface to assign the specific function and/or effect to that standard model.
In this way, when assigning functions and/or effects to the standard models in the three-dimensional digital model, the user only needs to click the corresponding buttons. Compared with the past, when assigning a function to a model required writing a program, the difficulty is greatly reduced; users can build their own exclusive VR scenes by hand according to their own imagination and creativity, which increases user enjoyment and satisfies the needs of more users.
Preferably, the mobile terminal 1 is a mobile phone and/or a tablet computer.
In this way, users can choose a mobile phone and/or a tablet computer according to their own needs; the variety of choices makes the system convenient for more users.
The beneficial effects of the invention are: the user can build a physical model scene by hand, and a three-dimensional digital model consistent with the physical models that were built is generated and used as the VR scene, so that the construction of the VR scene realizes interaction between the virtual and the real. The user better experiences the fun that VR scene construction brings; hands-on ability is exercised while imagination and creativity are enhanced, so the invention combines entertainment with education.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (8)

  1. A VR scene construction method, characterized in that it comprises the following steps:
    Step S1: Shoot the scene built from physical models, and receive the captured video information;
    Step S2: Identify each physical model in the video information using image recognition technology, and determine the position information of each physical model;
    Step S3: Retrieve from a database the standard models corresponding to the physical models, and, using the position information of each physical model, assemble the standard models into a three-dimensional digital model consistent with the scene built from the physical models;
    Step S4: Use the three-dimensional digital model as the VR scene; display the VR scene.
  2. The VR scene construction method according to claim 1, characterized in that the identification of the physical models in step S2 is implemented as follows:
    Step S21a: Select a physical model in the video information as the target physical model, and take the region where the target physical model is located as the target region; compute the first feature histogram of the target region from the color features of the target region; select N particles within the target region and initialize the particles, with the weights of all particles equal initially;
    Step S22a: Update the particles as the target physical model moves in the video information, and update the position of the target region; find the edge information of the target physical model from the position of the target region, and thereby identify the target physical model;
    Step S23a: Return to steps S21a and S22a until every physical model in the video information has been taken as the target physical model and its region as the target region.
  3. The VR scene construction method according to claim 2, characterized in that step S22a is implemented as follows:
    Step S221a: Update the particles as the target physical model moves in the video information;
    Step S222a: Compute the second feature histogram of each updated particle from the color features at the particle's location;
    Step S223a: Compare the second feature histogram with the first feature histogram, compute their similarity, and adjust the weight of each updated particle according to the similarity;
    Step S224a: Normalize the weights of the updated particles;
    Step S225a: Resample according to the posterior probability distribution given by the updated particle weights;
    Step S226a: Compute the mathematical expectation of the resampled particles, and take the mathematical expectation as the updated position of the target region;
    Step S227a: Find the edge information of the target physical model from the position of the target region using the Canny edge detection algorithm, then extract and analyze the edge information, and thereby identify the target physical model.
  4. The VR scene construction method according to claim 1, characterized in that before the three-dimensional digital model is used as the VR scene in step S4, specific functions and/or effects are assigned to some or all of the standard models in the three-dimensional digital model, implemented as follows:
    Step S41: Create a compiling interface with multiple function buttons and/or effect buttons, and display the compiling interface;
    Step S42: Select a standard model in the three-dimensional digital model to which a function and/or effect is to be assigned, and click the corresponding function button and/or effect button in the compiling interface to assign the specific function and/or effect to that standard model;
    Step S43: Repeat step S42 until functions and/or effects have been assigned to all standard models that require them.
  5. A VR scene construction system, characterized in that it comprises a mobile terminal (1) and a server (2);
    the mobile terminal (1) is used to shoot the scene built from physical models and to display the captured video information in real time; the mobile terminal (1) is also used to display the VR scene;
    the server (2) includes a control module (21) and an information identification module (22);
    the information identification module (22) is used to identify each physical model in the video information using image recognition technology, and to determine the position information of each physical model;
    the control module (21) is used to retrieve from a database the standard models corresponding to the physical models, to assemble the standard models, using the position information of each physical model, into a three-dimensional digital model consistent with the scene built from the physical models, and to use the three-dimensional digital model as the VR scene; it also controls the mobile terminal (1) to display the VR scene.
  6. The VR scene construction system according to claim 5, characterized in that the information identification module (22) includes a selection unit (221), a recognition unit (222), a judgment unit (223) and a computation unit (224);
    the selection unit (221) is used to select a physical model in the video information as the target physical model, and to take the region where the target physical model is located as the target region; the selection unit (221) is also used to compute the first feature histogram of the target region from the color features of the target region, to select N particles within the target region, and to initialize the particles, with the weights of all particles equal initially;
    the recognition unit (222) is used to update the particles as the target physical model moves in the video information and to update the position of the target region; the recognition unit (222) is also used to find the edge information of the target physical model from the position of the target region and thereby identify the target physical model;
    the judgment unit (223) is used to judge whether every physical model in the video information has been taken as the target physical model and its region as the target region; if so, the computation unit (224) is driven to work, and if not, the selection unit (221) is driven to work;
    the computation unit (224) is used to calculate the position information of the physical models in the video information using image recognition technology.
  7. The VR scene construction system according to claim 6, characterized in that the recognition unit (222) includes a particle update subunit (2221), a feature computation subunit (2222), a weight update subunit (2223), a normalization subunit (2224), a resampling subunit (2225), a position acquisition subunit (2226) and a first target identification subunit (2227);
    the particle update subunit (2221) is used to update the particles as the target physical model moves in the video information;
    the feature computation subunit (2222) is used to compute the second feature histogram of each updated particle from the color features at the particle's location;
    the weight update subunit (2223) is used to compare the second feature histogram with the first feature histogram, compute their similarity, and adjust the weight of each updated particle according to the similarity;
    the normalization subunit (2224) is used to normalize the weights of the updated particles;
    the resampling subunit (2225) is used to resample according to the posterior probability distribution given by the updated particle weights;
    the position acquisition subunit (2226) is used to compute the mathematical expectation of the resampled particles and to take the mathematical expectation as the updated position of the target region;
    the first target identification subunit (2227) is used to find the edge information of the target physical model from the position of the target region using the Canny edge detection algorithm, then to extract and analyze the edge information and thereby identify the target physical model.
  8. The VR scene construction system according to claim 5, characterized in that the server (2) also includes a model function and/or effect assignment module (23), which is used to assign specific functions and/or effects to some or all of the standard models in the three-dimensional digital model; the model function and/or effect assignment module (23) includes a compiling interface creation unit (231) and a function and/or effect assignment unit (232);
    the compiling interface creation unit (231) is used to create a compiling interface with multiple function buttons and/or effect buttons, and to display the compiling interface;
    the function and/or effect assignment unit (232) is used to select a standard model in the three-dimensional digital model to which a function and/or effect is to be assigned, and to click the corresponding function button and/or effect button in the compiling interface to assign the specific function and/or effect to that standard model.
CN201711367071.5A 2018-03-02 2018-03-02 VR scene construction method and system Pending CN108074286A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711367071.5A CN108074286A (en) 2018-03-02 2018-03-02 VR scene construction method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711367071.5A CN108074286A (en) 2018-03-02 2018-03-02 VR scene construction method and system

Publications (1)

Publication Number Publication Date
CN108074286A true CN108074286A (en) 2018-05-25

Family

ID=62158411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711367071.5A Pending CN108074286A (en) VR scene construction method and system

Country Status (1)

Country Link
CN (1) CN108074286A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108939551A (en) * 3-D scanning virtual game construction technology
CN108961881A (en) * Real-time interactive intelligent scene construction method and system
CN109045486A (en) * Interaction method, device and system applied to a therapeutic process
CN109545323A (en) * Ankle rehabilitation system with VR simulated walking
CN109963137A (en) * Completely new interactive system and method
CN111192350A (en) * 2019-12-19 2020-05-22 武汉西山艺创文化有限公司 Motion capture system and method based on 5G communication VR helmet


Similar Documents

Publication Publication Date Title
CN108074286A (en) VR scene construction method and system
CN105426827B (en) Living body verification method, device and system
US20230008567A1 (en) Real-time system for generating 4d spatio-temporal model of a real world environment
Azarbayejani et al. Real-time 3-D tracking of the human body
CN110705390A (en) Body posture recognition method and device based on LSTM and storage medium
Rikert et al. Gaze estimation using morphable models
CN102222431B (en) Computer implemented method for performing sign language translation
CN110544301A (en) Three-dimensional human body action reconstruction system, method and action training system
CN106778628A (en) Facial expression capture method based on TOF depth cameras
CN103390174A (en) Physical education assisting system and method based on human body posture recognition
CN112085835B (en) Three-dimensional cartoon face generation method and device, electronic equipment and storage medium
CN110717391A (en) Height measuring method, system, device and medium based on video image
CN106599770A (en) Skiing scene display method based on body feeling motion identification and image matting
CN110298218B (en) Interactive fitness device and interactive fitness system
CN110544302A (en) Human body action reconstruction system and method based on multi-view vision and action training system
CN110637324B (en) Three-dimensional data system and three-dimensional data processing method
CN109117753A (en) Position recognition method, device, terminal and storage medium
CN109274883A (en) Posture correction method, device, terminal and storage medium
CN114363689B (en) Live broadcast control method and device, storage medium and electronic equipment
CN113706373A (en) Model reconstruction method and related device, electronic equipment and storage medium
CN110743160B (en) Real-time pace tracking system and pace generation method based on somatosensory capture device
CN116580151A (en) Human body three-dimensional model construction method, electronic equipment and storage medium
CN106204744B (en) Augmented reality three-dimensional registration method using an encoded light source as the marker
CN109558797B (en) Method for distinguishing human body balance disorder based on gravity center area model under visual stimulation
CN112700568A (en) Identity authentication method, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180525