CN109032361A - Intelligent 3D projection technology

Intelligent 3D projection technology

Info

Publication number
CN109032361A
CN109032361A (application CN201811002477.8A)
Authority
CN
China
Prior art keywords
speaker
projection
video
modeling
acceleration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811002477.8A
Other languages
Chinese (zh)
Inventor
薛爱凤 (Xue Aifeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Win Win Time Technology Co Ltd
Original Assignee
Shenzhen Win Win Time Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Win Win Time Technology Co Ltd filed Critical Shenzhen Win Win Time Technology Co Ltd
Priority to CN201811002477.8A
Publication of CN109032361A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B29/00 Combinations of cameras, projectors or photographic printing apparatus with non-photographic non-optical apparatus, e.g. clocks or weapons; Cameras having the shape of other objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014 Hand-worn input/output arrangements, e.g. data gloves
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/20 Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/403 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers loud-speakers

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Otolaryngology (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to an intelligent 3D projection technology in which a projection loudspeaker projects video and emits audio while wearable motion-judgment sensors carried by the individual acquire data signals from the sampled subject to realize modeling, so that the projection loudspeaker's video feedback and audio output are controlled according to the modeled analysis data. The beneficial effect of the present invention is a method that, in a fully natural, wear-free setting without VR glasses or earphones, achieves a completely natural VR field of view and VR sound through the adaptive cooperation of multi-directional projection loudspeakers, learning and adjusting to each person's usage habits and environment.

Description

Intelligent 3D projection technology
Technical field
The present invention relates to intelligent 3D projection technology.
Background technique
Humanity has never stopped exploring ever more faithful simulation of the audiovisual experience, moving from flat displays toward virtual reality and from monophonic sound toward stereo surround sound. Current virtual-reality technology can simulate video perception from different directions, and VR sound has also been proposed, i.e., simulating sound effects at different distances through earphones. However, these VR pictures and VR sounds require the user to wear VR glasses and earphones, which still causes a certain discomfort.
Summary of the invention
To overcome the defects of the prior art, the present invention provides an intelligent 3D projection technology that solves the above technical problems.
The present invention achieves the above objectives through the following technical solutions:
Intelligent 3D projection technology: a projection loudspeaker projects video and emits audio, and wearable motion-judgment sensors carried by the individual acquire data signals from the sampled subject to realize modeling, so that the projection loudspeaker's video feedback and audio output are controlled according to the modeled analysis data. The data-acquisition modeling is carried out in the following steps:
Step 1: The projection loudspeaker plays video and audio while the sampled subject, wearing the wearable motion-judgment sensors, watches and listens. During this process, an audio/video acquisition unit captures motion and performs high-precision speech recognition, gesture recognition, and facial-feature recognition; the collected data are sent as a video data stream to the network exchange device and processed by the video capture card, and the processed video data stream is transmitted to the network as IP traffic;
Step 2: Gait recognition is performed on the sampled subject via the acceleration sensor in the watch; from the change of the acceleration vector's length, the direction of the current acceleration can be judged;
Step 3: A wavelet-transform thresholding method is selected to denoise the electromagnetic interference (i.e., high-frequency noise) in the circuits during the acquisition processes of steps 1 and 2;
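For illustration, a minimal sketch of step 3's denoising is given below, assuming a db4 basis and the universal soft threshold; the patent names only "a wavelet-transform thresholding method", so both choices, and the use of the PyWavelets package, are assumptions:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Suppress high-frequency (electromagnetic) noise by soft-thresholding
    the detail coefficients of a discrete wavelet decomposition."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail band.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```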
Step 4: Using wavelet packet decomposition and a difference algorithm, time-domain and frequency-domain features are extracted from four pressure areas in three directions (left-right, front-back, vertical) and identified with an SVM;
Step 5: From the multiple wavelet packets of the gait frequency-domain features extracted in step 4, the fuzzy C-means method selects a minimal optimal wavelet-packet set; then, sorting by fuzzy membership, the fuzzy C-means method selects the minimal optimal wavelet-packet decomposition coefficients from that set, yielding a minimal optimal subset of gait frequency-domain features. This subset is combined with the gait time-domain features to give a fused gait feature set, and gait recognition is then performed with an SVM whose nonlinear radial-basis kernel maps the linearly inseparable low-dimensional space to a linearly separable high-dimensional space for recognition and modeling;
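A minimal sketch of the frequency-domain feature extraction and SVM recognition of steps 4 and 5; the fuzzy C-means selection of wavelet packets is omitted here, and terminal-node energies are assumed as the features:

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_packet_energies(window, wavelet="db4", level=3):
    """Frequency-domain gait features: energy of each terminal wavelet-packet node."""
    wp = pywt.WaveletPacket(window, wavelet, maxlevel=level)
    return np.array([np.sum(np.square(node.data))
                     for node in wp.get_level(level, order="freq")])

def train_gait_svm(windows, labels):
    """RBF-kernel SVM mapping the low-dimensional features to a separable space."""
    X = np.vstack([wavelet_packet_energies(w) for w in windows])
    return SVC(kernel="rbf", gamma="scale").fit(X, labels)
```

In the full method, the fused feature vector would also append the gait time-domain features before fitting.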
Step 6: Calculation is carried out in multi-level stages. First, the dither noise in the projection process is balanced, filtered, and denoised; then a hierarchical, layered dimensionality-reduction model is built. Using the output data of the acceleration sensor together with median filtering, the human motion type is judged: the layers decide whether the body is stationary or moving, which part is moving and the motion type; classified sampling judges the main features, and the key features are comprehensively verified, so that sleep actions such as turning over, pushing up, and getting up can be recognized when modeling. First, if the synthesized accelerometer output magnitude lies between the given lower and upper thresholds, the body is judged stationary; otherwise, the person is judged to be moving. The synthesized accelerometer output magnitude is a = sqrt(ax^2 + ay^2 + az^2).
The lower and upper thresholds are th_amin = 8 m/s^2 and th_amax = 11 m/s^2 respectively, so the first condition is th_amin <= a <= th_amax.
If the first condition judges the body stationary, the second and third conditions are not evaluated. Otherwise, if the local variance of the accelerometer output falls below a given threshold, that body part is judged stationary; if not, it is judged to be moving. The second condition compares the local standard deviation sigma_a of the output magnitude against the threshold:
sigma_a < th_sigma_a, where th_sigma_a = 0.5. If the second condition judges the body part stationary, the third condition is not evaluated; otherwise the third condition is applied with th_amax = 50, and the moving state is then sampled, calculated, and its characteristic parameters extracted;
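A minimal sketch of step 6's cascaded stillness test on a sliding window of 3-axis accelerometer samples; the third condition's formula is not reproduced in the source text, so only the first two conditions are implemented, followed by the feature sampling:

```python
import numpy as np

TH_AMIN, TH_AMAX = 8.0, 11.0   # m/s^2 band containing the resting magnitude
TH_SIGMA_A = 0.5               # th_sigma_a: local-deviation threshold

def is_static(window):
    """First two cascaded conditions of step 6.
    window: (N, 3) array of accelerometer samples for one body part."""
    mag = np.linalg.norm(window, axis=1)   # a_i = sqrt(ax^2 + ay^2 + az^2)
    if TH_AMIN <= mag.mean() <= TH_AMAX:   # condition 1: magnitude in band
        return True
    if mag.std() < TH_SIGMA_A:             # condition 2: local deviation small
        return True
    return False                           # both fail: the part is moving

def movement_features(window):
    """Characteristic parameters sampled once movement is confirmed."""
    mag = np.linalg.norm(window, axis=1)
    return {"mean": mag.mean(), "std": mag.std(),
            "peak": mag.max(), "range": np.ptp(mag)}
```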
Step 7: Model feature fusion. The light-intensity level measured by the camera's light sensor, the motion type, and the importance category set by the user are assessed to obtain a 5-point user rating of light quality; a supervised classification algorithm, taking the historically best data as the supervision reference, establishes an adaptive projection-adjustment model relating subjective feeling to the environmental parameters;
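A minimal sketch of step 7's adjustment model; the patent does not name the supervised classifier, so an SVM over a hypothetical feature vector [light level, motion-type code, importance category] is assumed, with a brightness sweep standing in for the adaptive adjustment:

```python
import numpy as np
from sklearn.svm import SVC

def fit_light_model(env_features, user_scores):
    """Supervised model: environment/motion features -> the user's 5-point
    light-quality rating, with the historically best data as supervision."""
    return SVC(kernel="rbf").fit(env_features, user_scores)

def best_brightness(model, env, levels=np.linspace(0.2, 1.0, 9)):
    """Choose the brightness whose predicted rating is highest.
    env = [light level, motion-type code, user importance category]."""
    candidates = np.array([[lvl, env[1], env[2]] for lvl in levels])
    predicted = model.predict(candidates)
    return levels[int(np.argmax(predicted))]
```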
Step 8: A deep, mutually learning pattern-recognition model of the projected portraits of the whole population is established. Given the N sample classes registered in the database, each sample is input to the classifier for training and judged to belong to one of the classes in (1, N); if it falls outside (1, N), a new class N+1 is registered and the classifier is updated and retrained;
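A minimal sketch of step 8's register-then-retrain loop; the patent does not state how an input is judged to fall outside (1, N), so a nearest-centroid distance threshold stands in for the classifier's rejection test:

```python
import numpy as np

class OpenSetClassifier:
    """A sample that matches no registered class well opens class N+1."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold   # rejection distance (assumed mechanism)
        self.centroids = {}          # label -> (feature sum, sample count)

    def register(self, label, x):
        s, n = self.centroids.get(label, (np.zeros_like(x, dtype=float), 0))
        self.centroids[label] = (s + x, n + 1)

    def observe(self, x):
        x = np.asarray(x, dtype=float)
        if self.centroids:
            dists = {lbl: np.linalg.norm(x - s / n)
                     for lbl, (s, n) in self.centroids.items()}
            label, d = min(dists.items(), key=lambda kv: kv[1])
            if d <= self.threshold:        # within the registered classes (1, N)
                self.register(label, x)    # update the model with the sample
                return label
        new_label = max(self.centroids, default=0) + 1  # beyond (1, N): N+1
        self.register(new_label, x)
        return new_label
```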
Step 9: The sub-results are merged. The main process reads in the output file of each process for the current time step and splices the collected multi-channel video streams together to generate a panoramic video stream carrying the timestamp; the results are then merged and restored according to the region-decomposition algorithm and stored in ASCII format. When the user wears a virtual-reality terminal, the system detects whether the terminal is in motion; if so, the video frames to be played are adjusted according to the acceleration so as to supply the user with synchronized video information, displayed within the terminal's field of view;
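A minimal sketch of step 9's splice step; the per-process file layout and codec are unspecified, so a caller-supplied `loader` and equal frame heights are assumed:

```python
import numpy as np

def splice_panorama(frames, timestamp):
    """Concatenate per-process frames side by side into one panoramic frame.
    frames: list of (H, W, 3) arrays, one per worker process, same H."""
    return {"t": timestamp, "frame": np.hstack(frames)}

def merge_timestep(output_files, timestamp, loader):
    """Main process: read each worker's output for this time step and splice.
    `loader` maps a file path to its decoded (H, W, 3) frame (assumed helper)."""
    return splice_panorama([loader(path) for path in output_files], timestamp)
```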
Step 10: Steps 1 through 9 are repeated continuously for the sampled subject (see Fig. 1). As the number of samples grows, the SVM classifier is continuously and adaptively optimized: each time new samples are input, the SVM recognition rate is computed according to the cross-validation principle and a fitness analysis is performed. No fixed stopping value is set for the genetic algorithm; a comparison-based termination condition is used instead: if the training recognition rate exceeds the existing one, the parameters are adopted as optimal, otherwise selection, crossover, and mutation operations are executed to further optimize the training parameters. This realizes adaptive refinement of the model, finally forming a personalized model of each person's viewing and motion habits, which the projection loudspeaker of step 1 then uses for per-person intelligent holographic projection.
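A minimal sketch of step 10's optimization loop, assuming the genetic algorithm searches the SVM's (C, gamma) pair and terminates by the comparison rule, stopping when no offspring beats the best existing cross-validated recognition rate:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(params, X, y):
    """Cross-validated recognition rate of an RBF SVM for one (C, gamma) pair."""
    C, gamma = params
    return cross_val_score(SVC(kernel="rbf", C=C, gamma=gamma), X, y, cv=5).mean()

def evolve_svm_params(X, y, pop_size=10, generations=20):
    """Selection/crossover/mutation over (C, gamma); comparison-based stop."""
    pop = rng.uniform([0.1, 1e-4], [100.0, 1.0], size=(pop_size, 2))
    best, best_fit = None, -np.inf
    for _ in range(generations):
        fits = np.array([fitness(p, X, y) for p in pop])
        if fits.max() <= best_fit:
            break                                   # no improvement: terminate
        best, best_fit = pop[fits.argmax()], fits.max()
        parents = pop[np.argsort(fits)[-pop_size // 2:]]   # selection
        children = (parents + parents[::-1]) / 2           # arithmetic crossover
        children *= rng.normal(1.0, 0.1, children.shape)   # multiplicative mutation
        pop = np.vstack([parents, children])
    return best, best_fit
```

The arithmetic crossover and multiplicative mutation are placeholders; any standard genetic operators fit the same comparison-terminated loop.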
In the present embodiment, the projection loudspeaker of step 1 consists externally of an upper and a lower part. One part holds 3 projectors evenly distributed in direction (each projecting 120 degrees); the other is a panoramic loudspeaker with 3 speaker drivers evenly distributed in direction (providing audio in three directions and feedback recognition of volume). The loudspeaker contains an edge-computing module and a communication module connected to the cloud central server. The wearable motion-judgment sensors include, but are not limited to, motion-judging sensors in bracelets, watches, belts, and shoes, each containing an edge-computing module and a communication module through which it connects to the cloud central server. The central server coordinates the data of the loudspeaker and of the wearable motion-judgment sensors carried by the individual, and performs comprehensive judgment to carry out sound feedback and output.
In the present embodiment, the dither-noise balancing of step 6 superimposes, according to weight coefficients, the geometric means the sensors can acquire, including three-dimensional acceleration, three-dimensional magnetic field, and three-dimensional angular velocity, to obtain the weighted mean Y = k1*Y1 + k2*Y2 + k3*Y3, where Y1 is the acceleration geometric mean, Y2 the magnetic-field geometric mean, Y3 the angular-velocity geometric mean, and the k's are norm weighting coefficients.
In the present embodiment, the characteristic parameters extracted in step 6 form the original motion vector group (F1, F2, ..., Fm), m < 9, extracted from the 9 raw sensor channels as Fi = ai1*X1 + ai2*X2 + ... + ai9*X9 (i = 1, ..., m). The original vector F1 contains the most information and has the maximum variance; it is called the first principal component. F2, ..., Fm decrease in turn and are called the second through m-th principal components. The process of principal component analysis can therefore be regarded as the process of determining the weight coefficients aik (i = 1, ..., m; k = 1, ..., 9).
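A minimal sketch of this principal-component extraction, assuming the 9 raw channels are the three axes each of acceleration, magnetic field, and angular velocity:

```python
import numpy as np

def principal_components(X, m):
    """Determine the weight coefficients a_ik mapping the 9 sensor channels
    to m < 9 principal components, ordered by decreasing variance.
    X: (n_samples, 9) array of acceleration/magnetic/angular-rate channels."""
    Xc = X - X.mean(axis=0)                  # center each channel
    cov = np.cov(Xc, rowvar=False)           # 9 x 9 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:m]    # keep the m largest
    A = eigvecs[:, order].T                  # rows are the a_ik weights
    F = Xc @ A.T                             # (n_samples, m) components
    return A, F
```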
The beneficial effects of the present invention are:
The present invention proposes a method that, in a fully natural, wear-free setting without VR glasses or earphones, achieves a completely natural VR field of view and VR sound through the adaptive cooperation of multi-directional projection loudspeakers, learning and adjusting to each person's usage habits and environment.
Detailed description of the invention
Fig. 1 is a flowchart of the present invention.
Specific embodiment
The present invention will be further explained below with reference to the attached drawings:
As shown in Figure 1, the intelligent 3D projection technology operates as set out in the Summary above: the projection loudspeaker projects video and emits audio, the wearable motion-judgment sensors carried by the individual acquire the data signals of the sampled subject for modeling, and steps 1 through 10 are carried out in sequence so that the projection loudspeaker's video feedback and audio output are controlled according to the modeled analysis data, with the embodiment details of the projection loudspeaker, the dither-noise balancing, and the principal-component extraction as already described.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art will understand that the invention may still be modified or equivalently substituted without departing from its spirit and scope, and any such modifications or equivalent substitutions are intended to fall within the scope of the claims of the invention.

Claims (7)

1. Intelligent 3D projection technology, characterised in that: a projection loudspeaker projects video and emits audio, and wearable motion-judgment sensors carried by the individual acquire data signals from the sampled subject to realize modeling, so that the projection loudspeaker's video feedback and audio output are controlled according to the modeled analysis data, the data-acquisition modeling being carried out in the following steps:
Step 1: The projection loudspeaker plays video and audio while the sampled subject, wearing the wearable motion-judgment sensors, watches and listens; during this process, an audio/video acquisition unit captures motion and performs high-precision speech recognition, gesture recognition, and facial-feature recognition, the collected data are sent as a video data stream to the network exchange device and processed by the video capture card, and the processed video data stream is transmitted to the network as IP traffic;
Step 2: Gait recognition is performed on the sampled subject via the acceleration sensor in the watch; from the change of the acceleration vector's length, the direction of the current acceleration can be judged;
Step 3: A wavelet-transform thresholding method is selected to denoise the electromagnetic interference (i.e., high-frequency noise) in the circuits during the acquisition processes of steps 1 and 2;
Step 4: Using wavelet packet decomposition and a difference algorithm, time-domain and frequency-domain features are extracted from four pressure areas in three directions (left-right, front-back, vertical) and identified with an SVM;
Step 5: From the multiple wavelet packets of the gait frequency-domain features extracted in step 4, the fuzzy C-means method selects a minimal optimal wavelet-packet set; then, sorting by fuzzy membership, the fuzzy C-means method selects the minimal optimal wavelet-packet decomposition coefficients from that set, yielding a minimal optimal subset of gait frequency-domain features, which is combined with the gait time-domain features to give a fused gait feature set; gait recognition is then performed with an SVM whose nonlinear radial-basis kernel maps the linearly inseparable low-dimensional space to a linearly separable high-dimensional space for recognition and modeling;
Step 6: Calculation is carried out in multi-level stages: first the dither noise in the projection process is balanced, filtered, and denoised, then a hierarchical, layered dimensionality-reduction model is built; using the output data of the acceleration sensor together with median filtering, the human motion type is judged, the layers deciding whether the body is stationary or moving, which part is moving and the motion type, with classified sampling judging the main features and comprehensive verification of the key features, so that sleep actions such as turning over, pushing up, and getting up can be recognized; when modeling, the first condition is that the synthesized accelerometer output magnitude a = sqrt(ax^2 + ay^2 + az^2) lying between the given thresholds th_amin = 8 m/s^2 and th_amax = 11 m/s^2 judges the body stationary, otherwise moving; if the first condition judges stationary, the second and third conditions are not evaluated, otherwise the second condition judges a body part stationary when the local standard deviation sigma_a of its accelerometer output falls below the threshold th_sigma_a = 0.5; if the second condition judges the part stationary, the third condition is not evaluated, otherwise the third condition is applied with th_amax = 50, and the moving state is then sampled, calculated, and its characteristic parameters extracted;
Step 7: Model feature fusion: the light-intensity level measured by the camera's light sensor, the motion type, and the importance category set by the user are assessed to obtain a 5-point user rating of light quality; a supervised classification algorithm, taking the historically best data as the supervision reference, establishes an adaptive projection-adjustment model relating subjective feeling to the environmental parameters;
Step 8: A deep, mutually learning pattern-recognition model of the projected portraits of the whole population is established; given the N sample classes registered in the database, each sample is input to the classifier for training and judged to belong to one of the classes in (1, N); if it falls outside (1, N), a new class N+1 is registered and the classifier is updated and retrained;
Step 9: The sub-results are merged: the main process reads in the output file of each process for the current time step, splices the collected multi-channel video streams to generate a panoramic video stream carrying the timestamp, then merges and restores the results according to the region-decomposition algorithm and stores them in ASCII format; when the user wears a virtual-reality terminal, the system detects whether the terminal is in motion and, if so, adjusts the video frames to be played according to the acceleration so as to supply the user with synchronized video information displayed within the terminal's field of view;
Step 10: Steps 1 through 9 are repeated continuously for the sampled subject; as the number of samples grows, the SVM classifier is continuously and adaptively optimized: each time new samples are input, the SVM recognition rate is computed according to the cross-validation principle and a fitness analysis is performed; no fixed stopping value is set for the genetic algorithm, a comparison-based termination condition being used instead, so that if the training recognition rate exceeds the existing one the parameters are adopted as optimal, and otherwise selection, crossover, and mutation operations are executed to further optimize the training parameters; this realizes adaptive refinement of the model, finally forming a personalized model of each person's viewing and motion habits, which the projection loudspeaker of step 1 then uses for per-person intelligent holographic projection.
2. The intelligent 3D projection technology according to claim 1, characterised in that: the projection loudspeaker of step 1 consists externally of an upper and a lower part, one part being 3 projectors evenly distributed in direction (each projecting 120 degrees) and the other a panoramic loudspeaker with 3 speaker drivers evenly distributed in direction.
3. The intelligent 3D projection technology according to claim 1, characterised in that: the dither-noise balancing of step 6 superimposes, according to weight coefficients, the geometric means the sensors can acquire, including three-dimensional acceleration, three-dimensional magnetic field, and three-dimensional angular velocity, to obtain the weighted mean Y = k1*Y1 + k2*Y2 + k3*Y3, where Y1 is the acceleration geometric mean, Y2 the magnetic-field geometric mean, Y3 the angular-velocity geometric mean, and the k's are norm weighting coefficients.
4. The intelligent 3D projection technology according to claim 1, characterised in that: the characteristic parameters extracted in step 6 form the original motion vector group (F1, F2, ..., Fm), m < 9, extracted from the 9 raw sensor channels as Fi = ai1*X1 + ai2*X2 + ... + ai9*X9 (i = 1, ..., m), where the original vector F1 contains the most information, has the maximum variance, and is called the first principal component; F2, ..., Fm decrease in turn and are called the second through m-th principal components; the process of principal component analysis can therefore be regarded as the process of determining the weight coefficients aik (i = 1, ..., m; k = 1, ..., 9).
5. The intelligent 3D projection technology according to claim 2, characterised in that: the loudspeaker contains an edge-computing module and a communication module connected to the cloud central server.
6. The intelligent 3D projection technology according to claim 1, characterised in that: the wearable motion-judgment sensors include, but are not limited to, motion-judging sensors in bracelets, watches, belts, and shoes.
7. The intelligent 3D projection technology according to claim 5, characterised in that: the central server coordinates the data of the loudspeaker and of the wearable motion-judgment sensors carried by the individual, and performs comprehensive judgment to carry out sound feedback and output.
CN201811002477.8A 2018-08-29 2018-08-29 Intelligent 3D projection technology Pending CN109032361A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811002477.8A CN109032361A (en) 2018-08-29 2018-08-29 Intelligent 3D projection technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811002477.8A CN109032361A (en) 2018-08-29 2018-08-29 Intelligent 3D projection technology

Publications (1)

Publication Number Publication Date
CN109032361A true CN109032361A (en) 2018-12-18

Family

ID=64625595

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811002477.8A Pending CN109032361A (en) 2018-08-29 2018-08-29 Intelligent 3D projection technology

Country Status (1)

Country Link
CN (1) CN109032361A (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202677083U (en) * 2012-06-01 2013-01-16 中国人民解放军第四军医大学 Sleep and fatigue monitoring type watch apparatus
CN102945079A (en) * 2012-11-16 2013-02-27 武汉大学 Intelligent recognition and control-based stereographic projection system and method
CN103584840A (en) * 2013-11-25 2014-02-19 天津大学 Automatic sleep stage method based on electroencephalogram, heart rate variability and coherence between electroencephalogram and heart rate variability
CN107465850A (en) * 2016-06-03 2017-12-12 王建文 Virtual reality system
CN106971059A (en) * 2017-03-01 2017-07-21 福州云开智能科技有限公司 A kind of wearable device based on the adaptive health monitoring of neutral net
CN107015646A (en) * 2017-03-28 2017-08-04 北京犀牛数字互动科技有限公司 The recognition methods of motion state and device
CN107102728A (en) * 2017-03-28 2017-08-29 北京犀牛数字互动科技有限公司 Display methods and system based on virtual reality technology
CN107205140A (en) * 2017-07-12 2017-09-26 赵政宇 A kind of panoramic video segmentation projecting method and apply its system
CN107753026A (en) * 2017-09-28 2018-03-06 古琳达姬(厦门)股份有限公司 For the intelligent shoe self-adaptive monitoring method of backbone leg health
CN108107578A (en) * 2017-12-14 2018-06-01 腾讯科技(深圳)有限公司 View angle regulating method, device, computing device and the storage medium of virtual reality

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI724858B (en) * 2020-04-08 2021-04-11 國軍花蓮總醫院 Mixed Reality Evaluation System Based on Gesture Action

Similar Documents

Publication Publication Date Title
US10701506B2 (en) Personalized head related transfer function (HRTF) based on video capture
CN104881881B (en) Moving Objects method for expressing and its device
CN110287825B (en) Tumble action detection method based on key skeleton point trajectory analysis
CN111729283B (en) Training system and method based on mixed reality technology
CN107102728A (en) Display methods and system based on virtual reality technology
TWI714926B (en) Method for constructing dream reproduction model, method and device for dream reproduction
WO2021248916A1 (en) Gait recognition and emotion sensing method and system based on intelligent acoustic device
CN113822136A (en) Video material image selection method, device, equipment and storage medium
CN104794446B (en) Human motion recognition method and system based on synthesis description
CN114223215A (en) Dynamic customization of head-related transfer functions for rendering audio content
CN105653020A (en) Time traveling method and apparatus and glasses or helmet using same
CN110472622A (en) Method for processing video frequency and relevant apparatus, image processing method and relevant apparatus
Hu et al. AVMSN: An audio-visual two stream crowd counting framework under low-quality conditions
TW201835721A (en) Method and system for interacting with intelligent adult product
WO2016131793A1 (en) Method of transforming visual data into acoustic signals and aid device for visually impaired or blind persons
CN109032361A (en) Intelligent 3D projection technology
CN108769640A (en) Automatically adjust visual angle shadow casting technique
CN114332976A (en) Virtual object processing method, electronic device and storage medium
CN113408397A (en) Domain-adaptive cross-subject motor imagery electroencephalogram signal identification system and method
WO2019094114A1 (en) Personalized head related transfer function (hrtf) based on video capture
Wang et al. A multi-view gait recognition method using deep convolutional neural network and channel attention mechanism
CN109285598A (en) The mobile phone projection technology for having color mood regulation
CN109308133A (en) Intelligent interaction projects interaction technique
Guerrero et al. Human Activity Recognition via Feature Extraction and Artificial Intelligence Techniques: A Review
CN116172580B (en) Auditory attention object decoding method suitable for multi-sound source scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181218