CN202003298U - Three-dimensional uncalibrated display interactive device - Google Patents
Three-dimensional uncalibrated display interactive device
- Publication number
- CN202003298U (utility model; application CN2010207008266U)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- interactive device
- display
- main body
- observer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The utility model discloses a three-dimensional uncalibrated display interactive device comprising a main body bracket, a helmet display, an image storage and processing device, and a host machine. The main body bracket is provided with cloud platforms on which a plurality of cameras is mounted. An observer stands in the space inside the main body bracket, watches a virtual scene through the helmet display, and can move freely in that space, including turning the head, walking and rotating, while interacting with the scene in real time. The cameras collect image information of the observer; after image preprocessing, the host machine analyzes, matches and computes on the multiple images to work out the person's position, posture and motion information in real time, thereby controlling the virtual scene in the helmet display to change accordingly and realizing virtual display and interaction. The observer does not need to be calibrated and can naturally turn the head and body in the work space to change viewpoint, realizing true three-dimensional panoramic display and interaction with good immersion and flexibility.
Description
Technical field
The utility model relates to a three-dimensional uncalibrated display interactive device, and belongs to the fields of computer vision and human-computer interaction.
Background technology
At present, immersion is the basis for giving people a realistic experience, and interaction is the key to achieving harmony between human and machine. The display of virtual scenes and interaction with them are the core of virtual reality. Display is currently achieved mainly through various display devices, such as stereoscopic displays, cylindrical-screen or ring-screen systems, and CAVE systems, which provide a sense of immersion; interaction is achieved mainly through various input devices such as data gloves, optical marker points and position trackers. For example, tracking head motion requires a dedicated position tracker, which is bulky, expensive and heavy, and therefore inconvenient to use. To interact with a virtual scene by hand, a person must wear a data glove, which captures only a few degrees of freedom and is inflexible. Some work also exists on capturing and recognizing human position and posture, but all of it requires calibration.
Summary of the invention
In view of this, the technical problem to be solved by the utility model is to overcome the deficiencies of the prior art and to provide a device that realizes true three-dimensional panoramic display and interaction without calibration. The observer does not need to be calibrated and can naturally turn the head and body in the work space to change viewpoint, so the device achieves true three-dimensional panoramic display and interaction with good immersion and flexibility.
The technical solution of the utility model is as follows:
A three-dimensional uncalibrated display interactive device, characterized in that it comprises:
a main body bracket, a helmet display, an image storage and processing device, and a host machine capable of high-speed computation and real-time processing;
wherein the main body bracket is provided with a plurality of cloud platforms (pan-tilt heads) for mounting cameras, and a camera is mounted on each cloud platform;
the observer stands in the space inside the main body bracket and watches a virtual scene in real time through the helmet display; the observer can move freely in the space, including turning the head, walking freely and rotating, and can interact with the scene in real time;
the plurality of cameras synchronously collects image information of the observer from multiple angles and inputs it to the image storage and processing device for image preprocessing; the preprocessed images are then input to the host machine, which analyzes, matches and computes on the multiple images to solve in real time for the person's position, posture and motion information, thereby controlling the virtual scene in the helmet display to change accordingly and realizing virtual display and interaction.
A sensor fixture is also mounted on the main body bracket, on which an infrared sensor, a sound sensor or a temperature sensor can be installed. After the infrared sensor recognizes human body information, the sound sensor collects a person's natural speech, or the temperature sensor collects the temperature of the work space, the data is input to the host machine to be recognized, processed and stored, and can serve as a record and analysis of scene information for simulating and restoring a scene.
The bracket and the cloud platforms are both movable.
All of the cameras support image acquisition by external trigger.
Each of the cameras is connected to one image storage and processing device.
The helmet display is a closed helmet display.
Compared with the prior art, the utility model has the following advantages:
1. The device can accommodate several people in the work space at the same time, and the observers can move naturally in three-dimensional space, realizing a true three-dimensional effect;
2. Because a plurality of cameras is provided, the position, posture and motion information of the observer's head and hands can be obtained in real time through image acquisition and pattern recognition, realizing tracking and localization without calibrating the observer;
3. When the observer turns the head and body to change viewpoint, the scene seen, a virtual world generated by the computer, changes in real time;
4. The observer can interact with objects in the virtual environment directly and naturally.
In short, the utility model creates a human-computer interaction environment that resembles the objective environment yet surpasses objective space-time: the user can both be immersed in it and control it, i.e., a controllable three-dimensional space constructed from multiple sources of information. Its main goal is to give people the most realistic possible experience in a computer-controlled virtual environment, and to let people and the environment interact in a convenient and natural way. The observer does not look at an external display, but is directly immersed in the computer-controlled virtual environment, as in daily life in the real world.
Description of drawings
Fig. 1 is a structural schematic diagram of the utility model;
Fig. 2 is a system flowchart of the utility model.
Specific embodiments
The utility model is described in further detail below with reference to the drawings and a specific embodiment.
As shown in Fig. 1, the device of the utility model consists of a main body bracket 1, cloud platforms 2, a sensor fixture 3, cameras 4, an infrared sensor 5, a sound sensor 6, a temperature sensor 7, image storage and processing devices 8, a helmet display 9 and a host machine 10. The main body bracket 1 is provided with a plurality of cloud platforms 2 for mounting the cameras 4 and with the fixture 3 for mounting sensors; a camera 4 is mounted on each cloud platform 2, and the infrared sensor 5, the sound sensor 6 and the temperature sensor 7 are mounted on the sensor fixture 3. The bracket 1, the cloud platforms 2 and the fixture 3 are all movable; all of the cameras support image acquisition by external trigger, and each camera is connected to one image storage and processing device 8. The helmet display 9 is a closed helmet display. The observer stands in the work space of the device wearing the helmet display 9, on which the computer-controlled virtual scene is shown. The observer can walk about, turn the head or move the hands, and the cameras collect images of the observer in real time and send them to the image storage and processing devices 8 for preprocessing, which extracts the foreground information, i.e., the observer's silhouette. The image storage and processing devices 8 then send the images to the host machine 10, which receives the synchronized images of the observer from different positions and angles, segments this group of images and performs pattern matching to obtain the position and posture of the head and hands. The resulting displacement and angle changes are used as input to control the virtual scene, which is sent to the helmet display. In this way the observer can change viewpoint by turning the head and body, and can interact with the scene by moving and touching with the hands.
In addition, the infrared sensor 5, the sound sensor 6 and the temperature sensor 7 are used for data collection, serving as a record and analysis of scene information that can be used to simulate and restore a scene. The infrared sensor recognizes human body information and can record which people have appeared in the scene; the sound sensor collects a person's natural speech, which after recognition and processing can be converted into commands that make the virtual scene change accordingly; the temperature of the work area collected by the temperature sensor can be stored in the host machine as historical information, from which the scene in a given time period can be reproduced.
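The record-and-reproduce idea above can be sketched as a time-stamped sensor log. This is a minimal illustration only; the names (`Sample`, `SensorLog`, `replay`) and the data format are hypothetical, since the patent does not specify them.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    t: float        # timestamp in seconds (hypothetical format)
    temp_c: float   # work-space temperature reading

class SensorLog:
    def __init__(self):
        self.samples = []

    def record(self, t, temp_c):
        # store a reading as historical information
        self.samples.append(Sample(t, temp_c))

    def replay(self, t0, t1):
        # return the readings from [t0, t1] so the scene in that
        # time period can be reconstructed
        return [s for s in self.samples if t0 <= s.t <= t1]

log = SensorLog()
log.record(0.0, 21.5)
log.record(1.0, 21.6)
log.record(2.0, 21.8)
window = log.replay(0.5, 2.0)   # readings from the period to reproduce
```

The same structure would apply to the infrared and sound sensor streams, with a different payload per sample.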
The utility model operates as follows:
Step 1, camera calibration;
Step 2, image acquisition: the observer stands in the space inside the main body bracket and watches a virtual scene in real time through the helmet display; the observer can move freely in the space, including turning the head, walking freely and rotating, while the plurality of cameras synchronously collects images of the observer from different angles;
Step 3, image preprocessing: the images collected by the cameras are sent to the image storage and processing devices for storage and preprocessing;
Step 4, image information extraction: the preprocessed images are sent to the host machine, which analyzes, matches and computes on the received images and extracts the key information;
Step 5, solving for the person's position, posture and motion information;
Step 6, updating the virtual scene: according to the changes in displacement and posture, the virtual scene in the helmet display is controlled to change accordingly, realizing virtual display and interaction.
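The per-frame loop formed by steps 2 through 6 can be sketched as below. Every function here is a hypothetical stand-in for the corresponding hardware or vision stage, not the patent's actual implementation; only the control flow is taken from the text.

```python
import numpy as np

def acquire_frames(n_cameras=4):
    # Step 2 stand-in: synchronized, externally triggered capture
    # (blank frames here; real frames would come from the cameras)
    return [np.zeros((480, 640), dtype=np.uint8) for _ in range(n_cameras)]

def preprocess(frame, background):
    # Step 3 stand-in: foreground extraction against a stored background,
    # yielding a boolean silhouette mask
    return np.abs(frame.astype(np.int16) - background.astype(np.int16)) > 20

def solve_pose(masks):
    # Steps 4-5 stand-in: segmentation, stereo matching and triangulation
    # would go here; returns head position and orientation
    return {"head_pos": np.zeros(3), "head_rot": np.zeros(3)}

def update_display(pose):
    # Step 6 stand-in: re-render the helmet-display scene from the new viewpoint
    return "render from {}".format(pose["head_pos"])

background = np.zeros((480, 640), dtype=np.uint8)
frames = acquire_frames()
masks = [preprocess(f, background) for f in frames]
pose = solve_pose(masks)
command = update_display(pose)
```

In the device, this loop would run once per externally triggered capture, with step 1 (calibration) performed once beforehand.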
The process by which the host machine analyzes, matches and computes, extracts the key information and solves in real time for the person's position, posture and motion information is as follows:
First, image segmentation and extraction: image segmentation is performed on the group of input images, and the silhouettes of the head and the hands are extracted from the person's silhouette information in each image;
Second, stereo matching: according to the principle of stereo matching, feature-point matching is performed on the head and hands across this group of segmented and extracted images, and the depth information and three-dimensional coordinates of the feature points are computed;
Third, difference computation: because the images are collected in real time, the three-dimensional coordinates computed from the image frames collected at the current moment are differenced with those computed from the previous group of frames, yielding the displacement and rotation of the head and the motion of the hands.
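The stereo-matching and difference steps can be illustrated with linear (DLT) triangulation of one matched feature point from two calibrated views, followed by a frame-to-frame difference. This is a sketch under the standard pinhole model; the patent does not fix the exact algorithm, and the camera matrices and image points below are made up for illustration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # DLT triangulation: each row enforces one component of x × (P X) = 0
    # for the 3x4 projection matrices P1, P2 and image points x1, x2.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector = homogeneous 3D point
    return X[:3] / X[3]

# Two normalized cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Matched head feature in the previous and current frame groups
X_prev = triangulate(P1, P2, (0.0, 0.0), (-0.2, 0.0))
X_curr = triangulate(P1, P2, (0.02, 0.0), (-0.18, 0.0))
displacement = X_curr - X_prev   # the difference step: head motion between frames
```

With more than two cameras, the same linear system simply gains two rows per extra view, which improves robustness of the recovered coordinates.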
The image storage and processing devices store and preprocess as follows:
First, the images of the observer synchronously collected from different angles by the plurality of cameras are stored in the image storage and processing devices;
Second, the image storage and processing devices perform foreground extraction on the stored images, removing the background to obtain the target object, i.e., the person's silhouette information.
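One simple way to do the foreground extraction described above is frame differencing against a stored background image; the patent does not specify the method, so the threshold and synthetic data below are illustrative only.

```python
import numpy as np

def extract_foreground(frame, background, threshold=25):
    # Pixels that differ from the stored background by more than the
    # threshold are classified as foreground (the observer's silhouette).
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold   # boolean silhouette mask

# Synthetic example: a flat background and a frame with a bright 2x2 region
background = np.full((4, 4), 10, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200        # stand-in for the observer
mask = extract_foreground(frame, background)
```

In practice a per-pixel background model updated over time would be more robust than a single stored frame, but the silhouette output is the same kind of mask.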
The camera calibration method is as follows: a uniform black-and-white chessboard template is used as the target object; after the cameras are fixed, images of the template at different positions and angles are collected, corner extraction and computation are carried out on this group of images, and the model of each camera, i.e., its intrinsic and extrinsic parameters, is thereby obtained.
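The "model of the camera" that this chessboard procedure recovers is the pinhole model below: an intrinsic matrix K plus an extrinsic rotation R and translation t (in OpenCV, `findChessboardCorners` and `calibrateCamera` are a standard way to estimate them). The numeric values here are made-up placeholders, shown only so the projection can be checked.

```python
import numpy as np

# Hypothetical calibration result: intrinsics (focal lengths in pixels and
# principal point) plus extrinsics mapping world points into the camera frame.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # camera axes aligned with the world axes
t = np.array([0.0, 0.0, 2.0])      # world origin 2 m in front of the camera

def project(X_world):
    # pinhole projection: x = K (R X + t), then divide by depth
    x = K @ (R @ X_world + t)
    return x[:2] / x[2]            # pixel coordinates

pixel = project(np.array([0.0, 0.0, 0.0]))
# the world origin lands on the principal point (320, 240)
```

Calibration inverts this relationship: given many known chessboard corners and their detected pixel positions, it solves for K, R and t per camera.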
The above is only a preferred embodiment of the utility model and is not intended to limit the scope of protection of the utility model.
Claims (5)
1. A three-dimensional uncalibrated display interactive device, characterized in that it comprises:
a main body bracket, a helmet display, an image storage and processing device, and a host machine capable of high-speed computation and real-time processing;
wherein the main body bracket is provided with a plurality of cloud platforms for mounting cameras, and a camera is mounted on each cloud platform.
2. The three-dimensional uncalibrated display interactive device according to claim 1, characterized in that a sensor fixture is also mounted on the main body bracket, on which an infrared sensor, a sound sensor or a temperature sensor can be installed; after the infrared sensor recognizes human body information, the sound sensor collects a person's natural speech, or the temperature sensor collects the temperature of the work space, the data is input to the host machine to be recognized, processed and stored, and can serve as a record and analysis of scene information for simulating and restoring a scene.
3. The three-dimensional uncalibrated display interactive device according to claim 1 or 2, characterized in that the bracket and the cloud platforms are both movable.
4. The three-dimensional uncalibrated display interactive device according to claim 1 or 2, characterized in that each of the plurality of cameras is connected to one image storage and processing device.
5. The three-dimensional uncalibrated display interactive device according to claim 1 or 2, characterized in that the helmet display is a closed helmet display.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2010207008266U CN202003298U (en) | 2010-12-27 | 2010-12-27 | Three-dimensional uncalibrated display interactive device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN202003298U true CN202003298U (en) | 2011-10-05 |
Family
ID=44706031
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2010207008266U Expired - Fee Related CN202003298U (en) | 2010-12-27 | 2010-12-27 | Three-dimensional uncalibrated display interactive device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN202003298U (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013139181A1 (en) * | 2012-03-19 | 2013-09-26 | 乾行讯科(北京)科技有限公司 | User interaction system and method |
CN104298244A (en) * | 2013-07-17 | 2015-01-21 | 刘永 | Industrial robot three-dimensional real-time and high-precision positioning device and method |
CN106303246A (en) * | 2016-08-23 | 2017-01-04 | 刘永锋 | Real-time video acquisition methods based on Virtual Realization |
CN107145822A (en) * | 2017-03-24 | 2017-09-08 | 深圳奥比中光科技有限公司 | Deviate the method and system of user's body feeling interaction demarcation of depth camera |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101231752B (en) | Mark-free true three-dimensional panoramic display and interactive apparatus | |
CA3068645C (en) | Cloud enabled augmented reality | |
Zollmann et al. | Augmented reality for construction site monitoring and documentation | |
CN105094335B (en) | Situation extracting method, object positioning method and its system | |
Rabbi et al. | A survey on augmented reality challenges and tracking | |
CN104050859A (en) | Interactive digital stereoscopic sand table system | |
CN109828658B (en) | Man-machine co-fusion remote situation intelligent sensing system | |
CN104252712B (en) | Video generation device, image generating method and recording medium | |
CN104536579A (en) | Interactive three-dimensional scenery and digital image high-speed fusing processing system and method | |
CN102981616A (en) | Identification method and identification system and computer capable of enhancing reality objects | |
CN102848389A (en) | Realization method for mechanical arm calibrating and tracking system based on visual motion capture | |
CN103019024A (en) | System for realtime and accurate observation and analysis of table tennis rotating and system operating method | |
CN103064514A (en) | Method for achieving space menu in immersive virtual reality system | |
Mutis et al. | Challenges and enablers of augmented reality technology for in situ walkthrough applications. | |
CN202003298U (en) | Three-dimensional uncalibrated display interactive device | |
CN102184342B (en) | Virtual-real fused hand function rehabilitation training system and method | |
CN114140528A (en) | Data annotation method and device, computer equipment and storage medium | |
US10964104B2 (en) | Remote monitoring and assistance techniques with volumetric three-dimensional imaging | |
KR102199772B1 (en) | Method for providing 3D modeling data | |
CN109426336A (en) | A kind of virtual reality auxiliary type selecting equipment | |
CN114020978B (en) | Park digital roaming display method and system based on multi-source information fusion | |
Kluckner et al. | AVSS 2011 demo session: Construction site monitoring from highly-overlapping MAV images | |
Czesak et al. | Fusion of pose and head tracking data for immersive mixed-reality application development | |
CN117369233A (en) | Holographic display method, device, equipment and storage medium | |
WO2022129646A1 (en) | Virtual reality environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
DD01 | Delivery of document by public notice |
Addressee: Han Xu Document name: Notification to Pay the Fees |
|
DD01 | Delivery of document by public notice |
Addressee: Han Xu Document name: Notification of Termination of Patent Right |
|
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20111005 Termination date: 20121227 |