CN103279942A - Control method for realizing virtual 3D (3-dimension) display on 2D (2-dimension) screen on basis of environment sensor - Google Patents


Info

Publication number
CN103279942A
Authority
CN
China
Prior art keywords
screen
user
coordinate
eyes
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013101237081A
Other languages
Chinese (zh)
Inventor
马冲
陈震
陈大伟
闫昭
李建欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN2013101237081A
Publication of CN103279942A
Legal status: Pending (Current)

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a control method for realizing virtual 3D (3-dimension) display on a 2D (2-dimension) screen on the basis of an environment sensor. The method is characterized in that the environment sensor embedded in the electronic equipment is used to compute the position of the user's eyes relative to the display screen; a coordinate transformation is carried out through an equation to obtain the image coordinates of the objects shown on the display screen; and the results are passed to a graphics program interface to be rendered and displayed, so that an operator at the corresponding observation position perceives a virtual 3D imaging effect. The method solves the problem that the 3D visual experience cannot be popularized because of the manufacturing cost of 3D screens and equipment limitations in the existing market, makes full use of existing 2D planar electronic display devices, saves cost, and at the same time gives the user a better visual experience.

Description

A control method for realizing virtual 3D display on a 2D screen based on an environment sensor
Technical field
The present invention relates to screen display control methods, and in particular to a control method for realizing virtual 3D display on a 2D screen based on an environment sensor.
Background art
With the continuous development of the computer field in today's information age, household computers are no longer confined to flat display modes. From the screening of 3D films to the constant emergence of naked-eye 3D screens, the computer field is approaching a new turning point in the move from 2D to 3D.
The applications of 3D technology in the market clearly reflect people's demand for the 3D experience. First, 3D technology differs from conventional two-dimensional planar display technology: by building vivid visual effects through improvements in equipment and display quality, it gives the user a genuinely favorable experience. In addition, the applications of 3D in the film and gaming markets show that 3D technology can provide an excellent entertainment experience.
However, although 3D technology has considerable room for development in future production and daily life, analysis shows that existing 3D screens still have various drawbacks: (1) special glasses must be worn to achieve the 3D visual effect, which is inconvenient and causes eye discomfort when worn for a long time; (2) the visual effect of the most advanced naked-eye 3D technology is relatively poor, and its production cost and price are high; (3) 3D technology requires high-end hardware support, such as sound cards and video cards, so maintenance costs are high; (4) because of technological and hardware constraints, 3D technology cannot be applied on the small screens of mobile terminals.
Because 2D screens, as the mainstream display devices, hold the larger market share, and because of the many factors affecting the application of 3D technology, 3D technology has not yet been widely popularized. How to use the large installed base of two-dimensional display hardware to realize virtual 3D display has therefore become a problem.
3D street painting (English: 3D Street Painting) originates from Western roadside culture and is translated domestically as 3D street-corner painting, street-corner ground picture, street-corner stereogram, three-dimensional street picture, urban painting, city three-dimensional stereograph, and so on. Compared with ordinary pictures, the most distinctive feature of 3D street painting is that its perspective principle is different from that of ordinary drawing. In ordinary drawing, the perspective arrangement of the picture does not take the viewer's standing point as a reference; the composition of the picture is based solely on the perspective of the picture itself. 3D street painting, by contrast, takes the viewer's standing point as its reference: the composition of the whole image takes the viewer's eye as the visual origin, so that a 3D street painting is not merely a picture but becomes a real visual space into which the viewer can be absorbed. Standing at the best viewpoint chosen in the original design and viewing through a camera gives the best visual effect, whereas viewing the picture with the naked eye from other angles shows only a stretched, distorted image; only through the camera can the picture be seen normally. This contrast gives 3D street painting a strong visual impact, arousing the onlookers' visual resonance and deepening their impression. However, limited by its own constraints, a 3D street painting allows the viewer to observe a stereoscopic but still image only from a fixed angle.
Summary of the invention
The technical problem solved by the present invention: to overcome the deficiencies of the prior art by providing a control method for virtual 3D display on a 2D screen based on an environment sensor (hereinafter referred to as the 2D virtual 3D method). The method solves the problem that the 3D visual experience cannot currently be popularized because of the cost of 3D screens and device limitations, makes full use of existing 2D planar electronic display devices, saves cost, and at the same time gives the user a better visual experience.
The technical solution of the present invention: a control method for virtual 3D display on a 2D screen based on an environment sensor (hereinafter referred to as the 2D virtual 3D method), characterized in that:
Step 1) Establish the screen rectangular coordinate system.
Step 2) Compute all or some key points of the 3D model to be drawn in the current frame, perform coordinate transformations to obtain their coordinates in the screen rectangular coordinate system, and obtain the "object coordinate group" composed of the "object coordinates".
Step 3) Use the environment sensor to compute the position of the user's eyes relative to the screen, including the deflection angle and distance of the user's eyes relative to the screen. Usable environment sensors include, but are not limited to, a gravity sensing chip and a camera.
Step 4) Use the environment sensor to obtain the distance from the user's eyes to the origin of the screen coordinate system. Usable environment sensors include a displacement sensor or any other suitable environment sensor; if no suitable environment sensor is available, an agreed estimate based on statistical analysis can be used.
Step 5) Use the coordinate normal vector of the user's eyes in the screen rectangular coordinate system obtained in step 3 and the distance obtained in step 4 to compute the coordinates of the eyes in the screen rectangular coordinate system.
Step 6) Combine the object coordinates from step 2 and the user's eye coordinates from step 5 to obtain the "image coordinates" through a coordinate transformation, and substitute them into the graphics program interface for rendering and display.
Step 7) Process the next frame image by repeating steps 1 to 6 (an illustrative sketch of this per-frame flow is given after the preferred features below).
Preferably, the user's initial position can be user-defined or identified automatically.
Preferably, the user's eye coordinates obtained in step 5 for the previous frame or earlier frames are used to correct the current coordinates, reducing the jitter caused by environment sensor errors.
Preferably, the deflection angle of the user's eyes relative to the screen in step 2 is strengthened or weakened toward the screen normal direction, so that the picture is displayed more reasonably and the stereoscopic effect observed by the user is more realistic.
Preferably, the movement of the user's viewpoint is predicted, with pre-processing computation and caching, to improve performance.
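The per-frame flow of steps 2 to 6 can be sketched as follows. This is a minimal illustration only, not the claimed implementation: it assumes the object coordinates, the unit eye normal vector, and the eye distance have already been obtained as described above, and the function and parameter names as well as the 0.5/0.5 smoothing weights are hypothetical.

def process_frame(object_coords, eye_direction, eye_distance, previous_eye=None):
    # object_coords : step 2 "object coordinate group", already in screen coordinates
    # eye_direction : unit coordinate normal vector (vX, vY, vZ) of the eyes, from step 3
    # eye_distance  : distance d from the eyes to the screen origin, from step 4
    # previous_eye  : optional eye coordinates of the previous frame (preferred feature)
    vX, vY, vZ = eye_direction
    viewer = [vX * eye_distance, vY * eye_distance, vZ * eye_distance]        # step 5
    if previous_eye is not None:                                              # damp sensor jitter
        viewer = [0.5 * new + 0.5 * old for new, old in zip(viewer, previous_eye)]
    viewerX, viewerY, viewerZ = viewer
    image_coords = []
    for objectX, objectY, objectZ in object_coords:                           # step 6
        denom = abs(viewerZ - objectZ)
        newX = -(viewerX - objectX) * objectZ / denom + objectX
        newY = -(viewerY - objectY) * objectZ / denom + objectY
        image_coords.append((newX, newY, objectZ))
    return image_coords, viewer   # image coordinates are then rendered via the graphics interface

The returned viewer coordinates can be fed back as previous_eye when the next frame is processed, matching step 7 and the preferred jitter-reduction feature.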
The advantages of the present invention compared with the prior art are:
(1) The present invention uses the environment sensor embedded in the electronic equipment to compute the position of the user's eyes relative to the display screen, obtains the "image coordinates" of the objects shown on the display screen through a coordinate transformation equation, and substitutes the result into the graphics program interface for rendering and display, so that an operator at the corresponding observation position perceives a virtual 3D imaging effect. This solves the problem that the 3D visual experience cannot currently be popularized because of the cost of 3D screens and device limitations, makes full use of existing 2D planar electronic display devices, saves cost, and at the same time gives the user a better visual experience.
(2) The present invention directly uses an existing 2D screen together with an environment sensor to display images with a 3D effect; it can be described as a convenient method of virtually realizing a 3D effect on a two-dimensional planar display device. Mobile devices such as smart phones and tablet computers are now found in countless households and provide a powerful platform for the 2D virtual 3D method. Since no additional new devices are needed, this brand-new technique has a very wide range of applicable fields.
First, as a virtual 3D display method it does not change the general functions of ordinary two-dimensional flat-panel display devices, that is, it does not affect the normal display of 2D pictures. Because virtual 3D brings the user a lifelike visual experience, it can be applied mainly to improving the user's entertainment experience.
The method can also be used to transform existing 2D visual pictures. Because the method operates between the graphics program interface and the 3D model, it has good compatibility; minor modifications on the basis of an existing game engine are enough to achieve the desired effect with the present invention.
The method can be used in advertising and film. Compared with an ordinary two-dimensional billboard, dynamic advertisements using the virtual 3D method can be widely deployed on mobile devices such as mobile phones and tablet computers and bring viewers a fresh visual experience.
The method can also be used in many public places. For example, in museums, gymnasiums, and other public venues, comprehensive, three-dimensional information can be conveyed to people through the 2D virtual 3D method, so that people are no longer confined to spatial imagination but obtain visual information intuitively. Compared with ordinary 3D street painting, the method has the advantages of multiple viewing angles and dynamic display.
Description of drawings
Fig. 1 is a flow diagram of the 2D virtual 3D method;
Fig. 2 is a schematic diagram of the core coordinate transform formula.
Embodiment
The technical solution of the present invention is described in more detail below in conjunction with the drawings and specific embodiments.
As shown in Fig. 1, the specific implementation steps of the present invention are as follows:
Ensure that the graphics program interface, the environment sensors, and the other components have all been initialized successfully and are available for normal calls.
Step 1) Establish the screen rectangular coordinate system. The following definition is usually adopted: the axis parallel to the screen plane and pointing visually upward is the X-axis; the axis parallel to the screen plane and pointing visually to the right is the Y-axis; the axis perpendicular to the screen plane and pointing toward the user's side is the Z-axis; and the screen center (generally taken as the centroid) is the origin. The X-axis, Y-axis, and Z-axis form a right-handed coordinate system. Obtain and record the initial data of each environment sensor.
Step 2) Before the 3D model of the current frame to be drawn is passed to the graphics program interface, process it with the 2D virtual 3D method. The graphics program interface refers to, but is not limited to, interfaces such as OpenGL and Direct X. Compute all or some key points of the 3D model to be displayed, perform coordinate transformations to obtain their coordinates in the screen rectangular coordinate system, call these new coordinates the "object coordinates", and let all object coordinates together form the "object coordinate group" of the current frame.
Step 3) Use the environment sensor to compute the position of the user's eyes relative to the screen; this position includes the deflection angle and the distance of the user's eyes relative to the screen. A gravity sensing chip can be used: from the gravity distribution vector of the current state in the screen rectangular coordinate system, compute the "gravity normal vector" (gX, gY, gZ), which serves as a measure of the deflection angle of the screen. Assuming that the position of the user's eyes relative to the ground does not change significantly while moving from the initial position to the current position, by the relativity of motion the coordinate normal vector of the user's eyes in the screen rectangular coordinate system is opposite in direction to (gX, gY, gZ); it is denoted (vX, vY, vZ). Alternatively, a camera that can capture the user's face can be used to obtain an image of the user's face, locate the eyeballs with a recognition algorithm from computer graphics, and compute the coordinate normal vector of the user's eyes relative to the screen. Any other environment sensor that can directly or indirectly obtain the position of the user's eyes can also be used to obtain the coordinate normal vector. If no suitable environment sensor is available, this method cannot be realized; the user is informed of the failure and the method terminates.
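For illustration, the gravity-chip branch of step 3 can be sketched as follows. This is a minimal sketch under stated assumptions: the gravity distribution vector (gX, gY, gZ) is assumed to be reported already in the screen rectangular coordinate system, the vector is normalized here to unit length so that step 5 can scale it by the distance d, and the function name is hypothetical.

import math

def eye_normal_from_gravity(gX, gY, gZ):
    # Step 3 with a gravity sensing chip: by the relativity-of-motion argument above, the
    # coordinate normal vector of the user's eyes points opposite to the gravity vector.
    vX, vY, vZ = -gX, -gY, -gZ
    length = math.sqrt(vX * vX + vY * vY + vZ * vZ)
    if length == 0.0:
        raise ValueError("gravity sensor returned a zero vector")
    # Normalized so that scaling by the distance d in step 5 yields the eye coordinates.
    return vX / length, vY / length, vZ / length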
Step 4) Use the environment sensor to obtain the distance d from the user's eyes to the origin of the screen coordinate system. A displacement sensor can be used to measure the distance from the human body to the sensor, which, after correction or directly as an approximation, is taken as the value of d. Alternatively, a camera that can capture the user's face can be used to obtain an image of the user's face, and the distance from the user's eyes to the screen can be computed from the position of the face or the size of the body in the image. Any other environment sensor that can directly or indirectly measure distance can also be used to obtain d. If no suitable environment sensor is available, the distance from the screen to the human eye can be estimated from statistical analysis; a value of about 25 cm to 40 cm is reasonable, with 30 cm preferred.
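The fallback logic of step 4 can be written as a small sketch; the parameter names are illustrative, and the readings are assumed to be expressed in metres.

def eye_distance(displacement_reading=None, camera_estimate=None):
    # Step 4: prefer a displacement-sensor measurement of the body-to-screen distance,
    # then a camera-based estimate from the face position or imaging size, and finally
    # fall back to the agreed statistical estimate.
    if displacement_reading is not None:
        return displacement_reading
    if camera_estimate is not None:
        return camera_estimate
    return 0.30   # metres; about 25 cm to 40 cm is reasonable, 30 cm preferred, as stated above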
Step 5) Use the coordinate normal vector (vX, vY, vZ) of the user's eyes in the screen rectangular coordinate system obtained in step 3 and the distance d obtained in step 4, and compute mul((vX, vY, vZ), d) as the coordinates (viewerX, viewerY, viewerZ) of the eyes in the screen rectangular coordinate system, where mul() denotes the geometric multiplication of a vector by a scalar, whose result is a vector.
Step 6) For each object coordinate (objectX, objectY, objectZ) in the object coordinate group obtained in step 2, perform a coordinate transformation together with the user's eye coordinates (viewerX, viewerY, viewerZ) to obtain the "image coordinate" (newX, newY, newZ), and replace the corresponding object coordinate from step 2 with the image coordinate. The image coordinate is computed according to the perspective principle so that, after substitution into the graphics program interface for rendering and display, the user watching the screen from the current viewpoint perceives the same stereoscopic effect as a 3D street painting. The computation is the "core coordinate transform formula":
newX=-(viewerX-objectX)*objectZ/abs(viewerZ-objectZ)+objectX
newY=-(viewerY-objectY)*objectZ/abs(viewerZ-objectZ)+objectY
newZ=objectZ
The vertices in the core coordinate transform formula correspond to those defined in Fig. 2 as follows:
1) The user's eye coordinates (viewerX, viewerY, viewerZ).
2) The screen imaging point, i.e., the position of the real pixel on the screen.
3) The object coordinates (objectX, objectY, objectZ).
4) The image coordinates (newX, newY, newZ).
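A direct transcription of the core coordinate transform formula, followed by a small check with hypothetical numbers (the function name and the example values are not part of the disclosure):

def image_coordinate(objectX, objectY, objectZ, viewerX, viewerY, viewerZ):
    # Core coordinate transform formula from step 6.
    denom = abs(viewerZ - objectZ)
    newX = -(viewerX - objectX) * objectZ / denom + objectX
    newY = -(viewerY - objectY) * objectZ / denom + objectY
    newZ = objectZ
    return newX, newY, newZ

# Hypothetical check: a point 5 cm behind the screen (objectZ = -0.05) seen from an eye
# position of (0.10, 0.00, 0.30) is shifted to newX of about 0.0143, while a point lying
# exactly in the screen plane (objectZ = 0) stays where it is; this viewpoint-dependent
# shift is the parallax that the user perceives as depth.
print(image_coordinate(0.0, 0.0, -0.05, 0.10, 0.00, 0.30))
print(image_coordinate(0.0, 0.0, 0.0, 0.10, 0.00, 0.30))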
Step 7) Process the next frame image by repeating steps 1 to 6.
The parts of the present invention that are not described in detail belong to techniques well known to those skilled in the art.
Although the illustrative embodiments of the present invention have been described above so that those skilled in the art can understand the present invention, it should be clear that the invention is not limited to the scope of these embodiments. To those skilled in the art, all variations that remain within the spirit and scope of the present invention as defined and determined by the appended claims are apparent, and all inventions and creations that make use of the inventive concept fall within the scope of protection.

Claims (3)

1. A control method for realizing virtual 3D display on a 2D screen based on an environment sensor, characterized in that:
Step 1) Establish the screen rectangular coordinate system;
Step 2) Compute all or some key points of the 3D model to be drawn in the current frame, perform coordinate transformations to obtain their coordinates in the screen rectangular coordinate system, and obtain the object coordinate group composed of the object coordinates;
Step 3) Use the environment sensor to compute the position of the user's eyes relative to the screen, including the deflection angle and distance of the user's eyes relative to the screen, the environment sensor comprising a gravity sensing chip and a camera;
Step 4) Use the environment sensor to obtain the distance from the user's eyes to the origin of the screen coordinate system, the environment sensor comprising a displacement sensor; if no suitable environment sensor is available, an agreed estimate based on statistical analysis can be used;
Step 5) Use the coordinate normal vector of the user's eyes in the screen rectangular coordinate system obtained in step 3) and the distance obtained in step 4) to compute the coordinates of the eyes in the screen rectangular coordinate system;
Step 6) Combine the object coordinates from step 2) and the user's eye coordinates from step 5) to obtain the image coordinates through a coordinate transformation, and substitute them into the graphics program interface for rendering and display;
Step 7) Process the next frame image by repeating steps 1) to 6).
2. The control method for realizing virtual 3D display on a 2D screen based on an environment sensor according to claim 1, characterized in that: in step 2) the deflection angle of the user's eyes relative to the screen is strengthened or weakened toward the screen normal direction, so that the picture is displayed more reasonably and the stereoscopic effect observed by the user is more realistic.
3. The control method for realizing virtual 3D display on a 2D screen based on an environment sensor according to claim 1, characterized in that: in step 5) the user's eye coordinates obtained for the previous frame or earlier frames are used to correct the current coordinates, reducing the jitter caused by environment sensor errors.
CN2013101237081A 2013-04-10 2013-04-10 Control method for realizing virtual 3D (3-dimension) display on 2D (2-dimension) screen on basis of environment sensor Pending CN103279942A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013101237081A CN103279942A (en) 2013-04-10 2013-04-10 Control method for realizing virtual 3D (3-dimension) display on 2D (2-dimension) screen on basis of environment sensor

Publications (1)

Publication Number Publication Date
CN103279942A true CN103279942A (en) 2013-09-04

Family

ID=49062449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013101237081A Pending CN103279942A (en) 2013-04-10 2013-04-10 Control method for realizing virtual 3D (3-dimension) display on 2D (2-dimension) screen on basis of environment sensor

Country Status (1)

Country Link
CN (1) CN103279942A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663812A (en) * 2012-03-27 2012-09-12 南昌航空大学 Direct method of three-dimensional motion detection and dense structure reconstruction based on variable optical flow

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ADRIAN JACOBS ET AL.: "2D/3D Switchable Displays", 《シャープ技報》 (Sharp Technical Journal), 30 April 2003 (2003-04-30), pages 15-18 *
杨关良: "Research on three-dimensional information extraction based on two-dimensional images" (基于二维图像的三维信息提取研究), 《海军工程大学学报》, 31 August 2004 (2004-08-31), pages 39-42 *
陈君等: "Stereoscopic rendering of planar images based on digital image processing" (基于数字图像处理的平面图像立体化技术), 《光电子激光》, 31 December 2005 (2005-12-31), pages 19-22 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105745955B (en) * 2013-11-15 2019-06-04 微软技术许可有限责任公司 Protection privacy in network-based immersion augmented reality
CN105745955A (en) * 2013-11-15 2016-07-06 微软技术许可有限责任公司 Protecting privacy in web-based immersive augmented reality
CN105282532A (en) * 2014-06-03 2016-01-27 天津拓视科技有限公司 3D display method and device
CN105282532B (en) * 2014-06-03 2018-06-22 天津拓视科技有限公司 3D display method and apparatus
WO2017024626A1 (en) * 2015-08-13 2017-02-16 深圳市华星光电技术有限公司 Naked-eye 3d imaging method and system
US9900575B2 (en) 2015-08-13 2018-02-20 Shenzhen China Star Optoelectronics Technology Co., Ltd. Naked-eye 3D image forming method and system
CN105354874A (en) * 2015-11-05 2016-02-24 韩东润 Stereoscopic display method and apparatus for digital material
CN105354874B (en) * 2015-11-05 2019-11-08 韩东润 A kind of stereo display method and device of numeric fodder
CN106873300A (en) * 2016-12-30 2017-06-20 北京光年无限科技有限公司 Towards the Virtual Space projecting method and device of intelligent robot
CN106873300B (en) * 2016-12-30 2019-12-24 北京光年无限科技有限公司 Virtual space projection method and device for intelligent robot
CN108076355A (en) * 2017-12-26 2018-05-25 百度在线网络技术(北京)有限公司 Video playing control method and device
CN108182659A (en) * 2018-02-01 2018-06-19 周金润 A kind of bore hole 3D display technology based on viewpoint tracking, single-view relief painting
CN108304071A (en) * 2018-02-02 2018-07-20 惠州学院 A method of interactive mode 2.5D is realized based on eye tracking
CN108269232A (en) * 2018-02-24 2018-07-10 夏云飞 A kind of method for transformation of bore hole 3D pictures and the conversion method of bore hole 3D panoramic pictures

Similar Documents

Publication Publication Date Title
CN103279942A (en) Control method for realizing virtual 3D (3-dimension) display on 2D (2-dimension) screen on basis of environment sensor
KR100953931B1 (en) System for constructing mixed reality and Method thereof
CN103543830B (en) Method for mapping human skeleton points to virtual three-dimensional space points in three-dimensional display
CN102411474B (en) Mobile terminal and method of controlling operation of the same
CN107018336A (en) The method and apparatus of image procossing and the method and apparatus of Video processing
CN102968809A (en) Method for realizing virtual information marking and drawing marking line in enhanced practical field
CN104504671A (en) Method for generating virtual-real fusion image for stereo display
CN102281455A (en) Image display apparatus, image display system, and image display method
CN103959340A (en) Graphics rendering technique for autostereoscopic three dimensional display
CN104656880B (en) A kind of writing system and method based on intelligent glasses
CN109445598B (en) Augmented reality system device based on vision
CN102005062A (en) Method and device for producing three-dimensional image for three-dimensional stereo display
CN105278663A (en) Augmented reality house experience system
CN110488981A (en) Mobile phone terminal VR scene interactivity formula display methods based on cloud rendering
IL299465A (en) Object recognition neural network for amodal center prediction
CN114419226A (en) Panorama rendering method and device, computer equipment and storage medium
CN103310045B (en) One utilizes augmented reality to carry out the large molecule three-dimensional visualization method of crystal
CN106683152A (en) Three-dimensional visual sense effect simulation method and apparatus
CN108205823A (en) MR holographies vacuum experiences shop and experiential method
CN205812280U (en) A kind of virtual implementing helmet
CN110060349A (en) A method of extension augmented reality head-mounted display apparatus field angle
CN206115390U (en) Virtual roaming device in digit coastal city based on virtual reality helmet
CN206002838U (en) 360 degree of phantom imaging systems based on body feeling interaction
CN107526566A (en) The display control method and device of a kind of mobile terminal
CN107564066B (en) Combined calibration method for virtual reality glasses and depth camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20160914

C20 Patent right or utility model deemed to be abandoned or is abandoned