KR102024863B1 - Method and apparatus for processing virtual world - Google Patents
- Publication number
- KR102024863B1 (application KR1020130017404A)
- Authority
- KR
- South Korea
- Prior art keywords
- sensor
- virtual world
- information
- image sensor
- feature point
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42202—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] environmental sensors, e.g. for detecting temperature, luminosity, pressure, earthquakes
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4223—Cameras
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4316—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/4722—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
- H04N21/4725—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/65—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
Abstract
A virtual world processing apparatus and method are disclosed. According to exemplary embodiments, interaction between the real world and the virtual world may be realized by transferring information sensed about a captured image of the real world to the virtual world, using the image sensor characteristic, which is information on the characteristics of the image sensor.
Description
The following embodiments relate to an apparatus and a method for processing a virtual world, and more particularly, to an apparatus and a method for applying sensing information measured by an image sensor to a virtual world.
Recently, interest in immersive games has increased. At its "E3 2009" press conference, Microsoft introduced "Project Natal", which combines its Xbox 360 game console with a separate sensor device consisting of a depth/color camera and a microphone array to provide full-body motion capture, facial recognition, and speech recognition, allowing users to interact with a virtual world without a controller. Similarly, Sony released the "Wand", which applies position/direction sensing technology combining a color camera, a marker, and an ultrasonic sensor to its PlayStation 3 game console, so that a user can interact with the virtual world by inputting the motion trajectory of a controller.
The interaction between the real world and the virtual world has two directions. The first is reflecting data obtained from a sensor in the real world into the virtual world; the second is reflecting data obtained from the virtual world into the real world through an actuator.
The present specification presents a new apparatus and method for applying information sensed from the real world using an environmental sensor to a virtual world.
According to one or more exemplary embodiments, an apparatus for processing a virtual world may include: a receiver configured to receive, from an image sensor, sensing information about a captured image and a sensor characteristic, which is information on the characteristics of the image sensor; a processor configured to generate control information for controlling an object of a virtual world based on the sensing information and the sensor characteristic; and a transmission unit configured to transmit the control information to the virtual world.
According to another aspect of the present invention, there is provided a method of processing a virtual world, the method including: receiving, from an image sensor, sensing information about a captured image and a sensor characteristic, which is information on the characteristics of the image sensor; generating control information for controlling an object of a virtual world based on the sensing information and the sensor characteristic; and transmitting the control information to the virtual world.
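For illustration only, the following Python sketch outlines the receive-generate-transmit flow described above. It is not part of the claimed embodiments; all class, field, and method names here are assumptions introduced for this sketch.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SensorCharacteristic:
    supported_resolutions: List[Tuple[int, int]]
    focal_length_range: Tuple[float, float]
    max_feature_points: int

@dataclass
class SensingInformation:
    image_uri: str
    focal_length: float
    feature_points: List[Tuple[float, float, float]]

@dataclass
class ControlInformation:
    feature_points: List[Tuple[float, float, float]]

class VirtualWorldProcessingApparatus:
    def receive(self, info: SensingInformation, characteristic: SensorCharacteristic) -> None:
        # Store what the image sensor sent: sensing information plus the
        # sensor characteristic describing the sensor itself.
        self.info = info
        self.characteristic = characteristic

    def generate_control_information(self) -> ControlInformation:
        # Keep no more feature points than the characteristic says the
        # sensor can detect.
        points = self.info.feature_points[: self.characteristic.max_feature_points]
        return ControlInformation(feature_points=points)

    def transmit(self, virtual_world) -> None:
        # "virtual_world" is any object exposing an apply() hook (assumed).
        virtual_world.apply(self.generate_control_information())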
FIG. 1 is a diagram illustrating a virtual world processing system that controls information exchange between a real world and a virtual world, according to an exemplary embodiment.
FIG. 2 is a diagram for describing an augmented reality system, according to an exemplary embodiment.
FIG. 3 is a diagram illustrating a configuration of a virtual world processing apparatus, according to an exemplary embodiment.
FIG. 4 is a flowchart illustrating a virtual world processing method, according to an exemplary embodiment.
Hereinafter, embodiments according to the present invention will be described in detail with reference to the accompanying drawings. However, the present invention is neither limited to nor restricted by the embodiments. Like reference numerals in the drawings denote like elements.
FIG. 1 is a diagram illustrating a virtual world processing system that controls information exchange between a real world and a virtual world, according to an exemplary embodiment.
Referring to FIG. 1, a virtual world processing system according to an embodiment of the present invention may include a real world 110, a virtual world processing apparatus, and a virtual world 140.
The real world 110 may denote a sensor that senses information about the real world, or a sensory device that realizes information about the virtual world 140 in the real world.
In addition, the virtual world 140 may denote a virtual world realized by a program, or a sensory media reproducing apparatus that reproduces sensory media containing sensory effect information realizable in the real world.
According to an embodiment, a sensor may sense information on a user's motion, state, intention, shape, and the like in the real world 110 and transmit the sensed information to the virtual world processing apparatus.
According to an embodiment, the sensor may transmit a sensor capability 101, a sensor adaptation preference 102, and sensed information 103 to the virtual world processing apparatus.
The sensor capability 101 is information about the capability of the sensor, the sensor adaptation preference 102 is information about the user's preferences with respect to that capability, and the sensed information 103 is the information the sensor has sensed about the real world 110.
The virtual world processing apparatus according to an embodiment may include an adaptation real world to virtual world (adaptation RV) 120, virtual world information (VWI) 104, and an adaptation real world to virtual world/virtual world to real world (adaptation RV/VR) 130.
The adaptation RV 120 may convert the sensed information 103, sensed by the sensor about the real world 110, into information applicable to the virtual world 140, based on the sensor capability 101 and the sensor adaptation preference 102.
According to an embodiment, the adaptation RV 120 may update the VWI 104 using the converted sensed information.
The VWI 104 is information about a virtual object of the virtual world 140.
The adaptation RV/VR 130 encodes the updated VWI 104 to generate virtual world effect metadata (VWEM) 107, which is metadata about the effect applied to the virtual world 140. According to an embodiment, the adaptation RV/VR 130 may generate the VWEM 107 based on virtual world capabilities (VWC) 105 and virtual world preferences (VWP) 106.
The VWC 105 is information about the characteristics of the virtual world 140, and the VWP 106 is information about the user's preferences regarding those characteristics.
In addition, the adaptation RV/VR 130 may send the VWEM 107 to the virtual world 140. Here, the VWEM 107 may be applied to the virtual world 140 so that an effect corresponding to the sensed information 103 is realized in the virtual world 140.
According to one aspect of the present invention, an effect event occurring in the virtual world 140 may be driven by a sensory device, that is, an actuator of the real world 110.
The virtual world 140 may generate sensory effect metadata (SEM) 111 by encoding sensory effect information, which is information about the effect event occurring in the virtual world 140. According to an embodiment, the virtual world 140 may include a sensory media reproducing apparatus that reproduces sensory media containing the sensory effect information.
The adaptation RV/VR may generate a sensory device command (SDCmd) 115 for controlling the operation of the sensory device of the real world 110, based on the SEM 111. According to an embodiment, the SDCmd 115 may be generated based on a sensory device capability (SDCap) 113 and a user sensory preference (USP) 114.
The SDCap 113 is information on the characteristics of the sensory device. In addition, the USP 114 is information on the user's preferences regarding the effects realized by the sensory device.
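As an illustrative sketch only, the following Python function shows one plausible way an RV engine could adapt a raw sensed value using the sensor capability 101 and the sensor adaptation preference 102. The linear scaling rule, the function name, and the parameters are assumptions introduced here, not rules taken from this specification.

def adapt_rv(sensed_value: float,
             sensor_min: float, sensor_max: float,
             vw_min: float, vw_max: float,
             preference_scale: float = 1.0) -> float:
    # Normalize the sensed value into [0, 1] using the sensor capability range.
    t = (sensed_value - sensor_min) / (sensor_max - sensor_min)
    # Apply the user's adaptation preference, clamped to the valid range.
    t = min(max(t * preference_scale, 0.0), 1.0)
    # Map the normalized value into the virtual world's range.
    return vw_min + t * (vw_max - vw_min)

# Example: a sensed focal length of 35 mm on a 10-100 mm lens maps into a
# virtual camera zoom range of 0-10.
print(adapt_rv(35.0, 10.0, 100.0, 0.0, 10.0))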
FIG. 2 is a diagram for describing an augmented reality system, according to an exemplary embodiment.
Referring to FIG. 2, the augmented reality system according to an exemplary embodiment may acquire an image representing the real world by using an AR camera.
Here, the AR camera according to an embodiment may include a real-time media acquisition device 220 and a device for acquiring sensing information associated with the captured image.
The real-time media acquisition device 220 may acquire, in real time, media such as the image representing the real world.
The augmented reality system may extract feature points associated with the boundary surfaces in the acquired image and match a virtual object to the image based on the extracted feature points.
The augmented reality system may then display the image of the real world and the matched virtual object together, providing the user with an augmented image in which the real world and the virtual world coexist.
Hereinafter, a configuration of a virtual world processing apparatus according to an embodiment will be described in detail with reference to FIG. 3.
FIG. 3 is a diagram illustrating a configuration of a virtual world processing apparatus according to an exemplary embodiment.
Referring to FIG. 3, the virtual world processing apparatus according to an exemplary embodiment may include a reception unit, a processing unit, and a transmission unit.
The reception unit receives, from an image sensor, sensing information about a captured image and a sensor characteristic, which is information on the characteristics of the image sensor.
The processing unit generates control information for controlling an object of the virtual world based on the sensing information and the sensor characteristic.
The transmission unit transmits the generated control information to the virtual world.
At this time, the operation of the virtual world may be controlled based on the received control information.
For example, assume that the image sensor captures an image of the real world and extracts at least one feature point from the captured image.
Here, the feature points may be extracted mainly from the boundary surfaces included in the captured image.
In some cases, the image sensor may extract only at least one feature point associated with the boundary of the closest object or the boundary of the largest object among the boundary surfaces included in the captured image, and may transmit sensing information including the extracted feature points.
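A minimal Python sketch of this selection rule follows. It assumes feature points arrive already grouped by the object boundary they belong to; the grouping itself, the "most points means largest object" heuristic, and all names below are illustrative assumptions, since the specification does not prescribe an extraction algorithm.

from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]

def select_largest_boundary(boundaries: Dict[str, List[Point3D]],
                            max_points: int) -> List[Point3D]:
    # Treat the boundary with the most feature points as the largest object.
    largest = max(boundaries.values(), key=len)
    # Respect the sensor's maximum detectable feature-point count.
    return largest[:max_points]

boundaries = {
    "table": [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)],
    "cup":   [(0.4, 0.4, 1.2), (0.6, 0.4, 1.2)],
}
print(select_largest_boundary(boundaries, max_points=3))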
The reception unit may receive the sensing information including the at least one feature point, together with the sensor characteristic of the image sensor.
The virtual world processing apparatus may extract the at least one feature point from the sensing information and generate control information based on the extracted feature points.
Therefore, the virtual world processing apparatus may transmit control information including a plurality of feature points to the virtual world.
In this case, the virtual world may control the virtual object based on the plurality of feature points included in the control information.
More specifically, the virtual world may represent at least one plane included in the virtual world based on the plurality of feature points, and may place the virtual object on the represented plane.
In addition, the virtual world may simultaneously display the captured image of the real world and the virtual object, thereby providing an augmented reality environment.
According to an embodiment, the sensing information and the sensor characteristic received from the image sensor may be defined as follows.
For example, the sensing information received from the image sensor may include the following two types:
- Camera sensor type
- AR camera type
Here, the AR camera type may basically include the camera sensor type. The camera sensor type may include a resource element, a camera location element, and a camera orientation element, together with a focal length attribute, an aperture attribute, a shutter speed attribute, and a filter attribute.
At this time, the resource element includes a link to the image captured by the image sensor, the camera location element includes information related to the position of the image sensor measured using a Global Positioning System (GPS) sensor, and the camera orientation element includes information related to the orientation of the image sensor.
The focal length attribute includes information related to the focal length of the image sensor, the aperture attribute includes information related to the aperture of the image sensor, the shutter speed attribute includes information related to the shutter speed of the image sensor, and the filter attribute includes information related to the filter signal processing of the image sensor. Here, the filter type may include a UV filter, a polarizing filter, a neutral density filter, a diffusion filter, a star filter, and the like.
In addition, the augmented reality camera type may further include a feature element and a camera position element.
In this case, the feature element may include a feature point related to the boundary surface in the captured image, and the camera position element may include information related to the position of the image sensor measured using a position sensor that is distinguished from the GPS sensor.
As described above, a feature point is a point generated mainly at boundary surfaces in the image captured by the image sensor, and may be used to represent a virtual object in an augmented reality environment. More specifically, a feature element including at least one feature point may be used by a scene descriptor as an element representing a plane. More details related to the operation of the scene descriptor are described later.
The camera position element may be utilized to measure the position of the image sensor in a room or a tunnel where it is difficult to measure the position using the GPS sensor.
On the other hand, the sensor characteristic received from the image sensor may include the following two types:
- Camera sensor capability type
- AR camera capability type
Here, the AR camera capability type may basically include the camera sensor capability type. The camera sensor capability type may consist of a supported resolution list element, a focal length range element, an aperture range element, and a shutter speed range element.
In this case, the supported resolution list element includes a list of resolutions supported by the image sensor, the focal length range element includes the range of focal lengths supported by the image sensor, the aperture range element includes the aperture range supported by the image sensor, and the shutter speed range element includes the range of shutter speeds supported by the image sensor.
In addition, the AR camera capability type may further include a maximum feature point element and a camera position range element.
At this time, the maximum feature point element may include the maximum number of feature points that can be detected by the image sensor, and the camera position range element may include the range of positions that can be measured by the position sensor.
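For illustration, a capability check of this kind could look like the following Python sketch, under assumed data shapes for the sensed values and the capability elements described above; the dictionary keys and function names are introduced here, not taken from the specification.

def within(value: float, value_range: tuple) -> bool:
    lo, hi = value_range
    return lo <= value <= hi

def validate(sensed: dict, capability: dict) -> list:
    # Collect every capability violation rather than stopping at the first.
    errors = []
    if sensed["resolution"] not in capability["supported_resolutions"]:
        errors.append("unsupported resolution")
    if not within(sensed["focal_length"], capability["focal_length_range"]):
        errors.append("focal length out of range")
    if len(sensed["feature_points"]) > capability["max_feature_points"]:
        errors.append("too many feature points")
    return errors

capability = {
    "supported_resolutions": [(1280, 720), (1920, 1080)],
    "focal_length_range": (10.0, 100.0),
    "max_feature_points": 100,
}
sensed = {"resolution": (1920, 1080), "focal_length": 35.0,
          "feature_points": [(0.0, 0.0, 1.0)]}
print(validate(sensed, capability) or "OK")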
Table 3 shows the XML (eXtensible Markup Language) syntax for the camera sensor type according to an embodiment.
<!-- Camera Sensor Type -->
<!-- ################################################ -->
<complexType name="CameraSensorType">
  <complexContent>
    <extension base="iidl:SensedInfoBaseType">
      <sequence>
        <element name="Resource" type="anyURI"/>
        <element name="CameraOrientation" type="siv:OrientationSensorType" minOccurs="0"/>
        <element name="CameraLocation" type="siv:GlobalPositionSensorType" minOccurs="0"/>
      </sequence>
      <attribute name="focalLength" type="float" use="optional"/>
      <attribute name="aperture" type="float" use="optional"/>
      <attribute name="shutterSpeed" type="float" use="optional"/>
      <attribute name="filter" type="mpeg7:termReferenceType" use="optional"/>
    </extension>
  </complexContent>
</complexType>
Table 4 shows semantics for a camera sensor type according to one embodiment.
Table 5 shows the XML syntax for the camera sensor capability type according to an embodiment.
<!-- Camera Sensor capability type -->
<!-- ################################################ -->
<complexType name="CameraSensorCapabilityType">
  <complexContent>
    <extension base="cidl:SensorCapabilityBaseType">
      <sequence>
        <element name="SupportedResolutions" type="scdv:ResolutionListType" minOccurs="0"/>
        <element name="FocalLengthRange" type="scdv:ValueRangeType" minOccurs="0"/>
        <element name="ApertureRange" type="scdv:ValueRangeType" minOccurs="0"/>
        <element name="ShutterSpeedRange" type="scdv:ValueRangeType" minOccurs="0"/>
      </sequence>
    </extension>
  </complexContent>
</complexType>
<complexType name="ResolutionListType">
  <sequence>
    <element name="Resolution" type="scdv:ResolutionType" maxOccurs="unbounded"/>
  </sequence>
</complexType>
<complexType name="ResolutionType">
  <sequence>
    <element name="Width" type="nonNegativeInteger"/>
    <element name="Height" type="nonNegativeInteger"/>
  </sequence>
</complexType>
<complexType name="ValueRangeType">
  <sequence>
    <element name="MaxValue" type="float"/>
    <element name="MinValue" type="float"/>
  </sequence>
</complexType>
Table 6 shows the semantics for the camera sensor capability type according to an embodiment.
NOTE The minValue and the maxValue in the SensorCapabilityBaseType are not used for this sensor.
Table 7 shows XML syntax for an augmented reality camera type according to one embodiment.
<!-- AR Camera Type -->
<!-- ################################################ -->
<complexType name="ARCameraType">
  <complexContent>
    <extension base="siv:CameraSensorType">
      <sequence>
        <element name="Feature" type="siv:FeaturePointType" minOccurs="0" maxOccurs="unbounded"/>
        <element name="CameraPosition" type="siv:PositionSensorType" minOccurs="0"/>
      </sequence>
    </extension>
  </complexContent>
</complexType>
<complexType name="FeaturePointType">
  <sequence>
    <element name="Position" type="mpegvct:Float3DVectorType"/>
  </sequence>
  <attribute name="featureID" type="ID" use="optional"/>
</complexType>
Table 8 shows semantics for the augmented reality camera type according to one embodiment.
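As a rough illustration of sensed information shaped like the AR camera type of Table 7, the following Python sketch serializes one instance using xml.etree.ElementTree. Namespace prefixes are omitted, the sample values and URL are placeholders, and the X/Y/Z child layout assumed for the Float3DVectorType position is not defined in this document.

import xml.etree.ElementTree as ET

cam = ET.Element("ARCamera", focalLength="35.0", aperture="2.8",
                 shutterSpeed="0.008", filter="UV")
# Resource: link to the captured image (placeholder URI).
ET.SubElement(cam, "Resource").text = "http://example.org/captured/frame001.jpg"

# One feature point on a boundary surface, expressed as an assumed
# X/Y/Z position vector.
feature = ET.SubElement(cam, "Feature", featureID="f1")
pos = ET.SubElement(feature, "Position")
for axis, value in zip(("X", "Y", "Z"), ("0.0", "0.0", "1.0")):
    ET.SubElement(pos, axis).text = value

print(ET.tostring(cam, encoding="unicode"))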
Table 9 shows the XML syntax for the AR camera capability type according to an embodiment.
<!-- AR Camera capability type -->
<!-- ################################################ -->
<complexType name="ARCameraCapabilityType">
  <complexContent>
    <extension base="siv:CameraSensorCapabilityType">
      <sequence>
        <element name="MaxFeaturePoint" type="nonNegativeInteger" minOccurs="0"/>
        <element name="CameraPositionRange" type="scdv:RangeType" minOccurs="0"/>
      </sequence>
    </extension>
  </complexContent>
</complexType>
Table 10 shows the semantics for the AR camera capability type according to an embodiment.
NOTE The minValue and the maxValue in the SensorCapabilityBaseType are not used for this sensor.
Table 11 shows XML syntax for a scene descriptor type according to an embodiment.
<!-- Scene Descriptor Type -->
<!-- ############################################################ -->
<complexType name="SceneDescriptorType">
  <sequence>
    <element name="image" type="anyURI"/>
  </sequence>
  <complexType name="plan">
    <sequence>
      <element name="ID" type="int32"/>
      <element name="X" type="float"/>
      <element name="Y" type="float"/>
      <element name="Z" type="float"/>
      <element name="Scalar" type="float"/>
    </sequence>
  </complexType>
  <complexType name="feature">
    <sequence>
      <element name="ID" type="int32"/>
      <element name="X" type="float"/>
      <element name="Y" type="float"/>
      <element name="N" type="float"/>
    </sequence>
  </complexType>
</complexType>
Here, the image element included in the scene descriptor type may include a plurality of pixels, and each pixel may carry the ID of a plan or the ID of a feature.
Here, the plan includes X_plan, Y_plan, Z_plan, and Scalar; referring to Equation 1, the scene descriptor may represent a plane using a plane equation defined by X_plan, Y_plan, and Z_plan.
In addition, the feature is a type corresponding to the feature element included in the sensing information, and may include X_feature, Y_feature, and Z_feature. In this case, a feature may represent a three-dimensional point (X_feature, Y_feature, Z_feature), and the scene descriptor may represent a plane using the three-dimensional points located at those coordinates.
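Assuming Equation 1 is the standard plane equation X_plan*x + Y_plan*y + Z_plan*z = Scalar (an assumption made here for illustration, since the equation itself is not reproduced above), the following Python sketch derives a plane in that form from three three-dimensional feature points.

def plane_from_points(p1, p2, p3):
    # Two in-plane direction vectors from the three points.
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    # Plane normal (X_plan, Y_plan, Z_plan) = u x v (cross product).
    nx = uy * vz - uz * vy
    ny = uz * vx - ux * vz
    nz = ux * vy - uy * vx
    # Scalar = normal dotted with any point on the plane.
    scalar = nx * p1[0] + ny * p1[1] + nz * p1[2]
    return (nx, ny, nz, scalar)

# Three feature points on the plane z = 1 yield normal (0, 0, 1), scalar 1.
print(plane_from_points((0, 0, 1), (1, 0, 1), (0, 1, 1)))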
FIG. 4 is a flowchart illustrating a virtual world processing method according to an exemplary embodiment.
Referring to FIG. 4, in the virtual world processing method according to an exemplary embodiment, sensing information about a captured image and a sensor characteristic describing the characteristics of the image sensor are received from an image sensor (410).
Control information for controlling an object of the virtual world is generated based on the sensing information and the sensor characteristic (420).
The control information is then transmitted to the virtual world (430).
At this time, the operation of the virtual world may be controlled based on the received control information. Since the details described with reference to FIGS. 1 through 3 may be applied to each step illustrated in FIG. 4, a detailed description thereof will be omitted.
The method according to the embodiments may be embodied in the form of program instructions executable by various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and constructed for the purposes of the embodiments, or they may be of the kind well known and available to those skilled in the computer software arts. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, flash memory, and the like. Examples of program instructions include not only machine code generated by a compiler but also high-level language code executable by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
Although the embodiments have been described with reference to the limited embodiments and drawings above, various modifications and variations are possible from the above description by those skilled in the art. For example, appropriate results may be achieved even if the described techniques are performed in an order different from the described method, and/or the components of the described systems, structures, devices, circuits, and the like are combined in a form different from the described method, or are replaced or substituted by other components or equivalents.
Therefore, other implementations, other embodiments, and equivalents to the claims are within the scope of the claims that follow.
Claims (10)
An apparatus for processing a virtual world, the apparatus comprising:
a reception unit configured to receive, from an image sensor, sensing information about a captured image and a sensor characteristic, which is information on the characteristics of the image sensor;
a processor configured to generate control information for controlling an object of a virtual world based on the sensing information and the sensor characteristic; and
a transmission unit configured to transmit the control information to the virtual world,
wherein the image sensor extracts at least one feature point associated with the boundary of the closest object or the boundary of the largest object among the boundary surfaces included in the captured image, and transmits the sensing information including the at least one feature point, and
wherein the processor extracts the at least one feature point from the sensing information and generates the control information based on the at least one feature point.
The virtual world processing apparatus, wherein the image sensor comprises at least one of a photographing sensor and a video photographing sensor.
The virtual world processing apparatus, wherein the sensing information includes:
a resource element comprising a link to an image captured by the image sensor;
a camera location element comprising information related to the position of the image sensor, measured using a Global Positioning System (GPS) sensor; and
a camera orientation element comprising information related to the pose of the image sensor.
The virtual world processing apparatus, wherein the sensing information includes:
a focal length attribute including information related to a focal length of the image sensor;
an aperture attribute including information related to an aperture of the image sensor;
a shutter speed attribute including information related to a shutter speed of the image sensor; and
a filter attribute including information related to filter signal processing of the image sensor.
The virtual world processing apparatus, wherein the sensing information further includes:
a feature element comprising a feature point associated with a boundary surface in the captured image; and
a camera position element comprising information related to the position of the image sensor, measured using a position sensor distinct from the GPS sensor.
The virtual world processing apparatus, wherein the sensor characteristic includes:
a supported resolution list element containing a list of resolutions supported by the image sensor;
a focal length range element including a range of focal lengths supported by the image sensor;
an aperture range element including an aperture range supported by the image sensor; and
a shutter speed range element including a range of shutter speeds supported by the image sensor.
The virtual world processing apparatus, wherein the sensor characteristic further includes:
a maximum feature point element comprising the maximum number of feature points that can be detected by the image sensor; and
a camera position range element containing the range of positions that can be measured by the position sensor.
The virtual world processing apparatus, wherein the transmission unit transmits the at least one feature point to the virtual world, and the virtual world represents at least one plane included in the virtual world based on the at least one feature point.
A method of processing a virtual world, the method comprising:
receiving, from an image sensor, sensing information about a captured image and a sensor characteristic, which is information on the characteristics of the image sensor;
generating control information for controlling an object of a virtual world based on the sensing information and the sensor characteristic; and
transmitting the control information to the virtual world,
wherein the image sensor extracts at least one feature point associated with the boundary of the closest object or the boundary of the largest object among the boundary surfaces included in the captured image, and transmits the sensing information including the at least one feature point, and
wherein the generating of the control information comprises extracting the at least one feature point from the sensing information and generating the control information based on the at least one feature point.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/934,605 US20140015931A1 (en) | 2012-07-12 | 2013-07-03 | Method and apparatus for processing virtual world |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261670825P | 2012-07-12 | 2012-07-12 | |
US61/670,825 | 2012-07-12 |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20140009913A (en) | 2014-01-23
KR102024863B1 (en) | 2019-09-24
Family
ID=50142901
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020130017404A KR102024863B1 (en) | 2013-02-19 | Method and apparatus for processing virtual world
Country Status (1)
Country | Link |
---|---|
KR (1) | KR102024863B1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001075726A (en) | 1999-08-02 | 2001-03-23 | Lucent Technol Inc | Computer input device with six degrees of freedom for controlling movement of three-dimensional object |
JP2007195091A (en) | 2006-01-23 | 2007-08-02 | Sharp Corp | Synthetic image generating system |
US20090221374A1 (en) * | 2007-11-28 | 2009-09-03 | Ailive Inc. | Method and system for controlling movements of objects in a videogame |
WO2012032996A1 | 2010-09-09 | 2012-03-15 | Sony Corporation | Information processing device, method of processing information, and program
JP2012094100A (en) * | 2010-06-02 | 2012-05-17 | Nintendo Co Ltd | Image display system, image display device and image display method |
US20120242866A1 (en) | 2011-03-22 | 2012-09-27 | Kyocera Corporation | Device, control method, and storage medium storing program |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6778171B1 (en) * | 2000-04-05 | 2004-08-17 | Eagle New Media Investments, Llc | Real world/virtual world correlation system using 3D graphics pipeline |
KR100514308B1 (en) * | 2003-12-05 | 2005-09-13 | 한국전자통신연구원 | Virtual HDR Camera for creating HDRI for virtual environment |
KR100918392B1 (en) * | 2006-12-05 | 2009-09-24 | 한국전자통신연구원 | Personal-oriented multimedia studio platform for 3D contents authoring |
US20090300144A1 (en) * | 2008-06-03 | 2009-12-03 | Sony Computer Entertainment Inc. | Hint-based streaming of auxiliary content assets for an interactive environment |
2013-02-19: Application KR1020130017404A filed in KR; patent granted as KR102024863B1 (status: active, IP Right Grant).
Also Published As
Publication number | Publication date |
---|---|
KR20140009913A (en) | 2014-01-23 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant |