WO2017107758A1 - AR display system and method applied to images or video - Google Patents

AR display system and method applied to images or video

Info

Publication number
WO2017107758A1
WO2017107758A1 (PCT/CN2016/108466; CN2016108466W)
Authority
WO
WIPO (PCT)
Prior art keywords
data
image
user
original
scene
Prior art date
Application number
PCT/CN2016/108466
Other languages
English (en)
French (fr)
Inventor
赵良华
张圣明
解长庆
Original Assignee
大连新锐天地传媒有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大连新锐天地传媒有限公司 filed Critical 大连新锐天地传媒有限公司
Priority to CN201680056530.5A priority Critical patent/CN108140263B/zh
Publication of WO2017107758A1 publication Critical patent/WO2017107758A1/zh

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 — Manipulating 3D models or images for computer graphics
    • G06T 19/006 — Mixed reality

Definitions

  • the present invention relates to the field of augmented reality, and more particularly to an AR display system and method applied to images or video.
  • pictures or videos created by the traditional imaging industry can only be viewed as two-dimensional images; they lack display methods that combine three-dimensional models, music, sound effects, and special effects, and they lack methods for displaying channel (alpha) pictures or video based on augmented reality.
  • as the graphics performance of microcomputers improves, augmented reality is in use on various platforms, including televisions, monitors, and, to some extent, handheld devices such as mobile phones and tablets.
  • three-dimensional engines are likewise increasingly applied on these platforms, especially handheld devices.
  • digital cameras and camcorders, as well as mobile phones with fairly high-grade photo and video functions, are increasingly used in daily life.
  • presenting captured photos and videos combined with a three-dimensional scene model on a handheld device using augmented-reality technology is therefore considered necessary.
  • the present invention provides a method and system for displaying custom channel pictures or videos based on augmented reality. A user can select one or more custom pictures or videos according to a preset three-dimensional scene model template and input them to the server database through the producer end; the production end then processes the raw data, after which the one or more custom pictures or videos can be combined with the three-dimensional scene model and browsed in an augmented-reality environment on a handheld device. This enhances the user's immersion when viewing pictures or videos and increases fun and interactivity.
  • the present invention also provides a method and system, composed of a producer end, a production end, a server, and a handheld device, for displaying custom channel pictures or videos based on augmented reality; it offers a process framework for distributed production with multiple users and multiple data, converts quickly, and keeps the implementation process concise.
  • the present invention provides an AR display system for images or video, comprising a producer end, a production end, a server, a storage unit, a camera, an AR processing unit, and a display terminal.
  • the producer end is used to upload original production data to the server; the original production data consists of user text information and the user's original scene. The production end is used to process the original production data into scene production data. The camera is used to capture the user image. The server is used to obtain the scene production data, to combine it with the transparent model part of the server's preset three-dimensional scene model, and to match the user image, scene production data, three-dimensional scene model, audio, and user text information. The storage unit is used to store the user picture, scene production data, three-dimensional scene model, and audio matched on the server. The AR processing unit is configured to recognize the user image and combine it with the three-dimensional scene model, scene production data, and audio in the storage unit, completing the display on the display terminal.
  • the user original scene is a user scene image.
  • the user original scene is a user scene video.
  • the scene production data is a model map image and an identifiable image.
  • the scene production data is a model map video.
  • the identifiable image is produced at the production end from the server's preset identifiable-image template and the model map image.
  • the production end processes the user original scene through Adobe Photoshop;
  • the AR processing unit is a Vuforia AR unit.
  • the production end synthesizes the identifiable image from the model map image.
  • the production end submits the identifiable image to the AR processing unit to form identifiable data.
  • the producer end acquires the processed identifiable image through the server.
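The system claims above revolve around a user data packet assembled on the server from the producer end's uploads and the production end's outputs. A minimal sketch of such a packet, with illustrative field names that are assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class UserPacket:
    """Hypothetical server-side data packet for one user (field names assumed)."""
    user_string: str                                  # unique string formed from user text info
    scene_model_id: str                               # preset 3D scene model chosen by the user
    map_images: list = field(default_factory=list)    # model map images (.png with alpha)
    map_videos: list = field(default_factory=list)    # compressed map videos (.mp4)
    marker_image: str = ""                            # identifiable image for AR recognition
    marker_data: str = ""                             # AR-toolkit data built from the marker
    audio: str = ""                                   # audio asset bound to the scene

def make_packet(name: str, phone: str, scene: str) -> UserPacket:
    # The server forms a unique string from the user's text information.
    return UserPacket(user_string=f"{name}:{phone}:{scene}", scene_model_id=scene)
```

The one-to-one storage described later then amounts to keying this packet by its unique user string.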
  • the present invention also discloses an AR display method applied to images or video, implemented as follows: the producer end uploads the original production data — user information, images, and video — to a server, where the user information includes user name, gender, selected three-dimensional scene, mobile phone number, anniversary, and remarks.
  • the user information is user text information, which forms a unique string on the server.
  • the images and video are provided by the user and captured with the user's own camera equipment.
  • the production end obtains the original image and video data through the server, processes the images, synthesizes the model map image and the identifiable image in Adobe Photoshop, and submits the identifiable image to the Vuforia AR unit to form identifiable data.
  • the model map image, the identifiable image, and the identifiable data are returned to the server by the production end.
  • the producer end obtains the processed identifiable image through the server.
  • the camera performs data matching through the server to identify the user information. Specifically, after the camera acquires the image information, it is matched against the user-information string on the server; matching means finding the identifiable-image information and then looking up the corresponding data information on the server.
  • the user-information string and the processed identifiable image, map image, three-dimensional scene model, audio, and video data are stored in the storage unit through the server in one-to-one correspondence.
  • after the camera recognizes the image data through the Vuforia AR unit, the map image, three-dimensional scene model, audio, and video data are extracted from the storage unit and displayed on the display terminal, together with interactive functions such as clicking fixed or unfixed areas and tap-and-swipe gestures.
  • recognition here means using the Vuforia AR unit to recognize the identifiable image described above, then acquiring a continuous real-time picture through the camera; the three-dimensional scene model, audio, model map picture, and model map video data matched to the identifiable image are output and displayed together on the display.
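The matching step above — the camera recognizes a marker and the server looks up the assets stored one-to-one with it — reduces to a keyed lookup. A sketch with an in-memory dictionary standing in for the storage unit; the layout and names are assumptions for illustration:

```python
# In-memory stand-in for the server's storage unit: marker id -> matched assets.
storage = {
    "marker-001": {"scene": "wedding_3d", "maps": ["photo1.png"], "audio": "song.mp3"},
}

def match_marker(marker_id: str, store: dict):
    """Return the assets stored one-to-one with a recognized marker, or None."""
    return store.get(marker_id)  # None models "no identifiable image info found"
```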
  • Step 1: the user inputs the original production material data to the server through the producer end, including one or more original picture data and/or original video data, together with user information data used for user identity verification. The specific steps are:
  • the user first inputs user information data to the server through the producer end, which creates the user's data packet in the database. The server then outputs the two-dimensional scene image templates in its database to the producer end; by previewing and selecting a template, the user chooses one or more custom original picture data and/or original video data and inputs them into the user data packet.
  • the custom original picture data used for the channel map model, an additional piece of original picture data, or the two-dimensional scene image data is designated as the marker image for augmented-reality recognition.
  • Step 2: the production end connects to the server, outputs the original picture data and/or original video data in the user data packet from the database, processes the data, and then outputs the results into the user data packet;
  • the marker image for augmented-reality recognition is input to the AR toolkit, and the image's AR-toolkit data is output into the user data packet on the server. The production end then checks the file type, format, quantity, and specification of the processed data packet; when the result is correct, the process ends; when the result is wrong, the error result is returned to the production end;
  • Case a: the data packet contains original video data and the marker image for augmented-reality recognition. The original video data is input to a video compression program, including but not limited to video programs such as QuickTime, After Effects, and Final Cut.
  • the compressed .mp4 file is output through the server into the user data packet.
  • the marker image for augmented-reality recognition in the data packet is input to the AR toolkit, and the image's AR-toolkit data is output into the user data packet on the server; the AR toolkit is provided by the AR engine.
  • the processed packet is checked for file type, format, quantity, and specification; when the result is correct, the process ends; when the result is wrong, the error result is returned to the production end.
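Case a compresses the original video to an .mp4. The patent names interactive tools such as QuickTime; as a scriptable stand-in (an assumption, not the patent's tooling), a sketch that only builds an ffmpeg command line using standard ffmpeg/x264 options:

```python
def compress_cmd(src: str, dst: str, crf: int = 23) -> list:
    """Build (but do not run) an ffmpeg command compressing `src` to an .mp4."""
    if not dst.endswith(".mp4"):
        raise ValueError("the pipeline stores compressed video as .mp4")
    # -crf trades quality for file size; 23 is ffmpeg's x264 default.
    return ["ffmpeg", "-i", src, "-c:v", "libx264", "-crf", str(crf), dst]
```

The returned list could be passed to `subprocess.run` on a machine where ffmpeg is installed.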
  • Case b: the data packet contains original picture data. The original picture data is input to an image processing program, including but not limited to programs such as Adobe Photoshop and Affinity Photo.
  • the original picture used for the channel map model is channel-separated, the non-display-area channel image is removed, and the display-area channel image is output as a .png file through the server into the user data packet.
  • the marker image for augmented reality in the data packet is input to the AR toolkit, and the image's AR-toolkit data is output into the user data packet on the server.
  • the processed packet is checked for file type, format, quantity, and specification; when the result is correct, the process ends; when the result is wrong, the error result is returned to the production end.
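Case b's channel separation — keep the display-area pixels, remove the non-display-area channel image — can be illustrated without an image library by treating pixels as (r, g, b, a) tuples, where alpha 0 marks the non-display area. This is a deliberate simplification of the Photoshop/Affinity Photo step:

```python
def strip_non_display(pixels):
    """Zero out non-display-area pixels (alpha == 0); keep display-area pixels."""
    return [(r, g, b, a) if a > 0 else (0, 0, 0, 0) for (r, g, b, a) in pixels]
```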
  • Case c: the data packet contains original picture data and two-dimensional scene image data. Both are input to an image processing program, including but not limited to programs such as Adobe Photoshop and Affinity Photo. The original picture used for the channel map model is channel-separated and the non-display-area channel image is removed; the display-area channel image and the two-dimensional scene image are merged and output as a .jpeg file, which is input to the AR toolkit, and the image's AR-toolkit data is output into the user data packet on the server.
  • in addition, the display-area channel image is output as a .png file through the server into the user data packet.
  • the processed packet is checked for file type, format, quantity, and specification; when the result is correct, the process ends; when the result is wrong, the error result is returned to the production end.
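All three cases end with the same check on the processed packet: file type, format, quantity, and specification. A sketch of such a validator that returns an error result for the production end instead of raising; the allowed extensions and the quantity limit are illustrative assumptions:

```python
# Extensions the processed packet may contain (illustrative, not from the patent).
ALLOWED = {".png", ".mp4", ".jpeg", ".mp3"}

def check_packet(files, max_files=20):
    """Check file type and quantity; return ('ok', []) or ('error', reasons)."""
    errors = []
    if len(files) > max_files:
        errors.append("too many files")
    for f in files:
        ext = "." + f.rsplit(".", 1)[-1] if "." in f else ""
        if ext not in ALLOWED:
            errors.append(f"bad type: {f}")
    return ("ok", []) if not errors else ("error", errors)
```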
  • Step 3: on the handheld device, input the user information data and connect to the server through the communication unit to verify the user information; when the result is wrong, end; when the result is correct, output the user data packet from the server, storing the three-dimensional scene model, channel map model, channel picture data, compressed video data, AR-toolkit data, identifiable-image data, and audio data to the storage unit;
  • Step 4: using the arithmetic unit, the AR engine unit retrieves a continuous real-world image from the camera; the identifiable image is placed within the camera's continuous real-world view, and the AR engine unit anchors the display position according to the identifiable image's spatial relationship in the real world; the data packet contents are output through the three-dimensional engine unit and displayed on the display unit.
  • the specific steps are: outputting the three-dimensional scene model in the data packet through the three-dimensional engine unit, mapping the picture data with the channel and/or the compressed video data on the map model with the channel, and playing the audio data by using the speaker of the device.
  • Step 5: use the interaction unit to control play, pause, skip, and stop of the data output by the three-dimensional engine.
  • the specific steps are: input a play command to the interaction unit, and the three-dimensional engine starts outputting the three-dimensional scene model, the channel picture data and/or compressed video data mapped onto the channel map model, and the audio data; input a pause command to the interaction unit,
  • and the three-dimensional engine pauses and freezes the three-dimensional scene model, the channel picture data and/or compressed video data mapped onto the channel map model, and the audio data.
  • when the data packet stores multiple items, a skip command is input to the interaction unit, and the three-dimensional engine continues to output the three-dimensional scene model and audio data while replacing the channel picture data and/or compressed video data mapped onto the channel map model; a stop command is input to the interaction unit, and the three-dimensional engine stops outputting the above content; when the identifiable image is removed from the camera's continuous real-world view, the three-dimensional engine stops inputting data.
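Step 5's play/pause/skip/stop control can be modeled as a small state machine over the clips mapped onto the channel map model. The class and method names are assumptions for illustration, not the patent's interfaces:

```python
class InteractionUnit:
    """Toy model of the interaction unit driving the 3D engine's output."""

    def __init__(self, clips):
        self.clips = clips      # picture/video data mapped onto the channel map model
        self.i = 0              # index of the currently mapped clip
        self.state = "stopped"

    def play(self):
        self.state = "playing"

    def pause(self):
        self.state = "paused"   # scene model, mapped clip, and audio freeze

    def skip(self):
        # Scene model and audio keep playing; only the mapped clip is replaced.
        self.i = (self.i + 1) % len(self.clips)

    def stop(self):
        self.state = "stopped"

    @property
    def current(self):
        return self.clips[self.i]
```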
  • the invention has the beneficial effects of providing a method and system for realizing a channel three-dimensional scene model in an augmented-reality environment: conversion is fast, user operation steps are simplified, and the overall implementation is simple, which improves the degree of integration between the original data and the three-dimensional scene model, improves immersion while browsing, and improves the effect of augmented reality.
  • the user transmits the original material data to the database through the producer end's connection to the server; production of the data is then completed by the production end, and the handheld device is used to browse, in an augmented-reality environment, the three-dimensional digital content generated from the processed raw data.
  • the present invention provides a process framework for distributed production better suited to multiple users, multiple data, and high concurrency.
  • the invention inputs user information data to the server through the producer end and creates the user's data packet in the database, which simplifies the implementation steps of the system, makes it more efficient, and reduces cost.
  • the present invention provides an implementation method, based on augmented reality, for one or more custom pictures or videos with transparent channels. Any picture or video can be combined and displayed through a preset three-dimensional model scene, and one or more pictures and/or videos can be displayed in the same three-dimensional model scene.
  • the present invention not only provides a method and system for displaying custom pictures with transparent channels based on augmented reality, but also applies to video, and gives detailed steps of the implementation process.
  • the present invention provides a display method in which one or more custom original picture data and/or original video data, after transparent-channel removal and compression, are applied to a preset three-dimensional scene model on a handheld device in an augmented-reality environment, improving the degree of integration between the original data and the three-dimensional scene model and the immersion while browsing.
  • the present invention further provides a complete method of use for the system of producer end, production end, server, and handheld device built from network communication, a three-dimensional engine, an AR toolkit, and basic device units.
  • the present invention displays an existing picture or video combined with a three-dimensional model in an augmented-reality environment, without needing to generate additional pictures or videos in real time.
  • the present invention realizes one or more pictures or video data with transparent channels in a three-dimensional scene on a handheld device through a camera, display unit, interaction unit, storage unit, arithmetic unit, three-dimensional engine unit, and AR engine unit.
  • a standard end-to-end process can be implemented through the producer end, production end, server, and handheld devices.
  • Real world: images captured from reality, for example the physical real-world scene captured with electronic techniques such as video recording.
  • Augmented reality: a technique for calculating the position and angle of a camera image in real time and adding corresponding images; its goal is to place the virtual world on screen within the real world and interact with it.
  • Producer end: a small program with network access on a computer, responsible for transmitting data to, or receiving data from, the server.
  • Video compression program: including but not limited to video editing programs such as QuickTime, After Effects, and Final Cut.
  • Image processing program: including but not limited to image editing programs such as Adobe Photoshop and Affinity Photo.
  • AR toolkit: including but not limited to augmented-reality developer kits such as Vuforia AR and EasyAR.
  • Three-dimensional engine: including but not limited to three-dimensional programs widely used on computers, especially handheld devices, such as Unity3D and Unreal Engine.
  • Three-dimensional scene model: digital resources in the three-dimensional engine composed according to the logical relationships of certain real-world scenes, including three-dimensional models, textures, animations, special effects, audio, and other components.
  • Figure 1 is a structural diagram of the system of the present invention.
  • Figure 2 is a flow chart of the producer end of the method of the present invention.
  • Figure 3 is a flow chart of the production end of the method of the present invention.
  • FIG. 4 is a flow chart of a handheld device of the method of the present invention.
  • Step 1: the user inputs the original production material data to the server through the producer end, including one or more original picture data and/or original video data, together with user information data used for user identity verification. The specific steps are:
  • the server outputs the two-dimensional scene image templates in its database to the producer end; by previewing and selecting a template, the user chooses multiple original video data and inputs them into the user data packet.
  • the original video data used for the channel map model is designated as the marker image for augmented-reality recognition;
  • Step 2: the production end connects to the server, outputs the original video data in the user data packet from the database, inputs the original video data into the QuickTime video compression program, compresses it, and outputs the resulting .mp4 file through the server into the user data packet.
  • the tag image for augmented reality recognition in the data packet is input to the AR toolkit and the AR toolkit data of the image is output to the user data package on the server, and the AR toolkit is provided by the AR engine.
  • the file type, format, quantity, and specification are checked on the processed packet, and when the result is correct, the process ends; when the result is wrong, the error result is returned to the production end.
  • Step 3: on the handheld device, input the user information data and connect to the server through the communication unit to verify the user information; when the result is wrong, end; when the result is correct, output the user data packet from the server, storing the three-dimensional scene model, channel map model, compressed video data, AR-toolkit data, identifiable-image data, and audio data to the storage unit;
  • Step 4: using the arithmetic unit, the AR engine unit retrieves a continuous real-world image from the camera; the identifiable image is placed within the camera's continuous real-world view, and the AR engine unit anchors the display position according to the identifiable image's spatial relationship in the real world; the data packet contents are output through the three-dimensional engine unit and displayed on the display unit.
  • the specific steps are: output the three-dimensional scene model in the data packet through the three-dimensional engine unit, map the compressed video data onto the channel map model, and play the audio data through the device's speaker.
  • Step 5 Use the interactive unit to control the playback, pause, skip, and stop of the data output by the 3D engine.
  • the specific steps are: input a play command to the interaction unit, and the three-dimensional engine starts outputting the three-dimensional scene model, the compressed video data mapped onto the channel map model, and the audio data; input a pause command to the interaction unit, and the three-dimensional engine pauses and freezes the three-dimensional scene model, the compressed video data mapped onto the channel map model, and the audio data.
  • when multiple compressed video data are stored in the data packet, a skip command is input to the interaction unit, and the three-dimensional engine continues to output the three-dimensional scene model and audio data while replacing the compressed video data mapped onto the channel map model; a stop command is input to the interaction unit, and the three-dimensional engine stops outputting the above content; when the recognizable image is removed from the camera's continuous real-world view, the three-dimensional engine stops inputting data.
  • Step 1: the user inputs the original production material data to the server through the producer end, including one or more original picture data and/or original video data, together with user information data used for user identity verification. The specific steps are:
  • the server outputs the two-dimensional scene image templates in its database to the producer end; by previewing and selecting a template, the user chooses multiple original picture data and inputs them into the user data packet.
  • the original picture data used for the channel map model is designated as the marker image for augmented-reality recognition;
  • Step 2: the production end connects to the server, outputs the original picture data in the user data packet from the database, and inputs the original picture data into the Affinity Photo image processing program; the original picture used for the channel map model is channel-separated, the non-display-area channel image is removed, and the display-area channel image is output as a .png file through the server into the user data packet. Then the marker image for augmented reality in the data packet is input to the AR toolkit, and the image's AR-toolkit data is output into the user data packet on the server.
  • the file type, format, quantity, and specification are checked on the processed packet, and when the result is correct, the process ends; when the result is wrong, the error result is returned to the production end.
  • Step 3: on the handheld device, input the user information data and connect to the server through the communication unit to verify the user information; when the result is wrong, end; when the result is correct, output the user data packet from the server, storing the three-dimensional scene model, channel map model, channel picture data, AR-toolkit data, identifiable-image data, and audio data to the storage unit;
  • Step 4: using the arithmetic unit, the AR engine unit retrieves a continuous real-world image from the camera; the identifiable image is placed within the camera's continuous real-world view, and the AR engine unit anchors the display position according to the identifiable image's spatial relationship in the real world; the data packet contents are output through the three-dimensional engine unit and displayed on the display unit.
  • the specific steps are: outputting the three-dimensional scene model in the data packet through the three-dimensional engine unit, mapping the picture data with the channel on the map model with the channel, and playing the audio data by using the speaker of the device.
  • Step 5 Use the interactive unit to control the playback, pause, skip, and stop of the data output by the 3D engine.
  • the specific steps are: input a play command to the interaction unit, and the three-dimensional engine starts outputting the three-dimensional scene model, the channel picture data mapped onto the channel map model, and the audio data; input a pause command to the interaction unit, and the three-dimensional engine pauses and freezes the three-dimensional scene model, the channel picture data mapped onto the channel map model, and the audio data.
  • Step 1: the user inputs the original production material data to the server through the producer end, including one or more original picture data and/or original video data, together with user information data used for user identity verification. The specific steps are:
  • the server outputs the two-dimensional scene image templates in its database to the producer end; by previewing and selecting a template, the user chooses the original picture data and two-dimensional scene image data and inputs them into the user data packet; the original picture data used for the channel map model is designated as the marker image for augmented-reality recognition;
  • Step 2: the production end connects to the server, outputs the original picture data and two-dimensional scene image data in the user data packet from the database, and inputs them into the Adobe Photoshop image processing program; the original picture used for the channel map model is channel-separated, the non-display-area channel image is removed, and the display-area channel image and the two-dimensional scene image are merged and output as a .jpeg file to the AR toolkit, and the image's AR-toolkit data is output to the server.
  • in addition, the display-area channel image is output as a .png file through the server into the user data packet.
  • the file type, format, quantity, and specification are checked on the processed packet, and when the result is correct, the process ends; when the result is wrong, the error result is returned to the production end.
  • Step 3: on the handheld device, input the user information data and connect to the server through the communication unit to verify the user information; when the result is wrong, end; when the result is correct, output the user data packet from the server, storing the three-dimensional scene model, channel map model, channel picture data, AR-toolkit data, identifiable-image data, and audio data to the storage unit;
  • Step 4: using the arithmetic unit, the AR engine unit retrieves a continuous real-world image from the camera; the identifiable image is placed within the camera's continuous real-world view, and the AR engine unit anchors the display position according to the identifiable image's spatial relationship in the real world; the data packet contents are output through the three-dimensional engine unit and displayed on the display unit.
  • the specific steps are: output the three-dimensional scene model in the data packet through the three-dimensional engine unit, map the channel picture data onto the channel map model, and play the audio data through the device's speaker.
  • Step 5 Use the interactive unit to control the playback, pause, skip, and stop of the data output by the 3D engine.
  • the specific steps are: input a play command to the interaction unit, and the three-dimensional engine starts outputting the three-dimensional scene model, the channel picture data mapped onto the channel map model, and the audio data; input a pause command to the interaction unit, and the three-dimensional engine pauses and freezes the three-dimensional scene model, the channel picture data mapped onto the channel map model, and the audio data.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

An AR display system and method applied to images or video, relating to the field of augmented reality. In the system, the producer end is used to upload original production data to a server; the original production data consists of user text information and the user's original scene. The production end is used to process the original production data into scene production data. A camera is used to capture the user image. The server is used to obtain the scene production data, to combine it with the transparent model part of a preset three-dimensional scene model, and to match the user image, scene production data, three-dimensional scene model, audio, and user text information. The AR processing unit is used to recognize the user image and to combine it with the three-dimensional scene model, scene production data, and audio in the storage unit, completing the display on the display terminal. This improves the degree of integration between the original data and the three-dimensional scene model and the immersion while browsing, improving the augmented-reality effect.

Description

AR display system and method applied to images or video — Technical field
The present invention relates to the field of augmented reality, and in particular to an AR display system and method applied to images or video.
Background
Pictures or videos created by the traditional imaging industry can only be browsed as two-dimensional images. They lack display methods combining three-dimensional models, music, sound effects, and special effects; they lack methods for displaying channel (alpha) pictures or video in an augmented-reality environment; and above all they lack a method and system for displaying pictures or video with transparent channels based on augmented reality.
Today, on the one hand, with the improved graphics performance of microcomputers, augmented reality is in use on various platforms, including televisions, monitors, and, to a certain extent, handheld devices such as mobile phones and tablets. Meanwhile, three-dimensional engines are increasingly applied on these platforms, especially handheld devices. On the other hand, digital cameras and camcorders, and to some extent mobile phones with fairly high-grade photo and video functions, are ever more widely used in daily life. It is therefore considered necessary to display captured photos and videos combined with a three-dimensional scene model on a handheld device using augmented-reality technology.
However, existing augmented-reality technology provides no display method for browsing one or more custom pictures and/or videos combined with a three-dimensional scene model, and it lacks a complete system architecture, so realizing an augmented-reality display system and method for custom pictures or videos with transparent channels on a handheld device is considered difficult.
Summary of the invention
In view of the above problems, the present invention provides a method and system for displaying custom channel pictures or videos based on augmented reality. A user can select one or more suitable custom pictures or videos according to preset three-dimensional scene model templates and input them to the server database through the producer end; the production end then processes the original data, after which the one or more custom pictures or videos can be combined with the three-dimensional scene model on a handheld device and browsed in an augmented-reality environment, improving the user's immersion when browsing pictures or videos and increasing fun and interactivity. In addition, the present invention provides a method and system, composed of a producer end, a production end, a server, and a handheld device, for displaying custom channel pictures or videos based on augmented reality; it provides a process framework for distributed production with multiple users and multiple data, converts quickly, and keeps the implementation process concise.
To achieve the above objects, in one aspect the present invention provides an AR display system applied to images or video, comprising a producer end, a production end, a server, a storage unit, a camera, an AR processing unit, and a display terminal. The producer end is used to upload original production data to the server; the original production data consists of user text information and the user's original scene. The production end is used to process the original production data into scene production data. The camera is used to capture the user image. The server is used to obtain the scene production data, to combine it with the transparent model part of a preset three-dimensional scene model, and to match the user image, scene production data, three-dimensional scene model, audio, and user text information. The storage unit is used to store the user picture, scene production data, three-dimensional scene model, and audio matched on the server. The AR processing unit is used to recognize the user image and to combine it with the three-dimensional scene model, scene production data, and audio in the storage unit, completing the display on the display terminal.
The user original scene is a user scene image.
The user original scene is a user scene video.
The scene production data is a model map image and an identifiable image.
The scene production data is a model map video.
The identifiable image is produced at the production end from the server's preset identifiable-image template and the model map image.
The production end processes the user original scene using Adobe Photoshop; the AR processing unit is a Vuforia AR unit.
The production end synthesizes the identifiable image from the model map image.
The production end submits the identifiable image to the AR processing unit to form identifiable data.
The producer end acquires the processed identifiable image through the server.
In another aspect, the present invention discloses an AR display method applied to images or video, implemented as follows. The producer end uploads the original production data (user information, images, and video) to the server; the user information includes user name, gender, selected three-dimensional scene, mobile phone number, anniversary, and remarks. The user information is user text information, which forms a unique string on the server; the images and video are provided by the user and captured with the user's own camera equipment. The production end obtains the original image and video data through the server, processes and synthesizes the model map image and the identifiable image in Adobe Photoshop, and submits the identifiable image to the Vuforia AR unit to form identifiable data. The production end returns the model map image, identifiable image, and identifiable data to the server, and the producer end acquires the processed identifiable image through the server. The camera then performs data matching through the server to identify the user information: after acquiring the image information, the camera matches it against the user-information string on the server, where matching means looking up the corresponding data information on the server once the identifiable-image information has been obtained. The user-information string and the processed identifiable image, map image, three-dimensional scene model, audio, and video data are stored in the storage unit through the server in one-to-one correspondence. After the camera recognizes the image data through the Vuforia AR unit, the map image, three-dimensional scene model, audio, and video data are extracted from the storage unit and displayed on the display terminal, together with interactive functions such as clicking fixed or unfixed areas and tap-and-swipe gestures. Recognition here means using the Vuforia AR unit to recognize the identifiable image described above, then acquiring a continuous real-time picture through the camera; the three-dimensional scene model, audio, model map picture, and model map video data matched to the identifiable image are output and displayed together on the display.
The specific implementation steps are:

Step 1: through the production terminal, the user inputs original production material data into the server, including one or more pieces of original picture data and/or original video data, plus the user information data used for identity verification. Specific steps:

1) input the user information data into the server through the production terminal and create the user's data package in the database;

2) the server outputs the two-dimensional scene image templates in the database to the production terminal; the user previews and selects a template, then selects one or more favourite pieces of custom original picture data and/or original video data and inputs them into the user's data package; the custom original picture data used for the channel texture model, an additional piece of original picture data, or the two-dimensional scene image data is designated as the marker image for augmented-reality recognition;
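Step 1's per-user package creation can be sketched as follows; the in-memory dict stands in for the server database, and all field names are illustrative assumptions rather than the patent's schema:

```python
# Sketch of step 1: the production terminal submits user information and the
# server creates that user's data package in the database. An in-memory dict
# stands in for the database; the field names are illustrative.

def create_user_package(database, user_info):
    """Create an empty data package keyed by the user, holding the text
    information plus slots for the original pictures/videos added later."""
    key = user_info["phone"]          # any unique field works for the sketch
    database[key] = {
        "user_info": user_info,
        "original_pictures": [],
        "original_videos": [],
        "marker_image": None,         # designated later in sub-step 2)
    }
    return database[key]

db = {}
package = create_user_package(db, {
    "name": "Li", "gender": "F", "scene": "birthday",
    "phone": "13800000000", "anniversary": "2016-01-01", "notes": "",
})
package["original_videos"].append("clip01.mov")
```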
Step 2: the making terminal connects to the server and exports the original picture data and/or original video data from the user's data package in the database; the making terminal processes the data to complete production and then outputs the result into the user's data package. The marker image for augmented-reality recognition in the package is input into the AR toolkit, and the toolkit data for that image is output into the user's data package on the server. The making terminal checks the processed data package for file type, format, quantity and specification; when the result is correct, the procedure ends; when the result is wrong, the error is returned to the making terminal.
The specific steps fall into three cases.

Case a: the data package contains only original video data and the marker image for augmented-reality recognition. The original video data is input into a video-compression program, including but not limited to video-processing programs such as QuickTime, After Effects and Final Cut. The .mp4 file obtained by compressing the original video data is output into the user's data package through the server. Then the marker image for augmented-reality recognition in the package is input into the AR toolkit, which is supplied by the AR engine, and the toolkit data for that image is output into the user's data package on the server. The making terminal checks the processed data package for file type, format, quantity and specification; when the result is correct, the procedure ends; when the result is wrong, the error is returned to the making terminal.
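The patent leaves the choice of compression program open (QuickTime, After Effects, Final Cut or similar). As a hedged illustration only, the sketch below builds the command line for ffmpeg — a stand-in tool not named in the patent — that would produce the .mp4 file stored in the data package; it constructs the argv list without executing it:

```python
# Illustrative only: ffmpeg is our stand-in for the video-compression program
# of case a. The function merely builds the command line that would transcode
# an original video into the .mp4 stored in the user's data package.

def build_compress_command(src_path, dst_path, crf=23):
    """Return an argv list that transcodes src to an H.264 .mp4 at quality crf."""
    if not dst_path.endswith(".mp4"):
        raise ValueError("case a stores the compressed video as .mp4")
    return [
        "ffmpeg", "-i", src_path,
        "-c:v", "libx264", "-crf", str(crf),   # video: H.264 at constant quality
        "-c:a", "aac",                          # audio: AAC
        dst_path,
    ]

cmd = build_compress_command("clip01.mov", "clip01.mp4")
```

In a real making terminal this list would be passed to `subprocess.run`, and the resulting file uploaded into the user's package on the server.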
Case b: the data package contains only original picture data. The original picture data is input into an image-processing program, including but not limited to programs such as Adobe Photoshop and Affinity Photo. Channel separation is performed on the original picture used for the channel texture model: the channel image of the non-display region is removed, and the channel image of the display region is output as a .png file into the user's data package through the server. Then the marker image for augmented-reality recognition in the package is input into the AR toolkit, and the toolkit data for that image is output into the user's data package on the server. The making terminal checks the processed data package for file type, format, quantity and specification; when the result is correct, the procedure ends; when the result is wrong, the error is returned to the making terminal.
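Case b's channel separation — keeping the display region and removing the non-display region by way of the alpha channel — can be illustrated on raw RGBA tuples. The patent performs this step in an image editor such as Photoshop or Affinity Photo; plain tuples stand in for real image data in this sketch:

```python
# Sketch of case b's channel separation on raw RGBA pixels: pixels whose
# alpha marks them as non-display region are cleared, and only the display
# region survives, as in a .png with the non-display region removed.

def separate_display_region(pixels, threshold=0):
    """Keep pixels whose alpha exceeds threshold; clear the rest to fully
    transparent black."""
    out = []
    for r, g, b, a in pixels:
        if a > threshold:
            out.append((r, g, b, a))      # display region: kept as-is
        else:
            out.append((0, 0, 0, 0))      # non-display region: removed
    return out

src = [(255, 0, 0, 255), (0, 255, 0, 0), (0, 0, 255, 128)]
dst = separate_display_region(src)
```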
Case c: the data package contains only original picture data and two-dimensional scene image data. Both are input into an image-processing program, including but not limited to programs such as Adobe Photoshop and Affinity Photo. Channel separation is performed on the original picture used for the channel texture model: the channel image of the non-display region is removed; the channel image of the display region is merged with the two-dimensional scene image and output as a .jpeg file, which is input into the AR toolkit so that the toolkit data for that image is output into the user's data package on the server; in addition, the channel image of the display region is output as a .png file into the user's data package through the server. The making terminal checks the processed data package for file type, format, quantity and specification; when the result is correct, the procedure ends; when the result is wrong, the error is returned to the making terminal.
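Case c's merge of the display-region image with the two-dimensional scene image into an opaque .jpeg (which has no alpha channel) amounts to standard 'over' alpha blending. The sketch below illustrates this on plain pixel tuples, standing in for what the patent does inside an image editor:

```python
# Sketch of case c: the display-region image is composited over the 2D scene
# image into an opaque result (a .jpeg has no alpha channel), using standard
# "over" alpha blending. Plain RGB(A) tuples stand in for real images.

def merge_over_scene(fg_pixels, bg_pixels):
    """Composite RGBA foreground over RGB background, returning opaque RGB."""
    merged = []
    for (fr, fgc, fb, fa), (br, bgc, bb) in zip(fg_pixels, bg_pixels):
        a = fa / 255.0
        merged.append((
            round(fr * a + br * (1 - a)),
            round(fgc * a + bgc * (1 - a)),
            round(fb * a + bb * (1 - a)),
        ))
    return merged

fg = [(255, 0, 0, 255), (0, 0, 0, 0)]        # opaque red, fully transparent
bg = [(10, 10, 10), (20, 20, 20)]
out = merge_over_scene(fg, bg)
```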
Step 3: on the handheld device, the user information data is input and verified against the server through the communication unit. When the result is wrong, the procedure ends; when the result is correct, the server outputs the user's data package: the three-dimensional scene model, the channel texture model, the channel picture data, the compressed video data, the AR toolkit data, the recognizable-image data and the audio data are stored in the storage unit;

Step 4: the computing unit, through the AR engine unit, invokes the camera to capture a continuous image of the real world; the recognizable image is placed within the range of the camera's continuous real-world image; the AR engine unit anchors the display position according to the spatial relationship of the recognizable image in the real world; and the 3D engine unit outputs the contents of the data package for display on the display unit.

Specifically: the 3D engine unit outputs the three-dimensional scene model in the data package, textures the channel picture data and/or the compressed video data onto the channel texture model, and plays the audio data through the device's loudspeaker.

Step 5: the interaction unit controls play, pause, skip and stop of the data output by the 3D engine. Specifically: when a play command is input to the interaction unit, the 3D engine starts outputting the three-dimensional scene model, the channel picture data and/or compressed video data textured onto the channel texture model, and the audio data; when a pause command is input to the interaction unit, the 3D engine pauses and freezes these outputs. When the data package stores multiple pieces of channel original picture data and/or compressed video data, a skip command input to the interaction unit makes the 3D engine continue outputting the three-dimensional scene model and audio data while replacing the channel picture data and/or compressed video data textured onto the channel texture model; a stop command input to the interaction unit makes the 3D engine stop outputting the above content; and when the recognizable image is moved out of the range of the camera's continuous real-world image, the 3D engine stops receiving data.
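Step 5's command handling can be sketched as a small state machine. The class below is an illustrative reconstruction of the described behaviour (skip replaces only the texture while the scene model and audio continue), not code from the patent:

```python
# Sketch of step 5's interaction control: a state machine over the
# play / pause / skip / stop commands, where skip cycles through the multiple
# channel pictures or compressed videos stored in the data package.

class PlaybackController:
    def __init__(self, textures):
        self.textures = list(textures)   # channel pictures / compressed videos
        self.index = 0
        self.state = "stopped"

    def play(self):
        self.state = "playing"

    def pause(self):
        if self.state == "playing":
            self.state = "paused"        # scene model and audio freeze too

    def skip(self):
        # scene model and audio keep playing; only the texture is replaced
        if self.state == "playing" and len(self.textures) > 1:
            self.index = (self.index + 1) % len(self.textures)

    def stop(self):                      # also triggered when the marker
        self.state = "stopped"           # image leaves the camera's view
        self.index = 0

    @property
    def current_texture(self):
        return self.textures[self.index]

ctrl = PlaybackController(["photo1.png", "photo2.png", "clip.mp4"])
ctrl.play()
ctrl.skip()
```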
Beneficial effects of the invention: it provides a method and system for realizing channel-bearing three-dimensional scene models in an augmented-reality environment, with fast conversion, simplified user operation and a simple overall implementation. It improves the blending of the original data with the three-dimensional scene model, strengthens the sense of immersion while browsing, and improves the augmented-reality effect.

The user connects to the server through the production terminal to transmit the original material data to the database; the making terminal then completes data production; finally, the handheld device is used to browse, in an augmented-reality environment, the three-dimensional digital content generated from the processed original data. Further explanation of the invention:

(1) The invention supplies a workflow framework for distributed production that better supports multiple users, multiple data items and high concurrency. For multiple users, the invention inputs user information data into the server through the production terminal and creates each user's data package in the database, simplifying the system's implementation steps, raising efficiency and lowering cost. For multiple data items, the invention provides an implementation for displaying, once or repeatedly, custom pictures or videos with transparent channels based on augmented reality. Any picture or video can be displayed in combination through the preset three-dimensional model scene, and one or more pictures and/or videos can be displayed in the same three-dimensional model scene.

(2) The invention not only provides a method and system for displaying custom pictures with transparent channels based on augmented reality, but also applies to videos, and gives the detailed steps of the implementation process.

(3) The invention provides handheld devices with a picture and/or video display method in which one or more pieces of custom original picture data and/or original video data, after transparent-channel removal and compression, are combined with a preset three-dimensional scene model in an augmented-reality environment, improving the blending of the original data with the three-dimensional scene model and the sense of immersion while browsing.

(4) The invention, based on network communication, builds a system of production terminal, making terminal, server and handheld device from a 3D engine, an AR toolkit and the device's basic units, and further provides methods for perfecting it in use.

(5) The invention displays existing pictures or videos, after processing, combined with three-dimensional models in an augmented-reality environment, without needing to generate additional pictures or videos on the fly.

(6) On the handheld device, the invention realizes, through the camera, display unit, interaction unit, storage unit, computing unit, 3D engine unit and AR engine unit, a method for displaying one or more pictures or videos with transparent channels in a three-dimensional scene; through the production terminal, making terminal, server and handheld device, a standard end-to-end and end-to-device workflow system can be realized.
Explanation of terms:

Real world: images taken from reality, for example a physical real-world situation recorded with electronic image-capture technology such as video recording.

Augmented reality: a technology that computes the position and angle of the camera image in real time and overlays corresponding imagery; its goal is to fit the virtual world over the real environment on screen and interact with it.

Production terminal, making terminal: small programs with network access on a computer, responsible for transmitting data to or obtaining data from the server.

Video-compression program: including but not limited to video-editing programs such as QuickTime, After Effects and Final Cut.

Image-processing program: including but not limited to image-editing programs such as Adobe Photoshop and Affinity Photo.

AR toolkit: including but not limited to augmented-reality developer toolkits such as Vuforia AR and EasyAR.

3D engine: including but not limited to 3D programs such as Unity3D and Unreal Engine that are widely used on computers and especially on handheld devices.

Three-dimensional scene model: a data package of digital resources in a 3D engine organized by the logic of a real-world scene, including elements such as 3D models, textures, animations, special effects and audio.
Brief Description of the Drawings

Fig. 1 is the system structure diagram of the present invention;

Fig. 2 is the production-terminal flowchart of the method of the present invention;

Fig. 3 is the making-terminal flowchart of the method of the present invention;

Fig. 4 is the handheld-device flowchart of the method of the present invention.
Detailed Description of the Embodiments

Embodiment 1

Step 1: through the production terminal, the user inputs original production material data into the server, including one or more pieces of original picture data and/or original video data, plus the user information data used for identity verification. Specific steps:

1) input the user information data into the server through the production terminal and create the user's data package in the database, specifically the user's name, gender, chosen three-dimensional scene, mobile-phone number, anniversary and remarks; the user text information forms a unique user character string on the server;

2) the server outputs the two-dimensional scene image templates in the database to the production terminal; the user previews and selects a template, then selects several favourite pieces of original video data and inputs them into the user's data package; the original video data used for the channel texture model is designated as the marker image for augmented-reality recognition;
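How the user text information forms a unique string on the server is not specified in the patent; hashing the concatenated fields is one plausible choice, shown below purely as an illustration:

```python
# Illustrative sketch of how the user text information could form a unique
# string on the server. The patent does not specify the algorithm; hashing
# the concatenated, sorted fields is one deterministic possibility.
import hashlib

def user_string(user_info):
    """Derive a deterministic identifier from the user's text information."""
    joined = "|".join(str(user_info[k]) for k in sorted(user_info))
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()[:16]

info = {"name": "Li", "gender": "F", "scene": "birthday",
        "phone": "13800000000", "anniversary": "2016-01-01", "notes": ""}
uid = user_string(info)
```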
Step 2: the making terminal connects to the server, exports the original video data from the user's data package in the database, and inputs it into the QuickTime video-compression program; the .mp4 file obtained by compressing the original video data is output into the user's data package through the server. Then the marker image for augmented-reality recognition in the package is input into the AR toolkit, which is supplied by the AR engine, and the toolkit data for that image is output into the user's data package on the server. The making terminal checks the processed data package for file type, format, quantity and specification; when the result is correct, the procedure ends; when the result is wrong, the error is returned to the making terminal.
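The making terminal's final check of the processed package (file type, format, quantity, specification) can be sketched as below; the concrete rule set is an assumption for illustration, since the patent only names the categories being checked:

```python
# Sketch of the making terminal's package check described above: verify the
# file types and quantity of a processed data package before returning it to
# the server. The rule set here is illustrative, not from the patent.

def check_package(files, expected_ext, min_count=1, max_count=10):
    """Return (True, "") when the package passes, else (False, reason)."""
    if not (min_count <= len(files) <= max_count):
        return False, f"expected {min_count}-{max_count} files, got {len(files)}"
    for name in files:
        if not name.lower().endswith(expected_ext):
            return False, f"wrong file type: {name}"
    return True, ""

ok, err = check_package(["clip01.mp4", "clip02.mp4"], ".mp4")
bad, reason = check_package(["clip01.mov"], ".mp4")
```

On a failing check the error reason would be returned to the making terminal, matching the "return the error result" branch in step 2.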
Step 3: on the handheld device, the user information data is input and verified against the server through the communication unit. When the result is wrong, the procedure ends; when the result is correct, the server outputs the user's data package: the three-dimensional scene model, the channel texture model, the compressed video data, the AR toolkit data, the recognizable-image data and the audio data are stored in the storage unit;

Step 4: the computing unit, through the AR engine unit, invokes the camera to capture a continuous image of the real world; the recognizable image is placed within the range of the camera's continuous real-world image; the AR engine unit anchors the display position according to the spatial relationship of the recognizable image in the real world; and the 3D engine unit outputs the contents of the data package for display on the display unit.

Specifically: the 3D engine unit outputs the three-dimensional scene model in the data package, textures the compressed video data onto the channel texture model, and plays the audio data through the device's loudspeaker.

Step 5: the interaction unit controls play, pause, skip and stop of the data output by the 3D engine. Specifically: when a play command is input to the interaction unit, the 3D engine starts outputting the three-dimensional scene model, the compressed video data textured onto the channel texture model, and the audio data; when a pause command is input to the interaction unit, the 3D engine pauses and freezes these outputs. When the data package stores multiple pieces of compressed video data, a skip command input to the interaction unit makes the 3D engine continue outputting the three-dimensional scene model and audio data while replacing the compressed video data textured onto the channel texture model; a stop command input to the interaction unit makes the 3D engine stop outputting the above content; and when the recognizable image is moved out of the range of the camera's continuous real-world image, the 3D engine stops receiving data.
Embodiment 2

Step 1: through the production terminal, the user inputs original production material data into the server, including one or more pieces of original picture data and/or original video data, plus the user information data used for identity verification. Specific steps:

1) input the user information data into the server through the production terminal and create the user's data package in the database, specifically the user's name, gender, chosen three-dimensional scene, mobile-phone number, anniversary and remarks; the user text information forms a unique user character string on the server;

2) the server outputs the two-dimensional scene image templates in the database to the production terminal; the user previews and selects a template, then selects several favourite pieces of original picture data and inputs them into the user's data package; the original picture data used for the channel texture model is designated as the marker image for augmented-reality recognition;

Step 2: the making terminal connects to the server, exports the original picture data from the user's data package in the database, and inputs it into the Affinity Photo image-processing program; channel separation is performed on the original picture used for the channel texture model, the channel image of the non-display region is removed, and the channel image of the display region is output as a .png file into the user's data package through the server. Then the marker image for augmented-reality recognition in the package is input into the AR toolkit and the toolkit data for that image is output into the user's data package on the server. The making terminal checks the processed data package for file type, format, quantity and specification; when the result is correct, the procedure ends; when the result is wrong, the error is returned to the making terminal.

Step 3: on the handheld device, the user information data is input and verified against the server through the communication unit. When the result is wrong, the procedure ends; when the result is correct, the server outputs the user's data package: the three-dimensional scene model, the channel texture model, the channel picture data, the AR toolkit data, the recognizable-image data and the audio data are stored in the storage unit;

Step 4: the computing unit, through the AR engine unit, invokes the camera to capture a continuous image of the real world; the recognizable image is placed within the range of the camera's continuous real-world image; the AR engine unit anchors the display position according to the spatial relationship of the recognizable image in the real world; and the 3D engine unit outputs the contents of the data package for display on the display unit.

Specifically: the 3D engine unit outputs the three-dimensional scene model in the data package, textures the channel picture data onto the channel texture model, and plays the audio data through the device's loudspeaker.

Step 5: the interaction unit controls play, pause, skip and stop of the data output by the 3D engine. Specifically: when a play command is input to the interaction unit, the 3D engine starts outputting the three-dimensional scene model, the channel picture data textured onto the channel texture model, and the audio data; when a pause command is input to the interaction unit, the 3D engine pauses and freezes these outputs. When the data package stores multiple pieces of channel original picture data, a skip command input to the interaction unit makes the 3D engine continue outputting the three-dimensional scene model and audio data while replacing the channel picture data textured onto the channel texture model; a stop command input to the interaction unit makes the 3D engine stop outputting the above content; and when the recognizable image is moved out of the range of the camera's continuous real-world image, the 3D engine stops receiving data.
Embodiment 3

Step 1: through the production terminal, the user inputs original production material data into the server, including one or more pieces of original picture data and/or original video data, plus the user information data used for identity verification. Specific steps:

1) input the user information data into the server through the production terminal and create the user's data package in the database, specifically the user's name, gender, chosen three-dimensional scene, mobile-phone number, anniversary and remarks; the user text information forms a unique user character string on the server;

2) the server outputs the two-dimensional scene image templates in the database to the production terminal; the user previews and selects a template, then selects favourite original picture data and two-dimensional scene image data and inputs them into the user's data package; the original picture data used for the channel texture model is designated as the marker image for augmented-reality recognition;

Step 2: the making terminal connects to the server, exports the original picture data and two-dimensional scene image data from the user's data package in the database, and inputs both into the Adobe Photoshop image-processing program; channel separation is performed on the original picture used for the channel texture model, and the channel image of the non-display region is removed; the channel image of the display region is merged with the two-dimensional scene image and output as a .jpeg file, which is input into the AR toolkit so that the toolkit data for that image is output into the user's data package on the server; in addition, the channel image of the display region is output as a .png file into the user's data package through the server. The making terminal checks the processed data package for file type, format, quantity and specification; when the result is correct, the procedure ends; when the result is wrong, the error is returned to the making terminal.

Step 3: on the handheld device, the user information data is input and verified against the server through the communication unit. When the result is wrong, the procedure ends; when the result is correct, the server outputs the user's data package: the three-dimensional scene model, the channel texture model, the channel picture data, the AR toolkit data, the recognizable-image data and the audio data are stored in the storage unit;

Step 4: the computing unit, through the AR engine unit, invokes the camera to capture a continuous image of the real world; the recognizable image is placed within the range of the camera's continuous real-world image; the AR engine unit anchors the display position according to the spatial relationship of the recognizable image in the real world; and the 3D engine unit outputs the contents of the data package for display on the display unit.

Specifically: the 3D engine unit outputs the three-dimensional scene model in the data package, textures the channel picture data onto the channel texture model, and plays the audio data through the device's loudspeaker.

Step 5: the interaction unit controls play, pause, skip and stop of the data output by the 3D engine. Specifically: when a play command is input to the interaction unit, the 3D engine starts outputting the three-dimensional scene model, the channel picture data textured onto the channel texture model, and the audio data; when a pause command is input to the interaction unit, the 3D engine pauses and freezes these outputs. When the data package stores multiple pieces of channel original picture data, a skip command input to the interaction unit makes the 3D engine continue outputting the three-dimensional scene model and audio data while replacing the channel picture data textured onto the channel texture model; a stop command input to the interaction unit makes the 3D engine stop outputting the above content; and when the recognizable image is moved out of the range of the camera's continuous real-world image, the 3D engine stops receiving data.

Claims (18)

  1. An AR display system applied to images or videos, characterized in that it comprises a production terminal, a making terminal, a server, a storage unit, a camera, an AR processing unit and a display terminal; the production terminal is used to upload original production data to the server; the original production data consists of user text information and the user's original scene; the making terminal is used to process the original production data and synthesize scene production data; the camera is used to capture the user image; the server is used to obtain the scene production data, to combine the scene production data with the transparent-model portion of a three-dimensional scene model preset on the server, and to match the user image, the scene production data, the three-dimensional scene model, the audio and the user text information; the storage unit is used to store the user picture, scene production data, three-dimensional scene model and audio matched on the server; the AR processing unit is used to recognize the user image and to combine the user image with the three-dimensional scene model, scene production data and audio in the storage unit, completing display on the display terminal.

  2. The AR display system applied to images or videos according to claim 1, characterized in that the user's original scene is a user scene image.

  3. The AR display system applied to images or videos according to claim 1, characterized in that the user's original scene is a user scene video.

  4. The AR display system applied to images or videos according to claim 2, characterized in that the scene production data are a model-texture image and a recognizable image.

  5. The AR display system applied to images or videos according to claim 2, characterized in that the scene production data are a model-texture video.

  6. The AR display system applied to images or videos according to claim 4, characterized in that the recognizable image is produced by the making terminal from a recognizable-image template preset on the server and the model-texture image.

  7. The AR display system applied to images or videos according to claim 1, characterized in that the making terminal processes the user's original scene with Adobe Photoshop, and the AR processing unit is a Vuforia AR unit.

  8. The AR display system applied to images or videos according to claim 4, characterized in that the model-texture image is synthesized into the recognizable image by the making terminal.

  9. The AR display system applied to images or videos according to claim 4, characterized in that the making terminal submits the recognizable image to the AR processing unit to form recognizable data.

  10. The AR display system applied to images or videos according to claim 4, characterized in that the production terminal obtains the processed recognizable image through the server.

  11. An AR display method applied to images or videos, characterized in that:

    Step 1: through the production terminal, the user inputs original production material data into the server, including one or more pieces of original picture data and/or original video data, plus the user information data used for identity verification; specific steps:

    1) input the user information data into the server through the production terminal and create the user's data package in the database;

    2) the server outputs the two-dimensional scene image templates in the database to the production terminal; the user previews and selects a template, then selects one or more favourite pieces of custom original picture data and/or original video data and inputs them into the user's data package; the custom original picture data used for the channel texture model, an additional piece of original picture data, or the two-dimensional scene image data is designated as the marker image for augmented-reality recognition;

    Step 2: the making terminal connects to the server, exports the original picture data and/or original video data from the user's data package in the database, processes the data to complete production, and then outputs the processed data into the user's data package; the marker image for augmented-reality recognition in the package is input into the AR toolkit and the toolkit data for that image is output into the user's data package on the server; the making terminal checks the processed data package for file type, format, quantity and specification; when the result is correct, the procedure ends; when the result is wrong, the error is returned to the making terminal;

    Step 3: on the handheld device, the user information data is input and verified against the server through the communication unit; when the result is wrong, the procedure ends; when the result is correct, the server outputs the user's data package: the three-dimensional scene model, the channel texture model, the channel picture data, the compressed video data, the AR toolkit data, the recognizable-image data and the audio data are stored in the storage unit;

    Step 4: the computing unit, through the AR engine unit, invokes the camera to capture a continuous image of the real world; the recognizable image is placed within the range of the camera's continuous real-world image; the AR engine unit anchors the display position according to the spatial relationship of the recognizable image in the real world; and the 3D engine unit outputs the contents of the data package for display on the display unit.

  12. The AR display method applied to images or videos according to claim 11, characterized in that in step 2, when the data package contains only original video data and the marker image for augmented-reality recognition, the process of completing production is: input the original video data into a video-compression program, and output the .mp4 file obtained by compressing the original video data into the user's data package through the server.

  13. The AR display method applied to images or videos according to claim 11, characterized in that in step 2, when the data package contains only original picture data, the process of completing production is: input the original picture data into an image-processing program, perform channel separation on the original picture used for the channel texture model, remove the channel image of the non-display region, and output the channel image of the display region as a .png file into the user's data package through the server.

  14. The AR display method applied to images or videos according to claim 11, characterized in that in step 2, when the data package contains only original picture data and two-dimensional scene image data, the process of completing production is: input the original picture data and the two-dimensional scene image data into an image-processing program, perform channel separation on the original picture used for the channel texture model, remove the channel image of the non-display region, merge the channel image of the display region with the two-dimensional scene image and output it as a .jpeg file input into the AR toolkit, the toolkit data for that image being output into the user's data package on the server; in addition, output the channel image of the display region as a .png file into the user's data package through the server.

  15. The AR display method applied to images or videos according to claim 11, characterized in that step 4 specifically comprises: the 3D engine unit outputs the three-dimensional scene model in the data package, textures the channel picture data and/or compressed video data onto the channel texture model, and plays the audio data through the device's loudspeaker.

  16. The AR display method applied to images or videos according to claim 11, characterized in that the method further comprises step 5: using the interaction unit to control play, pause, skip and stop of the data output by the 3D engine.

  17. The AR display method applied to images or videos according to claim 16, characterized in that step 5 specifically comprises: when a play command is input to the interaction unit, the 3D engine starts outputting the three-dimensional scene model, the channel picture data and/or compressed video data textured onto the channel texture model, and the audio data; when a pause command is input to the interaction unit, the 3D engine pauses and freezes the three-dimensional scene model, the channel picture data and/or compressed video data textured onto the channel texture model, and the audio data.

  18. The AR display method applied to images or videos according to claim 17, characterized in that when the data package stores multiple pieces of channel original picture data and/or compressed video data: when a skip command is input to the interaction unit, the 3D engine continues outputting the three-dimensional scene model and audio data while replacing the channel picture data and/or compressed video data textured onto the channel texture model; when a stop command is input to the interaction unit, the 3D engine stops outputting the above content; and when the recognizable image is moved out of the range of the camera's continuous real-world image, the 3D engine stops receiving data.
PCT/CN2016/108466 2015-12-21 2016-12-03 AR display system and method applied to images or videos WO2017107758A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201680056530.5A 2015-12-21 2016-12-03 AR display system and method applied to images or videos

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510959624.0A 2015-12-21 2015-12-21 AR display system applied to images or videos
CN201510959624.0 2015-12-21

Publications (1)

Publication Number Publication Date
WO2017107758A1 (zh)






Also Published As

Publication number Publication date
CN105608745B (zh) 2019-01-29
CN108140263A (zh) 2018-06-08
CN108140263B (zh) 2021-04-27
CN105608745A (zh) 2016-05-25


Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16877569; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 16877569; Country of ref document: EP; Kind code of ref document: A1)