CN108140263B - AR display system and method applied to image or video


Info

Publication number
CN108140263B
CN108140263B (application number CN201680056530.5A)
Authority
CN
China
Prior art keywords
data
image
user
original
server
Prior art date
Legal status
Active
Application number
CN201680056530.5A
Other languages
Chinese (zh)
Other versions
CN108140263A (en)
Inventor
赵良华
张圣明
解长庆
Current Assignee
Dalian saide Boqiang Culture Technology Co.,Ltd.
Original Assignee
Dalian Saide Boqiang Culture Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Dalian Saide Boqiang Culture Technology Co ltd
Publication of CN108140263A
Application granted
Publication of CN108140263B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

An AR display system and method applied to images or videos relate to the field of augmented reality. A production end of the system uploads original production data to a server; the original production data consists of user character information and an original user scene. A making end processes the original production data and synthesizes it into scene-making data. A camera acquires a user image. The server acquires the scene-making data, combines it with the transparent model part of a three-dimensional scene model preset on the server, and matches the user image, the scene-making data, the three-dimensional scene model, the audio and the user character information. An AR processing unit recognizes the user image, combines it with the three-dimensional scene model, the scene-making data and the audio held in the storage unit, and completes the display on the display terminal. The system improves the degree of fusion between the original data and the three-dimensional scene model and the sense of immersion while browsing, and so improves the augmented reality effect.

Description

AR display system and method applied to image or video
Technical Field
The present invention relates to the field of augmented reality, and more particularly, to an AR display system and method applied to an image or video.
Background
Pictures or videos created in the traditional imaging industry can only be browsed as two-dimensional images. There is no display method that combines them with three-dimensional models, music, sound effects and special effects, no method for displaying pictures or videos with channels in an augmented-reality environment, and no method or system for displaying pictures or videos with transparent channels based on augmented reality.
Nowadays, on the one hand, the graphics processing performance of microcomputers has improved and augmented reality is being used on many platforms, including televisions, monitors and, to some extent, handheld devices such as mobile phones and tablet computers; three-dimensional engines are likewise increasingly used on these platforms, especially handheld devices. On the other hand, digital cameras, digital video cameras and mobile phones with fairly capable photographing and video functions are increasingly used in daily life. It is therefore considered necessary to enable the photos and videos that users take to be presented on a handheld device in combination with a three-dimensional scene model using augmented reality technology.
However, the existing augmented reality technology cannot provide a presentation method in which one or more customized pictures and/or videos are browsed in combination with a three-dimensional scene model, and it lacks a complete system framework; implementing an augmented-reality presentation system and method for customized pictures or videos with transparent channels on a handheld device is therefore considered difficult.
Disclosure of Invention
In order to solve these problems, the invention provides a method and a system for displaying user-defined pictures or videos with a channel based on augmented reality. A user selects one or more suitable user-defined pictures or videos according to a preset three-dimensional scene model template; the pictures or videos are input into the server database through the production end, and the original data are then processed by the making end, so that the one or more user-defined pictures or videos can be combined with the three-dimensional scene model on a handheld device and browsed in an augmented-reality environment. This improves the user's immersion when browsing the pictures or videos and improves interest and interactivity. In addition, the method and system, composed of a production end, a making end, a server and a handheld device, provide a process framework for multi-user, multi-data distributed production, with a fast conversion speed and a simpler implementation process.
In order to achieve the above object, in one aspect the invention provides an AR display system applied to images or videos, comprising a production end, a making end, a server, a storage unit, a camera, an AR processing unit and a display terminal. The production end is configured to upload original production data to the server; the original production data consists of user character information and an original user scene. The making end is used for processing the original production data and synthesizing it into scene-making data. The camera is used for acquiring a user image. The server is used for acquiring the scene-making data, combining it with the transparent model part of a three-dimensional scene model preset on the server, and matching the user image, the scene-making data, the three-dimensional scene model, the audio and the user character information. The storage unit is used for storing the matched user image, scene-making data, three-dimensional scene model and audio on the server. The AR processing unit is used for recognizing the user image, combining it with the three-dimensional scene model, the scene-making data and the audio in the storage unit, and completing the display on the display terminal.
The original scene of the user is a scene image of the user.
The original scene of the user is a user scene video.
The scene-making data comprise a model map image and a recognizable image.
The scene-making data comprise a model map video.
The recognizable image is produced at the making end from a recognizable-image template preset on the server and the model map image.
The making end processes the original user scene through Adobe Photoshop; the AR processing unit is a Vuforia AR unit.
The model map image is synthesized into a recognizable image at the making end.
The making end submits the recognizable image to the AR processing unit to form recognizable data.
The production end acquires the processed recognizable image through the server.
In another aspect, the invention discloses an AR display method applied to images or videos, implemented as follows. The production end uploads the original production data (user information, images and videos) to the server. The user information includes the user name, gender, selected three-dimensional scene, mobile phone number, anniversary and remark information; this user character information forms a unique character string on the server. The images and videos are those provided by the user and collected with the user's own camera equipment. The making end acquires the original image and video production data through the server, processes them with Adobe Photoshop, synthesizes the model map image, produces the recognizable image, and submits the recognizable image to the Vuforia AR unit to form the recognizable data. The model map image, the recognizable image and the recognizable data are transmitted back to the server by the making end, and the production end acquires the processed recognizable image through the server. The camera performs data matching and recognition of the user information through the server: after the camera obtains the image information, it is matched against the user-information character string on the server, where matching means looking up the corresponding data on the server once the recognizable image information has been obtained. The server stores the user-information character strings together with the processed recognizable image, the map image, the three-dimensional scene model, the audio and the video in the storage unit in one-to-one correspondence. After the camera identifies the image data through the Vuforia AR unit, the map image, the three-dimensional scene model, the audio and the video data are extracted from the storage unit, and display and interactive functions such as clicking and swiping a fixed or unfixed area are completed on the display terminal. Recognition means identifying the above recognizable images with the Vuforia AR unit, then acquiring continuous real-time pictures through the camera and outputting, on the display, the three-dimensional scene model, audio, model map picture and model map video data matched to the recognizable image.
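The matching step above hinges on the server forming a unique character string for each user and later looking the stored resources up again from recognized image data. The following is a minimal server-side sketch of that keying and lookup, assuming a hash of the submitted user fields as the unique string and an in-memory dictionary as the database; none of these choices is specified by the patent.

```python
import hashlib
import json

# In-memory stand-in for the server database described in the method.
DATABASE = {}

def make_user_string(user_info: dict) -> str:
    """Derive the unique user character string from the submitted user
    information (name, gender, selected scene, phone number, anniversary,
    remarks). The patent does not specify how the string is formed; a hash
    of the canonicalised fields is one plausible choice."""
    canonical = json.dumps(user_info, ensure_ascii=False, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def register(user_info: dict, packet: dict) -> str:
    """Store the user's resources (recognizable image, map image, 3D scene
    model, audio, video) against the unique string, one-to-one."""
    key = make_user_string(user_info)
    DATABASE[key] = packet
    return key

def match(recognized_key: str):
    """Matching: once the recognizable image information (reduced here to
    its key) is obtained, look up the corresponding data on the server."""
    return DATABASE.get(recognized_key)

key = register(
    {"name": "张三", "phone": "13800000000", "scene": "scene-01"},
    {"model": "scene01.fbx", "map": "photo01.png", "audio": "bgm.mp3"},
)
assert match(key) is not None
```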
The method comprises the following concrete steps:
Step 1: a user inputs, through the production end, original production material data comprising one or more items of original picture data and/or original video data to the server, together with the user information data used for verifying the user's identity. The specific steps are as follows:
1) inputting user information data to a server through a production end, and creating a data packet of the user in a database;
2) the server outputs a two-dimensional scene image template from the database to the production end; by previewing and selecting the template, the user chooses one or more preferred user-defined original picture data and/or original video data, inputs the chosen data, an additional original picture or two-dimensional scene image data into the user data packet, and designates the data used on the channel map model, the additional original picture or the two-dimensional scene image data as the marker image used for augmented reality recognition;
Step 2: the making end connects to the server, outputs the original picture data and/or original video data in the user data packet from the database, processes the data to complete the production, and outputs the result back into the user data packet. The marker image for augmented reality recognition in the data packet is input into the AR toolkit, and the AR toolkit data for that image are output to the user data packet on the server. The making end then checks the file type, format, quantity and specification of the processed data packet; the procedure finishes when the result is correct, and when the result is wrong the error is returned to the making end.
the specific steps are divided into three cases,
Case a: the data packet contains only original video data and the marker image for augmented reality recognition. The original video data are input into a video compression program (including, but not limited to, video-processing programs such as QuickTime, After Effects or Final Cut), and the resulting .mp4 file is output to the user data packet through the server. The marker image for augmented reality recognition in the data packet is then input into the AR toolkit, which is provided by an AR engine, and the AR toolkit data for the image are output to the user data packet on the server. The making end checks the file type, format, quantity and specification of the processed data packet; the procedure finishes when the result is correct, and when the result is wrong the error is returned to the making end.
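A minimal sketch of the case-a compression step is given below, assuming ffmpeg as a stand-in for the QuickTime/After Effects/Final Cut class of programs named above; the codec settings and file naming are illustrative only.

```python
import subprocess
from pathlib import Path

def compress_to_mp4(original_video: Path, out_dir: Path) -> Path:
    """Compress the user's original video to an .mp4 before it is placed
    back in the user data packet (case a). Assumes ffmpeg is on PATH."""
    out_file = out_dir / (original_video.stem + ".mp4")
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", str(original_video),
            "-c:v", "libx264", "-crf", "23",   # H.264 video, medium quality
            "-c:a", "aac", "-b:a", "128k",     # compressed AAC audio track
            str(out_file),
        ],
        check=True,
    )
    return out_file
```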
Case b: the data packet contains only original picture data. The original picture data are input into an image processing program (including, but not limited to, programs such as Adobe Photoshop or Affinity Photo). The original picture used on the channel map model is channel-separated, the channel image of the non-display area is removed, and the channel image of the display area is output as a .png file and then output to the user data packet through the server. The marker image for augmented reality recognition in the data packet is then input into the AR toolkit, and the AR toolkit data for the image are output to the user data packet on the server. The making end checks the file type, format, quantity and specification of the processed data packet; the procedure finishes when the result is correct, and when the result is wrong the error is returned to the making end.
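A minimal sketch of the case-b channel separation follows, assuming Pillow as a stand-in for the Photoshop/Affinity Photo class of programs: the transparent channel is read, the fully transparent (non-display) area is discarded, and the remaining display area is written out as a .png that keeps its channel.

```python
from PIL import Image

def extract_display_area(original_picture: str, out_png: str) -> None:
    """Channel-separate the picture used on the channel map model and keep
    only the display area (case b). Cropping to the alpha bounding box is an
    illustrative way of removing the non-display-area channel image."""
    img = Image.open(original_picture).convert("RGBA")
    alpha = img.getchannel("A")           # the transparent channel
    bbox = alpha.getbbox()                # bounding box of non-zero alpha
    display_area = img.crop(bbox) if bbox else img
    display_area.save(out_png, format="PNG")
```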
Case c: the data packet contains only original picture data and two-dimensional scene image data. The original picture data and the two-dimensional scene image data are input into an image processing program (including, but not limited to, programs such as Adobe Photoshop or Affinity Photo). The original picture used on the channel map model is channel-separated, the channel image of the non-display area is removed, the channel image of the display area is merged with the two-dimensional scene image and output as a .jpeg file, which is input into the AR toolkit so that the AR toolkit data for the image are output to the user data packet on the server; the channel image of the display area is also output as a .png file and then output to the user data packet through the server. The making end checks the file type, format, quantity and specification of the processed data packet; the procedure finishes when the result is correct, and when the result is wrong the error is returned to the making end.
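For case c, the sketch below composites the display-area channel image over the selected two-dimensional scene image and flattens the result to the .jpeg marker handed to the AR toolkit; Pillow, the canvas handling and the file names are assumptions rather than part of the method.

```python
from PIL import Image

def build_marker_jpeg(display_png: str, scene_image: str, out_jpeg: str) -> None:
    """Merge the display-area channel image with the two-dimensional scene
    image (case c) and flatten to a .jpeg used as the recognizable marker."""
    scene = Image.open(scene_image).convert("RGBA")
    overlay = Image.open(display_png).convert("RGBA").resize(scene.size)
    merged = Image.alpha_composite(scene, overlay)   # channel image on top
    merged.convert("RGB").save(out_jpeg, format="JPEG", quality=90)
```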
Step 3: user information data are input on the handheld device, the handheld device connects to the server through the communication unit to verify the user information, and the process ends when the result is an error; when the result is correct, the user data packet is output from the server, and the three-dimensional scene model, the map model with a channel, the picture data with a channel, the compressed video data, the AR toolkit data, the recognizable image data and the audio data are stored into the storage unit;
Step 4: the operation unit calls the camera through the AR engine unit to acquire continuous real-world images; the recognizable image is placed within the range in which the camera acquires the continuous real-world images, the AR engine unit anchors the display position according to the spatial relation of the recognizable image in the real world, and the data packet content is output through the three-dimensional engine unit and displayed on the display unit.
The method comprises the following specific steps: and outputting the three-dimensional scene model in the data packet through a three-dimensional engine unit, mapping the picture data with the channel and/or the compressed video data on the mapping model with the channel, and playing audio data by using a loudspeaker of the device.
And 5: and controlling the playing, pausing, skipping and stopping of the data output by the three-dimensional engine by using the interaction unit. The method comprises the following specific steps: inputting a playing command to the interaction unit, and starting outputting a three-dimensional scene model, picture data with a channel and/or compressed video data by a three-dimensional engine, wherein the picture data with the channel and/or the compressed video data are subjected to mapping and audio data on a mapping model with the channel; and inputting a pause command to the interaction unit, pausing and stilling the three-dimensional scene model, the picture data with the channel and/or the compressed video data, and pasting the picture and the audio data on the map model with the channel by the three-dimensional engine. When a plurality of original picture data with channels and/or compressed video data are stored in the data packet, a skip command is input to the interaction unit, the three-dimensional engine continues to output a three-dimensional scene model and audio data and replaces the picture data with channels and/or the compressed video data with the mapping model with channels to map; inputting a stop command to the interaction unit, and stopping outputting the content by the three-dimensional engine; and moving the recognizable image away from the range of the real world continuous image acquired by the camera, and stopping the three-dimensional engine from inputting data.
The beneficial effects of the invention are as follows: the method and system for displaying pictures or videos with a channel together with a three-dimensional scene model in an augmented-reality environment have a fast conversion speed, simplified user operation steps and a simple overall implementation; they improve the degree of fusion between the original data and the three-dimensional scene model and the sense of immersion while browsing, and so improve the augmented reality effect.
A user transmits the raw material data to the database through the production end connected to the server, the making end then completes the data production, and the three-dimensional digital content generated from the processed raw data is browsed in an augmented-reality environment using the handheld device. Further aspects of the invention are described below:
(1) The invention provides a process framework for distributed production that is more convenient for multiple users, multiple data sets and high concurrency. For multiple users, user information data are input to the server through the production end and a data packet is created for each user in the database; this simplifies the implementation steps of the system, raises efficiency and reduces cost. For multiple data sets, the invention provides an implementation method for customizing pictures or videos with a transparent channel, once or repeatedly, for augmented-reality display. Any picture or video can be displayed in combination with a preset three-dimensional model scene, and one or more pictures and/or videos can be displayed in the same three-dimensional model scene.
(2) The invention not only provides a method and system for displaying user-defined pictures with a transparent channel based on augmented reality, but is also applicable to videos, and it gives detailed steps of the implementation process.
(3) The invention provides a picture and/or video display method for handheld devices in which one or more items of user-defined original picture data and/or original video data are processed by transparent-channel separation and compression, then combined with a preset three-dimensional scene model and applied in an augmented-reality environment, improving the degree of fusion between the original data and the three-dimensional scene model and the sense of immersion while browsing.
(4) The invention further provides a refinement of the system in use, based on network communication and composed of a production end, a making end, a server and a handheld device built from a three-dimensional engine, an AR toolkit and the basic units of the device.
(5) The method combines existing pictures or videos with the three-dimensional model through processing and shows them in an augmented-reality environment, without needing to generate additional pictures or videos in real time.
(6) The invention realizes a method for displaying one or more pictures or videos with transparent channels in a three-dimensional scene on a handheld device by means of a camera, a display unit, an interaction unit, a storage unit, an operation unit, a three-dimensional engine unit and an AR engine unit; the system realizes standard end-to-end and end-to-device processes through the production end, the making end, the server and the handheld device.
Interpretation of terms:
Real world: images captured from reality, for example the physical real-world situation captured with electronic photographic technology such as video recording.
Augmented reality: a technology that calculates the position and angle of the camera image in real time and adds corresponding imagery, with the goal of overlaying a virtual world onto the real environment shown on the screen and interacting with it.
Production end and making end: computers provided with a small program with network access, responsible for transmitting data to the server or acquiring data from the server.
Video compression program: including, but not limited to, video editing programs such as QuickTime, After Effects and Final Cut. Image processing program: including, but not limited to, image editing programs such as Adobe Photoshop and Affinity Photo.
AR toolkit: including, but not limited to, augmented reality developer toolkits such as Vuforia AR and EasyAR.
Three-dimensional engine: including, but not limited to, three-dimensional programs such as Unity3D and Unreal Engine that are widely used on computers, particularly handheld devices.
Three-dimensional scene model: a data packet composed of digital resources in a three-dimensional engine together with the logical relationships of a certain real-world scene, including three-dimensional models, maps, animations, special effects, audio and so on.
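As an illustration of what the per-user data packet assembled on the server might hold, the sketch below lists the kinds of resources named in the method; the field names and types are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserDataPacket:
    """Per-user data packet assembled on the server (illustrative layout)."""
    user_string: str                    # unique user character string
    scene_model: str                    # three-dimensional scene model file
    channel_map_model: str              # map model with a transparent channel
    channel_pictures: List[str] = field(default_factory=list)   # .png files
    compressed_videos: List[str] = field(default_factory=list)  # .mp4 files
    marker_image: str = ""              # recognizable image for the AR toolkit
    ar_toolkit_data: str = ""           # recognition data from the AR toolkit
    audio: str = ""                     # audio played during display
```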
Drawings
FIG. 1 is a block diagram of the system of the present invention;
FIG. 2 is a process flow diagram of the production side of the method of the present invention;
FIG. 3 is a process flow diagram of the manufacturing end of the method of the present invention;
FIG. 4 is a flow chart of a handheld device of the method of the present invention;
Detailed Description
Example 1
Step 1: a user inputs, through the production end, original production material data comprising one or more items of original picture data and/or original video data to the server, together with the user information data used for verifying the user's identity. The specific steps are as follows:
1) user information data are input to the server through the production end and a data packet for the user is created in the database; specifically, the user name, gender, selected three-dimensional scene, mobile phone number, anniversary and remark information are entered, and this user character information forms a unique user character string on the server;
2) the server outputs a two-dimensional scene image template from the database to the production end; by previewing and selecting the template, the user chooses several preferred items of original video data, inputs them into the user data packet, and designates the original video data used on the channel map model as the marker image used for augmented reality recognition;
Step 2: the making end connects to the server, outputs the original video data in the user data packet from the database and inputs them into the QuickTime video compression program; after compression, the .mp4 file is output to the user data packet through the server. The marker image for augmented reality recognition in the data packet is then input into the AR toolkit, which is provided by an AR engine, and the AR toolkit data for the image are output to the user data packet on the server. The making end checks the file type, format, quantity and specification of the processed data packet; the procedure finishes when the result is correct, and when the result is wrong the error is returned to the making end.
Step 3: user information data are input on the handheld device, the handheld device connects to the server through the communication unit to verify the user information, and the process ends when the result is an error; when the result is correct, the user data packet is output from the server, and the three-dimensional scene model, the map model with a channel, the compressed video data, the AR toolkit data, the recognizable image data and the audio data are stored into the storage unit;
Step 4: the operation unit calls the camera through the AR engine unit to acquire continuous real-world images; the recognizable image is placed within the range in which the camera acquires the continuous real-world images, the AR engine unit anchors the display position according to the spatial relation of the recognizable image in the real world, and the data packet content is output through the three-dimensional engine unit and displayed on the display unit.
The specific steps are: the three-dimensional scene model in the data packet is output through the three-dimensional engine unit, the compressed video data are mapped onto the map model with a channel, and the audio data are played through the device's loudspeaker.
Step 5: the interaction unit is used to control playing, pausing, skipping and stopping of the data output by the three-dimensional engine. The specific steps are: a play command is input to the interaction unit, and the three-dimensional engine starts outputting the three-dimensional scene model, the mapping of the compressed video data onto the map model with a channel, and the audio data; a pause command is input to the interaction unit, and the three-dimensional engine pauses and freezes the three-dimensional scene model, the mapping of the compressed video data on the map model with a channel, and the audio data. When several items of compressed video data are stored in the data packet, a skip command is input to the interaction unit, and the three-dimensional engine continues to output the three-dimensional scene model and the audio data while replacing the compressed video data mapped onto the map model with a channel; a stop command is input to the interaction unit, and the three-dimensional engine stops outputting content; when the recognizable image is moved out of the range in which the camera acquires the continuous real-world images, the three-dimensional engine stops inputting data.
Example 2
Step 1: a user inputs, through the production end, original production material data comprising one or more items of original picture data and/or original video data to the server, together with the user information data used for verifying the user's identity. The specific steps are as follows:
1) user information data are input to the server through the production end and a data packet for the user is created in the database; specifically, the user name, gender, selected three-dimensional scene, mobile phone number, anniversary and remark information are entered, and this user character information forms a unique user character string on the server;
2) the server outputs a two-dimensional scene image template from the database to the production end; by previewing and selecting the template, the user chooses several preferred items of original picture data, inputs them into the user data packet, and designates the original picture data used on the channel map model as the marker image used for augmented reality recognition;
Step 2: the making end connects to the server, outputs the original picture data in the user data packet from the database and inputs them into the Affinity Photo image processing program; the original picture used on the channel map model is channel-separated, the channel image of the non-display area is removed, and the channel image of the display area is output as a .png file and then output to the user data packet through the server. The marker image for augmented reality recognition in the data packet is then input into the AR toolkit, and the AR toolkit data for the image are output to the user data packet on the server. The making end checks the file type, format, quantity and specification of the processed data packet; the procedure finishes when the result is correct, and when the result is wrong the error is returned to the making end.
Step 3: user information data are input on the handheld device, the handheld device connects to the server through the communication unit to verify the user information, and the process ends when the result is an error; when the result is correct, the user data packet is output from the server, and the three-dimensional scene model, the map model with a channel, the picture data with a channel, the AR toolkit data, the recognizable image data and the audio data are stored into the storage unit;
Step 4: the operation unit calls the camera through the AR engine unit to acquire continuous real-world images; the recognizable image is placed within the range in which the camera acquires the continuous real-world images, the AR engine unit anchors the display position according to the spatial relation of the recognizable image in the real world, and the data packet content is output through the three-dimensional engine unit and displayed on the display unit.
The specific steps are: the three-dimensional scene model in the data packet is output through the three-dimensional engine unit, the picture data with a channel are mapped onto the map model with a channel, and the audio data are played through the device's loudspeaker.
Step 5: the interaction unit is used to control playing, pausing, skipping and stopping of the data output by the three-dimensional engine. The specific steps are: a play command is input to the interaction unit, and the three-dimensional engine starts outputting the three-dimensional scene model, the mapping of the picture data with a channel onto the map model with a channel, and the audio data; a pause command is input to the interaction unit, and the three-dimensional engine pauses and freezes the three-dimensional scene model, the mapping of the picture data with a channel on the map model with a channel, and the audio data. When several items of original picture data with a channel are stored in the data packet, a skip command is input to the interaction unit, and the three-dimensional engine continues to output the three-dimensional scene model and the audio data while replacing the picture data with a channel mapped onto the map model with a channel; a stop command is input to the interaction unit, and the three-dimensional engine stops outputting content; when the recognizable image is moved out of the range in which the camera acquires the continuous real-world images, the three-dimensional engine stops inputting data.
Example 3
Step 1: a user inputs, through the production end, original production material data comprising one or more items of original picture data and/or original video data to the server, together with the user information data used for verifying the user's identity. The specific steps are as follows:
1) user information data are input to the server through the production end and a data packet for the user is created in the database; specifically, the user name, gender, selected three-dimensional scene, mobile phone number, anniversary and remark information are entered, and this user character information forms a unique user character string on the server;
2) the server outputs a two-dimensional scene image template from the database to the production end; by previewing and selecting the template, the user chooses preferred original picture data and two-dimensional scene image data, inputs them into the user data packet, and designates the original picture data used on the channel map model as the marker image used for augmented reality recognition;
Step 2: the making end connects to the server, outputs the original picture data and the two-dimensional scene image data in the user data packet from the database and inputs them into the Adobe Photoshop image processing program; the original picture used on the channel map model is channel-separated, the channel image of the non-display area is removed, the channel image of the display area and the two-dimensional scene image are merged and output as a .jpeg file, which is input into the AR toolkit so that the AR toolkit data for the image are output to the user data packet on the server, and the channel image of the display area is output as a .png file and then output to the user data packet through the server. The making end checks the file type, format, quantity and specification of the processed data packet; the procedure finishes when the result is correct, and when the result is wrong the error is returned to the making end.
Step 3: user information data are input on the handheld device, the handheld device connects to the server through the communication unit to verify the user information, and the process ends when the result is an error; when the result is correct, the user data packet is output from the server, and the three-dimensional scene model, the map model with a channel, the picture data with a channel, the AR toolkit data, the recognizable image data and the audio data are stored into the storage unit;
Step 4: the operation unit calls the camera through the AR engine unit to acquire continuous real-world images; the recognizable image is placed within the range in which the camera acquires the continuous real-world images, the AR engine unit anchors the display position according to the spatial relation of the recognizable image in the real world, and the data packet content is output through the three-dimensional engine unit and displayed on the display unit.
The specific steps are: the three-dimensional scene model in the data packet is output through the three-dimensional engine unit, the picture data with a channel are mapped onto the map model with a channel, and the audio data are played through the device's loudspeaker.
Step 5: the interaction unit is used to control playing, pausing, skipping and stopping of the data output by the three-dimensional engine. The specific steps are: a play command is input to the interaction unit, and the three-dimensional engine starts outputting the three-dimensional scene model, the mapping of the picture data with a channel onto the map model with a channel, and the audio data; a pause command is input to the interaction unit, and the three-dimensional engine pauses and freezes the three-dimensional scene model, the mapping of the picture data with a channel on the map model with a channel, and the audio data. When several items of original picture data with a channel are stored in the data packet, a skip command is input to the interaction unit, and the three-dimensional engine continues to output the three-dimensional scene model and the audio data while replacing the picture data with a channel mapped onto the map model with a channel; a stop command is input to the interaction unit, and the three-dimensional engine stops outputting content; when the recognizable image is moved out of the range in which the camera acquires the continuous real-world images, the three-dimensional engine stops inputting data.

Claims (8)

1. An AR display method applied to an image or video, characterized in that the method comprises:
Step 1: a user inputs, through the production end, original production material data comprising one or more items of original picture data and/or original video data to the server, together with the user information data used for verifying the user's identity; the specific steps are as follows:
1) user information data are input to the server through the production end, and a data packet for the user is created in the database;
2) the server outputs a two-dimensional scene image template from the database to the production end; by previewing and selecting the template, the user chooses one or more preferred user-defined original picture data and/or original video data, inputs the chosen data, an additional original picture or two-dimensional scene image data into the user data packet, and designates the data used on the channel map model, the additional original picture or the two-dimensional scene image data as the marker image used for augmented reality recognition;
Step 2: the making end connects to the server, outputs the original picture data and/or original video data in the user data packet from the database, processes the data to complete the production, and outputs the result back into the user data packet; the marker image for augmented reality recognition in the data packet is input into the AR toolkit, and the AR toolkit data for that image are output to the user data packet on the server; the making end checks the file type, format, quantity and specification of the processed data packet, the procedure finishes when the result is correct, and when the result is wrong the error is returned to the making end;
Step 3: user information data are input on the handheld device, the handheld device connects to the server through the communication unit to verify the user information, and the process ends when the result is an error; when the result is correct, the user data packet is output from the server, and the three-dimensional scene model, the map model with a channel, the picture data with a channel, the compressed video data, the AR toolkit data, the recognizable image data and the audio data are stored into the storage unit;
Step 4: the operation unit calls the camera through the AR engine unit to acquire continuous real-world images; the recognizable image is placed within the range in which the camera acquires the continuous real-world images, the AR engine unit anchors the display position according to the spatial relation of the recognizable image in the real world, and the data packet content is output through the three-dimensional engine unit and displayed on the display unit.
2. The AR display method applied to an image or video according to claim 1, wherein in step 2, when the data packet contains only original video data and the marker image for augmented reality recognition, the production is completed as follows: the original video data are input into the video compression program, and the .mp4 file obtained by compressing the original video data is output to the user data packet through the server.
3. The AR display method applied to an image or video according to claim 1, wherein in step 2, when the data packet contains only original picture data, the production is completed as follows: the original picture data are input into the image processing program, the original picture used on the channel map model is channel-separated, the channel image of the non-display area is removed, and the channel image of the display area is output as a .png file and then output to the user data packet through the server.
4. The AR display method applied to an image or video according to claim 1, wherein in step 2, when the data packet contains only original picture data and two-dimensional scene image data, the production is completed as follows: the original picture data and the two-dimensional scene image data are input into the image processing program, the original picture used on the channel map model is channel-separated, the channel image of the non-display area is removed, the channel image of the display area and the two-dimensional scene image are merged and output as a .jpeg file, the .jpeg file is input into the AR toolkit and the AR toolkit data for the image are output to the user data packet on the server, and the channel image of the display area is output as a .png file and then output to the user data packet through the server.
5. The AR display method applied to an image or video according to claim 1, wherein the specific steps of step 4 are: the three-dimensional scene model in the data packet is output through the three-dimensional engine unit, the picture data with a channel and/or the compressed video data are mapped onto the map model with a channel, and the audio data are played through the device's loudspeaker.
6. The AR display method applied to an image or video according to claim 1, wherein the method further comprises step 5: the interaction unit is used to control playing, pausing, skipping and stopping of the data output by the three-dimensional engine.
7. The AR display method applied to an image or video according to claim 6, wherein the specific steps of step 5 are: a play command is input to the interaction unit, and the three-dimensional engine starts outputting the three-dimensional scene model, the mapping of the picture data with a channel and/or the compressed video data onto the map model with a channel, and the audio data; a pause command is input to the interaction unit, and the three-dimensional engine pauses and freezes the three-dimensional scene model, the mapping of the picture data with a channel and/or the compressed video data on the map model with a channel, and the audio data.
8. The AR display method applied to an image or video according to claim 7, wherein, when several items of original picture data with a channel and/or compressed video data are stored in the data packet, a skip command is input to the interaction unit, and the three-dimensional engine continues to output the three-dimensional scene model and the audio data while replacing the picture data with a channel and/or compressed video data mapped onto the map model with a channel; a stop command is input to the interaction unit, and the three-dimensional engine stops outputting content; when the recognizable image is moved out of the range in which the camera acquires the continuous real-world images, the three-dimensional engine stops inputting data.
CN201680056530.5A 2015-12-21 2016-12-03 AR display system and method applied to image or video Active CN108140263B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN2015109596240 2015-12-21
CN201510959624.0A CN105608745B (en) 2015-12-21 2015-12-21 AR display system applied to image or video
PCT/CN2016/108466 WO2017107758A1 (en) 2015-12-21 2016-12-03 Ar display system and method applied to image or video

Publications (2)

Publication Number Publication Date
CN108140263A CN108140263A (en) 2018-06-08
CN108140263B true CN108140263B (en) 2021-04-27

Family

ID=55988656

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201510959624.0A Expired - Fee Related CN105608745B (en) 2015-12-21 2015-12-21 AR display system applied to image or video
CN201680056530.5A Active CN108140263B (en) 2015-12-21 2016-12-03 AR display system and method applied to image or video

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201510959624.0A Expired - Fee Related CN105608745B (en) 2015-12-21 2015-12-21 AR display system applied to image or video

Country Status (2)

Country Link
CN (2) CN105608745B (en)
WO (1) WO2017107758A1 (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608745B (en) * 2015-12-21 2019-01-29 大连新锐天地文化科技有限公司 AR display system applied to image or video
CN106790470A (en) * 2016-12-12 2017-05-31 上海尤卡城信息科技有限责任公司 Picture and the video customization of a kind of combination AR technologies and the system and method for reading Irnaging procedures
CN106851421A (en) * 2016-12-15 2017-06-13 天津知音网络科技有限公司 A kind of display system for being applied to video AR
CN106850971B (en) * 2017-01-03 2021-01-08 惠州Tcl移动通信有限公司 Method and system for dynamically and immediately displaying picture content based on mobile terminal
CN109388231A (en) * 2017-08-14 2019-02-26 广东畅响源教育科技有限公司 The system and method for VR object or scene interactivity manipulation is realized based on master pattern
CN107767462B (en) * 2017-10-16 2023-08-25 北京视据科技有限公司 Non-wearable augmented reality holographic display method and display system
CN107818459A (en) * 2017-10-30 2018-03-20 努比亚技术有限公司 Red packet sending method, terminal and storage medium based on augmented reality
CN111615832B (en) * 2018-01-22 2022-10-25 苹果公司 Method and apparatus for generating a composite reality reconstruction of planar video content
CN108614638B (en) * 2018-04-23 2020-07-07 太平洋未来科技(深圳)有限公司 AR imaging method and apparatus
CN109191369B (en) 2018-08-06 2023-05-05 三星电子(中国)研发中心 Method, storage medium and device for converting 2D picture set into 3D model
CN109615705A (en) * 2018-11-22 2019-04-12 云南电网有限责任公司电力科学研究院 A kind of electric power science and technology exhibition based on virtual reality technology shows method and device
CN111669666A (en) * 2019-03-08 2020-09-15 北京京东尚科信息技术有限公司 Method, device and system for simulating reality
CN111047672A (en) * 2019-11-26 2020-04-21 湖南龙诺数字科技有限公司 Digital animation generation system and method
CN111028597B (en) * 2019-12-12 2022-04-19 塔普翊海(上海)智能科技有限公司 Mixed reality foreign language scene, environment and teaching aid teaching system and method thereof
CN111047693A (en) * 2019-12-27 2020-04-21 浪潮(北京)电子信息产业有限公司 Image training data set generation method, device, equipment and medium
CN111310049B (en) * 2020-02-25 2023-04-07 腾讯科技(深圳)有限公司 Information interaction method and related equipment
CN111665945B (en) * 2020-06-10 2023-11-24 浙江商汤科技开发有限公司 Tour information display method and device
CN111754641A (en) * 2020-06-28 2020-10-09 中国银行股份有限公司 Capital escrow article display method, device and equipment based on AR
CN111833460A (en) 2020-07-10 2020-10-27 北京字节跳动网络技术有限公司 Augmented reality image processing method and device, electronic equipment and storage medium
CN112509147A (en) * 2020-11-27 2021-03-16 武汉全乐科技有限公司 Three-dimensional augmented reality display method
CN112738499A (en) 2020-12-25 2021-04-30 京东方科技集团股份有限公司 Information display method and device based on AR, AR equipment, electronic equipment and medium
CN113032699B (en) * 2021-03-04 2023-04-25 广东博智林机器人有限公司 Model construction method, model construction device and processor of robot
CN113784107A (en) * 2021-09-17 2021-12-10 国家能源集团陕西富平热电有限公司 Three-dimensional visual display method and system for video signal
CN114003190B (en) * 2021-12-30 2022-04-01 江苏移动信息***集成有限公司 Augmented reality method and device suitable for multiple scenes and multiple devices
CN116977607B (en) * 2023-07-21 2024-05-07 武汉熠腾科技有限公司 Cultural relic model display method and system based on pixel flow

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102821323A (en) * 2012-08-01 2012-12-12 成都理想境界科技有限公司 Video playing method, video playing system and mobile terminal based on augmented reality technique
CN104834897A (en) * 2015-04-09 2015-08-12 东南大学 System and method for enhancing reality based on mobile platform
CN104834375A (en) * 2015-05-05 2015-08-12 常州恐龙园股份有限公司 Amusement park guide system based on augmented reality
CN104851004A (en) * 2015-05-12 2015-08-19 杨淑琪 Display device of decoration try and display method thereof

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100918392B1 (en) * 2006-12-05 2009-09-24 한국전자통신연구원 Personal-oriented multimedia studio platform for 3D contents authoring
JP5799521B2 (en) * 2011-02-15 2015-10-28 ソニー株式会社 Information processing apparatus, authoring method, and program
CA2876130A1 (en) * 2012-06-14 2013-12-19 Bally Gaming, Inc. System and method for augmented reality gaming
CN104134229A (en) * 2014-08-08 2014-11-05 李成 Real-time interaction reality augmenting system and method
CN106033333A (en) * 2015-03-10 2016-10-19 沈阳中云普华科技有限公司 A visual augmented reality scene making system and method
CN105608745B (en) * 2015-12-21 2019-01-29 大连新锐天地文化科技有限公司 AR display system applied to image or video

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102821323A (en) * 2012-08-01 2012-12-12 成都理想境界科技有限公司 Video playing method, video playing system and mobile terminal based on augmented reality technique
CN104834897A (en) * 2015-04-09 2015-08-12 东南大学 System and method for enhancing reality based on mobile platform
CN104834375A (en) * 2015-05-05 2015-08-12 常州恐龙园股份有限公司 Amusement park guide system based on augmented reality
CN104851004A (en) * 2015-05-12 2015-08-19 杨淑琪 Display device of decoration try and display method thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Unity3D-based automatic tour-guide method for mobile augmented reality; 罗永东 et al.; Computer & Digital Engineering; 2015-12-15 (No. 11); full text *
Smartphone-based augmented reality tour-guide ***; 武新飞; China Master's Theses Full-text Database, Information Science and Technology; 2013-01-15 (No. 1); full text *

Also Published As

Publication number Publication date
CN108140263A (en) 2018-06-08
CN105608745A (en) 2016-05-25
WO2017107758A1 (en) 2017-06-29
CN105608745B (en) 2019-01-29

Similar Documents

Publication Publication Date Title
CN108140263B (en) AR display system and method applied to image or video
US11488355B2 (en) Virtual world generation engine
CN105320695B (en) Picture processing method and device
CN107633441A (en) Commodity in track identification video image and the method and apparatus for showing merchandise news
CN109168026A (en) Instant video display methods, device, terminal device and storage medium
CN107798932A (en) A kind of early education training system based on AR technologies
CN103310099A (en) Method and system for realizing augmented reality by adopting image capture and recognition technology
WO2018103384A1 (en) Method, device and system for playing 360 degree panoramic video
CN106648098B (en) AR projection method and system for user-defined scene
CA3001480C (en) Video-production system with dve feature
EP3024223B1 (en) Videoconference terminal, secondary-stream data accessing method, and computer storage medium
CN110930220A (en) Display method, display device, terminal equipment and medium
CN109743584A (en) Panoramic video synthetic method, server, terminal device and storage medium
TW201631960A (en) Display system, method, computer readable recording medium and computer program product for video stream on augmented reality
CN107995482A (en) The treating method and apparatus of video file
CN109313653A (en) Enhance media
CN113806306A (en) Media file processing method, device, equipment, readable storage medium and product
CN111583348A (en) Image data encoding method and device, display method and device, and electronic device
TWM506428U (en) Display system for video stream on augmented reality
US20140286624A1 (en) Method and apparatus for personalized media editing
US20160350955A1 (en) Image processing method and device
CN108320331B (en) Method and equipment for generating augmented reality video information of user scene
WO2023241377A1 (en) Video data processing method and device, equipment, system, and storage medium
TWI514319B (en) Methods and systems for editing data using virtual objects, and related computer program products
KR20140078043A (en) A lecture contents manufacturing system and method which anyone can easily make

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20181018

Address after: 116000 No. 14, No. 14, pin Tao Street, Dalian high tech Industrial Park, Liaoning, China.

Applicant after: Dalian New World Culture Technology Co., Ltd.

Address before: 116000 18, 7 Torch Road, hi tech park, Dalian, Liaoning

Applicant before: DALIAN NEW VISION MEDIA CO., LTD.

TA01 Transfer of patent application right

Effective date of registration: 20200811

Address after: 116000 West 702, floor 7, No.1, Gaoxin street, high tech Industrial Park, Dalian City, Liaoning Province

Applicant after: Dalian lanchuang Technology Co.,Ltd.

Address before: 116000 No. 14, No. 14, pin Tao Street, Dalian high tech Industrial Park, Liaoning, China.

Applicant before: Dalian New World Culture Technology Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20210408

Address after: 116000 room 702, west side, 7th floor, 1 Gaoxin street, Dalian hi tech Industrial Park, Liaoning Province

Applicant after: Dalian saide Boqiang Culture Technology Co.,Ltd.

Address before: 116000 West 702, 7th floor, No.1 Gaoxin street, high tech Industrial Park, Dalian, Liaoning Province

Applicant before: Dalian lanchuang Technology Co.,Ltd.

GR01 Patent grant