CN110248207B - Image reality display server, image reality display method, recording medium and image reality display system - Google Patents


Info

Publication number
CN110248207B
CN110248207B (application CN201810191383.3A)
Authority
CN
China
Prior art keywords
image, background, main body, display, video file
Legal status
Active
Application number
CN201810191383.3A
Other languages
Chinese (zh)
Other versions
CN110248207A (en)
Inventor
叶树灵
王振东
Current Assignee
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Application filed by Ricoh Co Ltd
Priority claimed from CN201810191383.3A
Publication of CN110248207A
Application granted
Publication of CN110248207B
Status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23412 Processing of video elementary streams for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • H04N21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N21/23424 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/44012 Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Image Processing (AREA)
  • Studio Circuits (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides an image reality display server, a display method, a recording medium, and a display system that not only play a video of a display subject but also combine that playback with the image recorded on a recording medium, achieving a more realistic and vivid display effect. The display system includes: a first user terminal that stores video files; a second user terminal having a playback superimposition part; and a cloud server serving as the image reality display server. In the cloud server, a background removal part removes the original background from a video file to obtain a transparent background video file, an image selection part selects a representative image from the transparent background video file, a subject feature extraction part extracts subject features from the representative image, and a video storage part stores the transparent background video file in association with the subject features. A video retrieval acquisition part obtains the corresponding transparent background video file from the current subject features, and the playback superimposition part superimposes the video according to the relative position to realize playback display.

Description

Image reality display server, image reality display method, recording medium and image reality display system
Technical Field
The invention belongs to the field of image display, and particularly relates to an image reality display system, an image reality display method and a computer-readable recording medium.
Background
People often use invitations (invitation cards) in daily life, for example for birthdays, weddings, or banquets of various kinds, and such invitations are usually sent to the invitees in paper form. Some inviters also print their own photograph on the invitation so that the invitee can better recognize the inviter's identity.
Similarly, senders of greeting cards, submitters of resumes, and the like often print their own photographs on the corresponding greeting cards or resumes and then send them to the recipient. This form lets the recipient clearly see the sender's image, which facilitates communication between the two parties and can also convey the sender's feeling or sincerity. However, printing photographs on a paper recording medium presents the sender's image in a rigid, static way, and the display effect is far from ideal.
In order to let the recipient see a more realistic presentation of the image, video greeting card services have appeared in the prior art. The sender records a video in advance and uploads it to a service provider; the service provider issues a code (such as a two-dimensional code) corresponding to the video; the sender prints the code on a recording medium and sends it to the recipient; after receiving it, the recipient can watch the corresponding video by visiting the service provider's website and entering the code. In this way the recipient can see the real image of the sender through the video, so the display effect is good. However, in such a display scheme the display subject and the recording medium are completely separated: the recipient views either the video or the content recorded on the recording medium, so the improvement in display effect is very limited. In addition, the recipient must deliberately enter the code to see the corresponding video, and if the code is entered incorrectly the recipient may see a non-corresponding or irrelevant video. The display process is therefore inconvenient, and the display and communication effect is lost.
Disclosure of Invention
In order to solve the above problems, the present invention provides an image reality display server, an image reality display method, a computer-readable recording medium, and an image reality display system which can not only present the display subject as a video but also combine that video with the image recorded on the recording medium, achieving a more realistic and vivid display effect. The present invention adopts the following structures:
< Structure I >
The present invention provides an image reality display server for playing and displaying, by means of a display image on a recording medium, a video file that corresponds to the display image and contains a display subject. The server is communicatively connected with at least one first user terminal, held by a first user, which stores the video file, and with at least one second user terminal, held by a second user, which performs playback display based on the recording medium and has a camera and a playback superimposition part. The server comprises a background storage part, a background retrieval acquisition part, a background removal part, an image selection part, a subject feature extraction part, a video storage part, a display image synthesis part, a video retrieval acquisition part, and a server communication part. The background storage part stores various background images for setting off the display subject and the application scene category corresponding to each background image. The server communication part receives, from the first user terminal, the video file selected by the first user and the designated application scene information; the background retrieval acquisition part searches the background storage part according to the application scene information to obtain at least one corresponding background image; and the server communication part sends all obtained background images to the first user terminal and receives from the first user terminal the background image designated by the first user as the designated background image. The background removal part removes the original background of the display subject in the received video file to obtain a transparent background video file; the image selection part selects one frame from the transparent background video file as a representative image; the subject feature extraction part extracts various subject features of the display subject from the representative image; the video storage part stores at least the transparent background video file and the subject features in association with each other; the display image synthesis part synthesizes the representative image and the designated background image to obtain the display image; and the server communication part sends the display image to the first user terminal, whereupon the first user presents the display image on a recording medium according to the application scene. Once the camera of the second user terminal is aimed at the display image on the recording medium, the server communication part receives the display image read by the camera from the second user terminal as the current display image, the subject feature extraction part extracts the current subject features of the display subject in the current display image, the video retrieval acquisition part retrieves from the video storage part, according to the current subject features, the transparent background video file stored in association with subject features that match the current subject features, and the server communication part sends the obtained transparent background video file to the second user terminal, so that the playback superimposition part in the second user terminal superimposes the display subject of each frame of the transparent background video file onto the display subject in the current display image, according to the relative position between the display subject and the designated background image in the current display image, thereby realizing playback display of the transparent background video file.
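As a concrete illustration of how the video storage part and the video retrieval acquisition part could cooperate, the following sketch stores transparent background video files keyed by their subject features and retrieves the file whose stored features best match the current subject features. The feature representation, distance metric, threshold, and all names are assumptions made for illustration; the patent text does not prescribe them.

```python
# Minimal sketch of the video storage part and the video retrieval acquisition part
# (illustrative only; the patent does not fix a feature format or matching rule).
import numpy as np

class VideoStore:
    def __init__(self):
        self._records = []   # list of (subject_features: np.ndarray, video_path: str)

    def store(self, subject_features, transparent_video_path):
        """Video storage step: keep the transparent video together with its features."""
        self._records.append((np.asarray(subject_features, dtype=np.float32),
                              transparent_video_path))

    def retrieve(self, current_features, max_distance=0.6):
        """Video retrieval step: return the file whose stored features match best."""
        current = np.asarray(current_features, dtype=np.float32)
        best_path, best_dist = None, max_distance
        for features, path in self._records:
            dist = float(np.linalg.norm(features - current))
            if dist < best_dist:
                best_path, best_dist = path, dist
        return best_path   # None if no stored features match closely enough
```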
< Structure II >
The present invention also provides an image reality display method for playing and displaying, by means of a display image on a recording medium, a video file that corresponds to the display image and contains a display subject. The image reality display method is performed by a cloud server that is communicatively connected with at least one first user terminal, held by a first user, which stores the video file, and with at least one second user terminal, held by a second user, which performs playback display based on the recording medium and has a camera and a playback superimposition part. The method comprises the following steps: a background storage step of storing various background images for setting off the display subject and the application scene category corresponding to each background image; a background retrieval acquisition step of, once the server receives from the first user terminal the video file selected by the first user and the designated application scene information, retrieving at least one corresponding background image according to the application scene information; a background image sending and receiving step of sending all obtained background images to the first user terminal and receiving from the first user terminal the background image designated by the first user as the designated background image; a background removal step of removing the original background of the display subject in the received video file to obtain a transparent background video file; an image selection step of selecting one frame from the transparent background video file as a representative image; a subject feature extraction step of extracting various subject features of the display subject from the representative image; a video storage step of storing at least the transparent background video file and the subject features in association with each other; a display image synthesis step of synthesizing the representative image and the designated background image to obtain the display image; a display image sending step of sending the display image to the first user terminal so that the first user presents the display image on a recording medium according to the application scene; a current display image receiving step of, once the camera of the second user terminal is aimed at the display image on the recording medium, receiving the display image read by the camera from the second user terminal as the current display image; a current subject feature extraction step of extracting the current subject features of the display subject in the current display image; a video retrieval acquisition step of retrieving the corresponding transparent background video file according to the current subject features; and a transparent background video file sending step of sending the obtained transparent background video file to the second user terminal, so that the playback superimposition part in the second user terminal superimposes the display subject of each frame of the transparent background video file onto the display subject in the current display image, according to the relative position between the display subject and the designated background image in the current display image, thereby realizing playback display of the transparent background video file.
< Structure III >
The present invention also provides a computer-readable recording medium on which a computer program is recorded, wherein the computer program causes a cloud server, communicatively connected with at least one first user terminal, held by a first user, which stores a video file, and with at least one second user terminal, held by a second user, which performs playback display based on a recording medium and has a camera and a playback superimposition part, to play and display, by means of a display image on the recording medium, the video file that corresponds to the display image and contains a display subject, by executing the following steps: a background storage step of storing various background images for setting off the display subject and the application scene category corresponding to each background image; a background retrieval acquisition step of, once the server receives from the first user terminal the video file selected by the first user and the designated application scene information, retrieving at least one corresponding background image according to the application scene information; a background image sending and receiving step of sending all obtained background images to the first user terminal and receiving from the first user terminal the background image designated by the first user as the designated background image; a background removal step of removing the original background of the display subject in the received video file to obtain a transparent background video file; an image selection step of selecting one frame from the transparent background video file as a representative image; a subject feature extraction step of extracting various subject features of the display subject from the representative image; a video storage step of storing at least the transparent background video file and the subject features in association with each other; a display image synthesis step of synthesizing the representative image and the designated background image to obtain the display image; a display image sending step of sending the display image to the first user terminal so that the first user presents the display image on a recording medium according to the application scene; a current display image receiving step of, once the camera of the second user terminal is aimed at the display image on the recording medium, receiving the display image read by the camera from the second user terminal as the current display image; a current subject feature extraction step of extracting the current subject features of the display subject in the current display image; a video retrieval acquisition step of retrieving the corresponding transparent background video file according to the current subject features; and a transparent background video file sending step of sending the obtained transparent background video file to the second user terminal, so that the playback superimposition part in the second user terminal superimposes the display subject of each frame of the transparent background video file onto the display subject in the current display image, according to the relative position between the display subject and the designated background image in the current display image, thereby realizing playback display of the transparent background video file.
< Structure IV >
The present invention also provides an image reality display system for playing and displaying, by means of a display image on a recording medium, a video file that corresponds to the display image and contains a display subject, the system comprising: at least one first user terminal, held by a first user, which stores the video file; at least one second user terminal, held by a second user, for playback display based on the recording medium; and a cloud server communicatively connected with the first user terminal and the second user terminal respectively. The first user terminal has a first-side screen storage part, a first-side input display part, and a first-side communication part; the second user terminal has a camera, a playback superimposition part, and a second-side communication part; and the cloud server has a background storage part, a background retrieval acquisition part, a background removal part, an image selection part, a subject feature extraction part, a video storage part, a display image synthesis part, a video retrieval acquisition part, and a cloud-side communication part. The background storage part stores various background images for setting off the display subject and the application scene category corresponding to each background image, and the first-side screen storage part stores a video file upload screen and a background selection screen. The first-side input display part displays the video file upload screen so that the first user can select the video file to be uploaded and designate the application scene required for the display subject in that video file. Once the first user completes the selection and designation, the first-side communication part sends the selected video file and the designated application scene information to the cloud server. Once the cloud-side communication part receives the video file and the application scene information, the background retrieval acquisition part searches the background storage part according to the application scene information to obtain at least one corresponding background image, and the cloud-side communication part sends all obtained background images to the first user terminal. The first-side input display part then displays the background selection screen, showing all received background images, so that the first user can select one background image as the designated background image, and the first-side communication part sends the designated background image to the cloud server. Once the cloud-side communication part receives the designated background image, the background removal part removes the original background of the display subject in the received video file to obtain a transparent background video file, the image selection part selects one frame from the transparent background video file as a representative image, the subject feature extraction part extracts various subject features of the display subject from the representative image, the video storage part stores at least the transparent background video file and the subject features in association with each other, the display image synthesis part synthesizes the representative image and the designated background image to obtain the display image, and the cloud-side communication part sends the display image to the first user terminal, whereupon the first user presents the display image on a recording medium according to the application scene. Once the camera of the second user terminal is aimed at the display image on the recording medium, the second-side communication part sends the display image read by the camera to the cloud server as the current display image, the subject feature extraction part extracts the current subject features of the display subject in the current display image, the video retrieval acquisition part retrieves from the video storage part, according to the current subject features, the transparent background video file stored in association with subject features that match the current subject features, and the cloud-side communication part sends the obtained transparent background video file to the second user terminal. Once the second-side communication part receives the transparent background video file, the playback superimposition part superimposes the display subject of each frame of the transparent background video file onto the display subject in the current display image, according to the relative position between the display subject and the designated background image in the current display image, thereby realizing playback display of the transparent background video file.
Action and Effect of the invention
According to the image reality display server provided by the present invention, the background removal part removes the background from the video file to be displayed, the image selection part extracts a representative image from the resulting transparent background video file, the subject feature extraction part extracts the subject features of the display subject from that representative image, and the video storage part stores the subject features in association with the transparent background video file. Consequently, when playback display is performed at the second user terminal on the basis of the display image, the cloud server can obtain the current subject features directly from the current display image and then obtain the corresponding transparent background video file. The second user therefore never needs to enter any code, playback display becomes faster and more convenient, and the display effect is greatly improved. Moreover, because the subject features correspond one-to-one to display subjects, a video that does not correspond to the display subject will not be played, and loss of the display and communication effect is avoided.
Furthermore, because the background removal part removes the background from the video file and the playback superimposition part superimposes the transparent background video file according to the relative position in the current display image read by the camera, the display subject in the video file is overlaid, during playback display, onto the picture of the recording medium read by the camera. The second user thus sees the real scene read by the camera combined with a dynamic display subject that does not exist in reality, which makes the display process more vivid and interesting and further improves the display effect.
Drawings
FIG. 1 is a block diagram of an image reality presentation system according to an embodiment of the present invention;
FIG. 2 is a block diagram of a first user terminal according to an embodiment of the present invention;
FIG. 3 is a block diagram of a second user terminal according to an embodiment of the present invention;
FIG. 4 is a block diagram of a playback superimposition unit according to an embodiment of the present invention;
FIG. 5 is a block diagram of a cloud server according to an embodiment of the present invention;
FIG. 6 is a block diagram of a background removal unit according to an embodiment of the present invention;
FIG. 7 is a schematic diagram showing image synthesis and recording medium fabrication according to an embodiment of the present invention;
FIG. 8 is a flow chart illustrating the process of the present invention;
FIG. 9 is a background removal flow diagram according to an embodiment of the present invention;
FIG. 10 is a flow chart of playing display according to an embodiment of the present invention.
Detailed Description
Hereinafter, the image reality display system, the image reality display method, and the recording medium of the present invention will be described in detail with reference to the drawings.
As a first aspect, the present invention provides an image reality display server for playing and displaying, by means of a display image on a recording medium, a video file that corresponds to the display image and contains a display subject. The server is communicatively connected with at least one first user terminal, held by a first user, which stores the video file, and with at least one second user terminal, held by a second user, which performs playback display based on the recording medium and has a camera and a playback superimposition part. The server comprises a background storage part, a background retrieval acquisition part, a background removal part, an image selection part, a subject feature extraction part, a video storage part, a display image synthesis part, a video retrieval acquisition part, and a server communication part. The background storage part stores various background images for setting off the display subject and the application scene category corresponding to each background image. The server communication part receives, from the first user terminal, the video file selected by the first user and the designated application scene information; the background retrieval acquisition part searches the background storage part according to the application scene information to obtain at least one corresponding background image; and the server communication part sends all obtained background images to the first user terminal and receives from the first user terminal the background image designated by the first user as the designated background image. The background removal part removes the original background of the display subject in the received video file to obtain a transparent background video file; the image selection part selects one frame from the transparent background video file as a representative image; the subject feature extraction part extracts various subject features of the display subject from the representative image; the video storage part stores at least the transparent background video file and the subject features in association with each other; the display image synthesis part synthesizes the representative image and the designated background image to obtain the display image; and the server communication part sends the display image to the first user terminal, whereupon the first user presents the display image on a recording medium according to the application scene. Once the camera of the second user terminal is aimed at the display image on the recording medium, the server communication part receives the display image read by the camera from the second user terminal as the current display image, the subject feature extraction part extracts the current subject features of the display subject in the current display image, the video retrieval acquisition part retrieves from the video storage part, according to the current subject features, the transparent background video file stored in association with subject features that match the current subject features, and the server communication part sends the obtained transparent background video file to the second user terminal, so that the playback superimposition part in the second user terminal superimposes the display subject of each frame of the transparent background video file onto the display subject in the current display image, according to the relative position between the display subject and the designated background image in the current display image, thereby realizing playback display of the transparent background video file.
In the first aspect, a technical feature may be provided in which the background of the display subject in the video file is a monochrome background of a predetermined background color, and the background removal part includes: a reference frame extraction unit that sequentially extracts image frames from the video file as reference frames at a predetermined time interval t; a frame conversion unit that sequentially converts each reference frame into HSV space to form a first converted reference frame in which every pixel is represented by HSV attribute values; a pixel judgment unit that sequentially judges whether the HSV attribute values of each pixel in the first converted reference frame fall within the range of the predetermined background color; a pixel conversion unit that, according to the judgment result of the pixel judgment unit, converts the pixels of the first converted reference frame whose HSV attribute values fall within the range of the predetermined background color to black and converts the pixels whose HSV attribute values do not fall within that range to white, thereby obtaining a second converted reference frame; a to-be-processed frame extraction unit that extracts the image frames of the video file one by one to obtain all image frames contained in the video file as frames to be processed; a pixel removal unit that, according to the positions of the white pixels in the second converted reference frame, removes the pixels at the corresponding positions in the frames to be processed whose time points differ from that of the second converted reference frame by no more than t/2, thereby obtaining a plurality of transparentized frames; and a transparent video synthesis unit that splices the transparentized frames in time order and synthesizes the audio track of the video file with the spliced transparentized frames to obtain the transparent background video file.
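The reference-frame variant above can be pictured with the following minimal sketch: a reference frame is sampled every t seconds, a background mask is built from the HSV range of the predetermined background color, and that mask is applied to the frames whose time points lie within t/2 of the reference frame. OpenCV and NumPy, the HSV bounds, and all names are assumptions made for illustration; for simplicity the sketch directly makes the background-colored pixels transparent and omits the audio re-synthesis step.

```python
# Illustrative sketch of reference-frame based background removal (not the
# patent's exact implementation; HSV bounds and names are assumptions).
import cv2
import numpy as np

LOWER_BG = np.array([35, 60, 60])     # assumed lower HSV bound of the background color
UPPER_BG = np.array([85, 255, 255])   # assumed upper HSV bound of the background color

def remove_background_with_reference_frames(frames, fps, t=0.5):
    """frames: list of BGR frames; returns BGRA frames with the background made transparent."""
    n = len(frames)
    step = max(1, int(round(t * fps)))            # one reference frame every t seconds
    half_window = step / 2.0                      # each reference mask covers frames within t/2
    out = [None] * n
    for ref_idx in range(0, n, step):
        hsv = cv2.cvtColor(frames[ref_idx], cv2.COLOR_BGR2HSV)   # frame conversion unit
        bg_mask = cv2.inRange(hsv, LOWER_BG, UPPER_BG)            # pixel judgment unit
        lo = max(0, int(ref_idx - half_window))
        hi = min(n, int(ref_idx + half_window) + 1)
        for i in range(lo, hi):                   # apply the reference mask to nearby frames
            bgra = cv2.cvtColor(frames[i], cv2.COLOR_BGR2BGRA)
            bgra[bg_mask > 0, 3] = 0              # make background-colored pixels transparent
            out[i] = bgra
    # Frames falling outside every reference window are simply dropped in this sketch.
    return [f for f in out if f is not None]
```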
In the first aspect, a technical feature may alternatively be provided in which the background of the display subject in the video file is a monochrome background of a predetermined background color, and the background removal part includes: a to-be-processed frame extraction unit that sequentially extracts image frames from the video file as frames to be processed; a frame conversion unit that sequentially converts each frame to be processed into HSV space to form a converted frame in which every pixel is represented by HSV attribute values; a pixel judgment unit that sequentially judges whether the HSV attribute values of each pixel in the converted frame fall within the range of the predetermined background color; a pixel removal unit that removes all pixels judged by the pixel judgment unit to fall within the range of the predetermined background color, thereby obtaining a transparentized frame; and a transparent video synthesis unit that splices the transparentized frames in time order and synthesizes the audio track of the video file with the spliced transparentized frames to obtain the transparent background video file.
In an embodiment having either of the foregoing two technical features, a further technical feature may be provided in which the predetermined background color is green.
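For the per-frame variant with a green background, a single frame can be transparentized as in the sketch below. The HSV bounds for "green" are typical green-screen values chosen for illustration only, not values specified in the patent.

```python
# Illustrative per-frame green-background removal (assumed HSV range).
import cv2
import numpy as np

LOWER_GREEN = np.array([35, 60, 60])
UPPER_GREEN = np.array([85, 255, 255])

def transparentize_frame(frame_bgr):
    """Remove the green background from one frame, returning a BGRA frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)         # frame conversion unit
    bg_mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)      # pixel judgment unit
    bgra = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2BGRA)
    bgra[bg_mask > 0, 3] = 0                                  # pixel removal unit
    return bgra
```

The transparentized frames would then be spliced in time order and re-combined with the original audio track by the transparent video synthesis unit, for example using any video tool that supports an alpha channel.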
In the first aspect, a technical feature may further be provided wherein the application scene is an invitation, a greeting card, or a resume; the display subject is the inviter corresponding to the invitation, the card sender corresponding to the greeting card, or the resume sender corresponding to the resume; and the subject features are face recognition features of the inviter, the card sender, or the resume sender.
In the first aspect, a technical feature may further be provided wherein the application scene is a design work presentation, the display subject is the corresponding design work, and the subject features are shape and color recognition features of the design work.
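Where the subject features are face recognition features, the extraction and matching could look like the following sketch, which uses the open-source face_recognition library purely as one possible implementation; the library choice, tolerance value, and record layout are assumptions and are not part of the patent.

```python
# Illustrative face-feature extraction and matching (face_recognition is one
# possible library; all names and the tolerance are assumptions).
import face_recognition

def extract_subject_features(representative_image_path):
    """Return a 128-d face descriptor for the display subject, or None."""
    image = face_recognition.load_image_file(representative_image_path)
    encodings = face_recognition.face_encodings(image)
    return encodings[0] if encodings else None

def find_matching_video(current_features, stored_records, tolerance=0.6):
    """stored_records: list of (subject_features, transparent_video_path) pairs."""
    for features, video_path in stored_records:
        match = face_recognition.compare_faces([features], current_features,
                                               tolerance=tolerance)[0]
        if match:
            return video_path
    return None
```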
As a second aspect, the present invention provides an image reality display method for playing and displaying, by means of a display image on a recording medium, a video file that corresponds to the display image and contains a display subject. The method is performed by a cloud server that is communicatively connected with at least one first user terminal, held by a first user, which stores the video file, and with at least one second user terminal, held by a second user, which performs playback display based on the recording medium and has a camera and a playback superimposition part. The method comprises the following steps: a background storage step of storing various background images for setting off the display subject and the application scene category corresponding to each background image; a background retrieval acquisition step of, once the server receives from the first user terminal the video file selected by the first user and the designated application scene information, retrieving at least one corresponding background image according to the application scene information; a background image sending and receiving step of sending all obtained background images to the first user terminal and receiving from the first user terminal the background image designated by the first user as the designated background image; a background removal step of removing the original background of the display subject in the received video file to obtain a transparent background video file; an image selection step of selecting one frame from the transparent background video file as a representative image; a subject feature extraction step of extracting various subject features of the display subject from the representative image; a video storage step of storing at least the transparent background video file and the subject features in association with each other; a display image synthesis step of synthesizing the representative image and the designated background image to obtain the display image; a display image sending step of sending the display image to the first user terminal so that the first user presents the display image on a recording medium according to the application scene; a current display image receiving step of, once the camera of the second user terminal is aimed at the display image on the recording medium, receiving the display image read by the camera from the second user terminal as the current display image; a current subject feature extraction step of extracting the current subject features of the display subject in the current display image; a video retrieval acquisition step of retrieving the corresponding transparent background video file according to the current subject features; and a transparent background video file sending step of sending the obtained transparent background video file to the second user terminal, so that the playback superimposition part in the second user terminal superimposes the display subject of each frame of the transparent background video file onto the display subject in the current display image, according to the relative position between the display subject and the designated background image in the current display image, thereby realizing playback display of the transparent background video file.
As a third aspect, the present invention provides a computer-readable recording medium on which a computer program is recorded, wherein the computer program causes a cloud server, communicatively connected with at least one first user terminal, held by a first user, which stores a video file, and with at least one second user terminal, held by a second user, which performs playback display based on a recording medium and has a camera and a playback superimposition part, to play and display, by means of a display image on the recording medium, the video file that corresponds to the display image and contains a display subject, by executing the following steps: a background storage step of storing various background images for setting off the display subject and the application scene category corresponding to each background image; a background retrieval acquisition step of, once the server receives from the first user terminal the video file selected by the first user and the designated application scene information, retrieving at least one corresponding background image according to the application scene information; a background image sending and receiving step of sending all obtained background images to the first user terminal and receiving from the first user terminal the background image designated by the first user as the designated background image; a background removal step of removing the original background of the display subject in the received video file to obtain a transparent background video file; an image selection step of selecting one frame from the transparent background video file as a representative image; a subject feature extraction step of extracting various subject features of the display subject from the representative image; a video storage step of storing at least the transparent background video file and the subject features in association with each other; a display image synthesis step of synthesizing the representative image and the designated background image to obtain the display image; a display image sending step of sending the display image to the first user terminal so that the first user presents the display image on a recording medium according to the application scene; a current display image receiving step of, once the camera of the second user terminal is aimed at the display image on the recording medium, receiving the display image read by the camera from the second user terminal as the current display image; a current subject feature extraction step of extracting the current subject features of the display subject in the current display image; a video retrieval acquisition step of retrieving the corresponding transparent background video file according to the current subject features; and a transparent background video file sending step of sending the obtained transparent background video file to the second user terminal, so that the playback superimposition part in the second user terminal superimposes the display subject of each frame of the transparent background video file onto the display subject in the current display image, according to the relative position between the display subject and the designated background image in the current display image, thereby realizing playback display of the transparent background video file.
As a fourth aspect, the present invention provides an image reality display system for playing and displaying, by means of a display image on a recording medium, a video file that corresponds to the display image and contains a display subject, the system comprising: at least one first user terminal, held by a first user, which stores the video file; at least one second user terminal, held by a second user, for playback display based on the recording medium; and a cloud server communicatively connected with the first user terminal and the second user terminal respectively. The first user terminal has a first-side screen storage part, a first-side input display part, and a first-side communication part; the second user terminal has a camera, a playback superimposition part, and a second-side communication part; and the cloud server has a background storage part, a background retrieval acquisition part, a background removal part, an image selection part, a subject feature extraction part, a video storage part, a display image synthesis part, a video retrieval acquisition part, and a cloud-side communication part. The background storage part stores various background images for setting off the display subject and the application scene category corresponding to each background image, and the first-side screen storage part stores a video file upload screen and a background selection screen. The first-side input display part displays the video file upload screen so that the first user can select the video file to be uploaded and designate the application scene required for the display subject in that video file. Once the first user completes the selection and designation, the first-side communication part sends the selected video file and the designated application scene information to the cloud server. Once the cloud-side communication part receives the video file and the application scene information, the background retrieval acquisition part searches the background storage part according to the application scene information to obtain at least one corresponding background image, and the cloud-side communication part sends all obtained background images to the first user terminal. The first-side input display part then displays the background selection screen, showing all received background images, so that the first user can select one background image as the designated background image, and the first-side communication part sends the designated background image to the cloud server. Once the cloud-side communication part receives the designated background image, the background removal part removes the original background of the display subject in the received video file to obtain a transparent background video file, the image selection part selects one frame from the transparent background video file as a representative image, the subject feature extraction part extracts various subject features of the display subject from the representative image, the video storage part stores at least the transparent background video file and the subject features in association with each other, the display image synthesis part synthesizes the representative image and the designated background image to obtain the display image, and the cloud-side communication part sends the display image to the first user terminal, whereupon the first user presents the display image on a recording medium according to the application scene. Once the camera of the second user terminal is aimed at the display image on the recording medium, the second-side communication part sends the display image read by the camera to the cloud server as the current display image, the subject feature extraction part extracts the current subject features of the display subject in the current display image, the video retrieval acquisition part retrieves from the video storage part, according to the current subject features, the transparent background video file stored in association with subject features that match the current subject features, and the cloud-side communication part sends the obtained transparent background video file to the second user terminal. Once the second-side communication part receives the transparent background video file, the playback superimposition part superimposes the display subject of each frame of the transparent background video file onto the display subject in the current display image, according to the relative position between the display subject and the designated background image in the current display image, thereby realizing playback display of the transparent background video file.
In the fourth aspect, a technical feature may be provided wherein the display image synthesis part synthesizes the display image according to a predetermined subject-background positional relationship, the second user terminal further has a playback display part, and the playback superimposition part includes: a background image recognition and positioning unit that recognizes and positions the designated background image in the current display image to obtain position information of the designated background image; a relative position calculation unit that calculates position information of the display subject from the position information of the designated background image on the basis of the subject-background positional relationship; a playback region division unit that divides out, as the playback region, the region where the display subject is located according to the position information of the display subject; a playback frame superimposition unit that sequentially superimposes each image frame of the transparent background video file onto the current display image at the position of the playback region to obtain playback frames; a playback image formation unit that sequentially assembles the playback frames into a continuous playback image in time order; and an audio synchronization unit that synchronizes the audio of the transparent background video file in time order, so that the continuous playback image and the audio are displayed synchronously on the playback display part.
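The playback frame superimposition unit described above essentially alpha-blends each transparent video frame into the playback region of the current display image. The following sketch shows that blending step under the assumption of OpenCV BGRA frames; the region tuple stands in for the output of the background image recognition and positioning unit and the relative position calculation unit, and all names are illustrative.

```python
# Illustrative alpha-blending of one transparent (BGRA) video frame into the
# playback region of the current display image.
import cv2
import numpy as np

def superimpose_frame(current_display_bgr, video_frame_bgra, play_region):
    """play_region: (x, y, w, h) derived from the located designated background
    image and the predetermined subject-background positional relationship."""
    x, y, w, h = play_region
    frame = cv2.resize(video_frame_bgra, (w, h))
    alpha = frame[:, :, 3:4].astype(np.float32) / 255.0        # 0 where background was removed
    roi = current_display_bgr[y:y+h, x:x+w].astype(np.float32)
    blended = alpha * frame[:, :, :3].astype(np.float32) + (1.0 - alpha) * roi
    out = current_display_bgr.copy()
    out[y:y+h, x:x+w] = blended.astype(np.uint8)                # one playback frame
    return out
```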
The following description of the embodiments of the present invention will be made with reference to the accompanying drawings.
< Embodiment >
Fig. 1 is a block diagram of an image reality presentation system according to an embodiment of the present invention.
As shown in fig. 1, the image reality presentation system 100 includes at least one first user terminal 101, at least one second user terminal 102, and a cloud server 103.
The cloud server 103 serves as the image reality display server and is communicatively connected with the first user terminals 101 and the second user terminals 102 through a communication network 104. In this embodiment there are a plurality of first user terminals 101 and a plurality of second user terminals 102; each has its own terminal identification information, and information transmitted over the communication network 104 is routed on the basis of that terminal identification information.
The first user terminal 101 is held by a first user who needs to present a display subject by video. That is, the first user wants to present the display subject as a video and has prepared a video file containing the display subject. The video file may be recorded with a video camera, a smartphone with a recording function, or a smart camera, and is stored in the first user terminal 101. In addition, the first user produces a corresponding recording medium from the display image returned by the cloud server 103 and sends it to the second user for display; for example, the display image is printed on a paper card and the card is mailed to the second user, so that after receiving the card the second user can use the display image to play and display the video file.
Fig. 2 is a block diagram of a first user terminal according to an embodiment of the present invention.
As shown in fig. 2, the first user terminal 101 includes a first side screen storage unit 11, a first side input display unit 12, a first side communication unit 13, and a first side control unit 14.
The first side communication unit 13 is configured to perform data information exchange between each component in the first user terminal 101 and the cloud server 103, and the first side control unit 14 is configured to control operations of each component in the first user terminal 101. In this embodiment, the first user terminal 101 further has a first side temporary storage unit 15, which is used for temporarily storing temporary data generated during the operation of each component of the first user terminal 101.
The first side screen storage unit 11 stores the human-computer interaction screens through which the first user interacts with the first user terminal 101, including a video file upload screen and a background selection screen.
The video file upload screen has a video file display area, an application scene designation area, and a determination button. The video file display area lets the first user view the video files stored in the first user terminal 101 and select the video file to be uploaded; the application scene designation area displays the various application scenes so that the first user can designate the required application scene; and the determination button lets the first user confirm the selected video file and the designated application scene.
In this embodiment, the application scenes are all scenes associated with presenting the first user's image, such as an invitation, a greeting card, or a resume that needs to present the image of the first user as the sender. The corresponding video file is a video that the first user records against a monochrome background according to the display requirement, with the display subject as the main content (for example, in the invitation scene, the display subject may record an invitation video that includes the invitation speech).
The background selection screen is used for displaying the background image received by the first user terminal 101 from the cloud server 103 after the first user completes the above selection and designation, so that the user can further select the required background image.
The first side input display part 12 displays the screens stored in the first side screen storage unit 11 so that the first user can carry out the human-computer interaction.
The second user terminal 102 is held by a second user who needs to play a presentation based on the recording medium in order to view the content that the first user wants to present. That is, the second user wishes to view the played presentation and has received from the first user a recording medium bearing the presentation image.
Fig. 3 is a block diagram of a second user terminal according to an embodiment of the present invention.
As shown in fig. 3, the second user terminal 102 has a camera 21, a playback superimposition unit 22, a playback presentation unit 23, a second-side communication unit 24, and a second-side control unit 25.
The second-side communication unit 24 is configured to perform data information exchange between each component in the second user terminal 102 and the cloud server 103, and the second-side control unit 25 is configured to control operations of each component in the second user terminal 102. In this embodiment, the second user terminal 102 further has a second side temporary storage unit 26 for temporarily storing temporary data generated during the operation of each component of the second user terminal 102.
The camera 21 is used for image reading. Specifically, in the present embodiment, the camera 21 is mainly used for reading an image related to a recording medium to obtain a current presentation image.
The playing and superimposing unit 22 is configured to superimpose, upon receiving the transparent background video file sent by the cloud server 103, the display main body of each frame of the transparent background video file onto the display main body of the current display image, thereby playing and displaying the transparent background video file.
Fig. 4 is a block diagram of a playback superimposition unit according to an embodiment of the present invention.
As shown in fig. 4, the playback superimposition portion 22 includes a background image recognition positioning unit 221, a relative position calculation unit 222, a playback region division unit 223, a playback frame superimposition unit 224, a playback image formation unit 225, an audio synchronization unit 226, and a playback superimposition control unit 227.
The playback superimposition control unit 227 controls the operations of the respective components in the playback superimposition unit 22.
The background image identification and positioning unit 221 is configured to identify and position the specified background image in the current display image, so as to obtain position information of the specified background image. That is, the background image recognition and positioning unit 221 recognizes the portion of the current display image corresponding to the specified background image on the basis of shape features, and positions it according to the portions having positioning features. For example, the specified background image contains a background pattern composed of a plurality of figures with specific shapes and colors; the background image recognition and positioning unit 221 recognizes the background pattern as a whole, and recognition succeeds once the whole background pattern enters the field of view of the camera 21; after successful recognition, the unit further locates the figures with the specific shapes, thereby obtaining the position information of the specified background image within the image read by the camera 21. In this embodiment, the background image recognition and positioning unit 221 performs recognition and positioning in real time.
The relative position calculating unit 222 is configured to calculate, in real time, position information of the display subject based on the subject background positional relationship according to the position information of the specified background image. In this embodiment, a specific subject background position relationship is adopted in the generation process of the display image, so that the display subject in the display image is located in a specific position range, and the relative position calculation unit 222 performs calculation according to the positioning result of the background image identification and positioning unit 221 based on the subject background position relationship adopted in the generation process, thereby obtaining the position range where the display subject is located in the image picture actually read by the camera 21.
The playing area dividing unit 223 is configured to divide the area where the display main body is located into playing areas according to the position information of the display main body calculated by the relative position calculating unit 222. In this embodiment, the dividing process of the playing area dividing unit 223 is also performed in real time, so that the playing area can move along with the movement of the camera 21.
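By way of a non-limiting illustration only, the recognition, positioning, and play-region division described above could be sketched as follows in Python with OpenCV; the feature detector, the matching thresholds, and the idea of expressing the subject background positional relationship as a rectangle in template coordinates are assumptions of this sketch, not requirements of the embodiment.

import cv2
import numpy as np

def locate_play_region(template_bgr, camera_bgr, subject_rect):
    # subject_rect: (x, y, w, h) of the display subject in template coordinates.
    gray_t = cv2.cvtColor(template_bgr, cv2.COLOR_BGR2GRAY)
    gray_c = cv2.cvtColor(camera_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(gray_t, None)
    kp2, des2 = orb.detectAndCompute(gray_c, None)
    if des1 is None or des2 is None:
        return None                                        # recognition failed
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]
    if len(matches) < 10:
        return None                                        # background pattern not in view
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None                                        # positioning failed
    x, y, w, h = subject_rect
    corners = np.float32([[x, y], [x + w, y], [x + w, y + h], [x, y + h]]).reshape(-1, 1, 2)
    # The projected quadrilateral is the play region in the camera image.
    return cv2.perspectiveTransform(corners, H).reshape(-1, 2)

Running such a function on every frame read by the camera 21 yields a play region that follows the movement of the camera, which is the behaviour described for the playing area dividing unit 223.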
The playing frame overlapping unit 224 is configured to sequentially overlap each image frame in the transparent background video file onto the current display image according to the position of the playing area, so as to obtain each playing frame.
The playback image forming unit 225 is configured to sequentially form the playback frames into a continuous playback image in chronological order.
The audio synchronization unit 226 synchronizes the audio in the transparent background video file in chronological order, so that the continuous playing images are presented in synchronization with the audio. That is, the audio synchronization unit 226 aligns the audio with each playing frame according to the time relationship between the audio and each frame of the transparent background video file, so that every part of the audio corresponds in time to its playing frame and the continuous playing images and the audio are synchronized.
The playing and displaying part 23 plays the images superimposed by the playing and superimposing part 22; it acquires the continuous playing images and the audio from the playing image forming unit 225 and the audio synchronization unit 226 in real time and presents them synchronously in chronological order. In this embodiment, the playing and displaying part 23 also previews the image read by the camera 21 while the camera 21 is reading, so that the second user can move the camera 21 according to the preview and aim it at the presentation image on the recording medium.
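As a minimal sketch only, the frame-by-frame superimposition could be expressed as below in Python with OpenCV; the fixed rectangular play region, the frame rate, and the omission of audio playback are simplifying assumptions (in the embodiment the play region is recomputed in real time and the audio is handled by the audio synchronization unit 226), and rgba_frames stands for the decoded frames of the transparent background video file.

import cv2
import numpy as np

def overlay_rgba(camera_bgr, rgba_frame, region):
    # Blend one RGBA frame of the transparent background video into the
    # play region (x, y, w, h) of the current camera image.
    x, y, w, h = region
    fg = cv2.resize(rgba_frame, (w, h))
    alpha = fg[:, :, 3:4].astype(np.float32) / 255.0         # 0 where the background was removed
    roi = camera_bgr[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * fg[:, :, :3].astype(np.float32) + (1.0 - alpha) * roi
    camera_bgr[y:y + h, x:x + w] = blended.astype(np.uint8)
    return camera_bgr

def play_show(rgba_frames, fps=25, region=(100, 100, 320, 240)):
    # The fixed region stands in for the play region computed in real time;
    # audio playback and synchronization are omitted in this sketch.
    cap = cv2.VideoCapture(0)                                 # camera preview
    for rgba in rgba_frames:
        ok, camera = cap.read()
        if not ok:
            break
        cv2.imshow("play", overlay_rgba(camera, rgba, region))
        if cv2.waitKey(int(1000 / fps)) & 0xFF == 27:         # Esc stops the preview
            break
    cap.release()
    cv2.destroyAllWindows()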
Fig. 5 is a block diagram of a cloud server according to an embodiment of the present invention.
As shown in fig. 5, the cloud server 103 includes a background storage unit 31, a background search acquisition unit 32, a background removal unit 33, an image selection unit 34, a main body feature extraction unit 35, a video storage unit 36, a presentation image synthesis unit 37, a video search acquisition unit 38, a cloud-side communication unit 39, and a cloud-side control unit 40.
The cloud-side communication section 39 serves as a server communication section for performing data information exchange between each component in the cloud server 103 and the first user terminal 101 and the second user terminal 102; the cloud-side control unit 40 is configured to control operations of the respective components in the cloud server 103. In this embodiment, the cloud server 103 further has a cloud side temporary storage unit 41, which is used to temporarily store temporary data generated during the working process of each component of the cloud server 103.
The background storage unit 31 stores various background images for setting off the display main body and application scene types corresponding to the background images. In this embodiment, the application scene types include an invitation card, a greeting card, a resume, and the like, and the corresponding background images are a plurality of invitation card templates, greeting card templates, resume templates, and the like.
The background search acquisition unit 32 searches the background storage unit 31 based on the specified application scene information transmitted from the first user terminal 101, and acquires all corresponding background images.
The background removing unit 33 is configured to remove the original background of the display subject in the video file sent from the first user terminal 101 after receiving the video file, so as to obtain a transparent background video file.
Fig. 6 is a block diagram of a background removal unit according to an embodiment of the present invention.
As shown in fig. 6, the background removal section 33 of the present embodiment includes a reference frame extraction unit 331, a frame conversion unit 332, a pixel determination unit 333, a pixel conversion unit 334, a frame to be processed extraction unit 335, a pixel point removal unit 336, a transparent video composition unit 337, a removal buffer unit 338, and a removal control unit 339.
The removal control unit 339 is configured to control operations of the components in the background removal unit 33, and the removal temporary storage unit 338 is configured to temporarily store temporary data generated during operations of the components.
The reference frame extracting unit 331 is configured to sequentially extract image frames from the video file as reference frames at a predetermined time interval t. The time interval t can be preset according to the required fineness of video processing and the duration of each frame in the video file. For example, if a finer processing effect is required, t is set to the duration of 3 to 8 frames so that one reference frame is extracted out of every 3 to 8 frames; if such fineness is not needed, t can be set to the duration of 10 or more frames so that one reference frame is extracted out of every 10 or more frames.
The frame conversion unit 332 is configured to sequentially convert the reference frames into HSV spaces, so that each reference frame forms a first conversion reference frame with an HSV attribute value indicating each pixel therein.
The pixel determination unit 333 is configured to sequentially determine whether HSV attribute values of the respective pixel points in the first conversion reference frame are within a predetermined background color range.
The pixel converting unit 334 is configured to sequentially convert, according to the determination result of the pixel determining unit 333, the pixel points of the first conversion reference frame whose HSV attribute values fall within the predetermined background color range into white, and the pixel points whose HSV attribute values fall outside the predetermined background color range into black, so as to obtain a second reference conversion frame.
The frame to be processed extracting unit 335 is configured to extract image frames in the video file one by one to obtain all image frames included in the video file as frames to be processed.
The pixel removing unit 336 sequentially removes all the pixels corresponding to the white pixel positions in the plurality of frames to be processed, which have the time point difference with the second reference conversion frame within the t/2 range, according to the positions of the white pixels in the second reference conversion frame, so as to obtain a plurality of transparentized frames corresponding to the frames to be processed respectively. That is, the pixel removing unit 336 performs pixel removing processing on the to-be-processed frame corresponding to the second reference conversion frame and frames before and after the to-be-processed frame (to-be-processed frames in t/2 ranges before and after) according to the time interval t, and simultaneously performs corresponding removing according to the positions of white pixels included in the second reference conversion frame corresponding to the to-be-processed frames when removing the pixels, so that the pixels at the positions corresponding to the white pixels of the second reference conversion frame in the to-be-processed frames are all removed. Since the white pixel points in the second reference conversion frame correspond to the background color, the background color portion of each frame to be processed becomes transparent after the pixel point removal operation, i.e., becomes a transparentized frame.
The transparent video synthesizing unit 337 is configured to splice the transparentized frames according to a time sequence, and simultaneously perform video synthesis on the audio track in the video file and the spliced transparentized frames, thereby obtaining a transparent background video file.
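A minimal sketch of this background-removal flow is given below, assuming Python with OpenCV, a green predetermined background color, and an illustrative HSV range; re-multiplexing the audio track into the output, which the transparent video synthesizing unit 337 performs, is omitted here.

import cv2
import numpy as np

GREEN_LOW = np.array([35, 40, 40])        # assumed HSV range of the predetermined background color
GREEN_HIGH = np.array([85, 255, 255])

def remove_background(video_path, t=6):
    # Read all frames of the video file (the frames to be processed).
    cap = cv2.VideoCapture(video_path)
    frames = []
    ok, frame = cap.read()
    while ok:
        frames.append(frame)
        ok, frame = cap.read()
    cap.release()

    transparent = [None] * len(frames)
    for ref_idx in range(0, len(frames), t):                     # one reference frame every t frames
        hsv = cv2.cvtColor(frames[ref_idx], cv2.COLOR_BGR2HSV)   # first conversion reference frame
        bg_mask = cv2.inRange(hsv, GREEN_LOW, GREEN_HIGH)        # background pixels of the reference frame
        lo = max(0, ref_idx - t // 2)
        hi = min(len(frames), ref_idx + t // 2 + 1)
        for i in range(lo, hi):                                  # frames within t/2 of the reference frame
            rgba = cv2.cvtColor(frames[i], cv2.COLOR_BGR2BGRA)
            rgba[bg_mask == 255, 3] = 0                          # make the background pixels transparent
            transparent[i] = rgba
    return [f for f in transparent if f is not None]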
The image selecting section 34 is configured to select one frame of image from the transparent background video file as a representative image. In this embodiment, the image selection unit 34 directly selects the first frame in the transparent background video file as the representative image.
The main body feature extracting unit 35 is configured to extract various main body features of the display main body from the representative image, or extract corresponding current main body features from the current display image. For example, in this embodiment, the display subject is a first user, and the corresponding subject feature is a face recognition feature of the first user.
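The embodiment does not tie the main body feature to one particular algorithm; as one possible realization, the open-source face_recognition package could supply a 128-dimensional face encoding as the subject feature, as in the following sketch.

import face_recognition
import numpy as np

def extract_subject_feature(image_rgb: np.ndarray):
    # Return a face encoding for the display subject, or None if no face is found.
    # image_rgb is an 8-bit RGB image (the representative image or the current display image).
    encodings = face_recognition.face_encodings(image_rgb)
    return encodings[0] if encodings else None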
The video storage unit 36 is configured to store the main body features extracted by the main body feature extraction unit 35 in one-to-one correspondence with the transparent background video processed by the background removal unit 33.
The presentation image combining unit 37 is configured to combine the representative image selected by the image selecting unit 34 with the specified background image designated by the first user to obtain the presentation image. The presentation image synthesizing unit 37 superimposes the representative image onto the specified background image according to a predetermined subject background positional relationship, the opaque portion of the representative image being the image corresponding to the display subject. That is, when superimposed, the opaque portion of the representative image directly covers the corresponding position area of the specified background image.
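As a sketch under assumed parameters, with the offset and scale standing in for the predetermined subject background positional relationship, the synthesis could be expressed as follows; the opaque pixels of the representative image directly cover the specified background image, as described above, and the representative image is assumed to fit within the background at the given offset.

import cv2
import numpy as np

def compose_presentation_image(background_bgr, representative_rgba, offset=(120, 80), scale=1.0):
    # Paste the opaque part of the RGBA representative image onto the specified background image.
    fg = representative_rgba if scale == 1.0 else cv2.resize(representative_rgba, None, fx=scale, fy=scale)
    x, y = offset
    h, w = fg.shape[:2]
    mask = fg[:, :, 3] > 0                            # opaque pixels correspond to the display subject
    roi = background_bgr[y:y + h, x:x + w]
    roi[mask] = fg[:, :, :3][mask]                    # the subject directly covers the background
    return background_bgr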
FIG. 7 is a schematic diagram showing image synthesis and recording medium fabrication according to an embodiment of the present invention.
As shown in fig. 7, the presentation image composing section 37 superimposes the representative image A containing the presentation subject (shown as a bridegroom and bride who need to send a wedding invitation) on the specified background image B, thereby forming a presentation image C; the first user (i.e., the sender of the invitation) prints the presentation image C onto a card as required, thereby forming a recording medium D containing the presentation image.
The video retrieval acquiring unit 38 is configured to retrieve the video storage unit 36 according to the current subject feature obtained by extracting the current display image by the subject feature extracting unit 35, so as to obtain a corresponding transparent background video file. That is, a transparent background video file having a subject feature that is consistent with the current subject feature is obtained.
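A sketch of this retrieval step, assuming the subject feature is a face encoding compared by Euclidean distance against an illustrative match threshold, could look as follows.

import numpy as np

def retrieve_video(current_feature, stored_records, threshold=0.6):
    # stored_records: list of (subject_feature, transparent_video_path) pairs kept by the video storage unit.
    best, best_dist = None, threshold
    for feature, video_path in stored_records:
        dist = float(np.linalg.norm(feature - current_feature))
        if dist < best_dist:                          # closer than any match so far and below the threshold
            best, best_dist = video_path, dist
    return best                                       # None if no stored subject matches the current one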
The operation of the image realistic display system according to an embodiment of the present invention is described below with reference to the drawings.
FIG. 8 is a flowchart of the display process according to an embodiment of the present invention.
As shown in fig. 8, in this embodiment, the complete display process of the display main body includes a process in which the first user uploads a corresponding video file, the cloud server performs corresponding processing, and the second user receives the recording medium and plays and displays the recording medium.
In step S1, the first side input display unit 12 displays the video file upload screen to let the first user select the video file to be uploaded and designate the application scene required for the display main body in the video file; once the first user completes the selection and designation, the process proceeds to step S2.
In step S2, the first side communication part 13 transmits the uploaded video file and the application scene information to the cloud server 103, and then proceeds to step S3.
In step S3, the cloud-side communication unit 39 receives the video file and the application scene information from the first-side communication unit 13, and then proceeds to step S4.
In step S4, the background search acquisition unit 32 searches the background storage unit 31 based on the application scene information to acquire a corresponding background image, and then proceeds to step S5.
In step S5, the first side input display unit 12 displays the background selection screen, presents the received background images on it, and lets the first user select one background image as the designated background image; once the first user has made the selection, the process proceeds to step S6.
In step S6, the first side communication part 13 transmits the specified background image to the cloud server 103, and then proceeds to step S7.
In step S7, the cloud-side communication unit 39 receives the designated background image, and then proceeds to step S8.
In step S8, the background removing unit 33 removes the original background of the display subject in the received video file to obtain a transparent background video file, and then the process proceeds to step S9.
In step S9, the image selection section 34 selects one frame image from the transparent background video file as a representative image, and then proceeds to step S10.
In step S10, the body feature extraction unit 35 extracts various body features of the display body from the representative image, and the process proceeds to step S11.
In step S11, the presentation image synthesizer 37 synthesizes the representative image with the designated background image to obtain the presentation image, and the process proceeds to step S12.
In step S12, the cloud-side communication unit 39 transmits the presentation image to the first user terminal 101, and the first user presents the presentation image on a recording medium according to the application scene required by the first user, and the process proceeds to step S13.
In step S13, when the second user aims the camera 21 at the presentation image on the recording medium, the second side communication section 24 transmits the presentation image read by the camera 21 to the cloud server 103 as the current presentation image, and then proceeds to step S14.
In step S14, the main body feature extraction unit 35 extracts the current main body feature of the display main body in the current display image, and the process proceeds to step S15.
In step S15, the video search acquisition unit 38 searches the video storage unit 36 according to the current subject feature to obtain the corresponding transparent background video file, and then proceeds to step S16.
In step S16, the cloud-side communication unit 39 transmits the acquired transparent background video file to the second user terminal 102, and the process proceeds to step S17.
In step S17, the second side communication part 24 receives the transparent background video file, and the playing and superimposing part 22 superimposes the display main body in the transparent background video file onto the display main body of the currently displayed image, so that the playing and displaying part 23 plays and displays the superimposed image, and enters an end state after the playing and displaying are completed.
Fig. 9 is a flowchart of background removal according to an embodiment of the invention.
As shown in fig. 9, in step S8 of the present embodiment, the process of removing the background by the background removing unit 33 adopts a method of taking the reference frame at intervals and removing the pixel points according to the reference frame, and specifically includes the following steps.
In step S8-1, the reference frame extracting unit 331 sequentially extracts image frames as reference frames from the video file at predetermined time intervals t, and then proceeds to step S8-2.
In step S8-2, the frame conversion unit 332 sequentially converts the reference frames into HSV spaces to obtain respective first converted reference frames, and then proceeds to step S8-3.
In step S8-3, the pixel determination unit 333 sequentially determines whether the HSV attribute values of the respective pixels in the first conversion reference frame are within the range of the predetermined background color, and then proceeds to step S8-4.
In step S8-4, the pixel converting unit 334 converts the pixel points in the first conversion reference frame according to the determination result of the pixel determining unit 333 to obtain a second reference conversion frame, and then proceeds to step S8-5.
In step S8-5, the to-be-processed frame extracting unit 335 extracts the image frames in the video file one by one, obtains all the image frames included in the video file as to-be-processed frames, and then proceeds to step S8-6.
In step S8-6, the pixel removing unit 336 sequentially removes the pixels from the frames to be processed according to the second reference conversion frame to form the corresponding transparentized frames, and then proceeds to step S8-7.
In step S8-7, the transparent video composition unit 337 splices the transparentized frames in chronological order and performs video composition on the audio track in the video file and the spliced transparentized frames to obtain the transparent background video file, and then proceeds to step S9.
FIG. 10 is a flow chart of playing display according to an embodiment of the present invention.
In step S17 of this embodiment, the playing and superimposing unit 22 and the playing and displaying unit 23 play and display the transparent background video file by real-time recognition, real-time positioning, and superimposition, specifically through the following steps.
In step S17-1, the background image recognition and positioning unit 221 recognizes and positions the designated background image in the currently displayed image to obtain the position information of the designated background image, and then proceeds to step S17-2.
In step S17-2, the relative position calculating unit 222 calculates the position information of the display subject in real time based on the subject background positional relationship according to the position information of the specified background image, and then proceeds to step S17-3.
In step S17-3, the playing region dividing unit 223 divides the region where the display subject is located into playing regions according to the calculated position information of the display subject, and then proceeds to step S17-4.
In step S17-4, the play frame superimposing unit 224 superimposes each image frame in the transparent background video file onto the currently displayed image in sequence according to the position of the play area, so as to obtain each play frame, and then the process goes to step S17-5.
In step S17-5, the playback image forming unit 225 forms the playback frames into continuous playback images in chronological order, and then proceeds to step S17-6.
In step S17-6, the audio synchronizing unit 226 synchronizes the audio in the transparent background video file in chronological order to synchronize the continuously played images with the audio, and then proceeds to step S17-7.
In step S17-7, the playing and displaying part 23 plays the images superimposed by the playing and superimposing part 22 and the audio synchronized by the audio synchronizing unit 226 to realize playing and displaying, and enters an end state after the playing and displaying is completed.
Actions and effects of the embodiments
According to the image realistic display system provided by this embodiment, the background removing part removes the background from the video file to be displayed, the image selecting part extracts a representative image from the resulting transparent background video file, the main body feature extracting part extracts the main body features of the display main body in the representative image, and the video storage part stores those main body features in correspondence with the transparent background video file. During playback at the second user terminal, the cloud server can therefore obtain the current main body feature directly from the current display image and then retrieve the corresponding transparent background video file, so that the second user does not need to perform any code-entry operation; playback and display are more convenient, and the display effect is greatly improved. Meanwhile, since the main body features correspond one-to-one with the display main bodies, a video that does not correspond to its display main body is never presented, and the communication effect of the presentation is preserved.
Moreover, in this embodiment, because the background removing unit removes the background from the video file and the playing and superimposing unit and the playing and displaying unit superimpose, play, and display the transparent background video file in real time according to its relative position in the current display image read by the camera, the display main body in the video file is superimposed on the picture of the recording medium read by the camera during playback. The second user thus sees a scene in which the real scene read by the camera is combined with a dynamic display main body that does not exist in reality, which makes the presentation more vivid and interesting and further improves the display effect.
In this embodiment, the subject feature is a face recognition feature, which allows the image realistic display system to be applied to scenes associated with presenting a person, such as invitations, greeting cards, and resumes. Because face recognition features can be extracted and retrieved quickly and accurately, playback and display become faster and more accurate. Meanwhile, playback based on face recognition better matches users' image presentation requirements, providing a mode of transmitting image information that is more vivid and that is presented quickly and accurately.
In this embodiment, the background removing part removes the background from the video file by taking reference frames at intervals and removing pixel points according to those reference frames, and, when removing, also removes the background pixel points of the frames to be processed that lie within the range before and after the time point of each second reference conversion frame. This ensures the accuracy of the removal while reducing the image-processing workload as much as possible, so that background removal is completed quickly and finely.
The above embodiments are preferred examples of the present invention and are not intended to limit the scope of the present invention.
For example, in the above embodiment the subject features are face recognition features, so the display subject is a corresponding person. However, in the present invention the subject feature may instead be a shape and color recognition feature, so that the display subject is a corresponding design work (such as a mechanical design model, an artwork, etc.).
In an embodiment, the background removing part removes the background by taking the reference frame at intervals and removing the pixel points according to the reference frame. In the present invention, the background removing unit may also perform background removal by taking all image frames and performing pixel point removal, specifically: the frame to be processed extracting unit sequentially extracts image frames from the video file as frames to be processed; the frame conversion unit converts the frames to be processed into HSV spaces in sequence so as to form conversion frames with HSV attribute values representing all pixel points in the conversion frames; the pixel judging unit sequentially judges whether the HSV attribute value of each pixel point in the conversion frame is within the range of the preset background color; the pixel point removing unit removes all the pixel points which are judged to be in the range of the preset background color by the pixel judging unit, so that a transparent frame is obtained; the transparent video synthesis unit splices the transparent frames according to the time sequence, and simultaneously carries out video synthesis on the audio track in the video file and the spliced transparent frames so as to obtain the transparent background video file. Compared with the method of taking the reference frame at intervals in the embodiment, the method of taking all the image frames has larger image processing workload and consumes more computing resources, but under the condition that the action amplitude of the display subject in the video file is larger or the requirement on the fineness of background removal is high, the method can achieve better background removal effect. Other known background removal methods may be used for background removal.
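For comparison with the earlier interval-based sketch, the all-frames variant simply masks every frame independently; a brief illustration under the same assumed HSV range follows.

import cv2
import numpy as np

GREEN_LOW = np.array([35, 40, 40])       # assumed HSV range of the predetermined background color
GREEN_HIGH = np.array([85, 255, 255])

def make_frame_transparent(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)            # conversion frame in HSV space
    bg_mask = cv2.inRange(hsv, GREEN_LOW, GREEN_HIGH)           # pixels within the background color range
    rgba = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2BGRA)
    rgba[bg_mask == 255, 3] = 0                                 # remove the background pixels
    return rgba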

Claims (10)

1. An image realistic display server for playing and displaying, through a display image on a recording medium, a video file corresponding to the display image and including a display main body, the server being communicatively connected to at least one first user terminal which stores the video file and is held by a first user, and to at least one second user terminal which is held by a second user and has a camera and a playback superimposition portion for performing the playback display based on the recording medium, the image realistic display server comprising:
a background storage unit, a background search acquisition unit, a background removal unit, an image selection unit, a main body feature extraction unit, a video storage unit, a display image synthesis unit, a video search acquisition unit, and a server communication unit,
wherein the background storage unit stores various background images that set off the display main body and application scene types corresponding to the background images,
the server communication part receives the video file selected by the first user and the appointed application scene information from the first user terminal, the background retrieval acquisition part retrieves the background storage part according to the application scene information to acquire at least one corresponding background image,
the server communication section transmits all the acquired background images to the first user terminal and receives the background image specified by the first user as a specified background image from the first user terminal,
the background removing part removes the original background of the display main body in the received video file to obtain a transparent background video file,
the image selecting section selects one frame of image from the transparent background video file as a representative image,
the main body feature extraction unit extracts various main body features of the display main body from the representative image,
the video storage part at least stores the transparent background video file and the main body characteristic correspondingly,
the display image synthesizing unit synthesizes the representative image and the specified background image to obtain the display image,
the server communication unit transmits the presentation image to the first user terminal, and causes the first user to present the presentation image on the recording medium according to the application scene,
the server communication section receiving the presentation image read by the camera from the second user terminal as a current presentation image once the camera of the second user terminal is aligned with the presentation image on the recording medium,
the main body feature extraction section extracts a current main body feature of the presentation main body in the current presentation image,
the video retrieval acquiring part retrieves the main body feature which is consistent with the current main body feature in the video storage part according to the current main body feature to acquire the corresponding transparent background video file,
the server communication part sends the acquired transparent background video file to the second user terminal, so that the playing and overlapping part in the second user terminal overlaps the display main body of each frame of image in the transparent background video file to the display main body in the current display image according to the relative position between the display main body and the appointed background image in the current display image to realize the playing and displaying of the transparent background video file.
2. The image realistic display server according to claim 1, wherein:
wherein the background of the display subject in the video file is a monochrome background, the color of which is a predetermined background color,
the background removal section includes:
a reference frame extraction unit which sequentially extracts image frames from the video file as reference frames according to a predetermined time interval t;
the frame conversion unit is used for sequentially converting the reference frame into an HSV space so as to form a first conversion reference frame with HSV attribute values representing all pixel points;
the pixel judging unit is used for sequentially judging whether the HSV attribute value of each pixel point in the first conversion reference frame is within the range of the preset background color;
the pixel conversion unit is used for sequentially converting, according to the judgment result of the pixel judgment unit, the pixel points of the first conversion reference frame whose HSV attribute values are within the range of the predetermined background color into white, and the pixel points whose HSV attribute values are outside the range of the predetermined background color into black, so as to obtain a second reference conversion frame;
the frame extraction unit to be processed extracts the image frames in the video file one by one to obtain all the image frames contained in the video file as frames to be processed;
the pixel point removing unit is used for removing all pixel points corresponding to the white pixel point positions in the plurality of frames to be processed, which have the time point difference with the second reference conversion frame within the range of t/2, according to the positions of the white pixel points in the second reference conversion frame in sequence, so that a plurality of transparent frames are obtained; and
and the transparent video synthesis unit is used for splicing the transparent frames according to a time sequence and simultaneously carrying out video synthesis on the audio track in the video file and the spliced transparent frames so as to obtain the transparent background video file.
3. The image realistic display server according to claim 1, wherein:
wherein the background of the display subject in the video file is a monochrome background, the color of which is a predetermined background color,
the background removal section includes:
the frame extraction unit to be processed sequentially extracts image frames from the video file as frames to be processed;
the frame conversion unit is used for sequentially converting the frames to be processed into HSV spaces so as to form conversion frames with HSV attribute values representing all pixel points;
the pixel judging unit is used for sequentially judging whether the HSV attribute value of each pixel point in the conversion frame is within the range of the preset background color;
the pixel point removing unit is used for removing all the pixel points judged by the pixel judging unit to be in the range of the preset background color so as to obtain a transparentized frame; and
and the transparent video synthesis unit is used for splicing the transparent frames according to a time sequence and simultaneously carrying out video synthesis on the audio track in the video file and the spliced transparent frames so as to obtain the transparent background video file.
4. The image realistic display server according to claim 2 or 3, wherein:
wherein the predetermined background color is green.
5. The image realistic display server according to claim 1, wherein:
wherein the application scene is an invitation, a greeting card or a resume,
the display main bodies are respectively an inviter corresponding to the invitation letter, a card sender corresponding to the greeting card or a resume deliverer corresponding to the resume,
the subject features are face recognition features corresponding to the inviter, the card issuer, or the resume deliverer, respectively.
6. The image realistic display server according to claim 1, wherein:
the application scene is the exhibition of the design works, the exhibition main body is the corresponding design works, and the main body characteristics are the shape and color identification characteristics corresponding to the design works.
7. An image realistic display method for playing and displaying, through a display image on a recording medium, a video file corresponding to the display image and including a display main body, using a cloud server which is communicatively connected to at least one first user terminal which stores the video file and is held by a first user, and to at least one second user terminal which is held by a second user and has a camera and a playback superimposition unit for performing the playback display based on the recording medium, the image realistic display method comprising the steps of:
a background storage step of storing various background images which set off the display main body and application scene categories corresponding to the background images;
a background retrieval acquiring step, wherein once the server receives the video file selected by the first user and the appointed application scene information from the first user terminal, the server retrieves and acquires at least one corresponding background image according to the application scene information;
a background image sending and receiving step of sending all the acquired background images to the first user terminal and receiving the background image designated by the first user as a designated background image from the first user terminal,
a background removing step, namely removing the original background of the display main body in the received video file to obtain a transparent background video file;
selecting a frame of image from the transparent background video file as a representative image;
a main body feature extraction step of extracting various main body features of the display main body from the representative image;
a video storage step, at least storing the transparent background video file and the main body characteristics correspondingly;
a display image synthesis step of synthesizing the representative image and the specified background image to obtain the display image;
a display image sending step of sending the display image to the first user terminal to enable the first user to present the display image on the recording medium according to the application scene;
a current display image receiving step in which the server receives the display image read by the camera from the second user terminal as a current display image once the camera of the second user terminal is aligned with the display image on the recording medium;
a current main body feature extraction step of extracting the current main body feature of the display main body in the current display image;
a video retrieval obtaining step, namely retrieving according to the current main body characteristics to obtain the corresponding transparent background video file; and
and a transparent background video file sending step of sending the acquired transparent background video file to the second user terminal, so that the playing and overlaying part in the second user terminal overlays the display main body of each frame of image in the transparent background video file onto the display main body in the current display image according to the relative position between the display main body and the specified background image in the current display image to realize playing and displaying of the transparent background video file.
8. A computer-readable recording medium recording a computer program that causes a cloud server to perform the following steps, the cloud server being communicatively connected to at least one first user terminal which stores a video file corresponding to a presentation image and including a presentation main body and is held by a first user, and to at least one second user terminal which is held by a second user and has a camera and a playback superimposition unit, the video file being played and displayed through the presentation image on a recording medium:
a background storage step of storing various background images which set off the display main body and application scene categories corresponding to the background images;
a background retrieval acquiring step, wherein once the server receives the video file selected by the first user and the appointed application scene information from the first user terminal, the server retrieves and acquires at least one corresponding background image according to the application scene information;
a background image sending and receiving step of sending all the acquired background images to the first user terminal and receiving the background image designated by the first user as a designated background image from the first user terminal,
a background removing step, namely removing the original background of the display main body in the received video file to obtain a transparent background video file;
selecting a frame of image from the transparent background video file as a representative image;
a main body feature extraction step of extracting various main body features of the display main body from the representative image;
a video storage step, at least storing the transparent background video file and the main body characteristics correspondingly;
a display image synthesis step of synthesizing the representative image and the specified background image to obtain the display image;
a display image sending step of sending the display image to the first user terminal to enable the first user to present the display image on the recording medium according to the application scene;
a current display image receiving step in which the server receives the display image read by the camera from the second user terminal as a current display image once the camera of the second user terminal is aligned with the display image on the recording medium;
a current main body feature extraction step of extracting the current main body feature of the display main body in the current display image;
a video retrieval obtaining step, namely retrieving according to the current main body characteristics to obtain the corresponding transparent background video file; and
and a transparent background video file sending step of sending the acquired transparent background video file to the second user terminal, so that the playing and overlaying part in the second user terminal overlays the display main body of each frame of image in the transparent background video file onto the display main body in the current display image according to the relative position between the display main body and the specified background image in the current display image to realize playing and displaying of the transparent background video file.
9. An image realistic display system for displaying a video file corresponding to a display image and including a display subject by playing the display image on a recording medium, comprising:
at least one first user terminal storing the video file, the first user terminal being held by a first user;
at least one second user terminal for performing the playback presentation based on the recording medium, the second user terminal being held by a second user; and
a cloud server in communication connection with the first user terminal and the second user terminal respectively,
wherein the first user terminal has a first side screen storage section, a first side input display section, and a first side communication section,
the second user terminal is provided with a camera, a playing superposition part and a second side communication part,
the cloud server comprises a background storage part, a background retrieval acquisition part, a background removal part, an image selection part, a main body feature extraction part, a video storage part, a display image synthesis part, a video retrieval acquisition part and a cloud side communication part,
the background storage unit stores various background images that set off the display main body and application scene types corresponding to the background images,
the first side picture storage part stores a video file uploading picture and a background selection picture,
the first side input display part displays the video file uploading picture to enable the first user to select the video file needing to be uploaded and designate the application scene needed by the display main body in the video file,
the first side communication part transmits the selected video file and the designated application scene information to the cloud server once the first user completes the selection and designation,
once the cloud-side communication part receives the video file and the application scene information, the background retrieval acquisition part retrieves the background storage part according to the application scene information to acquire at least one corresponding background image,
the cloud-side communication unit transmits all the acquired background images to the first user terminal,
the first side input display part displays the background selection screen and displays all the received background images in the background selection screen to let the first user select one background image as a designated background image,
the first side communication section transmits the specified background image to the cloud server,
once the cloud-side communication unit receives the specified background image, the background removal unit removes the original background of the display subject in the received video file to obtain a transparent background video file,
the image selecting section selects one frame of image from the transparent background video file as a representative image,
the main body feature extraction unit extracts various main body features of the display main body from the representative image,
the video storage part at least stores the transparent background video file and the main body characteristic correspondingly,
the display image synthesizing unit synthesizes the representative image and the specified background image to obtain the display image,
the cloud-side communication unit transmits the presentation image to the first user terminal, and causes the first user to present the presentation image on the recording medium according to the application scene,
the second side communication part sends the presentation image read by the camera as a current presentation image to the cloud server once the camera of the second user terminal is aligned with the presentation image on the recording medium,
the main body feature extraction section extracts a current main body feature of the presentation main body in the current presentation image,
the video retrieval acquiring part retrieves the main body feature which is consistent with the current main body feature in the video storage part according to the current main body feature to acquire the corresponding transparent background video file,
the cloud side communication part sends the acquired transparent background video file to the second user terminal,
once the second side communication part receives the transparent background video file, the playing and overlaying part overlays the display main body of each frame of image in the transparent background video file onto the display main body in the current display image according to the relative position between the display main body and the appointed background image in the current display image so as to realize playing and displaying of the transparent background video file.
10. The image realistic display system according to claim 9, characterized in that:
wherein the presentation image synthesizing section synthesizes the presentation image based on a predetermined subject background positional relationship,
the second user terminal also has a play presentation part,
the playing superposition part comprises:
the background image identification and positioning unit is used for identifying and positioning the specified background image in the current display image so as to obtain the position information of the specified background image;
a relative position calculating unit which calculates the position information of the display subject based on the subject background position relationship according to the position information of the specified background image;
the playing area dividing unit is used for dividing the area where the display main body is located according to the position information of the display main body to be used as a playing area;
the playing frame overlapping unit is used for sequentially overlapping each image frame in the transparent background video file to the current display image according to the position of the playing area so as to obtain each playing frame;
a playing image forming unit which sequentially forms playing frames into continuous playing images according to a time sequence; and
and the audio synchronization unit synchronizes the audio in the transparent background video file according to the time sequence, so that the continuous playing image and the audio are synchronously displayed on the playing and displaying part.
CN201810191383.3A 2018-03-08 2018-03-08 Image reality display server, image reality display method, recording medium and image reality display system Active CN110248207B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810191383.3A CN110248207B (en) 2018-03-08 2018-03-08 Image reality display server, image reality display method, recording medium and image reality display system

Publications (2)

Publication Number Publication Date
CN110248207A CN110248207A (en) 2019-09-17
CN110248207B (en) 2021-03-16

Family

ID=67882169

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810191383.3A Active CN110248207B (en) 2018-03-08 2018-03-08 Image reality display server, image reality display method, recording medium and image reality display system

Country Status (1)

Country Link
CN (1) CN110248207B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110708574B (en) * 2019-10-23 2022-01-21 上海连尚网络科技有限公司 Method and device for publishing information
CN112312178B (en) * 2020-07-29 2022-08-30 上海和煦文旅集团有限公司 Multimedia image processing system of multimedia exhibition room
CN112015936B (en) * 2020-08-27 2021-10-26 北京字节跳动网络技术有限公司 Method, device, electronic equipment and medium for generating article display diagram

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101046848A (en) * 2006-03-31 2007-10-03 佳能株式会社 Image processing apparatus and image processing method
KR20140135448A (en) * 2013-05-16 2014-11-26 이현철 Method For Mobile Business Service
US8937620B1 (en) * 2011-04-07 2015-01-20 Google Inc. System and methods for generation and control of story animation
CN104680565A (en) * 2015-01-26 2015-06-03 广州市三川田文化科技股份有限公司 Electronic greeting card manufacturing method combining cell phone Wechat carrier
CN105184787A (en) * 2015-08-31 2015-12-23 广州市幸福网络技术有限公司 Identification camera capable of automatically carrying out portrait cutout and method thereof
CN105389077A (en) * 2014-09-01 2016-03-09 三星电子株式会社 Displaying method of electronic device and electronic device thereof
KR101674567B1 (en) * 2015-05-18 2016-11-22 주식회사 엔씨소프트 Method of matching online and offline players in online game, and system thereof
CN106447674A (en) * 2016-09-30 2017-02-22 北京大学深圳研究生院 Video background removing method
CN107172476A (en) * 2017-06-09 2017-09-15 创视未来科技(深圳)有限公司 A kind of system and implementation method of interactive script recorded video resume

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10080006B2 (en) * 2009-12-11 2018-09-18 Fotonation Limited Stereoscopic (3D) panorama creation on handheld device

Also Published As

Publication number Publication date
CN110248207A (en) 2019-09-17

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant