WO2021237744A1 - Shooting method and device - Google Patents

Shooting method and device

Info

Publication number
WO2021237744A1
WO2021237744A1 (PCT/CN2020/093531)
Authority
WO
WIPO (PCT)
Prior art keywords
information
image
target
barrage
image information
Prior art date
Application number
PCT/CN2020/093531
Other languages
English (en)
French (fr)
Inventor
武小军
邢达明
Original Assignee
北京小米移动软件有限公司南京分公司
北京小米移动软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京小米移动软件有限公司南京分公司, 北京小米移动软件有限公司
Priority to PCT/CN2020/093531 priority Critical patent/WO2021237744A1/zh
Priority to EP20824068.9A priority patent/EP3937485A4/en
Priority to CN202080001843.7A priority patent/CN114097217A/zh
Priority to US17/200,104 priority patent/US20210377454A1/en
Publication of WO2021237744A1 publication Critical patent/WO2021237744A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/35Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/36Indoor scenes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32203Spatial or amplitude domain methods
    • H04N1/32208Spatial or amplitude domain methods involving changing the magnitude of selected pixels, e.g. overlay of information or super-imposition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Definitions

  • the present disclosure relates to the field of computer communication technology, and in particular to a shooting method and device.
  • Some electronic devices are equipped with cameras and have shooting functions. At present, after starting the camera, the electronic device displays the image collected by the camera on the screen, and shoots after receiving the shooting instruction input by the user to obtain a photo or video.
  • the existing shooting method is relatively simple, the shooting stage is less interesting and the user experience is not good.
  • the present disclosure provides a shooting method and device.
  • a photographing method applied to an electronic device including:
  • the image information includes at least one of the following: image data, shooting scene information, and image classification information.
  • the image information includes the image classification information; the image classification information is obtained in any of the following ways:
  • in the case where the image classification information includes content category information, the content category information of the image is determined according to the content of the image;
  • in the case where the image classification information includes scene category information, the shooting scene information of the image is determined according to the content of the image, and the scene category information corresponding to the shooting scene information is determined.
  • the acquiring target barrage information matching the image information of the image includes any one of the following:
  • the image information is sent to a server, so that the server acquires the target barrage information matching the image information, and the target barrage information sent by the server is received;
  • it is determined whether the image information is preset image information, and if so, the target barrage information is obtained from a barrage library corresponding to the preset image information.
  • the method further includes:
  • the generating a shooting file based on the image and the target barrage information includes any one of the following:
  • the target barrage information is written into the image file of the image to generate the shooting file;
  • the target barrage information is written into a text, and the image file of the image and the text are combined to generate the shooting file.
  • a photographing method applied to a server includes:
  • the acquiring image information of the image collected by the electronic device includes any one of the following:
  • the image information sent by the electronic device is received; or the image collected and sent by the electronic device is received, and the image information of the image is determined.
  • the determining target barrage information matching the image information includes any one of the following: the target barrage information corresponding to the image information is determined according to a pre-established first correspondence between image information and barrage information; or the image information type corresponding to the image information is determined, and the target barrage information corresponding to the image information type is determined according to a pre-established second correspondence between image information types and barrage information.
  • the method further includes:
  • the acquiring a target image frame set from the target video includes any one of the following:
  • a photographing device applied to an electronic device including:
  • the image acquisition module is configured to acquire images acquired during the shooting process
  • An information acquisition module configured to acquire target barrage information matching the image information of the image
  • the information display module is configured to display the image and the target barrage information.
  • the image information includes at least one of the following: image data, shooting scene information, and image classification information.
  • the image information includes the image classification information;
  • the information acquisition module includes any one of the following:
  • the first information determining submodule is configured to determine the content category information of the image according to the content of the image when the image classification information includes content category information;
  • the second information determining sub-module is configured to determine the shooting scene information of the image according to the content of the image and determine the scene category information corresponding to the shooting scene information, in the case where the image classification information includes scene category information.
  • the information acquisition module includes any one of the following:
  • a sending submodule configured to send the image information to a server, so that the server obtains the target barrage information matched with the image information, and receives the target barrage information sent by the server;
  • the obtaining sub-module is configured to determine whether the image information is the preset image information, and if so, obtain the target barrage information from a barrage library corresponding to the preset image information.
  • the device further includes:
  • a file generating module configured to generate a shooting file based on the image and the target barrage information
  • the file storage module is configured to store the shooting file.
  • the file generation module includes any one of the following:
  • the text combination sub-module is configured to write the target barrage information into the text, and combine the image file of the image and the text to generate the shooting file.
  • a photographing device applied to a server includes:
  • An information acquisition module configured to acquire image information of the image collected by the electronic device
  • An information determining module configured to determine target barrage information matching the image information
  • the information sending module is configured to send the target barrage information to the electronic device.
  • the information acquisition module includes any one of the following:
  • An information receiving sub-module configured to receive the image information sent by the electronic device
  • the information determining sub-module is configured to receive the image collected and sent by the electronic device, and determine the image information of the image.
  • the information determining module includes any one of the following:
  • the first barrage information determining sub-module is configured to determine the target barrage information corresponding to the image information according to a pre-established first correspondence between image information and barrage information;
  • the second barrage information determining sub-module is configured to determine the image information type corresponding to the image information, and determine the target barrage information corresponding to the image information type according to a pre-established second correspondence between image information types and barrage information.
  • the device further includes:
  • a video determining module configured to determine a target video, the target video including barrage information
  • a set acquisition module configured to acquire a set of target image frames from the target video
  • An information extraction module configured to extract barrage information in the target image frame set
  • the relationship establishment module is configured to establish the second correspondence between the image information type of the target image frame set and the barrage information therein.
  • the set acquisition module includes any one of the following:
  • the first set obtaining submodule is configured to determine that the image information types of multiple frames of images continuously played in the target video are the same, and obtain the set of target image frames based on the multiple frames of images;
  • the second set obtaining sub-module is configured to obtain images with the same image information type from the target video, and obtain the target image frame set based on the images with the same image information type.
  • a non-transitory computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the shooting method applied to the electronic device are implemented.
  • a non-transitory computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the shooting method applied to the server are implemented.
  • an electronic device including: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to: acquire an image collected during the shooting process; acquire target barrage information matching the image information of the image; and display the image and the target barrage information.
  • a server including: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to: acquire image information of an image collected by an electronic device; determine target barrage information matching the image information; and send the target barrage information to the electronic device.
  • the electronic device obtains the image collected during the shooting process, obtains target barrage information matching the image information of the image, and displays the image and the target barrage information, thereby increasing the fun and interactivity of the shooting stage and improving the user's shooting experience.
  • Fig. 1 is a flowchart showing a photographing method according to an exemplary embodiment
  • Fig. 2 is an image displayed on a screen according to an exemplary embodiment
  • Fig. 3 is a flowchart showing another shooting method according to an exemplary embodiment
  • Fig. 4 is a block diagram showing a photographing device according to an exemplary embodiment
  • Fig. 5 is a block diagram showing another photographing device according to an exemplary embodiment
  • Fig. 6 is a schematic structural diagram showing an electronic device according to an exemplary embodiment
  • Fig. 7 is a schematic diagram showing the structure of a server according to an exemplary embodiment.
  • Although the terms first, second, third, etc. may be used in this disclosure to describe various information, the information should not be limited to these terms. These terms are only used to distinguish the same type of information from each other.
  • For example, without departing from the scope of the present disclosure, first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
  • Depending on the context, the word "if" as used herein can be interpreted as "at the time of", "when", or "in response to determining".
  • Fig. 1 shows a flowchart of a photographing method according to an exemplary embodiment. The method shown in Fig. 1 is applied to an electronic device, and the method includes:
  • In step 101, an image collected during the shooting process is acquired.
  • the electronic device is equipped with a screen and a camera, and has a display function and a shooting function.
  • There are many types of applicable electronic devices, such as mobile phones, tablets, cameras, camcorders, and so on.
  • For electronic devices such as mobile phones and tablets, a camera application is installed on the electronic device; after the camera application is started, the camera is activated and used to capture images. Electronic devices such as cameras and camcorders are specially used for image capture; after such an electronic device is started, the camera is activated and used to capture images.
  • The electronic device may obtain the image collected by the camera during the photo-taking process, or the electronic device may obtain the image collected by the camera during the video-recording process.
  • the image obtained in this step may be a preview image during the shooting process, or may be an image obtained after the user inputs a shooting instruction.
  • In step 102, the target barrage information matching the image information of the image is obtained.
  • After acquiring the image, the electronic device acquires the image information of the image.
  • There are many types of image information; for example, the image information may include at least one of the following: image data, shooting scene information, and image classification information.
  • the image data may include at least one of the following: the image itself, image color data, image brightness data, and so on.
  • the shooting scene information indicates the specific shooting scene of the image.
  • the image classification information indicates the classification of the image.
  • There are multiple types of image classification information, such as scene type information, content category information, and so on.
  • the scene type information indicates the type of the shooting scene corresponding to the image
  • the content type information indicates the type of the shooting content corresponding to the image.
  • There are multiple types of scene type information, such as indoor scenes, outdoor scenes, food scenes, and landscape scenes.
  • The content category information can be divided according to various criteria, for example, the species of the photographed subject, the attribute information of the photographed subject, and the appearance of the photographed subject.
  • When subjects are divided by species, the resulting content category information can include: people, animals, objects, food, and so on.
  • The attribute information of a subject can include: gender, age, height, weight, appearance, and so on.
  • When subjects are divided by attribute information, the resulting content category information can include: male, female, child, adult, elderly, good-looking, not good-looking, and so on.
  • When subjects are divided by appearance, the resulting content category information can include: circles, squares, triangles, regular shapes, irregular shapes, and so on.
  • Electronic equipment can obtain image information through image recognition, network models and other means.
  • In one embodiment, there are multiple ways for the electronic device to obtain the image classification information of the image.
  • In the first way, in the case where the image classification information includes content category information, the content category information of the image is determined according to the content of the image.
  • In the second way, in the case where the image classification information includes scene category information, the shooting scene information of the image is determined according to the content of the image, and the scene category information corresponding to the shooting scene information is determined.
  • For the second way, for example, in the process of shooting food, the electronic device determines from the content of the image that the image includes food, and determines the food scene corresponding to the food.
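  • The following is a minimal sketch, not taken from the patent, of how a device might derive content category and scene category information from a captured frame; the classifier stub, the label names, and the content-to-scene mapping are hypothetical stand-ins for whatever on-device recognition model is actually used.

```python
# Minimal sketch (not taken from the patent): deriving content category and
# scene category information from a captured frame. The classifier stub, the
# label names, and the content-to-scene mapping are hypothetical stand-ins for
# whatever on-device recognition model is actually used.
from typing import Optional, Tuple

# Hypothetical mapping from recognized content categories to scene categories.
LABEL_TO_SCENE = {
    "food": "food_scene",
    "person": "portrait_scene",
    "mountain": "landscape_scene",
    "sofa": "indoor_scene",
}

def classify_content(image_bytes: bytes) -> str:
    """Stand-in for an on-device recognition model; a real device would run a CNN or similar here."""
    return "food"  # hypothetical result, for illustration only

def get_image_classification(image_bytes: bytes) -> Tuple[str, Optional[str]]:
    """Return (content_category, scene_category) for one captured frame."""
    content = classify_content(image_bytes)   # first way: content category from image content
    scene = LABEL_TO_SCENE.get(content)       # second way: map the content to a shooting scene category
    return content, scene
```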
  • the barrage information corresponding to different image information is different.
  • the electronic device can obtain target barrage information that matches the image information in a variety of ways. For example, after the electronic device obtains the image information, it may send the image information to the server, so that the server obtains the target barrage information matched with the image information, and then receives the target barrage information sent by the server.
  • For example, the electronic device can send the image to the server, and the server can directly determine the target barrage information that matches the image; or the server can determine one or more of the image color information, image brightness information, shooting scene information, and image classification information, and determine the target barrage information that matches that information.
  • Alternatively, the electronic device sends this type of information to the server, and the server determines the target barrage information that matches this type of information.
  • the operation of determining target barrage information matching the image information is performed by the server, which reduces the computing pressure of the electronic device, and the electronic device does not need to store the barrage information, which reduces the storage pressure of the electronic device.
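  • As a rough illustration of this device-to-server exchange, the sketch below assumes a simple JSON-over-HTTP request; the endpoint URL and payload fields are invented for the example and are not defined by the patent.

```python
# Rough illustration (assumed protocol, not defined by the patent): the electronic
# device posts the extracted image information to a server endpoint and receives
# matching barrage messages back as JSON. The URL and payload fields are invented.
import json
import urllib.request

def fetch_target_barrage(image_info: dict,
                         server_url: str = "http://example.com/barrage/match") -> list:
    """Send image information to the server and return the matched barrage list."""
    payload = json.dumps(image_info).encode("utf-8")
    req = urllib.request.Request(server_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        # e.g. [{"text": "Looks delicious!", "duration_ms": 3000}, ...]
        return json.loads(resp.read().decode("utf-8"))

# Example usage (hypothetical values):
# barrage = fetch_target_barrage({"content_category": "food", "scene_category": "food_scene"})
```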
  • the electronic device stores a barrage library locally, and the electronic device can obtain target barrage information matching the image information from the locally stored barrage library.
  • the barrage library corresponding to different image information is different. For example, a barrage library about food is established for food scenes, and a barrage library about scenery is established for scenery scenes.
  • the storage space of the electronic device is limited, and usually only a barrage library corresponding to some image information can be stored. Based on this, the electronic device can determine whether the image information is preset image information, and if so, obtain the target barrage information from the barrage library corresponding to the image information.
  • the electronic device only stores a barrage library about food.
  • the electronic device can obtain matching barrage information from the local barrage library during the process of shooting food.
  • In contrast, the electronic device cannot obtain matching barrage information from the local barrage library while shooting scenery; in this case it may, for example, obtain the matching barrage information from a barrage library on the server.
  • When the electronic device has a barrage library locally, it obtains barrage information from the local barrage library without interacting with the server and without relying on the network, so the barrage information can still be obtained and the barrage display can still be performed even if the network connection is not good.
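  • A minimal sketch of such a local lookup is shown below; the library contents and the key used for preset image information are hypothetical.

```python
# Minimal sketch (assumption): a small local barrage library keyed by preset image
# information. Only categories the device actually stores a library for return
# results; everything else yields no local match, as described above.
import random

LOCAL_BARRAGE_LIBRARY = {
    # preset image information -> candidate barrage messages (hypothetical content)
    "food_scene": ["I love meat", "Looks delicious!", "Recipe please"],
}

def get_local_barrage(image_info_key: str, count: int = 1):
    """Return up to `count` barrage messages if the image information is preset locally, else None."""
    candidates = LOCAL_BARRAGE_LIBRARY.get(image_info_key)
    if not candidates:
        return None  # not preset image information: no local barrage available
    return random.sample(candidates, min(count, len(candidates)))
```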
  • In step 103, the image and the target barrage information are displayed.
  • After the electronic device obtains the image and the target barrage information, it displays the image and the target barrage information on the screen to achieve the barrage effect, increasing the fun and interactivity of the shooting stage and improving the user's shooting experience.
  • FIG. 2 is an image displayed on a screen according to an exemplary embodiment.
  • The image shown in Fig. 2 is a food image.
  • After determining that the content category information of the image is the food category, the electronic device acquires the barrage message "I love meat" corresponding to the food category, and the barrage message "I love meat" is displayed on the image.
  • the target barrage information may include comment information on the content of the currently acquired image. There are many information forms of target barrage information, such as text, emoticons, symbols, and so on.
  • The target barrage information may also include the display mode of the comment information, such as the display duration, the pop-up position in the image, the pop-up direction, and the pop-up speed.
  • the comment information can be displayed on the screen according to a preset display mode.
  • the comment information may be displayed on the screen according to the display mode included in the target barrage information.
  • the electronic device can provide an edit entry for the barrage information, so that the user can edit the barrage information in the local barrage library, and improve the user experience.
  • In the first display mode, the target barrage information is displayed on a layer above the image.
  • In the second display mode, the image and the target barrage information are displayed in different areas.
  • In the third display mode, in the case that the electronic device includes two or more screens, the image and the target barrage information are displayed on separate screens.
  • the screen includes a first display area and a second display area, the image is displayed in the first display area, and the target barrage information is displayed in the second display area.
  • the electronic device includes two screens, the image is displayed on the first screen, and the target barrage information is displayed on the other screen.
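  • The sketch below illustrates one possible data structure for a barrage item carrying both the comment text and its display mode, together with a dispatcher over the three display modes described above; the field names and mode identifiers are assumptions, not the patent's terms.

```python
# Sketch (assumption): a barrage item carrying the comment text together with its
# display mode, plus a dispatcher over the three display modes described above.
# The field names and mode identifiers are illustrative, not the patent's terms.
from dataclasses import dataclass

@dataclass
class BarrageItem:
    text: str
    display_ms: int = 3000      # how long the comment stays on screen
    pop_position: float = 0.2   # vertical pop-up position, 0.0 (top) .. 1.0 (bottom)
    direction: str = "right_to_left"
    speed_px_s: int = 120

def plan_display(items, mode: str = "overlay"):
    """Return simple draw instructions for one of the three display modes."""
    if mode == "overlay":          # first mode: barrage layer above the image
        return [("draw_on_image_layer", item) for item in items]
    if mode == "split_area":       # second mode: image and barrage in different screen areas
        return [("draw_in_second_area", item) for item in items]
    if mode == "second_screen":    # third mode: barrage shown on a separate screen
        return [("send_to_other_screen", item) for item in items]
    raise ValueError(f"unknown display mode: {mode}")
```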
  • the electronic device may provide an edit entry for the display mode, so that the user can edit the display mode, and improve the user experience.
  • the electronic device may generate a shooting file according to the image and target barrage information, and store the shooting file.
  • The shooting file can be displayed with a barrage (bullet screen) effect.
  • the shooting file can be a photo file or a video file.
  • the target barrage information is written into the image file of the image to generate the shooting file.
  • the image file is modified.
  • the target barrage information is written into the text, and the image file and the text of the image are combined to generate a shooting file.
  • the image file carries the text.
  • When the electronic device displays a photo based on the photo file, it reads the length information in the xmp metadata, reads data of the corresponding length from the end of the photo file to obtain the target barrage information, and then displays the target barrage information as a barrage.
  • Image files in heif format support custom data as independent data blocks.
  • the target barrage information can be written into text, and the image file and text in heif format can be combined to obtain a photo file with barrage display effect.
  • The mkv encapsulation format supports internal custom data structures, such as multiple audio tracks and multiple subtitles.
  • The captured video can be encapsulated in this format: the target barrage information is written into a text, and the video file in mkv format and the text carrying the barrage information are combined to obtain a video file with a barrage display effect.
  • the shooting file is generated by combining the image file and the text with the bullet screen information.
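  • The sketch below illustrates the general append-and-slice idea behind this file generation; it is an assumption for illustration, not the exact file layout used by the patent. Where the length is stored is format specific (xmp metadata for jpg, a custom data block for heif, a custom track for mkv); here the writer simply returns it.

```python
# Sketch of the general append-and-slice idea described above (an assumption, not the
# exact file layout used by the patent): the barrage text is appended to the end of
# the photo file and its length is recorded so a reader can slice it back off.
import json

def write_shooting_file(image_path: str, out_path: str, barrage: list) -> int:
    """Append barrage JSON after the image bytes; return the appended length (to be stored, e.g. in xmp)."""
    barrage_bytes = json.dumps(barrage, ensure_ascii=False).encode("utf-8")
    with open(image_path, "rb") as f:
        data = f.read()
    with open(out_path, "wb") as f:
        f.write(data + barrage_bytes)
    return len(barrage_bytes)

def read_barrage(shooting_file: str, barrage_len: int) -> list:
    """Read `barrage_len` bytes from the end of the shooting file and decode the barrage."""
    with open(shooting_file, "rb") as f:
        data = f.read()
    return json.loads(data[-barrage_len:].decode("utf-8"))
```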
  • the user controls the electronic device to turn on or off the bullet screen display by triggering a preset option or button during the shooting process, and the image file is not affected.
  • In the embodiments of the present disclosure, the electronic device obtains the image collected during the shooting process, obtains target barrage information matching the image information of the image, and displays the image and the target barrage information, thereby increasing the fun and interactivity of the shooting stage and improving the user's shooting experience.
  • Fig. 3 shows a flowchart of a shooting method according to an exemplary embodiment.
  • the method shown in Fig. 3 is applied to a server, and the method includes:
  • In step 201, image information of an image collected by an electronic device is acquired.
  • the image information may include at least one of the following: image data, shooting scene information, and image classification information.
  • image data may include at least one of the following: the image itself, image color data, image brightness data, and so on.
  • the shooting scene information indicates the specific shooting scene of the image.
  • image classification information indicates the classification of the image.
  • There are multiple types of image classification information, such as scene type information, content category information, and so on.
  • There are multiple ways for the server to obtain the image information of the image collected by the electronic device.
  • In the first way, the server receives the image information sent by the electronic device; in the second way, the server receives the image collected and sent by the electronic device, and determines the image information of the image.
  • For the first way, after collecting the image, the electronic device obtains the image information of the image and sends the image information to the server.
  • the server directly obtains the image information from the electronic device.
  • the image information in this manner may include at least one of the following: the image itself, image color data, image brightness data, shooting scene information, scene type information, content category information, and so on.
  • For the second way, after the electronic device collects the image, it sends the image to the server.
  • the server determines the image information according to the image sent by the electronic device.
  • the image information in this manner may include at least one of the following: image color data, image brightness data, shooting scene information, scene type information, content category information, and so on.
  • In step 202, the target barrage information matching the image information is determined.
  • the server pre-establishes a first correspondence between image information and barrage information.
  • When the server executes this step, it determines the target barrage information corresponding to the current image information according to the first correspondence.
  • the server pre-establishes the corresponding relationship between the designated building and the barrage information.
  • the server determines the barrage information corresponding to the designated building according to the corresponding relationship between the designated building and the barrage information.
  • the server directly determines target barrage information through image information.
  • the server pre-establishes the second correspondence between the image information type and the barrage information.
  • the server determines the image information type corresponding to the current image information, and determines the target barrage information corresponding to the image information type according to the second pre-established correspondence relationship.
  • the image information type may indicate the information category of the image.
  • The image information type may include at least one of an image content type and an image scene type, where the image content type indicates the content category of the image, and the image scene type indicates the shooting scene category of the image.
  • the image information type can be set corresponding to the image classification information in the image information.
  • The division rule of the image information type can be the same as that of the image classification information. For example, the image content type in the image information type can follow the same classification rule as the content category information in the image classification information, and the image scene type in the image information type can follow the same classification rule as the scene type information in the image classification information.
  • the image information type may include an image scene type.
  • the image information type may include an image content type.
  • the server pre-establishes the corresponding relationship between the food category and the barrage information.
  • the server determines the food category corresponding to the food, and the server determines the barrage information corresponding to the food category according to the pre-established correspondence.
  • the server determines the image information type according to the image information, and determines the target barrage information according to the image information type.
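  • A minimal sketch of these two server-side lookups is shown below; the table contents, the landmark key, and the reduction from image information to an image information type are hypothetical.

```python
# Minimal sketch (assumption) of the two server-side lookups described above. The
# first correspondence maps concrete image information (here, a specific landmark)
# directly to barrage messages; the second maps an image information type to barrage
# messages. All table contents and key names are hypothetical.
FIRST_CORRESPONDENCE = {
    "eiffel_tower": ["Wish I were there!"],
}
SECOND_CORRESPONDENCE = {
    "food": ["Looks delicious!", "I love meat"],
    "landscape": ["Beautiful view"],
}

def image_info_to_type(image_info: dict) -> str:
    """Hypothetical reduction of image information to an image information type."""
    return image_info.get("content_category", "unknown")

def determine_target_barrage(image_info: dict) -> list:
    # First way: direct lookup by concrete image information.
    landmark = image_info.get("landmark")
    if landmark in FIRST_CORRESPONDENCE:
        return FIRST_CORRESPONDENCE[landmark]
    # Second way: look up by the image information type.
    return SECOND_CORRESPONDENCE.get(image_info_to_type(image_info), [])
```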
  • the server may establish the second correspondence between the image information type and the barrage information in the following manner:
  • the target video includes barrage information.
  • When the target video is played, the barrage is displayed, presenting the barrage effect.
  • the target video can be a short video, a TV series, etc.
  • Some video sites provide videos that include barrage information, and target videos can be obtained from such sites.
  • the server can determine the image information of the image frame in the target video, and then determine the image information type of the image frame, and collect the image frames with the same image information type to obtain the target image frame set.
  • In one way, it is determined that the image information types of multiple frames of images continuously played in the target video are the same, and the target image frame set is obtained based on the multiple frames of images.
  • the server determines whether the image information types of the multiple frames of images continuously played within a preset time period are the same, and if they are the same, the target image frame set is obtained based on the multiple frames of images continuously played within the preset time duration.
  • the server determines that the image information types of the preset number of continuously played image frames are all the same, and obtains the target image frame set based on the preset number of continuously played image frames.
  • For example, the target video is a food documentary video; the image content types of the multiple frames of images continuously played within the preset time period acquired by the server are all the food category, and the target image frame set is obtained based on the multiple frames of images continuously played within the preset time duration.
  • In the second way, images with the same image information type are obtained from the target video, and the target image frame set is obtained based on the images with the same image information type.
  • images with the same image information type may be continuous frames or non-continuous frames.
  • After the server determines the image information type of an image frame in the target video, it can configure a label for the image frame.
  • the label indicates the image information type.
  • The server can then obtain the target image frame set with the same image information type from the target video by identifying the labels.
  • the barrage information may include comment information, and may also include the display mode of the comment information.
  • There are many forms of comment information, such as text, emoticons, symbols, and so on.
  • The display mode of the comment information includes, for example, the display duration, the pop-up position in the image, the pop-up direction, and the pop-up speed.
  • the server may only extract the comment information from the target image frame set, or may extract the comment information and the display mode of the comment information from the target image frame set.
  • the server may place the barrage information in the target image frame collection in the target barrage library, which is a barrage library established for the image information type of the target image frame collection.
  • the above methods can be used to establish respective barrage libraries for different types of image information.
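  • The sketch below illustrates one way such per-type barrage libraries could be built from a target video: consecutive frames with the same image information type are grouped into a target image frame set and their barrage messages are collected under that type. The frame representation and the minimum run length are assumptions for the example.

```python
# Sketch (assumption) of building the second correspondence from a target video whose
# frames already carry barrage information. Frames are represented here as
# (image_info_type, [barrage messages]) tuples in playback order; a real system would
# first classify each frame and extract its on-screen barrage.
from collections import defaultdict
from itertools import groupby

def build_second_correspondence(frames, min_run: int = 2) -> dict:
    """frames: iterable of (image_info_type, list_of_barrage) in playback order."""
    library = defaultdict(list)
    for info_type, run in groupby(frames, key=lambda f: f[0]):
        run = list(run)
        if len(run) < min_run:       # require several consecutive frames of the same type
            continue
        for _, barrage in run:
            library[info_type].extend(barrage)
    return dict(library)

# Example usage (hypothetical frames from a food video):
# corr = build_second_correspondence([("food", ["yum"]), ("food", ["recipe?"]), ("intro", [])])
```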
  • In step 203, the target barrage information is sent to the electronic device.
  • The server obtains the image information of the image collected by the electronic device, determines the target barrage information matching the image information, and sends the target barrage information to the electronic device, so that the electronic device displays the image and the target barrage information on the screen, presenting the barrage display effect, increasing the fun and interactivity of the shooting stage, and improving the user's shooting experience.
  • the present disclosure also provides embodiments of application function realization apparatus and corresponding electronic equipment.
  • Fig. 4 is a block diagram showing a photographing device according to an exemplary embodiment, which is applied to electronic equipment.
  • the device includes: an image acquisition module 31, an information acquisition module 32, and an information display module 33; wherein,
  • the image acquisition module 31 is configured to acquire images acquired during the shooting process
  • the information acquiring module 32 is configured to acquire target barrage information matching the image information of the image;
  • the information display module 33 is configured to display the image and the target barrage information.
  • the image information includes at least one of the following: image data, shooting scene information, and image classification information.
  • the image information may include the image classification information;
  • the information acquisition module 32 may include any of the following: a first information determination sub-module, a second information sub-module; wherein,
  • the first information determining submodule is configured to determine the content category information of the image according to the content of the image when the image classification information includes content category information;
  • the second information sub-module is configured to determine the shooting scene information of the image according to the content of the image, and determine the scene category information corresponding to the shooting scene information, when the image classification information includes scene category information.
  • the information acquisition module 32 may include any one of the following: a sending submodule, an acquisition submodule; wherein,
  • the sending submodule is configured to send the image information to a server, so that the server obtains the target barrage information matched with the image information, and receives the target barrage information sent by the server ;
  • the obtaining submodule is configured to determine whether the image information is the preset image information, and if so, obtain the target barrage information from a barrage library corresponding to the preset image information.
  • the device may further include: a file generation module and a file storage module; wherein,
  • the file generating module is configured to generate a shooting file based on the image and the target barrage information
  • the file storage module is configured to store the shooting file.
  • the file generation module may include any of the following: an information writing sub-module, a text combination sub-module; wherein,
  • the information writing sub-module is configured to write the target barrage information into the image file of the image to generate the shooting file;
  • the text combination sub-module is configured to write the target barrage information into the text, and combine the image file of the image and the text to generate the shooting file.
  • Fig. 5 is a block diagram showing another photographing device according to an exemplary embodiment, which is applied to a server, and the device includes: an information acquisition module 41, an information determination module 42, and an information sending module 43; wherein,
  • the information acquiring module 41 is configured to acquire image information of the image collected by the electronic device;
  • the information determining module 42 is configured to determine target barrage information that matches the image information
  • the information sending module 43 is configured to send the target barrage information to the electronic device.
  • the information acquisition module 41 may include any of the following: an information receiving submodule, an information determining submodule; wherein,
  • the information receiving submodule is configured to receive the image information sent by the electronic device
  • the information determining sub-module is configured to receive the image collected and sent by the electronic device, and determine the image information of the image.
  • the information determining module 42 may include any one of the following: a first barrage information determining sub-module, a second barrage information determining sub-module; wherein,
  • the first barrage information determining submodule is configured to determine the target barrage information corresponding to the image information according to a pre-established first correspondence between image information and barrage information;
  • the second barrage information determining sub-module is configured to determine the image information type corresponding to the image information, and determine the target barrage information corresponding to the image information type according to a pre-established second correspondence between image information types and barrage information.
  • the device may further include: a video determination module, a collection acquisition module, an information extraction module, and a relationship establishment module; wherein,
  • the video determining module is configured to determine a target video, the target video including barrage information
  • the set acquisition module is configured to acquire a set of target image frames from the target video
  • the information extraction module is configured to extract barrage information in the target image frame set
  • the relationship establishment module is configured to establish the second correspondence between the image information type of the target image frame set and the barrage information therein.
  • the set acquisition module may include any one of the following: a first set acquisition sub-module, a second set acquisition sub-module; wherein,
  • the first set obtaining submodule is configured to determine that the image information types of the multiple frames of images continuously played in the target video are the same, and obtain the set of target image frames based on the multiple frames of images;
  • the second set obtaining submodule is configured to obtain images with the same image information type from the target video, and obtain the target image frame set based on the images with the same image information type.
  • the relevant part can refer to the part of the description of the method embodiment.
  • the device embodiments described above are merely illustrative.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the present disclosure. Those of ordinary skill in the art can understand and implement them without creative work.
  • an embodiment of the present disclosure provides an electronic device, including: a screen; a camera; a processor; a memory for storing executable instructions of the processor; wherein the processor is configured to: acquire an image collected during the shooting process; acquire target barrage information matching the image information of the image; and display the image and the target barrage information.
  • Fig. 6 is a schematic structural diagram showing an electronic device 1600 according to an exemplary embodiment.
  • the device 1600 may be user equipment, which may specifically be a mobile phone, a computer, a digital broadcasting electronic device, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or a wearable device such as a smart watch, smart glasses, a smart bracelet, or smart running shoes.
  • the device 1600 may include one or more of the following components: a processing component 1602, a memory 1604, a power supply component 1606, a multimedia component 1608, an audio component 1610, an input/output (I/O) interface 1612, a sensor component 1614, And the communication component 1616.
  • the processing component 1602 generally controls the overall operations of the device 1600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 1602 may include one or more processors 1620 to execute instructions to complete all or part of the steps of the foregoing method.
  • the processing component 1602 may include one or more modules to facilitate the interaction between the processing component 1602 and other components.
  • the processing component 1602 may include a multimedia module to facilitate the interaction between the multimedia component 1608 and the processing component 1602.
  • the memory 1604 is configured to store various types of data to support the operation of the device 1600. Examples of such data include instructions for any application or method operating on the device 1600, contact data, phone book data, messages, pictures, videos, etc.
  • the memory 1604 can be implemented by any type of volatile or non-volatile storage device or their combination, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable and Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Magnetic Disk or Optical Disk.
  • the power supply component 1606 provides power for various components of the device 1600.
  • the power supply component 1606 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the device 1600.
  • the multimedia component 1608 includes a screen that provides an output interface between the above-mentioned device 1600 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The above-mentioned touch sensor may not only sense the boundary of the touch or sliding action, but also detect the duration and pressure related to the above-mentioned touch or sliding operation.
  • the multimedia component 1608 includes a front camera and/or a rear camera. When the device 1600 is in an operation mode, such as an adjustment mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 1610 is configured to output and/or input audio signals.
  • the audio component 1610 includes a microphone (MIC), and when the device 1600 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive external audio signals.
  • the received audio signal may be further stored in the memory 1604 or transmitted via the communication component 1616.
  • the audio component 1610 further includes a speaker for outputting audio signals.
  • the I/O interface 1612 provides an interface between the processing component 1602 and a peripheral interface module.
  • the above-mentioned peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 1614 includes one or more sensors for providing the device 1600 with various aspects of status assessment.
  • the sensor component 1614 can detect the on/off status of the device 1600 and the relative positioning of the components.
  • the above components are the display and the keypad of the device 1600.
  • the sensor component 1614 can also detect the position change of the device 1600 or a component of the device 1600, the presence or absence of contact between the user and the device 1600, the orientation or acceleration/deceleration of the device 1600, and the temperature change of the device 1600.
  • the sensor component 1614 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
  • the sensor component 1614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 1614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 1616 is configured to facilitate wired or wireless communication between the apparatus 1600 and other devices.
  • the device 1600 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
  • the communication component 1616 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the aforementioned communication component 1616 further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the apparatus 1600 may be implemented by one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components to implement the above method.
  • In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 1604, is also provided.
  • When the instructions in the storage medium are executed by the processor 1620 of the device 1600, the device 1600 can perform a shooting method, the method including: acquiring an image collected during the shooting process, acquiring target barrage information matching the image information of the image, and displaying the image and the target barrage information.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
  • FIG. 7 is a schematic structural diagram of a server according to an exemplary embodiment.
  • The server shown in Fig. 7 may include: a memory 520, a processor 530, and an external interface 540;
  • the external interface 540 is used to obtain data;
  • the memory 520 is used to store machine-readable instructions corresponding to the shooting method;
  • the processor 530 is configured to read the machine-readable instructions in the memory 520 and execute the instructions to implement the following operations: acquiring image information of an image collected by an electronic device, determining target barrage information matching the image information, and sending the target barrage information to the electronic device.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • User Interface Of Digital Computer (AREA)
  • Television Signal Processing For Recording (AREA)
  • Information Transfer Between Computers (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure provides a shooting method and device. The method is applied to an electronic device equipped with a screen and a camera, and includes: acquiring an image collected during the shooting process, acquiring target barrage information matching the image information of the image, and displaying the image and the target barrage information, thereby increasing the fun and interactivity of the shooting stage and improving the user's shooting experience.

Description

Shooting method and device
Technical field
The present disclosure relates to the field of computer communication technology, and in particular to a shooting method and device.
Background
Some electronic devices are equipped with cameras and have a shooting function. At present, after starting the camera, an electronic device displays the image collected by the camera on the screen, and shoots after receiving a shooting instruction input by the user, so as to obtain a photo or video.
Existing shooting methods are relatively simple; the shooting stage is not very engaging, and the user experience is poor.
Summary
To overcome the problems in the related art, the present disclosure provides a shooting method and device.
According to a first aspect of the embodiments of the present disclosure, a shooting method is provided, applied to an electronic device, the method including:
acquiring an image collected during the shooting process;
acquiring target barrage information matching the image information of the image;
displaying the image and the target barrage information.
Optionally, the image information includes at least one of the following: image data, shooting scene information, and image classification information.
Optionally, the image information includes the image classification information; the image classification information is obtained in any one of the following ways:
in the case where the image classification information includes content category information, determining the content category information of the image according to the content of the image;
in the case where the image classification information includes scene category information, determining the shooting scene information of the image according to the content of the image, and determining the scene category information corresponding to the shooting scene information.
Optionally, the acquiring target barrage information matching the image information of the image includes any one of the following:
sending the image information to a server, so that the server acquires the target barrage information matching the image information, and receiving the target barrage information sent by the server;
determining whether the image information is preset image information, and if so, acquiring the target barrage information from a barrage library corresponding to the preset image information.
Optionally, the method further includes:
generating a shooting file based on the image and the target barrage information;
storing the shooting file.
Optionally, the generating a shooting file based on the image and the target barrage information includes any one of the following:
writing the target barrage information into the image file of the image to generate the shooting file;
writing the target barrage information into a text, and combining the image file of the image and the text to generate the shooting file.
According to a second aspect of the embodiments of the present disclosure, a shooting method is provided, applied to a server, the method including:
acquiring image information of an image collected by an electronic device;
determining target barrage information matching the image information;
sending the target barrage information to the electronic device.
Optionally, the acquiring image information of the image collected by the electronic device includes any one of the following:
receiving the image information sent by the electronic device;
receiving the image collected and sent by the electronic device, and determining the image information of the image. Optionally, the determining target barrage information matching the image information includes any one of the following:
determining the target barrage information corresponding to the image information according to a pre-established first correspondence between image information and barrage information;
determining the image information type corresponding to the image information, and determining the target barrage information corresponding to the image information type according to a pre-established second correspondence between image information types and barrage information.
Optionally, the method further includes:
determining a target video, the target video including barrage information;
acquiring a target image frame set from the target video;
extracting the barrage information in the target image frame set;
establishing the second correspondence between the image information type of the target image frame set and the barrage information therein.
Optionally, the acquiring a target image frame set from the target video includes any one of the following:
determining that the image information types of multiple frames of images continuously played in the target video are the same, and obtaining the target image frame set based on the multiple frames of images;
acquiring images with the same image information type from the target video, and obtaining the target image frame set based on the images with the same image information type.
According to a third aspect of the embodiments of the present disclosure, a shooting device is provided, applied to an electronic device, the device including:
an image acquisition module configured to acquire an image collected during the shooting process;
an information acquisition module configured to acquire target barrage information matching the image information of the image;
an information display module configured to display the image and the target barrage information.
Optionally, the image information includes at least one of the following: image data, shooting scene information, and image classification information.
Optionally, the image information includes the image classification information; the information acquisition module includes any one of the following:
a first information determining sub-module configured to determine the content category information of the image according to the content of the image in the case where the image classification information includes content category information;
a second information determining sub-module configured to determine the shooting scene information of the image according to the content of the image and determine the scene category information corresponding to the shooting scene information in the case where the image classification information includes scene category information.
Optionally, the information acquisition module includes any one of the following:
a sending sub-module configured to send the image information to a server, so that the server acquires the target barrage information matching the image information, and to receive the target barrage information sent by the server;
an acquiring sub-module configured to determine whether the image information is preset image information, and if so, acquire the target barrage information from a barrage library corresponding to the preset image information.
Optionally, the device further includes:
a file generation module configured to generate a shooting file based on the image and the target barrage information;
a file storage module configured to store the shooting file.
Optionally, the file generation module includes any one of the following:
an information writing sub-module configured to write the target barrage information into the image file of the image to generate the shooting file;
a text combination sub-module configured to write the target barrage information into a text, and combine the image file of the image and the text to generate the shooting file.
According to a fourth aspect of the embodiments of the present disclosure, a shooting device is provided, which is applied to a server and includes:
an information acquisition module configured to acquire image information of an image collected by an electronic device;
an information determination module configured to determine target barrage information matching the image information; and
an information sending module configured to send the target barrage information to the electronic device.
Optionally, the information acquisition module includes any one of the following:
an information receiving sub-module configured to receive the image information sent by the electronic device; or
an information determination sub-module configured to receive the image collected and sent by the electronic device, and determine the image information of the image.
Optionally, the information determination module includes any one of the following:
a first barrage information determination sub-module configured to determine, according to a pre-established first correspondence between image information and barrage information, the target barrage information corresponding to the image information; or
a second barrage information determination sub-module configured to determine an image information type corresponding to the image information, and determine, according to a pre-established second correspondence between image information types and barrage information, the target barrage information corresponding to the image information type.
Optionally, the device further includes:
a video determination module configured to determine a target video, the target video including barrage information;
a set acquisition module configured to acquire a target image frame set from the target video;
an information extraction module configured to extract the barrage information in the target image frame set; and
a relation establishment module configured to establish the second correspondence between the image information type of the target image frame set and the barrage information therein.
Optionally, the set acquisition module includes any one of the following:
a first set obtaining sub-module configured to determine that image information types of multiple frames of images played consecutively in the target video are the same, and obtain the target image frame set based on the multiple frames of images; or
a second set obtaining sub-module configured to acquire images with the same image information type from the target video, and obtain the target image frame set based on the images with the same image information type.
According to a fifth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided, on which a computer program is stored, and the program, when executed by a processor, implements the steps of the method according to any one of the first aspect.
According to a sixth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided, on which a computer program is stored, and the program, when executed by a processor, implements the steps of the method according to any one of the second aspect.
According to a seventh aspect of the embodiments of the present disclosure, an electronic device is provided, including:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquire an image collected during a shooting process;
acquire target barrage information matching image information of the image; and
display the image and the target barrage information.
According to an eighth aspect of the embodiments of the present disclosure, a server is provided, including:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquire image information of an image collected by an electronic device;
determine target barrage information matching the image information; and
send the target barrage information to the electronic device.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
In the embodiments of the present disclosure, the electronic device acquires an image collected during a shooting process, acquires target barrage information matching image information of the image, and displays the image and the target barrage information, which makes the shooting stage more entertaining and interactive and improves the user's shooting experience.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to describe the technical solutions in the embodiments of the present disclosure more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present disclosure, and other drawings may be obtained by those of ordinary skill in the art based on these drawings without creative effort.
FIG. 1 is a flowchart of a shooting method according to an exemplary embodiment;
FIG. 2 shows an image displayed on a screen according to an exemplary embodiment;
FIG. 3 is a flowchart of another shooting method according to an exemplary embodiment;
FIG. 4 is a block diagram of a shooting device according to an exemplary embodiment;
FIG. 5 is a block diagram of another shooting device according to an exemplary embodiment;
FIG. 6 is a schematic structural diagram of an electronic device according to an exemplary embodiment;
FIG. 7 is a schematic structural diagram of a server according to an exemplary embodiment.
DETAILED DESCRIPTION
Exemplary embodiments will be described in detail herein, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of devices and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terms used in the present disclosure are only for the purpose of describing particular embodiments, and are not intended to limit the present disclosure. The singular forms "a", "the", and "said" used in the present disclosure and the appended claims are also intended to include plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, and the like may be used in the present disclosure to describe various kinds of information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from each other. For example, without departing from the scope of the present disclosure, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "while", or "in response to determining".
FIG. 1 is a flowchart of a shooting method according to an exemplary embodiment. The method shown in FIG. 1 is applied to an electronic device and includes the following steps.
In step 101, an image collected during a shooting process is acquired.
In the embodiments of the present disclosure, the electronic device is provided with a screen and a camera, and has a display function and a shooting function. Various electronic devices are applicable, such as a mobile phone, a tablet, a still camera, or a video camera.
For electronic devices such as mobile phones and tablets, a camera application is installed on the electronic device; after the camera application is started, the camera is enabled and used to collect images. Electronic devices such as still cameras and video cameras are dedicated to image shooting; after such an electronic device is started, the camera is enabled and used to collect images.
The electronic device may acquire images collected by the camera during photo taking, or the electronic device may acquire images collected by the camera during video shooting.
The image acquired in this step may be a preview image during the shooting process, or may be an image obtained after the user inputs a shooting instruction.
In step 102, target barrage information matching image information of the image is acquired.
After acquiring the image, the electronic device acquires image information of the image. There are various kinds of image information. For example, the image information may include at least one of the following: image data, shooting scene information, or image classification information.
There are various kinds of image data. For example, the image data may include at least one of the following: the image itself, image color data, image brightness data, and the like.
The shooting scene information indicates the specific shooting scene of the image.
The image classification information indicates the classification of the image. There are various kinds of image classification information, such as scene category information and content category information. The scene category information indicates the category to which the shooting scene corresponding to the image belongs, and the content category information indicates the category to which the shooting content corresponding to the image belongs.
There are various kinds of scene category information, such as an indoor scene, an outdoor scene, a food scene, and a landscape scene.
Content category information may be divided according to various criteria, such as the species of the shooting object, attribute information of the shooting object, and the appearance of the shooting object. When shooting objects are divided according to species, the resulting content category information may include: person, animal, object, food, and the like. The attribute information of the shooting object may include gender, age, height, weight, appearance, and the like; when shooting objects are divided according to their attribute information, the resulting content category information may include: male, female, child, adult, elderly, good-looking, not good-looking, and the like. When shooting objects are divided according to appearance, the resulting content category information may include: circular, square, triangular, regular shape, irregular shape, and the like.
The electronic device may acquire the image information by means of image recognition, a network model, or the like.
In an embodiment, the electronic device may acquire the image classification information of the image in various manners. For example, in a first manner, in a case that the image classification information includes content category information, the content category information of the image is determined according to the content of the image. In a second manner, in a case that the image classification information includes scene category information, the shooting scene information of the image is determined according to the content of the image, and the scene category information corresponding to the shooting scene information is determined.
For the second manner, for example, in the process of shooting food, the electronic device determines, according to the content of the image, that the image includes food, and determines the food scene corresponding to the food.
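For illustration only (not part of the original disclosure), the following sketch shows one way such classification could be wired up on the device side; `classify_content` stands in for whatever recognition model is actually deployed, and the category names and scene mapping are assumptions:

```python
from dataclasses import dataclass

# Hypothetical content categories and the scene categories they map to.
CONTENT_TO_SCENE = {
    "food": "food_scene",
    "mountain": "landscape_scene",
    "person": "indoor_scene",
}

@dataclass
class ImageInfo:
    content_category: str   # e.g. "food"
    scene_category: str     # e.g. "food_scene"

def classify_content(image_bytes: bytes) -> str:
    """Placeholder for an image-recognition model (an assumption, not
    specified by the disclosure). Returns a content category label."""
    raise NotImplementedError

def build_image_info(image_bytes: bytes) -> ImageInfo:
    # First manner: derive content category information from the image content.
    content = classify_content(image_bytes)
    # Second manner: derive the shooting scene from the content, then map it
    # to a scene category (here via a simple lookup table).
    scene = CONTENT_TO_SCENE.get(content, "unknown_scene")
    return ImageInfo(content_category=content, scene_category=scene)
```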
In an embodiment, different image information corresponds to different barrage information.
The electronic device may acquire the target barrage information matching the image information in various manners. For example, after acquiring the image information, the electronic device may send the image information to a server, so that the server acquires the target barrage information matching the image information, and then receive the target barrage information sent by the server.
When the image information includes the image itself, the electronic device may send the image to the server, and the server may directly determine the target barrage information matching the image, or the server may determine one or more of image color information, image brightness information, shooting scene information, and image classification information of the image, and determine the target barrage information matching such information.
When the image information includes one or more of image color information, image brightness information, shooting scene information, and image classification information, the electronic device sends such information to the server, and the server determines the target barrage information matching such information.
In this example, the operation of determining the target barrage information matching the image information is performed by the server, which reduces the computing load of the electronic device; the electronic device does not need to store barrage information, which also reduces the storage load of the electronic device.
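Purely as an illustration of this client-server exchange (the disclosure does not specify a transport or API), the device side might look roughly like the sketch below, assuming a hypothetical HTTP endpoint `/barrage/match` and the `requests` library:

```python
import requests

SERVER_URL = "https://example.com/barrage/match"  # hypothetical endpoint

def fetch_barrage_from_server(image_info: dict, timeout_s: float = 2.0) -> list:
    """Send image information to the server and receive matching barrage items.

    `image_info` may carry the image itself or derived fields such as
    scene/content categories; the payload shape here is an assumption.
    """
    try:
        resp = requests.post(SERVER_URL, json=image_info, timeout=timeout_s)
        resp.raise_for_status()
        return resp.json().get("barrage", [])
    except requests.RequestException:
        # Network unavailable: return nothing (or fall back to a local
        # barrage library, as described below).
        return []

# Example call:
# barrage = fetch_barrage_from_server({"content_category": "food"})
```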
As another example, a barrage library is stored locally on the electronic device, and the electronic device may acquire the target barrage information matching the image information from the locally stored barrage library.
In some cases, different image information corresponds to different barrage libraries. For example, a barrage library about food is established for the food scene, and a barrage library about landscapes is established for the landscape scene.
The storage space of the electronic device is limited, and usually only the barrage libraries corresponding to some image information can be stored. On this basis, the electronic device may determine whether the image information is preset image information, and if so, acquire the target barrage information from the barrage library corresponding to the image information.
For example, the electronic device stores only a barrage library about food. When shooting food, the electronic device can acquire matching barrage information from the local barrage library; when shooting landscapes, the electronic device cannot acquire matching barrage information from the local barrage library.
In this example, a barrage library is stored locally on the electronic device, and the electronic device acquires barrage information from the local barrage library without interacting with the server and without depending on the network, so that barrage information can still be acquired and barrage display can still be performed even when the network connection is poor.
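A minimal local-lookup sketch, again with hypothetical preset categories and contents (the actual library format is not specified by the disclosure):

```python
import random

# Hypothetical local barrage libraries keyed by preset image information.
LOCAL_BARRAGE_LIBRARIES = {
    "food": ["I love eating meat", "Looks delicious", "Bon appetit"],
}

def match_local_barrage(image_info_key: str, k: int = 3) -> list:
    """Return up to k barrage entries if the image information is preset;
    otherwise return an empty list (no local library for this category)."""
    library = LOCAL_BARRAGE_LIBRARIES.get(image_info_key)
    if not library:
        return []
    return random.sample(library, min(k, len(library)))
```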
In step 103, the image and the target barrage information are displayed.
After acquiring the image and the target barrage information, the electronic device displays the image and the target barrage information on the screen, which achieves a barrage effect, makes the shooting stage more entertaining and interactive, and improves the user's shooting experience.
For example, FIG. 2 shows an image displayed on a screen according to an exemplary embodiment. The image shown in FIG. 2 is a food image. After determining that the content category information of the image is the food category, the electronic device acquires the barrage information "I love eating meat" corresponding to the food category, and displays the barrage information "I love eating meat" on the image.
In an embodiment, the target barrage information may include comment information on the content of the currently acquired image. The comment information may take various forms, such as text, emoticons, and symbols. The target barrage information may further include a display manner of the comment information, such as a display time, a pop-up position in the image, a pop-up direction, and a pop-up speed.
When the target barrage information only includes comment information, the comment information may be displayed on the screen in a preset display manner. When the target barrage information includes comment information and a display manner of the comment information, the comment information may be displayed on the screen in the display manner included in the target barrage information.
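One possible data shape for such barrage entries is sketched below; the field names and defaults are illustrative assumptions, not definitions from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class BarrageItem:
    comment: str                      # text, emoticon, or symbol
    display_time_ms: int = 3000       # how long the comment stays on screen
    position: tuple = (0.1, 0.1)      # normalized (x, y) pop-up position in the image
    direction: str = "right_to_left"  # pop-up / scroll direction
    speed_px_s: float = 120.0         # scroll speed

# When only `comment` is supplied, the remaining fields act as the preset
# display manner mentioned above.
food_comment = BarrageItem(comment="I love eating meat")
```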
The electronic device may provide an editing entry for barrage information, so that the user can edit the barrage information in the local barrage library, which improves the user experience.
In an embodiment, the image and the target barrage information may be displayed in various manners. For example, in a first display manner, the target barrage information is displayed on top of the image; in a second display manner, the image and the target barrage information are displayed in separate regions; in a third display manner, in a case that the electronic device includes two or more screens, the image and the target barrage information are displayed on separate screens.
For the second display manner, for example, the screen includes a first display region and a second display region, the image is displayed in the first display region, and the target barrage information is displayed in the second display region.
For the third display manner, for example, the electronic device includes two screens, the image is displayed on the first screen, and the target barrage information is displayed on the other screen.
The electronic device may provide an editing entry for the display manner, so that the user can edit the display manner, which improves the user experience.
In an embodiment, the electronic device may generate a shooting file according to the image and the target barrage information, and store the shooting file. The shooting file is capable of barrage display. The shooting file may be a photo file or a video file.
The shooting file may be generated based on the image and the target barrage information in various manners. For example, the target barrage information is written into the image file of the image to generate the shooting file; in this example, the image file is modified. As another example, the target barrage information is written into a text, and the image file of the image is combined with the text to generate the shooting file; in this example, the image file carries the text.
For a photo in the JPEG format, the length of the text in which the target barrage information is written is recorded in the XMP metadata of the image file, and the text data is appended to the end of the image file, so as to obtain a photo file with a barrage display effect.
When displaying a photo based on such a photo file, after reading the XMP metadata, the electronic device reads the length information in the XMP metadata and reads data of the corresponding length from the end of the photo file, so as to obtain the target barrage information and perform barrage display based on the target barrage information.
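A simplified sketch of this write/read round trip follows. To stay self-contained it stores the length in a small fixed-size trailer instead of a real XMP field; that trailer is an assumption made for illustration, and a faithful implementation would write the length into the file's XMP metadata with an XMP-capable library:

```python
import struct
from typing import Optional

MAGIC = b"BRRG"  # illustrative marker standing in for the XMP length field

def append_barrage(jpeg_path: str, barrage_text: str) -> None:
    """Append barrage text after the JPEG data and record its byte length."""
    data = barrage_text.encode("utf-8")
    with open(jpeg_path, "ab") as f:
        f.write(data)
        # 4-byte marker + 4-byte big-endian length of the appended text.
        f.write(MAGIC + struct.pack(">I", len(data)))

def read_barrage(jpeg_path: str) -> Optional[str]:
    """Read back the appended barrage text, if present."""
    with open(jpeg_path, "rb") as f:
        f.seek(0, 2)
        size = f.tell()
        if size < 8:
            return None
        f.seek(size - 8)
        magic, length = f.read(4), struct.unpack(">I", f.read(4))[0]
        if magic != MAGIC or size < 8 + length:
            return None
        f.seek(size - 8 - length)
        return f.read(length).decode("utf-8")
```

Most JPEG viewers ignore bytes after the end-of-image marker, which is why the original photo still displays normally while the barrage text rides along at the tail of the file.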
An image file in the HEIF format supports carrying custom data as an independent data block. In the process of shooting a photo, the target barrage information may be written into a text, and the HEIF-format image file and the text may be combined to obtain a photo file with a barrage display effect.
For video container formats that support custom internal data structures, the shot video may be packaged in such a format: the target barrage information is written into a text, and the video file and the text carrying the barrage information are combined to obtain a video file with a barrage display effect.
For example, the MKV container format supports custom internal data structures, such as multiple audio tracks and multiple subtitle tracks, and the text carrying the target barrage information can be combined with the MKV-format video file.
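One commonly available way to carry such a text alongside an MKV video is a Matroska attachment, for example via ffmpeg. The call below is only an illustration of that approach (it assumes ffmpeg is installed); it is not the mechanism mandated by the disclosure:

```python
import subprocess

def attach_barrage_to_mkv(video_in: str, barrage_txt: str, video_out: str) -> None:
    """Remux the video into MKV with the barrage text carried as an attachment."""
    subprocess.run(
        [
            "ffmpeg", "-i", video_in,
            "-attach", barrage_txt,                  # carry the barrage text file
            "-metadata:s:t", "mimetype=text/plain",  # required MIME type for attachments
            "-c", "copy",                            # copy streams, no re-encoding
            video_out,
        ],
        check=True,
    )
```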
When the shooting file is generated by combining the image file with the text carrying the barrage information, the user may trigger a preset option or button during shooting to control the electronic device to turn barrage display on or off, and the image file itself is not affected.
In the embodiments of the present disclosure, the electronic device acquires an image collected during a shooting process, acquires target barrage information matching image information of the image, and displays the image and the target barrage information, which makes the shooting stage more entertaining and interactive and improves the user's shooting experience.
FIG. 3 is a flowchart of another shooting method according to an exemplary embodiment. The method shown in FIG. 3 is applied to a server and includes the following steps.
In step 201, image information of an image collected by an electronic device is acquired.
The image information may include at least one of the following: image data, shooting scene information, or image classification information. There are various kinds of image data; for example, the image data may include at least one of the following: the image itself, image color data, image brightness data, and the like. The shooting scene information indicates the specific shooting scene of the image. The image classification information indicates the classification of the image, and there are various kinds of image classification information, such as scene category information and content category information.
The server may acquire the image information of the image collected by the electronic device in various manners. For example, in a first manner, the server receives the image information sent by the electronic device; in a second manner, the server receives the image collected and sent by the electronic device, and determines the image information of the image.
For the first manner, after collecting the image, the electronic device acquires the image information of the image and sends the image information to the server. Accordingly, the server acquires the image information directly from the electronic device.
The image information in this manner may include at least one of the following: the image itself, image color data, image brightness data, shooting scene information, scene category information, content category information, and the like.
For the second manner, after collecting the image, the electronic device sends the image to the server. Accordingly, the server determines the image information according to the image sent by the electronic device.
The image information in this manner may include at least one of the following: image color data, image brightness data, shooting scene information, scene category information, content category information, and the like.
In step 202, target barrage information matching the image information is determined.
In an embodiment, the server pre-establishes a first correspondence between image information and barrage information. When performing this step, the server determines, according to the first correspondence, the target barrage information corresponding to the current image information.
For example, the server pre-establishes a correspondence between a specified building and barrage information. In application, when the image includes the specified building, the server determines, according to the correspondence between the specified building and the barrage information, the barrage information corresponding to the specified building.
In this embodiment, the server directly determines the target barrage information based on the image information.
In an embodiment, the server pre-establishes a second correspondence between image information types and barrage information. When performing this step, the server determines the image information type corresponding to the current image information, and determines, according to the pre-established second correspondence, the target barrage information corresponding to the image information type.
Here, the image information type may indicate the information category of the image. For example, the image information type may include at least one of an image content type and an image scene type, where the image content type indicates the content category of the image and the image scene type indicates the shooting scene category of the image. In a feasible example, the image information type may be set in correspondence with the image classification information in the image information; for example, the image information type may follow the same division rules as the image classification information. For instance, the image content type may follow the same division rules as the content category information in the image classification information, and the image scene type may follow the same division rules as the scene category information in the image classification information.
When the image information includes at least one of shooting scene information and scene category information, the image information type may include the image scene type. When the image information includes at least one of image data and content category information, the image information type may include the image content type.
For example, the server pre-establishes a correspondence between the food category and barrage information. In application, when the image includes food, it is determined that the food corresponds to the food category, and the server determines, according to the pre-established correspondence, the barrage information corresponding to the food category.
In this embodiment, the server determines the image information type according to the image information, and determines the target barrage information according to the image information type.
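A toy server-side lookup covering both correspondences is sketched below; all keys and entries are illustrative assumptions, and the disclosure treats the two correspondences as alternatives rather than the fallback chain shown here:

```python
# First correspondence: concrete image information -> barrage information.
FIRST_CORRESPONDENCE = {
    "specified_building": ["What a landmark!", "I was there last year"],
}

# Second correspondence: image information type -> barrage information.
SECOND_CORRESPONDENCE = {
    "food": ["I love eating meat", "Looks tasty"],
    "landscape": ["Beautiful view"],
}

def determine_target_barrage(image_info: str, image_info_type: str) -> list:
    """Look up barrage by the image information itself, then by its type
    (combining both correspondences into one function is a choice made only
    for this illustration)."""
    direct = FIRST_CORRESPONDENCE.get(image_info)
    if direct:
        return direct
    return SECOND_CORRESPONDENCE.get(image_info_type, [])
```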
In an embodiment, the server may establish the second correspondence between image information types and barrage information in the following manner.
First, a target video is determined, where the target video includes barrage information.
The target video presents a barrage effect, with barrage displayed during playback. The target video may be a short video, a TV series, or the like.
Some video websites provide videos that include barrage information, and the target video may be obtained from such websites.
Second, a target image frame set is acquired from the target video.
The server may determine the image information of image frames in the target video, then determine the image information types of the image frames, and gather image frames with the same image information type to obtain the target image frame set.
In one manner, it is determined that the image information types of multiple frames of images played consecutively in the target video are the same, and the target image frame set is obtained based on the multiple frames of images.
For example, the server determines whether the image information types of multiple frames of images played consecutively within a preset duration are the same, and if so, obtains the target image frame set based on the multiple frames of images played consecutively within the preset duration.
As another example, the server determines that the image information types of a preset number of consecutively played image frames are all the same, and obtains the target image frame set based on the preset number of consecutively played image frames.
Exemplarily, the target video is a food documentary. The server determines that the image content types of multiple frames of images played consecutively within a preset duration are all the food category, and obtains the target image frame set based on the multiple frames of images played consecutively within the preset duration.
In a second manner, images with the same image information type are acquired from the target video, and the target image frame set is obtained based on the images with the same image information type.
In this manner, the images with the same image information type may be consecutive frames or non-consecutive frames.
After determining the image information type of an image frame in the target video, the server may assign a tag to the image frame, where the tag indicates the image information type, and the server may acquire, by recognizing the tags, the target image frame set with the same image information type from the target video.
Third, the barrage information in the target image frame set is extracted.
The barrage information may include comment information, and may further include a display manner of the comment information. The comment information may take various forms, such as text, emoticons, and symbols. The display manner of the comment information may include, for example, a display time, a pop-up position in the image, a pop-up direction, and a pop-up speed.
The server may extract only the comment information from the target image frame set, or may extract both the comment information and the display manner of the comment information from the target image frame set.
Finally, the correspondence between the image information type of the target image frame set and the barrage information therein is established.
Multiple sets of correspondences between image information types and barrage information may be established by the above method.
The server may place the barrage information in the target image frame set into a target barrage library, where the target barrage library is a barrage library established for the image information type of the target image frame set. By the above method, a respective barrage library may be established for each image information type.
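The sketch below strings these steps together for the first manner (consecutive frames sharing an image information type). The frame attributes `info_type` and `barrage`, and the `min_run` threshold standing in for the preset duration/number check, are assumptions made for illustration:

```python
from collections import defaultdict
from itertools import groupby

def build_second_correspondence(frames, min_run: int = 5) -> dict:
    """Group consecutively played frames that share an image information type,
    extract their barrage comments, and build a type -> barrage-library map.

    `frames` is assumed to be an ordered sequence of objects exposing
    `.info_type` (e.g. "food") and `.barrage` (a list of comment strings).
    """
    libraries = defaultdict(list)
    # groupby keeps only runs of consecutive frames with the same type.
    for info_type, run in groupby(frames, key=lambda fr: fr.info_type):
        run = list(run)
        if len(run) < min_run:   # stand-in for the preset duration/number check
            continue
        for frame in run:
            libraries[info_type].extend(frame.barrage)
    return dict(libraries)
```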
In step 203, the target barrage information is sent to the electronic device.
In the embodiments of the present disclosure, the server acquires the image information of the image collected by the electronic device, determines the target barrage information matching the image information, and sends the target barrage information to the electronic device, so that the electronic device displays the image and the target barrage information on the screen and presents a barrage display effect, which makes the shooting stage more entertaining and interactive and improves the user's shooting experience.
For ease of description, each of the foregoing method embodiments is described as a combination of a series of actions. However, those skilled in the art should understand that the present disclosure is not limited by the described order of actions, because some steps may be performed in other orders or simultaneously according to the present disclosure.
In addition, those skilled in the art should also understand that the embodiments described in the specification are all optional embodiments, and the actions and modules involved are not necessarily required by the present disclosure.
Corresponding to the foregoing embodiments of the application function implementation methods, the present disclosure further provides embodiments of application function implementation devices and corresponding electronic devices.
FIG. 4 is a block diagram of a shooting device according to an exemplary embodiment. The device is applied to an electronic device and includes an image acquisition module 31, an information acquisition module 32, and an information display module 33.
The image acquisition module 31 is configured to acquire an image collected during a shooting process.
The information acquisition module 32 is configured to acquire target barrage information matching image information of the image.
The information display module 33 is configured to display the image and the target barrage information.
In an optional embodiment, on the basis of the shooting device shown in FIG. 4, the image information includes at least one of the following: image data, shooting scene information, or image classification information.
In an optional embodiment, the image information may include the image classification information, and the information acquisition module 32 may include any one of the following: a first information determination sub-module and a second information determination sub-module.
The first information determination sub-module is configured to, in a case that the image classification information includes content category information, determine the content category information of the image according to content of the image.
The second information determination sub-module is configured to, in a case that the image classification information includes scene category information, determine shooting scene information of the image according to the content of the image, and determine the scene category information corresponding to the shooting scene information.
In an optional embodiment, on the basis of the shooting device shown in FIG. 4, the information acquisition module 32 may include any one of the following: a sending sub-module and an acquisition sub-module.
The sending sub-module is configured to send the image information to a server, so that the server acquires the target barrage information matching the image information, and receive the target barrage information sent by the server.
The acquisition sub-module is configured to determine whether the image information is preset image information, and if so, acquire the target barrage information from a barrage library corresponding to the preset image information.
In an optional embodiment, on the basis of the shooting device shown in FIG. 4, the device may further include a file generation module and a file storage module.
The file generation module is configured to generate a shooting file based on the image and the target barrage information.
The file storage module is configured to store the shooting file.
In an optional embodiment, the file generation module may include any one of the following: an information writing sub-module and a text combination sub-module.
The information writing sub-module is configured to write the target barrage information into an image file of the image to generate the shooting file.
The text combination sub-module is configured to write the target barrage information into a text, and combine the image file of the image with the text to generate the shooting file.
FIG. 5 is a block diagram of another shooting device according to an exemplary embodiment. The device is applied to a server and includes an information acquisition module 41, an information determination module 42, and an information sending module 43.
The information acquisition module 41 is configured to acquire image information of an image collected by the electronic device.
The information determination module 42 is configured to determine target barrage information matching the image information.
The information sending module 43 is configured to send the target barrage information to the electronic device.
In an optional embodiment, on the basis of the shooting device shown in FIG. 5, the information acquisition module 41 may include any one of the following: an information receiving sub-module and an information determination sub-module.
The information receiving sub-module is configured to receive the image information sent by the electronic device.
The information determination sub-module is configured to receive the image collected and sent by the electronic device, and determine the image information of the image.
In an optional embodiment, on the basis of the shooting device shown in FIG. 5, the information determination module 42 may include any one of the following: a first barrage information determination sub-module and a second barrage information determination sub-module.
The first barrage information determination sub-module is configured to determine, according to a pre-established first correspondence between image information and barrage information, the target barrage information corresponding to the image information.
The second barrage information determination sub-module is configured to determine an image information type corresponding to the image information, and determine, according to a pre-established second correspondence between image information types and barrage information, the target barrage information corresponding to the image information type.
In an optional embodiment, the device may further include a video determination module, a set acquisition module, an information extraction module, and a relation establishment module.
The video determination module is configured to determine a target video, the target video including barrage information.
The set acquisition module is configured to acquire a target image frame set from the target video.
The information extraction module is configured to extract the barrage information in the target image frame set.
The relation establishment module is configured to establish the second correspondence between the image information type of the target image frame set and the barrage information therein.
In an optional embodiment, the set acquisition module may include any one of the following: a first set obtaining sub-module and a second set obtaining sub-module.
The first set obtaining sub-module is configured to determine that the image information types of multiple frames of images played consecutively in the target video are the same, and obtain the target image frame set based on the multiple frames of images.
The second set obtaining sub-module is configured to acquire images with the same image information type from the target video, and obtain the target image frame set based on the images with the same image information type.
Since the device embodiments substantially correspond to the method embodiments, reference may be made to the relevant descriptions in the method embodiments. The device embodiments described above are merely illustrative, where the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the present disclosure. Those of ordinary skill in the art can understand and implement the solutions without creative effort.
Accordingly, in one aspect, an embodiment of the present disclosure provides an electronic device, including: a screen; a camera; a processor; and a memory for storing processor-executable instructions, wherein the processor is configured to:
acquire an image collected during a shooting process;
acquire target barrage information matching image information of the image; and
display the image and the target barrage information.
FIG. 6 is a schematic structural diagram of an electronic device 1600 according to an exemplary embodiment. For example, the device 1600 may be user equipment, such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or a wearable device such as a smart watch, smart glasses, a smart wristband, or smart running shoes.
Referring to FIG. 6, the device 1600 may include one or more of the following components: a processing component 1602, a memory 1604, a power component 1606, a multimedia component 1608, an audio component 1610, an input/output (I/O) interface 1612, a sensor component 1614, and a communication component 1616.
The processing component 1602 generally controls the overall operations of the device 1600, such as operations associated with display, phone calls, data communication, camera operations, and recording operations. The processing component 1602 may include one or more processors 1620 to execute instructions so as to complete all or some of the steps of the above methods. In addition, the processing component 1602 may include one or more modules to facilitate interaction between the processing component 1602 and other components. For example, the processing component 1602 may include a multimedia module to facilitate interaction between the multimedia component 1608 and the processing component 1602.
The memory 1604 is configured to store various types of data to support operations of the device 1600. Examples of such data include instructions for any application or method operated on the device 1600, contact data, phone book data, messages, pictures, videos, and the like. The memory 1604 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk.
The power component 1606 provides power for the various components of the device 1600. The power component 1606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 1600.
The multimedia component 1608 includes a screen providing an output interface between the device 1600 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure related to the touch or swipe operation. In some embodiments, the multimedia component 1608 includes a front camera and/or a rear camera. When the device 1600 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 1610 is configured to output and/or input audio signals. For example, the audio component 1610 includes a microphone (MIC). When the device 1600 is in an operation mode, such as a call mode, a recording mode, or a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 1604 or sent via the communication component 1616. In some embodiments, the audio component 1610 further includes a speaker for outputting audio signals.
The I/O interface 1612 provides an interface between the processing component 1602 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 1614 includes one or more sensors for providing status assessments of various aspects of the device 1600. For example, the sensor component 1614 may detect the on/off state of the device 1600 and the relative positioning of components, for example, the display and the keypad of the device 1600. The sensor component 1614 may also detect a change in the position of the device 1600 or a component of the device 1600, the presence or absence of contact between the user and the device 1600, the orientation or acceleration/deceleration of the device 1600, and a temperature change of the device 1600. The sensor component 1614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1616 is configured to facilitate wired or wireless communication between the device 1600 and other devices. The device 1600 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1616 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1616 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 1600 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 1604 including instructions. When the instructions in the storage medium are executed by the processor 1620 of the device 1600, the device 1600 is enabled to perform a shooting method, the method including: acquiring an image collected during a shooting process, acquiring target barrage information matching image information of the image, and displaying the image and the target barrage information.
The non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Accordingly, in another aspect, an embodiment of the present disclosure provides a server. FIG. 7 is a schematic structural diagram of a server according to an exemplary embodiment. The server shown in FIG. 7 may include a memory 520, a processor 530, and an external interface 540 connected through an internal bus 510.
The external interface 540 is used to obtain data.
The memory 520 is used to store machine-readable instructions corresponding to shooting.
The processor 530 is used to read the machine-readable instructions from the memory 520 and execute the instructions to implement the following operations:
acquiring image information of an image collected by the electronic device;
determining target barrage information matching the image information; and
sending the target barrage information to the electronic device.
The non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or conventional technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (26)

  1. A shooting method, applied to an electronic device, the method comprising:
    acquiring an image collected during a shooting process;
    acquiring target barrage information matching image information of the image; and
    displaying the image and the target barrage information.
  2. The method according to claim 1, wherein the image information comprises at least one of the following: image data, shooting scene information, or image classification information.
  3. The method according to claim 2, wherein the image information comprises the image classification information, and the image classification information is acquired in any one of the following manners:
    in a case that the image classification information comprises content category information, determining the content category information of the image according to content of the image; or
    in a case that the image classification information comprises scene category information, determining shooting scene information of the image according to the content of the image, and determining the scene category information corresponding to the shooting scene information.
  4. The method according to claim 1, wherein acquiring the target barrage information matching the image information of the image comprises any one of the following:
    sending the image information to a server, so that the server acquires the target barrage information matching the image information, and receiving the target barrage information sent by the server; or
    determining whether the image information is preset image information, and if so, acquiring the target barrage information from a barrage library corresponding to the preset image information.
  5. The method according to claim 1, further comprising:
    generating a shooting file based on the image and the target barrage information; and
    storing the shooting file.
  6. The method according to claim 5, wherein generating the shooting file based on the image and the target barrage information comprises any one of the following:
    writing the target barrage information into an image file of the image to generate the shooting file; or
    writing the target barrage information into a text, and combining the image file of the image with the text to generate the shooting file.
  7. A shooting method, applied to a server, the method comprising:
    acquiring image information of an image collected by an electronic device;
    determining target barrage information matching the image information; and
    sending the target barrage information to the electronic device.
  8. The method according to claim 7, wherein acquiring the image information of the image collected by the electronic device comprises any one of the following:
    receiving the image information sent by the electronic device; or
    receiving the image collected and sent by the electronic device, and determining the image information of the image.
  9. The method according to claim 7, wherein determining the target barrage information matching the image information comprises any one of the following:
    determining, according to a pre-established first correspondence between image information and barrage information, the target barrage information corresponding to the image information; or
    determining an image information type corresponding to the image information, and determining, according to a pre-established second correspondence between image information types and barrage information, the target barrage information corresponding to the image information type.
  10. The method according to claim 9, further comprising:
    determining a target video, the target video comprising barrage information;
    acquiring a target image frame set from the target video;
    extracting the barrage information in the target image frame set; and
    establishing the second correspondence between the image information type of the target image frame set and the barrage information therein.
  11. The method according to claim 10, wherein acquiring the target image frame set from the target video comprises any one of the following:
    determining that image information types of multiple frames of images played consecutively in the target video are the same, and obtaining the target image frame set based on the multiple frames of images; or
    acquiring images with the same image information type from the target video, and obtaining the target image frame set based on the images with the same image information type.
  12. A shooting device, applied to an electronic device, the device comprising:
    an image acquisition module configured to acquire an image collected during a shooting process;
    an information acquisition module configured to acquire target barrage information matching image information of the image; and
    an information display module configured to display the image and the target barrage information.
  13. The device according to claim 12, wherein the image information comprises at least one of the following: image data, shooting scene information, or image classification information.
  14. The device according to claim 13, wherein the image information comprises the image classification information, and the information acquisition module comprises any one of the following:
    a first information determination sub-module configured to, in a case that the image classification information comprises content category information, determine the content category information of the image according to content of the image; or
    a second information determination sub-module configured to, in a case that the image classification information comprises scene category information, determine shooting scene information of the image according to the content of the image, and determine the scene category information corresponding to the shooting scene information.
  15. The device according to claim 12, wherein the information acquisition module comprises any one of the following:
    a sending sub-module configured to send the image information to a server, so that the server acquires the target barrage information matching the image information, and receive the target barrage information sent by the server; or
    an acquisition sub-module configured to determine whether the image information is preset image information, and if so, acquire the target barrage information from a barrage library corresponding to the preset image information.
  16. The device according to claim 12, further comprising:
    a file generation module configured to generate a shooting file based on the image and the target barrage information; and
    a file storage module configured to store the shooting file.
  17. The device according to claim 16, wherein the file generation module comprises any one of the following:
    an information writing sub-module configured to write the target barrage information into an image file of the image to generate the shooting file; or
    a text combination sub-module configured to write the target barrage information into a text, and combine the image file of the image with the text to generate the shooting file.
  18. A shooting device, applied to a server, the device comprising:
    an information acquisition module configured to acquire image information of an image collected by an electronic device;
    an information determination module configured to determine target barrage information matching the image information; and
    an information sending module configured to send the target barrage information to the electronic device.
  19. The device according to claim 18, wherein the information acquisition module comprises any one of the following:
    an information receiving sub-module configured to receive the image information sent by the electronic device; or
    an information determination sub-module configured to receive the image collected and sent by the electronic device, and determine the image information of the image.
  20. The device according to claim 18, wherein the information determination module comprises any one of the following:
    a first barrage information determination sub-module configured to determine, according to a pre-established first correspondence between image information and barrage information, the target barrage information corresponding to the image information; or
    a second barrage information determination sub-module configured to determine an image information type corresponding to the image information, and determine, according to a pre-established second correspondence between image information types and barrage information, the target barrage information corresponding to the image information type.
  21. The device according to claim 20, further comprising:
    a video determination module configured to determine a target video, the target video comprising barrage information;
    a set acquisition module configured to acquire a target image frame set from the target video;
    an information extraction module configured to extract the barrage information in the target image frame set; and
    a relation establishment module configured to establish the second correspondence between the image information type of the target image frame set and the barrage information therein.
  22. The device according to claim 21, wherein the set acquisition module comprises any one of the following:
    a first set obtaining sub-module configured to determine that image information types of multiple frames of images played consecutively in the target video are the same, and obtain the target image frame set based on the multiple frames of images; or
    a second set obtaining sub-module configured to acquire images with the same image information type from the target video, and obtain the target image frame set based on the images with the same image information type.
  23. A non-transitory computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
  24. A non-transitory computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 7 to 11.
  25. An electronic device, comprising:
    a processor; and
    a memory for storing processor-executable instructions;
    wherein the processor is configured to:
    acquire an image collected during a shooting process;
    acquire target barrage information matching image information of the image; and
    display the image and the target barrage information.
  26. A server, comprising:
    a processor; and
    a memory for storing processor-executable instructions;
    wherein the processor is configured to:
    acquire image information of an image collected by an electronic device;
    determine target barrage information matching the image information; and
    send the target barrage information to the electronic device.
PCT/CN2020/093531 2020-05-29 2020-05-29 拍摄方法及装置 WO2021237744A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2020/093531 WO2021237744A1 (zh) 2020-05-29 2020-05-29 拍摄方法及装置
EP20824068.9A EP3937485A4 (en) 2020-05-29 2020-05-29 PHOTOGRAPHIC SHOOTING METHOD AND APPARATUS
CN202080001843.7A CN114097217A (zh) 2020-05-29 2020-05-29 拍摄方法及装置
US17/200,104 US20210377454A1 (en) 2020-05-29 2021-03-12 Capturing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/093531 WO2021237744A1 (zh) 2020-05-29 2020-05-29 拍摄方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/200,104 Continuation US20210377454A1 (en) 2020-05-29 2021-03-12 Capturing method and device

Publications (1)

Publication Number Publication Date
WO2021237744A1 true WO2021237744A1 (zh) 2021-12-02

Family

ID=78704423

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/093531 WO2021237744A1 (zh) 2020-05-29 2020-05-29 拍摄方法及装置

Country Status (4)

Country Link
US (1) US20210377454A1 (zh)
EP (1) EP3937485A4 (zh)
CN (1) CN114097217A (zh)
WO (1) WO2021237744A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115412742B (zh) * 2022-09-02 2024-05-14 北京达佳互联信息技术有限公司 直播间内下发评论容器的方法、装置及***

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120188396A1 (en) * 2011-01-24 2012-07-26 Samsung Electronics Co., Ltd. Digital photographing apparatuses, methods of controlling the same, and computer-readable storage media
CN103327270A (zh) * 2013-06-28 2013-09-25 腾讯科技(深圳)有限公司 一种图像处理方法、装置和终端
CN106803909A (zh) * 2017-02-21 2017-06-06 腾讯科技(深圳)有限公司 一种视频文件的生成方法及终端
CN108924624A (zh) * 2018-08-03 2018-11-30 百度在线网络技术(北京)有限公司 信息处理方法和装置
CN109348120A (zh) * 2018-09-30 2019-02-15 烽火通信科技股份有限公司 一种拍摄方法、图像的显示方法、***及设备

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5423052B2 (ja) * 2009-02-27 2014-02-19 株式会社ニコン 画像処理装置、撮像装置及びプログラム
US8301202B2 (en) * 2009-08-27 2012-10-30 Lg Electronics Inc. Mobile terminal and controlling method thereof
US9317531B2 (en) * 2012-10-18 2016-04-19 Microsoft Technology Licensing, Llc Autocaptioning of images
US10210218B2 (en) * 2015-06-16 2019-02-19 Salesforce.Com, Inc. Processing a file to generate a recommendation using a database system
US20170132821A1 (en) * 2015-11-06 2017-05-11 Microsoft Technology Licensing, Llc Caption generation for visual media
US10290110B2 (en) * 2016-07-05 2019-05-14 Intel Corporation Video overlay modification for enhanced readability
CN106982387B (zh) * 2016-12-12 2020-09-18 阿里巴巴集团控股有限公司 弹幕的显示、推送方法及装置及弹幕应用***
CN110784759B (zh) * 2019-08-12 2022-08-12 腾讯科技(深圳)有限公司 弹幕信息处理方法、装置、电子设备及存储介质
CN110740387B (zh) * 2019-10-30 2021-11-23 深圳Tcl数字技术有限公司 一种弹幕编辑方法、智能终端及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120188396A1 (en) * 2011-01-24 2012-07-26 Samsung Electronics Co., Ltd. Digital photographing apparatuses, methods of controlling the same, and computer-readable storage media
CN103327270A (zh) * 2013-06-28 2013-09-25 腾讯科技(深圳)有限公司 一种图像处理方法、装置和终端
CN106803909A (zh) * 2017-02-21 2017-06-06 腾讯科技(深圳)有限公司 一种视频文件的生成方法及终端
CN108924624A (zh) * 2018-08-03 2018-11-30 百度在线网络技术(北京)有限公司 信息处理方法和装置
CN109348120A (zh) * 2018-09-30 2019-02-15 烽火通信科技股份有限公司 一种拍摄方法、图像的显示方法、***及设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3937485A4 *

Also Published As

Publication number Publication date
EP3937485A1 (en) 2022-01-12
EP3937485A4 (en) 2022-01-12
CN114097217A (zh) 2022-02-25
US20210377454A1 (en) 2021-12-02

Similar Documents

Publication Publication Date Title
KR101680714B1 (ko) 실시간 동영상 제공 방법, 장치, 서버, 단말기기, 프로그램 및 기록매체
CN106791893B (zh) 视频直播方法及装置
JP6405470B2 (ja) 動画撮影方法及びその装置、プログラム、及び記憶媒体
KR101910346B1 (ko) 이미지 처리 방법 및 장치
WO2022062896A1 (zh) 直播互动方法及装置
CN106165430A (zh) 视频直播方法及装置
EP3136391A1 (en) Method, device and terminal device for video effect processing
WO2016192325A1 (zh) 视频文件的标识处理方法及装置
EP3258414B1 (en) Prompting method and apparatus for photographing
US11641493B2 (en) Method and electronic device for displaying bullet screens
WO2022198934A1 (zh) 卡点视频的生成方法及装置
US11949979B2 (en) Image acquisition method with augmented reality anchor, device, apparatus and storage medium
JP6333990B2 (ja) パノラマ写真の生成方法および装置
CN109922252B (zh) 短视频的生成方法及装置、电子设备
CN112261481B (zh) 互动视频的创建方法、装置、设备及可读存储介质
WO2017080084A1 (zh) 字体添加方法及装置
WO2020093798A1 (zh) 一种显示目标图像的方法、装置、终端及存储介质
CN109618192B (zh) 播放视频的方法、装置、***和存储介质
CN112788354A (zh) 直播互动方法、装置、电子设备、存储介质及程序产品
JP2017532618A (ja) 情報処理方法及び装置
CN113032627A (zh) 视频分类方法、装置、存储介质及终端设备
CN109145878B (zh) 图像提取方法及装置
CN108986803B (zh) 场景控制方法及装置、电子设备、可读存储介质
CN107105311B (zh) 直播方法及装置
WO2021237744A1 (zh) 拍摄方法及装置

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2020824068

Country of ref document: EP

Effective date: 20201222

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20824068

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE