CN113784058A - Image generation method and device, storage medium and electronic equipment

Image generation method and device, storage medium and electronic equipment

Info

Publication number
CN113784058A
CN113784058A
Authority
CN
China
Prior art keywords
video material
identifier
label
image
user
Prior art date
Legal status
Pending
Application number
CN202111056016.0A
Other languages
Chinese (zh)
Inventor
杨青青
飞苹果
Current Assignee
Shanghai Lairimeng Information Technology Co., Ltd.
Original Assignee
Shanghai Lairimeng Information Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Lairimeng Information Technology Co., Ltd.
Priority to CN202111056016.0A
Publication of CN113784058A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the invention discloses an image generation method and device, a storage medium and an electronic device. The method comprises the following steps: acquiring video material information collected in an interaction process; performing label processing on the video material information to obtain label video materials, wherein the labels set for the video material information comprise a user identifier, a time identifier and a camera identifier; and combining the label video materials based on the labels corresponding to the label video materials to generate a user interaction image. With this technical scheme, label processing is performed on the video material information and the label video materials are combined by using the labels, which improves the generation efficiency of the user interaction image; furthermore, the interaction image is generated automatically during the interaction process, so that the interaction process is recorded as an image and the display effect is improved.

Description

Image generation method and device, storage medium and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to an image generation method, an image generation device, a storage medium and electronic equipment.
Background
With the popularization of various entertainment modes, venues such as amusement parks and room-escape venues have become entertainment choices for many people.
At present, immersive experience venues such as amusement parks and room-escape venues are usually equipped with an automatic photographing system, which can only take photographs at fixed points. For example, when the roller coaster dives downwards, the expression of the tourist is captured, and the captured photo is used as a souvenir.
Because the souvenir obtained by the automatic photographing system is a static picture, the interaction process between the visitor and the scene is difficult to record; the display effect of the picture is therefore relatively poor, and it can hardly satisfy the visitor's commemorative needs.
Disclosure of Invention
The embodiment of the invention provides an image generation method, an image generation device, a storage medium and electronic equipment, which are used for recording an interaction process and improving a display effect.
In a first aspect, an embodiment of the present invention provides an image generating method, including:
acquiring video material information acquired in an interaction process;
performing label processing on the video material information to obtain a label video material, wherein a label set for the video material information comprises a user identifier, a time identifier and a camera identifier;
and combining the label video materials based on the labels corresponding to the label video materials to generate a user interaction image.
In a second aspect, an embodiment of the present invention further provides an image generating apparatus, including:
the information acquisition module is used for acquiring video material information acquired in the interaction process;
the label processing module is used for carrying out label processing on the video material information to obtain a label video material, wherein a label set for the video material information comprises a user identifier, a time identifier and a camera identifier;
and the image generation module is used for combining the label video materials based on the labels corresponding to the label video materials to generate the user interaction image.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the image generation method according to any of the embodiments of the present invention.
In a fourth aspect, the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform the image generation method according to any one of the embodiments of the present invention.
The method comprises: acquiring video material information collected in an interaction process; performing label processing on the video material information to obtain label video materials, wherein the labels set for the video material information comprise a user identifier, a time identifier and a camera identifier; and combining the label video materials based on their corresponding labels to generate the user interaction image. With this technical scheme, label processing is performed on the video material information and the label video materials are combined by using the labels, which improves the generation efficiency of the user interaction image; furthermore, the interaction image is generated automatically during the interaction process, so that the interaction process is recorded as an image and the display effect is improved.
Drawings
In order to more clearly illustrate the technical solutions of the exemplary embodiments of the present invention, a brief description of the drawings used in describing the embodiments is given below. It should be clear that the described drawings show only some, not all, of the embodiments of the invention, and that a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic flowchart illustrating an image generating method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an image generating method according to a second embodiment of the present invention;
fig. 3 is a schematic flowchart of an image generating method according to a third embodiment of the present invention;
fig. 4 is a schematic diagram of an image template according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image generating apparatus according to a fourth embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention.
It should be further noted that, for the convenience of description, only some but not all of the relevant aspects of the present invention are shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
Fig. 1 is a flowchart of an image generating method according to an embodiment of the present invention. This embodiment is applicable to the case where an image is automatically generated during the interaction between a user and a real scene. The method may be executed by the image generating apparatus provided in an embodiment of the present invention; the apparatus may be implemented in software and/or hardware and may be configured on an electronic computing device, for example a terminal and/or a server. The method specifically comprises the following steps:
and S110, acquiring video material information acquired in the interaction process.
The video material information may be one or more videos collected during the interaction between the user and a real scene, and the real scene may include, but is not limited to, an immersive experience scene such as an amusement park, a room-escape venue or a haunted house. For example, the video material information may be a video clip of a tourist while the roller coaster dives downwards, or a video clip in which the user is frightened in a haunted house.
In the embodiment of the present invention, the obtaining of the video material information collected in the interactive process may include: the method comprises the steps that video material information is collected in real time based on video collection equipment, or the video material information is obtained from a preset storage position, or the video material information sent by target equipment is received.
On the basis of the above embodiment, before obtaining the video material information collected in the interactive process, the method further includes: acquiring image shooting information, wherein the image shooting information comprises a camera identifier; and controlling the camera corresponding to the camera identification to shoot based on the camera identification to obtain video material information.
The image shooting information can be understood as trigger information that triggers a camera to shoot, and it may be determined in the following ways: based on preset event information; or based on user position information; or based on face information of the user.
The preset event information can be script events acquired from a dynamic script database. For example, the preset event information may be a lightning special effect, and the image capturing information is generated while the lightning special effect is triggered.
The user position information refers to the current position of the user; in some optional embodiments, the image shooting information is generated when the user arrives at a specified position or enters a preset distance range.
The face information of the user can be the face characteristic information of the user and can be used for identity recognition. Illustratively, a face recognition camera is arranged in the scene, and image shooting information is generated when the face recognition camera detects a face.
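For illustration only, the following Python sketch shows how the three trigger modes described above might be dispatched; all names (CaptureInfo, script_events and so on) are hypothetical and not part of the claimed method.

```python
# Illustrative sketch only; names and signatures are assumptions.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class CaptureInfo:
    camera_id: str  # identifies which camera should start shooting

def capture_info_from_event(event_name: str, script_events: dict) -> Optional[CaptureInfo]:
    """Trigger mode 1: a preset script event (e.g. a lightning effect) fires."""
    camera_id = script_events.get(event_name)
    return CaptureInfo(camera_id) if camera_id else None

def capture_info_from_position(user_pos: Tuple[float, float],
                               target_pos: Tuple[float, float],
                               camera_id: str,
                               max_dist: float = 2.0) -> Optional[CaptureInfo]:
    """Trigger mode 2: the user enters a preset distance range of a target spot."""
    dx, dy = user_pos[0] - target_pos[0], user_pos[1] - target_pos[1]
    return CaptureInfo(camera_id) if (dx * dx + dy * dy) ** 0.5 <= max_dist else None

def capture_info_from_face(face_detected: bool, camera_id: str) -> Optional[CaptureInfo]:
    """Trigger mode 3: a face-recognition camera detects a face."""
    return CaptureInfo(camera_id) if face_detected else None
```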
And S120, performing label processing on the video material information to obtain a label video material, wherein the label set for the video material information comprises a user identifier, a time identifier and a camera identifier.
The tagged video material refers to a video material with a tag, and the tag may include, but is not limited to, a user identifier, a time identifier and a camera identifier. The user identifier may be the identity of a guest or a participant, specifically the ID, nickname or name of a user. The time identifier may be the time at which the video was started or finished, or a timestamp; in some embodiments, the time identifier may also be an ascending number. It can be understood that the time identifier corresponds to a stage of the plot development, so the plot corresponding to the video material information can be confirmed through the time identifier. A scene may contain multiple cameras, each with a unique camera identifier.
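As a minimal sketch of the tag structure just described, the following hypothetical Python data class groups a video material with its user, time and camera identifiers; the field names are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TaggedMaterial:
    """One video material plus the three tags described above (hypothetical names)."""
    path: str            # location of the video file
    user_ids: List[str]  # user identifier(s): ID, nickname or name
    time_id: int         # time identifier; here an ascending plot-stage number
    camera_id: int       # unique number of the camera position that shot the clip
```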
Specifically, in some embodiments, feature extraction may be performed on each image frame in the video material information to obtain a video description text, and the keywords extracted from the video description text are used as tags and set on the corresponding video material in the video material information to obtain a tagged video material.
For example, the video material information may be a video of a visitor exploring a secret room; the video includes, but is not limited to, a face picture of the user, the time, and the camera position number that captured the current picture. The time may be displayed in the upper right corner of the video picture and the camera position number in the upper left corner, an arrangement that reduces occlusion of the video picture. By extracting features from each image frame in the video, a video description text containing the user ID, the time and the camera position number can be obtained; the user ID is used as the user identifier, the time as the time identifier, and the camera position number as the camera identifier, and these identifiers are set on the corresponding video material to obtain the labeled video material.
In another embodiment, the tag processing may be performed in real time during the generation of the video material information, i.e., during the shooting of the video material.
Illustratively, during shooting, the currently captured face image can be matched in a user database through face recognition to obtain the user identifier; the current shooting time is used as the time identifier and the camera position number of the current picture as the camera identifier, and the three identifiers are set on the corresponding video material to obtain the labeled video material.
And S130, combining the label video materials based on the labels corresponding to the label video materials to generate a user interaction image.
The user interaction image can be understood as the finished image content; it can be presented in various formats and published on different social media platforms. The formats may include, but are not limited to, MP4, FLV, RMVB and the like.
Specifically, in some embodiments, the label video materials are combined based on one of the user identifier, the time identifier and the camera identifier of each label video material to generate the user interaction image. Illustratively, the label video materials can be sorted by the value of the time identifier and combined into the user interaction image according to the sorting result. In other embodiments, the label video materials are combined based on several of the user identifiers, time identifiers and camera identifiers. For example, the label video materials may first be sorted by the value of the time identifier, then the materials shot by the camera positions of interest are selected according to the camera identifier, and the finally screened label video materials are combined according to the time-identifier ordering to generate the user interaction image.
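A minimal sketch of this combination step, assuming the hypothetical TaggedMaterial structure above: sort by time identifier, optionally keep only the camera positions of interest, and hand the ordered file list to a video concatenation tool.

```python
def combine_materials(materials, wanted_cameras=None):
    """Order tagged materials by time identifier, optionally keeping only the
    camera positions of interest, and return the clips in playback order."""
    if wanted_cameras is not None:
        materials = [m for m in materials if m.camera_id in wanted_cameras]
    ordered = sorted(materials, key=lambda m: m.time_id)
    return [m.path for m in ordered]  # feed this list to a concatenation tool

# Example: keep only cameras 1 and 3, ordered by plot stage.
# clips = combine_materials(all_materials, wanted_cameras={1, 3})
```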
In some alternative embodiments, after the user interaction image is generated, the user may modify it through a dedicated platform or application. Illustratively, the user's appearance may be modified, such as skin smoothing or face slimming; or the shot may be changed, for example from a front shot to a side shot; or the image type may be modified, for example from a horror type to a suspense type. When the image type is modified, the background music of the user interaction image is changed accordingly, so that the user interaction image stays consistent with its type.
The embodiment of the invention provides an image generation method: video material information collected in an interaction process is acquired; label processing is performed on the video material information to obtain label video materials, wherein the labels set for the video material information comprise a user identifier, a time identifier and a camera identifier; and the label video materials are combined based on their corresponding labels to generate the user interaction image. With this technical scheme, label processing is performed on the video material information so that the label video materials can be combined accurately by using the labels, which improves the generation efficiency of the user interaction image; furthermore, the interaction image is generated automatically during the interaction process, so that the interaction process is recorded as an image and the display effect is improved.
Example two
Fig. 2 is a schematic flowchart of an image generating method according to a second embodiment of the present invention. On the basis of the foregoing embodiment, the step "performing label processing on the video material information to obtain labeled video material" is further refined; for its specific implementation, refer to the detailed description of this technical scheme. Technical terms that are the same as or correspond to those of the above embodiment are not repeated herein. As shown in fig. 2, the method of the embodiment of the present invention specifically includes the following steps:
and S210, acquiring video material information acquired in the interaction process.
And S220, acquiring user information, time information and camera information for shooting corresponding to the video material information.
The user information may be information by which a user can be identified, and may include, but is not limited to, face information, voice information, fingerprint information and the like. In some embodiments, the user information may be obtained by acquisition devices in the scene: face information through a face recognition apparatus; voice information through a microphone; or fingerprint information through a fingerprint collector. In some embodiments, the user information may also be obtained through an RFID identifier: for example, the user's wearable device carries an RFID tag that corresponds uniquely to the user information, and the user information can be acquired by reading the RFID tag.
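For illustration, a tiny sketch of the RFID approach just described, with a hypothetical registry mapping tag codes to user information:

```python
# Hypothetical registry: each wearable RFID tag corresponds uniquely to one user.
RFID_REGISTRY = {
    "04:A3:1B:7F": {"user_id": "a", "name": "Guest A"},
    "04:9C:22:10": {"user_id": "b", "name": "Guest B"},
}

def user_info_from_rfid(rfid_code: str):
    """Resolve user information from a scanned RFID code, or None if unregistered."""
    return RFID_REGISTRY.get(rfid_code)
```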
The time information can be the shooting time of the video material and also can be a timeline node of the plot development. In some embodiments, the time information may be obtained by a clock module of each scene device, including but not limited to a cell phone, a camera, and the like. The camera information may include, but is not limited to, camera location and camera number. The camera position may be any position in the scene, for example, a forward position or a side position. The camera number is a number for the camera for convenience of management, and it is understood that the camera number has a correspondence with the camera position.
S230, generating a user identifier, a time identifier and a camera identifier of the video material information based on the user information, the time information and the shooting camera information, and setting the user identifier, the time identifier and the camera identifier as labels of the video material information to obtain labeled video materials.
In some embodiments, the user information, the time information, and the camera information for shooting can be directly used as the user identifier, the time identifier, and the camera identifier of the video material information; in some embodiments, key information extraction may be performed on the user information, the time information, and the camera information for shooting, and the extracted key information may be used as a user identifier, a time identifier, and a camera identifier, where the key information extraction method may be a natural language processing technique.
After the user identifier, the time identifier, and the camera identifier are obtained, in some embodiments, a mapping relationship table may be established to perform label processing on the video material information, specifically, each video material information is numbered, and a mapping relationship between the number of the video material information and the user identifier, the time identifier, and the camera identifier is established in the relationship table, so that the label processing on the video material information is realized, and the labeled video material is obtained. In some embodiments, the user identification, the time identification, and the camera identification may be stored in a video file of the video material information, resulting in tagged video material.
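The relationship-table variant might look like the following sketch; the field names are hypothetical and the patent does not prescribe a storage format.

```python
def build_tag_table(materials):
    """Number each material and record a number -> tags mapping, mirroring the
    relationship-table approach described above (hypothetical field names)."""
    table = {}
    for number, m in enumerate(materials, start=1):
        table[number] = {
            "user_ids": m.user_ids,
            "time_id": m.time_id,
            "camera_id": m.camera_id,
        }
    return table
```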
And S240, combining the label video materials based on the labels corresponding to the label video materials to generate the user interaction image.
The combination methods include, but are not limited to, directly combining the screened label video materials, and inserting the screened label video materials into an image template for combination.
For example, in some embodiments the label video materials are screened and then directly combined: the materials may be screened according to a label, for instance the user identifier, and the screened materials corresponding to that user identifier are directly combined to generate the user interaction image. In other embodiments the screened label video materials are inserted into an image template: the user identifiers of one or more users in the interaction process are matched against the label video materials to determine the materials corresponding to those user identifiers, and the matched materials are then inserted into the image template according to one or more of their time identifiers and camera identifiers to generate the user interaction image. Inserting the label video materials into the image template by means of the labels achieves accurate insertion of the video materials, so that the storyline of the generated user interaction image is more coherent.
The embodiment of the invention provides an image generation method: video material information collected in an interaction process is acquired; the user information, time information and shooting-camera information corresponding to each piece of video material information are acquired; the user identifier, time identifier and camera identifier of the video material information are generated from the user information, time information and shooting-camera information and set as labels of the video material information to obtain the labeled video materials. In this way, automatic label setting for the video material information is realized and the generation efficiency of labeled video materials is improved.
EXAMPLE III
Fig. 3 is a flowchart illustrating an image generating method according to a third embodiment of the present invention, where the third embodiment of the present invention may be combined with various alternatives in the foregoing embodiments. In the embodiment of the present invention, optionally, the tag set for the video material information further includes at least one extension identifier; correspondingly, the method further comprises the following steps: performing feature recognition in the video material information based on a recognition rule corresponding to the extension identifier, and determining the extension identifier corresponding to the video material information based on a feature recognition result; and combining the label video materials based on the labels corresponding to the label video materials to generate a user interaction image, including: and combining the label video materials based on one or more of the user identification, the time identification, the camera identification and the extension identification of the label video materials to generate a user interaction image.
As shown in fig. 3, the method of the embodiment of the present invention specifically includes the following steps:
and S310, acquiring video material information acquired in the interaction process.
And S320, performing label processing on the video material information to obtain a label video material, wherein the label set for the video material information comprises a user identifier, a time identifier, a camera identifier and an extension identifier.
The extension identifiers can be set according to scenarios, different scenarios can correspond to different extension identifiers, and the extension identifiers can include but are not limited to gender, special props, user scores and the like.
Specifically, in some embodiments, a mapping relationship table may be established to perform label processing on the video material information. For example, as shown in Table 1, each piece of video material information is numbered, and the mapping between the number and the user identifier, time identifier, camera identifier and extension identifier is recorded in the table, thereby realizing the label processing and obtaining the labeled video materials. In the extension identifier column, Male, Female and Both are gender identifiers: Male indicates a male user, Female a female user, and Both that both are present. In some embodiments, the user identifier, time identifier, camera identifier and extension identifier may instead be stored in the video file of the video material information to obtain the labeled video material.
TABLE 1

Number  User identifier  Time identifier  Camera identifier  Extension identifier
1       a                1                1                  Male
2       b                1                2                  Female
3       a, b             2                4                  Both
4       b                3                1                  Female
5       a                3                3                  Male
In some optional embodiments, feature recognition is performed in the video material information based on a recognition rule corresponding to the extension identifier, and the extension identifier corresponding to the video material information is determined based on a feature recognition result.
The recognition rule varies with the extension identifier. Illustratively, when the extension identifier is gender, the recognition rule is a gender recognition rule: feature recognition is performed on the video material information according to facial features or voice timbre to obtain the gender of the user in the video material information, and that gender is used as the extension identifier.
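A sketch of how recognition rules might be keyed to extension identifiers; both rules below are placeholders standing in for real gender or score recognition, and all names are assumptions.

```python
# Placeholder recognizers; a real system might classify gender from facial
# features or voice timbre, and read scores from the game system.
def recognize_gender(clip: dict) -> str:
    return clip.get("gender", "Both")

def recognize_score(clip: dict) -> int:
    return clip.get("score", 0)

# Hypothetical rule registry: the recognition rule changes with the extension identifier.
RECOGNITION_RULES = {
    "gender": recognize_gender,
    "score": recognize_score,
}

def extension_identifier(clip: dict, kind: str):
    """Apply the recognition rule matching the requested extension identifier."""
    return RECOGNITION_RULES[kind](clip)
```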
S330, combining the label video materials based on one or more of the user identification, the time identification, the camera identification and the extension identification of the label video materials to generate a user interaction image.
Specifically, in some embodiments, each tagged video material is combined based on one of a user identifier, a time identifier, a camera identifier, and an extension identifier of each tagged video material, so as to generate a user interaction image. Illustratively, the label video materials can be sorted according to the value of the time identifier, and the label video materials are combined into the user interaction image according to the sorting result; in some embodiments, the tag video materials are combined based on a plurality of user identifiers, time identifiers, camera identifiers, and extension identifiers of the tag video materials to generate the user interaction image. For example, the tag video materials may be sorted according to the numerical value of the time identifier, then the user with the score higher than the preset score may be selected through the extension identifier, and the finally screened tag video materials are combined according to the sorting result of the time identifier to generate the user interaction image.
On the basis of the foregoing embodiment, the generating a user interaction image by combining each of the tag video materials based on one or more of a user identifier, a time identifier, a camera identifier, and an extension identifier of each of the tag video materials includes: matching in each label video material based on the user identification of at least one user in the interaction process, and determining the label video material corresponding to the at least one user identification; and inserting the matched tag video material into an image template based on one or more of the time identifier, the camera identifier and the extension identifier of the matched tag video material to generate a user interactive image.
The image template may be a video template into which the labeled video materials are inserted. In some embodiments, there are multiple types of image templates, such as an artistic type or a cheerful type, which may be set according to the user's preference. In some embodiments, the image template may be selected according to the scenario.
Specifically, user identifications of one or more users in the interaction process are matched in each label video material, the label video material corresponding to the one or more user identifications is determined, and then the label video material obtained through matching is inserted into an image template according to one or more of the time identification, the camera identification and the extension identification of the label video material obtained through matching, so that a user interaction image is generated. In the embodiment, the label video materials are inserted into the image template through the labels, so that the video materials are accurately inserted, and the storyline of the generated user interaction image is more coherent.
On the basis of the above embodiment, the inserting the tag video material obtained by matching into an image template based on one or more of the time identifier, the camera identifier, and the extension identifier of the tag video material obtained by matching, and generating a user interactive image includes: matching one or more of the time identifier, the camera identifier and the extension identifier of each matched label video material based on the video screening conditions at each position to be filled in the image template, and determining a target video material matched at each position to be filled; and inserting the target video material into the corresponding position to be filled in the image template to obtain the user interaction image.
The video screening conditions can be preset in the image template, and different positions to be filled correspond to different video screening conditions.
For example, the video screening condition of a position to be filled may require that the inserted video material has a camera identifier of 1 and a time identifier of 3. When a matched label video material meets the screening condition of the position to be filled, it is determined as the target video material and inserted into the corresponding position in the image template to obtain the user interaction image.
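A sketch of the fill-position matching described above, under the assumption that each slot carries either a preset scenario clip (such as the opening and ending materials discussed next) or a screening condition over the tags; the slot layout and names are hypothetical.

```python
def fill_template(slots, materials):
    """Walk the template's slots in order; emit preset scenario clips as-is and,
    for each to-be-filled position, insert the first material whose tags satisfy
    that position's screening condition. Unmatched positions stay empty."""
    timeline = []
    for slot in slots:
        if "scenario_clip" in slot:          # opening, ending or effect animation
            timeline.append(slot["scenario_clip"])
            continue
        cond = slot["condition"]             # e.g. {"camera_id": 1, "time_id": 3}
        match = next((m for m in materials
                      if all(getattr(m, key) == val for key, val in cond.items())),
                     None)
        if match is not None:
            timeline.append(match.path)
    return timeline

# Example slot layout (hypothetical):
# slots = [{"scenario_clip": "opening.mp4"},
#          {"condition": {"camera_id": 1, "time_id": 3}},
#          {"scenario_clip": "ending.mp4"}]
```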
On the basis of the foregoing embodiment, before inserting the matched tagged video material into the image template based on one or more of the time identifier, the camera identifier, and the extension identifier of the matched tagged video material, the method further includes: and calling a corresponding image template based on the scenario corresponding to the interactive process, wherein the image template comprises scenario materials corresponding to the scenario and used for connecting the label video materials.
The scenario corresponding to the interactive process may be generated from a dynamic scenario database. Specifically, the user's interaction with the scene invokes a story script in the dynamic scenario database, and the story script drives the story forward, generating a scenario highly associated with the user. The scenario material may be video material such as an opening clip, an ending clip or a special-effect animation. As shown in fig. 4, the image template includes scenario materials and positions to be filled for inserting labeled video materials. The labeled video materials include, but are not limited to, system-set shooting material and automatic-detection shooting material: the former is video material shot according to the scenario script, and the latter is video material shot automatically when the user is detected to enter a preset range.
Specifically, the scenario identifiers corresponding to the scenarios are matched in the image template library to obtain the image template matched with the scenarios.
The embodiment of the invention provides an image generation method, which comprises the steps of matching user identifications of one or more users in an interaction process in all label video materials, determining the label video materials corresponding to the one or more user identifications, and then inserting the matched label video materials into an image template according to one or more of time identifications, camera identifications and extension identifications of the matched label video materials, so that the video materials are accurately inserted, and the story plot of the generated user interaction image is more coherent.
Example four
Fig. 5 is a schematic structural diagram of an image generating device according to a fourth embodiment of the present invention, where the image generating device provided in this embodiment may be implemented by software and/or hardware, and may be configured in a terminal and/or a server to implement the image generating method according to the fourth embodiment of the present invention. The device may specifically include: an information acquisition module 410, a label processing module 420 and an image generation module 430.
The information acquisition module 410 is configured to acquire video material information acquired in an interaction process; the tag processing module 420 is configured to perform tag processing on the video material information to obtain a tagged video material, where a tag set for the video material information includes a user identifier, a time identifier, and a camera identifier; the image generating module 430 is configured to combine the tag video materials based on the tags corresponding to the tag video materials, and generate a user interaction image.
The embodiment of the invention provides an image generation device, which is used for acquiring video material information acquired in an interaction process; performing label processing on the video material information to obtain a label video material, wherein the label set for the video material information comprises a user identifier, a time identifier and a camera identifier; and combining the label video materials based on the labels corresponding to the label video materials to generate the user interaction image. Through the technical scheme, the label processing is carried out on the video material information, the label video material is combined by using the label, the generation efficiency of the user interaction image is improved, further, the interaction image is automatically generated in the interaction process, the image recording of the interaction process is realized, and the display effect is improved.
On the basis of any optional technical solution in the embodiment of the present invention, optionally, the tag processing module 420 may be further configured to:
acquiring user information, time information and camera information for shooting corresponding to each video material information;
and generating a user identifier, a time identifier and a camera identifier of the video material information based on the user information, the time information and the camera information for shooting, and setting the user identifier, the time identifier and the camera identifier as a label of the video material information to obtain a labeled video material.
On the basis of any optional technical scheme in the embodiment of the present invention, optionally, the tag set for the video material information further includes at least one extension identifier; correspondingly, the device further comprises:
the extended identifier determining module is used for performing feature recognition in the video material information based on a recognition rule corresponding to the extended identifier and determining the extended identifier corresponding to the video material information based on a feature recognition result;
and, the image generation module 430 may further include:
and the interactive image generating unit is used for combining each label video material based on one or more of the user identification, the time identification, the camera identification and the extension identification of each label video material to generate a user interactive image.
On the basis of any optional technical solution in the embodiment of the present invention, optionally, the interactive image generating unit includes:
the material matching subunit is used for matching in each label video material based on the user identification of at least one user in the interaction process, and determining the label video material corresponding to the at least one user identification;
and the material inserting subunit is used for inserting the matched tag video material into the image template based on one or more of the time identifier, the camera identifier and the extension identifier of the matched tag video material to generate a user interaction image.
On the basis of any optional technical solution in the embodiment of the present invention, optionally, the material insertion subunit may further be configured to:
matching one or more of the time identifier, the camera identifier and the extension identifier of each matched label video material based on the video screening conditions at each position to be filled in the image template, and determining a target video material matched at each position to be filled;
and inserting the target video material into the corresponding position to be filled in the image template to obtain the user interaction image.
On the basis of any optional technical solution in the embodiment of the present invention, optionally, before inserting the matched tag video material into the image template based on one or more of a time identifier, a camera identifier, and an extension identifier of the matched tag video material, the apparatus further includes:
and the template calling subunit is used for calling the corresponding image template based on the scenario corresponding to the interactive process, wherein the image template comprises scenario materials corresponding to the scenario and used for connecting the label video materials.
On the basis of any optional technical solution in the embodiment of the present invention, optionally, before obtaining video material information acquired in an interaction process, the apparatus may further be configured to:
acquiring image shooting information, wherein the image shooting information comprises a camera identifier;
and controlling the camera corresponding to the camera identification to shoot based on the camera identification to obtain video material information.
The image generation device provided by the embodiment of the invention can execute the image generation method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
EXAMPLE five
Fig. 6 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention. FIG. 6 illustrates a block diagram of an exemplary electronic device 12 suitable for use in implementing embodiments of the present invention. The electronic device 12 shown in fig. 6 is only an example and should not bring any limitation to the function and the scope of use of the embodiment of the present invention.
As shown in FIG. 6, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, and commonly referred to as a "hard drive"). Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. System memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 36 having a set (at least one) of program modules 26 may be stored, for example, in system memory 28, such program modules 26 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 26 generally perform the functions and/or methodologies of the described embodiments of the invention.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown in FIG. 6, the network adapter 20 communicates with the other modules of the electronic device 12 via the bus 18. It should be appreciated that although not shown in FIG. 6, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by running a program stored in the system memory 28, for example, to implement an image generation method provided in the present embodiment.
EXAMPLE six
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform an image generating method, including:
acquiring video material information acquired in an interaction process;
performing label processing on the video material information to obtain a label video material, wherein a label set for the video material information comprises a user identifier, a time identifier and a camera identifier;
and combining the label video materials based on the labels corresponding to the label video materials to generate a user interaction image.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for embodiments of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An image generation method, comprising:
acquiring video material information acquired in an interaction process;
performing label processing on the video material information to obtain a label video material, wherein a label set for the video material information comprises a user identifier, a time identifier and a camera identifier;
and combining the label video materials based on the labels corresponding to the label video materials to generate a user interaction image.
2. The method of claim 1, wherein tagging the video material information to obtain tagged video material comprises:
acquiring user information, time information and camera information for shooting corresponding to each video material information;
and generating a user identifier, a time identifier and a camera identifier of the video material information based on the user information, the time information and the camera information for shooting, and setting the user identifier, the time identifier and the camera identifier as a label of the video material information to obtain a labeled video material.
3. The method according to claim 1, wherein the tag set for the video material information further comprises at least one extension identifier;
correspondingly, the method further comprises the following steps:
performing feature recognition in the video material information based on a recognition rule corresponding to the extension identifier, and determining the extension identifier corresponding to the video material information based on a feature recognition result;
and combining the label video materials based on the labels corresponding to the label video materials to generate a user interaction image, including:
and combining the label video materials based on one or more of the user identification, the time identification, the camera identification and the extension identification of the label video materials to generate a user interaction image.
4. The method of claim 3, wherein combining each of the tagged video materials based on one or more of a user identification, a time identification, a camera identification, and an extension identification of each of the tagged video materials to generate a user interaction image comprises:
matching in each label video material based on the user identification of at least one user in the interaction process, and determining the label video material corresponding to the at least one user identification;
and inserting the matched tag video material into an image template based on one or more of the time identifier, the camera identifier and the extension identifier of the matched tag video material to generate a user interactive image.
5. The method of claim 4, wherein the inserting the matched tagged video material into an image template based on one or more of a time identifier, a camera identifier, and an extension identifier of the matched tagged video material, and generating a user interaction image comprises:
matching one or more of the time identifier, the camera identifier and the extension identifier of each matched label video material based on the video screening conditions at each position to be filled in the image template, and determining a target video material matched at each position to be filled;
and inserting the target video material into the corresponding position to be filled in the image template to obtain the user interaction image.
6. The method of claim 4, wherein prior to inserting the matched tagged video material into the image template based on one or more of a time identifier, a camera identifier, and an extension identifier of the matched tagged video material, the method further comprises:
and calling a corresponding image template based on the scenario corresponding to the interactive process, wherein the image template comprises scenario materials corresponding to the scenario and used for connecting the label video materials.
7. The method of claim 1, wherein prior to obtaining video material information captured during an interaction, the method further comprises:
acquiring image shooting information, wherein the image shooting information comprises a camera identifier;
and controlling the camera corresponding to the camera identification to shoot based on the camera identification to obtain video material information.
8. An image generating apparatus, comprising:
the information acquisition module is used for acquiring video material information acquired in the interaction process;
the label processing module is used for carrying out label processing on the video material information to obtain a label video material, wherein a label set for the video material information comprises a user identifier, a time identifier and a camera identifier;
and the image generation module is used for combining the label video materials based on the labels corresponding to the label video materials to generate the user interaction image.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the image generation method of any one of claims 1-7.
10. A storage medium containing computer-executable instructions for performing the image generation method of any one of claims 1 to 7 when executed by a computer processor.
CN202111056016.0A 2021-09-09 2021-09-09 Image generation method and device, storage medium and electronic equipment Pending CN113784058A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111056016.0A CN113784058A (en) 2021-09-09 2021-09-09 Image generation method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN113784058A 2021-12-10

Family

ID=78841991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111056016.0A Pending CN113784058A (en) 2021-09-09 2021-09-09 Image generation method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113784058A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103024447A (en) * 2012-12-31 2013-04-03 合一网络技术(北京)有限公司 Method and server capable of achieving mobile end editing and cloud end synthesis of multiple videos shot in same place and at same time
CN108124187A (en) * 2017-11-24 2018-06-05 互影科技(北京)有限公司 The generation method and device of interactive video
CN108769560A (en) * 2018-05-31 2018-11-06 广州富勤信息科技有限公司 The production method of medelling digitized video under a kind of high velocity environment
CN110532426A (en) * 2019-08-27 2019-12-03 新华智云科技有限公司 It is a kind of to extract the method and system that Multi-media Material generates video based on template
CN110855904A (en) * 2019-11-26 2020-02-28 Oppo广东移动通信有限公司 Video processing method, electronic device and storage medium
CN113163272A (en) * 2020-01-07 2021-07-23 海信集团有限公司 Video editing method, computer device and storage medium
CN111654619A (en) * 2020-05-18 2020-09-11 成都市喜爱科技有限公司 Intelligent shooting method and device, server and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117041627A (en) * 2023-09-25 2023-11-10 宁波均联智行科技股份有限公司 Vlog video generation method and electronic equipment
CN117041627B (en) * 2023-09-25 2024-03-19 宁波均联智行科技股份有限公司 Vlog video generation method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination