CN115037874A - Photographing method and device and electronic equipment - Google Patents


Publication number
CN115037874A
Authority
CN
China
Prior art keywords
image
input
user
template
target
Prior art date
Legal status
Pending
Application number
CN202210482043.2A
Other languages
Chinese (zh)
Inventor
李莉
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202210482043.2A
Publication of CN115037874A

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 - Details of television systems
    • H04N 5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 - Mixing

Abstract

The application discloses a photographing method and apparatus and an electronic device, and belongs to the field of photographing. The photographing method includes: displaying a co-photographing template in response to a first input of a user on a shooting preview interface, where the co-photographing template includes a first co-photographing region and a second co-photographing region; in response to a second input of the user, associating a target co-photographing region with a target contact, where the target co-photographing region includes at least one of the first co-photographing region and the second co-photographing region; sending first information to the target contact; receiving second information sent by the target contact, and displaying a first image corresponding to the second information in the target co-photographing region; and generating, based on the first image, a co-photographing image corresponding to the co-photographing template.

Description

Photographing method and device and electronic equipment
Technical Field
The application belongs to the field of photographing, and in particular relates to a photographing method, a photographing apparatus, and an electronic device.
Background
Photographing has become an important use scenario for terminal devices, and users place increasing emphasis on the photographing experience.
In the prior art, when a user wants a group photo of several people who are not in the same place, the user can only obtain the other people's photos first and then edit and stitch them together one by one with editing software to produce the final group photo. This operation is very inconvenient.
Disclosure of Invention
Embodiments of the present application aim to provide a photographing method, a photographing apparatus, and an electronic device, which can solve the prior-art problem that generating a co-photographing image is inconvenient to operate.
In a first aspect, an embodiment of the present application provides a photographing method, including:
displaying a co-photographing template in response to a first input of a user on a shooting preview interface, where the co-photographing template includes a first co-photographing region and a second co-photographing region;
in response to a second input of the user, associating a target co-photographing region with a target contact, where the target co-photographing region includes at least one of the first co-photographing region and the second co-photographing region;
sending first information to the target contact;
receiving second information sent by the target contact, and displaying a first image corresponding to the second information in the target co-photographing region;
generating, based on the first image, a co-photographing image corresponding to the co-photographing template.
In a second aspect, an embodiment of the present application provides a photographing apparatus, including:
a co-photographing template display module, configured to display a co-photographing template in response to a first input of a user on a shooting preview interface, where the co-photographing template includes a first co-photographing region and a second co-photographing region;
a contact determination module, configured to associate a target co-photographing region with a target contact in response to a second input of the user, where the target co-photographing region includes at least one of the first co-photographing region and the second co-photographing region;
a first information sending module, configured to send first information to the target contact;
a first image display module, configured to receive second information sent by the target contact and display a first image corresponding to the second information in the target co-photographing region;
a co-photographing image generating module, configured to generate, based on the first image, a co-photographing image corresponding to the co-photographing template.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
According to the photographing method in the embodiments of the application, the co-photographing template is displayed through a first input of the user; a target co-photographing region is then associated with a target contact through a second input of the user, and first information is sent to the target contact; second information sent by the target contact is received, so that the first image returned by the target contact is displayed in the target co-photographing region; and finally the co-photographing image corresponding to the co-photographing template is synthesized based on the first image. The user does not need to synthesize and edit the received first image, so the process of generating the co-photographing image is simple and convenient to operate.
Drawings
Fig. 1 is a first flowchart of a photographing method disclosed in an embodiment of the present application;
Fig. 2 is a second flowchart of a photographing method disclosed in an embodiment of the present application;
Fig. 3 is a third flowchart of a photographing method disclosed in an embodiment of the present application;
Fig. 4 is a first interface schematic diagram of a photographing method disclosed in an embodiment of the present application;
Fig. 5 is a second interface schematic diagram of a photographing method disclosed in an embodiment of the present application;
Fig. 6 is a third interface schematic diagram of a photographing method disclosed in an embodiment of the present application;
Fig. 7 is a fourth interface schematic diagram of a photographing method disclosed in an embodiment of the present application;
Fig. 8 is a fifth interface schematic diagram of a photographing method disclosed in an embodiment of the present application;
Fig. 9 is a fourth flowchart of a photographing method disclosed in an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a photographing apparatus disclosed in an embodiment of the present application;
Fig. 11 is a first schematic diagram of a hardware structure of an electronic device disclosed in an embodiment of the present application;
Fig. 12 is a second schematic diagram of a hardware structure of an electronic device disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are usually of one type, and their number is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The photographing method, the photographing device and the electronic device provided by the embodiment of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The embodiment of the application provides a photographing method, which is shown in fig. 1 and includes:
step 101, responding to a first input of a user to a shooting preview interface, and displaying a photo template, wherein the photo template comprises a first photo area and a second photo area.
The shooting preview interface may be an initial interface after entering the camera application. In this shooting preview interface, each setting control and a shooting preview area are generally included. And displaying a picture shot by the front camera or the rear camera in the shooting preview area.
The first input may be various, such as a click input to a photographic template setting control displayed in the photographic preview interface, a shortcut gesture operation input through the photographic preview interface, a click input in a pop-up photographic template option, and the like.
In this embodiment, the lighting template includes at least two regions: a first and a second co-illumination area. The form of the photographic template may also be varied, for example a square frame-shaped photographic template, a circular photographic template, etc. Taking a frame-shaped photo template as an example, the photo template includes N × M small boxes, and each small box corresponds to one photo region. Since the small boxes are arranged in sequence, the arrangement sequence of the co-reference template area is also determined.
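The N × M grid of ordered regions described above can be pictured as a simple data structure. This is an illustrative sketch only, not code from the patent; the names `Region` and `CoPhotoTemplate` and their fields are assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Region:
    """One co-photographing region (a cell of the template grid)."""
    index: int                     # position in the template's fixed ordering
    contact: Optional[str] = None  # target contact associated by the second input
    image: Optional[bytes] = None  # first image received for this region

@dataclass
class CoPhotoTemplate:
    rows: int
    cols: int
    regions: List[Region] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Boxes are created row by row, which fixes the ordering of the regions.
        self.regions = [Region(index=i) for i in range(self.rows * self.cols)]

# A 2 x 2 frame-shaped template has four ordered regions.
template = CoPhotoTemplate(rows=2, cols=2)
```

Because each cell carries its own index, the later compositing step can rely on this ordering without extra bookkeeping.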
In this step, by generating a co-photographing template that includes a plurality of co-photographing regions, contacts can be associated with the corresponding regions in subsequent steps, providing a basis for the subsequent generation of the co-photographing image.
Step 102: in response to a second input of the user, associate a target co-photographing region with a target contact, where the target co-photographing region includes at least one of the first co-photographing region and the second co-photographing region.
The second input may take various forms, such as a click input or a long-press input by the user on the target co-photographing region.
The target co-photographing region may be all of the co-photographing regions in the template, or only some of them.
When there are multiple target co-photographing regions, the target contact corresponding to each region is determined through the user's second input on each region in turn.
Specifically, when a second input of the user on a target co-photographing region is received, a contact list can be displayed in the shooting preview interface, and a contact is selected from the list, thereby associating the target co-photographing region with the target contact. The contact list may be displayed by opening a corresponding contact application, such as the address book or a chat tool.
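The association step amounts to maintaining a mapping from region to contact. A minimal sketch, assuming a dict-based representation (the patent does not prescribe one):

```python
from typing import Dict

def associate_region(associations: Dict[int, str],
                     region_index: int, contact: str) -> None:
    """Record that a target co-photographing region is associated with a
    target contact, mirroring the second input: tap a region, then pick a
    contact from the list. Re-associating a region overwrites the previous
    selection."""
    associations[region_index] = contact

associations: Dict[int, str] = {}
associate_region(associations, 0, "Contact A")  # first co-photographing region
associate_region(associations, 1, "Contact B")  # second co-photographing region
```

When all target regions have an entry, the first information can be dispatched per contact in the next step.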
Step 103: send first information to the target contact.
In this embodiment, after the target contact associated with a target co-photographing region is determined, the first information is sent to that contact by clicking a confirmation control in the shooting preview interface.
Alternatively, when there are multiple target co-photographing regions, the first information is not sent while the target contact for each region is being determined; instead, once the target contacts for all regions have been determined, the confirmation control in the shooting preview interface is clicked and the first information is sent to all target contacts at the same time.
Specifically, the first information may take various forms, such as a pop-up window or a prompt message. When a target contact receives the first information, the camera application on the contact's terminal can be started automatically to shoot and generate a first image; alternatively, the contact may select the first image from images already stored on the terminal.
Step 104: receive second information sent by the target contact, and display a first image corresponding to the second information in the target co-photographing region.
Specifically, the second information includes the first image returned by the target contact's terminal. The user's terminal parses the second information to obtain the first image.
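One way to picture parsing the second information: a JSON payload carrying the region identifier and a base64-encoded image. The wire format here is purely an assumption for illustration; the patent only states that the second information carries the first image:

```python
import base64
import json
from typing import Tuple

def parse_second_information(payload: str) -> Tuple[int, bytes]:
    """Parse second information returned by a contact's terminal into the
    target region index and the raw bytes of the first image."""
    message = json.loads(payload)
    image = base64.b64decode(message["image_b64"])
    return message["region_index"], image

# A contact's terminal might build a payload like this (hypothetical format):
payload = json.dumps({
    "region_index": 2,
    "image_b64": base64.b64encode(b"\x89PNG...raw bytes").decode("ascii"),
})
region_index, first_image = parse_second_information(payload)
```

The recovered bytes would then be decoded and drawn into the region identified by `region_index`.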
Further, the size of the target co-photographing region can be enlarged or reduced by the user, so that the first image in the region is displayed more clearly.
Step 105: generate, based on the first image, a co-photographing image corresponding to the co-photographing template.
Specifically, this step may be triggered by a click on the photographing control of the shooting preview interface, by a shortcut gesture in the shooting preview interface, by pressing a shortcut physical key of the terminal, and so on.
The generated co-photographing image is synthesized according to the display order of the first images in the target co-photographing regions. Upon the user's triggering input, the template frame is removed, and the first images in the target co-photographing regions are synthesized at their corresponding positions and in their order to generate the co-photographing image.
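The position-and-order synthesis can be sketched with simple grid arithmetic: each region's index determines where its first image lands in the final co-photographing image. The helper name and the cell sizes are illustrative assumptions; real compositing would blit pixel data at these origins:

```python
from typing import Tuple

def cell_origin(index: int, cols: int,
                cell_w: int, cell_h: int) -> Tuple[int, int]:
    """Top-left pixel of a region's cell in the synthesized image.
    Regions are laid out row by row, matching the template's ordering,
    so compositing pastes each first image at this origin."""
    row, col = divmod(index, cols)
    return (col * cell_w, row * cell_h)

# For a 2 x 2 template with 100 x 80 cells, region 3 is the bottom-right cell.
origin = cell_origin(3, cols=2, cell_w=100, cell_h=80)
```

With the origins fixed by the template, removing the frame and pasting each image at its origin reproduces the "corresponding positions and order" behavior described above.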
In the photographing method disclosed by this embodiment, the co-photographing template is displayed through a first input of the user; a target co-photographing region is then associated with a target contact through a second input of the user, and first information is sent to the target contact; second information sent by the target contact is received, so that the first image returned by the target contact is displayed in the target co-photographing region; and finally the co-photographing image corresponding to the co-photographing template is synthesized based on the first image. The user does not need to synthesize and edit the received first image, so the process of generating the co-photographing image is simple and convenient to operate.
In one embodiment of the present application, instead of generating the co-photographing image from the first image alone, the first image may be combined with a second image of the user. Specifically, after step 101, the method further includes:
Step S11: in response to a third input of the user, display the second image in a third co-photographing region.
Here, the co-photographing template further includes a third co-photographing region. The third co-photographing region may be a region other than the first and second co-photographing regions, or it may be the first or second co-photographing region itself.
The third input may be a click input, a long-press input, or the like, by the user on the third co-photographing region.
In one specific usage, step S11 includes: in response to a third input of the user on the third co-photographing region, displaying the image captured by a camera in the third co-photographing region.
In this way, the camera of the user's terminal is woken up by the third input, and the image it captures is displayed in the third co-photographing region. The camera may be the front camera or the rear camera.
In another specific usage, step S11 includes: in response to a third input of the user on the third co-photographing region, displaying an image selected from an album in the third co-photographing region.
In this usage, the third input opens the album application interface, which is exited after the user selects an image.
Correspondingly, step 105 then specifically includes: generating, based on the first image and the second image, the co-photographing image corresponding to the co-photographing template.
In an embodiment of the present application, the first input includes a first sub-input and a second sub-input. Referring to fig. 2, step 101 specifically includes:
Step 201: in response to a first sub-input of the user on a template control, display at least one preset template in the shooting preview interface.
The template control may be a virtual control displayed in the shooting preview interface, and the first sub-input may be a click input on the template control.
Step 202: in response to a second sub-input of the user on the at least one preset template, display the co-photographing template.
The second sub-input may be a sliding input, a click input, or the like, by the user on a preset template.
The co-photographing template may take various forms, such as a square template or a circular template, and the co-photographing regions it contains may also be user-defined.
In addition to fixed template patterns, in one specific mode the user may be allowed to touch-slide in the shooting preview interface to draw an irregular template frame as the template boundary, with irregular region frames inside it, which enhances interaction with the user and increases the user's freedom of operation.
Through steps 201-202, the selection of a co-photographing template is achieved.
In one embodiment of the application, beyond selecting a co-photographing template, the first image may also be adjusted. Correspondingly, after step 104, the method further includes:
Step S21: in response to a fourth input of the user, update display parameter information of the first image, where the display parameter information includes at least one of: position information, size information, and image parameter information.
For example, a shortcut gesture may enlarge or shrink each first image as needed; at least two first images may be swapped to adjust their position information and meet different users' needs; or the tone, filter, and similar parameters of each first image may be adjusted.
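The position and size adjustments just described can be sketched as two small helpers. The names and the dict representation of per-region images are assumptions for illustration:

```python
from typing import Dict, Tuple

def swap_regions(images: Dict[int, bytes], i: int, j: int) -> None:
    """Exchange the first images of two regions (position adjustment)."""
    images[i], images[j] = images.get(j), images.get(i)

def scale_size(size: Tuple[int, int], factor: float) -> Tuple[int, int]:
    """Enlarge (factor > 1) or shrink (factor < 1) a region's display size
    (size adjustment), rounding to whole pixels."""
    w, h = size
    return (round(w * factor), round(h * factor))

images = {0: b"img-a", 1: b"img-b"}
swap_regions(images, 0, 1)          # exchange two first images
new_size = scale_size((100, 80), 1.5)  # enlarge a region by 50%
```

Image parameter adjustments (tone, filters) would act on the pixel data itself and are not sketched here.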
In an embodiment of the present application, the second input includes a third sub-input, a fourth sub-input, and a fifth sub-input. Referring to fig. 3, step 102 specifically includes:
Step 301: in response to a third sub-input of the user on the target co-photographing region, display at least one application icon.
The applications may be various social applications, such as the address book or chat tools.
Step 302: in response to a fourth sub-input of the user on the at least one application icon, display a target contact list.
Step 303: in response to a fifth sub-input of the user on the target contact list, determine the target contact.
Through steps 301-303, the target contact is determined, and the association between the target co-photographing region and the target contact is thereby established.
In one embodiment of the present application, besides generating the co-photographing image, the background image of the co-photographing image may be customized. Specifically, after step 104, the method further includes:
in response to a fifth input of the user on the first image, taking the background image of the first image as the background image of the co-photographing image.
Specifically, the fifth input may be a long-press input on the first image, which pops up the prompt "Use this background as the co-photographing background?"; after the user clicks to confirm, the background image of the first image is used as the background image of the co-photographing image.
Similarly, the background image of the second image may be used as the background of the co-photographing image. Specifically, after the second image is displayed in the third co-photographing region, the method further includes: in response to a sixth input of the user on the second image, taking the background image of the second image as the background image of the co-photographing image.
Specifically, the sixth input may be a long-press input on the second image, which pops up the prompt "Use this background as the co-photographing background?"; after the user clicks to confirm, the background image of the second image is used as the background image of the co-photographing image.
In addition to using the background of the first image or the second image, the background image may also be chosen freely. Specifically, before generating the co-photographing image corresponding to the co-photographing template based on the first image, the method further includes:
Step S31: in response to a seventh input of the user, display a background image selection window.
The seventh input may be a shortcut gesture input, a long-press input, or the like, on the co-photographing template.
Step S32: in response to an eighth input of the user on the background image selection window, determine the background image of the co-photographing image.
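The three background sources above can be pictured as one resolution helper. The precedence order (an explicit custom choice first, then a background taken from the first image, then from the second image) is an assumption for illustration; the patent does not fix an order:

```python
from typing import Optional

def resolve_background(custom_bg: Optional[bytes] = None,
                       first_bg: Optional[bytes] = None,
                       second_bg: Optional[bytes] = None) -> Optional[bytes]:
    """Pick the co-photographing image's background from the three sources
    described in the text: a custom image from the selection window, the
    background of the first image, or the background of the second image."""
    for bg in (custom_bg, first_bg, second_bg):
        if bg is not None:
            return bg
    return None  # no choice made: fall back to the template's default

chosen = resolve_background(first_bg=b"bg-from-first-image")
```

A `None` result leaves the template's own background untouched.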
In a specific embodiment of the present application, referring to fig. 4 to fig. 8, the actual operation flow of the photographing method of this embodiment is shown.
Specifically, referring to fig. 9, the method of this embodiment includes:
Step 901: in response to a first sub-input of the user on the template control 401, display at least one preset template 402 in the shooting preview interface 400, as shown in fig. 4.
Step 902: in response to a second sub-input of the user on the at least one preset template 402, display the co-photographing template 403.
The co-photographing template 403 includes a first co-photographing region 4031, a second co-photographing region 4032, a third co-photographing region 4033, and a fourth co-photographing region 4034.
Step 903: in response to a third input of the user, display the second image 404 in the third co-photographing region 4033, as shown in fig. 5.
Step 904: in response to a second input of the user, associate the first co-photographing region 4031, the second co-photographing region 4032, and the fourth co-photographing region 4034 with their respective target contacts, as shown in fig. 5.
The specific process of determining the contact for each region has been described in detail in the foregoing embodiments and is not repeated here.
Step 905: send first information to the target contacts, receive second information sent by the target contacts, and display the first images 405 corresponding to the second information in the first co-photographing region 4031, the second co-photographing region 4032, and the fourth co-photographing region 4034, respectively, as shown in fig. 6.
Step 906: in response to a fifth input of the user on the first image 405 of the fourth co-photographing region 4034, pop up the prompt "Use this background as the co-photographing background?"; after the user clicks to confirm, use the background image of that first image 405 as the background image of the co-photographing image, as shown in fig. 7.
Step 907: generate the co-photographing image 406 corresponding to the co-photographing template based on the first images 405 and the second image 404, as shown in fig. 8.
According to the photographing method in this embodiment, the co-photographing template 403 is displayed through a first input of the user; the second image 404 is displayed in the third co-photographing region 4033 in response to a third input of the user; the target co-photographing regions are associated with target contacts through a second input of the user, and first information is sent to the target contacts; second information sent by the target contacts is received, so that the first images 405 returned by the contacts are displayed in the target co-photographing regions; and finally the co-photographing image 406 corresponding to the co-photographing template 403 is generated based on the first images 405 and the second image 404. The user does not need to synthesize and edit the received first images, so the process of generating the co-photographing image is simple and convenient to operate.
In the photographing method provided by the embodiments of the present application, the execution subject may be a photographing apparatus. In the embodiments of the present application, a photographing apparatus executing the photographing method is taken as an example to describe the photographing apparatus provided herein.
An embodiment of the present application provides a photographing apparatus, see fig. 10, including:
a co-photographing template display module 1001, configured to display a co-photographing template in response to a first input of a user on a shooting preview interface, where the co-photographing template includes a first co-photographing region and a second co-photographing region;
a contact determination module 1002, configured to associate a target co-photographing region with a target contact in response to a second input of the user, where the target co-photographing region includes at least one of the first co-photographing region and the second co-photographing region;
a first information sending module 1003, configured to send first information to the target contact;
a first image display module 1004, configured to receive second information sent by the target contact and display a first image corresponding to the second information in the target co-photographing region;
a co-photographing image generating module 1005, configured to generate, based on the first image, a co-photographing image corresponding to the co-photographing template.
Optionally, the co-photographing template further includes a third co-photographing region.
The apparatus further includes: a second image display module, configured to display a second image in the third co-photographing region in response to a third input of the user after the co-photographing template is displayed.
The co-photographing image generating module 1005 is then specifically configured to generate, based on the first image and the second image, the co-photographing image corresponding to the co-photographing template.
Optionally, the second image display module is specifically configured to:
in response to a third input of the user on the third co-photographing region, display the image captured by a camera in the third co-photographing region;
or,
in response to a third input of the user on the third co-photographing region, display an image selected from an album in the third co-photographing region.
Optionally, the first input comprises a first sub-input and a second sub-input;
the lighting template display module 1001 is specifically configured to:
responding to a first sub-input of a user to a template control, and displaying at least one preset template in the shooting preview interface;
and responding to a second sub-input of the user to the at least one preset template, and displaying the photo template.
Optionally, the second input comprises a third sub-input, a fourth sub-input, and a fifth sub-input;
the contact determination module 1002 is specifically configured to:
displaying at least one application icon in response to a third sub-input of the user to the target co-illumination area;
displaying a target contact list in response to a fourth sub-input of the at least one application icon by the user;
and responding to a fifth sub-input of the user to the target contact list, and determining the target contact.
Optionally, the apparatus further includes: a parameter information updating module, configured to update display parameter information of the first image in response to a fourth input of the user after the first image corresponding to the second information is displayed in the target co-photographing region, where the display parameter information includes at least one of: position information, size information, and image parameter information.
Optionally, the apparatus further includes: a first background determination module, configured to, after the first image corresponding to the second information is displayed in the target co-photographing region, take the background image of the first image as the background image of the co-photographing image in response to a fifth input of the user on the first image.
Optionally, the apparatus further includes: a second background determination module, configured to, after the second image is displayed in the third co-photographing region, take the background image of the second image as the background image of the co-photographing image in response to a sixth input of the user on the second image.
Optionally, the apparatus further includes: a third background determination module, configured to display a background image selection window in response to a seventh input of the user before the co-photographing image corresponding to the co-photographing template is generated based on the first image, and to determine the background image of the co-photographing image in response to an eighth input of the user on the background image selection window.
The photographing apparatus in the embodiments of the application displays the co-photographing template through a first input of the user; it then associates a target co-photographing region with a target contact through a second input of the user and sends first information to the target contact; it receives second information sent by the target contact, so that the first image returned by the target contact is displayed in the target co-photographing region; and it finally synthesizes the co-photographing image corresponding to the co-photographing template based on the first image. The user does not need to synthesize and edit the received first image, so the process of generating the co-photographing image is simple and convenient to operate.
The photographing apparatus in the embodiment of the present application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); it may also be a server, network-attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like. The embodiments of the present application are not specifically limited in this respect.
The photographing apparatus in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited in this respect.
The photographing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 9 and achieve the same technical effects; to avoid repetition, the details are not described here again.
Optionally, as shown in fig. 11, an embodiment of the present application further provides an electronic device 1100, including a processor 1101 and a memory 1102, where the memory 1102 stores a program or instruction executable on the processor 1101. When the program or instruction is executed by the processor 1101, the steps of the foregoing photographing method embodiments are implemented, with the same technical effects; to avoid repetition, the details are not described here again.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 12 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1200 includes, but is not limited to: radio frequency unit 1201, network module 1202, audio output unit 1203, input unit 1204, sensors 1205, display unit 1206, user input unit 1207, interface unit 1208, memory 1209, and processor 1210.
Those skilled in the art will appreciate that the electronic device 1200 may further comprise a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 1210 via a power management system, which manages charging, discharging, and power consumption. The electronic device structure shown in fig. 12 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange components differently, which is not described in detail here.
Wherein the display unit 1206 is configured to: display a co-photo template in response to a first input from the user on a shooting preview interface, wherein the co-photo template comprises a first co-photo area and a second co-photo area;
the processor 1210 is configured to: associate a target co-photo area with a target contact in response to a second input from the user, wherein the target co-photo area comprises at least one of: the first co-photo area and the second co-photo area;
the processor 1210 is configured to: send first information to the target contact;
the processor 1210 is configured to: receive second information sent by the target contact; the display unit 1206 is configured to: display a first image corresponding to the second information in the target co-photo area;
the processor 1210 is configured to: generate a co-photo image corresponding to the co-photo template based on the first image.
The electronic device in the embodiment of the present application displays a co-photo template in response to a first input from the user, associates a target co-photo area with a target contact in response to a second input from the user, and sends first information to the target contact; on receiving second information sent by the target contact, it displays the first image returned by the target contact in the target co-photo area, and finally generates the co-photo image corresponding to the co-photo template based on the first image. The user does not need to composite or edit the received first image, so the process of generating the co-photo image is simple and convenient.
Optionally, the display unit 1206 is further configured to: displaying a second image in the third photographic region in response to a third input by the user after the displaying of the photographic template;
the processor 1210 is specifically configured to: generating a photographic image corresponding to the photographic template based on the first image and the second image.
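Combining the first and second images into one co-photo image can be illustrated with a minimal side-by-side paste. The nested-list "pixel buffers", the 2x2 image sizes, and the horizontal layout are all assumptions for the sketch; the patent does not prescribe a particular layout or pixel format.

```python
# Illustrative composition of two co-photo areas into one image, using
# nested lists as a stand-in for real pixel buffers. The horizontal
# layout is an assumed example, not the patent's required layout.

def compose_side_by_side(first, second):
    """Paste two equal-height images left/right into one canvas."""
    if len(first) != len(second):
        raise ValueError("co-photo areas must share a height")
    # Each output row is the first image's row followed by the second's.
    return [row_a + row_b for row_a, row_b in zip(first, second)]

first_image = [[1, 1], [1, 1]]    # 2x2 area filled by the remote contact
second_image = [[2, 2], [2, 2]]   # 2x2 area filled locally
co_photo = compose_side_by_side(first_image, second_image)
# co_photo is a 2x4 grid: each row is [1, 1, 2, 2]
```

A real implementation would instead crop and paste decoded bitmaps into the template's area rectangles, but the invariant is the same: each co-photo area contributes its image to a disjoint region of the final canvas.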
Optionally, the display unit 1206 is specifically configured to: in response to a third input from the user on the third co-photo area, display an image captured by a camera in the third co-photo area;
or,
in response to a third input from the user on the third co-photo area, display an image selected from an album in the third co-photo area.
Optionally, the first input includes a first sub-input and a second sub-input, and the display unit 1206 is specifically configured to: in response to a first sub-input from the user on a template control, display at least one preset template in the shooting preview interface;
and in response to a second sub-input from the user on the at least one preset template, display the co-photo template.
Optionally, the second input includes a third sub-input, a fourth sub-input, and a fifth sub-input, and the display unit 1206 is specifically configured to: display at least one application icon in response to a third sub-input from the user on the target co-photo area;
the display unit 1206 is specifically configured to: display a target contact list in response to a fourth sub-input from the user on the at least one application icon;
the processor 1210 is specifically configured to: determine the target contact in response to a fifth sub-input from the user on the target contact list.
Optionally, the processor 1210 is further configured to: after the first image corresponding to the second information is displayed in the target co-photo area, update display parameter information of the first image in response to a fourth input from the user, wherein the display parameter information comprises at least one of: position information, size information, and image parameter information, so that different display requirements of the user for the first image can be met.
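The display-parameter update described above is a partial update: a fourth input may change the position, the size, or an image parameter independently while leaving the rest untouched. A minimal sketch, with field names (`position`, `size`, `brightness`) chosen purely for illustration:

```python
# Hypothetical sketch of the display-parameter update: each field can be
# changed on its own in response to a user input. Field names are assumed.

DEFAULTS = {"position": (0, 0), "size": (100, 100), "brightness": 1.0}

def update_display_params(current, **changes):
    """Return a new parameter dict with only the requested fields changed."""
    unknown = set(changes) - set(current)
    if unknown:
        raise KeyError(f"unsupported display parameters: {sorted(unknown)}")
    updated = dict(current)
    updated.update(changes)
    return updated

params = update_display_params(DEFAULTS, position=(20, 40))   # e.g. a drag
params = update_display_params(params, size=(150, 150))       # e.g. a pinch
```

Returning a fresh dict rather than mutating in place keeps each fourth input an independent, undoable step, though the embodiment does not mandate either choice.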
Optionally, the processor 1210 is further configured to: after the first image corresponding to the second information is displayed in the target co-photo area, take the background image of the first image as the background image of the co-photo image in response to a fifth input from the user on the first image, so that a user-defined background for the co-photo image is realized and the diversity of the generated co-photo images is improved.
Optionally, the processor 1210 is further configured to: after the second image is displayed in the third co-photo area, take the background image of the second image as the background image of the co-photo image in response to a sixth input from the user on the second image, so that a user-defined background for the co-photo image is realized and the diversity of the generated co-photo images is improved.
Optionally, the display unit 1206 is further configured to: before the co-photo image corresponding to the co-photo template is generated based on the first image, display a background image selection window in response to a seventh input from the user;
the processor 1210 is further configured to: determine the background image of the co-photo image in response to an eighth input from the user on the background image selection window, so that a user-defined background for the co-photo image is realized and the diversity of the generated co-photo images is improved.
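The background of the co-photo image can thus come from two mutually exclusive paths: reuse the background of an area image (fifth/sixth input) or pick one from a selection window (seventh/eighth input). A small sketch of that dispatch; the function name and the string values are assumptions, not patent terminology:

```python
# Illustrative dispatch between the background-selection paths described
# above. All names and values here are assumed for the sketch.

def choose_background(template, *, from_image=None, window_choice=None):
    """Pick the co-photo background from an area image or a picker window."""
    if from_image is not None and window_choice is not None:
        raise ValueError("only one background source may be chosen")
    if from_image is not None:          # fifth/sixth input path
        return {"background": from_image, "source": "area-image"}
    if window_choice is not None:       # seventh/eighth input path
        return {"background": window_choice, "source": "selection-window"}
    return {"background": template.get("default"), "source": "default"}

bg = choose_background({"default": "plain.png"}, from_image="first_bg.jpg")
```

Falling back to the template's own default background when the user gives neither input is an added assumption; the embodiment only describes the two explicit paths.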
It should be understood that, in the embodiment of the present application, the input unit 1204 may include a graphics processing unit (GPU) 12041 and a microphone 12042; the graphics processing unit 12041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 1206 may include a display panel 12061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1207 includes at least one of a touch panel 12071 and other input devices 12072. The touch panel 12071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 12072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 1209 may be used to store software programs as well as various data. The memory 1209 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system and the application programs or instructions (such as a sound playing function and an image playing function) required for at least one function. Further, the memory 1209 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synclink DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 1209 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1210 may include one or more processing units; optionally, processor 1210 integrates an application processor, which primarily handles operations involving the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It is to be appreciated that the modem processor described above may not be integrated into processor 1210.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the foregoing photographing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the foregoing photographing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-a-chip, etc.
The embodiment of the present application provides a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the above-mentioned embodiment of the photographing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. A method of taking a picture, comprising:
displaying a co-photo template in response to a first input from a user on a shooting preview interface, wherein the co-photo template comprises a first co-photo area and a second co-photo area;
in response to a second input from the user, associating a target co-photo area with a target contact, wherein the target co-photo area comprises at least one of: the first co-photo area and the second co-photo area;
sending first information to the target contact;
receiving second information sent by the target contact, and displaying a first image corresponding to the second information in the target co-photo area; and
generating a co-photo image corresponding to the co-photo template based on the first image.
2. The photographing method according to claim 1, wherein the co-photo template further comprises a third co-photo area;
after the displaying of the co-photo template, the method further comprises:
displaying a second image in the third co-photo area in response to a third input from the user;
and the generating of the co-photo image corresponding to the co-photo template based on the first image comprises:
generating the co-photo image corresponding to the co-photo template based on the first image and the second image.
3. The photographing method according to claim 2, wherein the displaying of the second image in the third co-photo area in response to the third input from the user comprises:
in response to a third input from the user on the third co-photo area, displaying an image captured by a camera in the third co-photo area;
or,
in response to a third input from the user on the third co-photo area, displaying an image selected from an album in the third co-photo area.
4. The photographing method according to claim 1, wherein the first input comprises a first sub-input and a second sub-input;
the displaying of the co-photo template in response to the first input from the user on the shooting preview interface comprises:
in response to a first sub-input from the user on a template control, displaying at least one preset template in the shooting preview interface;
and in response to a second sub-input from the user on the at least one preset template, displaying the co-photo template.
5. The photographing method according to claim 1, wherein the second input comprises a third sub-input, a fourth sub-input, and a fifth sub-input;
the associating of the target co-photo area with the target contact in response to the second input from the user comprises:
displaying at least one application icon in response to a third sub-input from the user on the target co-photo area;
displaying a target contact list in response to a fourth sub-input from the user on the at least one application icon;
and determining the target contact in response to a fifth sub-input from the user on the target contact list.
6. The photographing method according to claim 1, wherein after the first image corresponding to the second information is displayed in the target co-photo area, the method further comprises:
updating display parameter information of the first image in response to a fourth input from the user, the display parameter information comprising at least one of: position information, size information, and image parameter information.
7. The photographing method according to claim 1, wherein after the first image corresponding to the second information is displayed in the target co-photo area, the method further comprises:
in response to a fifth input from the user on the first image, taking a background image of the first image as a background image of the co-photo image.
8. The photographing method according to claim 2, wherein after the second image is displayed in the third co-photo area, the method further comprises:
in response to a sixth input from the user on the second image, taking a background image of the second image as a background image of the co-photo image.
9. The photographing method according to claim 1, wherein before the generating of the co-photo image corresponding to the co-photo template based on the first image, the method further comprises:
displaying a background image selection window in response to a seventh input from the user;
and determining a background image of the co-photo image in response to an eighth input from the user on the background image selection window.
10. A photographing apparatus, comprising:
a co-photo template display module, configured to display a co-photo template in response to a first input from a user on a shooting preview interface, wherein the co-photo template comprises a first co-photo area and a second co-photo area;
a contact determination module, configured to associate a target co-photo area with a target contact in response to a second input from the user, wherein the target co-photo area comprises at least one of: the first co-photo area and the second co-photo area;
a first information sending module, configured to send first information to the target contact;
a first image display module, configured to receive second information sent by the target contact and display a first image corresponding to the second information in the target co-photo area;
and a co-photo image generation module, configured to generate a co-photo image corresponding to the co-photo template based on the first image.
11. An electronic device comprising a processor and a memory, said memory storing a program or instructions executable on said processor, said program or instructions, when executed by said processor, implementing the steps of the photographing method according to any one of claims 1-9.
CN202210482043.2A 2022-05-05 2022-05-05 Photographing method and device and electronic equipment Pending CN115037874A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210482043.2A CN115037874A (en) 2022-05-05 2022-05-05 Photographing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN115037874A true CN115037874A (en) 2022-09-09

Family

ID=83118945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210482043.2A Pending CN115037874A (en) 2022-05-05 2022-05-05 Photographing method and device and electronic equipment


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115052107A (en) * 2022-06-07 2022-09-13 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination