CN112449098A - Shooting method, device, terminal and storage medium

Info

Publication number
CN112449098A
Authority
CN
China
Prior art keywords
curve
marking
face
image
face image
Prior art date
Legal status
Granted
Application number
CN201910809688.0A
Other languages
Chinese (zh)
Other versions
CN112449098B (en)
Inventor
汤海燕
郑兆廷
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910809688.0A
Priority to CN202310312887.7A (published as CN116320721A)
Publication of CN112449098A
Application granted
Publication of CN112449098B
Status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the field of photography technologies, and in particular to a shooting method, an apparatus, a terminal, and a storage medium. The method comprises the following steps: determining a target template image, where the target template image comprises a first face image; capturing a current image, where the current image comprises a second face image; determining a first marking curve pointing to the first face image, where the first marking curve is used to identify the angle of the face in the first face image within the target template image; generating a second marking curve according to the second face image and a preset marking curve construction method, where the second marking curve is used to identify the angle of the face in the second face image within the current image; and performing shooting processing when the first marking curve matches the second marking curve. The scheme of the invention not only simplifies the operation of imitation photographing but also improves its effect, giving a better user experience.

Description

Shooting method, device, terminal and storage medium
Technical Field
The present invention relates to the field of photography technologies, and in particular, to a shooting method, an apparatus, a terminal, and a storage medium.
Background
With the development of mobile communication technology and the popularization of intelligent terminal devices, intelligent terminal devices have become one of the indispensable tools in people's lives. Owing to the improvement of front-camera pixels on intelligent terminal devices and the diversification of social products and platforms, users increasingly like to use terminal devices for self-portraits and enjoy sharing them on social products and platforms. During self-shooting, a user usually not only shoots from the front but also shoots from multiple angles, for example raising the terminal device and shooting from above, or shooting only one side of the face. However, in order to obtain a self-portrait with good expressiveness, the user often has to continuously adjust the self-shooting angle and expression, which wastes time; even after many adjustments, a good shooting result may still not be obtained.
Disclosure of Invention
The present invention provides a shooting method, an apparatus, a terminal, and a storage medium, which can improve the convenience of the shooting operation and optimize the shooting effect.
In one aspect, the present invention provides a shooting method, including:
determining a target template image, wherein the target template image comprises a first face image;
capturing a current image, wherein the current image comprises a second face image;
determining a first marking curve pointing to the first face image, wherein the first marking curve is used for identifying the angle of the face in the first face image in the target template image;
generating a second marking curve according to the second face image and a preset marking curve construction method, wherein the second marking curve is used for identifying the angle of the face in the second face image in the current image;
and when the first marking curve is matched with the second marking curve, shooting processing is carried out.
In another aspect, the present invention provides a photographing apparatus including:
the target template image determining module is used for determining a target template image, and the target template image comprises a first face image;
the current image acquisition module is used for capturing a current image, and the current image comprises a second face image;
a first marker curve determining module, configured to determine a first marker curve pointing to the first face image, where the first marker curve is used to identify an angle of a face in the first face image in the target template image;
a second marking curve generating module, configured to generate a second marking curve according to the second face image and a preset marking curve constructing method, where the second marking curve is used to identify an angle of a face in the second face image in the current image;
and the shooting processing module is used for carrying out shooting processing when the first marking curve is matched with the second marking curve.
In another aspect, the present invention provides a terminal, including a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the above-mentioned shooting method.
In another aspect, the present invention provides a computer-readable storage medium having at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, loaded and executed by a processor to implement the above-mentioned photographing method.
The shooting method, apparatus, terminal, and storage medium of the present invention have the following beneficial effects:
in the shooting process, a second marking curve is generated for the current image in real time, compared with the first marking curve of the target template image, and shooting processing is performed when the two match. Because the second marking curve identifies the angle of the face in the second face image within the current image and the first marking curve identifies the angle of the face in the first face image within the target template image, performing shooting processing when the curves match ensures that the angle of the face in the captured picture is close to or consistent with the angle of the face in the target template image, which improves the effect of imitation shooting. Meanwhile, because the target template image provides the user with an adjustment reference for the face angle during shooting and the face angle is analyzed and matched automatically, the operation of imitation shooting is greatly simplified, its difficulty is reduced, and the user experience is better.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an interaction scenario between a user and a terminal according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a shooting method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a method for constructing a marking curve according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating a method of photographing processing according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a method for determining whether two curves match according to an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating a principle of determining whether two curves match according to an embodiment of the present invention;
fig. 7 is a schematic view of an application scenario of the shooting method provided by the embodiment of the invention;
fig. 8 is a schematic flowchart of a front end interacting with a background to implement a shooting method according to an embodiment of the present invention;
FIG. 9 is a schematic diagram illustrating a process of taking a picture by using the shooting method according to the embodiment of the present invention;
fig. 10 is a schematic structural diagram of a shooting device according to an embodiment of the present invention;
fig. 11 is a block diagram of a hardware structure of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to make the objects, technical solutions and advantages disclosed in the embodiments of the present invention more clearly apparent, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the embodiments of the invention and are not intended to limit the embodiments of the invention. First, the related art and related concepts related to the embodiments of the present invention are described:
APP: abbreviation of Application, mainly referring to software installed on a smartphone, which remedies the shortcomings of the original system, adds personalization, and provides a richer experience for the user.
Mask (shading): refers to an image that is overlaid on top of the screen.
Self-photographing: a user taking a picture of themselves with the front camera of a mobile phone.
Face feature points: when a portrait photo or image is recognized, the system automatically identifies the face contour and the facial features, marks their edges with marker points, and produces a point map of the face and facial features, typically containing 88 points.
For convenience of explaining the advantages of the method in the embodiments of the present invention, before the detailed description of the technical solution, the related content of the prior art is first described:
with the diversification of social products and social platforms, more and more users like to record daily life with self-portraits and share them on social platforms. However, a user's expression and face angle during self-shooting tend to be monotonous, and the expressiveness of the self-portrait is insufficient. To help users find more self-portrait expressions and angles, show more diversified records of life scenes, and discover a more beautiful and different self, some photographing software has gradually added functions that assist self-shooting.
For example, one APP provides a "composition" function for self-shooting, offering two imitation modes: a mask mode and a wireframe mode. When the user takes a picture, a composition template is overlaid on the user's viewfinder as a semi-transparent mask or a wireframe, and the user imitates the shot by aligning with the composition template. However, neither mode makes it easy for the user to align the face and obtain a well-expressed self-portrait. In the mask mode, the composition template, as a semi-transparent mask, seriously occludes the user's face, so the user cannot clearly see face-angle and expression details. In the wireframe mode, the composition template appears only as a wireframe that roughly marks a cross of straight lines at the center of the face; it cannot reflect face-angle differences such as looking up, looking down, slightly left, or slightly right, so the image cannot be made fully consistent with the composition template. Moreover, when the user confirms the shot, they must look at the screen to find the shutter button, and the originally imitated gaze cannot be kept.
In view of the defects of the prior art, the embodiment of the present invention provides a shooting scheme that generates two sets of facial-feature positioning marks, one from the face in the target template image and one from the face in the current image, and automatically performs shooting processing when the two sets match; because the angle of the face in the current image is then consistent with the angle of the face in the target template image, the expressiveness of the picture is improved. The technical solution in the embodiments of the present invention is described clearly and completely below with reference to the accompanying drawings.
Referring to fig. 1, a schematic diagram of an interaction scenario between a user 10 and a terminal 20 provided by an embodiment of the present application is shown; the terminal 20 and the user 10 can interact. The terminal 20 may be a smart phone, a camera, a desktop computer, a tablet computer, a notebook computer, a digital assistant, a smart wearable device, or another type of physical device; smart wearable devices may include smart bracelets, smart watches, smart glasses, smart helmets, and the like. Of course, the terminal 20 is not limited to an electronic device as a physical entity; it may also be software running in an electronic device. For example, the terminal 20 may be a web page provided to the user by a service provider such as WeChat or Weibo, or an application provided to the user by such a service provider.
The terminal 20 may include a camera, a display screen, a storage device, and a processor connected by a data bus. The display screen is used for displaying data such as self-portrait images and may be the touch screen of a mobile phone or tablet computer. The storage device is used for storing the program code and data of the shooting apparatus; it may be the internal memory of the terminal 20, or a storage device such as a smart media card, a secure digital card, or a flash memory card. The processor may be single-core or multi-core.
Fig. 2 is a schematic flowchart of a shooting method provided by an embodiment of the present invention, which may be executed by the terminal 20. This specification presents the method's operation steps as in the embodiment or flowchart, but more or fewer steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiment is merely one of many possible execution orders and does not represent the only one; in actual execution, the system or client product may execute the steps sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment) according to the methods shown in the embodiments or figures. Referring to fig. 2, the shooting method includes:
s201, determining a target template image, wherein the target template image comprises a first face image.
The embodiment of the present invention provides a scheme for taking a picture by imitating a target template image, which gives the user a reference for the photographing posture and expression. The target template image may include a background image and a first face image, and the user imitates the first face image when adjusting the photographing angle and expression. In practice, the target template image may be an image automatically configured by the application software or an image selected by the user. In one possible embodiment, the background automatically allocates a template image for the user to imitate, and this template image is the target template image. Specifically, the background may collect the user's basic information, such as gender, character, favorite photographing style, and other personalized information, and configure the target template image accordingly.
In another possible embodiment, a plurality of template images may be provided for the user to choose from, and the template image selected by the user is used as the target template image. This specifically includes: displaying the template images; acquiring the user's selection operation on a template image; and taking the template image corresponding to the selection operation as the target template image. For example, a plurality of template images are displayed on the screen, a touch operation by the user on the screen is received, and the template image corresponding to the touch operation is used as the target template image selected by the user. A minimal sketch of both embodiments follows.
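As an illustration of the two template-determination embodiments above, the following is a minimal Python sketch; the Template type, its tags field, and the function name are assumptions of this sketch, not structures defined by this disclosure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Template:
    image_path: str                 # path to the template image file
    tags: frozenset = frozenset()   # illustrative style tags, e.g. {"side-face"}

def choose_target_template(templates, selected_index=None, user_profile=None):
    """Return the target template image per S201."""
    if selected_index is not None:          # user tapped a displayed template
        return templates[selected_index]
    if user_profile:                        # background auto-configuration case
        style = user_profile.get("style")
        for t in templates:
            if style in t.tags:             # match on personalized style info
                return t
    return templates[0]                     # fallback default

# Usage: pick the template matching a user's preferred style.
catalog = [Template("front_smile.jpg", frozenset({"smile"})),
           Template("side_glance.jpg", frozenset({"side-face"}))]
print(choose_target_template(catalog, user_profile={"style": "side-face"}).image_path)
```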
S203, capturing a current image, wherein the current image comprises a second face image.
The current image is a real-time image of the shooting object acquired by a camera device, which may be a standalone camera, a camera integrated into a smart phone, or other equipment with an image-capture function. The current image may include a background image and a second face image; the user corresponding to the second face image imitates the first face image in the target template image to adjust the shooting angle and expression, so as to obtain an image whose expressive effect is similar to that of the target template image.
S205, determining a first marking curve pointing to the first face image, wherein the first marking curve is used for identifying the angle of the face in the first face image in the target template image.
Imitation shooting of a face image may involve both the face angle and the facial expression, and a marking curve is adopted to identify the angle of the face in an image. Specifically, a first marking curve is adopted to identify the angle, within the target template image, of the face in the first face image. The method for determining the first marking curve may include: acquiring the first marking curve corresponding to the first face image in the target template image according to a mapping relation between the target template image and the first marking curve; or generating the first marking curve from the first face image using the marking curve construction method.
In the embodiment of the present invention, the first marking curve can be generated in real time from the first face image, or generated in advance from the first face image, with a mapping relation then established between the first marking curve and the target template image to which the first face image belongs, so that the first marking curve corresponding to the target template image can later be obtained directly from that mapping relation. Whether generated in real time or in advance, the first marking curve may be obtained by the marking curve construction method, which is described in detail below.
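The following Python sketch illustrates the two acquisition paths just described, a mapping lookup with a fall-back to real-time construction; the cache dictionary and all names are illustrative assumptions, and construct_marker_curve stands for the full fig. 3 pipeline (feature-point analysis followed by curve generation).

```python
_curve_by_template = {}   # mapping: template id -> precomputed first marking curve

def first_marking_curve(template_id, first_face_image, construct_marker_curve):
    """Return the first marking curve via the mapping if present; otherwise
    generate it in real time and establish the mapping (both paths of S205)."""
    curve = _curve_by_template.get(template_id)
    if curve is None:
        curve = construct_marker_curve(first_face_image)  # real-time generation
        _curve_by_template[template_id] = curve           # establish the mapping
    return curve
```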
Fig. 3 is a schematic flow chart of a method for constructing a marker curve according to an embodiment of the present invention, please refer to fig. 3, where the method for constructing a marker curve may include:
s301, obtaining a target face image of the marking curve to be determined.
S303, carrying out feature point analysis on the target face image to determine a target feature point sequence of the target face image.
Specifically, a face feature point detection technology may be adopted to perform face recognition on the target face image and determine the positions of the key regions of the face. Face feature point detection, also called face key point detection, localization, or face alignment, refers to locating the key regions of a face, including the eyebrows, eyes, nose, mouth, and face contour, given a face image. Detection methods for face feature points fall roughly into three categories: model-based methods such as ASM (Active Shape Model) and AAM (Active Appearance Model); CPR (Cascaded Pose Regression), based on cascaded shape regression; and deep-learning-based methods. The embodiment of the present invention does not limit the specific method for identifying face feature points; any method may be adopted.
In one possible embodiment, this step may include: performing face recognition on the target face image with a face feature point detection algorithm to obtain a feature point set for the face in the target face image, the set including feature points corresponding to the eyebrows, nose, and mouth; taking from this set the feature points corresponding to the eyebrow ends, eyebrow midpoints, nose tip, and lip midpoint as the target feature points; sorting the target feature points corresponding to the eyebrow ends and eyebrow midpoints transversely to obtain a transverse-curve feature point sequence, and sorting the target feature points corresponding to the nose tip and lip midpoint longitudinally to obtain a longitudinal-curve feature point sequence; and taking the transverse-curve feature point sequence and the longitudinal-curve feature point sequence together as the target feature point sequence.
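A minimal Python sketch of this feature-point grouping follows; the landmark key names are assumptions of the sketch, since any face feature-point detector (ASM, AAM, CPR, or deep-learning based) may supply the points.

```python
import numpy as np

def target_feature_sequences(landmarks):
    """Split detected landmarks into the two ordered sequences of S303.

    `landmarks` is assumed to map illustrative key names to (x, y) points
    returned by a face feature-point detector; this disclosure does not fix
    the key names.
    """
    # Transverse sequence: eyebrow ends and eyebrow midpoints, sorted left
    # to right by x coordinate.
    transverse = sorted(
        [landmarks["left_brow_end"], landmarks["left_brow_mid"],
         landmarks["right_brow_mid"], landmarks["right_brow_end"]],
        key=lambda p: p[0])
    # Longitudinal sequence: nose tip and lip midpoint, sorted top to
    # bottom by y coordinate (image y grows downward).
    longitudinal = sorted(
        [landmarks["nose_tip"], landmarks["lip_mid"]],
        key=lambda p: p[1])
    return np.array(transverse), np.array(longitudinal)
```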
S305, generating a marking curve of the target face image according to the target characteristic point sequence.
The target feature point sequence may include a first feature point sequence and a second feature point sequence: the first may include feature points corresponding to the eyebrows of the face in the target face image, and the second may include feature points corresponding to its nose and mouth. Generating the marking curve of the target face image from the target feature point sequence may include: generating a transverse curve from the first feature point sequence; generating a longitudinal curve from the second feature point sequence; and combining the transverse curve and the longitudinal curve to obtain the marking curve. In one possible embodiment, combining the transverse curve with the longitudinal curve may include extending the longitudinal curve until it intersects the transverse curve.
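The curve generation and combination of S305 might look as follows in Python; fitting a quadratic through the eyebrow points and a straight segment through the nose tip and lip midpoint are choices of this sketch, since the disclosure only requires curves through the respective sequences.

```python
import numpy as np

def generate_marker_curve(transverse_pts, longitudinal_pts):
    """S305 sketch: fit the transverse and longitudinal branches and combine them."""
    tx, ty = transverse_pts[:, 0], transverse_pts[:, 1]
    trans_poly = np.polyfit(tx, ty, 2)        # transverse branch: y = f(x) along the brows
    lx, ly = longitudinal_pts[:, 0], longitudinal_pts[:, 1]
    long_poly = np.polyfit(ly, lx, 1)         # longitudinal branch: x = g(y), nose tip to lips
    # Combine per the embodiment: extend the longitudinal branch upward until
    # it meets the transverse branch (coarse numerical intersection search).
    ys = np.linspace(ty.min() - 80, ly.max(), 400)
    xs = np.polyval(long_poly, ys)
    y_cross = ys[np.argmin(np.abs(np.polyval(trans_poly, xs) - ys))]
    return {"transverse": trans_poly, "longitudinal": long_poly,
            "cross_y": float(y_cross)}
```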
The generating of the first marking curve from the first face image may include: performing characteristic point analysis on the first face image to determine a target characteristic point sequence of the first face image; and generating a marking curve of the first face image according to the target characteristic point sequence of the first face image.
S207, generating a second marking curve according to the second face image and a preset marking curve construction method, where the second marking curve is used to identify the angle of the face in the second face image within the current image.
The marking curve construction method here is the same as the one described above for the first marking curve (fig. 3, S301 to S305) and is not repeated.
The generating a second marker curve according to the second face image and a preset marker curve construction method may include: performing feature point analysis on the second face image to determine a target feature point sequence of the second face image; and generating a marking curve of the second face image according to the target characteristic point sequence of the second face image.
S211, when the first marking curve matches the second marking curve, performing shooting processing.
Fig. 5 is a schematic flow chart of a method for determining whether two curves are matched according to an embodiment of the present invention, please refer to fig. 5, where determining whether a first labeled curve and a second labeled curve are matched may include:
s501, determining a first mark point set of the first mark curve and a second mark point set of the second mark curve, wherein the first mark points in the first mark point set correspond to the second mark points in the second mark point set in a one-to-one mode.
S503, calculating the distance between the second mark point and the corresponding first mark point, and judging whether the distance is smaller than a preset distance threshold value.
And S505, when the distance between each second mark point in the second mark point set and the corresponding first mark point is smaller than the distance threshold, judging that the second mark curve is matched with the first mark curve.
Specifically, the first marking curve includes a transverse curve and a longitudinal curve, and the first marker point set may include the two endpoints and the midpoint of the transverse curve and the two endpoints and the midpoint of the longitudinal curve; likewise for the second marking curve and the second marker point set. When comparing the first marking curve with the second marking curve to determine whether they match, the transverse curve of the first marking curve may be compared with the transverse curve of the second marking curve, and the longitudinal curve of the first with the longitudinal curve of the second; when both comparison results are a match, it is determined that the first marking curve matches the second marking curve.
Fig. 6 is a schematic diagram illustrating a principle of determining whether two curves are matched according to an embodiment of the present invention, and an exemplary method for determining whether two curves are matched is described below.
As shown in fig. 6, the marker points of one curve include endpoint A, endpoint B, and midpoint C, and the marker points of the other curve include endpoint A', endpoint B', and midpoint C', where endpoint A' corresponds to endpoint A, endpoint B' corresponds to endpoint B, and midpoint C' corresponds to midpoint C. Calculate the distance d1 between endpoint A' and endpoint A, the distance d2 between endpoint B' and endpoint B, and the distance d3 between midpoint C' and midpoint C; compare the distance threshold d with d1, d2, and d3 respectively, judging whether each is smaller than d. If d1, d2, and d3 are all smaller than the distance threshold d, the two curves are judged to match; otherwise, they are judged not to match.
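A Python sketch of this matching rule, applied to the fig. 6 example with illustrative coordinates:

```python
import math

def curves_match(first_pts, second_pts, d_threshold):
    """S501 to S505: the curves match only if every corresponding marker-point
    pair is closer than the threshold d."""
    return all(math.hypot(x2 - x1, y2 - y1) < d_threshold
               for (x1, y1), (x2, y2) in zip(first_pts, second_pts))

# Endpoints A, B and midpoint C of one curve versus A', B', C' of the other.
first = [(100, 200), (300, 210), (200, 190)]    # A, B, C
second = [(104, 203), (297, 214), (198, 193)]   # A', B', C'
print(curves_match(first, second, d_threshold=8.0))   # True: d1, d2, d3 < 8
```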
In practical applications, the position of the first marking curve on the screen is unchanged, while the second marking curve changes with the face angle of the shooting object; therefore, by adjusting the face angle, the shooting object can bring the second marking curve gradually closer to the first marking curve until they match.
When the first marking curve matches the second marking curve, angle matching information can be generated and displayed. The angle matching information identifies that the angle of the face in the second face image within the current image matches the angle of the face in the first face image within the target template image. In one possible embodiment, when the two curves match, they may be fused and the fusion result displayed, where the fusion may merge the two curves and/or change the display color. In another possible embodiment, when the two curves match, a voice prompt is issued to remind the user that the current shooting angle is basically consistent with the face angle in the target template image and should be held.
Preferably, before S211, S209 may be further included.
S209, displaying the current image, the first marking curve and the second marking curve.
Specifically, the current image, the first marking curve, and the second marking curve may be displayed in a viewfinder frame, with the two marking curves superimposed on the current image. The second marking curve changes in real time with the second face image in the current image while the first marking curve stays fixed; the user can adjust their face-shooting angle by referring to the second marking curve until the curve presented in real time aligns with the first marking curve. When this embodiment is applied to an electronic device with a screen, the viewfinder frame may be the screen of the device.
In a possible embodiment, the target template image, the current image, the first marking curve, and the second marking curve may be displayed on the screen simultaneously. The screen includes a viewfinder frame and a reference frame; the reference frame may be placed outside the viewfinder frame or superimposed on it, for example at its upper-left, upper-right, lower-left, or lower-right corner. The target template image may be displayed in the reference frame and the current image in the viewfinder frame, with the first and second marking curves superimposed on the current image. The second marking curve fits the face of the second face image in the current image, and when the angle of that face changes, the second marking curve changes with it, so the user can check in real time whether their face-shooting angle is consistent with the target template image. The position of the first marking curve in the viewfinder frame may be determined as follows: superimpose the first marking curve on the target template image in the reference frame so that it fits the face of the first face image; obtain the size ratio of the reference frame to the viewfinder frame; magnify the first marking curve according to this ratio; the magnified first marking curve then falls into the viewfinder frame.
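A minimal sketch of the magnification step at the end of this embodiment; independent scaling of x and y by the width and height ratios is an assumption, as the disclosure only states that the curve is enlarged by the frames' size ratio.

```python
def project_first_curve(curve_pts, ref_frame_size, viewfinder_size):
    """Magnify the first marking curve from the reference frame into the
    viewfinder frame."""
    ref_w, ref_h = ref_frame_size
    vf_w, vf_h = viewfinder_size
    sx, sy = vf_w / ref_w, vf_h / ref_h          # size ratio of the two frames
    return [(x * sx, y * sy) for (x, y) in curve_pts]

# Usage: a 180x240 reference frame projected into a 720x960 viewfinder.
print(project_first_curve([(90, 120)], (180, 240), (720, 960)))  # [(360.0, 480.0)]
```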
In addition, a face wireframe may also be superimposed on the current image. The face wireframe corresponds to the face of the first face image in the target template image and shows the approximate region of that face. During shooting, the user can first adjust the head position by referring to the face wireframe and then fine-tune the face angle by referring to the first marking curve, which speeds up imitation shooting.
Fig. 4 is a flowchart illustrating a method of shooting processing according to an embodiment of the present invention, and referring to fig. 4, the shooting processing may include:
s401, when the first mark curve is matched with the second mark curve, obtaining expression prompt information generated according to the expression of the face in the first face image;
s403, displaying the expression prompt information and starting photographing timing;
and S405, recording the current image when the photographing time reaches a preset duration.
Specifically, the expression prompt information may be a facial expression extracted from the first face image in advance; it may be obtained through manual curation or through expression recognition of the face in the first face image. The expression prompt information may include an expression type and an eye-gaze direction. It may be presented by voice, so the user does not need to change the current head posture and only needs to adjust the expression according to the voice prompt; this keeps the face angle and expression maximally consistent with the target template image and is convenient for the user. While the expression prompt information is being presented, photographing timing starts; when the timing reaches a preset duration, a picture is taken automatically and the current image is recorded, without the user pressing the shutter manually. The photographing timing may be presented by voice and/or image, for example playing the count by voice while synchronously displaying a timing picture on the screen, and the timing may be a countdown.
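The S401 to S405 flow might be sketched as follows; speak and capture stand in for the terminal's voice-output and camera APIs and are assumptions of this sketch.

```python
import time

def shooting_process(expression_prompt, speak, capture, countdown_s=3):
    """Once the curves match: present the expression prompt, run the photo
    timer, and record the current image when the preset duration is reached."""
    speak(expression_prompt)          # e.g. "look to the lower right, keep smiling"
    for remaining in range(countdown_s, 0, -1):
        speak(str(remaining))         # timing presented by voice and/or on screen
        time.sleep(1)
    return capture()                  # automatic shot: record the current image

# Usage with trivial stand-ins for the terminal APIs:
# photo = shooting_process("keep smiling", print, lambda: "frame.jpg")
```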
Fig. 7 is a schematic view of an application scenario of the shooting method according to the embodiment of the present invention. Referring to fig. 7, it shows self-shooting on a smart phone using the method provided by this embodiment: the current picture captured by the camera (i.e., an image of the user) is displayed on the screen, the target template image is displayed in the lower-left corner, and a face wireframe, the second marking curve corresponding to the current image, and the first marking curve corresponding to the target template image are superimposed on the current picture. The user first refers to the target template image for a preliminary face-angle adjustment so that the face in the current picture falls entirely within the face wireframe, and then fine-tunes the face angle according to the first marking curve until the second marking curve matches the first, completing the imitation of the shooting angle. Adding the function of imitating a target template image to self-shooting automatically generates face-position and face-angle guides from the target template image, so users can more conveniently imitate different self-portrait angles and expressions, enhancing the expressiveness of self-portraits and increasing the user's confidence and activity on social networks.
The embodiment of the present invention has the following beneficial effects: during shooting, a second marking curve is generated for the current image in real time, compared with the first marking curve of the target template image, and shooting processing is performed when the two match. Because the second marking curve identifies the angle of the face in the second face image within the current image and the first marking curve identifies the angle of the face in the first face image within the target template image, shooting when the curves match ensures that the angle of the face in the captured picture is close to or consistent with the angle of the face in the target template image, improving the effect of imitation shooting. Meanwhile, because the target template image provides the user with an adjustment reference for the face angle during shooting and the face angle is analyzed and matched automatically, the operation of imitation shooting is greatly simplified, its difficulty is reduced, and the user experience is better.
In addition, expression prompt information is generated according to the target template image, and the user can further adjust their expression through it, so a photo with higher similarity to the face angle and expression in the target template image is obtained. In the embodiment of the invention, the picture is taken automatically after the user's shooting angle and expression are adjusted, which simplifies user operation and prevents the imitated shooting angle and expression from changing due to movement of the user's head or gaze.
Fig. 8 is a schematic flowchart of a front end interacting with a background to implement a shooting method according to an embodiment of the present invention. Referring to fig. 8, the shooting method according to the embodiment of the present invention may be implemented by interaction between a front end and a background of an intelligent device, where the front end includes a screen and a loudspeaker, the screen is used for displaying contents such as images and information, and the loudspeaker is used for outputting voice.
The tasks executed by the background comprise: configuring a target template image, and determining a first mark curve and expression prompt information corresponding to a first face in the target template image; generating a second marking curve according to a second face image in the current image; judging whether the second marking curve is matched with the first marking curve or not, and generating angle matching information when the second marking curve is matched with the first marking curve; and carrying out photographing timing when the angle matching information display is finished, and photographing pictures when the photographing timing reaches a preset time length.
The tasks executed by the front end include: displaying a current image; acquiring and displaying a first marking curve and a second marking curve from a background; acquiring angle matching information from a background and displaying the angle matching information; acquiring expression prompt information from a background and displaying the expression prompt information through a screen and/or a loudspeaker; displaying the photographing timing through a screen and/or a loudspeaker; and acquiring and displaying the shot pictures from the background.
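A sketch of one background iteration of this interaction, with illustrative names; build_curve and match are the construction and matching routines sketched earlier, and the front end only renders the fields returned here.

```python
import time

def backend_frame_step(current_image, first_curve, build_curve, match, state):
    """One background iteration of the fig. 8 interaction (a sketch)."""
    second_curve = build_curve(current_image)        # S207, refreshed per frame
    out = {"second_curve": second_curve}
    if not state.get("matched"):
        if match(first_curve, second_curve):
            state["matched"] = True                  # emit angle-matching info
            state["deadline"] = time.time() + 3.0    # start photographing timing
            out["angle_matched"] = True
    elif time.time() >= state["deadline"]:
        out["photo"] = current_image                 # timing reached: take the picture
    return out
```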
Fig. 9 is a schematic diagram of the process of taking a photo with the shooting method of the embodiment of the present invention. Referring to fig. 9, which mainly shows the front-end content presented in direct interaction with the user, four states appear from left to right: 1. the current image, the first marking curve, and the second marking curve are displayed on the screen, the first marking curve in white and the second in yellow, and the user adjusts the face angle to bring the second marking curve to match the first; 2. when they match, the two curves merge and the display color changes to green, while the expression prompt message "please look to the lower right corner and keep smiling" is played by voice; 3. photographing timing: countdown numbers are superimposed on the current image; 4. the captured picture is displayed.
According to the embodiment of the present invention, two sets of facial-feature positioning marks (namely the first marking curve and the second marking curve) can be generated automatically from the target template image and the user's face; the user adjusts the face angle to align the two sets of marks and further adjusts the expression according to the expression prompt information, thereby achieving a face angle and expression consistent with the target template image. When the user's angle and expression are detected to be consistent with the target template image, the shooting countdown is triggered automatically, and a more expressive self-portrait is obtained.
Fig. 10 is a schematic structural diagram of a shooting apparatus according to an embodiment of the present invention. As shown in fig. 10, the apparatus includes a target template image determining module 1010, a current image acquiring module 1020, a first marking curve determining module 1030, a second marking curve generating module 1040, and a shooting processing module 1060. Specifically:
a target template image determination module 1010, configured to determine a target template image, where the target template image includes a first face image;
a current image obtaining module 1020 for capturing a current image, wherein the current image includes a second face image;
a first labeling curve determining module 1030, configured to determine a first labeling curve pointing to the first face image, where the first labeling curve is used to identify an angle of a face in the first face image in the target template image;
a second marking curve generating module 1040, configured to generate a second marking curve according to the second face image and a preset marking curve constructing method, where the second marking curve is used to identify an angle of a face in the second face image in the current image;
a shooting processing module 1060, configured to perform shooting processing when the first marker curve and the second marker curve match.
The apparatus may further include a display module 1050 for displaying the current image, the first marking curve, and the second marking curve.
In one possible embodiment, the marking curve construction method includes: acquiring a target face image of a marking curve to be determined; carrying out feature point analysis on the target face image to determine a target feature point sequence of the target face image; and generating a marking curve of the target face image according to the target characteristic point sequence.
Specifically, the target feature point sequence includes a first feature point sequence and a second feature point sequence, the first feature point sequence includes feature points corresponding to eyebrows of a face in the target face image, and the second feature point sequence includes feature points corresponding to a nose and a mouth of the face in the target face image. The generating of the marking curve of the target face image according to the target feature point sequence comprises: generating a transverse curve according to the first characteristic point sequence; generating a longitudinal curve according to the second characteristic point sequence; and combining the transverse curve and the longitudinal curve to obtain the marking curve.
The first marker curve determining module 1030 includes a first determining unit and a second determining unit. The first determining unit is used for acquiring a first marking curve corresponding to a first face image in the target template image according to the mapping relation between the target template image and the first marking curve; the second determining unit is used for generating a first marking curve according to the first face image and the marking curve construction method.
The shooting device further comprises a marking curve matching module. The marking curve matching module is used for: determining a first marking point set of the first marking curve and a second marking point set of the second marking curve, wherein the first marking points in the first marking point set correspond to the second marking points in the second marking point set in a one-to-one manner; calculating the distance between the second mark point and the corresponding first mark point, and judging whether the distance is smaller than a preset distance threshold value; and when the distance between each second mark point in the second mark point set and the corresponding first mark point is smaller than the distance threshold, judging that the second mark curve is matched with the first mark curve.
The shooting processing module 1060 is further configured to: when the first marking curve is matched with the second marking curve, obtaining expression prompt information generated according to the expression of the face in the first face image; displaying the expression prompt information, and starting photographing timing; and when the photographing time reaches a preset duration, recording the current image.
The shooting device further comprises an angle matching module. The angle matching module is used for: when the first marking curve is matched with the second marking curve, generating angle matching information, wherein the angle matching information is used for identifying that the angle of the face in the second face image in the current image is matched with the angle of the face in the first face image in the target template image; and displaying the angle matching information.
The apparatus embodiment and the method embodiment are based on the same inventive concept.
During shooting in the embodiment of the present invention, a second marking curve is generated for the current image in real time and compared with the first marking curve of the target template image, and shooting processing is performed when the two match, so the angle of the face in the captured picture is close to or consistent with the angle of the face in the target template image and the effect of imitation shooting is improved. Meanwhile, because the target template image provides the user with an adjustment reference for the face angle during shooting and the face angle is analyzed and matched automatically, the operation of imitation shooting is greatly simplified, its difficulty is reduced, and the user experience is better.
An embodiment of the present invention provides a terminal, where the terminal includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the shooting method provided in the foregoing method embodiment.
The memory may be used to store software programs and modules, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store the operating system, application programs required for functions, and the like, while the data storage area may store data created according to the use of the terminal, and the like. Further, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide the processor with access to the memory.
The embodiment of the present invention further provides a schematic structural diagram of a terminal, as shown in fig. 11; the terminal may be used to implement the shooting method provided in the foregoing embodiments. Specifically:
the client may include components such as RF (Radio Frequency) circuitry 1110, memory 1120 including one or more computer-readable storage media, input unit 1130, display unit 1140, sensors 1150, audio circuitry 1160, WiFi (wireless fidelity) module 1170, processor 1180 including one or more processing cores, and power supply 1190. Those skilled in the art will appreciate that the client architecture shown in fig. 11 does not constitute a limitation on the client, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components. Wherein:
The RF circuit 1110 may be used for receiving and transmitting signals during message transmission or a call; in particular, it receives downlink information from a base station and hands it to one or more processors 1180 for processing, and it transmits uplink data to the base station. In general, the RF circuit 1110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuit 1110 may also communicate with networks and other clients via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), email, SMS (Short Messaging Service), and the like.
The memory 1120 may be used to store software programs and modules, and the processor 1180 executes various functional applications and data processing by running the software programs and modules stored in the memory 1120. The memory 1120 may mainly include a program storage area and a data storage area: the program storage area may store the operating system, application programs required for functions, and the like, while the data storage area may store data created according to the use of the client, and the like. Further, the memory 1120 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 1120 may also include a memory controller to provide the processor 1180 and the input unit 1130 with access to the memory 1120.
The input unit 1130 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In particular, the input unit 1130 may include a touch-sensitive surface 1131 as well as other input devices 1132. The touch-sensitive surface 1131, also referred to as a touch display screen or touch pad, may collect touch operations by a user on or near it (for example, operations using a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a preset program. Optionally, the touch-sensitive surface 1131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch-point coordinates, sends them to the processor 1180, and can receive and execute commands sent by the processor 1180. Additionally, the touch-sensitive surface 1131 may be implemented in resistive, capacitive, infrared, or surface acoustic wave types. Besides the touch-sensitive surface 1131, the input unit 1130 may include other input devices 1132, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 1140 may be used to display information input by or provided to the user as well as various graphical user interfaces of the client, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 1140 may include a Display panel 1141, and optionally, the Display panel 1141 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, touch-sensitive surface 1131 may cover display panel 1141, and when touch operation is detected on or near touch-sensitive surface 1131, the touch operation is transmitted to processor 1180 to determine the type of touch event, and processor 1180 then provides corresponding visual output on display panel 1141 according to the type of touch event. Touch-sensitive surface 1131 and display panel 1141 may be implemented as two separate components for input and output functions, although touch-sensitive surface 1131 and display panel 1141 may be integrated for input and output functions in some embodiments.
The client may also include at least one sensor 1150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor, which adjusts the brightness of the display panel 1141 according to the brightness of ambient light, and a proximity sensor, which turns off the display panel 1141 and/or the backlight when the client moves to the ear. As one type of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes) and, when the device is stationary, the magnitude and direction of gravity; it can be used in applications that recognize the attitude of the client (such as switching between landscape and portrait, related games, and magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer or tap detection). Other sensors that may further be configured on the client, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described in detail here.
Audio circuitry 1160, speaker 1161, and microphone 1162 may provide an audio interface between the user and the client. On one hand, the audio circuit 1160 may convert received audio data into an electrical signal and transmit it to the speaker 1161, which converts it into a sound signal for output; on the other hand, the microphone 1162 converts a collected sound signal into an electrical signal, which the audio circuit 1160 receives and converts into audio data. After the audio data is processed by the processor 1180, it may be sent, for example, to another client via the RF circuit 1110, or output to the memory 1120 for further processing. The audio circuitry 1160 may also include an earphone jack to provide communication between peripheral earphones and the client.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1170, the client can help the user send and receive e-mail, browse web pages, access streaming media, and the like; the module provides the user with wireless broadband Internet access. Although FIG. 11 shows the WiFi module 1170, it can be understood that the module is not an essential part of the client and may be omitted entirely as needed within a scope that does not change the essence of the invention.
The processor 1180 is a control center of the client, connects various parts of the whole client by using various interfaces and lines, and executes various functions and processes data of the client by running or executing software programs and/or modules stored in the memory 1120 and calling data stored in the memory 1120, thereby performing overall monitoring of the client. Optionally, processor 1180 may include one or more processing cores; preferably, the processor 1180 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated within processor 1180.
The client further includes a power supply 1190 (such as a battery) for supplying power to the various components. Preferably, the power supply is logically connected to the processor 1180 through a power management system, so that charging, discharging, and power consumption management are handled through the power management system. The power supply 1190 may also include one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other such component.
Although not shown, the client may further include a camera, a Bluetooth module, and the like, which are not described here again. Specifically, in this embodiment, the display unit of the client is a touch screen display, and the client further includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs containing instructions for performing the methods of the method embodiments of the present invention.
Embodiments of the present invention also provide a storage medium, which may be disposed in a terminal to store at least one instruction, at least one program, a code set, or an instruction set related to implementing the shooting method of the method embodiments; the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the shooting method provided by the method embodiments.
Optionally, in this embodiment, the storage medium may be located in at least one of a plurality of network clients of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
It should be noted that the precedence order of the above embodiments of the present invention is for description only and does not indicate the relative merit of the embodiments. Specific embodiments have been described above; other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or advantageous.
The embodiments in this specification are described in a progressive manner; for parts that are the same or similar across embodiments, reference may be made from one embodiment to another, and each embodiment focuses on its differences from the others. In particular, since the device and server embodiments are substantially similar to the method embodiments, their description is brief; for relevant details, reference may be made to the corresponding description of the method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A photographing method, characterized by comprising:
determining a target template image, wherein the target template image comprises a first face image;
capturing a current image, wherein the current image comprises a second face image;
determining a first marking curve pointing to the first face image, wherein the first marking curve is used for identifying the angle of the face in the first face image in the target template image;
generating a second marking curve according to the second face image and a preset marking curve construction method, wherein the second marking curve is used for identifying the angle of the face in the second face image in the current image;
and performing shooting processing when the second marking curve matches the first marking curve.
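For illustration only, the following minimal Python sketch (not part of the claims) outlines the flow of claim 1. The camera feed and the face-landmark input are assumptions about the surrounding system; `build_marking_curve` and `curves_match` are placeholders expanded in the sketches after claims 2 and 5.

```python
# Minimal sketch of the claimed shooting flow; detection and capture
# plumbing is assumed, not specified by the claim.

def build_marking_curve(landmarks):
    """Derive a marking curve (an ordered list of 2-D points) from face
    landmarks; see the construction sketch after claim 2."""
    return landmarks

def curves_match(first_curve, second_curve, threshold=10.0):
    """Point-wise comparison; see the matching sketch after claim 5."""
    return len(first_curve) == len(second_curve) and all(
        ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < threshold
        for (x1, y1), (x2, y2) in zip(first_curve, second_curve)
    )

def run_shooting(template_landmarks, camera):
    """`camera` is assumed to yield (frame, landmarks) pairs."""
    first_curve = build_marking_curve(template_landmarks)   # target template image
    for frame, frame_landmarks in camera:                   # current image
        second_curve = build_marking_curve(frame_landmarks)
        if curves_match(first_curve, second_curve):
            return frame                                    # shooting processing
    return None
```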
2. The method of claim 1, wherein the labeling curve construction method comprises:
acquiring a target face image of a marking curve to be determined;
carrying out feature point analysis on the target face image to determine a target feature point sequence of the target face image;
and generating a marking curve of the target face image according to the target characteristic point sequence.
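One plausible reading of this construction method in Python: analyze the face image for landmarks (the detector itself is assumed and omitted), then turn the ordered feature-point sequence into a marking curve by arc-length resampling of the polyline through the points. The fixed sample count and linear interpolation are assumptions, not specified by the claim.

```python
import numpy as np

def curve_from_feature_points(points, samples=32):
    """Generate a marking curve from an ordered feature-point sequence
    by resampling the polyline through the points at equal arc length."""
    pts = np.asarray(points, dtype=float)               # shape (n, 2)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # segment lengths
    t = np.concatenate([[0.0], np.cumsum(seg)])         # cumulative arc length
    t = t / t[-1]                                       # normalize to [0, 1]
    u = np.linspace(0.0, 1.0, samples)
    x = np.interp(u, t, pts[:, 0])
    y = np.interp(u, t, pts[:, 1])
    return np.stack([x, y], axis=1)                     # (samples, 2) curve
```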
3. The method of claim 2, wherein the determining a first marker curve pointing to the first face image comprises:
acquiring a first marking curve corresponding to a first face image in the target template image according to the mapping relation between the target template image and the first marking curve;
or,
and generating a first marking curve according to the first face image and the marking curve construction method.
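A minimal sketch of the two branches of claim 3: first consult a stored mapping from template images to their marking curves, and fall back to the construction method only when no mapping exists. The dict-based cache is an assumption, and `curve_from_feature_points` is the resampling helper sketched after claim 2.

```python
def first_marking_curve(template_id, template_landmarks, curve_cache):
    """Claim 3: look up the first marking curve via a template-to-curve
    mapping, or generate it with the construction method as a fallback."""
    curve = curve_cache.get(template_id)              # mapping-relation branch
    if curve is None:
        curve = curve_from_feature_points(template_landmarks)  # construction branch
        curve_cache[template_id] = curve
    return curve
```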
4. The method according to claim 2, wherein the target feature point sequence includes a first feature point sequence including feature points corresponding to eyebrows of a face in the target face image and a second feature point sequence including feature points corresponding to a nose and a mouth of the face in the target face image;
the generating of the marking curve of the target face image according to the target feature point sequence comprises: generating a transverse curve according to the first characteristic point sequence; generating a longitudinal curve according to the second characteristic point sequence; and combining the transverse curve and the longitudinal curve to obtain the marking curve.
5. The method of claim 1, further comprising:
determining a first marking point set of the first marking curve and a second marking point set of the second marking curve, wherein the first marking points in the first marking point set correspond to the second marking points in the second marking point set in a one-to-one manner;
calculating the distance between the second mark point and the corresponding first mark point, and judging whether the distance is smaller than a preset distance threshold value;
and when the distance between each second mark point in the second mark point set and the corresponding first mark point is smaller than the distance threshold, judging that the second mark curve is matched with the first mark curve.
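This matching rule translates directly: pair the marker points one-to-one, compute the Euclidean distance for each pair, and declare a match only if every distance falls below the threshold. The threshold value below is an arbitrary placeholder.

```python
import numpy as np

def curves_match(first_points, second_points, threshold=10.0):
    """Claim 5: match iff every second marker point lies within
    `threshold` pixels of its corresponding first marker point."""
    a = np.asarray(first_points, dtype=float)
    b = np.asarray(second_points, dtype=float)
    if a.shape != b.shape:
        return False                        # no one-to-one correspondence
    distances = np.linalg.norm(a - b, axis=1)
    return bool(np.all(distances < threshold))
```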
6. The method according to claim 1, wherein the performing of shooting processing when the first marking curve matches the second marking curve comprises:
when the first marking curve is matched with the second marking curve, obtaining expression prompt information generated according to the expression of the face in the first face image;
displaying the expression prompt information, and starting photographing timing;
and when the photographing time reaches a preset duration, recording the current image.
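One way the prompt-and-timer step of claim 6 could look in code; the prompt text, the preset duration, and the `show_prompt`/`record_image` callbacks are hypothetical stand-ins for the client's UI and capture pipeline.

```python
import time

def shoot_with_prompt(camera, show_prompt, record_image,
                      prompt="Match the template expression", delay_s=3.0):
    """Claim 6: display the expression prompt, start the photographing
    timer, and record the current image once the preset duration elapses."""
    show_prompt(prompt)                     # expression prompt information
    deadline = time.monotonic() + delay_s   # start photographing timing
    frame = None
    for frame in camera:                    # keep previewing while timing
        if time.monotonic() >= deadline:
            break
    record_image(frame)                     # record the current image
```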
7. The method of claim 1, further comprising:
when the first marking curve is matched with the second marking curve, generating angle matching information, wherein the angle matching information is used for identifying that the angle of the face in the second face image in the current image is matched with the angle of the face in the first face image in the target template image;
and displaying the angle matching information.
8. A photographing apparatus, comprising:
the target template image determining module is used for determining a target template image, and the target template image comprises a first face image;
the current image acquisition module is used for capturing a current image, and the current image comprises a second face image;
a first marker curve determining module, configured to determine a first marker curve pointing to the first face image, where the first marker curve is used to identify an angle of a face in the first face image in the target template image;
a second marking curve generating module, configured to generate a second marking curve according to the second face image and a preset marking curve constructing method, where the second marking curve is used to identify an angle of a face in the second face image in the current image;
and the shooting processing module is used for carrying out shooting processing when the first marking curve is matched with the second marking curve.
9. A terminal, characterized in that it comprises a processor and a memory in which at least one instruction, at least one program, a set of codes or a set of instructions is stored, which is loaded and executed by the processor to implement the shooting method according to any one of claims 1 to 7.
10. A computer-readable storage medium, having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the photographing method according to any one of claims 1 to 7.
CN201910809688.0A 2019-08-29 2019-08-29 Shooting method, device, terminal and storage medium Active CN112449098B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910809688.0A CN112449098B (en) 2019-08-29 2019-08-29 Shooting method, device, terminal and storage medium
CN202310312887.7A CN116320721A (en) 2019-08-29 2019-08-29 Shooting method, shooting device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910809688.0A CN112449098B (en) 2019-08-29 2019-08-29 Shooting method, device, terminal and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310312887.7A Division CN116320721A (en) 2019-08-29 2019-08-29 Shooting method, shooting device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN112449098A (en) 2021-03-05
CN112449098B CN112449098B (en) 2023-04-07

Family

ID=74741556

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310312887.7A Pending CN116320721A (en) 2019-08-29 2019-08-29 Shooting method, shooting device, terminal and storage medium
CN201910809688.0A Active CN112449098B (en) 2019-08-29 2019-08-29 Shooting method, device, terminal and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202310312887.7A Pending CN116320721A (en) 2019-08-29 2019-08-29 Shooting method, shooting device, terminal and storage medium

Country Status (1)

Country Link
CN (2) CN116320721A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113727024A (en) * 2021-08-30 2021-11-30 北京达佳互联信息技术有限公司 Multimedia information generation method, apparatus, electronic device, storage medium, and program product
WO2024079893A1 (en) * 2022-10-14 2024-04-18 日本電気株式会社 Information processing system, information processing method, and recording medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104125396A (en) * 2014-06-24 2014-10-29 小米科技有限责任公司 Image shooting method and device
CN105407285A (en) * 2015-12-01 2016-03-16 小米科技有限责任公司 Photographing control method and device
CN106210526A (en) * 2016-07-29 2016-12-07 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN108062404A (en) * 2017-12-28 2018-05-22 奇酷互联网络科技(深圳)有限公司 Processing method, device, readable storage medium storing program for executing and the terminal of facial image
CN108229369A (en) * 2017-12-28 2018-06-29 广东欧珀移动通信有限公司 Image capturing method, device, storage medium and electronic equipment
CN108769537A (en) * 2018-07-25 2018-11-06 珠海格力电器股份有限公司 Photographing method, device, terminal and readable storage medium
CN109218615A (en) * 2018-09-27 2019-01-15 百度在线网络技术(北京)有限公司 Image taking householder method, device, terminal and storage medium


Also Published As

Publication number Publication date
CN116320721A (en) 2023-06-23
CN112449098B (en) 2023-04-07

Similar Documents

Publication Title
CN109819313B (en) Video processing method, device and storage medium
CN109427083B (en) Method, device, terminal and storage medium for displaying three-dimensional virtual image
CN108184050B (en) Photographing method and mobile terminal
WO2018103525A1 (en) Method and device for tracking facial key point, and storage medium
CN108234276B (en) Method, terminal and system for interaction between virtual images
CN108712603B (en) Image processing method and mobile terminal
CN109191410A (en) A kind of facial image fusion method, device and storage medium
WO2019174628A1 (en) Photographing method and mobile terminal
CN108848313B (en) Multi-person photographing method, terminal and storage medium
CN107592459A (en) A kind of photographic method and mobile terminal
CN108876878B (en) Head portrait generation method and device
CN110263617B (en) Three-dimensional face model obtaining method and device
CN109819167B (en) Image processing method and device and mobile terminal
CN111047511A (en) Image processing method and electronic equipment
CN109272473B (en) Image processing method and mobile terminal
CN109426343B (en) Collaborative training method and system based on virtual reality
WO2022062808A1 (en) Portrait generation method and device
KR20220154763A (en) Image processing methods and electronic equipment
CN109448069B (en) Template generation method and mobile terminal
CN108881544A (en) A kind of method taken pictures and mobile terminal
CN108986026A (en) A kind of picture joining method, terminal and computer readable storage medium
CN112581358A (en) Training method of image processing model, image processing method and device
CN107959755B (en) Photographing method, mobile terminal and computer readable storage medium
CN112449098B (en) Shooting method, device, terminal and storage medium
CN105635553B (en) Image shooting method and device

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40040461

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant