CN111091519A - Image processing method and device - Google Patents


Info

Publication number
CN111091519A
CN111091519A (application CN201911329892.9A)
Authority
CN
China
Prior art keywords
nail
target
model
region
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911329892.9A
Other languages
Chinese (zh)
Other versions
CN111091519B (en)
Inventor
赵鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Shenzhen Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201911329892.9A priority Critical patent/CN111091519B/en
Publication of CN111091519A publication Critical patent/CN111091519A/en
Application granted granted Critical
Publication of CN111091519B publication Critical patent/CN111091519B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 — Image enhancement or restoration
    • G06T 5/20 — Image enhancement or restoration using local operators
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/00 — Image analysis
    • G06T 7/70 — Determining position or orientation of objects or cameras
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10004 — Still image; Photographic image
    • G06T 2207/10012 — Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image processing method and device. The method comprises the following steps: acquiring a target image; identifying nail regions in the target image and determining a nail model corresponding to each nail region; for a target nail model among the nail models, generating a sphere model according to the width of the nail root in the target nail model; determining an intersection region of the sphere model and the target nail model; and performing preset processing on the target region of the target image that matches the intersection region. The invention can process a hand image so that the lunula (the half-moon mark at the nail root) in the processed image is more obvious and clearer.

Description

Image processing method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
With the growing popularity of photography, more and more people photograph travel, meetings, daily life, parties and other scenes to record their lives. Likewise, the continuous progress of mobile-phone photography software has driven beautification functions for the face, figure and so on to meet growing user demand. Besides face and body shots, many people also like to express themselves with hand gestures. The lunula (half-moon mark) is a milky, crescent-shaped arc at the root of the nail; because of individual differences, however, not every person's nail roots show a lunula, or the lunula at some nail roots is not obvious enough.
Therefore, when current image processing methods process a hand image, it is difficult for them to make the lunula of the fingernail more obvious and clearer.
Disclosure of Invention
The embodiments of the present invention provide an image processing method and device, aiming to solve the problem that, when an image processing method in the related art processes a hand image, it is difficult to make the lunula of a fingernail more obvious and clear.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, which is applied to an electronic device, and the method includes:
acquiring a target image;
identifying nail regions in the target image and determining a nail model corresponding to each nail region;
for a target nail model in the nail models, generating a sphere model according to the width of the nail root in the target nail model;
determining an intersection region of the sphere model and the target nail model;
and performing preset processing on a target area matched with the intersection area in the target image.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, where the apparatus includes:
the acquisition module is used for acquiring a target image;
the first determining module is used for identifying nail regions in the target image and determining a nail model corresponding to each nail region;
the generating module is used for generating a spherical model for a target nail model in the nail models according to the width of the nail root in the target nail model;
a second determination module for determining an intersection region of the sphere model and the target nail model;
and the processing module is used for carrying out preset processing on a target area matched with the intersection area in the target image.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the image processing method.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the image processing method.
In the embodiment of the present invention, nail regions in a target image are identified and a nail model corresponding to each nail region is determined. For a target nail model among the nail models, a sphere model is generated according to the width of the nail root in that model, and the intersection region of the sphere model and the target nail model is then determined, so that the nail shape of the intersection region takes on a crescent shape. Finally, preset processing is performed on the target region of the target image that matches the intersection region; that is, the intersection region is mapped to the target region in the target image, so that the target region is also crescent-shaped and located at the nail root of the nail region corresponding to the target nail model. After the preset processing, the target region is distinguished from the rest of the nail region, and its nearly crescent shape is more conspicuous, achieving the effect that, when a hand image is processed, the lunula of the fingernail in the processed image becomes more obvious and clearer.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in describing the embodiments are briefly introduced below. The drawings described here show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow diagram of an image processing method of one embodiment of the present invention;
FIG. 2 is a schematic two-dimensional image of a nail region of one embodiment of the present invention;
FIG. 3 is a schematic plan view of a target nail model in accordance with one embodiment of the invention;
FIG. 4 is a schematic plan view of a target nail model intersecting the sphere model according to one embodiment of the present invention;
fig. 5 is a block diagram of an image processing apparatus of another embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. The described embodiments are some, not all, of the embodiments of the present invention; all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart of an image processing method according to an embodiment of the present invention is shown, and is applied to an electronic device, where the method specifically includes the following steps:
step 101, acquiring a target image;
the target image may be an image received from the outside, an image generated locally (for example, an image captured by a camera of the electronic device), or an image captured in real time by the camera of the electronic device (for example, a preview image captured by the camera of the electronic device).
Step 102, identifying nail regions in the target image, and determining a nail model corresponding to each nail region;
the target image comprises nail regions, and the nail regions are different from other positions of the hand, so that the nail regions in the target image can be identified, and a nail model corresponding to each nail region is determined, wherein the nail model is a three-dimensional model.
Alternatively, in one embodiment, when the step of identifying the nail region in the target image in step 102 is performed, it may be implemented by S201:
s201, identifying a nail area in the target image according to the nail characteristics in the target image.
Since the nail features are unique features that distinguish a nail from other parts of the hand, the two-dimensional coordinates of the nail feature points (generally a plurality of them) of each nail region can be located in the target image.
When recognizing the nail features, note that the target image is two-dimensional, so a nail region in the target image may show the side of a nail. To improve recognition accuracy in this case, the nail features of each nail region may be recognized from the target image according to the average thickness of a nail (e.g., 0.5 mm) together with the features specific to nails, so that the nail feature points in the target image can be located more accurately.
Since the nail feature points of the same nail region are clustered relatively closely together, the individual nail regions in the target image can be determined from the recognized nail feature points.
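The patent does not spell out how detected feature points are grouped into per-nail regions; a minimal sketch, assuming simple distance-threshold (single-linkage) clustering — the function name and threshold are hypothetical:

```python
import numpy as np

def cluster_feature_points(points, max_dist=20.0):
    """Group 2-D feature points into nail regions: two points belong to the
    same region if they are connected by a chain of points in which each
    pair of neighbours is closer than max_dist."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    labels = [-1] * n
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        # depth-first flood fill over the proximity graph
        labels[i] = current
        stack = [i]
        while stack:
            j = stack.pop()
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.nonzero(d < max_dist)[0]:
                if labels[k] == -1:
                    labels[k] = current
                    stack.append(int(k))
        current += 1
    return labels

# Two clearly separated nails: points near (0, 0) and points near (100, 100)
pts = [(0, 0), (5, 3), (2, 8), (100, 100), (104, 97)]
labels = cluster_feature_points(pts)
```

With the illustrative threshold of 20 pixels, the first three points form one nail region and the last two another.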
Alternatively, in one embodiment, when the step of determining the nail model corresponding to each nail region in step 102 is performed, the following steps may be performed through S202 to S204:
s202, acquiring a hand model corresponding to a hand region in the target image;
wherein the hand model is a three-dimensional model of a hand region in the two-dimensional target image.
Further, a hand model corresponding to the hand region in the target image may be acquired from an external device, or the hand model may be obtained by processing the target image.
Specifically, when the target image is processed to obtain the hand model, a two-dimensional image (i.e., RGB information) and depth information corresponding to a hand region in the target image may be obtained; then, a hand model of the hand region is constructed from the two-dimensional image and the depth information.
For example, when a user uses a mobile phone photographing mode to preview or photograph, an image is automatically captured by using a camera, a two-dimensional image of a hand and depth information of the hand image are obtained, and a three-dimensional hand model is constructed.
The depth information is also called a depth image (the combination of colour and depth is often written RGB-D): an image or image channel containing information about the distance from the viewpoint to the surfaces of scene objects, where each pixel value is the actual distance from the sensor to the object. The data corresponding to the target image is thereby extended from two dimensions to three; combined with the depth information, the hand region in the image can be identified effectively in real time and a three-dimensional hand model established, finally achieving the purpose of adding a lunula to the nail roots of the hand in the image.
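The construction of a 3-D model from the two-dimensional image and its depth channel can be sketched with the standard pinhole back-projection (X = (u − cx)·Z/fx, Y = (v − cy)·Z/fy, Z = depth); the intrinsic parameters fx, fy, cx, cy below are illustrative values, not taken from the patent:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Turn a depth map (distance per pixel) into an H x W x 3 array of
    3-D points in the camera frame, using the pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# A flat 2x2 depth map 1 m from the camera, principal point at (0.5, 0.5)
pts3d = backproject(np.ones((2, 2)), fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

The resulting point cloud (restricted to hand pixels) is the raw material for the hand model the patent describes.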
It should be noted that the execution sequence between S201 and S202 is not limited in the present invention.
S203, acquiring three-dimensional positioning information corresponding to the nail features in the hand model;
based on the two-dimensional coordinates of the nail feature point corresponding to each nail region determined in S201, the precise depth coordinate corresponding to the two-dimensional coordinates can be located in the hand model, so as to obtain the three-dimensional location information of the nail features of each nail region.
S204, generating a nail model corresponding to each nail region in the target image according to the three-dimensional positioning information corresponding to the nail characteristics of each nail region.
Corresponding three-dimensional positioning information is acquired for the nail feature points of each nail region in the target image, so a nail model of a nail region can be constructed from the three-dimensional positioning information of that region's feature points, yielding the nail model corresponding to each nail region. The number of nail regions in the target image equals the number of nail models; for example, if the target image contains the nails of 10 fingers, 10 nail models corresponding to those nail regions may be generated. Each nail model is the three-dimensional model of one nail.
The lunula is popularly regarded as a sign of health, and displaying it is one of the subtle ways many users present themselves socially, so it deserves attention and development. Hand-recognition technology based on depth images (RGB-D) can model and track hands, but it lacks position tracking of, and applications for, the nails. In the embodiment of the present invention, a hand model is obtained for the hand region in the target image, and the nail regions in the target image are identified according to the nail features; combined with the hand model, the three-dimensional positioning information of each nail feature can be located, and from the three-dimensional positioning information of each nail region's features a nail model of that region can be generated. Because the three-dimensional positioning information used to construct the nail model is determined from both the hand model and the two-dimensional target image, it has higher precision, the constructed nail model matches the actual three-dimensional shape of the nail region, and the target region obtained from the nail model lies at the position of each nail in the target image where a lunula normally appears, improving the positional accuracy of the added lunula.
In addition, since the nail model is computed from the hand model, the finally determined target region cannot lie outside a nail; this ensures that the target region after the preset processing (e.g., the white of the lunula) does not appear outside the nail, improving matching accuracy.
Of course, in other embodiments, the nail model corresponding to each nail region may be received from the outside when the step of determining the nail model corresponding to each nail region of step 102 is performed.
103, generating a spherical model for a target nail model in the nail models according to the width of the nail root in the target nail model;
in the above step 102, a nail model corresponding to each nail region may be determined, and then in different scenes, the nail added with the semilunar mark may be one nail or a plurality of nails in the target image, and thus, the target nail model may be one or more of the plurality of nail models obtained in step 101. Namely, the nail region corresponding to the target nail model is the nail object to which the semilunar mark needs to be added.
As those skilled in the art will understand, a fingernail grows in a particular direction, and the region corresponding to the nail root in a target nail model is the part of the model at the start of the nail growth direction.
In addition, the width of the nail root may be the width, measured in the nail width direction, of the region of the nail model corresponding to the nail root; the nail width direction is perpendicular to the nail growth direction and lies in the same plane.
The width of the nail root may be taken as any nail width in the lower half of the target nail model.
Optionally, in step 103, for a target nail model in the nail models, a preset region corresponding to a nail root in the target nail model may be identified; and then, generating a sphere model according to the width of the preset region.
Since the preset region is identified from the target nail model, it is also three-dimensional. It may be understood as the three-dimensional model of the root portion of the nail region visible in the target image.
When identifying the preset region, a region may be taken from the start of the nail growth direction of the target nail model whose height is arbitrary but no greater than half of the total nail length (i.e., half the total height of the target nail model).
Optionally, in an embodiment, when the preset region corresponding to the nail root in the target nail model is identified, the following steps may be performed through S301 to S305:
s301, acquiring target nail characteristics of a target nail region corresponding to the target nail model in the target image;
for ease of understanding, in one example, FIG. 2 shows a two-dimensional image of a target nail region (e.g., left index finger) in a target image.
The target nail region in fig. 2 includes a first region 21 that is not attached to the flesh and a second region 22 that is attached to the flesh; the arrow shows the nail growth direction of the target nail region.
The target nail features acquired in this step are the feature points 23 and the feature points 24 of the target nail region in fig. 2.
The feature points 23 are a plurality of feature points on the nail edge at the top of the target nail region in the nail growth direction and can be understood as nail-top feature points; the feature points 24 are a plurality of feature points on the nail edge at the bottom of the target nail region, opposite the growth direction, and can be understood as nail-root feature points. Thus, S301 acquires the two-dimensional coordinates of the nail-top and nail-root feature points of the target nail region in the two-dimensional target image.
S302, acquiring three-dimensional positioning information corresponding to the target nail characteristics in the target nail model;
the target nail model is a three-dimensional model of a two-dimensional image of the target nail area, so that the target nail model can be positioned in the three-dimensional coordinate information corresponding to the target nail feature in the target nail model according to the two-dimensional coordinates of the target nail feature.
The step is equivalent to positioning the three-dimensional coordinates of each nail top characteristic point and the three-dimensional coordinates of each nail root characteristic point in the target nail model matched with the target nail area by using the two-dimensional coordinates of the nail top characteristic point and the two-dimensional coordinates of the nail root characteristic point of the target nail area in the two-dimensional image of the target image.
S303, determining the total nail length y corresponding to the target nail model according to the three-dimensional positioning information of the target nail characteristics, wherein the total nail length is in the nail growth direction;
in one example, as shown in FIG. 3, a schematic plan view of the target nail model is shown.
The three-dimensional coordinates of the target nail features (including the nail-root feature points and the nail-top feature points) in the target nail model were acquired in S302; from them, the position 32 of the lowest nail-root point in the target nail model and the position 31 of the highest nail-top point are determined, the bottom-to-top direction being the nail growth direction.
Further, the position 31 may be one of the above-mentioned nail tip feature points, or may be a new feature point position determined based on each nail tip feature point; the determination of the location 32 is similar and will not be described further herein.
Therefore, the distance y between the position 31 and the position 32 of the target nail model can be determined as the total nail length corresponding to the target nail model.
S304, dividing the target nail model into n equal parts along the nail growth direction to generate n model regions each with a nail length of y/n, where n is a positive integer;
wherein each model region is a three-dimensional model.
In one example, as shown in FIG. 3, it is equivalent to divide the target nail model n equally in the nail growth direction such that the nail length of each model region in the nail growth direction is y/n.
S305, identifying the target model region located lowest in the nail growth direction among the n model regions as the preset region corresponding to the nail root in the target nail model.
In one example, as shown in fig. 3, a shadow region 33 (i.e., a target model region) located at the lowest position in the nail growth direction among the n model regions may be identified as a preset region corresponding to the nail root in the target nail model shown in fig. 3.
Since fig. 3 is a plan view of the three-dimensional model, the shaded region 33 is also a schematic plan view of the three-dimensional target model region.
In addition, the value of n may be any number greater than or equal to 10, and the value of n may be input by a user and/or configured by the system.
For the user to input the value of n, the user may input a value of n in the shooting preview interface, and may associate the value of n with a certain finger.
In addition, the values of n corresponding to different target nail models can be different, so that the heights of the crescent moon whites added by different fingers in the nail growth direction are different.
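Steps S303–S305 amount to measuring the nail along its growth axis and keeping the lowest 1/n of it. A sketch on a synthetic point cloud, assuming the growth direction and the sample points are illustrative (this is not the patent's implementation):

```python
import numpy as np

def root_region(nail_points, growth_dir, n=10):
    """Return the points of the nail model that fall in the lowest 1/n of
    the total nail length measured along the growth direction (S303-S305)."""
    pts = np.asarray(nail_points, dtype=float)
    g = np.asarray(growth_dir, dtype=float)
    g = g / np.linalg.norm(g)
    t = pts @ g                       # coordinate of each point along the growth axis
    y_total = t.max() - t.min()       # total nail length y (S303)
    cutoff = t.min() + y_total / n    # height of the lowest model region, y/n (S304)
    return pts[t <= cutoff]           # the preset region (S305)

# A nail 10 units long in +y; with n = 10, only points with y <= 1 remain
nail = np.array([[0.0, 0.0], [0.0, 0.5], [0.0, 1.0], [0.0, 5.0], [0.0, 10.0]])
root = root_region(nail, growth_dir=[0.0, 1.0], n=10)
```

The same function works unchanged on three-dimensional nail-model points; only the growth-direction vector gains a component.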
In the embodiment of the present invention, to identify the (two-dimensional) target region in the target image where the lunula is to be added, a way of determining the corresponding (three-dimensional) preset region of the nail root in the target nail model is provided. Specifically, the total nail length of the target nail model is determined from the three-dimensional positioning information of the target nail features; the target nail model is then divided into n equal parts along the nail growth direction according to this total length; finally, among the n model regions so generated, the target model region lowest in the growth direction is identified as the preset region corresponding to the nail root. The target region in the two-dimensional target image can thus be mapped from a three-dimensional region at the nail root of the three-dimensional target nail model, so that the target region matches the position of an actual lunula, and the image generated after the preset processing shows the lunula at the position it usually occupies in real life, highlighting it.
Further, consider the step in step 103 of generating a sphere model according to the width of the nail root in the target nail model, and the step in the detailed embodiment above of generating a sphere model according to the width of the preset region. In both cases the width in question is the width of the nail root in the nail width direction, that is, the width of the preset region in the nail width direction.
Wherein, the width direction of the nail is a direction vertical to the growth direction of the nail in the same plane.
For example, FIG. 4 shows a schematic plan view of a target nail model intersecting the sphere model.
Fig. 4 shows two arrow directions, a nail growth direction and a nail width direction, respectively.
In which fig. 3 and 4 show schematic plan views of the same target nail model, and comparing fig. 3 and 4, it can be seen that the preset region 33 has a width x in the nail width direction.
The sphere model is a three-dimensional sphere. When generating the sphere model, its radius may then be determined from the width x. If the radius is greater than x, the target region may no longer take the shape of a lunula, but its position is still close to the position where a fingernail's lunula usually appears.
Alternatively, in order to make the shape of the target region close to a semilunar mark, i.e., crescent-shaped, in executing step 103, a radius r may be determined according to a width x of the nail root in the nail width direction in the target nail model, where a < r ≦ x, where a is a constant; and generating a sphere model according to the radius r.
Similarly, in another embodiment, in order to make the shape of the target region close to a half moon mark, i.e., a crescent shape, when the step of generating the sphere model according to the width of the preset region is performed, a radius r may be determined according to the width x of the preset region in the nail width direction, wherein a < r ≦ x, where a is a constant; and generating a sphere model according to the radius r.
In the two embodiments above, when r = x, the boundary of the generated target region in the nail width direction coincides with the nail's width boundary; the effect, as shown in fig. 4, is that the target region 34 is the grey region marked "crescent white", and the resulting lunula is wider.
When a < r < x, the boundary of the generated target region in the nail width direction does not reach the nail's width boundary, and the resulting lunula is narrower.
In the embodiment of the invention, to make the generated target region approximately crescent-shaped, i.e., closer to the shape of a finger's lunula, the radius of the generated sphere model is made less than or equal to the width of the preset region in the nail width direction, so that the processed target region in the target image is closer to the actual shape of a lunula and the nail's lunula becomes more obvious and clearer.
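The constraint a < r ≤ x on the sphere radius can be enforced with a simple clamp; the lower-bound constant a and the requested radius below are illustrative, not values from the patent:

```python
def sphere_radius(x, requested, a=0.0):
    """Clamp a requested sphere radius into the interval (a, x], so the
    resulting lunula never exceeds the nail-root width x."""
    if x <= a:
        raise ValueError("nail-root width x must exceed the lower bound a")
    # keep r strictly above a and at most x
    return min(max(requested, a + 1e-9 * (x - a)), x)
```

For a nail-root width of 4 units, a requested radius of 10 is capped at 4, while a radius of 2 passes through unchanged.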
Step 104, determining an intersection area of the sphere model and the target nail model;
specifically, when the spherical surface of the spherical model overlaps with the nail surface of the target nail model, an intersection region between the two models can be identified, and the intersection region is also a three-dimensional model.
In addition, in step 104, a preset region corresponding to the nail root in the target nail model may be identified, and then an intersection region of the sphere model and the preset region may be determined.
For a specific implementation of the step of identifying the preset region corresponding to the nail root in the target nail model, reference may be made to S301 to S305 of the above embodiment, which is not described herein again.
Also, in executing step 104, the intersection region between the sphere model and the preset region may be obtained when the spherical surface of the sphere model overlaps the nail surface corresponding to the preset region (or the nail surface of the target nail model).
wherein the sphere model is a three-dimensional model, and the preset region is also a three-dimensional model of a part of the nail cut from the three-dimensional target nail model.
To ensure that the intersection region takes the form of a circular arc, the lunula must present a uniformly curved shape. Therefore, when taking the (likewise three-dimensional) intersection region between the sphere model and the preset region, the spherical surface of the sphere model must overlap the nail surface of the preset region (a partial three-dimensional model of the nail); that is, the three-dimensional spatial angle of the spherical surface must be consistent with the maximum plane angle of the preset region (the nail region corresponding to the nail root).
Since the nail is curved, the surface of the sphere model must match the curvature of the preset region, i.e., the angles must be consistent, so that the obtained intersection region maps to a crescent-shaped target region in the two-dimensional target image.
In this way, the shape of the target region described below is always kept in a uniform circular arc shape regardless of how the user adjusts the value of n.
In one example, as shown in fig. 3 and 4, when the spherical surface of the spherical model 35 overlaps the nail surface corresponding to the preset region 33, the intersection region between the spherical model 35 and the preset region 33 is the region 34 labeled with crescent.
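In a planar simplification (treating the nail surface as flat), the intersection region of step 104 reduces to the set of nail-surface points lying inside the sphere. A hedged sketch — the sphere centre, radius, and sample points are all illustrative:

```python
import numpy as np

def intersection_region(surface_points, center, radius):
    """Intersection of the sphere model with the nail (preset) region:
    keep the surface points whose distance to the sphere centre is <= radius."""
    pts = np.asarray(surface_points, dtype=float)
    d = np.linalg.norm(pts - np.asarray(center, dtype=float), axis=1)
    return pts[d <= radius]

# Root-region points of a flattened nail; a sphere of radius 2 centred at the
# nail-root midline keeps only the nearby points (the future lunula)
root_pts = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0], [0.0, 1.0]])
lunula = intersection_region(root_pts, center=[0.0, 0.0], radius=2.0)
```

On the real curved nail surface the same membership test applies point-wise to the three-dimensional preset-region mesh.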
Step 105, performing preset processing on a target region in the target image that matches the intersection region.
The intersection region is itself a part of the three-dimensional target nail model. Therefore, in order to map the three-dimensional intersection region to the two-dimensional target image, two-dimensional positioning information corresponding to the intersection region in the target nail model may be acquired, and the preset processing is then performed on the target region in the target image that matches that two-dimensional positioning information.
Since the target nail model is a three-dimensional model of a target nail region in the target image, coordinate matching of the two-dimensional positioning information of the intersection region is performed in the target image; that is, a target region with the same coordinates as the two-dimensional positioning information can be obtained from the target image. This target region is the region at the root of a nail in the two-dimensional target image where the crescent mark is to be added.
The preset processing may be to add a layer of white mask so that the crescent-shaped region appears white and is distinguished from the other regions of the nail. Thus, even if no lunula is visible on a user's fingernail, or the lunula is not obvious, the method of the embodiment of the present invention can make the lunula clear and obvious.
Alternatively, the preset processing may be to add a mask of another color (white is preferred, but other colors are possible).
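The mapping-and-masking step can be sketched as follows, assuming the three-dimensional intersection region has already been projected to a list of two-dimensional pixel coordinates; the blending strength and the toy image are illustrative, and the embodiment only requires that a (preferably white) mask be added.

```python
import numpy as np

def apply_lunula_mask(image, coords_2d, strength=0.6):
    """Blend a white overlay into the matched target region: a sketch of
    the 'preset processing', assuming coords_2d holds the (row, col)
    pixels obtained by projecting the 3-D intersection region."""
    out = image.astype(np.float32)
    rows, cols = coords_2d[:, 0], coords_2d[:, 1]
    white = np.array([255.0, 255.0, 255.0])
    out[rows, cols] = (1.0 - strength) * out[rows, cols] + strength * white
    return out.astype(np.uint8)

img = np.full((10, 10, 3), 120, dtype=np.uint8)   # toy nail image
target = np.array([[2, 3], [2, 4], [3, 3]])       # toy crescent pixels
res = apply_lunula_mask(img, target)
```

A partial blend rather than pure white keeps the masked region looking like part of the nail; setting `strength=1.0` would reproduce a solid white mask.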
In the embodiment of the present invention, nail regions in a target image are identified and a nail model corresponding to each nail region is determined. For a target nail model among the nail models, a sphere model is generated according to the width of the nail root in the target nail model, and an intersection region of the sphere model and the target nail model is then determined so that the nail shape of the intersection region takes on a crescent shape. Finally, preset processing is performed on a target region in the target image that matches the intersection region; that is, the intersection region is mapped to the target region in the target image, so that the target region is also crescent-shaped and located at the nail root of the target nail region corresponding to the target nail model. After the preset processing, the target region can be distinguished from other nail regions, its shape is close to a crescent and more conspicuous, and the effect is achieved that, when a hand image is processed, the lunula of the fingernails in the processed image becomes more obvious and clear.
In addition, the method provided by the embodiment of the present invention can meet users' demand for adding a crescent white (the lunula, sometimes called the "little sun") to their fingernails. Users can thereby obtain different personal and social experiences, satisfying their desire to present a healthy and refined state of living to others. Meanwhile, the hand recognition algorithm is extended, with a focus on locating and processing the fingernails.
In addition, on the basis that the technical solution ensures accurate matching of the target region corresponding to the crescent mark, more crescent-white schemes can be provided to arouse greater interest from users.
For example, after the accurately matched crescent whites (i.e., target regions) are computed, they can be resized and recolored to realize various crescent-white schemes. The user can control the size of the crescent white by changing the value of n: the larger n is, the smaller the crescent white, and vice versa. The target regions of different fingers can also be adjusted with different n values; for example, the image-processing interface may provide both a global n-value adjustment control and an independent n-value control for the crescent white of each finger. The color of the processed target region can also be toned by applying a color template to the basic crescent white.
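The size control described above reduces to a simple relationship, sketched here; the per-finger slider values are hypothetical and stand in for whatever adjustment interface the image-processing UI provides.

```python
def lunula_length(total_length_y, n):
    """Length of the root region after the n-fold equal division; larger n
    gives a smaller root region and hence a smaller crescent white."""
    if n <= 0:
        raise ValueError("n must be > 0")
    return total_length_y / n

# Hypothetical per-finger n values, standing in for the independent
# adjustment controls on the image-processing interface.
per_finger_n = {"thumb": 4, "index": 5, "middle": 5, "ring": 6, "pinky": 6}
sizes = {finger: lunula_length(12.0, n) for finger, n in per_finger_n.items()}
```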
In addition, when the target image is a preview image in a shooting preview interface of the electronic device, the target region in the target image can be recognized in real time and the preset processing performed on it; when the posture of the fingers in the target image changes, the spatial positional relationship between the target region (i.e., the crescent white) and the fingernails in the image is processed synchronously in real time. By continuously recognizing the hand, locating the nail positions, and accurately fitting the matched crescent white onto each nail, the user obtains an accurately located and processed target region with good effect when previewing the target image from any angle.
With the technical solution of the embodiment of the present invention, when a user previews or takes a picture in a mobile phone's shooting mode, the camera automatically captures images to obtain a two-dimensional image of the hand and depth information of the hand image, from which a three-dimensional hand model is constructed. The coordinates of the nail-region feature points in the current frame are then detected, the nail positions are located for model identification and segmentation, and accurate nail data is uploaded. From the uploaded nail model information, a crescent white of the corresponding matching degree is calculated to achieve accurate matching; the depth information of the hand and nails is synchronized in real time, and modeling and matching continue to provide the preview effect.
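The per-frame preview flow described above can be sketched as a pipeline; the stage callbacks are toy stand-ins for the real camera, hand-modeling, nail-segmentation, lunula-matching, and rendering components, whose concrete implementations the embodiment leaves open.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Frame:
    image: Any   # 2-D hand image
    depth: Any   # per-pixel depth information

def preview_pipeline(frames, build_hand_model, locate_nails, fit_lunula, render):
    """Per-frame sketch of the preview flow: model the hand in 3-D,
    locate the nail models, fit a crescent region to each, and render
    the processed preview frame."""
    for frame in frames:
        hand = build_hand_model(frame.image, frame.depth)
        nails = locate_nails(hand)
        regions = [fit_lunula(nail) for nail in nails]
        yield render(frame.image, regions)

# Toy stand-ins for each stage, just to show the data flow.
preview = list(preview_pipeline(
    [Frame("frame-0", "depth-0")],
    build_hand_model=lambda img, d: {"image": img, "depth": d},
    locate_nails=lambda hand: ["thumb-nail", "index-nail"],
    fit_lunula=lambda nail: nail + ":lunula",
    render=lambda img, regions: (img, regions),
))
```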
Referring to fig. 5, a block diagram of an image processing apparatus according to an embodiment of the present invention is shown. The image processing device of the embodiment of the invention can realize the details of the image processing method in the embodiment and achieve the same effect. The image processing apparatus shown in fig. 5 includes:
an obtaining module 501, configured to obtain a target image;
a first determining module 502, configured to identify nail regions in the target image, and determine a nail model corresponding to each nail region;
a generating module 503, configured to generate, for a target nail model in the nail models, a sphere model according to a width of a nail root in the target nail model;
a second determining module 504 for determining an intersection region of the sphere model and the target nail model;
and the processing module 505 is configured to perform preset processing on a target region in the target image, where the target region is matched with the intersection region.
Optionally, the second determining module 504 includes:
the first acquisition submodule is used for acquiring the target nail characteristics of a target nail area corresponding to the target nail model in the target image;
the second acquisition submodule is used for acquiring corresponding three-dimensional positioning information of the target nail characteristics in the target nail model;
the first determining submodule is used for determining the total nail length y corresponding to the target nail model according to the three-dimensional positioning information of the target nail characteristics, wherein the total nail length is in the nail growth direction;
the dividing submodule is used for equally dividing the target nail model by n in the nail growth direction to generate n model areas with the nail length of y/n, wherein n is greater than 0;
the first identification submodule is used for identifying a target model area which is positioned at the lowest part of the nail growth direction in the n model areas as a preset area corresponding to a nail root in the target nail model;
and the second determining submodule is used for determining an intersection area of the sphere model and the preset area.
Optionally, the generating module 503 includes:
the third determining submodule is used for determining the radius r according to the width d of the nail root in the nail width direction in the target nail model, where a &lt; r ≤ d and a is a constant;
and the first generation submodule is used for generating a sphere model according to the radius r.
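A minimal sketch of the radius choice made by these submodules; the midpoint of the interval (a, d] is one illustrative pick, since the embodiment only constrains r to satisfy a &lt; r ≤ d.

```python
def sphere_radius(d, a=0.0):
    """Pick a radius r with a < r <= d from the nail-root width d. Any
    value in (a, d] satisfies the constraint; the interval midpoint used
    here is one illustrative choice, not mandated by the embodiment."""
    if not 0.0 <= a < d:
        raise ValueError("need 0 <= a < d so that a < r <= d is satisfiable")
    return (a + d) / 2.0
```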
Optionally, the first determining module 502 includes:
the second identification submodule is used for identifying a nail area in the target image according to the nail characteristics in the target image;
the third acquisition sub-module is used for acquiring a hand model corresponding to the hand region in the target image;
the fourth obtaining submodule is used for obtaining corresponding three-dimensional positioning information of the nail features in the hand model;
and the second generation submodule is used for generating a nail model corresponding to each nail region in the target image according to the three-dimensional positioning information corresponding to the nail characteristics of each nail region.
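A minimal sketch of the second generation submodule's output, assuming each nail region's features arrive as (x, y, z) points with y along the growth direction; the dict layout is a hypothetical stand-in for the per-nail model.

```python
def nail_models_from_features(nail_features_3d):
    """Build a per-nail model from each nail region's 3-D feature points.
    Each model keeps the raw points plus the extent along the growth
    axis y (the 'total nail length y' used by the determining submodule)."""
    models = []
    for pts in nail_features_3d:              # one point list per nail region
        ys = [p[1] for p in pts]
        models.append({
            "points": pts,
            "length_y": max(ys) - min(ys),    # total nail length y
        })
    return models

# Two toy nail regions with (x, y, z) feature points.
models = nail_models_from_features([
    [(0.0, 0.0, 0.0), (1.0, 3.0, 0.5)],
    [(0.0, 0.5, 0.0), (0.5, 2.5, 0.2)],
])
```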
The image processing apparatus provided in the embodiment of the present invention can implement each process implemented by the electronic device in the above method embodiments, and is not described here again to avoid repetition.
The image processing apparatus identifies nail regions in a target image and determines a nail model corresponding to each nail region. For a target nail model among the nail models, it generates a sphere model according to the width of the nail root in the target nail model, and then determines an intersection region of the sphere model and the target nail model so that the nail shape of the intersection region takes on a crescent shape. Finally, it performs preset processing on a target region in the target image that matches the intersection region; that is, the intersection region is mapped to the target region in the target image, so that the target region is also crescent-shaped and located at the nail root of the target nail region corresponding to the target nail model. After the preset processing, the target region can be distinguished from other nail regions, its shape is close to a crescent and more conspicuous, and the effect is achieved that, when a hand image is processed, the lunula of the fingernails in the processed image becomes more obvious and clear.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, and power supply 411. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 6 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
An input unit 404 for acquiring a target image;
a processor 410 for identifying nail regions in the target image and determining a nail model corresponding to each nail region; for a target nail model in the nail models, generating a sphere model according to the width of the nail root in the target nail model; determining an intersection region of the sphere model and the target nail model; and performing preset processing on a target area matched with the intersection area in the target image.
In the embodiment of the present invention, nail regions in a target image are identified and a nail model corresponding to each nail region is determined. For a target nail model among the nail models, a sphere model is generated according to the width of the nail root in the target nail model, and an intersection region of the sphere model and the target nail model is then determined so that the nail shape of the intersection region takes on a crescent shape. Finally, preset processing is performed on a target region in the target image that matches the intersection region; that is, the intersection region is mapped to the target region in the target image, so that the target region is also crescent-shaped and located at the nail root of the target nail region corresponding to the target nail model. After the preset processing, the target region can be distinguished from other nail regions, its shape is close to a crescent and more conspicuous, and the effect is achieved that, when a hand image is processed, the lunula of the fingernails in the processed image becomes more obvious and clear.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used for receiving and sending signals during a message transceiving process or a call process; specifically, it receives downlink data from a base station and delivers it to the processor 410 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Furthermore, the radio frequency unit 401 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 402, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402 or stored in the memory 409 into an audio signal and output as sound. Also, the audio output unit 403 may also provide audio output related to a specific function performed by the electronic apparatus 400 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive audio or video signals. The input unit 404 may include a Graphics Processing Unit (GPU) 4041 and a microphone 4042; the graphics processor 4041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 406. The image frames processed by the graphics processor 4041 may be stored in the memory 409 (or another storage medium) or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 may receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 401 and output.
The electronic device 400 also includes at least one sensor 405, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 4061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 4061 and/or the backlight when the electronic apparatus 400 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 406 is used to display information input by the user or information provided to the user. The Display unit 406 may include a Display panel 4061, and the Display panel 4061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 407 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, can collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 4071 using a finger, a stylus, or any suitable object or attachment). The touch panel 4071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 410, and receives and executes commands from the processor 410. In addition, the touch panel 4071 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 4071, the user input unit 407 may include other input devices 4072. Specifically, the other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described here again.
Further, the touch panel 4071 can be overlaid on the display panel 4061, and when the touch panel 4071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 410 to determine the type of the touch event, and then the processor 410 provides a corresponding visual output on the display panel 4061 according to the type of the touch event. Although in fig. 6, the touch panel 4071 and the display panel 4061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 4071 and the display panel 4061 may be integrated to implement the input and output functions of the electronic device, and this is not limited herein.
The interface unit 408 is an interface for connecting an external device to the electronic apparatus 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 400 or may be used to transmit data between the electronic apparatus 400 and an external device.
The memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 409 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 410 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 409 and calling data stored in the memory 409, thereby performing overall monitoring of the electronic device. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The electronic device 400 may further include a power supply 411 (e.g., a battery) for supplying power to various components, and preferably, the power supply 411 may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 400 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 410, a memory 409, and a computer program that is stored in the memory 409 and can be run on the processor 410, and when being executed by the processor 410, the computer program implements each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An image processing method applied to an electronic device, the method comprising:
acquiring a target image;
identifying nail regions in the target image and determining a nail model corresponding to each nail region;
for a target nail model in the nail models, generating a sphere model according to the width of the nail root in the target nail model;
determining an intersection region of the sphere model and the target nail model;
and performing preset processing on a target area matched with the intersection area in the target image.
2. The method of claim 1, wherein the determining an intersection region of the sphere model and the target nail model comprises:
acquiring target nail characteristics of a target nail region corresponding to the target nail model in the target image;
acquiring corresponding three-dimensional positioning information of the target nail characteristics in the target nail model;
determining a total nail length y corresponding to the target nail model according to the three-dimensional positioning information of the target nail characteristics, wherein the total nail length is in the nail growth direction;
in the nail growth direction, equally dividing the target nail model by n to generate n model areas with the nail length of y/n, wherein n is greater than 0;
identifying a target model area positioned at the lowest part of the growth direction of the fingernails in the n model areas as a preset area corresponding to the fingernail root in the target fingernail model;
and determining an intersection area of the sphere model and the preset area.
3. The method of claim 1, wherein generating a sphere model from the width of the nail base in the target nail model comprises:
determining the radius r according to the width d of the nail root in the nail width direction in the target nail model, where a &lt; r ≤ d and a is a constant;
and generating a sphere model according to the radius r.
4. The method of claim 1, wherein identifying nail regions in the target image and determining a nail model for each nail region comprises:
identifying a nail region in the target image according to the nail features in the target image;
acquiring a hand model corresponding to a hand region in the target image;
acquiring corresponding three-dimensional positioning information of the nail features in the hand model;
and generating a nail model corresponding to each nail region in the target image according to the three-dimensional positioning information corresponding to the nail characteristics of each nail region.
5. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a target image;
the first determining module is used for identifying nail regions in the target image and determining a nail model corresponding to each nail region;
the generating module is used for generating a spherical model for a target nail model in the nail models according to the width of the nail root in the target nail model;
a second determination module for determining an intersection region of the sphere model and the target nail model;
and the processing module is used for carrying out preset processing on a target area matched with the intersection area in the target image.
6. The apparatus of claim 5, wherein the second determining module comprises:
the first acquisition submodule is used for acquiring the target nail characteristics of a target nail area corresponding to the target nail model in the target image;
the second acquisition submodule is used for acquiring corresponding three-dimensional positioning information of the target nail characteristics in the target nail model;
the first determining submodule is used for determining the total nail length y corresponding to the target nail model according to the three-dimensional positioning information of the target nail characteristics, wherein the total nail length is in the nail growth direction;
the dividing submodule is used for equally dividing the target nail model by n in the nail growth direction to generate n model areas with the nail length of y/n, wherein n is greater than 0;
the first identification submodule is used for identifying a target model area which is positioned at the lowest part of the nail growth direction in the n model areas as a preset area corresponding to a nail root in the target nail model;
and the second determining submodule is used for determining an intersection area of the sphere model and the preset area.
7. The apparatus of claim 5, wherein the generating module comprises:
the third determining submodule is used for determining the radius r according to the width d of the nail root in the nail width direction in the target nail model, where a &lt; r ≤ d and a is a constant;
and the first generation submodule is used for generating a sphere model according to the radius r.
8. The apparatus of claim 5, wherein the first determining module comprises:
the second identification submodule is used for identifying a nail area in the target image according to the nail characteristics in the target image;
the third acquisition sub-module is used for acquiring a hand model corresponding to the hand region in the target image;
the fourth obtaining submodule is used for obtaining corresponding three-dimensional positioning information of the nail features in the hand model;
and the second generation submodule is used for generating a nail model corresponding to each nail region in the target image according to the three-dimensional positioning information corresponding to the nail characteristics of each nail region.
9. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the image processing method according to any one of claims 1 to 4.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps in the image processing method according to any one of claims 1 to 4.
CN201911329892.9A 2019-12-20 2019-12-20 Image processing method and device Active CN111091519B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911329892.9A CN111091519B (en) 2019-12-20 2019-12-20 Image processing method and device

Publications (2)

Publication Number Publication Date
CN111091519A true CN111091519A (en) 2020-05-01
CN111091519B CN111091519B (en) 2023-04-28

Family

ID=70396634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911329892.9A Active CN111091519B (en) 2019-12-20 2019-12-20 Image processing method and device

Country Status (1)

Country Link
CN (1) CN111091519B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473563A * 2013-09-23 2013-12-25 Cheng Tao Fingernail image processing method and system, and fingernail feature analysis method and system
JP2014215735A * 2013-04-24 2014-11-17 University of Tsukuba Nail image synthesizing device, nail image synthesizing method, and nail image synthesizing program
CN104414105A (en) * 2013-09-05 2015-03-18 卡西欧计算机株式会社 Nail print apparatus and printing method thereof
US20160270504A1 (en) * 2015-03-20 2016-09-22 Casio Computer Co., Ltd. Drawing device and method for detecting shape of nail in the same
CN106127181A * 2016-07-02 2016-11-16 乐活无限(北京)科技有限公司 Virtual manicure try-on method and system
US20160345708A1 (en) * 2013-08-23 2016-12-01 Preemadonna Inc. Nail Decorating Apparatus
CN106651879A (en) * 2016-12-23 2017-05-10 深圳市拟合科技有限公司 Method and system for extracting nail image
US20170154214A1 (en) * 2015-11-27 2017-06-01 Holition Limited Locating and tracking fingernails in images
CN109272519A * 2018-09-03 2019-01-25 先临三维科技股份有限公司 Nail contour determination method and apparatus, storage medium, and processor

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022095860A1 (en) * 2020-11-05 2022-05-12 北京达佳互联信息技术有限公司 Fingernail special effect adding method and device
CN112750203A (en) * 2021-01-21 2021-05-04 脸萌有限公司 Model reconstruction method, device, equipment and storage medium
CN112750203B (en) * 2021-01-21 2023-10-31 脸萌有限公司 Model reconstruction method, device, equipment and storage medium
CN113660424A (en) * 2021-08-19 2021-11-16 展讯通信(上海)有限公司 Image shooting method and related equipment

Also Published As

Publication number Publication date
CN111091519B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
CN108184050B (en) Photographing method and mobile terminal
CN111223143B (en) Key point detection method and device and computer readable storage medium
CN107835367A Image processing method, device and mobile terminal
CN107817939A Image processing method and mobile terminal
CN108989672B (en) Shooting method and mobile terminal
CN108712603B (en) Image processing method and mobile terminal
CN107833177A Image processing method and mobile terminal
CN107948499A Image capturing method and mobile terminal
CN109685915B (en) Image processing method and device and mobile terminal
CN107734251A Photographing method and mobile terminal
CN108683850B (en) Shooting prompting method and mobile terminal
CN111091519B (en) Image processing method and device
CN111047511A (en) Image processing method and electronic equipment
CN109671034B (en) Image processing method and terminal equipment
CN108377339A Photographing method and camera device
CN111031234B (en) Image processing method and electronic equipment
US20230014409A1 (en) Detection result output method, electronic device and medium
CN111031253B (en) Shooting method and electronic equipment
CN107678672A Display processing method and mobile terminal
CN111080747B (en) Face image processing method and electronic equipment
CN110908517B (en) Image editing method, image editing device, electronic equipment and medium
CN113365085A (en) Live video generation method and device
CN109544445B (en) Image processing method and device and mobile terminal
CN108600544A One-handed control method and terminal
CN107563353B (en) Image processing method and device and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230705

Address after: 518133 tower a 2301-09, 2401-09, 2501-09, 2601-09, phase III, North District, Yifang center, 99 Xinhu Road, N12 District, Haiwang community, Xin'an street, Bao'an District, Shenzhen City, Guangdong Province

Patentee after: VIVO MOBILE COMMUNICATIONS (SHENZHEN) Co.,Ltd.

Address before: 523860 No. 283 BBK Avenue, Changan Town, Changan, Guangdong.

Patentee before: VIVO MOBILE COMMUNICATION Co.,Ltd.
