WO2019200718A1 - Image processing method, apparatus and electronic device - Google Patents

Image processing method, apparatus and electronic device

Info

Publication number
WO2019200718A1
Authority
WO
WIPO (PCT)
Prior art keywords
face image
image
face
texture value
processing
Prior art date
Application number
PCT/CN2018/094071
Other languages
English (en)
French (fr)
Inventor
李建亿
Original Assignee
太平洋未来科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 太平洋未来科技(深圳)有限公司
Publication of WO2019200718A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G06T 15/50 Lighting effects
    • G06T 15/506 Illumination models
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10012 Stereo images
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an image processing method, apparatus, and electronic device.
  • the image processing method, device and electronic device provided by the embodiments of the present invention are used to solve at least the above problems in the related art.
  • An embodiment of the present invention provides an image processing method, including:
  • identifying a face image in the picture, and acquiring feature information of the face image; generating a first 3D face image matching the face image according to the feature information and a standard 3D face model; and lighting the first 3D face image to obtain a second 3D face image;
  • performing light and shadow processing on the face image according to the first 3D face image and the second 3D face image comprises: acquiring, for a target point on the face image, a corresponding first texture value on the first 3D face image and a corresponding second texture value on the second 3D face image; and performing light and shadow processing on the face image according to the first texture value and the second texture value of the target point.
  • performing light and shadow processing on the face image according to the first texture value and the second texture value of the target point comprises: calculating a difference between the second texture value and the first texture value; and adjusting the texture value of the target point according to the difference.
  • the feature information includes a size and a rotation angle of the face image
  • identifying the face image in the picture includes: identifying face key points in the picture by using a face recognition model, and obtaining the coordinate positions of the key points; and determining the size and rotation angle of the face image according to the coordinate positions of the key points.
  • the method further includes: determining an eye mask area and a lighting rendering parameter of the face image; and lighting the eye mask area according to the lighting rendering parameter.
  • the image is acquired by a camera of an electronic device
  • the camera includes a lens, an autofocus voice coil motor, an image sensor, and a micro memory alloy optical image stabilization device. The lens is fixed on the autofocus voice coil motor, and the image sensor converts the optical scene captured by the lens into image data. The autofocus voice coil motor is mounted on the micro memory alloy optical image stabilization device, and the processor of the electronic device drives the micro memory alloy optical image stabilization device, according to the lens shake data detected by the gyroscope, to achieve shake compensation for the lens;
  • the micro memory alloy optical image stabilization device includes a movable plate and a substrate; the autofocus voice coil motor is mounted on the movable plate, the substrate is larger in size than the movable plate, and the movable plate is mounted on the substrate
  • a plurality of movable supports are disposed between the movable plate and the substrate; four side walls are provided on the periphery of the substrate, and a notch is formed in the middle portion of each side wall; a micro switch is installed in each notch, and a movable member of the micro switch can open or close the notch under the instruction of the processor; a strip is disposed on the side of the movable member adjacent to the movable plate, along the width direction of the movable member
  • the substrate is provided with a temperature control circuit connected to the electrical contact
  • the processor controls opening and closing of the temperature control circuit according to a lens shake direction detected by the gyroscope
  • the middle of each of the four sides of the movable plate is connected to an elastic member
  • the elastic member is a spring.
  • the electronic device is a camera, and the camera is mounted on a bracket, the bracket includes a mounting seat, a support shaft, and three support frames hinged on the support shaft;
  • the mounting base includes a first mounting plate and a second mounting plate that are perpendicular to each other, and both the first mounting plate and the second mounting plate are used for mounting the camera. The support shaft is vertically mounted on the bottom surface of the first mounting plate; the end of the support shaft away from the bottom surface of the mounting base is provided with a circumferential surface whose radial dimension is larger than that of the support shaft. The three support frames are mounted on the support shaft from top to bottom, and the horizontal projections of every two deployed support frames are at an angle. The support shaft is a telescopic rod member and includes a tube body connected to the mounting base and a rod body partially retractable into the tube body; the portion of the rod body that extends into the tube body includes a first section, a second section, a third section, and a fourth section that are hinged in sequence, the first section being connected to the tube body. The end of the first section adjacent to the second section is provided with a mounting groove, and a locking member is hinged in the mounting groove
  • a mounting groove is disposed at the end of the second section adjacent to the third section, and a locking member is hinged in the mounting groove; the end of the third section adjacent to the second section is provided with a locking hole detachably engaged with the locking member; the third section is provided with a mounting groove near its end adjacent to the fourth section, a locking member is hinged in the mounting groove, and the end of the fourth section adjacent to the third section is provided with a locking hole detachably engaged with the locking member.
  • each of the support frames is further connected with a distance adjusting device
  • the distance adjusting device comprises a bearing ring mounted on the bottom of the support frame, a rotating ring connected to the bearing ring, a tube body, a screw, a threaded sleeve, and a support rod. One end of the tube body is provided with a plug; part of the screw is installed in the tube body through the plug, and the plug is provided with an internal thread adapted to the screw, while the other part of the screw is connected to the rotating ring. One end of the threaded sleeve is installed in the tube body and is screwed onto the screw, and the other end of the threaded sleeve protrudes outside the tube body and is fixedly connected to the support rod. The inner wall of the tube body is provided with a protrusion, and the outer side wall of the threaded sleeve is provided with a slide rail adapted to the protrusion along its length direction; the tube body includes adjacent
  • an image processing apparatus including:
  • an identification module, configured to identify a face image in the picture and acquire feature information of the face image; a generating module, configured to generate, according to the feature information and a standard 3D face model, a first 3D face image matching the face image; a first lighting module, configured to light the first 3D face image to obtain a second 3D face image; and a processing module, configured to perform light and shadow processing on the face image according to the first 3D face image and the second 3D face image.
  • processing module includes:
  • an obtaining submodule, configured to acquire, for a target point on the face image, a corresponding first texture value on the first 3D face image and a corresponding second texture value on the second 3D face image;
  • a processing submodule configured to perform light and shadow processing on the face image according to the first texture value and the second texture value of the target point.
  • processing sub-module is specifically configured to calculate a difference between the second texture value and the first texture value; and adjust a texture value of the target point according to the difference.
  • the feature information includes a size and a rotation angle of the face image
  • the identification module is specifically configured to identify a face key point in the image by using a face recognition model, and obtain coordinates of the key point. Position; determining a size and a rotation angle of the face image according to a coordinate position of the key point.
  • the device further includes: a determining module, configured to determine an eye mask area and a lighting rendering parameter of the face image; and a second lighting module, configured to perform the eye mask area according to the lighting rendering parameter Light up.
  • a still further aspect of the embodiments of the present invention provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor;
  • the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the image processing method of any of the above-described embodiments of the present invention .
  • the image processing method, device, and electronic device provided by the embodiments of the present invention can make the facial features of a photographed portrait more three-dimensional and layered while brightening the light, and give the eyes more charm, approaching the photographing effect of an SLR camera.
  • FIG. 1 is a flowchart of an image processing method according to an embodiment of the present invention
  • FIG. 2 is a specific flowchart of step S104 according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of an image processing method according to an embodiment of the present invention.
  • FIG. 4 is a structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 5 is a structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram showing the hardware structure of an electronic device for performing an image processing method provided by an embodiment of the method of the present invention.
  • FIG. 7 is a structural diagram of a camera provided by an embodiment of the present invention.
  • FIG. 8 is a structural diagram of a micro memory alloy optical image stabilization device according to an embodiment of the present invention.
  • FIG. 9 is a structural diagram showing an operation state of a micro memory alloy optical image stabilization device according to an embodiment of the present invention.
  • Figure 10 is a structural view of a bracket according to an embodiment of the present invention.
  • Figure 11 is a structural view of a support shaft according to an embodiment of the present invention.
  • FIG. 12 is a structural diagram of a distance adjusting device according to an embodiment of the present invention.
  • FIG. 1 is a flowchart of an image processing method according to an embodiment of the present invention. As shown in FIG. 1 , an image processing method provided by an embodiment of the present invention includes:
  • the face image in the image needs to be recognized.
  • an image in a picture acquired by real-time shooting can be identified, and an image stored in a picture saved locally on the terminal can also be identified.
  • the feature information of the face image includes, but is not limited to, the size of the face image and the rotation angle of the face image.
  • the range of the face image can be identified according to edge information and/or color information of the image.
  • the position of the key point determines the size of the face image and the angle of rotation of the face image.
  • the eyebrows, eyes, nose, face contour, and mouth in the face image each have a plurality of key points; that is, the positions and contours of the eyebrows, eyes, nose, face, and mouth in the face image can be determined by the coordinate positions of the key points.
  • positive and negative samples for face image key point recognition may be prepared in advance, and the face recognition model is trained according to the positive and negative samples.
  • the picture to be recognized is input into the face recognition model, and the key point coordinates of the face image in the to-be-identified picture are output.
  • the coordinate system can take the lower left corner of the picture as the origin, the right direction is the positive direction of the X axis, and the upward direction is the positive direction of the Y axis, and the coordinate value is measured by the number of pixels.
  • the watershed image segmentation algorithm is used to obtain the coordinate information of the forehead and chin, and the coordinates of all the key points obtained are integrated.
  • the depth information of the face can also be output through the model.
  • two boundary lines perpendicular to the X axis may be determined according to the coordinate positions of the left ear and the right ear
  • two boundary lines perpendicular to the Y axis are determined according to the coordinate positions of the forehead and the chin
  • the rectangle surrounded by the four lines is The size of the face image.
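The boundary-line construction above can be sketched as follows, using the coordinate convention described earlier (origin at the picture's lower-left corner, pixel units). The key-point names and coordinate values are hypothetical illustrations, not taken from this text.

```python
# Two lines perpendicular to the X axis pass through the ear key
# points, and two perpendicular to the Y axis pass through the
# forehead and chin; the enclosed rectangle gives the face size.

def face_box(keypoints):
    """Return (width, height) of the rectangle bounding the face.

    keypoints: dict mapping 'left_ear', 'right_ear', 'forehead',
    'chin' to (x, y) pixel coordinates.
    """
    x_left = keypoints["left_ear"][0]
    x_right = keypoints["right_ear"][0]
    y_top = keypoints["forehead"][1]
    y_bottom = keypoints["chin"][1]
    return abs(x_right - x_left), abs(y_top - y_bottom)

pts = {"left_ear": (120, 300), "right_ear": (280, 305),
       "forehead": (200, 420), "chin": (205, 240)}
print(face_box(pts))  # (160, 180)
```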
  • the rotation angle of the face image may be acquired by a face pose estimation algorithm.
  • the face pose estimation algorithm includes a variety of methods, which can be divided into a model-based method, an appearance-based method, and a classification-based method. Among them, the model-based method achieves the best results because the resulting face pose is continuous.
  • face sample images and their corresponding face rotation angles are input into a convolutional neural network, which is trained to output the face rotation angle or a preset range of the rotation angle, thereby obtaining a face rotation angle model.
  • when the rotation angles are relatively close, the light and shadow effects on the face are similar. To improve output efficiency, it is not necessary to output an accurate rotation angle; outputting only the range of the rotation angle suffices.
  • the front and the side of the face are equally divided into two or more interval ranges according to the face rotation angle, and the range of the face rotation angle corresponds to a face pose type.
  • the key point coordinates are input into the face rotation angle model, and the corresponding rotation angle interval is output.
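The angle-to-interval mapping can be sketched as follows. The interval width of 30 degrees over [-90, 90] is an assumption; the text only requires the front and side of the face to be split into two or more intervals, each corresponding to a pose type.

```python
# Quantize an estimated face rotation (yaw) angle into one of six
# 30-degree pose intervals over [-90, 90]. Width and range are
# illustrative assumptions.

def pose_interval(angle_deg, width=30, lo=-90, hi=90):
    angle = max(lo, min(hi - 1e-9, angle_deg))  # clamp into range
    return int((angle - lo) // width)

print(pose_interval(-85))  # 0  (strong left profile)
print(pose_interval(5))    # 3  (near-frontal)
print(pose_interval(80))   # 5  (strong right profile)
```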
  • the face model data file can be preloaded to establish a standard 3D face model.
  • the standard 3D face model is globally scaled according to the size of the face image in the feature information, such that the distance between the uppermost vertex of the forehead and the lowermost vertex of the chin is equal to the distance between the two boundary lines of step S101 perpendicular to the Y axis.
  • the distance between the two ears is equal to the distance between the two boundary lines of step S101 perpendicular to the X axis.
  • the standard 3D face model is rotated according to the rotation angle calculated in step S101 to match the postures of the two.
  • texture mapping of the model is completed using the face image identified in step S101, according to the face information, to generate a first 3D face image matching the face image; the texture values may be the three primary colors (RGB) of each pixel.
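The fitting steps above (global scaling to match the two boundary-line distances, then rotation to match the estimated pose) can be sketched as follows. The vertex data, the uniform-scale choice, and the yaw-only rotation are simplifying assumptions for illustration.

```python
# Scale the standard model so its forehead-chin and ear-ear spans
# match the detected face box, then rotate about the vertical (Y)
# axis by the estimated yaw angle.
import math

def fit_model(vertices, model_height, model_width,
              face_height, face_width, yaw_deg):
    sy = face_height / model_height
    sx = face_width / model_width
    s = (sx + sy) / 2.0                     # uniform global scale (assumption)
    c = math.cos(math.radians(yaw_deg))
    n = math.sin(math.radians(yaw_deg))
    out = []
    for x, y, z in vertices:
        x, y, z = x * s, y * s, z * s       # global scaling
        out.append((x * c + z * n, y, -x * n + z * c))  # yaw about Y
    return out

verts = [(1.0, 0.0, 0.0)]
print(fit_model(verts, 2.0, 2.0, 4.0, 4.0, 90.0))
```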
  • the first 3D face image may be lit by configuring lighting rendering parameters and combining them with a lighting model, applying natural light to the first 3D face image to simulate the light and shadow of real light hitting a person's face.
  • the lighting rendering parameters include, but are not limited to, the position of the light source, the warmth of the light, the material reflective rate, and the like, which may be adjusted according to the background environment information, the facial expression, etc. in the image to be recognized, or may be customized by the user.
  • the invention is not limited thereto.
  • the first 3D face image is lit using the Phong lighting model.
  • the light in 3D space is mainly composed of three components: ambient, diffuse, and specular. Since specular light is generally associated with metallic reflection, and the embodiments of the present invention mainly concern the shooting of human faces, only the ambient and diffuse illumination are used for calculation. The specific calculation formulas are as follows:
  • Ambient color = ambientStrength * lightColor
  • Diffuse color = diffuse * lightColor
  • light color refers to the color of the light
  • diffuse refers to the diffuse reflection coefficient of the material and the light source
  • ambientStrength refers to the ambient light intensity
  • the positive light refers to the normal illumination in the Phong lighting model
  • the negative light color is the opposite of the positive light color, thus obtaining the shadow effect
  • the Gamma is used to correct the illumination anomaly in a specific environment
  • the exposure is used to correct the overexposure phenomenon; the coefficients gamma and exposure both need to be tuned through repeated testing.
  • the specific calculation formula is as follows:
  • the above lighting process may be completed by the GPU of the terminal.
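The ambient and diffuse terms given above can be sketched per color channel as follows. The gamma and exposure correction step uses a common exponential tone-mapping form offered only as an assumption, since this text does not reproduce that formula; all parameter values are likewise illustrative.

```python
# Ambient + diffuse shading for one RGB texel, per the formulas
# above. `normal` and `light_dir` are unit vectors; multiplying the
# base color by (ambient + diffuse), and the gamma/exposure forms,
# are assumptions for illustration.
import math

def shade(base, light_color, normal, light_dir,
          ambient_strength=0.3, gamma=2.2, exposure=1.0):
    # diffuse coefficient: max(N . L, 0)
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    out = []
    for b, lc in zip(base, light_color):
        ambient = ambient_strength * lc   # Ambient = ambientStrength * lightColor
        diffuse = ndotl * lc              # Diffuse = diffuse * lightColor
        c = b * (ambient + diffuse)
        c = 1.0 - math.exp(-c * exposure) # exposure correction (assumed form)
        out.append(c ** (1.0 / gamma))    # gamma correction (assumed form)
    return out

print(shade((1.0, 0.8, 0.6), (1.0, 1.0, 1.0),
            (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))
```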
  • S104 Perform light and shadow processing on the face image according to the first 3D face image and the second 3D face image.
  • in the second 3D face image lit with natural light, the places the light reaches are brightened, while the places the light cannot reach form shadows. For a pixel at the same coordinate position, there is therefore a texture difference between the second 3D face image and the first 3D face image. According to this difference, the texture at the corresponding pixel position of the face image is strengthened or weakened, so that the face image acquires the same lighting effect as the second 3D face image.
  • the face image may be processed according to texture differences in the first 3D image and the second 3D image.
  • this step includes:
  • S1041: Acquire, for a target point on the face image, a first texture value corresponding to the first 3D face image and a second texture value corresponding to the second 3D face image.
  • the target point may be a key point from the above steps, or may be a position convenient for locating the facial features (for example, the nose, the cheekbone, the chin, the forehead, etc.); the present invention is not limited thereto.
  • a first target point corresponding to the target point is found in the first 3D face image, and a second target point corresponding to it is found in the second 3D face image.
  • the texture value may be the RGB value of the pixel; thus the first RGB value of the first target point and the second RGB value of the second target point are respectively acquired.
  • S1042 Perform light and shadow processing on the face image according to the first texture value and the second texture value of the target point.
  • performing light and shadow processing on the face image including:
  • Calculate the difference between the second RGB value and the first RGB value (the difference may be positive or negative), and add the difference to the original RGB value at the coordinate position corresponding to the target point in the face image; the result is the RGB value of the target point after the face image is lit.
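A minimal sketch of this adjustment rule, using hypothetical 8-bit RGB values:

```python
# For each target point, add the (second - first) texture difference
# to the face image's original RGB value, clamping to [0, 255].

def relight(original, first, second):
    out = []
    for o, a, b in zip(original, first, second):
        v = o + (b - a)              # difference may be negative (shadow)
        out.append(max(0, min(255, v)))
    return out

print(relight((180, 150, 130), (120, 110, 100), (160, 120, 90)))
# → [220, 160, 120]
```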
  • the image processing method provided by the embodiment of the invention can make the photographed portrait facial features more stereoscopic and more layered on the basis of the brightening light, and achieve the photographing effect of the SLR camera.
  • FIG. 3 is a flowchart of an image processing method according to an embodiment of the present invention. As shown in FIG. 3 , this embodiment is a specific implementation of the embodiment shown in FIG. 1 and FIG. 2 , and therefore, the specific implementation method and beneficial effects of the steps in the embodiment shown in FIG. 1 and FIG. 2 are not further described.
  • the image processing method provided by the embodiment specifically includes:
  • S304 Perform light and shadow processing on the face image according to the first 3D face image and the second 3D face image.
  • the eyeball area (i.e., the dark eyeball area) in the eye image can be obtained according to a gaze tracking algorithm, and parameters such as the light source position, the warmth of the light, and the material reflectivity are determined by the lighting rendering parameters configured in step S103.
  • the eye mask area is polished according to the illumination rendering parameter.
  • natural light corresponding to the configured illumination direction is simulated, so that a catchlight appears inside the eye mask area, giving the eyes more charm.
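A sketch of the catchlight step, assuming a precomputed eyeball mask and a highlight centre derived from the configured light direction; the radial falloff, intensity, and grayscale representation are illustrative assumptions, not taken from this text.

```python
# Brighten pixels inside the eyeball mask with a small radial
# highlight. `gray` and `mask` are row-major lists of equal shape.

def add_catchlight(gray, mask, center, radius=2.0, strength=90):
    h, w = len(gray), len(gray[0])
    out = [row[:] for row in gray]
    cx, cy = center
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue                      # only light the eye mask area
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            boost = strength * max(0.0, 1.0 - d2 / (radius * radius))
            out[y][x] = min(255, int(out[y][x] + boost))
    return out

gray = [[60] * 5 for _ in range(5)]
mask = [[1] * 5 for _ in range(5)]
lit = add_catchlight(gray, mask, center=(2, 2))
print(lit[2][2], lit[0][0])  # 150 60 (centre brightened, corner unchanged)
```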
  • the image processing method provided by the embodiment of the invention can make the photographed portraits more stereoscopic and more layered on the basis of the brightening light, and the eyes have more charm, thereby achieving the photographing effect of the SLR camera.
  • FIG. 4 is a structural diagram of an image processing apparatus according to an embodiment of the present invention. As shown in FIG. 4, the device specifically includes: an identification module 100, a generation module 200, a first lighting module 300, and a processing module 400. Among them:
  • the identification module 100 is configured to identify a face image in a picture, and acquire feature information of the face image.
  • the generating module 200 is configured to generate, according to the feature information and a standard 3D face model, a first 3D face image matching the face image; the first lighting module 300 is configured to light the first 3D face image to obtain a second 3D face image; and the processing module 400 is configured to perform light and shadow processing on the face image according to the first 3D face image and the second 3D face image.
  • the processing module 400 includes: an obtaining submodule, configured to acquire, for a target point on the face image, a corresponding first texture value on the first 3D face image and a corresponding second texture value on the second 3D face image; and a processing submodule, configured to perform light and shadow processing on the face image according to the first texture value and the second texture value of the target point.
  • the processing submodule is specifically configured to calculate a difference between the second texture value and the first texture value; and adjust a texture value of the target point according to the difference.
  • the feature information includes a size and a rotation angle of the face image
  • the identification module 100 is specifically configured to identify a face key point in the image by using a face recognition model to obtain the key point. a coordinate position; determining a size and a rotation angle of the face image according to a coordinate position of the key point.
  • the image processing apparatus provided by the embodiment of the present invention is specifically configured to perform the method provided by the embodiment shown in FIG. 1 and FIG. 2, and the implementation principle, method, function, and the like are similar to the embodiment shown in FIG. 1 and FIG. This will not be repeated here.
  • FIG. 5 is a structural diagram of an image processing apparatus according to an embodiment of the present invention. As shown in FIG. 5, the device specifically includes: an identification module 100, a generation module 200, a first lighting module 300, a processing module 400, a determination module 500, and a second lighting module 600.
  • the identification module 100 is configured to identify a face image in a picture, and acquire feature information of the face image.
  • the generating module 200 is configured to generate, according to the feature information and a standard 3D face model, a first 3D face image matching the face image; the first lighting module 300 is configured to light the first 3D face image to obtain a second 3D face image; and the processing module 400 is configured to perform light and shadow processing on the face image according to the first 3D face image and the second 3D face image.
  • the determining module 500 is configured to determine an eye mask area and a lighting rendering parameter of the face image, and the second lighting module 600 is configured to light the eye mask area according to the lighting rendering parameter.
  • the image processing apparatus provided by the embodiment of the present invention is specifically configured to perform the method provided by the embodiment shown in FIG. 3, and the implementation principle, method, and function of the embodiment are similar to those of the embodiment shown in FIG. 3, and details are not described herein again.
  • the image processing apparatus of the embodiments of the present invention may be separately disposed in the electronic device as one of the software or hardware functional units, or may be used as one of the functional modules integrated in the processor to perform image processing according to the embodiment of the present invention. method.
  • FIG. 6 is a schematic diagram showing the hardware structure of an electronic device for performing an image processing method provided by an embodiment of the method of the present invention.
  • the electronic device includes:
  • a processor 610 and a memory 620; one processor 610 is taken as an example in FIG. 6.
  • the apparatus for performing the image processing method described above may further include: an input device 630 and an output device 640.
  • the processor 610, the memory 620, the input device 630, and the output device 640 may be connected by a bus or other means, as exemplified by a bus connection in FIG.
  • the memory 620, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as those corresponding to the image processing method in the embodiments of the present invention.
  • the processor 610 performs various image applications and data processing of the server by executing non-volatile software programs, instructions, and modules stored in the memory 620, that is, implementing the image processing method.
  • the memory 620 can include a storage program area and a storage data area, wherein the storage program area can store an operating system, an application required for at least one function; the storage data area can be stored according to the use of the image processing apparatus provided by the embodiment of the present invention. Data, etc.
  • the memory 620 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • the memory 620 may optionally include memory remotely located relative to the processor 610, and such remote memory may be connected to the image processing apparatus over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • Input device 630 can receive input numeric or character information and generate key signal inputs related to user settings and function control of the image processing device.
  • Input device 630 can include a device such as a press module.
  • the one or more modules are stored in the memory 620, and when executed by the one or more processors 610, the image processing method is performed.
  • the electronic device of the embodiment of the invention exists in various forms, including but not limited to:
  • Mobile communication devices These devices are characterized by mobile communication functions and are mainly aimed at providing voice and data communication.
  • Such terminals include: smart phones (such as iPhone), multimedia phones, functional phones, and low-end phones.
  • Ultra-mobile personal computer equipment This type of equipment belongs to the category of personal computers, has computing and processing functions, and generally has mobile Internet access.
  • Such terminals include: PDAs, MIDs, and UMPC devices, such as the iPad.
  • Portable entertainment devices These devices can display and play multimedia content. Such devices include: audio, video players (such as iPod), handheld game consoles, e-books, and smart toys and portable car navigation devices.
  • the device embodiments described above are merely illustrative, wherein the modules described as separate components may or may not be physically separate, and the components displayed as modules may or may not be physical modules, ie may be located A place, or it can be distributed to multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement without deliberate labor.
  • Embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by an electronic device, cause the electronic device to perform the image processing method in any of the above method embodiments.
  • An embodiment of the present invention provides a computer program product, the computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by an electronic device, cause the electronic device to perform the image processing method in any of the above method embodiments.
  • In another embodiment, to facilitate the picture processing of the above embodiments, a camera with better anti-shake performance is provided for the electronic device. Pictures obtained by this camera are clearer than those from an ordinary camera and better meet the needs of beauty-app users. In particular, when the pictures acquired by this camera are used in the image processing method of the above embodiments, the effect is even better.
  • Specifically, an existing electronic-device camera (the electronic device being a mobile phone, a video camera, or the like) includes a lens 1, an autofocus voice coil motor 2, and an image sensor 3, all well known to those skilled in the art and therefore not described further here.
  • A micro memory-alloy optical anti-shake device is adopted because most existing anti-shake devices drive the lens with the Lorentz force generated by an energized coil in a magnetic field. Optical image stabilization requires driving the lens in at least two directions, which means arranging multiple coils; this challenges the miniaturization of the overall structure and is easily disturbed by external magnetic fields, degrading the anti-shake effect.
  • Some prior-art solutions instead stretch and shorten a memory-alloy wire through temperature changes, pulling the autofocus voice coil motor to realize lens-shake compensation: the control chip of the micro memory-alloy optical anti-shake actuator varies the drive signal to change the temperature of the memory-alloy wire, thereby controlling its elongation and contraction, and the position and travel of the actuator are calculated from the resistance of the memory-alloy wire.
  • However, the Applicant found that, because shake is random and uncertain, this structure alone cannot accurately compensate the lens when shake occurs repeatedly: the shape-memory alloy needs time both to heat up and to cool down. The above solution can compensate the lens for shake in a first direction, but when a subsequent shake in a second direction occurs, the memory-alloy wire cannot deform instantaneously, so compensation is easily delayed, and lens-shake compensation for repeated shakes and continuous shakes in different directions cannot be achieved precisely. This degrades the quality of the acquired pictures, so the camera structure needs improvement.
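The resistance-based compare-and-correct step described above can be sketched as a simple feedback loop. This is only an illustrative sketch: the function names, the tolerance, the proportional drive adjustment, and the hardware callbacks are all hypothetical, not the patent's actual control firmware.

```python
def correct_position(target_resistance, read_resistance, adjust_drive,
                     tolerance=0.02, max_steps=20):
    """Nudge the SMA drive signal until the measured wire resistance
    (a proxy for actuator position) is within tolerance of the target,
    mirroring the compare-and-correct step described above.

    `read_resistance` and `adjust_drive` are hypothetical callbacks
    to the actuator hardware.
    """
    for _ in range(max_steps):
        error = target_resistance - read_resistance()
        if abs(error) <= tolerance:
            return True           # within tolerance: position reached
        adjust_drive(error)       # heat (or cool) proportionally to the error
    return False                  # gave up: wire could not settle in time
```

A simulated wire whose resistance moves a fraction of the error per step converges within a few iterations; a real controller would also need the cooling-time limits the text describes.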
  • As shown in FIG. 7, the camera of this embodiment includes a lens 1, an autofocus voice coil motor 2, an image sensor 3, and a micro memory-alloy optical image stabilizer 4. The lens 1 is fixed on the autofocus voice coil motor 2, the image sensor 3 transmits the image acquired through the lens 1 to the identification module 100, and the autofocus voice coil motor 2 is mounted on the micro memory-alloy optical image stabilizer 4. The processor inside the electronic device drives the micro memory-alloy optical image stabilizer 4 according to the lens shake detected by a gyroscope (not shown) inside the electronic device, realizing lens-shake compensation.
  • With reference to FIG. 8, the improvements to the micro memory-alloy optical image stabilizer are as follows:
  • The stabilizer includes a movable plate 5 and a base plate 6, both rectangular plate-shaped members. The autofocus voice coil motor 2 is mounted on the movable plate 5. The base plate 6 is larger than the movable plate 5, which is mounted on it with a plurality of movable supports 7 in between — specifically, balls seated in grooves at the four corners of the base plate 6, easing the movement of the movable plate 5 on the base plate 6.
  • The base plate 6 has four side walls, the middle of each carrying a notch 8 fitted with a microswitch 9. The movable member 10 of each microswitch 9 can open or close its notch on instruction from the processing module. The side of each movable member 10 facing the movable plate 5 carries a strip-shaped electrical contact 11 laid along the width of the member, and the base plate 6 carries a temperature-control circuit (not shown) connected to the contact 11; the processing module opens and closes this circuit according to the lens-shake direction detected by the gyroscope.
  • The middle of each of the four sides of the movable plate 5 carries a shape-memory alloy wire 12, one end of which is fixed to the movable plate 5 while the other end is in sliding fit with the electrical contact 11. Elastic members 13 for resetting are arranged between each inner side wall of the base plate 6 and the movable plate 5.
  • The working process of the micro memory-alloy optical image stabilizer of this embodiment is described in detail below with reference to the above structure, taking two lens shakes in opposite directions as an example. When the lens shakes in a first direction, the gyroscope feeds the detected shake direction and distance back to the processor, which calculates the elongation of the shape-memory alloy wire needed to compensate the shake and drives the corresponding temperature-control circuit to heat that wire. The wire elongates and drives the movable plate in the direction that compensates the first-direction shake. Meanwhile the wire symmetrically opposite to it does not change, but the movable member connected with that opposite wire opens its notch, so that the opposite wire can protrude through the notch as the movable plate drags it; at this point the elastic members near the two wires are respectively stretched and compressed (as shown in FIG. 9). After the actuator has moved to the designated position, the resistance of the elongated wire is fed back, and by comparing the deviation of this resistance from the target value, the movement deviation of the actuator can be corrected.
  • When the second shake occurs, the processor first closes the notch via the movable member abutting the other wire and opens the movable member abutting the wire that is still elongated. The rotation of the member abutting the other wire pushes that wire back into place, the opening of the member abutting the elongated wire lets it protrude freely, and the two elastic members rapidly reset the movable plate. The processor then calculates the elongation needed to compensate the second shake and heats the other wire, which elongates and drives the movable plate in the direction that compensates the second-direction shake. Because the notch at the previously elongated wire is open, that wire does not hinder the other wire from driving the movable plate; and thanks to the opening speed of the movable members and the resetting action of the springs, the stabilizer of this embodiment can compensate accurately when shake occurs repeatedly, far outperforming prior-art micro memory-alloy optical anti-shake devices.
  • The above covers only two simple shakes. When shake occurs more times, or is not a back-and-forth motion, two adjacent shape-memory alloy wires can be elongated to compensate for it; the basic working process is the same as described above and is not repeated here. The detection and feedback of the shape-memory alloy resistance and of the gyroscope are prior art and are likewise not described here.
  • In another embodiment, the electronic device is a video camera, which can be mounted on a camera bracket. The Applicant found during use that existing camera brackets have the following defects: 1. Existing brackets are all supported by a tripod, but on markedly uneven ground a tripod cannot guarantee that the mounting seat is level; the bracket shakes or tilts easily, which adversely affects shooting. 2. An existing bracket cannot serve as a shoulder-mounted camera rig; its structure and function are single-purpose, and a separate shoulder rig must be provided whenever shoulder shooting is required.
  • The bracket of this embodiment therefore includes a mounting seat 14, a support shaft 15, and three support frames 16 hinged on the support shaft. The mounting seat 14 includes a first mounting plate 141 and a second mounting plate 142 perpendicular to each other, either of which can carry the camera. The support shaft 15 is mounted vertically on the bottom face of the first mounting plate 141, and the bottom end of the support shaft 15 away from the mounting seat 14 carries a circumferential face 17 of slightly larger radial size than the shaft. The three support frames 16 are mounted on the support shaft 15 from top to bottom, and the horizontal projections of every two opened support frames 16 form an angle.
  • When erecting the bracket, the circumferential face 17 is first set on a relatively flat small patch of the uneven ground, and the bracket is then leveled by opening and adjusting the positions of the three telescopic support frames. Thus even uneven ground allows the bracket to be erected level quickly, adapting to all kinds of terrain and keeping the mounting seat horizontal.
  • More advantageously, the support shaft 15 of this embodiment is itself a telescopic rod, comprising a tube body 151 connected to the mounting seat 14 and a rod body 152 partly retractable into the tube body 151. The portion of the rod body 152 that extends into the tube comprises a first segment 1521, a second segment 1522, a third segment 1523, and a fourth segment 1524 hinged in sequence, the first segment 1521 being connected to the tube body 151.
  • The end of the first segment 1521 near the second segment 1522 carries a mounting slot 18 in which a locking member 19 is hinged, and the end of the second segment 1522 near the first segment 1521 carries a locking hole 20 that detachably engages the locking member 19. Likewise, the end of the second segment 1522 near the third segment 1523 carries a mounting slot 18 with a hinged locking member 19, and the end of the third segment 1523 near the second segment 1522 carries a matching locking hole 20; the end of the third segment 1523 near the fourth segment 1524 carries a mounting slot 18 with a hinged locking member 19, and the end of the fourth segment 1524 near the third segment 1523 carries a matching locking hole 20.
  • Each locking member can be hidden in its mounting slot and, when needed, rotated out and snapped onto the locking hole. Specifically, the locking member 19 may be a strip with a protrusion sized to fit the locking hole; pressing the protrusion into the hole fixes the positions of the two adjacent segments (for example, the first segment and the second segment) and prevents relative rotation. Through the cooperation of the first segment 1521, the second segment 1522, the third segment 1523, and the fourth segment 1524, this part can be bent into a fixed shape whose segments are held in position by the locking members 19. Soft material can also be provided at the bottom of this structure; when the bracket is to serve as a shoulder rig, this part rests on the user's shoulder, and holding one of the three support frames as the handgrip allows quick switching from a standing bracket to a shoulder rig.
  • The Applicant also found that most telescopic support frames are extended by hand to adjust their length; the distance is uncontrollable and largely random, so adjustment is often inconvenient, especially when the extension must be fine-tuned. The Applicant therefore also optimized the structure of the support frames 16. As shown in FIG. 12, the bottom end of each support frame 16 of this embodiment is further connected with a distance-adjusting device 21, which includes a bearing ring 211 mounted at the bottom of the support frame 16, a rotating ring 212 connected to the bearing ring 211, a tube body 213, a screw 214, a threaded sleeve 215, and a support rod 216.
  • One end of the tube body 213 carries a plug 217; part of the screw 214 is installed in the tube body 213 through the plug 217, which carries an internal thread matching the screw 214, and the other part of the screw 214 is connected to the rotating ring 212. One end of the threaded sleeve 215 is mounted in the tube body 213 and is threaded onto the screw 214; the other end of the threaded sleeve 215 extends out of the tube body 213 and is fixed to the support rod 216. The inner wall of the tube body 213 carries a protrusion 218, and the outer side wall of the threaded sleeve 215 carries, along its length, a slideway 219 matching the protrusion. The tube body 213 comprises adjacent first and second portions 2131, 2132, the inner diameter of the first portion 2131 being smaller than that of the second portion 2132; the plug 217 sits on the outer end of the second portion 2132, and the end of the threaded sleeve 215 near the screw 214 carries a limit end 2151 whose outer diameter exceeds the inner diameter of the first portion.
  • Turning the rotating ring 212 rotates the screw 214 inside the tube body 213 and transmits the rotation tendency to the threaded sleeve 215, which, prevented from rotating by the cooperation of the protrusion 218 and the slideway 219, converts the rotational force into outward linear movement, driving the support rod 216 and realizing fine length adjustment at the bottom of the support frame. This makes it easy for the user to level the bracket and its mounting seat, providing a good foundation for the subsequent shooting work.
  • For example, a machine-readable medium includes read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash storage media, and electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals). The computer software product includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods described in the various embodiments or in portions of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present invention provide an image processing method, an image processing apparatus, and an electronic device. The method includes: recognizing a face image in a picture and acquiring feature information of the face image; generating, according to the feature information and a standard 3D face model, a first 3D face image matching the face image; performing lighting processing on the first 3D face image to obtain a second 3D face image; and performing light-and-shadow processing on the face image according to the first 3D face image and the second 3D face image. With the above method, apparatus, and electronic device, on the basis of brightened lighting, the facial features of a photographed portrait become more three-dimensional and layered and the eyes more expressive, achieving the photographic effect of a single-lens reflex camera.

Description

Image processing method, apparatus, and electronic device — Technical Field
The present invention relates to the field of image processing technology, and in particular to an image processing method, an image processing apparatus, and an electronic device.
Background
With the development of technology, mobile terminals play an ever broader role in people's lives. For example, people can use a mobile terminal's camera and beauty software to take photos. However, in the course of implementing the present invention, the inventor found that face photos taken by mobile terminals are flat, with no emphasis on the key facial parts (such as the nose and eye sockets), so the facial features in the photographed portrait look flat and lack depth, and the user experience is poor.
Summary
The image processing method, apparatus, and electronic device provided by the embodiments of the present invention are intended to solve at least the above problems in the related art.
One aspect of the embodiments of the present invention provides an image processing method, including:
recognizing a face image in a picture and acquiring feature information of the face image; generating, according to the feature information and a standard 3D face model, a first 3D face image matching the face image; performing lighting processing on the first 3D face image to obtain a second 3D face image;
performing light-and-shadow processing on the face image according to the first 3D face image and the second 3D face image.
Further, performing light-and-shadow processing on the face image according to the first 3D face image and the second 3D face image includes: acquiring a target point on the face image, a corresponding first texture value on the first 3D face image, and a corresponding second texture value on the second 3D face image; and performing light-and-shadow processing on the face image according to the first texture value and the second texture value of the target point.
Further, performing light-and-shadow processing on the face image according to the first texture value and the second texture value of the target point includes: calculating the difference between the second texture value and the first texture value; and adjusting the texture value of the target point according to the difference.
Further, the feature information includes the size and rotation angle of the face image, and recognizing the face image in the picture includes: recognizing face key points in the picture using a face recognition model to obtain the coordinate positions of the key points; and determining the size and rotation angle of the face image according to the coordinate positions of the key points.
Further, the method also includes: determining the eye-film region of the face image and lighting rendering parameters; and lighting the eye-film region according to the lighting rendering parameters.
Further,
in the image processing method the picture is acquired by a camera of the electronic apparatus.
The camera includes a lens, an autofocus voice coil motor, an image sensor, and a micro memory-alloy optical image stabilizer. The lens is fixed on the autofocus voice coil motor, the image sensor converts the optical scene captured by the lens into image data, and the autofocus voice coil motor is mounted on the micro memory-alloy optical image stabilizer. A processor of the electronic apparatus drives the micro memory-alloy optical image stabilizer according to lens-shake data detected by a gyroscope, realizing lens-shake compensation.
The micro memory-alloy optical image stabilizer includes a movable plate and a base plate. The autofocus voice coil motor is mounted on the movable plate, the base plate is larger than the movable plate, the movable plate is mounted on the base plate, and a plurality of movable supports is arranged between them. The base plate has four side walls, the middle of each carrying a notch fitted with a microswitch whose movable member can open or close the notch on instruction from the processor. The side of the movable member facing the movable plate carries a strip-shaped electrical contact laid along the width of the member, and the base plate carries a temperature-control circuit connected to the contact; the processor opens and closes this circuit according to the lens-shake direction detected by the gyroscope. The middle of each of the four sides of the movable plate carries a shape-memory alloy wire, one end of which is fixed to the movable plate while the other end is in sliding fit with the electrical contact. Elastic members are arranged between each inner side wall of the base plate and the movable plate. When one temperature-control circuit on the base plate is switched on, the shape-memory alloy wire connected to that circuit elongates; at the same time, the movable member of the microswitch away from that wire opens its notch, the elastic member on the same side as that wire contracts, and the elastic member away from that wire extends.
Further, the elastic members are springs.
Further, the electronic apparatus is a video camera mounted on a bracket, the bracket including a mounting seat, a support shaft, and three support frames hinged on the support shaft.
The mounting seat includes a first mounting plate and a second mounting plate perpendicular to each other, either of which can carry the camera. The support shaft is mounted vertically on the bottom face of the first mounting plate, and the bottom end of the support shaft away from the mounting seat carries a circumferential face of larger radial size than the shaft. The three support frames are mounted on the support shaft from top to bottom, and the horizontal projections of every two opened support frames form an angle. The support shaft is a telescopic rod comprising a tube body connected to the mounting seat and a rod body partly retractable into the tube body. The portion of the rod body extending into the tube comprises a first segment, a second segment, a third segment, and a fourth segment hinged in sequence, the first segment being connected to the tube body. The end of the first segment near the second segment carries a mounting slot in which a locking member is hinged, and the end of the second segment near the first segment carries a locking hole detachably engaging the locking member; the end of the second segment near the third segment carries a mounting slot with a hinged locking member, and the end of the third segment near the second segment carries a matching locking hole; the end of the third segment near the fourth segment carries a mounting slot with a hinged locking member, and the end of the fourth segment near the third segment carries a matching locking hole.
Further, the bottom end of each support frame is also connected with a distance-adjusting device, which includes a bearing ring mounted at the bottom of the support frame, a rotating ring connected to the bearing ring, a tube body, a screw, a threaded sleeve, and a support rod. One end of the tube body carries a plug; part of the screw is installed in the tube body through the plug, which carries an internal thread matching the screw, and the other part of the screw is connected to the rotating ring. One end of the threaded sleeve is mounted in the tube body and threaded onto the screw; the other end of the threaded sleeve extends out of the tube body and is fixed to the support rod. The inner wall of the tube body carries a protrusion, and the outer side wall of the threaded sleeve carries, along its length, a slideway matching the protrusion. The tube body comprises adjacent first and second portions, the inner diameter of the first portion being smaller than that of the second portion; the plug sits on the outer end of the second portion, and the end of the threaded sleeve near the screw carries a limit end whose outer diameter exceeds the inner diameter of the first portion.
Another aspect of the embodiments of the present invention provides an image processing apparatus, including:
a recognition module configured to recognize a face image in a picture and acquire feature information of the face image; a generation module configured to generate, according to the feature information and a standard 3D face model, a first 3D face image matching the face image; a first lighting module configured to perform lighting processing on the first 3D face image to obtain a second 3D face image; and a processing module configured to perform light-and-shadow processing on the face image according to the first 3D face image and the second 3D face image.
Further, the processing module includes:
an acquisition submodule configured to acquire a target point on the image, a corresponding first texture value on the first 3D face image, and a corresponding second texture value on the second 3D face image; and
a processing submodule configured to perform light-and-shadow processing on the face image according to the first texture value and the second texture value of the target point.
Further, the processing submodule is specifically configured to calculate the difference between the second texture value and the first texture value, and adjust the texture value of the target point according to the difference.
Further, the feature information includes the size and rotation angle of the face image, and the recognition module is specifically configured to recognize face key points in the picture using a face recognition model to obtain the coordinate positions of the key points, and determine the size and rotation angle of the face image according to the coordinate positions of the key points.
Further, the apparatus also includes: a determination module configured to determine the eye-film region of the face image and lighting rendering parameters; and a second lighting module configured to light the eye-film region according to the lighting rendering parameters.
Yet another aspect of the embodiments of the present invention provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform any of the image processing methods of the embodiments of the present invention described above.
As can be seen from the above technical solutions, the image processing method, apparatus, and electronic device provided by the embodiments of the present invention can, on the basis of brightened lighting, make the facial features of a photographed portrait more three-dimensional and layered and the eyes more expressive, achieving the photographic effect of a single-lens reflex camera.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in the embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them.
FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present invention;
FIG. 2 is a detailed flowchart of step S104 provided by an embodiment of the present invention;
FIG. 3 is a flowchart of an image processing method provided by an embodiment of the present invention;
FIG. 4 is a structural diagram of an image processing apparatus provided by an embodiment of the present invention;
FIG. 5 is a structural diagram of an image processing apparatus provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of the hardware structure of an electronic device for performing the image processing method provided by the method embodiments of the present invention;
FIG. 7 is a structural diagram of a camera provided by an embodiment of the present invention;
FIG. 8 is a structural diagram of a micro memory-alloy optical image stabilizer provided by an embodiment of the present invention;
FIG. 9 is a structural diagram of one working state of the micro memory-alloy optical image stabilizer provided by an embodiment of the present invention;
FIG. 10 is a structural diagram of a bracket provided by an embodiment of the present invention;
FIG. 11 is a structural diagram of a support shaft provided by an embodiment of the present invention;
FIG. 12 is a structural diagram of a distance-adjusting device provided by an embodiment of the present invention.
In the figures: 1. lens; 2. autofocus voice coil motor; 3. image sensor; 4. micro memory-alloy optical image stabilizer; 5. movable plate; 6. base plate; 7. movable support; 8. notch; 9. microswitch; 10. movable member; 11. electrical contact; 12. shape-memory alloy wire; 13. elastic member; 14. mounting seat; 141. first mounting plate; 142. second mounting plate; 15. support shaft; 151. tube body; 152. rod body; 1521. first segment; 1522. second segment; 1523. third segment; 1524. fourth segment; 16. support frame; 17. circumferential face; 18. mounting slot; 19. locking member; 20. locking hole; 21. distance-adjusting device; 211. bearing ring; 212. rotating ring; 213. tube body; 2131. first portion; 2132. second portion; 214. screw; 215. threaded sleeve; 216. support rod; 217. plug; 218. protrusion; 219. slideway; 100. recognition module; 200. generation module; 300. first lighting module; 400. processing module; 500. determination module; 600. second lighting module; 610. processor; 620. memory; 630. input device; 640. output device.
Detailed Description
To help those skilled in the art better understand the technical solutions in the embodiments of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings of the embodiments. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention shall fall within the scope of protection of the embodiments of the present invention.
The execution subject of the embodiments of the present invention is an electronic device, including but not limited to a video camera, a mobile phone, a tablet computer, a notebook computer, a desktop computer with a camera, and the like. Some implementations of the present invention are described in detail below with reference to the drawings. Where there is no conflict, the following embodiments and the features in them may be combined with one another. FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present invention. As shown in FIG. 1, the image processing method provided by the embodiment of the present invention includes:
S101: Recognize the face image in the picture and acquire feature information of the face image.
Usually a picture also contains non-face content, such as the background environment, so the face image in the picture must be recognized. In this step, the image may come from a picture captured in real time or from a picture saved locally on the terminal. The feature information of the face image includes but is not limited to the size of the face image and the rotation angle of the face image.
Many methods for recognizing face images exist; for example, the extent of the face image can be recognized from the edge information and/or color information of the picture. In this embodiment, predefined key points are recognized, and the size and rotation angle of the face image are determined from the positions of the detected key points. The eyebrows, eyes, nose, face contour, and mouth in the face image are each composed of several of these key points; that is, the positions and contours of the eyebrows, eyes, nose, face, and mouth in the face image can be determined from the coordinate positions of the key points.
Specifically, positive and negative samples for face-image key-point recognition can be prepared in advance and used to train a face recognition model. The picture to be recognized is input into the face recognition model, which outputs the key-point coordinates of the face image in the picture. The coordinate system may take the lower-left corner of the picture as the origin, rightward as the positive X direction, and upward as the positive Y direction, with coordinate values measured in pixels. From the above key-point coordinates and typical portrait proportions, a preliminary extent of the face image is derived; within this extent, a watershed image segmentation algorithm finds the additional coordinate information of the forehead and chin, and integrating all the obtained key-point coordinates yields a complete face image. In addition, the model can output the depth information of the face.
Further, two boundary lines perpendicular to the X axis can be determined from the coordinate positions of the left and right ears, and two boundary lines perpendicular to the Y axis from the coordinate positions of the forehead and chin; the rectangle enclosed by these four lines is the size of the face image.
Further, the rotation angle of the face image can be obtained by a face pose estimation algorithm. Face pose estimation algorithms fall into model-based, appearance-based, and classification-based methods, of which model-based methods give the best results because the pose they produce is continuous. In training the model, face sample images are first collected and labeled with key points and rotation angles; the face sample images and their corresponding face rotation angles are then fed into a convolutional neural network for training, which outputs preset face rotation angles or rotation-angle intervals, yielding a face rotation-angle model.
When rotation angles are close, the light-and-shadow effects on the face are also similar, so to improve output efficiency a precise rotation angle need not be output; only the interval containing the rotation angle may be output. For example, the front and profile of the face can be evenly divided by rotation angle into two or more intervals, each interval corresponding to one face pose type. In this step, the key-point coordinates are input into the face rotation-angle model, which outputs the corresponding rotation-angle interval.
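The bounding-box computation in step S101 — two lines perpendicular to the X axis through the ears and two perpendicular to the Y axis through the forehead and chin — can be sketched as follows. The landmark names are hypothetical; a real keypoint model would output its own indexing scheme.

```python
def face_box_from_keypoints(points):
    """Compute the face-image size described in step S101.

    `points` maps hypothetical landmark names to (x, y) pixel
    coordinates, with the origin at the picture's lower-left corner
    as in the text. The ear x-coordinates give the two boundary
    lines perpendicular to the X axis; the forehead and chin
    y-coordinates give the two lines perpendicular to the Y axis.
    """
    left = points["left_ear"][0]
    right = points["right_ear"][0]
    top = points["forehead"][1]
    bottom = points["chin"][1]
    width = abs(right - left)     # distance between the vertical lines
    height = abs(top - bottom)    # distance between the horizontal lines
    return width, height

pts = {"left_ear": (40, 120), "right_ear": (160, 118),
       "forehead": (100, 200), "chin": (102, 60)}
print(face_box_from_keypoints(pts))  # (120, 140)
```

The enclosed rectangle is what the later model-scaling step matches the standard 3D face model against.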
S102: Generate, according to the feature information and a standard 3D face model, a first 3D face image matching the face image.
Specifically, a face model data file can be loaded in advance to build the standard 3D face model. First, the standard 3D face model is scaled as a whole according to the size of the face image in the feature information, so that the distance between the topmost forehead vertex and the bottommost chin vertex equals the distance between the two boundary lines perpendicular to the Y axis from step S101, and the distance between the two ears equals the distance between the two boundary lines perpendicular to the X axis from step S101. Second, from the coordinate positions of the key points obtained in step S101, the positions and contours of the eyebrows, eyes, nose, mouth, and face in the standard 3D face model are determined and made consistent with their positions and contours in the face image. Third, the standard 3D face model is rotated according to the rotation angle calculated in step S101 so that the poses of the two match. Finally, according to the coordinate positions of the key points, the face image recognized in step S101 is used to complete the mapping, and the model is texture-mapped according to the face information, generating the first 3D face image matching the face image. Optionally, the texture may be the RGB primary-color values of each pixel.
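The global scale-and-rotate part of step S102 can be sketched on a small vertex array. Per-landmark deformation and texture mapping are omitted, and the anisotropic scaling (x by the box width, y and z by the box height) is an illustrative assumption rather than the patent's exact fitting procedure.

```python
import numpy as np

def fit_standard_model(vertices, box_w, box_h, yaw_deg):
    """Roughly align a standard 3D face mesh to a detected face.

    `vertices` is an (N, 3) array. The mesh is scaled so its width
    and height match the detected bounding box, then rotated about
    the vertical (Y) axis by the estimated yaw angle.
    """
    v = np.asarray(vertices, dtype=float)
    v = v - v.mean(axis=0)                      # centre the mesh
    w = v[:, 0].max() - v[:, 0].min()           # current model width
    h = v[:, 1].max() - v[:, 1].min()           # current model height
    v = v * np.array([box_w / w, box_h / h, box_h / h])
    a = np.radians(yaw_deg)                     # rotation about Y axis
    rot_y = np.array([[np.cos(a), 0.0, np.sin(a)],
                      [0.0, 1.0, 0.0],
                      [-np.sin(a), 0.0, np.cos(a)]])
    return v @ rot_y.T
```

After this global step, a real pipeline would still move individual model landmarks onto the detected key points before texture mapping.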
S103: Perform lighting processing on the first 3D face image to obtain a second 3D face image.
In this embodiment, lighting rendering parameters can be configured and combined with a lighting model to light the first 3D face image: natural light is cast on the first 3D face image, simulating the highlights and shadows that real light produces on a face.
Specifically, the lighting rendering parameters include but are not limited to the light-source position, the warmth of the light, and the reflectivity of the material. They may be adjusted according to the background environment information and facial expression in the picture to be recognized, or customized by the user; the present invention places no limitation on this.
In this embodiment, the Phong lighting model is used to light the first 3D face image. According to the Phong model, light in 3D space consists mainly of three components: ambient, diffuse, and specular lighting. Since specular light is generally associated with metallic reflection and the embodiments of the present invention mainly concern photographing faces, only the ambient and diffuse components are used in the calculation, with the following formulas:
ambient color = ambient strength * light color
diffuse color = diffuse * light color
result color = diffuse color + ambient color
where light color is the color of the light, diffuse is the diffuse coefficient of the material and light source, and ambient strength is the ambient light intensity.
Optionally, negative light and the correction coefficients gamma and exposure can also be introduced to correct the color of the light. Positive light is the normal light of the Phong model, and the negative-light light color is the opposite of the positive-light light color, which yields a shadow effect. Gamma corrects abnormal lighting in particular environments, and exposure corrects over-exposure; both coefficients must be tuned experimentally. The formulas are:
result light color = pow(light color, vec3(1.0 / gamma))
result light color = vec3(1.0) - exp(-light color * exposure)
Optionally, to speed up this step, the above lighting processing can be performed by the terminal's GPU.
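The ambient/diffuse formulas and the optional gamma and exposure corrections above can be combined into a small per-channel shading sketch. Note one assumption: the max(dot(N, L), 0) term of a full Phong diffuse computation is folded into `diffuse_coef` here, following the simplified formulas in the text.

```python
import numpy as np

def shade(light_color, ambient_strength, diffuse_coef,
          gamma=None, exposure=None):
    """Ambient + diffuse shading per the formulas above.

    When given, gamma and exposure correct the light color first:
      result = pow(light, 1/gamma)
      result = 1 - exp(-light * exposure)
    """
    lc = np.asarray(light_color, dtype=float)
    if gamma is not None:
        lc = np.power(lc, 1.0 / gamma)       # gamma correction
    if exposure is not None:
        lc = 1.0 - np.exp(-lc * exposure)    # exposure (over-exposure fix)
    ambient = ambient_strength * lc          # ambient color
    diffuse = diffuse_coef * lc              # diffuse color
    return np.clip(ambient + diffuse, 0.0, 1.0)  # result color
```

With white light, ambient strength 0.2, and diffuse coefficient 0.5, every channel comes out at 0.7 before any correction, matching a direct evaluation of the formulas.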
S104: Perform light-and-shadow processing on the face image according to the first 3D face image and the second 3D face image.
In the second 3D face image lit with natural light in step S103, the places the light reaches are brightened and the places it cannot reach fall into shadow. For a pixel at the same coordinate position there is therefore a texture difference between the second 3D face image and the first 3D face image. Strengthening or weakening the texture at the corresponding position of that pixel in the face image by this difference gives the face image the same lighting effect as the second 3D face image. In this step, the face image can be processed according to the texture differences between the first 3D image and the second 3D image.
Specifically, as shown in FIG. 2, this step includes:
S1041: Acquire a target point on the face image, its corresponding first texture value on the first 3D face image, and its corresponding second texture value on the second 3D face image.
The target points may be the key points of the preceding steps, or positions that help make the facial features more three-dimensional (such as the nose wings, cheekbones, chin, and forehead); the present invention places no limitation here. Once a target point is determined, its coordinate position on the face image is used to find the corresponding first target point in the first 3D face image and the corresponding second target point in the second 3D face image. As stated above, the texture value may be the RGB value of a pixel, so the RGB value of the first target point and the RGB value of the second target point are acquired respectively.
S1042: Perform light-and-shadow processing on the face image according to the first texture value and the second texture value of the target point.
Optionally, the light-and-shadow processing of the face image includes:
calculating the difference between the second RGB value and the first RGB value, and adding this difference (which may be positive or negative) to the original RGB value at the coordinate position of the target point in the face image; the result is the RGB value of that target point after the face image is lit. After all target points on the face image have been adjusted according to steps S1041 and S1042, the light-and-shadow processing of the face image is complete, yielding a face image rendered with a light source and shadows.
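The S1041/S1042 adjustment — add the per-point difference between the lit and unlit 3D renderings to the original face pixels — can be sketched as below. The array shapes and the 0–255 clamp are assumptions; the patent only specifies adding the (possibly negative) difference.

```python
import numpy as np

def relight(face_rgb, first_3d_rgb, second_3d_rgb):
    """Apply the per-point texture difference between the lit (second)
    and unlit (first) 3D renderings to the original face image.

    All three inputs hold RGB values for the same target points.
    """
    face = np.asarray(face_rgb, dtype=np.int32)
    diff = (np.asarray(second_3d_rgb, dtype=np.int32)
            - np.asarray(first_3d_rgb, dtype=np.int32))   # may be negative
    return np.clip(face + diff, 0, 255).astype(np.uint8)  # clamp to 8-bit

out = relight([[100, 100, 100]], [[90, 90, 90]], [[120, 80, 90]])
print(out.tolist())  # [[130, 90, 100]]
```

A positive difference brightens the point (light reached it in the lit rendering); a negative one darkens it (it fell into shadow), which is exactly the lighten/shadow behavior described above.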
The image processing method provided by the embodiment of the present invention can, on the basis of brightened lighting, make the facial features of a photographed portrait more three-dimensional and layered, achieving the photographic effect of a single-lens reflex camera.
FIG. 3 is a flowchart of an image processing method provided by an embodiment of the present invention. As shown in FIG. 3, this embodiment is a concrete implementation of the embodiments shown in FIGS. 1 and 2, so the specific implementation methods and beneficial effects of the steps in those embodiments are not repeated. The image processing method provided by this embodiment of the present invention specifically includes:
S301: Recognize the face image in the picture and acquire feature information of the face image.
S302: Generate, according to the feature information and a standard 3D face model, a first 3D face image matching the face image.
S303: Perform lighting processing on the first 3D face image to obtain a second 3D face image.
S304: Perform light-and-shadow processing on the face image according to the first 3D face image and the second 3D face image.
S305: Determine the eye-film region of the face image and the lighting rendering parameters.
In this step, the eye-film region inside the eyeball of the face image (i.e., the dark iris region) can be obtained by a gaze-tracking algorithm, and the light-source position, the warmth of the light, the material reflectivity, and other parameters are determined from the lighting rendering parameters configured in step S103.
S306: Light the eye-film region according to the lighting rendering parameters.
Specifically, natural light in the corresponding lighting direction is simulated according to the lighting rendering parameters, producing a catchlight in the eye-film region and making the eyes more expressive.
The image processing method provided by this embodiment of the present invention can, on the basis of brightened lighting, make the facial features of a photographed portrait more three-dimensional and layered and the eyes more expressive, achieving the photographic effect of a single-lens reflex camera.
FIG. 4 is a structural diagram of an image processing apparatus provided by an embodiment of the present invention. As shown in FIG. 4, the apparatus specifically includes: a recognition module 100, a generation module 200, a first lighting module 300, and a processing module 400, wherein
the recognition module 100 is configured to recognize the face image in the picture and acquire feature information of the face image; the generation module 200 is configured to generate, according to the feature information and a standard 3D face model, a first 3D face image matching the face image; the first lighting module 300 is configured to perform lighting processing on the first 3D face image to obtain a second 3D face image; and the processing module 400 is configured to perform light-and-shadow processing on the face image according to the first 3D face image and the second 3D face image.
Optionally, the processing module 400 includes: an acquisition submodule configured to acquire a target point on the image, its corresponding first texture value on the first 3D face image, and its corresponding second texture value on the second 3D face image; and a processing submodule configured to perform light-and-shadow processing on the face image according to the first texture value and the second texture value of the target point.
Optionally, the processing submodule is specifically configured to calculate the difference between the second texture value and the first texture value, and adjust the texture value of the target point according to the difference.
Optionally, the feature information includes the size and rotation angle of the face image, and the recognition module 100 is specifically configured to recognize face key points in the picture using a face recognition model to obtain the coordinate positions of the key points, and determine the size and rotation angle of the face image according to the coordinate positions of the key points.
The image processing apparatus provided by this embodiment of the present invention is specifically configured to perform the methods provided by the embodiments shown in FIGS. 1 and 2; its implementation principles, methods, and functional uses are similar to those of the embodiments shown in FIGS. 1 and 2 and are not repeated here.
FIG. 5 is a structural diagram of an image processing apparatus provided by an embodiment of the present invention. As shown in FIG. 5, the apparatus specifically includes: a recognition module 100, a generation module 200, a first lighting module 300, a processing module 400, a determination module 500, and a second lighting module 600.
The recognition module 100 is configured to recognize the face image in the picture and acquire feature information of the face image; the generation module 200 is configured to generate, according to the feature information and a standard 3D face model, a first 3D face image matching the face image; the first lighting module 300 is configured to perform lighting processing on the first 3D face image to obtain a second 3D face image; and the processing module 400 is configured to perform light-and-shadow processing on the face image according to the first 3D face image and the second 3D face image. The determination module 500 is configured to determine the eye-film region of the face image and the lighting rendering parameters; the second lighting module 600 is configured to light the eye-film region according to the lighting rendering parameters.
The image processing apparatus provided by this embodiment of the present invention is specifically configured to perform the method provided by the embodiment shown in FIG. 3; its implementation principles, methods, and functional uses are similar to those of the embodiment shown in FIG. 3 and are not repeated here.
The image processing apparatus of the above embodiments of the present invention may be set up independently in the electronic device as one of its software or hardware functional units, or may be integrated in the processor as one of its functional modules, to perform the image processing method of the embodiments of the present invention.
FIG. 6 is a schematic diagram of the hardware structure of an electronic device for performing the image processing method provided by the method embodiments of the present invention. As shown in FIG. 6, the electronic device includes:
one or more processors 610 and a memory 620; one processor 610 is taken as the example in FIG. 6.
The device performing the image processing method may further include an input device 630 and an output device 640.
The processor 610, the memory 620, the input device 630, and the output device 640 may be connected by a bus or in other ways; connection by a bus is taken as the example in FIG. 6.
As a non-volatile computer-readable storage medium, the memory 620 can store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the image processing method in the embodiments of the present invention. By running the non-volatile software programs, instructions, and modules stored in the memory 620, the processor 610 executes the various functional applications and data processing of the server, i.e., implements the image processing method.
The memory 620 may include a program storage area and a data storage area, where the program storage area can store the operating system and the application required by at least one function, and the data storage area can store data created by the use of the image processing apparatus provided by the embodiments of the present invention, and the like. In addition, the memory 620 may include high-speed random-access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 620 optionally includes memory set remotely relative to the processor 610; such remote memory may be connected to the image processing apparatus through a network. Examples of such networks include but are not limited to the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 630 can receive input numeric or character information and generate key-signal inputs related to user settings and function control of the image processing apparatus. The input device 630 may include devices such as a press module.
The one or more modules are stored in the memory 620 and, when executed by the one or more processors 610, perform the image processing method.
The electronic device of the embodiments of the present invention exists in various forms, including but not limited to:
(1) Mobile communication devices: these are characterized by mobile communication capability, with voice and data communication as the main goal. Such terminals include smartphones (such as the iPhone), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices: these belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access. Such terminals include PDA, MID, and UMPC devices, such as the iPad.
(3) Portable entertainment devices: these can display and play multimedia content. Such devices include audio and video players (such as the iPod), handheld game consoles, e-book readers, smart toys, and portable car navigation devices.
(4) Other electronic apparatuses with data-interaction or photo/entertainment functions, such as digital video cameras and digital single-lens reflex cameras.
The apparatus embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement this without inventive effort.
Embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by an electronic device, cause the electronic device to perform the image processing method in any of the above method embodiments.
Embodiments of the present invention provide a computer program product, the computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by an electronic device, cause the electronic device to perform the image processing method in any of the above method embodiments.
In another embodiment, to facilitate the picture processing of the above embodiments, a camera with better anti-shake performance is provided for the electronic apparatus. Pictures obtained by this camera are clearer than those from an ordinary camera and better meet the needs of beauty-app users; in particular, when the pictures acquired by this camera are used in the image processing method of the above embodiments, the effect is even better.
Specifically, an existing electronic-apparatus camera (the electronic apparatus being a mobile phone, a video camera, or the like) includes a lens 1, an autofocus voice coil motor 2, and an image sensor 3, all well known to those skilled in the art and therefore not described further here. A micro memory-alloy optical anti-shake device is adopted because most existing anti-shake devices drive the lens with the Lorentz force generated by an energized coil in a magnetic field; optical image stabilization requires driving the lens in at least two directions, which means arranging multiple coils, challenges the miniaturization of the overall structure, and is easily disturbed by external magnetic fields, degrading the anti-shake effect. Some prior-art solutions instead stretch and shorten a memory-alloy wire through temperature changes, pulling the autofocus voice coil motor to realize lens-shake compensation: the control chip of the micro memory-alloy optical anti-shake actuator varies the drive signal to change the temperature of the memory-alloy wire, thereby controlling its elongation and contraction, and the position and travel of the actuator are calculated from the resistance of the memory-alloy wire. After the actuator has moved to the designated position, the resistance of the memory-alloy wire at that moment is fed back, and by comparing the deviation of this resistance from the target value, the movement deviation of the actuator can be corrected. However, the Applicant found that, because shake is random and uncertain, this structure alone cannot accurately compensate the lens when shake occurs repeatedly: the shape-memory alloy needs time both to heat up and to cool down. When shake occurs in a first direction, the above solution can compensate the lens for it, but when a subsequent shake in a second direction occurs, the memory-alloy wire cannot deform instantaneously, so compensation is easily delayed, and lens-shake compensation for repeated shakes and continuous shakes in different directions cannot be achieved precisely. This degrades the quality of the acquired pictures, so the camera structure needs improvement.
As shown in FIG. 7, the camera of this embodiment includes a lens 1, an autofocus voice coil motor 2, an image sensor 3, and a micro memory-alloy optical image stabilizer 4. The lens 1 is fixed on the autofocus voice coil motor 2, the image sensor 3 transmits the image acquired through the lens 1 to the recognition module 100, and the autofocus voice coil motor 2 is mounted on the micro memory-alloy optical image stabilizer 4. The processor inside the electronic apparatus drives the micro memory-alloy optical image stabilizer 4 according to the lens shake detected by a gyroscope (not shown) inside the electronic apparatus, realizing lens-shake compensation.
With reference to FIG. 8, the improvements to the micro memory-alloy optical image stabilizer are introduced as follows:
The micro memory-alloy optical image stabilizer includes a movable plate 5 and a base plate 6, both rectangular plate-shaped members. The autofocus voice coil motor 2 is mounted on the movable plate 5, and the base plate 6 is larger than the movable plate 5, which is mounted on it with a plurality of movable supports 7 in between — specifically, balls seated in grooves at the four corners of the base plate 6, easing the movement of the movable plate 5 on the base plate 6. The base plate 6 has four side walls, the middle of each carrying a notch 8 fitted with a microswitch 9. The movable member 10 of each microswitch 9 can open or close its notch on instruction from the processing module. The side of each movable member 10 facing the movable plate 5 carries a strip-shaped electrical contact 11 laid along the width of the member, and the base plate 6 carries a temperature-control circuit (not shown) connected to the contact 11; the processing module opens and closes this circuit according to the lens-shake direction detected by the gyroscope. The middle of each of the four sides of the movable plate 5 carries a shape-memory alloy wire 12, one end of which is fixed to the movable plate 5 while the other end is in sliding fit with the electrical contact 11. Elastic members 13 for resetting — preferably miniature springs in this embodiment — are arranged between each inner side wall of the base plate 6 and the movable plate 5.
The working process of the micro memory-alloy optical image stabilizer of this embodiment is described in detail below with reference to the above structure, taking two lens shakes in opposite directions as an example. When the lens shakes in a first direction, the gyroscope feeds the detected shake direction and distance back to the processor, which calculates the elongation of the shape-memory alloy wire needed to compensate the shake and drives the corresponding temperature-control circuit to heat that wire. The wire elongates and drives the movable plate in the direction that compensates the first-direction shake. Meanwhile the wire symmetrically opposite to it does not change, but the movable member connected with that opposite wire opens its notch, so that the opposite wire can protrude through the notch as the movable plate drags it; at this point the elastic members near the two wires are respectively stretched and compressed (as shown in FIG. 9). After the actuator has moved to the designated position, the resistance of the elongated wire is fed back, and by comparing the deviation of this resistance from the target value, the movement deviation of the actuator can be corrected. When the second shake occurs, the processor first closes the notch via the movable member abutting the other wire and opens the movable member abutting the wire that is still elongated. The rotation of the member abutting the other wire pushes that wire back into place, the opening of the member abutting the elongated wire lets it protrude freely, and the elastic action of the two elastic members resets the movable plate rapidly. The processor then calculates the elongation needed to compensate the second shake and drives the corresponding temperature-control circuit to heat the other wire, which elongates and drives the movable plate in the direction that compensates the second-direction shake. Because the notch at the previously elongated wire is open, that wire does not hinder the other wire from driving the movable plate; and thanks to the opening speed of the movable members and the resetting action of the springs, the micro memory-alloy optical image stabilizer of this embodiment can compensate accurately when shake occurs repeatedly, far outperforming prior-art micro memory-alloy optical anti-shake devices.
The above covers only two simple shakes. When shake occurs more times, or is not a back-and-forth motion, two adjacent shape-memory alloy wires can be driven to elongate to compensate for it; the basic working process is the same as described above and is not repeated here. The detection and feedback of the shape-memory alloy resistance and of the gyroscope are prior art and are likewise not described here.
In another embodiment, the electronic apparatus is a video camera, which can be mounted on a camera bracket. However, the Applicant found during use that existing camera brackets have the following defects: 1. Existing brackets are all supported by a tripod, but on markedly uneven ground a tripod cannot guarantee that the bracket's mounting seat is level; the bracket shakes or tilts easily, which adversely affects shooting. 2. An existing bracket cannot serve as a shoulder-mounted camera rig; its structure and function are single-purpose, and a separate shoulder rig must be provided whenever shoulder shooting is required.
The Applicant therefore improved the bracket structure. As shown in FIGS. 10 and 11, the bracket of this embodiment includes a mounting seat 14, a support shaft 15, and three support frames 16 hinged on the support shaft. The mounting seat 14 includes a first mounting plate 141 and a second mounting plate 142 perpendicular to each other, either of which can carry the camera. The support shaft 15 is mounted vertically on the bottom face of the first mounting plate 141, and the bottom end of the support shaft 15 away from the mounting seat 14 carries a circumferential face 17 of slightly larger radial size than the shaft. The three support frames 16 are mounted on the support shaft 15 from top to bottom, and the horizontal projections of every two opened support frames 16 form an angle. When erecting the bracket with this structure, the circumferential face 17 is first set on a relatively flat small patch of the uneven ground, and the bracket is then leveled by opening and adjusting the positions of the three telescopic support frames; thus even uneven ground allows the bracket to be erected level quickly, adapting to all kinds of terrain and keeping the mounting seat horizontal.
More advantageously, the support shaft 15 of this embodiment is itself a telescopic rod, comprising a tube body 151 connected to the mounting seat 14 and a rod body 152 partly retractable into the tube body 151. The portion of the rod body 152 that extends into the tube comprises a first segment 1521, a second segment 1522, a third segment 1523, and a fourth segment 1524 hinged in sequence, the first segment 1521 being connected to the tube body 151. The end of the first segment 1521 near the second segment 1522 carries a mounting slot 18 in which a locking member 19 is hinged, and the end of the second segment 1522 near the first segment 1521 carries a locking hole 20 that detachably engages the locking member 19. Likewise, the end of the second segment 1522 near the third segment 1523 carries a mounting slot 18 with a hinged locking member 19, and the end of the third segment 1523 near the second segment 1522 carries a matching locking hole 20; the end of the third segment 1523 near the fourth segment 1524 carries a mounting slot 18 with a hinged locking member 19, and the end of the fourth segment 1524 near the third segment 1523 carries a matching locking hole 20. Each locking member can be hidden in its mounting slot and, when needed, rotated out and snapped onto the locking hole. Specifically, the locking member 19 may be a strip with a protrusion sized to fit the locking hole; pressing the protrusion into the hole fixes the positions of the two adjacent segments (for example, the first segment and the second segment) and prevents relative rotation. Through the cooperation of the first segment 1521, the second segment 1522, the third segment 1523, and the fourth segment 1524, this part can be bent to form a structure of the shape shown in the inline figure (PCTCN2018094071-appb-000001), with the relative positions of the segments fixed by the locking members 19. Soft material can also be provided at the bottom of this structure; when the bracket is to serve as a shoulder rig, this part rests on the user's shoulder, and holding one of the three support frames as the handgrip allows quick and convenient switching from a standing bracket to a shoulder rig.
In addition, the Applicant found that most telescopic support frames are extended by hand to adjust their length; the distance is uncontrollable and largely random, so adjustment is often inconvenient, especially when the extension must be fine-tuned. The Applicant therefore also optimized the structure of the support frames 16. As shown in FIG. 12, the bottom end of each support frame 16 of this embodiment is further connected with a distance-adjusting device 21, which includes a bearing ring 211 mounted at the bottom of the support frame 16, a rotating ring 212 connected to the bearing ring 211, a tube body 213, a screw 214, a threaded sleeve 215, and a support rod 216. One end of the tube body 213 carries a plug 217; part of the screw 214 is installed in the tube body 213 through the plug 217, which carries an internal thread matching the screw 214, and the other part of the screw 214 is connected to the rotating ring 212. One end of the threaded sleeve 215 is mounted in the tube body 213 and is threaded onto the screw 214; the other end of the threaded sleeve 215 extends out of the tube body 213 and is fixed to the support rod 216. The inner wall of the tube body 213 carries a protrusion 218, and the outer side wall of the threaded sleeve 215 carries, along its length, a slideway 219 matching the protrusion. The tube body 213 comprises adjacent first and second portions 2131, 2132, the inner diameter of the first portion 2131 being smaller than that of the second portion 2132; the plug 217 sits on the outer end of the second portion 2132, and the end of the threaded sleeve 215 near the screw 214 carries a limit end 2151 whose outer diameter exceeds the inner diameter of the first portion. Turning the rotating ring 212 rotates the screw 214 inside the tube body 213 and transmits the rotation tendency to the threaded sleeve 215, which, prevented from rotating by the cooperation of the protrusion 218 and the slideway 219, converts the rotational force into outward linear movement, driving the support rod 216 and realizing fine length adjustment at the bottom of the support frame. This makes it easy for the user to level the bracket and its mounting seat, providing a good foundation for the subsequent shooting work.
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到各实施方式可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件。基于这样的理解,上述技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可以存储在计算机可读存储介质中,所述计算机可读记录介质包括用于以计算机(例如计算机)可读的形式存储或传送信息的任何机制。例如,机器可读介质包括只读存储器(ROM)、随机存取存储器(RAM)、磁盘存储介质、光存储介质、闪速存储介质、电、光、声或其他形式的传播信号(例如,载波、红外信号、数字信号等)等,该计算机软件产品包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行各个实施例或者实施例的某些部分所述的方法。
Finally, it should be noted that the above embodiments are merely intended to illustrate, rather than limit, the technical solutions of the embodiments of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of the technical features therein; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. An image processing method, comprising:
    recognizing a face image in a picture, and acquiring feature information of the face image;
    generating, according to the feature information and a standard 3D face model, a first 3D face image matching the face image;
    performing lighting processing on the first 3D face image to obtain a second 3D face image;
    performing light-and-shadow processing on the face image according to the first 3D face image and the second 3D face image.
  2. The method according to claim 1, wherein the performing light-and-shadow processing on the face image according to the first 3D face image and the second 3D face image comprises:
    acquiring a target point on the face image, a corresponding first texture value on the first 3D face image, and a corresponding second texture value on the second 3D face image;
    performing light-and-shadow processing on the face image according to the first texture value and the second texture value of the target point.
  3. The method according to claim 2, wherein the performing light-and-shadow processing on the face image according to the first texture value and the second texture value of the target point comprises:
    calculating the difference between the second texture value and the first texture value;
    adjusting the texture value of the target point according to the difference.
  4. The method according to claim 1, wherein the feature information comprises a size and a rotation angle of the face image, and the recognizing a face image in a picture comprises:
    recognizing face key points in the picture by using a face recognition model, to obtain coordinate positions of the key points;
    determining the size and the rotation angle of the face image according to the coordinate positions of the key points.
  5. The method according to any one of claims 1-4, further comprising:
    determining an eye mask region of the face image and lighting rendering parameters;
    lighting the eye mask region according to the lighting rendering parameters.
  6. An image processing apparatus, comprising:
    a recognition module, configured to recognize a face image in a picture and acquire feature information of the face image;
    a generation module, configured to generate, according to the feature information and a standard 3D face model, a first 3D face image matching the face image;
    a first lighting module, configured to perform lighting processing on the first 3D face image to obtain a second 3D face image;
    a processing module, configured to perform light-and-shadow processing on the face image according to the first 3D face image and the second 3D face image.
  7. The apparatus according to claim 6, wherein the processing module comprises:
    an acquiring submodule, configured to acquire a target point on the image, a corresponding first texture value on the first 3D face image, and a corresponding second texture value on the second 3D face image;
    a processing submodule, configured to perform light-and-shadow processing on the face image according to the first texture value and the second texture value of the target point.
  8. The apparatus according to claim 7, wherein the processing submodule is specifically configured to calculate the difference between the second texture value and the first texture value, and adjust the texture value of the target point according to the difference.
  9. The apparatus according to claim 6, wherein the feature information comprises a size and a rotation angle of the face image, and the recognition module is specifically configured to recognize face key points in the picture by using a face recognition model, to obtain coordinate positions of the key points, and to determine the size and the rotation angle of the face image according to the coordinate positions of the key points.
  10. An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the image processing method according to any one of claims 1-5.
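The texture-difference relighting of claims 1-3 amounts to per-pixel arithmetic: the relit 3D render minus the original render gives a lighting delta that is added back onto the photograph. A minimal NumPy sketch of this idea (the function name, array shapes, and 8-bit clipping are illustrative assumptions, not the patented implementation):

```python
import numpy as np

def apply_light_shadow(face_img, render_unlit, render_lit):
    """Adjust each target point of the face image by the difference
    between its second (lit) and first (unlit) 3D texture values."""
    delta = render_lit.astype(np.float32) - render_unlit.astype(np.float32)
    out = face_img.astype(np.float32) + delta
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy 1x2-pixel grayscale example: the lighting pass brightens
# the first pixel of the 3D render by 40, the second not at all.
face = np.array([[100, 100]], dtype=np.uint8)
unlit = np.array([[90, 90]], dtype=np.uint8)
lit = np.array([[130, 90]], dtype=np.uint8)
print(apply_light_shadow(face, unlit, lit))  # [[140 100]]
```

Working on the delta rather than on the lit render directly preserves the skin detail of the original photograph while transferring only the synthetic lighting change.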
PCT/CN2018/094071 2018-04-16 2018-07-02 Image processing method and apparatus, and electronic device WO2019200718A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810339634.8A CN108537870B (zh) 2018-04-16 2018-04-16 Image processing method and apparatus, and electronic device
CN201810339634.8 2018-04-16

Publications (1)

Publication Number Publication Date
WO2019200718A1 true WO2019200718A1 (zh) 2019-10-24

Family

ID=63481236

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/094071 WO2019200718A1 (zh) 2018-04-16 2018-07-02 Image processing method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN108537870B (zh)
WO (1) WO2019200718A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020056692A1 (zh) * 2018-09-20 2020-03-26 Pacific Future Technology (Shenzhen) Co., Ltd. Information interaction method and apparatus, and electronic device
CN109360176B (zh) * 2018-10-15 2021-03-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, electronic device, and computer-readable storage medium
CN109167935A (zh) * 2018-10-15 2019-01-08 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Video processing method and apparatus, electronic device, and computer-readable storage medium
CN109993150B (zh) * 2019-04-15 2021-04-27 Beijing ByteDance Network Technology Co., Ltd. Method and apparatus for age recognition
CN111178266B (zh) * 2019-12-30 2023-09-01 Beijing HJIMI Technology Co., Ltd. Method and apparatus for generating face key points
CN111522771B (zh) * 2020-04-20 2023-08-15 Beijing Baidu Netcom Science and Technology Co., Ltd. Fundus image processing method, terminal device, and storage medium
CN111556255B (zh) * 2020-04-30 2021-10-01 Huawei Technologies Co., Ltd. Image generation method and apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825544A (zh) * 2015-11-25 2016-08-03 Vivo Mobile Communication Co., Ltd. Image processing method and mobile terminal
US20170243396A1 (en) * 2016-02-19 2017-08-24 Samsung Electronics Co., Ltd Method for processing image and electronic device thereof
CN107506714A (zh) * 2017-08-16 2017-12-22 Chengdu Pinguo Technology Co., Ltd. Face image relighting method
CN107610237A (zh) * 2017-09-08 2018-01-19 Beijing Qihoo Technology Co., Ltd. Real-time processing method and apparatus for image acquisition device data, and computing device

Also Published As

Publication number Publication date
CN108537870B (zh) 2019-09-03
CN108537870A (zh) 2018-09-14

Similar Documents

Publication Publication Date Title
WO2019200718A1 (zh) Image processing method and apparatus, and electronic device
WO2019200719A1 (zh) Three-dimensional face model generation method and apparatus, and electronic device
US10691934B2 (en) Real-time visual feedback for user positioning with respect to a camera and a display
WO2019200720A1 (zh) Ambient light compensation method and apparatus based on image processing, and electronic device
US11308675B2 (en) 3D facial capture and modification using image and temporal tracking neural networks
WO2019205284A1 (zh) AR imaging method and apparatus
WO2019205283A1 (zh) Infrared-based AR imaging method, ***, and electronic device
KR101977638B1 (ko) Method for correcting a user's gaze in an image, machine-readable storage medium, and communication terminal
US9369658B2 (en) Image correction of surface projected image
TW201915831A (zh) Object recognition method
WO2020037680A1 (zh) Light-based three-dimensional face optimization method and apparatus, and electronic device
US8400532B2 (en) Digital image capturing device providing photographing composition and method thereof
WO2019011091A1 (zh) Photographing reminder method, apparatus, terminal, and computer storage medium
WO2020258258A1 (zh) Target following method, ***, readable storage medium, and movable platform
CN109285216A (zh) Method and apparatus for generating a three-dimensional face image based on an occluded image, and electronic device
WO2020056690A1 (zh) Method and apparatus for presenting a video-content-associated interface, and electronic device
WO2020056691A1 (zh) Method and apparatus for generating an interactive object, and electronic device
WO2021147650A1 (zh) Photographing method and apparatus, storage medium, and electronic device
TWI604413B (zh) Image processing method and image processing apparatus
CN110221626B (zh) Follow-shooting control method and apparatus, computer device, and storage medium
WO2016090759A1 (zh) Intelligent photographing method and apparatus
CN112561787B (zh) Image processing method and apparatus, electronic device, and storage medium
CN115225806A (zh) Cinematic image framing for wide field of view (FOV) cameras
CN108961371B (zh) Panoramic startup page and app display method, processing apparatus, and mobile terminal
WO2020056693A1 (zh) Picture synthesis method and apparatus, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18915435

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 12/02/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18915435

Country of ref document: EP

Kind code of ref document: A1