CN112995539B - Mobile terminal and image processing method

Info

Publication number
CN112995539B
Authority
CN
China
Prior art keywords
illumination
image
target
parameters
processed
Prior art date
Legal status
Active (assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number
CN201911294621.4A
Other languages
Chinese (zh)
Other versions
CN112995539A
Inventor
Zhang Peilong (张培龙)
Current Assignee (the listed assignee may be inaccurate)
Hisense Mobile Communications Technology Co Ltd
Original Assignee
Hisense Mobile Communications Technology Co Ltd
Application filed by Hisense Mobile Communications Technology Co Ltd
Priority to CN201911294621.4A
Publication of CN112995539A
Application granted
Publication of CN112995539B

Classifications

    • H04N 5/265 Mixing (studio circuits for mixing, switching-over, change of character of image, and other special effects)
    • H04N 5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • G06N 3/045 Neural network architectures; combinations of networks
    • G06N 3/08 Neural network learning methods


Abstract

The embodiments of the present application provide a mobile terminal and an image processing method for improving the light effect of an image while reducing cost. The mobile terminal of the present application comprises a processor, a memory and a display screen, wherein: the memory is used for storing the image to be processed; the processor is used for extracting features from the image to be processed through a feature extraction model to obtain a feature image containing a target object, determining a target illumination template according to preset illumination parameters corresponding to the image to be processed, performing style migration processing on the feature image according to the target illumination template to obtain a light effect image containing the target object, and fusing the light effect image with the image to be processed before outputting the result for display on the display screen. Because the image shot by the 2D camera of the mobile terminal is enhanced purely through image processing techniques, the light effect of the target object in the output image is improved and the cost is reduced.

Description

Mobile terminal and image processing method
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a mobile terminal and an image processing method.
Background
The shooting function is an essential function of a mobile terminal. However, when pictures are taken with a mobile terminal, only simple light effect processing can be applied to the captured target object because the terminal uses a 2D camera: for example, a simple image fusion with an illumination template can be performed on the captured target, but pictures with special artistic effects under various illumination angles, illumination brightness levels, spot shapes, illumination intensities and other conditions cannot be obtained.
At present, to obtain such pictures with special artistic effects on a mobile terminal, a 3D camera is installed in the terminal and a 3D model is built from its output, so that effects such as light spots and shadows on a face under a specific illumination angle can be reconstructed by digital image processing. However, 3D cameras are bulky and costly.
Disclosure of Invention
The embodiment of the application provides a mobile terminal and an image processing method, which are used for improving the light effect of an image and reducing the cost.
The embodiment of the application provides the following specific technical scheme:
in a first aspect, the present application provides a mobile terminal comprising a processor, a memory and a display screen, wherein:
the memory is used for storing the image to be processed;
the processor is used for extracting features from the image to be processed through a feature extraction model to obtain a feature image containing a target object; determining a target illumination template according to preset illumination parameters corresponding to the image to be processed; performing style migration processing on the feature image according to the target illumination template to obtain a light effect image containing the target object; and performing image fusion on the light effect image and the image to be processed before outputting and displaying the result on the display screen.
The mobile terminal determines a target illumination template according to the preset illumination parameters corresponding to the image to be processed, and performs style migration processing on the feature image of the image to be processed according to that template, migrating the style of the target illumination template into the feature image to obtain a light effect image that contains the target object and carries the style of the target illumination template. The light effect image is then fused with the image to be processed and output for display, so that the displayed image has the light effect of the target illumination template. This improves the light effect of the target object in the output image while requiring only image processing techniques applied to pictures shot by the terminal's camera, thereby reducing cost.
In one possible implementation, the processor is specifically configured to:
if the preset illumination parameters are stored in the corresponding relation between the illumination parameters stored in the illumination migration parameter library and the illumination templates, taking the illumination templates corresponding to the preset illumination parameters as target illumination templates; or
If the corresponding relation between the illumination parameters stored in the illumination migration parameter base and the illumination templates is determined, and the preset illumination parameters are not stored, determining at least two groups of target illumination parameters in the illumination migration parameter base according to the preset illumination parameters, determining the illumination templates corresponding to the target illumination parameters in the corresponding relation, and taking the illumination templates corresponding to the target illumination parameters as the target illumination templates.
This gives a concrete scheme for determining the target illumination template from the preset illumination parameters: when the preset illumination parameters are stored in the illumination migration parameter library, the illumination template corresponding to them is used directly as the target illumination template; when they are not stored, at least two groups of illumination parameters are determined in the library according to the preset illumination parameters, and the illumination templates corresponding to those groups are used as the target illumination templates, which ensures the accuracy of the target illumination template determined from the preset illumination parameters.
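As an illustration of the two branches above, a minimal Python sketch, assuming the illumination migration parameter library is an in-memory dictionary keyed by numeric parameter tuples; all names are illustrative, not the patent's code:

```python
def select_target_templates(preset, library):
    """preset: a tuple of numeric illumination parameters,
    e.g. (angle, colour temperature, brightness).
    library: dict mapping such tuples to illumination templates."""
    if preset in library:
        # Case one: the preset parameters are stored in the library.
        return [library[preset]]
    # Case two: fall back on the (at least) two stored parameter sets
    # closest to the preset, ranked by total absolute difference.
    def distance(params):
        return sum(abs(p - q) for p, q in zip(params, preset))
    nearest = sorted(library, key=distance)[:2]
    return [library[k] for k in nearest]
```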
In one possible implementation, the processor is specifically configured to:
when the preset illumination parameters are stored in the illumination migration parameter library, performing style migration processing on the characteristic image according to a target illumination template which is stored in the illumination migration parameter library and corresponds to the preset illumination parameters to obtain a light effect image containing a target object;
when the preset illumination parameters are not stored in the illumination migration parameter library, performing style migration processing on the feature image with the illumination template corresponding to each of the at least two determined groups of target illumination parameters, each such pass yielding one style migration image, and fusing all the style migration images to obtain the light effect image containing the target object.
This gives a concrete scheme for obtaining the light effect image containing the target object by style migration. When there is a single target illumination template, style migration processing is performed on the feature image with that template to obtain the light effect image. When several groups of target illumination parameters yield several target illumination templates, style migration processing is performed on the feature image once per template, producing several style migration images containing the target object, and all of them are fused to obtain the light effect image, which safeguards the quality of the resulting light effect.
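A compact sketch of this single-template/multi-template branch, assuming the style-migration model is available as a callable and that fusing the style migration images means equal-weight averaging (the description leaves the fusion operator unspecified, so the averaging is an assumption of this sketch):

```python
import numpy as np

def light_effect_image(feature_image, templates, style_transfer):
    """Run the style-migration model once per target illumination template,
    then fuse the results into a single light effect image."""
    migrated = [style_transfer(feature_image, t) for t in templates]
    if len(migrated) == 1:
        return migrated[0]
    # Equal-weight averaging is one simple reading of
    # "fusing all the style migration images".
    return np.mean(migrated, axis=0).astype(feature_image.dtype)
```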
In one possible implementation, the processor is specifically configured to:
determining absolute difference values between the preset illumination parameters and the illumination parameters stored in the illumination migration parameter library;
and selecting at least two target illumination parameters corresponding to the minimum absolute difference values in the illumination migration parameter library according to the absolute difference values.
By selecting, from the illumination migration parameter library, the at least two target illumination parameters whose absolute differences to the preset illumination parameters are smallest, the mobile terminal identifies the stored parameters closest to the preset ones. The target illumination templates determined this way are therefore the closest available to an illumination template for the preset illumination parameters, which in turn safeguards the light effect of the image produced by the style migration processing.
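The selection of the two closest stored parameter values can be sketched as follows; the helper name is ours and numeric parameters are assumed:

```python
import numpy as np

def nearest_two(preset_value, stored_values):
    """Return the two stored values whose absolute difference
    to the preset value is smallest."""
    diffs = np.abs(np.asarray(stored_values, dtype=float) - preset_value)
    idx = np.argsort(diffs)[:2]      # indices of the two minima
    return [stored_values[i] for i in idx]
```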
In one possible implementation, the processor is further configured to:
according to the form of the target object in the image to be processed, determining a standard reference image consistent with the form in an illumination migration parameter library;
before performing style migration processing on the characteristic image according to the target illumination template, determining a perspective transformation matrix according to coordinates of target object key points in the image to be processed and coordinates of reference object key points in a standard reference image, wherein the characteristics corresponding to the target object key points are the same as the characteristics corresponding to the reference object key points;
carrying out perspective transformation on the characteristic image by adopting a perspective transformation matrix;
and after the style transfer processing is carried out on the characteristic image according to the target illumination template, the inverse matrix of the perspective transformation matrix is adopted to carry out the image perspective inverse transformation processing on the light effect image.
The mobile terminal applies a perspective transformation to the feature image so that the target object in it is aligned with the reference object in the standard reference image, and then performs the illumination migration processing against the target illumination template captured for that standard reference image under the preset illumination parameters. This makes the migration more accurate and gives the resulting light effect image a better light effect. After the style migration processing, the inverse matrix of the perspective transformation matrix is applied to the light effect image to restore the original geometry of the image, so that it can be fused accurately with the image to be processed.
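A sketch of the align, migrate, restore sequence using OpenCV; the keypoint detection and the style-migration step are assumed to exist elsewhere, and the helper names are ours:

```python
import cv2
import numpy as np

def align_and_restore(feature_img, obj_pts, ref_pts, migrate):
    """Align the target object to the standard reference, run style
    migration, then undo the warp. obj_pts/ref_pts are (N, 2) arrays
    of matching keypoints (N >= 4) with the same semantic features."""
    H, _ = cv2.findHomography(np.float32(obj_pts), np.float32(ref_pts))
    h, w = feature_img.shape[:2]
    aligned = cv2.warpPerspective(feature_img, H, (w, h))
    lit = migrate(aligned)                       # style migration step
    # Inverse perspective transform restores the original geometry
    # so the light effect image can be fused with the original image.
    return cv2.warpPerspective(lit, np.linalg.inv(H), (w, h))
```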
In one possible implementation, the processor is specifically configured to:
and fusing the light effect image and the image to be processed according to the mask image, wherein the mask image is obtained by performing first-type labeling on the region where the target object is located in the image to be processed and performing second-type labeling on the region except the target object region.
Because the mask image lets the mobile terminal distinguish the background region from the target object region more accurately, fusing the light effect image with the image to be processed according to the mask image ensures that only the region where the target object is located receives the light effect while the background region does not, which guarantees the accuracy of the image fusion.
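A sketch of mask-guided fusion, assuming a binary mask with 1 marking the target object region (first-type label) and 0 elsewhere (second-type label):

```python
import numpy as np

def fuse_with_mask(light_effect, original, mask):
    """Blend the light effect image into the original only where
    the mask marks the target object."""
    m = mask.astype(np.float32)
    if m.ndim == 2:                  # broadcast single-channel mask over RGB
        m = m[..., None]
    fused = m * light_effect + (1.0 - m) * original
    return fused.astype(original.dtype)
```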
In a second aspect, the present application provides a method of image processing, the method comprising:
performing feature extraction on the image to be processed through a feature extraction model to obtain a feature image containing a target object;
determining a target illumination template according to a preset illumination parameter corresponding to an image to be processed;
performing style migration processing on the characteristic image according to the target illumination template to obtain a light effect image containing a target object;
and carrying out image fusion on the light effect image and the image to be processed, and then outputting and displaying the light effect image and the image to be processed.
In a possible implementation manner, when the target illumination template is determined according to a preset illumination parameter used when the target object is shot:
if the preset illumination parameters are stored in the corresponding relation between the illumination parameters stored in the illumination migration parameter library and the illumination templates, taking the illumination templates corresponding to the preset illumination parameters as target illumination templates; or
If the corresponding relation between the illumination parameters stored in the illumination migration parameter base and the illumination templates is determined, and the preset illumination parameters are not stored, determining at least two target illumination parameters in the illumination migration parameter base according to the preset illumination parameters, determining the illumination templates corresponding to the target illumination parameters in the corresponding relation, and taking the illumination templates corresponding to the target illumination parameters as the target illumination templates.
In a possible implementation manner, when performing style migration processing on the feature image according to the target illumination template to obtain a light effect image including the target object:
when the preset illumination parameters are stored in the illumination migration parameter library, performing style migration processing on the characteristic image according to a target illumination template which is stored in the illumination migration parameter library and corresponds to the preset illumination parameters to obtain a light effect image containing a target object;
when the preset illumination parameters are not stored in the illumination migration parameter library, performing style migration processing on the feature image with the illumination template corresponding to each of the at least two determined groups of target illumination parameters, each such pass yielding one style migration image, and fusing all the style migration images to obtain the light effect image containing the target object.
In one possible implementation, when at least two target illumination parameters are determined in the illumination migration parameter library according to preset illumination parameters:
determining absolute difference values between the preset illumination parameters and the illumination parameters stored in the illumination migration parameter library;
and selecting at least two target illumination parameters corresponding to the minimum absolute difference values in the illumination migration parameter library according to the absolute difference values.
In one possible implementation manner, before performing the style migration processing on the feature image according to the target illumination template:
according to the form of the target object in the image to be processed, determining a standard reference image consistent with the form in an illumination migration parameter library;
determining a perspective transformation matrix according to the coordinates of the key points of the target object in the image to be processed and the coordinates of the key points of the reference object in the standard reference image, wherein the corresponding features of the key points of the target object are the same as the corresponding features of the key points of the reference object;
carrying out perspective transformation on the characteristic image by adopting a perspective transformation matrix;
after the style migration processing is carried out on the characteristic image according to the target illumination template: and performing image perspective inverse transformation processing on the light effect image by adopting an inverse matrix of the perspective transformation matrix.
In one possible implementation, when the light effect image is image fused with the image to be processed:
and fusing the light effect image and the image to be processed according to the mask image, wherein the mask image is obtained by performing first-type labeling on the region where the target object is located in the image to be processed and performing second-type labeling on the region except the target object region.
In a third aspect, the present application provides an apparatus for image processing, the apparatus comprising a feature extraction unit, a determination unit, a style migration unit and a fusion unit, wherein:
the characteristic extraction unit is used for extracting the characteristics of the image to be processed through the characteristic extraction model to obtain a characteristic image containing a target object;
the determining unit is used for determining a target illumination template according to a preset illumination parameter corresponding to the image to be processed;
the style migration unit is used for performing style migration processing on the characteristic image according to the target illumination template to obtain a light effect image containing a target object;
and the fusion unit is used for performing image fusion on the light effect image and the image to be processed and then outputting and displaying the light effect image and the image to be processed.
In a fourth aspect, the present application further provides a computer storage medium, where the computer storage medium stores computer-executable instructions, and the computer-executable instructions, when executed by a processor, implement the method for image processing provided in the embodiments of the present application.
For technical effects brought by any one implementation manner in the second aspect to the fourth aspect, reference may be made to technical effects brought by a corresponding implementation manner in the first aspect, and details are not described here.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application, and other drawings can be derived from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic structural diagram of a mobile terminal provided in the present application;
fig. 2 is a block diagram of a software structure of a mobile terminal according to the present application;
fig. 3 is a schematic view of a user interface of a mobile terminal according to the present application;
fig. 4 is a flowchart of a first image processing method according to an embodiment of the present application;
fig. 5 is a schematic diagram of feature extraction of an image to be processed according to an embodiment of the present application;
fig. 6 is a schematic view of a human-computer interaction interface when shooting is performed through a camera of a mobile terminal according to an embodiment of the present application;
fig. 7 is a schematic diagram of a style migration CNN network model according to an embodiment of the present application;
fig. 8 is a schematic diagram of a key point extraction provided in the embodiment of the present application;
fig. 9 is a flowchart of a second image processing method according to an embodiment of the present application;
FIG. 10 is a flowchart of a third image processing method according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a mobile terminal for image processing according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 13 is a block diagram of an image processing system according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the present application more clearly understood, the technical solutions in the embodiments of the present application will be described completely and in detail below with reference to the accompanying drawings. The described embodiments are obviously only a part of the embodiments of the present application, not all of them. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The technical solutions in the embodiments of the present application will be described in detail and clearly with reference to the accompanying drawings. In the description of the embodiments herein, "/" means "or" unless otherwise specified, for example, a/B may mean a or B; "and/or" in the text is only an association relationship describing an associated object, and means that three relationships may exist, for example, a and/or B may mean: three cases of a alone, a and B both, and B alone exist, and in addition, "a plurality" means two or more than two in the description of the embodiments of the present application.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature, and in the description of embodiments of the application, unless stated otherwise, "plurality" means two or more.
Fig. 1 shows a schematic configuration of a mobile terminal 100.
The following describes an embodiment specifically by taking the mobile terminal 100 as an example. It should be understood that the mobile terminal 100 shown in fig. 1 is merely an example, and that the mobile terminal 100 may have more or fewer components than shown in fig. 1, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
A block diagram of a hardware configuration of a mobile terminal 100 according to an exemplary embodiment is exemplarily shown in fig. 1. As shown in fig. 1, the mobile terminal 100 includes: a Radio Frequency (RF) circuit 110, a memory 120, a display unit 130, a camera 140, a sensor 150, an audio circuit 160, a Wireless Fidelity (Wi-Fi) module 170, a processor 180, a bluetooth module 181, and a power supply 190.
The RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call, and may receive downlink data of a base station and then send the downlink data to the processor 180 for processing; the uplink data may be transmitted to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The memory 120 may be used to store software programs and data. The processor 180 performs various functions of the mobile terminal 100 and data processing by executing software programs or data stored in the memory 120. The memory 120 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. The memory 120 stores an operating system that enables the mobile terminal 100 to operate. The memory 120 may store an operating system and various application programs, and may also store codes for performing the methods of the embodiments of the present application.
The display unit 130 may be used to receive input numeric or character information and generate signal input related to user settings and function control of the mobile terminal 100, and particularly, the display unit 130 may include a touch screen 131 disposed on the front of the mobile terminal 100 and may collect touch operations of a user thereon or nearby, such as clicking a button, dragging a scroll box, and the like.
The display unit 130 may also be used to display information input by the user or information provided to the user and a Graphical User Interface (GUI) of various menus of the mobile terminal 100. In particular, the display unit 130 may include a display screen 132 disposed on the front surface of the mobile terminal 100. The display screen 132 may be configured in the form of a liquid crystal display, a light emitting diode, or the like. The display unit 130 may be used to display various graphical user interfaces in the present application.
The touch screen 131 may cover the display screen 132, or the touch screen 131 and the display screen 132 may be integrated to implement the input and output functions of the mobile terminal 100, and after the integration, the touch screen may be referred to as a touch display screen for short. In the present application, the display unit 130 may display the application programs and the corresponding operation steps.
The camera 140 may be used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing elements convert the light signals into electrical signals which are then passed to the processor 180 for conversion into digital image signals.
The mobile terminal 100 may further include at least one sensor 150, such as an acceleration sensor 151, a distance sensor 152, a fingerprint sensor 153, a temperature sensor 154. The mobile terminal 100 may also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, infrared sensor, light sensor, motion sensor, and the like.
The audio circuitry 160, speaker 161, microphone 162 may provide an audio interface between a user and the mobile terminal 100. The audio circuit 160 may transmit the electrical signal converted from the received audio data to the speaker 161, and convert the electrical signal into a sound signal for output by the speaker 161. The mobile terminal 100 may also be provided with a volume button for adjusting the volume of the sound signal. On the other hand, the microphone 162 converts the collected sound signals into electrical signals, converts the electrical signals into audio data after being received by the audio circuit 160, and outputs the audio data to the RF circuit 110 to be transmitted to, for example, another mobile terminal, or outputs the audio data to the memory 120 for further processing. In this application, the microphone 162 may capture the voice of the user.
Wi-Fi belongs to a short-distance wireless transmission technology, and the mobile terminal 100 may help a user to receive and transmit e-mails, browse webpages, access streaming media, and the like through the Wi-Fi module 170, which provides a wireless broadband internet access for the user.
The processor 180 is the control center of the mobile terminal 100: it connects the various parts of the terminal through various interfaces and lines, and performs the functions of the mobile terminal 100 and processes data by running or executing software programs stored in the memory 120 and calling data stored in the memory 120. In some embodiments, the processor 180 may include one or more processing units; the processor 180 may also integrate an application processor, which mainly handles the operating system, user interfaces and applications, and a baseband processor, which mainly handles wireless communication. It will be appreciated that the baseband processor may also not be integrated into the processor 180. In the present application, the processor 180 may run the operating system, application programs, user interface display, touch response, and the processing methods of the embodiments of the present application. In addition, the processor 180 is coupled with the display unit 130.
And the bluetooth module 181 is configured to perform information interaction with other bluetooth devices having a bluetooth module through a bluetooth protocol. For example, the mobile terminal 100 may establish a bluetooth connection with a wearable electronic device (e.g., a smart watch) having a bluetooth module via the bluetooth module 181, so as to perform data interaction.
The mobile terminal 100 also includes a power supply 190 (e.g., a battery) that powers the various components. The power supply may be logically connected to the processor 180 through a power management system to manage charging, discharging, power consumption, etc. through the power management system. The mobile terminal 100 may also be configured with power buttons for powering the mobile terminal on and off, and locking the screen.
Fig. 2 is a block diagram of a software configuration of the mobile terminal 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and answered, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide a communication function of the mobile terminal 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is given, the mobile terminal vibrates, an indicator light flashes, and the like.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part contains the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files, and performs functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide a fusion of the 2D and 3D layers for multiple applications.
The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following describes exemplary workflow of the mobile terminal 100 software and hardware in connection with capturing a photo scene.
When the touch screen 131 receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into an original input event (including touch coordinates, timestamp of the touch operation, and the like). The raw input events are stored at the kernel layer. And the application program framework layer acquires the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation as a touch click operation, and taking a control corresponding to the click operation as a control of a camera application icon as an example, the camera application calls an interface of an application framework layer, starts the camera application, further starts a camera drive by calling a kernel layer, and captures a still image or a video through the camera 140.
The mobile terminal 100 in the embodiment of the present application may be a mobile phone, a tablet computer, a wearable device, a notebook computer, a television, and the like.
Fig. 3 is a schematic diagram illustrating a user interface on a mobile terminal (e.g., mobile terminal 100 of fig. 1). In some implementations, a user can open a corresponding application by touching an application icon on the user interface, or can open a corresponding folder by touching a folder icon on the user interface.
During shooting, the user opens the camera application through the camera application icon displayed on the touch user interface and starts the shooting function; the person image is acquired through the 2D camera in the mobile terminal. When the image acquired by the 2D camera is processed, only a simple image fusion with an illumination template can be applied to the person image, and light effects cannot be obtained freely from multiple angles. A person image shot with the 2D camera of a mobile terminal therefore cannot reach the light effect of an image shot with professional lighting equipment in a professional studio environment.
The same holds for images previously acquired through the 2D camera and stored in the memory of the mobile terminal. When editing a locally stored image, the user opens the gallery displayed on the user interface, finds a locally stored image, selects the editing function and enters the image editing state; but since the edited image was captured the same way as a freshly shot one, only a simple image fusion with an illumination template is possible, and the light effect of an image shot with professional lighting equipment in a professional studio environment cannot be achieved.
The light effect referred to here is obtained by simulating a studio environment in which professional lighting equipment adjusts the illumination, yielding portrait pictures with special artistic effects under various conditions of illumination angle, illumination brightness, spot shape, illumination intensity and the like.
Therefore, the application provides an image processing method and a mobile terminal aiming at the problem that the light effect of an image shot by a 2D camera of the mobile terminal is poor.
In the present application, an image to be processed is first acquired, either by shooting with the camera of the mobile terminal or by selecting a previously captured image from its memory. Feature extraction is performed on the image to be processed to obtain a feature image containing the target object, and a target illumination template is determined according to the preset illumination parameters corresponding to the image. The illumination templates are light effect pictures shot in a studio with professional lighting equipment under different combinations of illumination parameters; these pictures serve as standard illumination-migration stylized templates, and the target illumination template is the one selected among them according to the preset illumination parameters. After the target illumination template is determined, style migration processing is performed on the feature image, transferring the style of the target illumination template into it to obtain a light effect image in that style. The light effect image is fused with the image to be processed and then output and displayed, so that the displayed image has the light effect of an image shot with professional lighting equipment under the preset illumination parameters. This improves the light effect of images shot with the terminal's 2D camera while avoiding the use of a 3D camera in the mobile terminal, thereby saving cost.
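Putting these steps together, the overall flow can be sketched as below, reusing the illustrative helpers from the other sketches in this description; every name here is an assumption, not the patent's code:

```python
def relight(image, preset_params, segment, library, style_transfer):
    """End-to-end light effect pipeline as described above."""
    feature_img, mask = segment(image)                    # 1. feature extraction
    templates = select_target_templates(preset_params, library)   # 2. template(s)
    lit = light_effect_image(feature_img, templates, style_transfer)  # 3. style migration
    return fuse_with_mask(lit, image, mask)               # 4. fusion for display
```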
To further illustrate the technical solutions provided by the embodiments of the present application, the following detailed description is made with reference to the accompanying drawings and the detailed description. Although the embodiments of the present application provide method steps as shown in the following embodiments or figures, more or fewer steps may be included in the method based on conventional or non-inventive efforts. In steps where no necessary causal relationship exists logically, the order of execution of the steps is not limited to that provided by the embodiments of the present application.
As shown in fig. 4, a flowchart of a first image processing method provided by the present application specifically includes the following steps:
and 400, performing feature extraction on the image to be processed through the feature extraction model to obtain a feature image containing the target object.
In the application, an image to be processed is input into a feature extraction model for feature extraction, and a feature image containing a target object is obtained.
The image to be processed is an RGB (red-green-blue) three-channel image, obtained mainly in one of the following ways (both paths are sketched after this list):
first, converting a video stream (Raw images or YUV (luminance-chrominance) video frames) acquired by the 2D camera of the mobile terminal into RGB three-channel images and taking the converted RGB three-channel image as the image to be processed, where a Raw image is the unprocessed original data produced by the image sensor when converting the captured light signal into a digital signal;
second, decoding a picture (a JPEG (Joint Photographic Experts Group), BMP (Bitmap) or PNG (Portable Network Graphics) picture) or a video file (MPG (Moving Pictures Experts Group) or AVI (Audio Video Interleaved)) pre-stored in the memory of the mobile terminal, and taking the decoded RGB three-channel image as the image to be processed.
The present application therefore applies both to the scenario of shooting an image with the camera of the mobile terminal and to the scenario of editing an image already stored in the mobile terminal.
In the present application, the feature extraction model used for extracting features from the image to be processed is a semantic segmentation model, mainly adopting deep-learning semantic segmentation algorithms such as PSPNet and DeepLab.
Taking the case where the target object is a person and the image to be processed contains a person as an example, the semantic segmentation model classifies the image to be processed pixel by pixel and outputs a feature image containing the target object, in which every area other than the target object's area is a blank background.
As shown in fig. 5, after the image to be processed passes through the semantic segmentation model, the output feature image contains only the region of the target object while the background region becomes blank; fig. 5 thus illustrates the feature image containing the target object obtained by feature extraction through the semantic segmentation model.
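As a concrete stand-in for this segmentation step, the following sketch uses the pretrained DeepLabV3 model from torchvision (DeepLab being one of the algorithms named above); the model choice and the VOC "person" class index 15 are assumptions of this sketch:

```python
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def person_feature_image(rgb_image):
    """rgb_image: H x W x 3 uint8 array. Returns the feature image
    (background blanked) and the person mask."""
    with torch.no_grad():
        out = model(preprocess(rgb_image).unsqueeze(0))["out"][0]
    mask = (out.argmax(0) == 15).numpy()        # 15 = "person" in VOC labels
    feature = rgb_image.copy()
    feature[~mask] = 0                           # blank the background region
    return feature, mask
```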
Step 401, determining a target illumination template according to a preset illumination parameter corresponding to an image to be processed.
When shooting a scene, the image to be processed is obtained through the camera of the mobile terminal, and the preset illumination parameters for the image to be processed are determined by the adjustments the user makes during shooting.
As shown in fig. 6, which is a schematic view of the human-computer interaction interface when shooting through the camera of a mobile terminal, an illumination parameter adjustment area is provided in the interface. The illumination parameters include: illumination angle, illumination color and illumination brightness. The illumination angle is divided into a horizontal illumination angle and a vertical illumination angle, each with a value range of [-180, +180]; the illumination color mainly refers to the color temperature of the illumination light source, such as D35, D40, D50, D65 and D75; the illumination brightness, also called light source brightness, is the luminous flux per unit area, with a value range of [0 lux, 1000 lux].
In this application, the illumination parameter adjustment area may also be an input box set for each illumination parameter, and a user may input a parameter value for each illumination parameter, and may also preset the illumination parameter value in other manners, which is not described herein again.
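These parameters and their value ranges could be carried in a small structure such as the following sketch; the field names and validation rules are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class IlluminationParams:
    h_angle: float      # horizontal illumination angle, [-180, +180] degrees
    v_angle: float      # vertical illumination angle, [-180, +180] degrees
    color: str          # light source colour temperature, e.g. "D65"
    brightness: float   # illuminance in lux, [0, 1000]

    def __post_init__(self):
        for a in (self.h_angle, self.v_angle):
            if not -180 <= a <= 180:
                raise ValueError("illumination angle out of range")
        if not 0 <= self.brightness <= 1000:
            raise ValueError("illumination brightness out of range")
```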
When editing a local picture, the terminal responds to the user's selection instruction to determine the picture to be edited and, in response to the user's editing instruction, displays the selected picture together with the editing function keys on the display page. Among these keys is an illumination parameter adjustment key; triggering it displays an adjustment progress bar for each illumination parameter on the display page, as shown in fig. 6, which is not repeated here.
And after the preset illumination parameters are determined, determining a target illumination template according to the preset illumination parameters so as to determine the illumination template which is closest to the preset illumination parameters and is acquired by professional illumination equipment under the studio.
In this method, when determining the target illumination template according to the preset illumination parameters corresponding to the image to be processed, the terminal first checks whether the preset illumination parameters appear in the correspondence between illumination parameters and illumination templates stored in the illumination migration parameter library;
if the preset illumination parameters are stored, the illumination template corresponding to them is taken as the target illumination template; otherwise, at least two target illumination parameters are determined in the illumination migration parameter library according to the preset illumination parameters, and the target illumination template is determined from them. The two cases are as follows:
the first condition is as follows: the illumination migration parameter library stores preset illumination parameters.
In the present application, the illumination migration parameter library stores a corresponding relationship between an illumination template and an illumination parameter, as shown in table 1:
TABLE 1

Illumination template    Illumination angle    Illumination color    Illumination brightness
S1                       φ1                    C1                    L1
S2                       φ2                    C2                    L2
……                       ……                    ……                    ……
Sn                       φn                    Cn                    Ln
As can be seen from table 1, each illumination template corresponds to a set of illumination parameters, and each set of illumination parameters includes an illumination angle, an illumination color, and an illumination brightness.
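In code, Table 1 amounts to a mapping from parameter tuples to templates, compatible with the lookup sketch given earlier; the numeric values below are made up for illustration:

```python
# Table 1 rendered as an in-memory structure: each (angle, colour
# temperature, brightness) tuple maps to its illumination template name.
ILLUMINATION_LIBRARY = {
    (-45, 6500, 300): "S1",   # angle φ1, color C1, brightness L1
    ( 30, 5000, 550): "S2",   # angle φ2, color C2, brightness L2
    ( 90, 4000, 800): "S3",
}
```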
When the target illumination template is determined, any one of the preset illumination parameters can be used as a standard parameter, and the target illumination template is determined in an illumination migration parameter library according to the standard parameter;
for example, the illumination color in the preset illumination parameter is used as a standard parameter, the illumination color value corresponding to the illumination color in the preset illumination parameter is determined in the illumination migration parameter library, and the illumination template corresponding to the illumination color value corresponding to the illumination color in the preset illumination parameter in the illumination migration parameter library is used as a target illumination template. Assuming that the illumination color in the preset illumination parameters is C1, and the illumination color C1 is stored in the illumination migration parameter library, the illumination template S1 corresponding to the illumination color C1 is used as the target illumination template.
If a plurality of illumination color values corresponding to the illumination colors in the preset illumination parameters are stored in the illumination migration parameter library, taking an illumination template corresponding to the plurality of illumination color values corresponding to the illumination colors in the preset illumination parameters in the illumination migration parameter library as an alternative illumination template; for example, if the illumination color in the preset illumination parameters is C1, the illumination color corresponding to the illumination template S1 stored in the illumination migration parameter library is C1, the illumination color corresponding to the illumination template S3 is C1, and the illumination color corresponding to the illumination template S4 is C1, the illumination template S1, the illumination template S3, and the illumination template S4 are used as alternative illumination templates;
in order to ensure the accuracy of the target illumination template, further selecting a standard parameter again from the preset illumination parameters, and selecting a target grating template from the alternative illumination templates according to the again selected standard parameter; assuming that the standard parameter selected again is illumination brightness, if the illumination brightness in the preset illumination parameters is L1, determining an illumination template with illumination brightness of L1 corresponding to the alternative illumination template as a target illumination template, and at this time, determining the illumination template S1 as the target illumination template;
if no illumination template with illumination brightness L1 exists in the alternative illumination templates, fusing the determined alternative illumination templates, and taking the illumination template generated after fusion as a target illumination template; or randomly selecting one illumination template from the alternative illumination templates as a target illumination template; or selecting one illumination template from the alternative illumination templates as a target illumination template according to other illumination parameters.
In this application, a target illumination template may also be determined in the illumination migration parameter library according to all parameters in the preset illumination parameters, at this time, it is determined whether each parameter value in each group of illumination parameters corresponding to a plurality of illumination templates stored in the illumination migration parameter library corresponds to each parameter value in the preset illumination parameters one to one, and if there is an illumination parameter corresponding to each parameter in the preset illumination parameters one to one in the illumination migration parameter library, the illumination template corresponding to the group of illumination parameters is determined as the target illumination template.
For example, if the illumination angle, the illumination color, and the illumination brightness in the preset illumination parameters are Φ 1, C1, and L1, it is determined whether a group of illumination parameters, the illumination angle of which is Φ 1, the illumination color of which is C1, and the illumination brightness of which is L1, are stored in the illumination migration parameter library, and at this time, it is determined that the illumination angle corresponding to the illumination template S1 is Φ 1, the illumination color of which is C1, and the illumination brightness of which is L1, so that the illumination template S1 is the target illumination template.
Case two: the preset illumination parameters are not stored in the illumination migration parameter library.
The preset illumination parameters are considered not stored in the illumination migration parameter library when every group of illumination parameters corresponding to the stored illumination templates differs from the preset illumination parameters in at least one parameter value.
In the application, when it is determined that the preset illumination parameters are not stored in the illumination migration parameter library, at least two groups of target illumination parameters are determined in the illumination migration parameter library according to the preset illumination parameters, an illumination template corresponding to the target illumination parameters is determined in the illumination migration parameter library according to the determined at least two groups of target illumination parameters, and the illumination template corresponding to the target illumination parameters is used as the target illumination template.
When at least two groups of target illumination parameters are determined in the illumination migration parameter library according to the preset illumination parameters, the target illumination parameters can be determined according to any one of the preset illumination parameters, and can also be determined according to a plurality of parameters in the preset illumination parameters.
Example one: determining at least two groups of target illumination parameters in the illumination migration parameter library according to any single parameter among the preset illumination parameters:
Taking the illumination brightness in the preset illumination parameters as the standard parameter, at least two groups of target illumination parameters are determined in the illumination migration parameter library according to that illumination brightness.
The first method: when at least two groups of target illumination parameters are determined in the illumination migration parameter library according to the illumination brightness in the preset illumination parameters, the difference between the illumination brightness in the preset illumination parameters and each illumination brightness stored in the illumination migration parameter library is computed, and the absolute value of each difference is determined. The two smallest absolute values are then selected, the illumination brightness values corresponding to them are taken as the target illumination brightnesses, and the illumination templates corresponding to the target illumination brightnesses are determined in the illumination migration parameter library.
Assuming that the illumination brightness in the preset illumination parameters is φi and the two target illumination brightnesses determined from the absolute values are φj and φk, a first illumination template corresponding to φj and a second illumination template corresponding to φk are determined in the illumination migration parameter library, and the first and second illumination templates are used as the target illumination templates.
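An illustrative sketch of this first method follows, with the library assumed to be a list of (illumination parameters, template) pairs, a layout the patent does not prescribe:

```python
# Method one: rank stored parameter groups by |stored brightness - preset
# brightness| and keep the two with the smallest absolute differences.
def two_nearest_by_brightness(library, preset_brightness):
    ranked = sorted(
        library,
        key=lambda entry: abs(entry[0]["brightness"] - preset_brightness),
    )
    return ranked[:2]  # the two (parameters, template) pairs acting as targets
```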
The second method: when at least two groups of target illumination parameters are determined in the illumination migration parameter library according to the illumination brightness in the preset illumination parameters, the illumination brightness values stored in the illumination migration parameter library are arranged in ascending or descending order to generate an illumination brightness queue. According to the illumination brightness φi in the preset illumination parameters, a first target illumination brightness φi+1, which is larger than and adjacent to φi in the queue, and a second target illumination brightness φi−1, which is smaller than and adjacent to φi in the queue, are determined. A first illumination template corresponding to the first target illumination brightness and a second illumination template corresponding to the second target illumination brightness are then determined in the illumination migration parameter library and used as the target illumination templates, where φi−1 < φi < φi+1.
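A minimal sketch of this second method, assuming the brightness queue is sorted in ascending order and that φi falls strictly inside its range (boundary handling is omitted):

```python
import bisect

# Method two: locate the two stored brightness values adjacent to phi in a
# pre-sorted brightness queue and return their illumination templates.
def neighbours_in_queue(brightness_queue, templates_by_brightness, phi):
    i = bisect.bisect_left(brightness_queue, phi)
    phi_prev = brightness_queue[i - 1]  # second target brightness (phi_i-1)
    phi_next = brightness_queue[i]      # first target brightness (phi_i+1)
    return templates_by_brightness[phi_prev], templates_by_brightness[phi_next]
```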
Example two: determining at least two groups of target illumination parameters in the illumination migration parameter library according to a plurality of parameters among the preset illumination parameters:
Taking the illumination brightness and illumination color in the preset illumination parameters as the standard parameters, at least two groups of target illumination parameters are determined in the illumination migration parameter library according to those two parameters.
Each illumination template in the illumination migration parameter library corresponds to one group of illumination parameters, and each group of illumination parameters includes an illumination brightness and an illumination color.
Therefore, for each group, the difference between the illumination brightness in the preset illumination parameters and the stored illumination brightness is computed and its absolute value taken as a first absolute value; likewise, the difference between the illumination color in the preset illumination parameters and the stored illumination color is computed and its absolute value taken as a second absolute value. A weighted value is then calculated from the first and second absolute values of each group, yielding one weighted value per group of illumination parameters in the illumination migration parameter library. The two smallest weighted values are selected, the illumination parameters corresponding to them are taken as the target illumination parameters, and the target illumination template is determined from those target illumination parameters.
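The patent does not fix how the two absolute values are combined into a weighted value; the sketch below assumes a simple linear combination with illustrative coefficients alpha and beta, and treats the illumination color as a scalar for simplicity:

```python
# Rank parameter groups by a weighted combination of the first absolute
# value (brightness difference) and the second absolute value (color
# difference); keep the two groups with the smallest weighted values.
def two_best_weighted(library, preset, alpha=0.5, beta=0.5):
    def weighted_value(params):
        first_abs = abs(params["brightness"] - preset["brightness"])
        second_abs = abs(params["color"] - preset["color"])
        return alpha * first_abs + beta * second_abs
    return sorted(library, key=lambda entry: weighted_value(entry[0]))[:2]
```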
In the present application, the correspondence between the illumination template and the illumination parameter stored in the illumination migration parameter library is obtained as follows:
first, a plurality of illumination angles, a plurality of illumination colors, and a plurality of illumination brightness values are permuted and combined to obtain a plurality of combination parameters;
professional illumination equipment is then used in a studio to shoot a light effect picture under each combination of parameters, and each shot picture is used as an illumination template;
finally, each illumination template is bound to the parameters used when its light effect picture was shot, yielding the correspondence between illumination templates and illumination parameters.
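This construction can be pictured as the following sketch, in which shoot_light_effect is a hypothetical stand-in for the studio photography step:

```python
from itertools import product

# Permute and combine the angle, color, and brightness values, shoot one
# light effect picture per combination, and bind each picture to the
# parameters it was shot under.
def build_parameter_library(angles, colors, brightness_levels, shoot_light_effect):
    library = {}
    for angle, color, brightness in product(angles, colors, brightness_levels):
        library[(angle, color, brightness)] = shoot_light_effect(angle, color, brightness)
    return library
```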
Step 402: performing style migration processing on the feature image according to the target illumination template to obtain a light effect image containing the target object.
In this application, when the preset illumination parameters are stored in the illumination migration parameter library, the determined target illumination template is the illumination template corresponding to the preset illumination parameters. In that case, style migration processing is performed on the feature image only according to that template, and the light effect image containing the target object is obtained directly.
However, when the preset illumination parameters are not stored in the illumination migration parameter library, the determined target illumination template comprises a plurality of templates, for example the first illumination template and the second illumination template described above. In this case, when style migration processing is performed on the feature image according to the target illumination templates to obtain a light effect image containing the target object: style migration processing is performed on the feature image according to the first illumination template to obtain a first image corresponding to the target; style migration processing is performed on the feature image according to the second illumination template to obtain a second image corresponding to the target; the first image and the second image are then fused to obtain the light effect image containing the target object.
When the first image and the second image are fused, the fusion is performed according to the magnitude relationship among the first target illumination parameter corresponding to the first illumination template, the second target illumination parameter corresponding to the second illumination template, and the preset illumination parameters.
The fusion may be performed according to any one of the illumination parameters; fusion according to the illumination brightness is taken as an example here. As in the embodiment above, let the first target illumination brightness of the first illumination template be φi+1, the second target illumination brightness of the second illumination template be φi−1, and the illumination brightness in the preset illumination parameters be φi, where φi−1 < φi < φi+1.
Thus, the light effect image containing the target object is given by: light effect image = (φi+1 − φi)/(φi+1 − φi−1) × first image + (φi − φi−1)/(φi+1 − φi−1) × second image.
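A minimal sketch of this interpolation follows, with image_1 and image_2 the style migration results for the templates at φi+1 and φi−1, assumed to be floating-point arrays of identical shape:

```python
# Brightness-weighted fusion of the two style migration results, following
# the formula above; phi_prev corresponds to phi_i-1 and phi_next to phi_i+1.
def fuse_by_brightness(image_1, image_2, phi, phi_prev, phi_next):
    span = phi_next - phi_prev
    w1 = (phi_next - phi) / span  # weight applied to the first image
    w2 = (phi - phi_prev) / span  # weight applied to the second image
    return w1 * image_1 + w2 * image_2
```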
It should be noted that the above fusion method is only an illustration, and similar operations may be performed in other cases, which is not described herein again.
In this method, when style migration processing is performed on the image to be processed according to the target illumination template, the style migration operation is performed on the feature image by a style migration model whose network parameters are the network parameters corresponding to the target illumination template. During this operation, the style of the target illumination template is migrated into the feature image, producing an image with the style of the target illumination template.
The network parameters corresponding to the target illumination template are network parameters obtained by training the target illumination template.
In the present application, the style migration model may be a style migration CNN (Convolutional Neural Network) model, or another DNN (Deep Neural Network) model capable of implementing the style migration function.
Fig. 7 shows a style migration CNN model provided in an embodiment of the present application. Conv1, Conv2, and Conv3 denote 2D convolution layers; for example, Conv1(8, 5, 1) denotes a convolution layer with 8 channels, a 5 × 5 convolution kernel, and a stride of 1. Resid1 and Resid2 are residual network modules; Resid1(16, 16, 3, 3) indicates that both sub-networks of the residual block have 16 channels and 3 × 3 convolution kernels. Conv_t1, Conv_t2, and Conv_t3 denote three deconvolution layers; Conv_t1(16, 3, 2) denotes a deconvolution layer with 16 channels, a 3 × 3 kernel, and a stride of 2.
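The figure's notation can be read as the PyTorch sketch below. Only Conv1, Resid1/Resid2, and Conv_t1 are fully parameterized in the text; the channel counts, strides, paddings, and activation functions of the remaining layers are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Resid1/Resid2: two 16-channel 3x3 conv sub-networks plus a skip connection."""
    def __init__(self, channels=16, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size, padding=padding),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size, padding=padding),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

class StyleMigrationCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 8, 5, stride=1, padding=2),    # Conv1(8, 5, 1)
            nn.ReLU(inplace=True),
            nn.Conv2d(8, 16, 3, stride=2, padding=1),   # Conv2 (assumed)
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, stride=2, padding=1),  # Conv3 (assumed)
            nn.ReLU(inplace=True),
        )
        self.residuals = nn.Sequential(ResidualBlock(16), ResidualBlock(16))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 16, 3, stride=2, padding=1, output_padding=1),  # Conv_t1(16, 3, 2)
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1),   # Conv_t2 (assumed)
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(8, 3, 3, stride=1, padding=1),                      # Conv_t3 (assumed)
        )

    def forward(self, x):
        return torch.sigmoid(self.decoder(self.residuals(self.encoder(x))))
```

Loading the network parameters trained for the selected target illumination template into such a model, as described above, would then yield the corresponding style migration.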
Step 403: performing image fusion on the light effect image and the image to be processed, and outputting and displaying the fused image.
In this application, the light effect image and the image to be processed are fused in such a way that the target object in the image to be processed is replaced by the target object in the light effect image while the background outside the target object remains unchanged, so that the target object in the displayed image has the light effect of a shot taken with professional illumination equipment in a studio.
In this embodiment, when processing an image shot by the 2D camera of a mobile terminal, either an already-shot image or the image currently being shot may be used as the image to be processed. Illumination parameters such as the illumination angle, illumination color, and illumination brightness can be adjusted through the human-computer interaction interface, giving the user a 360-degree continuously adjustable illumination angle together with continuously adjustable illumination color and brightness. A target illumination template is selected in the illumination migration parameter library according to the parameters the user sets, and style migration processing is then applied to the feature image of the image to be processed, migrating the style of the target illumination template into it. Because the target illumination template is an image shot with professional illumination equipment in a studio, the resulting feature image is a light effect image as if obtained with professional illumination equipment, in a professional studio environment, at the illumination angle the user chose. This overcomes the limitation that lighting effects cannot be adjusted for images captured by the 2D camera of a mobile terminal, and improves the flexibility of image processing and the user experience.
In a possible embodiment, in order to improve the light effect of the output image, the characteristic image is subjected to image perspective transformation before the stylization processing.
In this application, key point detection is performed on the target object in the feature image to acquire at least 5 key points, which can be denoted P1, P2, P3, P4, and P5. Fig. 8 shows a key point extraction image provided in an embodiment of the present application; taking a human face as the target object, the 5 extracted key points are the eyeball center coordinates of the two eyes (left P1, right P2), the nose tip position coordinate P3, and the mouth corner position coordinates (left P4, right P5). The positions of these 5 key points can be detected accurately with the MTCNN algorithm.
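As an illustration, the facenet-pytorch package provides an MTCNN implementation that returns exactly these five landmarks; the input file name below is an assumption.

```python
from PIL import Image
from facenet_pytorch import MTCNN

mtcnn = MTCNN(keep_all=False)
image = Image.open("face.jpg")  # assumed input image
boxes, probs, landmarks = mtcnn.detect(image, landmarks=True)
# landmarks[0] holds the five points in the order: left eye, right eye,
# nose tip, left mouth corner, right mouth corner (P1..P5 above).
p1, p2, p3, p4, p5 = landmarks[0]
```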
A perspective transformation matrix is determined according to the coordinates of the key points of the target object and the coordinates of the key points of the reference object in a standard reference image, where the reference object in the standard reference image is consistent with the form of the target object, and the standard reference image is determined in the illumination migration parameter library according to that form. The features corresponding to the key points of the reference object are the same as those of the target object; as above, 5 key points of the reference object are extracted, namely the eyeball center position coordinates of the two eyes (left P1', right P2'), the nose tip position coordinate P3', and the mouth corner position coordinates (left P4', right P5').
According to the theory of image perspective transformation, the perspective transformation matrix is a 3 × 3 matrix, also called a homography matrix, which can be solved from four point correspondences; any 4 of the 5 key points may therefore be selected to solve it.
Perspective transformation processing is then performed on the feature image according to the calculated perspective transformation matrix, i.e., the feature image is rotated, translated, scaled, sheared, and so on, so that the key points of the target object in the feature image align one by one with the key points of the reference object in the standard reference image.
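A minimal OpenCV sketch of this alignment step, assuming src_pts holds 4 of the 5 target-object key points and dst_pts the 4 matching reference-object key points (cv2.getPerspectiveTransform solves the homography from exactly four correspondences):

```python
import cv2
import numpy as np

def align_to_reference(feature_image, src_pts, dst_pts):
    # Solve the 3x3 homography from the four point correspondences.
    M = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    h, w = feature_image.shape[:2]
    # Rotate/translate/scale/shear the feature image onto the reference.
    aligned = cv2.warpPerspective(feature_image, M, (w, h))
    return aligned, np.linalg.inv(M)  # keep the inverse for the later step
```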
In the present application, the form of the target object is determined by the posture of the target object and the shooting angle. For example, the face image shown in fig. 8 was captured frontally, so when a standard reference image is selected in the illumination migration parameter library according to the form of the target object, an image of a frontally photographed face is selected.
After the perspective transformation processing, style migration processing is performed on the transformed feature image. A target illumination template is determined in the illumination migration parameter library according to the form of the target object and the preset illumination parameters; the library stores templates for multiple forms, each being an image shot with professional illumination equipment in a studio under different illumination parameters. The style of the target illumination template is migrated into the perspective-transformed feature image to obtain the light effect image.
The inverse matrix of the perspective transformation matrix is then applied to the stylized light effect image to perform an inverse perspective transformation, which keeps the style of the image unchanged while restoring it to the size of the feature image; the restored image is fused with the image to be processed and output for display.
In this method, a standard reference image consistent with the form of the target object is determined in the illumination migration parameter library, and perspective transformation is applied to the feature image according to it so that the key points of the transformed image align with those of the standard reference image. Style migration is then performed on the transformed image according to the target illumination template corresponding to the preset illumination parameters, making the style migration process more accurate and giving the resulting image a better light effect. After the style migration, the inverse perspective transformation through the inverse matrix keeps the light effect unchanged and restores the picture to its original size so that it can be fused with the image to be processed.
As shown in fig. 9, a flowchart of a second image processing method provided in the embodiment of the present application includes the following steps:
step 900, the feature extraction model extracts features of the image to be processed to obtain a feature image containing a target object;
step 901, determining a target illumination template in an illumination migration parameter library according to a preset illumination parameter corresponding to an image to be processed;
step 902, according to the form of the target object in the characteristic image, determining a standard reference image consistent with the form of the target object in an illumination migration parameter library, and performing key point detection on the reference object in the standard reference image to obtain key point coordinates in the reference object;
step 903, detecting key points of the target object in the characteristic image to obtain the coordinates of the key points in the target object;
step 904, determining a perspective transformation matrix according to the key point coordinates in the reference object and the key point coordinates in the target object;
step 905, performing image perspective transformation on the characteristic image according to the perspective transformation matrix;
step 906, performing style migration processing on the image after the image perspective transformation according to the target illumination template to obtain a light effect image containing a target object;
step 907, performing image perspective inverse transformation on the light effect image by using an inverse matrix of the perspective transformation matrix;
step 908, fusing the image obtained after the inverse perspective transformation with the image to be processed, and outputting and displaying the fused image.
It should be noted that the key point detection in this application may be performed on the target object in the image to be processed, so it may take place before step 900 or before step 901. Some of the above steps are also independent of one another in order; for example, the content described in step 903 may be executed before that of step 902. The above flow is therefore only an example.
In a possible implementation, when feature extraction is performed on the image to be processed through the feature extraction model, a small amount of background features remains around the target object in the resulting feature image, so the extracted target object is not accurate enough, and neither is its fusion with the image to be processed. This application therefore uses a mask image, which captures the target object region more accurately than feature extraction through the feature extraction model alone.
Accordingly, when the light effect image and the image to be processed are fused, they are fused according to the mask image, which is obtained by applying a first type of label to the region where the target object is located in the image to be processed and a second type of label to the region outside it; for example, the target object region has the value 1 and the non-target-object region has the value 0.
When the light effect image and the image to be processed are fused according to the mask image, the fused image is obtained in the following way:
the fused image is a photo effect image mask image + (1-mask image) to-be-processed image, wherein the photo effect image mask image is used for determining a target object region, and the to-be-processed image is used for determining a background region, and the target object region and the background region are added to form the fused image.
The target object area determined by the mask image is more accurate, and fusion can be performed more accurately.
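A sketch of this mask-guided fusion, assuming floating-point images and a mask that is 1 inside the target object region and 0 outside it:

```python
import numpy as np

def fuse_with_mask(light_effect_image, image_to_process, mask):
    if mask.ndim == 2:
        mask = mask[..., np.newaxis]  # broadcast the mask over color channels
    # Target object region from the light effect image, background from the
    # image to be processed, added together to form the fused image.
    return light_effect_image * mask + (1.0 - mask) * image_to_process
```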
As shown in fig. 10, a flowchart of a third method for image processing according to an embodiment of the present application includes the following steps:
step 1000, performing feature extraction on the image to be processed through a feature extraction model to obtain a feature image containing the target object, and at the same time generating a mask for the target object region to obtain a mask image;
step 1001, determining a target illumination template according to the preset illumination parameters corresponding to the image to be processed;
step 1002, performing style migration processing on the characteristic image according to the target illumination template to obtain a light effect image containing a target object;
step 1003, fusing the light effect image and the image to be processed according to the mask image, and outputting and displaying the fused image.
The mask image may be determined at any point before step 1003; describing its determination in step 1000 above is merely one example.
In this application, in order to determine the light effect image more accurately, the image processing methods of fig. 4, 9, and 10 may also be combined; details are not repeated here.
Based on the same inventive concept, an embodiment of the present application further provides a mobile terminal for image processing. Since this mobile terminal corresponds to the image processing method of the present application and solves the problem on a similar principle, its implementation may refer to the implementation of the method, and repeated details are not repeated.
Fig. 11 is a schematic structural diagram of a mobile terminal for image processing according to an embodiment of the present application. The mobile terminal 110 includes: a processor 111, a memory 112, and a display screen 113; wherein:
a memory 112 for storing an image to be processed;
the processor 111 is configured to perform feature extraction on the image to be processed through the feature extraction model to obtain a feature image including a target object; determining a target illumination template according to a preset illumination parameter corresponding to an image to be processed; performing style migration processing on the characteristic image according to the target illumination template to obtain a light effect image containing a target object; and the light effect image and the image to be processed are output and displayed in the display screen 113 after image fusion.
In one possible implementation, the processor 111 is specifically configured to:
if the preset illumination parameters are stored in the corresponding relation between the illumination parameters stored in the illumination migration parameter library and the illumination templates, taking the illumination templates corresponding to the preset illumination parameters as target illumination templates; or
If it is determined that the preset illumination parameters are not stored in the correspondence between the illumination parameters and the illumination templates in the illumination migration parameter library, determining at least two groups of target illumination parameters in the illumination migration parameter library according to the preset illumination parameters, determining the illumination templates corresponding to the target illumination parameters in the correspondence, and taking the illumination templates corresponding to the target illumination parameters as the target illumination templates.
In one possible implementation, the processor 111 is specifically configured to:
when the preset illumination parameters are stored in the illumination migration parameter library, performing style migration processing on the characteristic image according to a target illumination template which is stored in the illumination migration parameter library and corresponds to the preset illumination parameters to obtain a light effect image containing a target object;
when the preset illumination parameters are not stored in the illumination migration parameter library, performing style migration processing on the feature image according to the illumination templates corresponding to the determined at least two groups of target illumination parameters: the feature image is style-migrated according to the illumination template of each group of target illumination parameters to obtain a style migration image, and all the style migration images are fused to obtain the light effect image containing the target object.
In one possible implementation, the processor 111 is specifically configured to:
determining absolute differences between preset illumination parameters and all the illumination parameters stored in an illumination migration parameter library;
and selecting at least two target illumination parameters corresponding to the minimum absolute difference values in the illumination migration parameter library according to the absolute difference values.
In one possible implementation, the processor 111 is further configured to:
according to the form of the target object in the image to be processed, determining a standard reference image consistent with the form in an illumination migration parameter library;
before performing style migration processing on the characteristic image according to the target illumination template, determining a perspective transformation matrix according to coordinates of target object key points in the image to be processed and coordinates of reference object key points in a standard reference image, wherein the characteristics corresponding to the target object key points are the same as the characteristics corresponding to the reference object key points;
carrying out perspective transformation on the characteristic image by adopting a perspective transformation matrix;
and after the style migration processing is carried out on the characteristic image according to the target illumination template, the inverse matrix of the perspective transformation matrix is adopted to carry out image perspective inverse transformation processing on the light effect image.
In one possible implementation, the processor 111 is specifically configured to:
and fusing the light effect image and the image to be processed according to the mask image, wherein the mask image is obtained by performing first-type labeling on the region where the target object is located in the image to be processed and performing second-type labeling on the region except the target object region.
As shown in fig. 12, an embodiment of the present application provides a schematic structural diagram of another image processing apparatus, where the image processing apparatus 120 includes: a feature extraction unit 121, a determination unit 122, a style migration unit 123, and a fusion unit 124; wherein:
the feature extraction unit 121 is configured to perform feature extraction on the image to be processed through a feature extraction model to obtain a feature image including a target object;
the determining unit 122 is configured to determine a target illumination template according to a preset illumination parameter corresponding to the image to be processed;
the style migration unit 123 is configured to perform style migration processing on the feature image according to the target illumination template to obtain a light effect image including the target object;
and the fusion unit 124 is used for performing image fusion on the light effect image and the image to be processed and then outputting and displaying the light effect image and the image to be processed.
In a possible implementation manner, the determining unit 122 is specifically configured to:
if the preset illumination parameters are stored in the corresponding relation between the illumination parameters stored in the illumination migration parameter library and the illumination templates, taking the illumination templates corresponding to the preset illumination parameters as target illumination templates; or
If it is determined that the preset illumination parameters are not stored in the correspondence between the illumination parameters and the illumination templates in the illumination migration parameter library, determining at least two target illumination parameters in the illumination migration parameter library according to the preset illumination parameters, determining the illumination templates corresponding to the target illumination parameters in the correspondence, and taking the illumination templates corresponding to the target illumination parameters as the target illumination templates.
In a possible implementation manner, the style migration unit 123 is specifically configured to:
when the preset illumination parameters are stored in the illumination migration parameter library, performing style migration processing on the characteristic image according to a target illumination template which is stored in the illumination migration parameter library and corresponds to the preset illumination parameters to obtain a light effect image containing a target object;
when the preset illumination parameters are not stored in the illumination migration parameter library, performing style migration processing on the feature image according to the illumination templates corresponding to the determined at least two groups of target illumination parameters: the feature image is style-migrated according to the illumination template of each group of target illumination parameters to obtain a style migration image, and all the style migration images are fused to obtain the light effect image containing the target object.
In a possible implementation manner, the determining unit 122 is specifically configured to:
determining absolute difference values between the preset illumination parameters and the illumination parameters stored in the illumination migration parameter library;
and selecting at least two target illumination parameters corresponding to the minimum absolute difference values in the illumination migration parameter library according to the absolute difference values.
In one possible implementation, the style migration unit 123 is further configured to:
before style migration processing is carried out on the characteristic image according to the target illumination template: according to the form of the target object in the image to be processed, determining a standard reference image consistent with the form in an illumination migration parameter library;
determining a perspective transformation matrix according to the coordinates of the key points of the target object in the image to be processed and the coordinates of the key points of the reference object in the standard reference image, wherein the corresponding features of the key points of the target object are the same as the corresponding features of the key points of the reference object;
carrying out perspective transformation on the characteristic image by adopting a perspective transformation matrix;
after the style migration processing is carried out on the characteristic image according to the target illumination template: and performing image perspective inverse transformation processing on the light effect image by adopting an inverse matrix of the perspective transformation matrix.
In a possible implementation manner, the fusion unit 124 is specifically configured to:
and fusing the light effect image and the image to be processed according to the mask image, wherein the mask image is obtained by performing first-type labeling on the region where the target object is located in the image to be processed and performing second-type labeling on the region except the target object region.
The present application further provides an image processing system, as shown in fig. 13, which is a structural diagram of the image processing system provided in the embodiment of the present application, the image processing system 130 includes: the system comprises an image input module 131, a key point detection module 132, a feature extraction module 133, a perspective transformation module 134, a style migration module 135, a light effect interaction module 136, an illumination migration parameter library module 137, a perspective inverse transformation module 138 and an image storage module 139; wherein:
the image input module 131 converts the video stream of the mobile terminal camera into an RGB three-channel image and inputs the RGB three-channel image to the image processing system 130;
the key point detection module 132 performs key point detection on the target object in the image input by the image input module 131;
the feature extraction module 133 performs feature extraction on the image input by the image input module 131, that is, performs pixel-by-pixel classification on the input image, and outputs an image in which only the target object region is reserved and the background region is filled with zeros;
the perspective transformation module 134 performs geometric perspective transformation on the image output by the feature extraction module 133, i.e., operations such as rotation, translation, scaling, and shearing, so that the key points of the target object are aligned with the key points of the reference object in the standard reference image in the illumination migration parameter library module 137;
the style migration module 135 performs style migration operation on the image output by the perspective transformation module 134 according to an illumination template determined by the illumination parameter set in the light effect interaction module 136 in the illumination migration parameter library module 137, so as to obtain a corresponding light effect image;
the perspective inverse transformation module 138 performs perspective inverse transformation processing on the obtained light effect image;
the image storage module 139 compresses and encodes the stylized image or video after the inverse perspective transformation and stores it in the memory of the mobile terminal.
In this application, the light effect interaction module 136 includes a display module and an illumination parameter adjustment module; the display module displays the image processing result, and the illumination parameter adjustment module provides a user interface through which the user adjusts the illumination angle, illumination brightness, and illumination color.
The embodiment of the present application further provides a computer-readable non-volatile storage medium, which includes program code for causing a computing terminal to execute the steps of the image processing method described above when the program code runs on the computing terminal.
The present application is described above with reference to block diagrams and/or flowchart illustrations of methods, apparatus (systems) and/or computer program products according to embodiments of the application. It will be understood that one block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, and/or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.
Accordingly, the subject application may also be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.). Furthermore, the present application may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system. In the context of this application, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or terminal.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. An image processing mobile terminal, characterized in that the mobile terminal comprises: the system comprises a processor, a memory and a display screen; wherein:
the memory is used for storing images to be processed;
the processor is used for extracting the features of the image to be processed through a feature extraction model to obtain a feature image which comprises a target object and has a blank background area; determining a target illumination template in an illumination migration parameter library which stores illumination parameters and corresponding illumination templates according to preset illumination parameters corresponding to the image to be processed; performing style migration processing on the characteristic image according to the target illumination template to obtain a light effect image containing the target object; the light effect image and the image to be processed are subjected to image fusion and then output to a display screen;
the display screen is used for displaying the light effect image output by the processor;
the preset illumination parameters are determined based on parameter values input for the illumination parameters in an illumination parameter adjusting area of a human-computer interaction interface;
performing style migration processing on the feature image according to the target illumination template to obtain a light effect image containing the target object, including:
and adjusting the network parameters of a style migration model based on the network parameters of the target illumination template, performing style migration processing on the feature image based on the style migration model after the network parameters are adjusted, and migrating the style of the target illumination template into the feature image to obtain a light effect image containing the target object.
2. The mobile terminal of claim 1, wherein the processor is specifically configured to:
if the preset illumination parameters are stored in the corresponding relation between the illumination parameters stored in the illumination migration parameter library and the illumination templates, taking the illumination templates corresponding to the preset illumination parameters as the target illumination templates; or
If the preset illumination parameters are not stored in the correspondence relationship between the illumination parameters stored in the illumination migration parameter library and the illumination templates, determining at least two groups of target illumination parameters in the illumination migration parameter library according to the preset illumination parameters, determining the illumination templates corresponding to the target illumination parameters in the correspondence relationship, and taking the illumination templates corresponding to the target illumination parameters as the target illumination templates.
3. The mobile terminal of claim 2, wherein the processor is specifically configured to:
determining absolute difference values between the preset illumination parameters and the illumination parameters stored in the illumination migration parameter library;
and selecting at least two target illumination parameters corresponding to the minimum absolute difference values in the illumination migration parameter library according to the absolute difference values.
4. The mobile terminal of claim 1, wherein the processor is further configured to:
according to the form of the target object in the image to be processed, determining a standard reference image consistent with the form in an illumination migration parameter library;
before performing style migration processing on the characteristic image according to the target illumination template, determining a perspective transformation matrix according to coordinates of target object key points in the image to be processed and coordinates of reference object key points in the standard reference image, wherein the characteristics corresponding to the target object key points are the same as the characteristics corresponding to the reference object key points;
carrying out perspective transformation on the characteristic image by adopting the perspective transformation matrix;
and after the style transfer processing is carried out on the characteristic image according to the target illumination template, carrying out image perspective inverse transformation processing on the light effect image by adopting an inverse matrix of the perspective transformation matrix.
5. The mobile terminal of claim 1, wherein the processor is specifically configured to:
and fusing the light effect image and the image to be processed according to a mask image, wherein the mask image is obtained by performing first-type labeling on the region where the target object is located in the image to be processed and performing second-type labeling on the region except the target object region.
6. A method of image processing, the method comprising:
performing feature extraction on an image to be processed through a feature extraction model to obtain a feature image which comprises a target object and has a blank background area;
determining a target illumination template in an illumination migration parameter library storing illumination parameters and corresponding illumination templates according to preset illumination parameters corresponding to the image to be processed, wherein the preset illumination parameters are determined based on parameter values input for the illumination parameters in an illumination parameter adjusting area of a human-computer interaction interface;
performing style migration processing on the characteristic image according to the target illumination template to obtain a light effect image containing the target object;
carrying out image fusion on the light effect image and the image to be processed, and then outputting and displaying the light effect image and the image to be processed;
performing style migration processing on the feature image according to the target illumination template to obtain a light effect image containing the target object, including:
and adjusting the network parameters of a style migration model based on the network parameters of the target illumination template, performing style migration processing on the feature image based on the style migration model after the network parameters are adjusted, and migrating the style of the target illumination template into the feature image to obtain a light effect image containing the target object.
7. The method of claim 6, wherein the determining a target illumination template according to the preset illumination parameter corresponding to the image to be processed comprises:
if the preset illumination parameters are stored in the corresponding relation between the illumination parameters stored in the illumination migration parameter library and the illumination templates, taking the illumination templates corresponding to the preset illumination parameters as the target illumination templates; or
If the preset illumination parameters are not stored in the corresponding relation between the illumination parameters stored in the illumination migration parameter library and the illumination templates, determining at least two target illumination parameters in the illumination migration parameter library according to the preset illumination parameters, determining the illumination templates corresponding to the target illumination parameters in the corresponding relation, and taking the illumination templates corresponding to the target illumination parameters as the target illumination templates.
8. The method of claim 7, wherein said determining at least two target lighting parameters in said lighting migration parameter library from said preset lighting parameters comprises:
determining absolute difference values between the preset illumination parameters and the illumination parameters stored in the illumination migration parameter library;
and selecting at least two target illumination parameters corresponding to the minimum absolute difference values in the illumination migration parameter library according to the absolute difference values.
9. The method of claim 6, wherein before performing the style migration process on the feature image according to the target illumination template, further comprising:
according to the form of the target object in the image to be processed, determining a standard reference image consistent with the form in an illumination migration parameter library;
determining a perspective transformation matrix according to the coordinates of the key points of the target object in the image to be processed and the coordinates of the key points of the reference object in the standard reference image, wherein the corresponding features of the key points of the target object are the same as the corresponding features of the key points of the reference object;
carrying out perspective transformation on the characteristic image by adopting the perspective transformation matrix;
after the style migration processing is performed on the feature image according to the target illumination template, the method further includes:
and carrying out image perspective inverse transformation processing on the light effect image by adopting an inverse matrix of the perspective transformation matrix.
10. The method of claim 6, wherein the image fusing the light effect image with the image to be processed comprises:
and fusing the light effect image and the image to be processed according to a mask image, wherein the mask image is obtained by performing first-type labeling on the region where the target object is located in the image to be processed and performing second-type labeling on the region except the target object region.
CN201911294621.4A 2019-12-16 2019-12-16 Mobile terminal and image processing method Active CN112995539B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911294621.4A CN112995539B (en) 2019-12-16 2019-12-16 Mobile terminal and image processing method

Publications (2)

Publication Number Publication Date
CN112995539A CN112995539A (en) 2021-06-18
CN112995539B true CN112995539B (en) 2022-07-01

Family

ID=76343297





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 266071 Shandong city of Qingdao province Jiangxi City Road No. 11
Patentee after: Qingdao Hisense Mobile Communication Technology Co.,Ltd.
Address before: 266071 Shandong city of Qingdao province Jiangxi City Road No. 11
Patentee before: HISENSE MOBILE COMMUNICATIONS TECHNOLOGY Co.,Ltd.