CN113191976B - Image shooting method, device, terminal and storage medium - Google Patents

Image shooting method, device, terminal and storage medium

Info

Publication number
CN113191976B
CN113191976B (application CN202110478495.9A)
Authority
CN
China
Prior art keywords
image
detection
stripe
lens module
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110478495.9A
Other languages
Chinese (zh)
Other versions
CN113191976A (en)
Inventor
邵明天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110478495.9A
Publication of CN113191976A
Priority to PCT/CN2022/080664 (WO2022227893A1)
Application granted
Publication of CN113191976B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image shooting method, an image shooting device, a terminal and a storage medium, and relates to the technical field of image processing. The method is applied to a terminal comprising a lens module arranged below a screen glass cover plate, and comprises the following steps: acquiring the surface type information of the screen glass cover plate; performing image correction on an original image captured by the lens module based on the surface type information of the screen glass cover plate; and outputting the corrected image obtained after the original image is corrected. With the method provided by the embodiments of the application, on the one hand, the imaging quality of image shooting can be improved through image correction; on the other hand, the hardware design of the terminal no longer needs to account for the influence of the curvature of the screen glass cover plate on the placement of the camera assembly, which would otherwise constrain the arrangement of the terminal's existing components, so the implementation complexity of the terminal is reduced.

Description

Image shooting method, device, terminal and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image capturing method, an image capturing device, a terminal, and a storage medium.
Background
With the development of terminal technology, many mobile phones are configured with a curved screen, which is one type of screen glass cover plate.
A curved screen has curved surfaces on two side edges or on all four side edges, and such an uneven screen glass cover plate easily degrades image shooting by a front camera arranged under it.
In the related art, the front camera is placed below a relatively flat area of the curved surface screen as much as possible, so that the influence of the curved surface screen on image shooting of the front camera is avoided.
Disclosure of Invention
The embodiment of the application provides an image shooting method, an image shooting device, a terminal and a storage medium, and the imaging quality of image shooting can be improved through image correction. The technical scheme is as follows.
According to an aspect of the present application, there is provided an image capturing method applied to a terminal, where the terminal includes a lens module disposed under a screen glass cover plate, and the method includes:
acquiring the surface type information of the screen glass cover plate;
based on the surface type information of the screen glass cover plate, performing image correction on an original image captured by the lens module;
and outputting the corrected image obtained after the original image is corrected.
According to an aspect of the present application, there is provided an image photographing apparatus applied to a terminal including a lens module disposed under a screen glass cover plate, the apparatus including: a surface type information acquisition module, an image correction module, and an image output module;
The surface type information acquisition module is used for acquiring surface type information of the screen glass cover plate;
the image correction module is used for correcting the original image shot by the lens module based on the surface type information of the screen glass cover plate;
the image output module is used for outputting the corrected image obtained after the original image is corrected.
According to another aspect of the present application, there is provided a computer device comprising: a processor and a memory in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the image capturing method as described in the above aspect.
According to another aspect of the present application, there is provided a computer-readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement the image capturing method as described in the above aspect.
According to another aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device performs the image capturing method provided in the above-described alternative implementation.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise the following beneficial effects:
by acquiring the surface type information of the screen glass cover plate and performing image correction on the original image captured by the lens module under the cover plate, the influence of the curved screen on the image captured by the lens module is avoided. On the one hand, the imaging quality of image shooting can be improved; on the other hand, the hardware design of the terminal no longer needs to account for the influence of the curvature of the screen glass cover plate on the placement of the camera assembly, which would otherwise constrain the arrangement of the terminal's existing components, so the implementation complexity of the terminal is reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an image capturing method provided in an exemplary embodiment of the present application;
fig. 2 is a flowchart of an image capturing method provided in an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of storing surface type information of a screen glass cover plate provided in an exemplary embodiment of the present application;
fig. 4 is a flowchart of an image capturing method provided in an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of a stripe light detection provided by an exemplary embodiment of the present application;
fig. 6 is a flowchart of an image capturing method provided in an exemplary embodiment of the present application;
fig. 7 is a flowchart of an image capturing method provided in an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of a lens module provided in an exemplary embodiment of the present application;
fig. 9 is a block diagram of an image photographing device provided in an exemplary embodiment of the present application;
fig. 10 is a schematic structural diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
In recent years, curved screens have become standard in the mobile phone industry, and beyond curved side edges there is a trend toward screens curved on all four edges. The uneven screen glass cover plate strongly affects the focusing and shooting of a front camera below it, causing problems such as far-focus imaging errors and blurred pictures.
In order to avoid the influence of the curved screen on the image capturing of the front camera, the related art addresses the problem from the perspective of hardware design, for example: the front camera is designed below a relatively flat area of the curved screen.
In the embodiment of the application, by acquiring the surface type information of the screen glass cover plate, the original image shot by the lens module under the screen glass cover plate is subjected to image correction, and the influence of the curved surface screen on the image shooting of the lens module is avoided. Next, an exemplary description is given of an image capturing method provided in the embodiment of the present application with reference to fig. 1 below.
In this embodiment, the lens module includes: a stripe emitting end, a front camera, and an image sensor. The front camera is used for capturing front-facing images; the stripe emitting end and the image sensor are used for stripe light detection: specifically, the stripe emitting end emits a detection stripe emission signal and the image sensor receives the detection stripe reflection signal. The image sensor may be a complementary metal oxide semiconductor (Complementary Metal Oxide Semiconductor, CMOS) receiver.
As shown in fig. 1, when the terminal performs image capturing using the lens module, the image sensor may acquire a detection stripe reflection signal and feed it back to a central processing unit (Central Processing Unit, CPU) in the terminal. Illustratively, when the detection stripe emission signal emitted from the stripe emitting end is fixed, the image sensor receives the detection stripe reflection signal reflected by the photographed object and sends it to the CPU, and the CPU may determine the stripe state change based on the detection stripe emission signal and the detection stripe reflection signal, where the stripe state change includes: variation in stripe width, variation in stripe pitch, and the like.
The CPU calculates the shooting distance from the photographed object to the front camera from the time difference between the moment the stripe emitting end emits the detection stripe emission signal and the moment the image sensor receives the detection stripe reflection signal, and feeds this shooting distance back to the front camera. The front camera focuses based on the shooting distance fed back by the CPU, captures a picture after focusing, and feeds the captured original image back to the CPU.
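To make the distance step concrete, here is a minimal Python sketch of the time-of-flight calculation described above; the function name, the use of the speed of light as the propagation speed, and the example numbers are illustrative assumptions, not details taken from the patent.

    C = 299_792_458.0  # assumed propagation speed of the detection signal, m/s

    def shooting_distance(t_emit: float, t_receive: float) -> float:
        """Distance from the photographed object to the camera, derived
        from the round trip of the detection stripe signal."""
        round_trip = t_receive - t_emit   # emit-to-receive time difference, s
        return C * round_trip / 2.0       # one-way distance, m

    # Example: a 4 ns round trip corresponds to about 0.6 m.
    print(shooting_distance(0.0, 4e-9))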
The CPU calculates the depth information of the captured original image from the stripe state change and performs image segmentation on the original image based on the depth information. Illustratively, the CPU may separate the person in the original image from the background based on the depth information and then divide the person into different regions, with a segmentation precision finer than 1 mm.
After the image segmentation is completed, the CPU retrieves the surface type information of the screen glass cover plate stored in an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM) of the front camera and performs deconvolution processing on each segmented region based on this surface type information, so that the influence of the curvature of the screen glass cover plate on the image is eliminated in each region. After the deconvolution processing is completed, the CPU synthesizes the regions to obtain a complete, image-corrected picture.
Next, an image capturing method provided in the embodiment of the present application is further described.
Fig. 2 shows a flowchart of an image capturing method according to an exemplary embodiment of the present application, where the method may be applied to a terminal including a lens module disposed under a screen glass cover plate, and the method includes:
step 201, obtaining the surface type information of the screen glass cover plate.
In one possible implementation manner, the terminal acquires the pre-stored surface type information of the screen glass cover plate in front of the lens module, or acquires the surface type information of the screen glass cover plate in a real-time detection manner.
Illustratively, the terminal detects the surface type information of the screen glass cover plate in real time when the lens module in the terminal is in a shooting state. The lens module being in a shooting state refers to a state in which the lens module is framing and shooting has not yet been completed. For example, the user enters the camera function page and taps the camera-switch icon, switching the terminal to the front camera; while framing with the front camera, the terminal obtains the surface type information of the current screen glass cover plate.
Illustratively, the terminal reads the pre-stored surface type information of the screen glass cover plate from a memory (such as an EEPROM). The surface type information in the memory may be pre-stored before the terminal is assembled, or stored after the screen glass cover plate is detected once the terminal has been assembled.
In this embodiment of the application, the lens module is disposed under the screen glass cover plate; it receives, through the cover plate, light from the area in front of the screen glass cover plate and optically images the picture of that area. That is, the lens module in the embodiment of the present application is not a pop-up (lifting) lens module, which could receive light without passing through the screen glass cover plate.
For example, in the case where the screen glass cover plate covers only the front surface of the terminal rather than both of its opposite faces, the lens module may be understood as a lens module for front-facing photographing.
The surface type information of the screen glass cover plate is information for indicating the surface type condition of the screen glass cover plate. In the embodiment of the present application, the surface type information of the screen glass cover plate can also be understood as curvature information of the screen glass cover plate, influence information of the screen glass cover plate on imaging, and the like.
Step 202, based on the surface type information of the screen glass cover plate, performing image correction on the original image shot by the lens module.
In one possible implementation manner, the terminal uses the lens module to shoot and obtain an original image, calls the obtained surface type information of the screen glass cover plate, and performs image correction on the original image.
Optionally, if the terminal device obtains the surface type information of the screen glass cover plate in a real-time detection manner, after obtaining the surface type information of the screen glass cover plate, the terminal stores the surface type information of the screen glass cover plate in a memory corresponding to the lens module, such as an EEPROM, and after capturing an original image, invokes the surface type information of the screen glass cover plate from the memory corresponding to the lens module.
For example, referring to fig. 3, the lens module includes an image sensor and a front camera; the image sensor feeds back the received detection reflection signal to the CPU, and the CPU calculates the surface type information of the screen glass cover plate based on that signal and transmits it to the EEPROM of the front camera for storage.
Optionally, after the terminal obtains the surface type information of the screen glass cover plate through one detection, it may use that surface type information to correct the original images captured over a period of time; that is, the surface type information of the screen glass cover plate is updated at a fixed frequency. Illustratively, after acquiring the surface type information once, the terminal stores it and uses it for image correction of the original images captured within the following month.
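A minimal Python sketch of this fixed-frequency update policy, assuming a one-month period, a simple in-memory cache, and a re-detection callback; the period, the class, and the interface are all illustrative assumptions:

    import time

    UPDATE_PERIOD_S = 30 * 24 * 3600  # assumed update period: about one month

    class SurfaceInfoCache:
        def __init__(self):
            self._info = None    # cached surface type information
            self._stamp = 0.0    # time of the last detection

        def get(self, detect_fn):
            """Return cached surface type information, re-detecting it via
            detect_fn() once the update period has elapsed."""
            now = time.time()
            if self._info is None or now - self._stamp > UPDATE_PERIOD_S:
                self._info = detect_fn()  # e.g. stripe light detection
                self._stamp = now
            return self._info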
Optionally, the surface type information of the screen glass cover plate is updated automatically every time the terminal performs image capturing. Illustratively, when the terminal uses the lens module to capture an image, it acquires the surface type information of the screen glass cover plate during framing, captures the image once framing is completed to obtain an original image, and corrects that original image using the surface type information acquired this time. For example, if a protective film was applied to the screen glass cover plate before the current shot, changing the optical structure (material/thickness) above the lens, the terminal detects the curvature of the filmed cover plate to obtain the latest surface type information and applies it to image correction in the actual shot.
Step 203, outputting the corrected image obtained after the original image is corrected.
In one possible implementation, the terminal corrects the original image into a corrected image and outputs the corrected image.
In summary, in the method provided by this embodiment, the surface type information of the screen glass cover plate is acquired and used to correct the original image captured by the lens module under the cover plate, avoiding the influence of the curved screen on the image captured by the lens module. On the one hand, the imaging quality of image shooting can be improved; on the other hand, the hardware design of the terminal no longer needs to account for the influence of the curvature of the screen glass cover plate on the placement of the camera assembly, which would otherwise constrain the arrangement of the terminal's existing components, so the implementation complexity of the terminal is reduced.
In an exemplary embodiment, the terminal performs image correction on the original image based on the depth information of the original image to obtain a better image correction effect.
Fig. 4 shows a flowchart of an image capturing method according to an exemplary embodiment of the present application, where the method may be applied to a terminal including a lens module disposed under a screen glass cover plate, and the method includes:
step 401, obtaining surface type information of a screen glass cover plate.
For the implementation of this step, refer to step 201 above; details are not repeated here.
Step 402, depth information of an original image is acquired.
The depth information of the original image refers to the three-dimensional information of the photographed object in the real world, as captured in the original image.
Optionally, the lens module in the terminal obtains the depth information of the original image by using a machine vision technique, where the machine vision technique may include: structured light detection techniques and the time-of-flight (Time of Flight, TOF) method, which is not limited by the embodiments of the present application.
The optical time-of-flight method is a detection method for acquiring depth information of a photographed object by measuring the time for which light irradiates the photographed object and returns.
The structured light detection technology refers to a detection mode of obtaining depth information by projecting a specific coding pattern onto a photographed object, converting depth change information into a change in the coding pattern, and detecting the change in the coding pattern. Optionally, the structured light detection technique includes: stripe light detection and speckle pattern detection. The code pattern projected in the stripe light detection is a stripe pattern, and the code pattern projected in the speckle pattern detection is a speckle pattern. In one possible implementation manner, the terminal obtains depth information of the original image by using the following stripe light detection manner:
s11, the lens module sends a first detection stripe emission signal.
Optionally, the lens module includes a stripe transmitting end, and the terminal invokes the stripe transmitting end to transmit the first detection stripe transmitting signal.
S12, the lens module receives a first detection stripe reflection signal, wherein the first detection stripe reflection signal is formed by reflecting the first detection stripe transmission signal by a shot object.
Optionally, the lens module includes an image sensor; the first detection stripe emission signal passes through the screen glass cover plate and is projected onto the surface of the photographed object, where it is reflected to form the first detection stripe reflection signal, which is received by the image sensor.
S13, acquiring depth information of the original image based on the change condition between the first detection stripe reflection signal and the first detection stripe emission signal.
In one possible implementation manner, the stripe pitch, width, and the like change between the first detection stripe emission signal and the first detection stripe reflection signal, and the terminal calculates the depth information of the original image based on these changes.
By way of example, referring to fig. 5, when the fringe pattern 501 (i.e., the first detection stripe emission signal) is projected onto the object 502, the originally vertical fringes are distorted: the fringe spacing, width, and the like change because the pattern is modulated by the height of the object 502, and the distorted fringes of the pattern 503 (i.e., the first detection stripe reflection signal) reveal the depth information of the object 502.
Optionally, the terminal obtains the depth information of the original image based on the change between the first detection stripe reflection signal and the first detection stripe emission signal as follows: the terminal performs a fitting calculation on this change, thereby obtaining the depth information of the original image. The fitting method used may be Gaussian polynomial fitting or another fitting scheme, which is not limited in the embodiments of the present application.
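As a concrete illustration of this fitting step, the following Python sketch converts the lateral shift of fringe centers into a smoothed depth profile. The calibration constant, the polynomial degree, and all names are assumptions for illustration; the patent does not specify the fitting details.

    import numpy as np

    DEPTH_PER_SHIFT_MM = 0.8  # assumed calibration: depth per pixel of fringe shift

    def depth_from_fringes(emitted_pos: np.ndarray,
                           reflected_pos: np.ndarray,
                           fit_degree: int = 4) -> np.ndarray:
        """Estimate a depth profile from the lateral shift of fringe centers.

        emitted_pos / reflected_pos: x-coordinates (pixels) of corresponding
        fringe centers in the emitted and reflected patterns."""
        shift = reflected_pos - emitted_pos        # fringe displacement, px
        raw_depth = shift * DEPTH_PER_SHIFT_MM     # depth at each fringe, mm
        # Polynomial fit standing in for the fitting calculation in the text.
        coeffs = np.polyfit(emitted_pos, raw_depth, fit_degree)
        return np.polyval(coeffs, emitted_pos)     # smoothed depth profile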
Step 403, segmenting the original image into at least one image region based on the depth information.
Optionally, the terminal divides pixels having similar depths into one image area based on depth information of the image.
In one possible implementation, the terminal segments a portion of the original image into at least one image region based on the depth information. In another possible implementation, the terminal segments the entire original image into at least one image region based on the depth information.
Illustratively, the original image includes a person and a background, the terminal divides the person and the background based on the depth information, and then further divides the person and the background into smaller image areas based on the depth information.
Illustratively, the original image includes a person and a background, and the terminal segments the person portion of the original image into at least one image region.
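The following Python sketch illustrates one simple way to divide pixels with similar depths into image areas by quantizing the depth map; the 1 mm bin width echoes the millimeter-level precision mentioned in the overview, but the binning approach and all names are assumptions, not the patent's prescribed algorithm.

    import numpy as np

    def segment_by_depth(depth_map: np.ndarray, bin_mm: float = 1.0) -> np.ndarray:
        """Label pixels whose depths fall into the same bin as one image area.

        depth_map: per-pixel depth in mm; bin_mm is the assumed bin width."""
        bins = np.floor(depth_map / bin_mm).astype(np.int64)
        # Re-index the depth bins to consecutive labels 0..N-1.
        _, labels = np.unique(bins, return_inverse=True)
        return labels.reshape(depth_map.shape)

    # Example: a toy depth map splits into a near (person) and a far
    # (background) area.
    toy = np.array([[400.2, 400.7], [1200.1, 1200.4]])
    print(segment_by_depth(toy))  # [[0 0] [1 1]]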
Step 404, performing image correction on the at least one image area based on the surface type information of the screen glass cover plate.
In one possible implementation, the terminal performs deconvolution processing on the at least one image area using the surface type information of the screen glass cover plate, so as to carry out the image correction.
The original image output by the lens module can be regarded as the result of convolving the real image with the surface type of the screen glass cover plate. Therefore, deconvolving the portion of the original image corresponding to each image area with the surface type information of the screen glass cover plate yields the real image corresponding to that image area.
Optionally, the deconvolution processing is based on a cross-channel prior. A cross-channel prior refers to sharing information across different channels during deconvolution, so that the frequency information preserved in one channel can aid the reconstruction of the other channels and chromatic aberration is eliminated. Adding chromatic aberration correction through the cross-channel prior weakens the blurring and color fringing caused by chromatic aberration during image correction, achieving high-quality imaging.
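As an illustration of the deconvolution step, the following Python sketch applies a frequency-domain Wiener deconvolution to one image region, with a blur kernel (PSF) standing in for the influence of the cover-plate surface type. The Wiener filter, the noise constant, and the per-region application are assumptions; applying the same filter weights to all three color channels would be only a rough stand-in for the cross-channel prior described above.

    import numpy as np

    def wiener_deconvolve(region: np.ndarray, psf: np.ndarray,
                          noise_ratio: float = 1e-2) -> np.ndarray:
        """Wiener deconvolution of one (single-channel) image region.

        psf: blur kernel standing in for the cover-plate influence;
        deriving it from the surface type information is outside this sketch."""
        kernel = np.zeros_like(region, dtype=float)
        ph, pw = psf.shape
        kernel[:ph, :pw] = psf / psf.sum()
        # Center the kernel at the origin so the FFT phase is consistent.
        kernel = np.roll(kernel, (-(ph // 2), -(pw // 2)), axis=(0, 1))
        H = np.fft.fft2(kernel)
        G = np.fft.fft2(region.astype(float))
        # Wiener filter: conj(H) / (|H|^2 + k) regularizes near-zero frequencies.
        F = np.conj(H) * G / (np.abs(H) ** 2 + noise_ratio)
        return np.real(np.fft.ifft2(F))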
Step 405, synthesizing the original image and the at least one corrected image area to obtain a corrected image.
In one possible implementation manner, after correcting the at least one image area, the terminal synthesizes the uncorrected portion of the original image with the at least one corrected image area to obtain the corrected image.
Illustratively, the terminal divides the person portion of the original image into an image area, corrects that image area, and synthesizes the corrected person portion with the other portions of the original image to obtain the corrected image.
It can be understood that if the whole original image was divided into at least one image area, the terminal synthesizes the at least one corrected image area to obtain the corrected image.
Step 406, outputting the corrected image.
In summary, in the method provided by this embodiment, the terminal obtains the depth information of the original image, divides the original image into image areas based on that depth information, and corrects each area separately, thereby obtaining a better image correction effect.
In an exemplary embodiment, the surface type information of the screen glass cover plate is detected by the terminal based on machine vision technology. The machine vision techniques may include: structured light detection techniques and the time-of-flight method, to which the present embodiments are not limited. Optionally, the structured light detection techniques include: stripe light detection and speckle pattern detection. The following gives an exemplary description of how the terminal acquires the surface type information of the screen glass cover plate through stripe light detection.
Fig. 6 shows a flowchart of an image capturing method according to an exemplary embodiment of the present application, where the method may be applied to a terminal including a lens module disposed under a screen glass cover plate, and the method includes:
Step 601, invoking the lens module to perform stripe light detection to acquire the surface type information of the screen glass cover plate.
Stripe light detection refers to a detection mode in which a detection stripe signal carrying determined stripe information is used to perform a reflection test. Determined stripe information means that the pitch, width, and the like of the stripes in the stripe pattern corresponding to the detection stripe signal are fixed.
In one possible implementation, the terminal performs stripe light detection by:
s21, the lens module sends a second detection stripe emission signal.
Optionally, the lens module includes a stripe transmitting end, and the terminal invokes the stripe transmitting end to transmit the second detection stripe transmitting signal.
In this embodiment of the present application, the second detection stripe emission signal and the first detection stripe emission signal described in the foregoing embodiment may be two different parts of the detection stripe emission signal emitted by the stripe emitting end at the same time point: the first part, namely the first detection stripe emission signal, is projected through the screen glass cover plate onto the surface of the photographed object; the second part, namely the second detection stripe emission signal, does not penetrate the screen glass cover plate.
S22, the lens module receives a second detection stripe reflection signal, wherein the second detection stripe reflection signal is formed by reflecting the second detection stripe transmission signal through the screen glass cover plate.
Optionally, the lens module includes an image sensor; the second detection stripe emission signal is reflected by the screen glass cover plate to form the second detection stripe reflection signal, which is received by the image sensor.
S23, based on the change condition between the second detection stripe reflection signal and the second detection stripe emission signal, the surface type information of the screen glass cover plate is obtained.
In one possible implementation manner, the stripe pitch, width, and the like change between the second detection stripe emission signal and the second detection stripe reflection signal, and the terminal calculates the surface type information of the screen glass cover plate based on these changes.
Optionally, the terminal obtains the surface type information of the screen glass cover plate based on the change between the second detection stripe reflection signal and the second detection stripe emission signal as follows: the terminal performs a fitting calculation on this change, thereby obtaining the surface type information of the screen glass cover plate. The fitting method used may be Gaussian polynomial fitting or another fitting scheme, which is not limited in the embodiments of the present application.
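To make this fitting step concrete, the following Python sketch fits a low-order two-dimensional polynomial to heights sampled from the reflected fringe pattern, from which the curvatures along X and Y can be read off. The quadratic model and all names are illustrative assumptions; the patent only states that a fitting calculation is performed.

    import numpy as np

    def fit_cover_surface(x: np.ndarray, y: np.ndarray, z: np.ndarray) -> np.ndarray:
        """Least-squares fit of z(x, y) = c0 + c1*x + c2*y + c3*x*y + c4*x^2 + c5*y^2.

        x, y: sample coordinates on the cover plate; z: height deviations
        inferred from the stripe spacing/width changes."""
        A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
        coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
        return coeffs  # coeffs[4] and coeffs[5] reflect curvature along X and Y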
Step 602, based on the surface type information of the screen glass cover plate, performing image correction on the original image obtained by shooting the lens module.
For the implementation of this step, refer to step 202 above; details are not repeated here.
Step 603, outputting a corrected image obtained after the original image is corrected.
For the implementation of this step, refer to step 203 above; details are not repeated here.
In summary, according to the method provided by the embodiment, the terminal obtains the accurate surface type information of the screen glass cover plate by performing stripe light detection, so that the image correction effect is improved.
In an exemplary embodiment, before image shooting, the terminal focuses on the lens module, so that the imaging quality of an image is improved.
Fig. 7 is a flowchart of an image capturing method according to an exemplary embodiment of the present application, where the method may be applied to a terminal including a lens module disposed under a screen glass cover plate, and the method includes:
step 701, obtaining the surface type information of the screen glass cover plate.
For the implementation of this step, refer to step 201 above; details are not repeated here.
Step 702, a reference distance is obtained, where the reference distance is a distance between the photographed object and the lens module.
When the lens module is framing, the terminal obtains the distance between the shot object and the lens module, and focuses based on the distance.
Optionally, the reference distance is detected by the terminal based on machine vision technology. The machine vision techniques may include: structured light detection techniques and time of flight methods, to which the present embodiments are not limited. Optionally, the structured light detection technique includes: stripe light detection and speckle pattern detection.
In one possible implementation manner, the terminal obtains the reference distance by using a stripe light detection manner:
s31, calling the lens module to send a third detection stripe emission signal.
Optionally, the lens module includes a stripe transmitting end, and the terminal invokes the stripe transmitting end to transmit a third detection stripe transmitting signal.
In this embodiment, the third detection stripe emission signal may be the same signal as the first detection stripe emission signal described in the previous embodiment.
S32, calling the lens module to receive a third detection stripe reflection signal, wherein the third detection stripe reflection signal is formed by reflecting the third detection stripe transmission signal by the shot object.
Optionally, the lens module includes an image sensor; the third detection stripe emission signal is projected through the screen glass cover plate onto the surface of the photographed object, where it is reflected to form the third detection stripe reflection signal, which is received by the image sensor.
In this embodiment, the third detection stripe reflection signal may be the same signal as the first detection stripe reflection signal described in the previous embodiment.
S33, determining a reference distance based on the first round trip time.
The first round trip time is the difference between the time point of the lens module transmitting the third detection stripe transmitting signal and the time point of receiving the third detection stripe reflecting signal.
Since the third detection stripe emission signal is emitted by the lens module, the third detection stripe reflection signal is formed by reflecting the third detection stripe emission signal by the shot object, and the third detection stripe reflection signal is also received by the lens module, the distance between the shot object and the lens module can be calculated based on the time difference between the emission and the reception (i.e. the first round trip time) and the propagation speed of the signal.
It will be appreciated that if the third detection stripe emission signal and the second detection stripe emission signal are different parts of one signal emitted by the terminal at the same time point, the first round trip time may be measured as the difference between the time point when the lens module receives the third detection stripe reflection signal and the time point when it receives the second detection stripe reflection signal, where the second detection stripe reflection signal is the signal formed by reflecting the second detection stripe emission signal off the screen glass cover plate. This is because the distance between the screen glass cover plate and the lens module is very short, so its influence on the value of the reference distance is negligible; hence, when the terminal transmits the third and second detection stripe emission signals at the same time, it can equate this difference in reception time points with the first round trip time.
Step 703, focusing the lens module based on the reference distance.
In one possible implementation manner, the lens module adopts a zoom lens; after obtaining the reference distance, the terminal uses a focusing motor to drive the lens in the lens module to an ideal position to complete focusing, where the ideal position is the lens position that yields an ideal focusing effect on the photographed object at the current reference distance.
It can be understood that if the lens module adopts a fixed-focus lens, after obtaining the reference distance the terminal compares it with an ideal reference distance range; if the reference distance falls outside this range, the terminal displays prompt information asking the user to adjust the distance between the photographed object and the terminal. The ideal reference distance range is the range of distances between the photographed object and the lens module within which the lens module achieves an ideal focusing effect. For example, if the reference distance is greater than the ideal reference distance range, the terminal prompts the user to move the photographed object closer to the terminal; if the reference distance is smaller than the ideal reference distance range, the terminal prompts the user to move the photographed object farther away.
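The following Python sketch summarizes the focusing logic for both lens types. The thin-lens relation used to compute the "ideal position", the example focal length, and the ideal range are assumptions for illustration; the patent does not specify how the ideal position is derived.

    def focus(reference_mm: float, focal_mm: float = 4.0,
              ideal_range_mm=(250.0, 1500.0), fixed_focus: bool = False) -> str:
        if fixed_focus:
            lo, hi = ideal_range_mm
            if reference_mm > hi:
                return "prompt: move the photographed object closer"
            if reference_mm < lo:
                return "prompt: move the photographed object farther away"
            return "within the ideal reference distance range"
        # Zoom lens: the thin-lens equation 1/f = 1/u + 1/v gives the image
        # distance v, i.e. where the focusing motor should drive the lens.
        v = 1.0 / (1.0 / focal_mm - 1.0 / reference_mm)
        return f"drive the lens to image distance {v:.3f} mm"

    print(focus(500.0))                      # zoom lens, subject at 0.5 m
    print(focus(2000.0, fixed_focus=True))   # fixed focus, out of range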
Alternatively, the reference distance may be one value or may be a plurality of values. When the reference distance is a value, the reference distance is the distance between a point of the shot object and the lens module; when the reference distance is a plurality of values, the reference distance is the distance between a plurality of points of the photographed object and the lens module respectively.
Optionally, in response to obtaining at least two reference distances, the terminal processes the at least two reference distances based on an attention mechanism to obtain a target reference distance; focusing the lens module based on the target reference distance.
Processing the at least two reference distances based on an attention mechanism refers to assigning different weights to the at least two reference distances and then combining them by weighted summation.
Optionally, the terminal assigns weights to the different reference distances based on the positions, in the image, of the points of the photographed object corresponding to those reference distances. For example: if the point corresponding to a first reference distance is located in the middle of the image and the point corresponding to a second reference distance is located at the edge of the image, the weight of the first reference distance is higher than that of the second reference distance.
Optionally, the terminal assigns weights to the different reference distances based on the properties of the points of the photographed object corresponding to those reference distances. For example: if the point corresponding to a first reference distance belongs to the person and the point corresponding to a second reference distance belongs to the background, the weight of the first reference distance is higher than that of the second reference distance.
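A minimal Python sketch of the attention-based combination, assuming the weights have already been assigned by one of the two strategies above and that they are combined by a normalized weighted sum (an assumed implementation detail):

    import numpy as np

    def target_reference_distance(distances, weights) -> float:
        """Weighted combination of several reference distances."""
        d = np.asarray(distances, dtype=float)
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()              # normalize the attention weights
        return float(np.dot(w, d))

    # Example: a center/person point (600 mm) outweighs an edge/background
    # point (2400 mm), pulling focus toward the person.
    print(target_reference_distance([600.0, 2400.0], [0.8, 0.2]))  # 960.0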
Step 704, performing image shooting using the lens module after focusing, to obtain an original image.
In one possible implementation manner, after focusing is performed by the terminal based on the reference distance, image shooting is performed by using the lens module, and at this time, the original image obtained by shooting corresponds to a better focusing effect.
Step 705, based on the surface type information of the screen glass cover plate, performing image correction on the original image obtained by shooting the lens module.
For the implementation of this step, refer to step 202 above; details are not repeated here.
Step 706, outputting a corrected image obtained after the original image is corrected.
For the implementation of this step, refer to step 203 above; details are not repeated here.
In summary, according to the method provided by the embodiment, before image capturing, the terminal obtains the reference distance between the object to be captured and the lens module, and uses the reference distance to focus the lens module, so as to provide an implementation manner of auxiliary focusing, thereby improving the imaging quality of the image.
Next, a lens module of the above embodiment will be exemplarily described. Fig. 8 is a schematic diagram of a lens module according to an exemplary embodiment of the present application, where the lens module includes: a camera 801, a detection signal transmitting end 802, and an image sensor 803.
As shown in fig. 8, the detection signal transmitting end 802 and the image sensor 803 are symmetrically arranged with the optical axis of the camera 801 as a symmetry axis. Alternatively, the optical axis of the detection signal emitting end 802 and the central axis of the photosensitive surface of the image sensor 803 are symmetrically arranged with the optical axis of the camera 801 as the symmetry axis.
Alternatively, the detection signal transmitting end 802 and the image sensor 803 may be disposed independently, with no fixed connection between the two devices; or they may be arranged as one unit, with the two devices fixedly connected, for example: the two devices form a U-shaped structure wrapping around the two sides of the camera 801.
The camera 801 is used for image capturing. Optionally, the camera 801 is a front camera.
The detection signal transmitting end 802 is configured to transmit a detection emission signal. Optionally, the detection signal transmitting end 802 is configured to transmit a detection stripe emission signal, in which case the detection signal transmitting end 802 is a stripe emitting end. Optionally, the detection signal transmitting end 802 is configured to transmit a detection speckle emission signal, in which case the detection signal transmitting end 802 is a speckle pattern emitting end.
Optionally, the detection signal transmitting end 802 is a liquid crystal display (Liquid Crystal Display, LCD) screen. Optionally, the wavelength of the detection signal emitted by the detection signal transmitting end 802 is in the non-visible range, for example: infrared light.
The image sensor 803 is configured to receive a detection reflection signal, which is a signal formed by reflection of the detection emission signal. Optionally, the detection target surface of the image sensor 803 has no fewer than 2000×2000 pixels, with a single-pixel size of 2 μm-4 μm. Optionally, the image sensor 803 is an area-array image sensor, which supports detecting the distances between multiple points of the photographed object and the lens module; that is, the area-array image sensor supports acquiring multiple reference distances.
Referring to fig. 8 in combination, taking the detection signal emitting end 802 as a stripe emitting end as an example, stripe light detection of the lens module is exemplarily described.
The detection signal transmitting end 802 transmits a detection stripe emission signal with determined stripe information. The detection stripe reflection signals corresponding to this emission signal are received by the image sensor 803.
One part of the detection stripe emission signal (i.e., the second detection stripe emission signal) is reflected by the screen glass cover plate 804 of the terminal, forming the second detection stripe reflection signal, which is received by the image sensor 803 and sent to the CPU in the terminal; the CPU calculates the surface type information of the screen glass cover plate 804 from the stripe changes, mainly the curvatures in the X and Y directions, so as to construct a three-dimensional model of the screen glass cover plate 804.
The other part of the detection stripe emission signal (i.e., the first detection stripe emission signal) passes through the screen glass cover plate 804 to the surface of the photographed object 805, where it is reflected to form the first detection stripe reflection signal, which is received by the image sensor 803 and sent to the CPU in the terminal; the CPU calculates the depth information of the photographed object 805 from the stripe changes. The CPU may further calculate the reference distance between the photographed object 805 and the lens module from the time difference between the image sensor 803 receiving the first and the second detection stripe reflection signals.
In the embodiment of the present application, the screen glass cover 804 is a curved screen glass cover, and the lens module is disposed under the curved screen glass cover. Accordingly, the terminal may be a curved screen terminal.
It will be appreciated that if the screen glass cover plate 804 in the terminal is a rollable glass cover plate or a foldable glass cover plate, the image capturing method shown in this application is equally applicable to terminals of those types.
In summary, in the lens module shown in this embodiment of the present application, stripe light detection can be performed through the symmetrically disposed stripe emitting end and image sensor. Applied to the front of a mobile phone, this stripe light detection scheme, first, detects and calculates the surface type information of the screen glass cover plate above the lens module and compensates and corrects the aberration through an algorithm, thereby correcting the captured original image; second, it detects and calculates the distance and depth information of the photographed object, enabling more accurate face recognition and focusing.
It is to be understood that the above method embodiments may be implemented alone or in combination, and are not limited in this application.
The following is a device embodiment of the present application, and details of the device embodiment that are not described in detail may be combined with corresponding descriptions in the method embodiment described above, which are not described herein again.
Fig. 9 shows a schematic structural diagram of an image capturing apparatus provided in an exemplary embodiment of the present application. The apparatus may be implemented as all or part of a terminal by software, hardware, or a combination of both, and the apparatus includes: a surface type information acquisition module 901, an image correction module 902, and an image output module 903;
the surface type information acquisition module 901 is used for acquiring surface type information of the screen glass cover plate;
the image correction module 902 is configured to perform image correction on an original image captured by the lens module based on the surface type information of the screen glass cover plate;
the image output module 903 is configured to output a corrected image obtained after the correction of the original image.
In an alternative embodiment, the image rectification module 902 is configured to obtain depth information of the original image; dividing the original image into at least one image area based on the depth information; based on the surface type information of the screen glass cover plate, carrying out image correction on the at least one image area; the image output module 903 is configured to synthesize the original image with the corrected at least one image area to obtain the corrected image; outputting the corrected image.
In an alternative embodiment, the image correction module 902 is configured to obtain depth information of the original image based on a structured light detection technique by using the lens module; or, the image correction module 902 is configured to obtain depth information of the original image based on a light time-of-flight method by using the lens module.
In an alternative embodiment, the image correction module 902 is configured to send a first detection stripe emission signal by the lens module; the lens module receives a first detection stripe reflection signal, wherein the first detection stripe reflection signal is formed by reflecting a shot object by the first detection stripe transmission signal; and acquiring depth information of the original image based on the change condition between the first detection stripe reflection signal and the first detection stripe emission signal.
In an alternative embodiment, the image correction module 902 is configured to use the surface type information of the screen glass cover plate to perform deconvolution processing on the at least one image area to perform image correction.
In an optional embodiment, the surface type information obtaining module 901 is configured to obtain surface type information of the screen glass cover plate based on a structured light detection technology by using the lens module; or, the surface type information obtaining module 901 is configured to obtain the surface type information of the screen glass cover plate based on the optical time-of-flight method by using the lens module.
In an optional embodiment, the surface type information obtaining module 901 is configured to send a second detection stripe emission signal through the lens module; the lens module receives a second detection stripe reflection signal, where the second detection stripe reflection signal is a signal formed by reflecting the second detection stripe emission signal off the screen glass cover plate; and the surface type information of the screen glass cover plate is acquired based on the change between the second detection stripe reflection signal and the second detection stripe emission signal.
In an alternative embodiment, the apparatus further comprises a focusing module; the focusing module is used for acquiring a reference distance, wherein the reference distance is the distance between a shot object and the lens module; focusing the lens module based on the reference distance; and shooting an image by using the lens module after focusing to obtain the original image.
In an optional embodiment, the focusing module is configured to obtain the reference distance based on a structured light detection technique by using the lens module; or, the focusing module is configured to obtain the reference distance based on a light flight time method by using the lens module.
In an optional embodiment, the focusing module is configured to send a third detection stripe emission signal by the lens module; the lens module receives a third detection stripe reflection signal, wherein the third detection stripe reflection signal is a signal formed by reflecting the third detection stripe transmission signal by the shot object; determining the reference distance based on a first round trip time; the first round trip time is a difference value between a time point when the lens module transmits the third detection stripe transmission signal and a time point when the lens module receives the third detection stripe reflection signal.
In an optional embodiment, the focusing module is configured to, in response to obtaining at least two of the reference distances, process the at least two of the reference distances based on an attention mechanism to obtain a target reference distance; and focusing the lens module based on the target reference distance.
Fig. 10 shows a block diagram of a terminal 1000 according to an exemplary embodiment of the present application. The terminal 1000 can be a portable mobile terminal, such as: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1000 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
In the embodiment of the present application, terminal 1000 includes: a processor 1001, a memory 1002, a peripheral interface 1003, and a lens module 1006.
The processor 1001 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1001 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 1001 may also include a main processor and a coprocessor; the main processor, also referred to as a CPU (Central Processing Unit), is a processor for processing data in the awake state, while the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1001 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1001 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1002 may include one or more computer-readable storage media, which may be non-transitory. Memory 1002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1002 is used to store at least one instruction for execution by processor 1001 to implement the image capture method provided by the method embodiments herein.
Peripheral interface 1003 may be used to connect at least one I/O (Input/Output) related peripheral to processor 1001 and memory 1002. In some embodiments, processor 1001, memory 1002, and peripheral interface 1003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of processor 1001, memory 1002, and peripheral interface 1003 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The processor 1001, the memory 1002, and the peripheral interface 1003 may be connected by a bus or signal line. The lens module 1006 may be connected to the peripheral device interface 1003 via a bus, signal line, or circuit board.
The lens module 1006 is used for capturing images or video. Optionally, the lens module 1006 includes a camera, a detection signal transmitting end, and an image sensor, where the detection signal transmitting end and the image sensor are symmetrically arranged with the optical axis of the camera as the symmetry axis. The camera is used for shooting images; the detection signal transmitting end is used for transmitting detection emission signals; the image sensor is used for receiving a detection reflection signal, where the detection reflection signal is a signal formed by the detection emission signal being reflected. The camera includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused to realize a background blurring function, the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions, or other fusion shooting functions can be realized. In some embodiments, the lens module 1006 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
In some embodiments, terminal 1000 can further include: other peripheral devices besides the lens module 1006. The various peripheral devices may be connected to the peripheral device interface 1003 via a bus, signal wire, or circuit board. Specifically, other peripheral devices include: at least one of radio frequency circuitry 1004, a display 1005, audio circuitry 1007, a positioning component 1008, and a power supply 1009.
Radio frequency circuit 1004 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. Radio frequency circuit 1004 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1004 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. Radio frequency circuit 1004 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1004 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
The display screen 1005 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 1005 is a touch screen, the display 1005 also has the ability to collect touch signals on or above the surface of the display 1005. The touch signal may be input to the processor 1001 as a control signal for processing. At this time, the display 1005 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 1005, disposed on the front panel of terminal 1000; in other embodiments, there may be at least two displays 1005, disposed on different surfaces of terminal 1000 or in a folded design; in still other embodiments, the display 1005 may be a flexible display, disposed on a curved surface or a folded surface of terminal 1000. The display 1005 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly shaped screen. The display 1005 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The audio circuit 1007 may include a microphone and a speaker. The microphone is used to collect sound waves of users and the environment, convert the sound waves into electrical signals, and input them to the processor 1001 for processing, or input them to the radio frequency circuit 1004 for voice communication. For stereo acquisition or noise reduction, there may be multiple microphones, each disposed at a different portion of terminal 1000. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1001 or the radio frequency circuit 1004 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 1007 may also include a headphone jack.
The positioning component 1008 is used to locate the current geographic location of terminal 1000 to enable navigation or LBS (Location Based Service). The positioning component 1008 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1009 is used to supply power to the various components in terminal 1000. The power supply 1009 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 1009 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is a battery charged through a wired line, and a wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast-charge technology.
In some embodiments, terminal 1000 can further include one or more sensors 1010. The one or more sensors 1010 include, but are not limited to: acceleration sensor 1011, gyroscope sensor 1012, pressure sensor 1013, fingerprint sensor 1014, optical sensor 1015, and proximity sensor 1016.
The acceleration sensor 1011 can detect the magnitudes of acceleration on the three coordinate axes of the coordinate system established with terminal 1000. For example, the acceleration sensor 1011 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1001 may control the display screen 1005 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1011. The acceleration sensor 1011 may also be used to collect game or user motion data.
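As a rough illustration of the landscape/portrait decision, whichever screen axis carries the larger share of gravitational acceleration points "down"; the axis convention and threshold-free comparison below are assumptions for the sketch:

```python
def ui_orientation(gravity_x: float, gravity_y: float) -> str:
    """Choose a view orientation from gravity components (assumed axes:
    x across the short side of the screen, y along the long side)."""
    return "landscape" if abs(gravity_x) > abs(gravity_y) else "portrait"

print(ui_orientation(9.6, 1.2))  # landscape: gravity lies mostly along x
print(ui_orientation(0.5, 9.7))  # portrait
```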
The gyroscope sensor 1012 may detect the body direction and rotation angle of terminal 1000, and may cooperate with the acceleration sensor 1011 to collect the user's 3D actions on terminal 1000. Based on the data collected by the gyroscope sensor 1012, the processor 1001 may implement the following functions: motion sensing (e.g., changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1013 may be disposed on a side frame of terminal 1000 and/or at a lower layer of the display 1005. When the pressure sensor 1013 is disposed on a side frame of terminal 1000, it can detect the user's grip signal on terminal 1000, and the processor 1001 performs left/right hand recognition or quick operations according to the grip signal collected by the pressure sensor 1013. When the pressure sensor 1013 is disposed at the lower layer of the display 1005, the processor 1001 controls the operability controls on the UI according to the user's pressure operation on the display 1005. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1014 is used to collect the user's fingerprint, and the processor 1001 identifies the user's identity based on the fingerprint collected by the fingerprint sensor 1014, or the fingerprint sensor 1014 identifies the user's identity based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1001 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1014 may be disposed on the front, back, or side of terminal 1000. When a physical button or vendor Logo is provided on terminal 1000, the fingerprint sensor 1014 may be integrated with the physical button or vendor Logo.
The optical sensor 1015 is used to collect the ambient light intensity. In one embodiment, the processor 1001 may control the display brightness of the display screen 1005 based on the ambient light intensity collected by the optical sensor 1015: when the ambient light intensity is high, the display brightness of the display screen 1005 is increased; when the ambient light intensity is low, the display brightness of the display screen 1005 is decreased. In another embodiment, the processor 1001 may dynamically adjust the shooting parameters of the lens module 1006 according to the ambient light intensity collected by the optical sensor 1015.
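A minimal sketch of the brightness adjustment, assuming a linear mapping from ambient light intensity to a normalized display brightness; the lux bounds are illustrative, not values from the patent:

```python
def display_brightness(ambient_lux: float,
                       min_lux: float = 10.0,
                       max_lux: float = 1000.0) -> float:
    """Map ambient light intensity to a display brightness in [0, 1]:
    brighter surroundings yield a brighter display, and vice versa."""
    clamped = max(min_lux, min(ambient_lux, max_lux))
    return (clamped - min_lux) / (max_lux - min_lux)

print(display_brightness(500.0))  # ~0.49
```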
The proximity sensor 1016, also referred to as a distance sensor, is typically disposed on the front panel of terminal 1000. The proximity sensor 1016 is used to collect the distance between the user and the front of terminal 1000. In one embodiment, when the proximity sensor 1016 detects that the distance between the user and the front of terminal 1000 gradually decreases, the processor 1001 controls the display 1005 to switch from the screen-on state to the screen-off state; when the proximity sensor 1016 detects that the distance between the user and the front of terminal 1000 gradually increases, the processor 1001 controls the display 1005 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the structure shown in Fig. 10 does not constitute a limitation on terminal 1000, and that terminal 1000 may include more or fewer components than shown, combine certain components, or employ a different arrangement of components.
The present application also provides a computer-readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the image shooting method provided by the above method embodiments.
The present application also provides a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the image shooting method provided by the above alternative implementations.
It should be understood that references herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing related hardware; the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The foregoing is merely illustrative of the present application and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its scope of protection.

Claims (7)

1. An image shooting method, applied to a terminal, wherein the terminal comprises a lens module disposed below a screen glass cover plate, and the method comprises:
transmitting a detection stripe emission signal through the lens module;
receiving a second detection stripe reflection signal through the lens module, wherein the second detection stripe reflection signal is a signal formed by the detection stripe emission signal being reflected by the screen glass cover plate;
acquiring surface type information of the screen glass cover plate based on the change between the second detection stripe reflection signal and the detection stripe emission signal;
receiving a first detection stripe reflection signal through the lens module, wherein the first detection stripe reflection signal is a signal formed by the detection stripe emission signal being reflected by a shot object;
acquiring depth information of the shot object based on the change between the first detection stripe reflection signal and the detection stripe emission signal;
acquiring a reference distance based on the time difference between the lens module receiving the first detection stripe reflection signal and receiving the second detection stripe reflection signal, wherein the reference distance is the distance between the shot object and the lens module;
focusing the lens module based on the reference distance;
performing image shooting on the shot object by using the focused lens module to obtain an original image;
dividing the original image into at least one image area based on the depth information;
performing image correction on the at least one image area based on the surface type information of the screen glass cover plate;
combining the part of the original image not subjected to image correction with the at least one image area subjected to image correction to obtain a corrected image; and
outputting the corrected image.
2. The method of claim 1, wherein focusing the lens module based on the reference distance comprises:
in response to at least two reference distances being obtained, processing the at least two reference distances based on an attention mechanism to obtain a target reference distance;
and focusing the lens module based on the target reference distance.
3. A terminal, comprising: a processor, a memory, and a lens module disposed below a screen glass cover plate;
wherein the memory stores at least one program, and the at least one program is loaded and executed by the processor to implement the image shooting method according to any one of claims 1 to 2.
4. The terminal according to claim 3, wherein
the screen glass cover plate is a curved screen glass cover plate.
5. The terminal according to claim 3 or 4, wherein the lens module comprises: a camera, a detection signal transmitting end, and an image sensor, the detection signal transmitting end and the image sensor being symmetrically arranged with the optical axis of the camera as the symmetry axis;
the camera is used for shooting images;
the detection signal transmitting end is used for transmitting a detection stripe emission signal;
the image sensor is used for receiving a detection stripe reflection signal, wherein the detection stripe reflection signal is a signal formed by the detection stripe emission signal being reflected.
6. An image shooting device, applied to a terminal, wherein the terminal comprises a lens module disposed below a screen glass cover plate, and the device comprises: a surface type information acquisition module, an image correction module, a focusing module, and an image output module;
the surface type information acquisition module is used for transmitting a detection stripe emission signal through the lens module;
the surface type information acquisition module is further used for receiving a second detection stripe reflection signal through the lens module, wherein the second detection stripe reflection signal is a signal formed by the detection stripe emission signal being reflected by the screen glass cover plate;
the surface type information acquisition module is further used for acquiring surface type information of the screen glass cover plate based on the change between the second detection stripe reflection signal and the detection stripe emission signal;
the image correction module is used for receiving a first detection stripe reflection signal through the lens module, wherein the first detection stripe reflection signal is a signal formed by the detection stripe emission signal being reflected by a shot object;
the image correction module is further used for acquiring depth information of the shot object based on the change between the first detection stripe reflection signal and the detection stripe emission signal;
the focusing module is used for acquiring a reference distance based on the time difference between the lens module receiving the first detection stripe reflection signal and receiving the second detection stripe reflection signal, wherein the reference distance is the distance between the shot object and the lens module;
the focusing module is further used for focusing the lens module based on the reference distance;
the focusing module is further used for performing image shooting on the shot object by using the focused lens module to obtain an original image;
the image correction module is further used for dividing the original image into at least one image area based on the depth information;
the image correction module is further used for performing image correction on the at least one image area based on the surface type information of the screen glass cover plate;
the image output module is used for combining the part of the original image not subjected to image correction with the at least one image area subjected to image correction to obtain a corrected image;
the image output module is further used for outputting the corrected image.
7. A computer-readable storage medium, wherein at least one program is stored in the storage medium, and the at least one program is loaded and executed by a processor to implement the image shooting method according to any one of claims 1 to 2.
CN202110478495.9A 2021-04-30 2021-04-30 Image shooting method, device, terminal and storage medium Active CN113191976B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110478495.9A CN113191976B (en) 2021-04-30 2021-04-30 Image shooting method, device, terminal and storage medium
PCT/CN2022/080664 WO2022227893A1 (en) 2021-04-30 2022-03-14 Image photographing method and device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110478495.9A CN113191976B (en) 2021-04-30 2021-04-30 Image shooting method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN113191976A CN113191976A (en) 2021-07-30
CN113191976B true CN113191976B (en) 2024-03-22

Family

ID=76982857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110478495.9A Active CN113191976B (en) 2021-04-30 2021-04-30 Image shooting method, device, terminal and storage medium

Country Status (2)

Country Link
CN (1) CN113191976B (en)
WO (1) WO2022227893A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191976B (en) * 2021-04-30 2024-03-22 Oppo广东移动通信有限公司 Image shooting method, device, terminal and storage medium
CN114666509A (en) * 2022-04-08 2022-06-24 Oppo广东移动通信有限公司 Image acquisition method and device, detection module, terminal and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111274849A (en) * 2018-12-04 2020-06-12 上海耕岩智能科技有限公司 Method for determining imaging proportion of curved screen, storage medium and electronic equipment
CN111722816A (en) * 2019-03-19 2020-09-29 上海耕岩智能科技有限公司 Method for determining imaging ratio of bendable screen, electronic device and storage medium
CN112004054A (en) * 2020-07-29 2020-11-27 深圳宏芯宇电子股份有限公司 Multi-azimuth monitoring method, equipment and computer readable storage medium
CN112130800A (en) * 2020-09-29 2020-12-25 Oppo广东移动通信有限公司 Image processing method, electronic device, apparatus, and storage medium
CN112232155A (en) * 2020-09-30 2021-01-15 墨奇科技(北京)有限公司 Non-contact fingerprint identification method and device, terminal and storage medium
CN112651286A (en) * 2019-10-11 2021-04-13 西安交通大学 Three-dimensional depth sensing device and method based on transparent screen

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8306348B2 (en) * 2007-04-24 2012-11-06 DigitalOptics Corporation Europe Limited Techniques for adjusting the effect of applying kernels to signals to achieve desired effect on signal
TWI663577B (en) * 2018-06-04 2019-06-21 宏碁股份有限公司 Demura system for non-planar screen
CN113191976B (en) * 2021-04-30 2024-03-22 Oppo广东移动通信有限公司 Image shooting method, device, terminal and storage medium


Also Published As

Publication number Publication date
CN113191976A (en) 2021-07-30
WO2022227893A1 (en) 2022-11-03

Similar Documents

Publication Publication Date Title
CN110502954B (en) Video analysis method and device
CN109302632B (en) Method, device, terminal and storage medium for acquiring live video picture
CN112270718B (en) Camera calibration method, device, system and storage medium
CN113191976B (en) Image shooting method, device, terminal and storage medium
WO2021238564A1 (en) Display device and distortion parameter determination method, apparatus and system thereof, and storage medium
CN111982305A (en) Temperature measuring method, device and computer storage medium
CN113763228A (en) Image processing method, image processing device, electronic equipment and storage medium
CN109754439B (en) Calibration method, calibration device, electronic equipment and medium
CN112396076A (en) License plate image generation method and device and computer storage medium
CN111127541B (en) Method and device for determining vehicle size and storage medium
CN110874699B (en) Method, device and system for recording logistics information of article
KR20160031819A (en) Mobile terminal and method for controlling the same
CN109413190B (en) File acquisition method and device, electronic equipment and storage medium
CN112991439A (en) Method, apparatus, electronic device, and medium for positioning target object
CN111127539B (en) Parallax determination method and device, computer equipment and storage medium
CN110672036B (en) Method and device for determining projection area
CN113012211A (en) Image acquisition method, device, system, computer equipment and storage medium
CN112184802B (en) Calibration frame adjusting method, device and storage medium
CN110443841B (en) Method, device and system for measuring ground depth
CN112399080A (en) Video processing method, device, terminal and computer readable storage medium
CN112150554B (en) Picture display method, device, terminal and storage medium
CN116681746B (en) Depth image determining method and device
CN111353934B (en) Video synthesis method and device
CN114390195B (en) Automatic focusing method, device, equipment and storage medium
CN111354032B (en) Method and device for generating disparity map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant