WO2022156683A1 - Image processing method and apparatus, shooting stand, electronic device, and readable storage medium - Google Patents

Image processing method and apparatus, shooting stand, electronic device, and readable storage medium

Info

Publication number
WO2022156683A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
images
target sample
target
Prior art date
Application number
PCT/CN2022/072577
Other languages
English (en)
French (fr)
Inventor
周恩至
Original Assignee
维沃移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Publication of WO2022156683A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/64 - Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95 - Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 - Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Definitions

  • The embodiments of the present application relate to the field of communication technologies, and in particular to an image processing method, an apparatus, a shooting stand, an electronic device, and a readable storage medium.
  • In the related art, design schemes such as the "hole-punch screen" and the "water-drop screen" are usually adopted to reduce the influence of the front camera on the screen-to-body ratio, and under-screen camera designs have further increased the screen-to-body ratio of electronic devices.
  • However, in an under-screen design the camera sits below the screen and is limited by the occlusion of the screen, so the image quality of images captured by the under-screen camera is poor.
  • The purpose of the embodiments of the present application is to provide an image processing method, an apparatus, a shooting stand, an electronic device, and a readable storage medium, which can solve the problem of poor image quality of images captured by an under-screen camera.
  • In a first aspect, an embodiment of the present application provides an image processing method. The method includes: collecting a first image through a first camera, where the first camera is an under-screen camera; and processing the first image with an image processing model to obtain a second image, where the image quality of the second image is higher than that of the first image. The image processing model is obtained by training a preset model with a target sample set. Each target sample in the target sample set includes two images; the two images in each target sample are images collected by a second camera and a third camera at the same position using the same shooting angle, the same shooting environment, the same shooting parameters, and for the same shooting object, and the image quality of the image collected by the second camera is lower than that of the image collected by the third camera. The second camera is an under-screen camera.
  • In a second aspect, the embodiments of the present application further provide an image processing apparatus. The apparatus includes a collection module and a processing module. The collection module is configured to collect a first image through a first camera, where the first camera is an under-screen camera. The processing module is configured to process the first image collected by the collection module with an image processing model to obtain a second image, the image quality of the second image being higher than that of the first image. The image processing model is obtained by training a preset model with a target sample set; each target sample in the target sample set includes two images; the two images in each target sample are images collected by a second camera and a third camera at the same position using the same shooting angle, the same shooting environment, the same shooting parameters, and for the same shooting object; the image quality of the image collected by the second camera is lower than that of the image collected by the third camera; and the second camera is an under-screen camera.
  • In a third aspect, an embodiment of the present application provides a shooting stand, including: a stand, a slide rail connected to the stand, and a first tilt stage and a second tilt stage arranged on the slide rail, where the first tilt stage is used to support a first camera and the second tilt stage is used to support a second camera; the first camera is used to collect a first target image and the second camera is used to collect a second target image; the first target image and the second target image are images collected by the first camera and the second camera at the same position using the same shooting angle, the same shooting environment, the same shooting parameters, and for the same shooting object; the first target image and the second target image form one sample in a target sample set, and the target sample set is used to train a preset model.
  • In a fourth aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the image processing method according to the first aspect.
  • In a fifth aspect, an embodiment of the present application provides a readable storage medium storing a program or instruction, where the program or instruction, when executed by a processor, implements the steps of the method according to the first aspect.
  • In a sixth aspect, an embodiment of the present application provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the method according to the first aspect.
  • In the embodiments of the present application, the preset model is trained by taking images captured by the under-screen camera (i.e., the second camera) and a normal camera (i.e., the third camera) at the same position and with the same shooting angle as the samples in the target sample set. The trained image processing model is then used to process the first image collected by the first camera to obtain a second image with higher image quality, which improves the image quality of photos taken by the under-screen camera.
  • FIG. 1 shows an electronic device adopting an under-screen camera solution provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of image segmentation applied by an image processing method provided in an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a shooting stand provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • The terms "first", "second", and the like in the description and claims of the present application are used to distinguish similar objects, and are not used to describe a specific order or sequence. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the present application can be practiced in sequences other than those illustrated or described herein. The objects distinguished by "first", "second", etc. are usually of one type, and the number of objects is not limited; for example, the first object may be one object or more than one. In addition, "and/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
  • The image processing method provided by the embodiments of the present application can be applied to scenes in which an electronic device shoots with an under-screen camera.
  • Exemplarily, for the scene in which an electronic device shoots through an under-screen camera, FIG. 1(A) shows a related-art design in which the camera is located under the screen. Because the camera is placed below the screen, when the under-screen camera captures an image the light first passes through the screen; due to the screen blocking the light and the diffraction that occurs when light passes through an object, the image quality of the under-screen camera is poor (for example, the picture is dark, or there is a halo).
  • To address this problem, in the technical solutions provided by the embodiments of the present application, images collected by the under-screen camera and a normal camera at the same position with the same shooting angle are taken as samples in a sample set; then, by changing the shooting conditions or replacing the shooting object multiple times, a sample set containing N samples is obtained and used to train the preset model.
  • Afterwards, the trained image processing model is used to process images collected by the under-screen camera to obtain images with higher image quality, which improves the image quality of photos taken by the under-screen camera.
  • As shown in FIG. 2, an image processing method provided by an embodiment of the present application may include the following steps 201 and 202:
  • Step 201: The image processing apparatus collects a first image through a first camera.
  • The first camera is an under-screen camera.
  • Step 202: The image processing apparatus processes the first image with an image processing model to obtain a second image.
  • The image quality of the second image is higher than that of the first image.
  • The image processing model is obtained by training a preset model with a target sample set.
  • Each target sample in the target sample set includes two images.
  • The two images in each target sample are images collected by a second camera and a third camera at the same position using the same shooting angle, the same shooting environment, the same shooting parameters, and for the same shooting object, and the image quality of the image collected by the second camera is lower than that of the image collected by the third camera.
  • The second camera is an under-screen camera.
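  • As a minimal sketch of steps 201 and 202, assume the trained image processing model is available as a PyTorch module; the pre- and post-processing shown here (channel ordering and scaling) are assumptions of the sketch, not details taken from the embodiment.

```python
import numpy as np
import torch

def process_first_image(first_image: np.ndarray, image_processing_model: torch.nn.Module) -> np.ndarray:
    """Step 202: run the first image (H x W x 3, uint8, collected by the under-screen
    first camera) through the trained image processing model to obtain the second image."""
    x = torch.from_numpy(first_image).float().permute(2, 0, 1).unsqueeze(0) / 255.0  # HWC -> NCHW, [0, 1]
    with torch.no_grad():
        y = image_processing_model(x)
    second = (y.squeeze(0).permute(1, 2, 0).clamp(0.0, 1.0).numpy() * 255.0).astype(np.uint8)
    return second  # the second image, with higher image quality than the first image
```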
  • Exemplarily, the second camera and the first camera may be the same camera or different cameras; specifically, they may be cameras on different electronic devices.
  • Exemplarily, the shooting stand provided by the embodiments of the present application can be used so that the second camera and the third camera collect images at the same position using the same shooting angle, the same shooting environment, the same shooting parameters, and for the same shooting object.
  • The third camera may be a camera with the same specifications as the first camera or the second camera that is not blocked by a screen.
  • Specifically, the camera specifications may include the external dimensions of the camera, the focal length, the field of view, the aperture, and the like.
  • Exemplarily, the images collected by the second camera and the third camera are collected for the same shooting object, which may be a still person or a landscape.
  • Exemplarily, the image processing apparatus repeats the sample-acquisition process N times to obtain a target sample set including N target samples, where each target sample contains two images with the same shooting position, shooting angle, shooting object, and shooting parameters; the only difference between the two images is the camera that collected them.
  • To avoid duplicate samples and increase sample complexity, the shooting conditions can differ from sample to sample.
  • Specifically, each target sample in the target sample set is a sample collected under different shooting conditions, and for different target samples the shooting conditions differ in at least one of the following: the shooting object, the shooting background, the environmental parameters of the shooting environment, or the shooting parameters of the shooting equipment.
  • In this way, the preset model is trained by taking images captured by the under-screen camera (i.e., the second camera) and a normal camera (i.e., the third camera) at the same position and with the same shooting angle as the samples in the target sample set. The trained image processing model is then used to process the first image collected by the first camera to obtain a second image with higher image quality, which improves the image quality of photos taken by the under-screen camera.
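  • Purely as an illustration of the data this involves, a target sample and its shooting conditions could be represented as follows; the class and field names are assumptions for the sketch and do not come from the application.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ShootingConditions:
    """At least one of these fields differs between any two target samples."""
    shooting_object: str                                    # e.g. "still person", "landscape"
    shooting_background: str
    environment_params: dict = field(default_factory=dict)  # e.g. illuminance, colour temperature
    camera_params: dict = field(default_factory=dict)       # e.g. ISO, exposure time, focal length

@dataclass
class TargetSample:
    second_camera_image: np.ndarray   # lower-quality image from the under-screen (second) camera
    third_camera_image: np.ndarray    # higher-quality image from the unobstructed (third) camera
    conditions: ShootingConditions

# The target sample set is then simply a list of N such pairs:
# target_sample_set: list[TargetSample] = [...]
```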
  • Optionally, in the embodiments of the present application, before using the image processing model to process the image captured by the first camera, the image processing apparatus needs to train the preset model to obtain the image processing model.
  • Exemplarily, before step 202, the image processing method provided in this embodiment of the present application may further include the following steps 203 and 204:
  • Step 203: The image processing apparatus acquires the target sample set.
  • Exemplarily, the target sample set includes N target samples, and each target sample includes two corresponding images.
  • Step 204: The image processing apparatus trains the preset model with the target sample set to obtain the image processing model.
  • Each target sample includes a third image and a fourth image: the third image is an image collected by the second camera, the fourth image is an image collected by the third camera, and the third image and the fourth image are images obtained by the second camera and the third camera, respectively, at the same position using the same shooting parameters for the same target.
  • Exemplarily, the preset model is a deep learning model with an image processing function; after the preset model is trained, the image processing model is obtained.
  • Only after the image processing apparatus trains the preset model with the target sample set and obtains the image processing model can it process images captured by the under-screen camera to obtain images with higher image quality.
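  • The application does not disclose the architecture, loss function, or hyper-parameters of the preset model, so the following PyTorch sketch of step 204 is only one possible instantiation: a small convolutional network trained with an L1 loss on matched (second-camera image, third-camera image) pairs from the target sample set.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class PresetModel(nn.Module):
    """Toy stand-in for the preset model: a few convolutions that predict a correction
    which is added back to the degraded under-screen input."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return torch.clamp(x + self.body(x), 0.0, 1.0)

def train_preset_model(second_camera: torch.Tensor, third_camera: torch.Tensor,
                       epochs: int = 10, lr: float = 1e-4) -> nn.Module:
    """second_camera / third_camera: float tensors of shape (N, 3, H, W) in [0, 1],
    built from the matched target sample pairs (degraded input -> clean target)."""
    model = PresetModel()
    loader = DataLoader(TensorDataset(second_camera, third_camera), batch_size=4, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for degraded, clean in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(degraded), clean)
            loss.backward()
            optimizer.step()
    return model  # the trained network plays the role of the image processing model
```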
  • Further optionally, in the embodiments of the present application, in order to increase sample complexity, images may be acquired under different background environments and noise conditions; each target sample in the target sample set is a sample collected by the image processing apparatus under different shooting conditions.
  • It should be noted that, if the image processing apparatus can acquire different samples under the same shooting conditions, they can also be used as target samples in the target training set, for example, samples of images captured of different subjects under the same shooting conditions.
  • In this way, the number of repeated samples in the target sample set can be reduced, and the training efficiency of the model can be improved.
  • Optionally, in the embodiments of the present application, each target sample in the target sample set includes two images, namely an image collected by the second camera and an image collected by the third camera. Before using the target sample set to train the preset model, the image processing apparatus also needs to process each target sample in the target sample set.
  • It should be noted that each target sample in the target sample set includes one image collected by the second camera and one image collected by the third camera.
  • Exemplarily, before step 204, the image processing method provided in this embodiment of the present application may further include the following step 204a:
  • Step 204a: The image processing apparatus uses a grayscale-based image matching algorithm to match the third image and the fourth image to obtain a matched third image and fourth image.
  • The pixels of the matched third image correspond to the pixels of the matched fourth image.
  • Exemplarily, the grayscale-based image matching algorithm may be any one of the following: the mean absolute differences (MAD) algorithm, the sum of absolute differences (SAD) algorithm, the sum of squared differences (SSD) algorithm, the mean square differences (MSD) algorithm, the normalized cross correlation (NCC) algorithm, the sequential similarity detection algorithm (SSDA), or the Hadamard transform based sum of absolute transformed differences (SATD) algorithm.
  • Exemplarily, image matching between the third image and the fourth image is performed with a cross-correlation formula (Formula 1 in the original description), where t and f are the image collected by the second camera and the image collected by the third camera, respectively; J and K are the height and width of the matching template used for image matching; and R(x, y) is the cross-correlation matrix obtained by the operation. Taking the values xm and ym at which R reaches its maximum yields the image block f(xm+j, ym+k) that matches t.
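  • The exact expression of Formula 1 is published only as an embedded image in the application; as an assumption consistent with the variable definitions above, a standard normalized cross-correlation would take the following form, with the matching offset (xm, ym) chosen where R is maximal:

```latex
R(x, y) = \frac{\sum_{j=1}^{J} \sum_{k=1}^{K} t(j, k)\, f(x + j,\; y + k)}
               {\sqrt{\sum_{j=1}^{J} \sum_{k=1}^{K} t^{2}(j, k) \cdot \sum_{j=1}^{J} \sum_{k=1}^{K} f^{2}(x + j,\; y + k)}},
\qquad
(x_m, y_m) = \arg\max_{x,\, y} R(x, y)
```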
  • Exemplarily, after the target sample set is processed in this way, step 204 may include the following step 204b:
  • Step 204b: The image processing apparatus trains the preset model with the target sample set matched by the grayscale-based image matching algorithm.
  • In this way, before the target sample set is used to train the preset model, image matching is performed on the samples in the sample set so that the two images in each sample are matched at the pixel level, thereby meeting the image requirements of the preset model during training and making the image quality of images produced by the trained image processing model from under-screen captures higher.
  • Further optionally, in the embodiments of the present application, in order to improve the success rate of the image matching algorithm, when selecting the image range for matching, a margin of 6-8 pixels can be reserved at the edge of the image, so that the selected image range is as large as possible while the matching success rate is guaranteed.
  • Exemplarily, step 204a may further include the following step 204a1 or step 204a2:
  • Step 204a1: The image processing apparatus matches the image of a preset area in the third image against the fourth image to obtain the matched third image and fourth image.
  • Step 204a2: The image processing apparatus matches the image of a preset area in the fourth image against the third image to obtain the matched third image and fourth image.
  • The size of the preset area is the image size of the third image or the fourth image with the edges reduced by a preset number of pixels.
  • Exemplarily, since the third image and the fourth image may have the same image size, either the third image or the fourth image may be taken as the basis.
  • Exemplarily, the image matching process uses a matching template, which is the image of the preset area; the height and width of the matching template are J and K in Formula 1 above, respectively.
  • It should be noted that using a matching template with a relatively large range improves the degree of matching between the third image and the fourth image, so that the third image and the fourth image can be matched at the pixel level; in turn, the two images in each target sample of the target sample set can meet the requirement of pixel-level correspondence.
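  • A minimal NumPy sketch of this matching step is given below, assuming grayscale inputs: the matching template is the third image with a small edge margin removed, the normalized cross-correlation is evaluated at every candidate offset in the fourth image, and both images are then cropped to the best-matching, pixel-aligned region. The brute-force search is only for clarity (a real implementation would use an FFT-based or library matcher), and the function name and margin default are assumptions.

```python
import numpy as np

def match_pair(third_image: np.ndarray, fourth_image: np.ndarray, margin: int = 8):
    """Match one target sample (two 2-D grayscale arrays) at the pixel level.
    A margin of 6-8 pixels is left at the template edges so the search stays in range."""
    t = third_image[margin:-margin, margin:-margin].astype(np.float64)  # matching template, J x K
    f = fourth_image.astype(np.float64)
    J, K = t.shape
    H, W = f.shape

    best_score, (xm, ym) = -np.inf, (0, 0)
    for x in range(H - J + 1):          # candidate row offset
        for y in range(W - K + 1):      # candidate column offset
            window = f[x:x + J, y:y + K]
            denom = np.sqrt((t * t).sum() * (window * window).sum())
            score = (t * window).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, (xm, ym) = score, (x, y)

    matched_third = third_image[margin:margin + J, margin:margin + K]   # the template region itself
    matched_fourth = fourth_image[xm:xm + J, ym:ym + K]                 # best-matching region of f
    return matched_third, matched_fourth, best_score
```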
  • Optionally, in the embodiments of the present application, in order to reduce the workload of collecting sample images for the target sample set while greatly increasing the number of samples, the images in each sample may be divided, so that one sample is split into multiple samples.
  • Exemplarily, step 203 may include the following steps 203a1 and 203a2:
  • Step 203a1: The image processing apparatus divides the third image collected by the second camera into M third sub-images, and divides the fourth image collected by the third camera into M fourth sub-images.
  • The M third sub-images correspond one-to-one to the M fourth sub-images, with one third sub-image corresponding to one fourth sub-image.
  • Step 203a2: The image processing apparatus takes a target third sub-image among the M third sub-images and the fourth sub-image corresponding to that target third sub-image among the M fourth sub-images as one sample in the target sample set, thereby obtaining the target sample set.
  • Exemplarily, when the image processing apparatus divides the third image and the fourth image, the division positions and the number of divisions are the same, so that each of the M third sub-images has a corresponding image among the M fourth sub-images.
  • For example, as shown in FIG. 3, a sample contains two images (image 31 and image 32). Image 31 is divided into 4 images (images a1, a2, a3, and a4), and image 32 is likewise divided into 4 images (images b1, b2, b3, and b4); the divided images are mapped one-to-one (a1 corresponds to b1, a2 to b2, a3 to b3, and a4 to b4), and each pair of corresponding images is used as a new sample (for example, a1 and b1 can be used as one new sample).
  • Since the sizes of image 31 and image 32 may differ, to guarantee the matching success rate, the division position 31a of image 31 and the division position 32a of image 32 are the same, that is, the division points have the same position relative to the shooting object in the image.
  • By dividing one sample into four samples in this way, a target sample set containing N samples can be expanded into a sample set containing N*4 samples.
  • In this way, after the images in each sample are divided, the capacity of the sample set can be greatly expanded while the requirements on computer configuration in the subsequent training process are reduced.
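  • Steps 203a1 and 203a2 can be illustrated with the following NumPy sketch, which cuts one matched pair into M = rows * cols sub-image pairs at identical positions (the FIG. 3 example corresponds to a 2 x 2 grid, so M = 4); the function name and grid defaults are assumptions.

```python
import numpy as np

def split_pair(third_image: np.ndarray, fourth_image: np.ndarray, rows: int = 2, cols: int = 2):
    """Split one matched target sample into M = rows * cols smaller samples, cutting both
    images at the same relative positions (cf. FIG. 3: a1<->b1, a2<->b2, a3<->b3, a4<->b4)."""
    assert third_image.shape[:2] == fourth_image.shape[:2], "the pair must be pixel-matched first"
    h, w = third_image.shape[:2]
    sub_samples = []
    for r in range(rows):
        for c in range(cols):
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            sub_samples.append((third_image[ys, xs], fourth_image[ys, xs]))
    return sub_samples

# A target sample set of N pairs then becomes a set of N * rows * cols pairs:
# expanded = [sub for img3, img4 in target_sample_set for sub in split_pair(img3, img4)]
```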
  • In the image processing method provided by the embodiments of the present application, images collected by the under-screen camera and a normal camera at the same position with the same shooting angle are taken as samples in the sample set; then, by changing the shooting conditions or replacing the shooting object multiple times, a sample set containing N samples is obtained.
  • To meet the requirements of the preset model on samples, after the samples are obtained, the two images in each sample can be matched at the pixel level, and the matched sample set is used to train the preset model.
  • Afterwards, the trained image processing model is used to process images collected by the under-screen camera to obtain images with higher image quality, which improves the image quality of photos taken by the under-screen camera.
  • It should be noted that, for the image processing method provided by the embodiments of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method.
  • In the embodiments of the present application, the image processing apparatus is described by taking an image processing apparatus executing the image processing method as an example.
  • As shown in FIG. 4, an embodiment of the present application provides a shooting stand for photographing. The shooting stand includes: a stand 5, a slide rail 3 connected to the stand 5, and a first tilt stage 1 and a second tilt stage 2 arranged on the slide rail 3.
  • The first tilt stage 1 is used for supporting a first camera, and the second tilt stage 2 is used for supporting a second camera.
  • The first camera is used for collecting a first target image, and the second camera is used for collecting a second target image; the first target image is the third image in the above embodiment of the image processing method, and the second target image is the fourth image in that embodiment.
  • The first target image and the second target image are images collected by the first camera and the second camera at the same position using the same shooting angle, the same shooting environment, the same shooting parameters, and for the same shooting object.
  • The first target image and the second target image form one sample in a target sample set, and the target sample set is used for training a preset model.
  • Exemplarily, the first camera and the second camera described here are different from the first camera and the second camera involved in the image processing method shown in FIG. 2. Specifically, this first camera may be the same as the second camera involved in the image processing method shown in FIG. 2, and this second camera may be the same as the third camera involved in that method.
  • Exemplarily, the shooting stand supports two control modes: manual control and automatic control.
  • The user can, manually or automatically through a program, adjust the rotation of the first tilt stage 1 and the second tilt stage 2 in the x-y plane to calibrate the pitch angles of the two tilt stages.
  • To enable the first camera and the second camera to collect images at the same position with the same shooting angle, the first tilt stage 1 can be adjusted to a target angle and moved to a target position, and the first camera is then controlled to capture an image; afterwards, the first tilt stage 1 is moved away, the second tilt stage 2 is adjusted to the target angle and moved to the target position, and the second camera is controlled to capture an image.
  • Specifically, the shooting stand further includes a control module, which is used to control the first tilt stage to move to the target position, adjust the first tilt stage to the target angle, and control the first camera to collect the first target image; the control module is further configured to control the second tilt stage to move to the target position, adjust the second tilt stage to the target angle, and control the second camera to collect the second target image.
  • Optionally, in the embodiments of the present application, to facilitate disassembly and implement the automatic control function, the shooting stand further includes: a conversion interface 4 between the slide rail 3 and the stand 5, a power supply system, and a programmable microcontroller 6.
  • The user can precisely control the first tilt stage 1 and the second tilt stage 2 through the power supply system and the programmable microcontroller 6.
  • Exemplarily, the user can write code so that the first tilt stage 1 and the second tilt stage 2 can, by means of the power supply system and the microcontroller 6, electrically move the cameras mounted on them along the z direction of the slide rail 3 to achieve precise displacement.
  • Exemplarily, the first camera may be an under-screen camera installed on a first electronic device, and the second camera may be installed on a second electronic device at the same relative position as the first camera occupies on the first electronic device.
  • As shown in FIG. 1(A) and (B), the installation position of the under-screen camera 11 on the electronic device 10a (i.e., the first electronic device) is the same as the installation position of the camera 12 on the electronic device 10b (i.e., the second electronic device). Only in this way can the first camera and the second camera capture images at the same position using the same shooting angle.
  • It should be noted that, in this shooting stand, the components can be connected by means of conversion interfaces; the stand 5 and the slide rail 3 are connected by the conversion interface 4, and the first electronic device and the second electronic device can be fixed by the fixtures and dry-plate clamps provided on the first tilt stage 1 and the second tilt stage 2 to increase their stability.
  • It should also be noted that the straight line on which the lower edges of the two electronic devices lie is parallel to the direction of the slide rail 3.
  • Exemplarily, when the user controls the first tilt stage 1 and the second tilt stage 2 on the shooting stand in the automatic control mode, the microcontroller 6 can be programmed according to the parameters of the shooting stand so that it can precisely translate the first tilt stage 1 and the second tilt stage 2 on the slide rail 3; the stand 5 is fixed and the pan-tilt head is adjusted so that its tilt is as close to 0 as possible (a slight tilt does not affect the accuracy of data acquisition), and the conversion interface 4 is used to mount the slide rail 3, to which the power supply system and the microcontroller 6 are connected, on the stand 5.
  • Afterwards, the first electronic device and the second electronic device are fixed on the first tilt stage 1 and the second tilt stage 2, respectively. Before actual shooting, the shooting stand needs to be adjusted and calibrated so that each electronic device can be moved in turn to the same position along the slide rail 3 and keep the same posture.
  • First, the first tilt stage 1 is fixed and connected to a computer; with the help of image acquisition software, the image captured by the first camera is obtained and saved on the computer. The power supply and the microcontroller 6 then translate the second tilt stage 2 on the slide rail 3 to move the second electronic device to the same shooting position, and the second tilt stage 2 is adjusted so that the second camera can capture an image as close as possible to the image captured by the first camera; the image captured by the second camera is obtained with the same image acquisition software, which completes the calibration.
  • By continuously changing the height of the stand 5 and the photographed scene and environment, multiple groups of image pairs containing blurred under-screen images and clear images can be obtained, and the image matching algorithm can then be used to obtain pixel-level corresponding training data for the subsequent deep learning image restoration stage.
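  • The acquisition and calibration sequence described above could be scripted roughly as follows; the Stage class and its methods are placeholders standing in for the power supply system and microcontroller 6 (no real hardware API is implied), so this is only a sketch of the control flow.

```python
class Stage:
    """Placeholder for one tilt stage driven via the power supply system and microcontroller 6.
    A real implementation would send commands to the hardware instead of printing."""
    def __init__(self, name: str):
        self.name = name

    def set_pitch(self, degrees: float):
        print(f"{self.name}: set pitch angle to {degrees} deg")

    def move_to(self, z_mm: float):
        print(f"{self.name}: translate to z = {z_mm} mm on the slide rail")

    def capture(self) -> str:
        print(f"{self.name}: trigger camera and save frame")
        return f"frame from {self.name}"

def acquire_sample_pair(target_z_mm: float, target_pitch_deg: float, park_z_mm: float = 0.0):
    """Move tilt stage 1 into place and capture, move it away, then repeat with tilt stage 2
    at the same position and angle, yielding one (third image, fourth image) pair."""
    stage1 = Stage("tilt stage 1 (under-screen camera)")
    stage2 = Stage("tilt stage 2 (reference camera)")

    stage1.set_pitch(target_pitch_deg)
    stage1.move_to(target_z_mm)
    third_image = stage1.capture()
    stage1.move_to(park_z_mm)            # move the first stage out of the way

    stage2.set_pitch(target_pitch_deg)
    stage2.move_to(target_z_mm)          # same position, same shooting angle
    fourth_image = stage2.capture()
    return third_image, fourth_image
```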
  • The shooting stand provided by the embodiments of the present application can precisely control the pitch angles and positions of the first tilt stage and the second tilt stage, so that the first camera and the second camera can collect images at the same position with the same shooting angle; as a result, in the image processing method embodiments, pixel-level correspondence can be achieved between the images captured by the first camera and the second camera.
  • FIG. 5 is a schematic diagram of a possible structure of an image processing apparatus provided by an embodiment of the present application. As shown in FIG. 5, the image processing apparatus 600 includes a collection module 601 and a processing module 602.
  • The collection module 601 is used to collect a first image through a first camera, where the first camera is an under-screen camera.
  • The processing module 602 is used to process the first image collected by the collection module 601 with an image processing model to obtain a second image, the image quality of the second image being higher than that of the first image.
  • The image processing model is obtained by training a preset model with a target sample set; each target sample in the target sample set includes two images; the two images in each target sample are images collected by a second camera and a third camera at the same position using the same shooting angle, the same shooting environment, the same shooting parameters, and for the same shooting object; the image quality of the image collected by the second camera is lower than that of the image collected by the third camera; and the second camera is an under-screen camera.
  • In this way, the preset model is trained by taking images captured by the under-screen camera (i.e., the second camera) and a normal camera (i.e., the third camera) at the same position and with the same shooting angle as the samples in the target sample set; the trained image processing model is then used to process the first image collected by the first camera to obtain a second image with higher image quality, which improves the image quality of photos taken by the under-screen camera.
  • Optionally, the image processing apparatus 600 further includes an acquisition module 603 and a training module 604. The acquisition module 603 is used to acquire the target sample set; the training module 604 is used to train the preset model with the target sample set acquired by the acquisition module 603 to obtain the image processing model. The target sample includes a third image and a fourth image; the third image is an image collected by the second camera, and the fourth image is an image collected by the third camera; the third image and the fourth image are images obtained by the second camera and the third camera, respectively, at the same position using the same shooting parameters for the same target.
  • In this way, only after the image processing apparatus trains the preset model with the target sample set and obtains the image processing model can it process images captured by the under-screen camera, thereby obtaining images with higher image quality.
  • Optionally, each target sample in the target sample set is a sample collected under different shooting conditions, where the shooting conditions include at least one of the following: the shooting object, the shooting background, the environmental parameters of the shooting environment, or the shooting parameters of the shooting equipment.
  • In this way, the number of repeated samples in the target sample set can be reduced, and the training efficiency of the model can be improved.
  • Optionally, the image processing apparatus 600 further includes a matching module 605. The matching module 605 is configured to use a grayscale-based image matching algorithm to match the third image and the fourth image to obtain a matched third image and fourth image, where the pixels of the matched third image and fourth image correspond.
  • In this way, before the target sample set is used to train the preset model, image matching is performed on the samples in the sample set so that the two images in each sample are matched at the pixel level, thereby meeting the image requirements of the preset model during training and making the image quality of images produced by the trained image processing model from under-screen captures higher.
  • Optionally, the matching module 605 is specifically used to match the image of a preset area in the third image against the fourth image to obtain the matched third image and fourth image; or the matching module 605 is specifically used to match the image of a preset area in the fourth image against the third image to obtain the matched third image and fourth image; where the size of the preset area is the image size of the third image or the fourth image with the edges reduced by a preset number of pixels.
  • Optionally, the acquisition module 603 is specifically configured to divide the third image into M third sub-images and divide the fourth image into M fourth sub-images, the M third sub-images corresponding one-to-one to the M fourth sub-images; the acquisition module 603 is further specifically configured to take a target third sub-image among the M third sub-images and the fourth sub-image corresponding to that target third sub-image among the M fourth sub-images as one sample in the target sample set, thereby obtaining the target sample set.
  • In this way, the capacity of the sample set can be greatly expanded, and the requirements on computer configuration in the subsequent training process can be reduced.
  • The image processing apparatus in this embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device.
  • Exemplarily, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS) device, a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
  • The image processing apparatus in this embodiment of the present application may be an apparatus having an operating system.
  • The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
  • The image processing apparatus provided in this embodiment of the present application can implement each process implemented by the image processing apparatus in the method embodiments of FIG. 2 and FIG. 3; to avoid repetition, details are not described here.
  • In the image processing apparatus provided by the embodiments of the present application, images collected by the under-screen camera and a normal camera at the same position with the same shooting angle are taken as samples in the sample set; then, by changing the shooting conditions or replacing the shooting object multiple times, a sample set containing N samples is obtained.
  • To meet the requirements of the preset model on samples, after the samples are obtained, the two images in each sample can be matched at the pixel level, and the matched sample set is used to train the preset model.
  • Afterwards, the trained image processing model is used to process images collected by the under-screen camera to obtain images with higher image quality, which improves the image quality of photos taken by the under-screen camera.
  • Optionally, an embodiment of the present application further provides an electronic device, including a processor 110, a memory 109, and a program or instruction stored in the memory 109 and executable on the processor 110; when the program or instruction is executed by the processor 110, the processes of the above image processing method embodiments are implemented and the same technical effects can be achieved, which are not repeated here to avoid repetition.
  • It should be noted that the electronic devices in the embodiments of the present application include the aforementioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 6 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present application.
  • The electronic device 100 includes, but is not limited to, components such as a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
  • Those skilled in the art can understand that the electronic device 100 may also include a power source (such as a battery) for supplying power to the various components; the power source may be logically connected to the processor 110 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system.
  • The structure of the electronic device shown in FIG. 6 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown in the figure, combine some components, or use a different arrangement of components, which is not repeated here.
  • The input unit 104 is used to collect a first image through a first camera, where the first camera is an under-screen camera; the processor 110 is used to process the first image collected by the input unit 104 with an image processing model to obtain a second image, the image quality of the second image being higher than that of the first image. The image processing model is obtained by training a preset model with a target sample set; each target sample in the target sample set includes two images; the two images in each target sample are images collected by a second camera and a third camera at the same position using the same shooting angle, the same shooting environment, the same shooting parameters, and for the same shooting object; the image quality of the image collected by the second camera is lower than that of the image collected by the third camera; and the second camera is an under-screen camera.
  • Optionally, the processor 110 is configured to acquire the target sample set and to train the preset model with the target sample set to obtain the image processing model; the target sample includes a third image and a fourth image, where the third image is an image collected by the second camera, the fourth image is an image collected by the third camera, and the third image and the fourth image are images obtained by the second camera and the third camera, respectively, at the same position using the same shooting parameters for the same target.
  • Optionally, the processor 110 is configured to use a grayscale-based image matching algorithm to match the third image and the fourth image to obtain a matched third image and fourth image, where the pixels of the matched third image and fourth image correspond.
  • Optionally, the processor 110 is specifically configured to match the image of a preset area in the third image against the fourth image to obtain the matched third image and fourth image; or the processor 110 is specifically configured to match the image of a preset area in the fourth image against the third image to obtain the matched third image and fourth image; where the size of the preset area is the image size of the third image or the fourth image with the edges reduced by a preset number of pixels.
  • Optionally, the processor 110 is specifically configured to divide the third image into M third sub-images and divide the fourth image into M fourth sub-images, the M third sub-images corresponding one-to-one to the M fourth sub-images; the processor 110 is further specifically configured to take a target third sub-image among the M third sub-images and the fourth sub-image corresponding to that target third sub-image among the M fourth sub-images as one sample in the target sample set, thereby obtaining the target sample set.
  • In the electronic device provided by the embodiments of the present application, images collected by the under-screen camera and a normal camera at the same position with the same shooting angle are taken as samples in the sample set; then, by changing the shooting conditions or replacing the shooting object multiple times, a sample set containing N samples is obtained. To meet the requirements of the preset model on samples, after the samples are obtained, the two images in each sample can be matched at the pixel level, and the matched sample set is used to train the preset model. Afterwards, the trained image processing model is used to process images collected by the under-screen camera to obtain images with higher image quality, which improves the image quality of photos taken by the under-screen camera.
  • It should be understood that, in this embodiment of the present application, the input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processing unit 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like.
  • The user input unit 107 includes a touch panel 1071 and other input devices 1072; the touch panel 1071 is also called a touch screen and may include two parts, a touch detection device and a touch controller.
  • The other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not repeated here.
  • The memory 109 may be used to store software programs as well as various data, including but not limited to application programs and an operating system.
  • The processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, and application programs, and the modem processor mainly handles wireless communication; it can be understood that the modem processor may not be integrated into the processor 110.
  • Embodiments of the present application further provide a readable storage medium on which a program or instruction is stored; when the program or instruction is executed by a processor, the processes of the above image processing method embodiments are implemented and the same technical effects can be achieved, which are not repeated here to avoid repetition.
  • The processor is the processor in the electronic device described in the foregoing embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the processes of the above image processing method embodiments and achieve the same technical effects, which are not repeated here to avoid repetition.
  • It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, or the like.
  • From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • Based on this understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions to cause an electronic device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present application discloses an image processing method and apparatus, a shooting stand, an electronic device, and a readable storage medium, and belongs to the field of communication technologies. The method includes: collecting a first image through a first camera, where the first camera is an under-screen camera; and processing the first image with an image processing model to obtain a second image, where the image quality of the second image is higher than that of the first image. The image processing model is obtained by training a preset model with a target sample set; each target sample in the target sample set includes two images; the two images in each target sample are images collected by a second camera and a third camera at the same position using the same shooting angle, the same shooting environment, the same shooting parameters, and for the same shooting object.

Description

图像处理方法、装置、拍摄支架、电子设备及可读存储介质
相关申请的交叉引用
本申请主张在2021年01月25日在中国提交的中国专利申请号202110097229.1的优先权,其全部内容通过引用包含于此。
技术领域
本申请实施例涉及通信技术领域,尤其涉及一种图像处理方法、装置、拍摄支架、电子设备及可读存储介质。
背景技术
随着电子技术的进步,为了使用户体验更好,电子设备(例如,手机、平板等)的屏占比越来越高。
在相关技术中,为了提高电子设备的屏占比,通常采用“挖孔屏”、“水滴屏”等设计方案来减小前置摄像头对屏占比的影响。更有甚者,采用屏下摄像头的设计方案,极大地提高了电子设备的屏占比。
然而,由于屏下摄像头的设计方案,摄像头位于屏幕下方,受限于屏幕的遮挡,屏下摄像头拍摄的图像的图像质量较差。
发明内容
本申请实施例的目的是提供一种图像处理方法、装置、拍摄支架、电子设备及可读存储介质,能够解决屏下摄像头拍摄的图像的图像质量较差的问题。
第一方面,本申请实施例提供一种图像处理方法,该方法包括:通过第一摄像头采集第一图像,第一摄像头为屏下摄像头;使用图像处理模型处理第一图像,得到第二图像,第二图像的图像质量高于第一图像的图像质量;其中,图像处理模型为采用目标样本集对预设模型训练后得到的;目标样本集的每个目标样本包括两个图像;每个目标样本中的两个图像分别为第二摄像头和第三摄像头在同一位置采用相同的拍摄角度、相同的拍摄环境、相同的拍摄参数、针对同一拍摄对象采集的图像,且第二摄像头采集的图像的图像质量低于第三摄像头采集的图像的图像质量;第二摄像头为屏 下摄像头。
第二方面,本申请实施例还提供了一种图像处理装置,该装置包括采集模块和处理模块;采集模块,用于通过第一摄像头采集第一图像,第一摄像头为屏下摄像头;处理模块,用于使用图像处理模型处理采集模块采集的第一图像,得到第二图像,第二图像的图像质量高于第一图像的图像质量;其中,图像处理模型为采用目标样本集对预设模型训练后得到的;目标样本集的每个目标样本包括两个图像;每个目标样本中的两个图像分别为第二摄像头和第三摄像头在同一位置采用相同的拍摄角度、相同的拍摄环境、相同的拍摄参数、针对同一拍摄对象采集的图像,且第二摄像头采集的图像的图像质量低于第三摄像头采集的图像的图像质量;第二摄像头为屏下摄像头。
第三方面,本申请实施例提供了一种拍摄支架,包括:支架、与支架连接的滑轨,以及设置在滑轨上的第一俯仰台和第二俯仰台,第一俯仰台用于支撑第一摄像头,第二俯仰台用于支撑第二摄像头;第一摄像头用于采集第一目标图像;第二摄像头用于采集第二目标图像;其中,第一目标图像和第二目标图像为:第一摄像头和第二摄像头在同一位置采用相同的拍摄角度、相同的拍摄环境、相同的拍摄参数、针对同一拍摄对象采集的图像;第一目标图像和第二目标图像为目标样本集中的一个样本,目标样本集用于训练预设模型。
第四方面,本申请实施例提供了一种电子设备,包括处理器、存储器及存储在该存储器上并可在该处理器上运行的程序或指令,该程序或指令被该处理器执行时实现如第一方面所述的图像处理方法的步骤。
第五方面,本申请实施例提供了一种可读存储介质,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现如第一方面所述的方法的步骤。
第六方面,本申请实施例提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现如第一方面所述的方法。
在本申请实施例中,通过将屏下摄像头(即第二摄像头)和正常摄像头(即第三摄像头)在同一位置采用相同的拍摄角度采集的图像作为目标样本集中的样本,训练预设模型。之后,使用训练后的图像处理模型对第一摄像头采集的第一图像进行处理, 得到图像质量较高的第二图像,提高了屏下摄像头拍摄照片的图像质量。
附图说明
图1是本申请实施例提供的一种采用屏下摄像头方案的电子设备;
图2是本申请实施例提供的一种图像处理方法流程示意图;
图3是本申请实施例提供的一种图像处理方法所应用的图像分割的示意图;
图4是本申请实施例提供的一种拍摄支架结构示意图;
图5是本申请实施例提供的一种图像处理装置结构示意图;
图6是本申请实施例提供的一种电子设备的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书中的术语“第一”、“第二”等是用于区别类似的对象,而不用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施,且“第一”、“第二”等所区分的对象通常为一类,并不限定对象的个数,例如第一对象可以是一个,也可以是多个。此外,说明书以及权利要求中“和/或”表示所连接对象的至少其中之一,字符“/”,一般表示前后关联对象是一种“或”的关系。
本申请实施例提供的图像处理方法可以应用于电子设备通过屏下摄像头进行拍摄的场景中。
示例性地,针对电子设备通过屏下摄像头进行拍摄的场景,在相关技术中,如图1中(A)所示,为一种摄像头位于屏幕下方的设计方案,在该方案中,由于屏下摄像头设置与屏幕下方,当屏下摄像头进行图像采集时,光线会先经过屏幕,由于屏幕对光线的阻挡以及光线穿过物体时会发生衍射现象,导致屏下摄像头拍摄的图像图像质量较差(例如,画面较暗,或出现光晕等情况)。
针对这一问题,在本申请实施例提供的技术方案中,通过将屏下摄像头和正常摄像头 在同一位置采用相同的拍摄角度采集的图像作为样本集中的样本,之后,通过改变拍摄条件、或者多次更换拍摄对象的方式,获取包含N个样本的样本集,并使用该样本集训练预设模型。之后,使用训练后的图像处理模型对屏下摄像头采集的图像进行处理,得到图像质量较高的图像,提高了屏下摄像头拍摄照片的图像质量。
下面结合附图,通过具体的实施例及其应用场景对本申请实施例提供的图像处理方法进行详细地说明。
如图2所示,本申请实施例提供的一种图像处理方法,该方法可以包括下述步骤201和步骤202:
步骤201、图像处理装置通过第一摄像头采集第一图像。
其中,上述第一摄像头为屏下摄像头。
步骤202、图像处理装置使用图像处理模型处理上述第一图像,得到第二图像。
其中,上述第二图像的图像质量高于第一图像的图像质量。上述图像处理模型为采用目标样本集对预设模型训练后得到的。上述目标样本集的每个目标样本包括两个图像。其中,每个目标样本中的两个图像分别为第二摄像头和第三摄像头在同一位置采用相同的拍摄角度、相同的拍摄环境、相同的拍摄参数、针对同一拍摄对象采集的图像,且第二摄像头采集的图像的图像质量低于第三摄像头采集的图像的图像质量。该第二摄像头为屏下摄像头。
示例性地,上述第二摄像头和第一摄像头可以是同一个摄像头,也可以是不同的摄像头。具体地,可以是不同电子设备上的摄像头。
示例性地,上述第二摄像头和第三摄像头可以通过本申请实施例提供的拍摄支架,来获取第二摄像头和第三摄像头在同一位置采用相同的拍摄角度、相同的拍摄环境、相同的拍摄参数、针对同一拍摄对象采集的图像。上述第三摄像头可以为与上述第一摄像头或第二摄像头规格相同,且没有屏幕遮挡的摄像头。具体地,上述摄像头规格可以包括摄像头的外形尺寸,摄像头焦距,视场角,光圈等。
示例性地,上述第二摄像头和第三摄像头采集的图像为针对同一拍摄对象采集的图像,该同一拍摄对象可以是静止的人物或者风景。
示例性地,图像处理装置在重复执行N次样本获取的过程,获取到包含N个目标样本 的目标样本集,其中,每个目标样本中均包含两个拍摄位置、拍摄角度、拍摄对象和拍摄参数相同的图像,区别仅在于摄像头的不同。为了防止样本重复,提高样本的复杂度,样本与样本之间的拍摄条件可以不同。具体地,目标样本集中的每个目标样本均为在不同拍摄条件下采集的样本,不同的目标样本,拍摄条件至少包括以下一项不同:拍摄对象,拍摄背景,拍摄环境的环境参数、拍摄装备的拍摄参数。
如此,通过将屏下摄像头(即第二摄像头)和正常摄像头(即第三摄像头)在同一位置采用相同的拍摄角度采集的图像作为目标样本集中的样本,训练预设模型。之后,使用训练后的图像处理模型对第一摄像头采集的第一图像进行处理,得到图像质量较高的第二图像,提高了屏下摄像头拍摄照片的图像质量。
可选地,在本申请实施例中,在使用上述图像处理模型处理第一摄像头拍摄的图像之前,图像处理装置需要对预设模型进行训练,进而得到上述图像处理模型。
示例性地,上述步骤202之前,本申请实施例提供的图像处理方法,还可以包括以下步骤203和步骤204:
步骤203、图像处理装置获取目标样本集。
示例性地,上述目标样本集中包括N个目标样本,每个目标样本中包括两个对应的图像。
步骤204、图像处理装置采用上述目标样本集训练上述预设模型,得到图像处理模型。
其中,上述目标样本包括:第三图像和第四图像;第三图像为第二摄像头采集的图像,第四图像为第三摄像头采集的图像;第三图像和第四图像为第二摄像头和第三摄像头在同一位置采用同样的拍摄参数分别对相同目标获取的图像。
示例性地,上述预设模型为具有图像处理功能的深度学习模型,对预设模型训练完成后,得到上述图像处理模型。
如此,图像处理装置使用目标样本集训练预设模型,并得到图像处理模型之后,才能使得图像处理装置处理屏下摄像头拍摄的图像,进而得到图像质量较高的图像。
进一步可选地,在本申请实施例中,为了提高样本复杂度,可以获取不同背景环境、噪声条件下的图像,从而增加样本的复杂度。上述目标样本集中的每个目标样本,均为图像处理装置在不同拍摄条件下采集的样本。
需要说明的是,若图像处理装置在相同的拍摄条件下能够获取不同的样本,也可以作为目标训练集中的目标样本。例如,在拍摄条件相同的情况下,对不同拍摄对象拍摄的图像的样本。
如此,可以减少目标样本集中重复样本的数量,提高训练模型的训练效率。
可选地,在本申请实施例中,上述目标样本集中的每个目标样本,均包括两个图像,该两个图像分别为第二摄像头采集的图像和第三摄像头采集的图像。图像处理装置在使用目标样本集训练预设模型之前,还需要对目标样本集中的每个目标样本进行处理。
需要说明的是,目标样本集中的每个目标样本均包括一个第二摄像头采集的图像,和一个第三摄像头采集的图像。
示例性地,上述步骤204之前,本申请实施例提供的图像处理方法,还可以包括以下步骤204a:
步骤204a、图像处理装置采用基于灰度的图像匹配算法,对上述第三图像和第四图像进行匹配,得到匹配后的第三图像和第四图像。
其中,匹配后的第三图像和第四图像的像素对应。
示例性地,上述基于灰度的图像匹配算法可以包括以下任一种:平均绝对差算法(mean absolute differences,mad)、绝对误差和算法(sum of absolute differences,sad)、误差平方和算法(sum of squared differences,ssd)、平均误差平方和算法(mean square differences,msd)、归一化积相关算法(normalized cross correlation,ncc)、序贯相似性检测算法(sequential similiarity detection algorithm,ssda)、哈达玛(hadamard)矩阵变换算法(sum of absolute transformed difference,satd)。
示例性地,本申请实施例中采用以下算法对上述第三图像和第四图像进行图像匹配:
公式一:
Figure PCTCN2022072577-appb-000001
其中,上述t与f分别为第二摄像头采集的图像和第三摄像头采集的图像;J与K分别为在用于图像匹配的匹配模板的图像的高和宽;R(x,y)为经过运算得到的互相关矩阵,取R为最大值时的xm、ym值即可得到与t匹配的图像f(xm+j,ym+k)。
示例性地,在对目标样本集进行处理后,上述步骤204可以包括以下步骤204b:
步骤204b、图像处理装置采用基于灰度的图像匹配算法匹配过的目标样本集训练上述预设模型。
如此,在使用目标样本集训练预设模型之前,对样本集中的样本进行图像匹配,使得每个样本中的两个图像均达到像素级匹配,进而满足预设模型在训练过程中对图像的要求,使得训练好的图像处理模型处理屏下摄像头采集的图像后得到的图像的图像质量更高。
进一步可选地,在本申请实施例中,为了提高图像匹配算法的匹配成功率,在进行匹配选取图像范围时,可在图像边缘留出6-8像素的范围,以使得在选择尽可能大的图像范围的同时保证匹配的成功率。
示例性地,上述步骤204a,还可以包括以下步骤204a1或者步骤204a2:
步骤204a1、图像处理装置将上述第三图像中预设区域的图像与第四图像进行匹配,得到匹配后的第三图像和第四图像。
步骤204a2、图像处理装置将上述第四图像中预设区域的图像与第三图像进行匹配,得到匹配后的第三图像和第四图像。
其中,上述预设区域的大小为:在上述第三图像或第四图像的图像大小的基础上,边缘减少预设数量像素之后的图像大小。
示例性地,由于上述第三图像和第四图像的图像大小可能相同,因此,以第三图像为基础或以第四图像为基础均可。
示例性地,在图像匹配过程中,需要用到匹配模板,该匹配模板为上述预设区域的图像,还匹配模板的高和宽分别为上述公式一中的J和K。
需要说明的是,使用范围较大的匹配模板,是为了提高第三图像和第四图像的匹配程度,以使得第三图像和第四图像能够达到像素级匹配的程度。进而使得目标样本集中的每个目标样本中的两个图像,均能满足像素级对应的要求。
如此,在图像匹配的过程中,使用图像范围较大的匹配模板,可以在选择尽可能大的图像范围的同时,保证匹配的成功率提高匹配的成功率。
可选地,在本申请实施例中,为了减小目标样本集中样本图像收集的工作量,同时大 幅增加样本的数量,可以将每个样本中的图像进行分割,进而将一个样本分割为多个样本。
示例性地,上述步骤203,可以包括以下步骤203a1和步骤203a2:
步骤203a1、图像处理装置将上述第二摄像头采集的第三图像分割为M个第三子图像,并将上述第三摄像头采集的第四图像分割为M个第四子图像。
其中,上述M个第三子图像与M个第四子图像一一对应,一个第三子图像对应一个第四子图像。
步骤203a2、图像处理装置将上述M个第三子图像中的目标第三子图像,以及M个第四子图像中与该目标第三子图像对应的第四子图像作为目标样本集中的一个样本,得到目标样本集。
示例性地,图像处理装置在分割第三图像和第四图像时,分割的位置和数量相同,使得分割后的M个第三子图像中的每个第三子图像,均能在分割后的M个第四子图像中有对应的图像。
举例说明,如图3所示,为样本中包含的两个图像(图像31和图像32)。将图像31分割为4个图像(图像a1、a2、a3和a4),将图像32也分割为4个图像(图像b1、b2、b3和b4),并将分割后的样本一一对应(a1对应b1、a2对应b2、a3对应b3、a4对应b4),每两个对应的图像作为新的样本(例如,a1和b1可以作为一个新的样本)。其中,由于图像31和图像32的尺寸可能存在差异,为了保证匹配的成功率,图像31的分割位置31a和图像32的分割位置32a相同(即分割点与拍摄对象在图像中的相对位置相同)。和通过将一个样本可分为四个样本,可以将包含N个样本的目标样本集扩充为包含N*4个样本的样本集。
如此,将每个样本中的图像分割后,可在极大的扩充样本的容量同时,降低后续训练过程中对计算机配置的要求。
本申请实施例提供的图像处理方法,通过将屏下摄像头和正常摄像头在同一位置采用相同的拍摄角度采集的图像作为样本集中的样本,之后,通过改变拍摄条件、或者多次更换拍摄对象的方式,获取包含N个样本的样本集。为了满足预设模型对样本的要求,在获取到样本后,可对每个样本中的两个图像进行像素级匹配,并使用匹配后的样本集训练预设模型。之后,使用训练后的图像处理模型对屏下摄像头采集的图像进行处理,得到图像 质量较高的图像,提高了屏下摄像头拍摄照片的图像质量。
需要说明的是,本申请实施例提供的图像处理方法,执行主体可以为图像处理装置,或者该图像处理装置中的用于执行图像处理方法的控制模块。本申请实施例中以图像处理装置执行图像处理方法为例,说明本申请实施例提供的图像处理装置。
需要说明的是,本申请实施例中,上述各个方法附图所示的。图像处理方法均是以结合本申请实施例中的一个附图为例示例性地说明的。具体实现时,上述各个方法附图所示的图像处理方法还可以结合上述实施例中示意的其它可以结合的任意附图实现,此处不再赘述。
如图4所示,本申请实施例提供的一种用于拍摄的拍摄支架,该拍摄支架包括:支架5、与支架5连接的滑轨3,以及设置在滑轨3上的第一俯仰台1和第二俯仰台2。上述第一俯仰台1用于支撑第一摄像头,上述第二俯仰台2用于支撑第二摄像头。上述第一摄像头用于采集第一目标图像,上述第二摄像头用于采集第二目标图像。该第一目标图像为上述图像处理方法实施例中的第三图像,该第二目标图像为上述图像处理方法实施例中的第四图像。
其中,上述第一目标图像和第二目标图像为:上述第一摄像头和第二摄像头在同一位置采用相同的拍摄角度、相同的拍摄环境、相同的拍摄参数、针对同一拍摄对象采集的图像。上述第一目标图像和第二目标图像为目标样本集中的一个样本,该目标样本集用于训练预设模型。
示例性地,上述第一摄像头和第二摄像头与如图2所示的图像处理方法中涉及的第一摄像头和第二摄像头不同。具体地,上述第一摄像头可以与如图2所示的图像处理方法中涉及的第二摄像头相同,上述第二摄像头可以与如图2所示的图像处理方法中涉及的第三摄像头相同。
示例性地,上述拍摄支架包括手动控制和自动控制两种控制方式。用户可以手动或通过程序自动调节第一俯仰台1和第二俯仰台2在x-y平面内旋转以校准上述两个俯仰台的俯仰角度。为了实现第一摄像头和第二摄像头能够在同一位置采用相同的拍摄角度采集的图像,可以将第一俯仰台1调整至目标角度以及移动至目标位置后,控制第一摄像头拍摄图像。之后,将第一俯仰台1移开,再将第二俯仰台调整至目标角度以及移动至目标位置 后,控制第二摄像头拍摄图像。
具体地,上述拍摄支架还包括:控制模块,该控制模块,用于控制第一俯仰台移动至目标位置,并调整第一俯仰台至目标角度,以及控制第一摄像头采集第一目标图像;该控制模块,还用于控制第二俯仰台移动至目标位置,并调整第二俯仰台至目标角度,以及控制第二摄像头采集第二目标图像。
可选地,在本申请实施例中,为了方便拆卸以及实现自动控制功能,上述拍摄支架还包括:滑轨3与支架5的转换接口4、电源***及可编程单片机6。用户可以通过电源***及可编程单片机6实现对上述第一俯仰台1和第二俯仰台2的精确控制。
示例性地,用户可以编写代码使第一俯仰台1和第二俯仰台2能够借助电源***6在滑轨3的z方向上对其搭载的摄像头进行电动控制以实现精确位移。
示例性地,上述第一摄像头可以为安装在第一电子设备上的屏下摄像头,上述第二摄像头可以为安装在第二电子设备上,与第一摄像头相对于第一电子设备的相对位置相同的位置上。如图1中(A)和(B)所示,屏下摄像头11(即上述第一电子设备)在电子设备10a上的安装位置,与摄像头12(即上述第二电子设备)在电子设备10b上的安装位置相同。如此,才能使得第一摄像头和第二摄像头在同一位置采用相同的拍摄角度采集图像。
需要说明的是,在本拍摄支架中,各装置之间可以借助转换接口来实现连接,支架5与导轨3借助转换接口4进行连接,上述第一电子设备和第二电子设备可以通过第一俯仰台1和第二俯仰台2上设置的固定件和干板夹进行固定,以增加其稳定性。需要注意的是,两个电子设备的下边缘所在的直线与导轨3方向平行。
示例性地,当用户通过自动控制方式控制拍摄支架上的第一俯仰台1和第二俯仰台2时,可以根据拍摄支架的参数对单片机6进行编程,使其能够控制滑轨3上的第一俯仰台1和第二俯仰台2进行精确平移,固定支架5并调节其云台使云台倾斜度尽可能为0(略微倾斜不影响数据获取的准确性),利用转换接口4将连接有电源***和单片机6的导轨3安装在支架5上。
之后,将第一电子设备和第二电子设备分别固定在第一俯仰台1和第二俯仰台2上。在进行实际拍摄之前首先需要对拍摄支架进行调节和校准,使个电子设备能够先后借助滑轨3平移到相同的位置保持相同姿态。首先固定第一俯仰台1,并将其连接至计算机借助 图像获取软件,得到该第一摄像头所拍摄的图像并保存在计算机中,通过电源与单片机6控制第二俯仰台2在滑轨3上平移,使第二电子设备移动到相同的拍摄位置,调节第二俯仰台2,使第二摄像头能够拍摄到与第一摄像头拍摄的图像尽可能相同的图像,并借助上述图像获取软件得到第二摄像头拍摄的图像,即可完成标定工作。通过不断更改支架5的高度和拍摄景物与环境,即可获得多组包含屏下模糊图像和清晰图像的图像对,再使用图像匹配算法即可获取像素级对应的训练数据用于后续深度学习的图像恢复环节。
本申请实施例提供的拍摄支架,可以通过对第一俯仰台和第二俯仰台的俯仰角度及位置的精确控制,实现第一摄像头和第二摄像头能够在同一位置采用相同的拍摄角度采集的图像的目的,使得图像处理方法实施例中,能够对第一摄像头和第二摄像头拍摄的图像实现像素级对应。
图5为实现本申请实施例提供的一种图像处理装置的可能的结构示意图,如图5所示,图像处理装置600包括:采集模块601和处理模块602;采集模块601,用于通过第一摄像头采集第一图像,第一摄像头为屏下摄像头;处理模块602,用于使用图像处理模型处理采集模块601采集的第一图像,得到第二图像,第二图像的图像质量高于第一图像的图像质量;其中,图像处理模型为采用目标样本集对预设模型训练后得到的;目标样本集的每个目标样本包括两个图像;每个目标样本中的两个图像分别为第二摄像头和第三摄像头在同一位置采用相同的拍摄角度、相同的拍摄环境、相同的拍摄参数、针对同一拍摄对象采集的图像,且第二摄像头采集的图像的图像质量低于第三摄像头采集的图像的图像质量;第二摄像头为屏下摄像头。
如此,通过将屏下摄像头(即第二摄像头)和正常摄像头(即第三摄像头)在同一位置采用相同的拍摄角度采集的图像作为目标样本集中的样本,训练预设模型。之后,使用训练后的图像处理模型对第一摄像头采集的第一图像进行处理,得到图像质量较高的第二图像,提高了屏下摄像头拍摄照片的图像质量。
可选地,图像处理装置600还包括:获取模块603和训练模块604;获取模块603,用于获取目标样本集;训练模块604,用于采用获取模块603获取的目标样本集训练预设模型,得到图像处理模型;其中,目标样本包括:第三图像和第四图像;第三图像为第二摄像头采集的图像,第四图像为第三摄像头采集的图像;第三图像和第四图像为第二摄像 头和第三摄像头在同一位置采用同样的拍摄参数分别对相同目标获取的图像。
如此,图像处理装置使用目标样本集训练预设模型,并得到图像处理模型之后,才能使得图像处理装置处理屏下摄像头拍摄的图像,进而得到图像质量较高的图像。
可选地,目标样本集中的每个目标样本均为在不同拍摄条件下采集的样本;其中,拍摄条件包括以下至少一项:拍摄对象,拍摄背景,拍摄环境的环境参数、拍摄装备的拍摄参数。
如此,可以减少目标样本集中重复样本的数量,提高训练模型的训练效率。
可选地,图像处理装置600还包括:匹配模块605;匹配模块605,用于采用基于灰度的图像匹配算法,对第三图像和第四图像进行匹配,得到匹配后的第三图像和第四图像;其中,匹配后的第三图像和第四图像的像素对应。
如此,在使用目标样本集训练预设模型之前,对样本集中的样本进行图像匹配,使得每个样本中的两个图像均达到像素级匹配,进而满足预设模型在训练过程中对图像的要求,使得训练好的图像处理模型处理屏下摄像头采集的图像后得到的图像的图像质量更高。
可选地,匹配模块605,具体用于将第三图像中预设区域的图像与第四图像进行匹配,得到匹配后的第三图像和第四图像;或者,匹配模块605,具体用于将第四图像中预设区域的图像与第三图像进行匹配,得到匹配后的第三图像和第四图像;其中,预设区域的大小为:在第三图像或第四图像的图像大小的基础上,边缘减少预设数量像素之后的图像大小。
如此,在图像匹配的过程中,使用图像范围较大的匹配模板,可以在选择尽可能大的图像范围的同时,保证匹配的成功率提高匹配的成功率。
可选地,获取模块603,具体用于将第三图像分割为M个第三子图像,并将第四图像分割为M个第四子图像;M个第三子图像与M个第四子图像一一对应;获取模块603,具体还用于将M个第三子图像中的目标第三子图像,以及M个第四子图像中与目标第三子图像对应的第四子图像作为目标样本集中的一个样本,得到目标样本集。
如此,将每个目标样本中的图像分割后,可在极大的扩充样本的容量同时,降低后续训练过程中对计算机配置的要求。
The image processing apparatus in this embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. Exemplarily, the mobile electronic device may be a mobile phone, a tablet computer, a laptop computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine, which is not specifically limited in this embodiment of the present application.
The image processing apparatus in this embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in this embodiment of the present application.
The image processing apparatus provided in this embodiment of the present application can implement the processes implemented by the image processing apparatus in the method embodiments of FIG. 2 and FIG. 3. To avoid repetition, details are not described here again.
With the image processing apparatus provided in this embodiment of the present application, images collected by the under-screen camera and the normal camera at the same position using the same shooting angle are used as samples in the sample set; then, by changing the shooting conditions or repeatedly changing the shooting object, a sample set containing N samples is obtained. To meet the requirements of the preset model on the samples, after the samples are obtained, pixel-level matching may be performed on the two images in each sample, and the matched sample set is used to train the preset model. The trained image processing model is then used to process the image collected by the under-screen camera to obtain an image of higher image quality, which improves the image quality of photos taken by the under-screen camera.
Optionally, an embodiment of the present application further provides an electronic device, including a processor 110, a memory 109, and a program or instruction stored in the memory 109 and executable on the processor 110. When the program or instruction is executed by the processor 110, the processes of the foregoing image processing method embodiments are implemented, and the same technical effects can be achieved. To avoid repetition, details are not described here again.
It should be noted that the electronic device in this embodiment of the present application includes the mobile electronic devices and non-mobile electronic devices described above.
FIG. 6 is a schematic diagram of a hardware structure of an electronic device implementing the embodiments of the present application.
The electronic device 100 includes, but is not limited to, components such as a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art can understand that the electronic device 100 may further include a power supply (such as a battery) for supplying power to the components. The power supply may be logically connected to the processor 110 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The structure of the electronic device shown in FIG. 6 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine some components, or use a different component arrangement, which will not be described here again.
The input unit 104 is configured to collect a first image through a first camera, where the first camera is an under-screen camera. The processor 110 is configured to process, by using an image processing model, the first image collected by the input unit 104 to obtain a second image, where the image quality of the second image is higher than that of the first image. The image processing model is obtained by training a preset model with a target sample set; each target sample in the target sample set includes two images; the two images in each target sample are images collected by a second camera and a third camera, respectively, at the same position, using the same shooting angle, the same shooting environment, and the same shooting parameters, for the same shooting object, and the image quality of the image collected by the second camera is lower than that of the image collected by the third camera; the second camera is an under-screen camera.
Optionally, the processor 110 is configured to acquire the target sample set, and to train the preset model with the target sample set to obtain the image processing model; where the target sample includes a third image and a fourth image; the third image is an image collected by the second camera, and the fourth image is an image collected by the third camera; the third image and the fourth image are images of the same target acquired by the second camera and the third camera at the same position using the same shooting parameters.
Optionally, the processor 110 is configured to match the third image and the fourth image by using a grayscale-based image matching algorithm to obtain a matched third image and a matched fourth image, where the pixels of the matched third image and the matched fourth image correspond to each other.
Optionally, the processor 110 is specifically configured to match the image of a preset region in the third image against the fourth image to obtain the matched third image and fourth image; or the processor 110 is specifically configured to match the image of a preset region in the fourth image against the third image to obtain the matched third image and fourth image; where the size of the preset region is the image size obtained by reducing the edges of the third image or the fourth image by a preset number of pixels.
Optionally, the processor 110 is specifically configured to split the third image into M third sub-images and split the fourth image into M fourth sub-images, where the M third sub-images correspond one-to-one to the M fourth sub-images; the processor 110 is further specifically configured to take a target third sub-image among the M third sub-images and the fourth sub-image, among the M fourth sub-images, corresponding to the target third sub-image as one sample in the target sample set, to obtain the target sample set.
With the electronic device provided in this embodiment of the present application, images collected by the under-screen camera and the normal camera at the same position using the same shooting angle are used as samples in the sample set; then, by changing the shooting conditions or repeatedly changing the shooting object, a sample set containing N samples is obtained. To meet the requirements of the preset model on the samples, after the samples are obtained, pixel-level matching may be performed on the two images in each sample, and the matched sample set is used to train the preset model. The trained image processing model is then used to process the image collected by the under-screen camera to obtain an image of higher image quality, which improves the image quality of photos taken by the under-screen camera.
It should be understood that, in this embodiment of the present application, the input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touchscreen. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which will not be described here again. The memory 109 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 110.
An embodiment of the present application further provides a readable storage medium on which a program or instruction is stored. When the program or instruction is executed by a processor, the processes of the foregoing image processing method embodiments are implemented, and the same technical effects can be achieved. To avoid repetition, details are not described here again.
The processor is the processor in the electronic device described in the foregoing embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the processes of the foregoing image processing method embodiments, with the same technical effects. To avoid repetition, details are not described here again.
It should be understood that the chip mentioned in this embodiment of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a list of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the statement "including a ..." does not preclude the existence of additional identical elements in the process, method, article, or apparatus that includes the element. In addition, it should be pointed out that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, and may also include performing the functions in a substantially simultaneous manner or in the reverse order according to the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the description of the foregoing implementations, those skilled in the art can clearly understand that the methods of the foregoing embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, although in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present application, in essence or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing an electronic device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the foregoing specific implementations. The foregoing specific implementations are merely illustrative rather than restrictive. Inspired by the present application, those of ordinary skill in the art can also devise many other forms without departing from the purport of the present application and the scope protected by the claims, all of which fall within the protection of the present application.

Claims (17)

  1. An image processing method, the method comprising:
    collecting a first image through a first camera, wherein the first camera is an under-screen camera;
    processing the first image by using an image processing model to obtain a second image, wherein an image quality of the second image is higher than an image quality of the first image;
    wherein the image processing model is obtained by training a preset model with a target sample set; each target sample in the target sample set comprises two images; the two images in each target sample are images collected by a second camera and a third camera, respectively, at the same position, using the same shooting angle, the same shooting environment, and the same shooting parameters, for the same shooting object, and an image quality of the image collected by the second camera is lower than an image quality of the image collected by the third camera; the second camera is an under-screen camera.
  2. The method according to claim 1, wherein before the processing the first image by using an image processing model, the method further comprises:
    acquiring the target sample set;
    training the preset model with the target sample set to obtain the image processing model;
    wherein the target sample comprises a third image and a fourth image; the third image is an image collected by the second camera, and the fourth image is an image collected by the third camera.
  3. The method according to claim 2, wherein each target sample in the target sample set is a sample collected under different shooting conditions, wherein the shooting conditions comprise at least one of the following: a shooting object, a shooting background, an environmental parameter of a shooting environment, and a shooting parameter of shooting equipment.
  4. The method according to claim 2, wherein before the training the preset model with the target sample set, the method further comprises:
    matching the third image and the fourth image by using a grayscale-based image matching algorithm to obtain a matched third image and a matched fourth image, wherein pixels of the matched third image and the matched fourth image correspond to each other.
  5. The method according to claim 4, wherein the matching the third image and the fourth image by using a grayscale-based image matching algorithm to obtain a matched third image and a matched fourth image comprises:
    matching an image of a preset region in the third image against the fourth image to obtain the matched third image and the matched fourth image;
    or,
    matching an image of a preset region in the fourth image against the third image to obtain the matched third image and the matched fourth image;
    wherein a size of the preset region is an image size obtained by reducing edges of the third image or the fourth image by a preset number of pixels.
  6. The method according to any one of claims 2 to 5, wherein the acquiring the target sample set comprises:
    splitting the third image into M third sub-images, and splitting the fourth image into M fourth sub-images, wherein the M third sub-images correspond one-to-one to the M fourth sub-images;
    taking a target third sub-image among the M third sub-images and a fourth sub-image, among the M fourth sub-images, corresponding to the target third sub-image as one sample in the target sample set, to obtain the target sample set.
  7. An image processing apparatus, the apparatus comprising: a collection module and a processing module;
    the collection module is configured to collect a first image through a first camera, wherein the first camera is an under-screen camera;
    the processing module is configured to process, by using an image processing model, the first image collected by the collection module to obtain a second image, wherein an image quality of the second image is higher than an image quality of the first image;
    wherein the image processing model is obtained by training a preset model with a target sample set; each target sample in the target sample set comprises two images; the two images in each target sample are images collected by a second camera and a third camera, respectively, at the same position, using the same shooting angle, the same shooting environment, and the same shooting parameters, for the same shooting object, and an image quality of the image collected by the second camera is lower than an image quality of the image collected by the third camera; the second camera is an under-screen camera.
  8. The apparatus according to claim 7, wherein the image processing apparatus further comprises an acquisition module and a training module;
    the acquisition module is configured to acquire the target sample set before the processing module processes the first image by using the image processing model;
    the training module is configured to train the preset model with the target sample set to obtain the image processing model;
    wherein the target sample comprises a third image and a fourth image; the third image is an image collected by the second camera, and the fourth image is an image collected by the third camera.
  9. The apparatus according to claim 8, wherein each target sample in the target sample set is a sample collected under different shooting conditions, wherein the shooting conditions comprise at least one of the following: a shooting object, a shooting background, an environmental parameter of a shooting environment, and a shooting parameter of shooting equipment.
  10. The apparatus according to claim 8, wherein the image processing apparatus further comprises a matching module;
    the matching module is configured to, before the training module trains the preset model with the target sample set, match the third image and the fourth image by using a grayscale-based image matching algorithm to obtain a matched third image and a matched fourth image, wherein pixels of the matched third image and the matched fourth image correspond to each other.
  11. The apparatus according to claim 10, wherein the matching module is specifically configured to match an image of a preset region in the third image against the fourth image to obtain the matched third image and the matched fourth image; or match an image of a preset region in the fourth image against the third image to obtain the matched third image and the matched fourth image; wherein a size of the preset region is an image size obtained by reducing edges of the third image or the fourth image by a preset number of pixels.
  12. The apparatus according to any one of claims 8 to 11, wherein the acquisition module is specifically configured to split the third image into M third sub-images and split the fourth image into M fourth sub-images, wherein the M third sub-images correspond one-to-one to the M fourth sub-images; and to take a target third sub-image among the M third sub-images and a fourth sub-image, among the M fourth sub-images, corresponding to the target third sub-image as one sample in the target sample set, to obtain the target sample set.
  13. A photographing stand, comprising: a support, a slide rail connected to the support, and a first pitch platform and a second pitch platform arranged on the slide rail, wherein the first pitch platform is configured to support a first camera, the second pitch platform is configured to support a second camera, the first camera is configured to collect a first target image, and the second camera is configured to collect a second target image;
    wherein the first target image and the second target image are images collected by the first camera and the second camera at the same position, using the same shooting angle, the same shooting environment, and the same shooting parameters, for the same shooting object; the first target image and the second target image form one sample in a target sample set, and the target sample set is used to train a preset model.
  14. An electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein when the program or instruction is executed by the processor, the steps of the image processing method according to any one of claims 1 to 6 are implemented.
  15. A readable storage medium, wherein a program or instruction is stored on the readable storage medium, and when the program or instruction is executed by a processor, the steps of the image processing method according to any one of claims 1 to 6 are implemented.
  16. A computer program product, wherein the computer program product is executed by at least one processor to implement the image processing method according to any one of claims 1 to 6.
  17. A user equipment (UE), wherein the UE is configured to perform the image processing method according to any one of claims 1 to 6.
PCT/CN2022/072577 2021-01-25 2022-01-18 图像处理方法、装置、拍摄支架、电子设备及可读存储介质 WO2022156683A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110097229.1 2021-01-25
CN202110097229.1A CN112887598A (zh) 2021-01-25 2021-01-25 图像处理方法、装置、拍摄支架、电子设备及可读存储介质

Publications (1)

Publication Number Publication Date
WO2022156683A1 true WO2022156683A1 (zh) 2022-07-28

Family

ID=76050941

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/072577 WO2022156683A1 (zh) 2021-01-25 2022-01-18 图像处理方法、装置、拍摄支架、电子设备及可读存储介质

Country Status (2)

Country Link
CN (1) CN112887598A (zh)
WO (1) WO2022156683A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112887598A (zh) * 2021-01-25 2021-06-01 维沃移动通信有限公司 图像处理方法、装置、拍摄支架、电子设备及可读存储介质
CN116416656A (zh) * 2021-12-29 2023-07-11 荣耀终端有限公司 基于屏下图像的图像处理方法、装置及存储介质
CN115580690B (zh) * 2022-01-24 2023-10-20 荣耀终端有限公司 图像处理的方法和电子设备
CN115565213B (zh) * 2022-01-28 2023-10-27 荣耀终端有限公司 图像处理方法及装置
CN114785908A (zh) * 2022-04-20 2022-07-22 Oppo广东移动通信有限公司 电子设备、电子设备的图像获取方法及计算机可读存储介质
CN115100054A (zh) * 2022-06-16 2022-09-23 昆山国显光电有限公司 一种显示装置及屏下拍照处理方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108924420B (zh) * 2018-07-10 2020-08-04 Oppo广东移动通信有限公司 图像拍摄方法、装置、介质、电子设备及模型训练方法
CN110880003B (zh) * 2019-10-12 2023-01-17 中国第一汽车股份有限公司 一种图像匹配方法、装置、存储介质及汽车
CN111311523B (zh) * 2020-03-26 2023-09-05 北京迈格威科技有限公司 图像处理方法、装置、***和电子设备

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9549101B1 (en) * 2015-09-01 2017-01-17 International Business Machines Corporation Image capture enhancement using dynamic control image
CN111107269A (zh) * 2019-12-31 2020-05-05 维沃移动通信有限公司 拍摄方法、电子设备及存储介质
CN111951192A (zh) * 2020-08-18 2020-11-17 义乌清越光电科技有限公司 一种拍摄图像的处理方法及拍摄设备
CN112887598A (zh) * 2021-01-25 2021-06-01 维沃移动通信有限公司 图像处理方法、装置、拍摄支架、电子设备及可读存储介质

Also Published As

Publication number Publication date
CN112887598A (zh) 2021-06-01

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22742162

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22742162

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.01.2024)
