CN110475068B - Image processing method and device - Google Patents


Info

Publication number
CN110475068B
Authority
CN
China
Prior art keywords
image
images
psf
lens
test
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910817865.XA
Other languages
Chinese (zh)
Other versions
CN110475068A
Inventor
罗天歌 (Luo Tiange)
范浩强 (Fan Haoqiang)
Current Assignee
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201910817865.XA
Publication of CN110475068A
Application granted
Publication of CN110475068B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N23/951: Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image processing method and device for a mobile terminal equipped with multiple cameras, where the lens of each camera stores noise-free PSF data calibrated in advance by a calibration model. The method comprises: controlling the multiple cameras to photograph the same object and generate multiple first images; deconvolving each first image with the PSF data of its lens to generate a second image for each lens; performing a primary alignment of image pixels on the second images according to the spatial position relationship among the lenses; and inputting the primarily aligned second images, together with the PSF data of the lenses, into a pre-trained pixel alignment model, which performs a secondary alignment of image pixels according to the PSF data and generates a third image with aligned pixels. The invention can align image pixels accurately.

Description

Image processing method and device
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to an image processing method and apparatus.
Background
The resolution of optical devices such as cameras used in terminals is generally limited by the size and cost constraints of the terminal. To obtain a high-resolution picture containing more detail, super-resolution techniques are needed to increase the resolution of the images the camera captures. Deconvolution using the point spread function (PSF) of the optical imaging system, such as the lens, is one method of obtaining a high-resolution image.
The PSF characterizes the response of the optical imaging system to a point source and is largely determined by the materials and construction of the system itself. Calibrating a precise, noise-free lens PSF requires an ideal calibration environment: a single point light source free of interference such as stray light, and an optical imaging system that satisfies the linearity assumption. An actual calibration environment, however, contains noise such as stray light, and the optical imaging system does not satisfy the linearity assumption.
Making the calibration environment ideal is costly, and there is currently no way to complete lens PSF calibration in a low-cost environment and thereby improve the resolution of images captured by the lens.
In addition, because the multiple cameras of a single phone have different PSFs and different spatial positions, the pixels corresponding to the same object differ in position across the high-resolution images generated by different cameras. The widely adopted image pixel alignment schemes fail to align pixels well (that is, pixels belonging to the same object but captured by different lenses should coincide in coordinates), so the edges of objects in the generated high-resolution image are unsmooth and unnatural.
Disclosure of Invention
The invention provides a lens calibration method and an image processing method to solve the problem in the related art that multiple cameras align the image pixels of the same object poorly, leaving object edges unsmooth.
To solve the above problem, according to one aspect of the present invention, the invention discloses an image processing method applied to a mobile terminal configured with a plurality of cameras, wherein the lens of each camera stores noise-free point spread function (PSF) data calibrated in advance by a calibration model, the method comprising:
controlling the plurality of cameras to shoot the same object to generate a plurality of first images;
performing deconvolution operation on the plurality of first images and the PSF data of the corresponding lenses respectively to generate second images corresponding to each lens;
performing primary alignment of image pixels on a plurality of second images corresponding to a plurality of lenses according to spatial position relations among the lenses of the plurality of cameras;
inputting the plurality of primarily aligned second images and the plurality of PSF data of the plurality of lenses into a pixel alignment model trained in advance, so that the pixel alignment model performs secondary alignment of image pixels on the plurality of primarily aligned second images according to the plurality of PSF data to generate a third image with aligned image pixels.
According to another aspect of the present invention, the present invention also discloses an image processing apparatus applied to a mobile terminal configured with a plurality of cameras, wherein lenses of the cameras respectively store noise-free point spread function PSF data calibrated by a calibration model in advance, the apparatus comprising:
the control module is used for controlling the plurality of cameras to shoot the same object to generate a plurality of first images;
the operation module is used for performing deconvolution operation on the plurality of first images and the PSF data of the corresponding lenses respectively to generate second images corresponding to the lenses;
the first alignment module is used for carrying out primary alignment on image pixels of a plurality of second images corresponding to a plurality of lenses according to the spatial position relation among the lenses of the plurality of cameras;
and the second alignment module is used for inputting the plurality of primarily aligned second images and the plurality of PSF data of the plurality of lenses into a pixel alignment model trained in advance, so that the pixel alignment model performs secondary alignment of image pixels on the plurality of primarily aligned second images according to the plurality of PSF data to generate a third image with aligned image pixels.
According to another aspect of the present invention, the invention also discloses a terminal, comprising: a memory, a processor, and an image processing program stored in the memory and executable on the processor, wherein the image processing program, when executed by the processor, implements the steps of the image processing method.
According to still another aspect of the present invention, the present invention also discloses a computer readable storage medium having stored thereon an image processing program which, when executed by a processor, implements the steps of the image processing method.
Compared with the prior art, the invention has the following advantages:
in the embodiment of the invention, because the lenses of the mobile terminal's cameras store noise-free PSF data calibrated in advance by the calibration model, that data can be used in real time when shooting. Specifically, the first images that the cameras capture of the same object are each deconvolved with the PSF data of the corresponding lens, yielding second images with greatly improved resolution. The second images are then aligned using both the spatial position relationships of the cameras and the calibrated PSFs of the lenses, so the resulting third image has a resolution far higher than any single lens provides, object edges in it are clearer and smoother, and accurate alignment of image pixels is ensured.
In addition, in the embodiment of the present invention, calibrating the PSF of a lens only requires inputting multiple sets of the lens's noise-containing first image data into a pre-trained calibration model, which performs PSF calibration on them and generates the lens's noise-free target PSF data. Calibration therefore need not be performed in an ideal environment: with the calibration model alone, the lens PSF can be calibrated accurately in a non-ideal environment, reducing the calibration cost of the lens PSF.
Drawings
FIG. 1 is a flow chart of the steps of one embodiment of a model training method of the present invention;
FIG. 2 is a schematic diagram of acquiring first PSF data of a lens in a low-cost environment according to the present invention;
FIG. 3 is a flowchart illustrating steps of a lens calibration method according to an embodiment of the present invention;
FIG. 4 is a flow chart of the steps of an embodiment of an image processing method of the present invention;
FIG. 5 is a flow chart of a method of the present invention for training the pixel alignment model of the embodiment of FIG. 4;
FIG. 6 is a flow chart of the present invention for processing an image based on a lens calibrated by PSF;
fig. 7 is a block diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to FIG. 1, a flow chart of the steps of one embodiment of the model training method of the present invention is shown.
The model trained by the model training method of the embodiment of the invention is referred to as a calibration model. The calibration model obtains the accurate PSF of a lens in a low-cost calibration environment, and because it is trained in advance it also guarantees the calibration speed of the lens PSF, making the whole calibration process efficient.
The method shown in fig. 1 may specifically include the following steps:
step 101, acquiring a training sample set, wherein the training sample set comprises a plurality of groups of first images of the same shot, which contain noise, and second PSF data of the shot, which do not contain noise;
optionally, when multiple groups of first images of a certain lens in the training sample set are obtained, multiple groups of images obtained by respectively shooting point light sources distributed at different positions each time by the camera can be obtained; then, a matrix sequence of the multiple groups of images is obtained, wherein the matrix sequence of the multiple groups of images is multiple groups of first images containing noise of a lens in the camera, that is, multiple groups of first PSF data containing noise of the lens.
Specifically, the camera can be controlled to shoot the same point light source for multiple times, and when the point light source is shot each time, the point light sources are distributed at different positions, so that images of multiple groups of point light sources are obtained, and matrix data of the images of the multiple groups of point light sources are multiple groups of first PSF data containing noise of the lens of the camera.
In the embodiment of the present invention, because the positions of the point light sources of the multiple sets of first PSF data of the same lens are different, so that the angles of the point light sources are more diverse, the utilization of the multiple sets of first PSF data can facilitate the neural network model to learn the possible nonlinear changes of the lens (i.e., the optical system) of the camera in the low-cost environment.
Optionally, the positions of the point light sources are arranged equidistantly on the same side of the lens and symmetrically about the center line of the lens.
That is, across the camera's captures, the point-source positions form an equidistant arrangement on the same side of the lens that is symmetric about the center line of the camera lens.
Fig. 2 is a schematic diagram of acquiring noise-containing first PSF data of a lens in a low-cost environment. It shows the camera lens and the camera's receiving plane (corresponding to the image sensor), together with the 5 light-source positions used when the camera photographs the same point light source 5 times. As the figure shows, the 5 positions are equidistant from the lens, located on the same side of it, and symmetric about its center line.
As shown in fig. 2, the center line of the lens is the straight line passing through the center of the lens and perpendicular to its surface.
In the setup of fig. 2, the image formed on the receiving plane is the image of the point light source through the lens, i.e., the image the camera captures. The multiple sets of first PSF data of the lens are therefore acquired by placing the point light source at a different position for each capture and recording the image formed on the receiving plane.
Under ideal conditions (a single point light source free of interference such as stray light, and a lens, i.e., an optical imaging system, satisfying the linearity assumption), the image on the receiving plane would be a set of samples of the lens PSF. Because the point light source and lens here are not ideal, the image on the receiving plane is instead a set of samples of a noise-containing PSF. The matrix sequence of the point-light-source images captured at the receiving plane can therefore serve as the noise-containing sets of first PSF data of the lens shown in fig. 2.
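The capture procedure above can be sketched numerically. The sketch below is illustrative only: it fakes the receiving-plane images as shifted Gaussian blobs plus noise (the Gaussian PSF shape, the 9x9 size, and the noise level are assumptions, not taken from the patent) and stacks their matrices into the "first PSF data" sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

def ideal_psf(size=9, sigma=1.5, shift=0):
    # Gaussian approximation of a lens PSF, shifted horizontally to mimic
    # a point source placed off the lens center line (illustrative only).
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax - shift, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

# Five point-source positions, equidistant and symmetric about the
# lens center line (positions 1/5 and 2/4 mirror each other).
shifts = [-2, -1, 0, 1, 2]

# "First PSF data": the receiving-plane images are noisy samples of the PSF,
# collected into one matrix sequence per lens.
noisy_psf_stack = np.stack([
    ideal_psf(shift=s) + 0.01 * rng.standard_normal((9, 9))
    for s in shifts
])
```

The stack plays the role of one training sample's input; a noise-free PSF measured in a high-cost environment would be its label.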
In the embodiment of the present invention, the point-source positions are placed equidistantly on the same side of the lens and symmetrically about its center line, as shown in fig. 2: position 1 mirrors position 5, and position 2 mirrors position 4. The noise in the two sets of first PSF data captured at a mirrored pair of positions is then also symmetric. Because of this symmetry across the acquired sets of first PSF data, the model training stage can exploit the noise's symmetry information to largely cancel interference such as stray light; the training sample set does not encode this symmetry explicitly, but the multiple sets of first PSF data used as training samples express it, and the noise in the two noise-containing sets corresponding to mirrored point-source positions can cancel out. This makes the PSF calibration model easier to obtain and shortens the model training time.
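As a hedged illustration of why mirrored captures help, the numpy sketch below mirrors one of a symmetric pair of noisy PSF samples and averages the two: the signal aligns while the independent noise is attenuated. This shows only the averaging intuition behind the symmetry argument, not the patent's learned cancellation mechanism, and all array sizes and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean PSFs at two mirrored point-source positions are mirror images
# of each other; each capture adds independent zero-mean noise.
psf_left = rng.random((9, 9)); psf_left /= psf_left.sum()
psf_right = psf_left[:, ::-1]

sample_left = psf_left + 0.05 * rng.standard_normal((9, 9))
sample_right = psf_right + 0.05 * rng.standard_normal((9, 9))

# Mirroring one capture aligns the signal while the noise stays
# independent, so averaging attenuates the noise (by about sqrt(2) here).
denoised = 0.5 * (sample_left + sample_right[:, ::-1])

err_single = np.abs(sample_left - psf_left).mean()
err_paired = np.abs(denoised - psf_left).mean()
```

A trained model can exploit the same mirrored structure more aggressively than plain averaging.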
The noise-free second PSF data of the lens in step 101 is the accurate PSF data of the lens, acquired in a high-cost environment.
Step 102: input the multiple groups of first images and the second PSF data into a neural network model for training, to obtain a trained calibration model.
Optionally, the first images are the input data and the second PSF data is the ground-truth label.
When the multiple groups of first PSF data and the one group of second PSF data of the same lens in the training sample set are input into the neural network model for training, the second PSF data is equivalent to the accurate PSF the lens would yield in a high-cost calibration environment, and this accurate PSF serves as the ground-truth label for training the neural network model to calibrate the lens PSF.
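The supervised recipe of step 102 can be mimicked with a deliberately tiny stand-in model: learn weights that combine the five noisy captures so the result matches the noise-free label. Everything here (the linear model, learning rate, and iteration count) is an assumption for illustration; the patent's calibration model is a neural network.

```python
import numpy as np

rng = np.random.default_rng(2)

# Noise-free "second PSF data" acts as the supervised label.
clean = rng.random((9, 9)); clean /= clean.sum()
# Five noisy "first PSF data" captures act as the input.
noisy = clean + 0.05 * rng.standard_normal((5, 9, 9))

# Stand-in model: a learned weighting of the five captures,
# trained by gradient descent on the mean squared error.
w = np.full(5, 0.2)          # initialise at a plain average
lr = 0.5
for _ in range(200):
    pred = np.tensordot(w, noisy, axes=1)      # weighted combination, (9, 9)
    resid = pred - clean
    grad = 2 * np.tensordot(noisy, resid, axes=([1, 2], [0, 1])) / clean.size
    w -= lr * grad

final_err = float(np.mean((np.tensordot(w, noisy, axes=1) - clean) ** 2))
```

The trained weights would then be applied to noisy PSF stacks captured from other lenses of the same model, which corresponds to the use phase of the calibration model in the fig. 3 embodiment.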
Note that the training sample set may include training data for several lens models, and the training data for each model may include multiple sets of that lens's first PSF data along with its noise-free second PSF data. Training the neural network model on data from lenses of various models lets the resulting calibration model generalize to any lens whose model resembles one of them, performing PSF calibration on lenses of multiple models.
The trained calibration model calibrates the multiple groups of images (i.e., noise-containing PSF data) of any lens whose model matches one it was trained on, and generates noise-free PSF data.
By means of the technical scheme of the embodiment of the invention, the PSF of the lens can be accurately calibrated in a non-ideal calibration environment.
Optionally, supervised learning and attention mechanisms may be employed to train the neural network model and obtain the calibration model of the embodiments of the present invention.
The trained neural network model, i.e., the calibration model, can calibrate the input noise-containing sets of first PSF data of any lens of a model it was trained on, producing the noise-free second PSF data of that lens. Even in a low-cost calibration environment, the neural network model trained by the method of this embodiment can therefore calibrate the lens PSF accurately and obtain the lens's precise PSF, reducing the calibration cost of the lens PSF.
Referring to fig. 3, a flowchart illustrating steps of an embodiment of a lens calibration method of the present invention is shown.
In one embodiment, fig. 3 details how the neural network model (i.e., the calibration model) trained in the embodiment of fig. 1 performs PSF calibration on a lens of a model the calibration model generalizes to, obtaining that lens's accurate PSF.
For a mobile terminal configured with multiple cameras, each having its own lens, the method shown in fig. 3 can perform PSF calibration on any of those lenses. Fig. 3 illustrates calibrating the PSF of one lens; the process for the other lenses is similar and is not repeated here.
Step 201: acquire multiple groups of noise-containing first image data of the lens;
In this step, acquiring the lens's multiple groups of noise-containing first image data, i.e., the multiple groups of first PSF data, works the same way as acquiring the multiple groups of first PSF data of a single lens during model training.
Optionally, in step 201, multiple groups of first test images may be acquired by having the camera containing the lens photograph point light sources placed at a different position for each capture, where those positions are identical to the positions used during calibration-model training; the matrix sequences of these first test images are the lens's multiple groups of noise-containing first image data.
In the use phase of the calibration model, the point light sources are placed at the same positions as during model training (e.g., the 5 symmetric positions shown in fig. 2). For example, suppose the training sample set contains training data for lens 1, and in this embodiment PSF calibration is needed for lens 2, whose model resembles or matches lens 1's; the calibration model trained in fig. 1 can then calibrate the PSF of lens 2.
When calibrating the PSF of lens 2 and collecting the calibration model's input data, the point-source positions photographed by lens 2 are set to the same positions that produced lens 1's multiple sets of first PSF data in the training stage, e.g., the 5 positions of fig. 2; photographing the source at each of these 5 positions with lens 2's camera yields 5 sets of first PSF data for lens 2 corresponding to those positions.
Optionally, the plurality of positions where the point light sources are respectively located are equidistantly arranged on the same side of the lens and are symmetrical with respect to the center line of the lens.
As in training, because the point-source positions are equidistant on the same side of the lens and symmetric about its center line as shown in fig. 2, with position 1 mirroring position 5 and position 2 mirroring position 4, the noise in the two sets of first PSF data captured at a mirrored pair of positions is also symmetric. This noise symmetry across the acquired sets of first PSF data improves the calibration model's PSF calibration accuracy when the sets are fed to it.
Step 202: input the multiple sets of first image data into the pre-trained calibration model, so that the calibration model performs PSF calibration on them and generates the lens's noise-free PSF data (the target PSF data).
The model of the lens calibrated in the fig. 3 embodiment matches a lens model the calibration model was trained for; that is, the calibration model trained by the method of fig. 1 is applied to lenses whose model appears in the training sample set used during its training stage.
In the embodiment of the invention, the point-source positions photographed in the training stage and the use stage of the calibration model are identical, i.e., the relative spatial positions of the light sources and the lens are kept the same, so the training data and test data are identically distributed, reducing their distribution gap. The calibration model is sensitive to the distribution of its input data, and matching the training and test distributions improves its calibration accuracy.
Optionally, when the training sample set of the calibration-model training stage includes training data for different lens types (or lens models), the trained calibration model can perform PSF calibration on lenses of multiple types (e.g., type 1, type 2, and type 3). In the fig. 3 embodiment, the input data may then comprise not only the lens's multiple sets of first PSF data but also the lens type, e.g., type 1; the calibration model calibrates the input first PSF data of the type 1 lens according to that type tag and outputs the accurate PSF for type 1, i.e., the target PSF data.
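When several per-type calibration models coexist, routing on the type tag can be sketched as below. The type names and the placeholder "models" (simple reductions over the stack) are hypothetical, standing in for separately trained networks.

```python
import numpy as np

# Hypothetical dispatch table: one calibration model per lens type.
# The lambdas are placeholders, not trained networks.
calibration_models = {
    "type1": lambda stacks: stacks.mean(axis=0),
    "type2": lambda stacks: np.median(stacks, axis=0),
}

def calibrate(lens_type, noisy_psf_stacks):
    # Select the model matching this lens type and produce target PSF data.
    return calibration_models[lens_type](noisy_psf_stacks)

target_psf = calibrate("type1", np.ones((5, 9, 9)))
```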
Note that in actual use of the lens, the role of the point light source is played by the subject being photographed.
In the embodiment of the invention, during use of the calibration model, its input comprises multiple noise-containing PSFs of the same lens, and its output is a more accurate PSF of that lens.
Step 203: store the PSF data in the lens, so that the lens holds noise-free PSF data calibrated in advance by the corresponding calibration model.
When a mobile terminal requiring PSF calibration is configured with multiple cameras and each camera's lens undergoes PSF calibration through the process of fig. 3, all of the lenses end up storing noise-free PSF data calibrated in advance by their corresponding calibration models.
The accurate PSF (i.e., the target PSF data) produced by the calibration model for, e.g., lens 2 may be stored in a hardware unit of the lens or of the camera it belongs to.
In the embodiment of the invention, calibrating a lens's PSF only requires inputting multiple groups of the lens's noise-containing first image data into the pre-trained calibration model, which performs PSF calibration on them and generates the lens's noise-free target PSF data. Calibration therefore need not take place in an ideal environment: with the calibration model alone, the lens PSF can be calibrated accurately in a non-ideal environment, reducing the calibration cost of the lens PSF.
In addition, a mobile terminal generally has multiple cameras, and the images captured by different cameras must be pixel-aligned with one another. The commonly adopted image pixel alignment schemes consider only the spatial position relationship and ignore the differing PSFs of the lenses, so pixels cannot be aligned well (for example, pixels of the same photographed object captured by different lenses should coincide in coordinates, i.e., be aligned), and the edges of objects in the generated high-resolution image are unsmooth and unnatural.
To this end, in one embodiment, referring to fig. 4, a flow chart of the steps of an image processing method of an embodiment of the present invention is shown. The method of the fig. 4 embodiment not only lets the mobile terminal's multiple cameras capture images of higher resolution than their respective lenses provide, but also pixel-aligns the multiple high-resolution images from the multiple lenses, so that object edges in the final high-resolution image are smoother and more natural.
The image processing method described in fig. 4 is applied to a mobile terminal configured with a plurality of cameras.
The lens of each camera may be calibrated by the method of the fig. 3 embodiment, so each lens may store its own noise-free PSF data (i.e., that lens's accurate PSF data) calibrated in advance by the corresponding calibration model.
When the lens types a trained calibration model generalizes to cover all of the lenses of the multiple cameras, every lens uses the same calibration model; when they do not, different calibration models can be used to calibrate the PSFs of the respective lenses.
The method shown in fig. 4 may specifically include the following steps:
step 301, controlling the plurality of cameras to shoot the same object to generate a plurality of first images;
the shooting directions of the cameras are the same, for example, all the cameras are front cameras or all the cameras are rear cameras.
Since the mobile terminal is provided with the plurality of cameras, after the photographing request is triggered on the mobile terminal, the plurality of cameras are triggered to photograph the same object at the same time, so that a plurality of first images corresponding to the plurality of cameras can be generated.
The plurality of first images herein are images of resolutions corresponding to a plurality of lenses of the plurality of cameras, and are referred to herein as a plurality of low resolution images.
Step 302, performing deconvolution operation on the plurality of first images and the PSF data of the corresponding lenses respectively to generate second images corresponding to each lens;
in order to improve the resolution of the images captured by the mobile terminal, each of the plurality of first images may be deconvolved with the pre-stored PSF data of the corresponding lens (calibrated in advance by the calibration model), so as to generate a high-resolution image, i.e., a second image, corresponding to each lens.
For example, suppose the mobile terminal has 2 rear cameras: camera 1 with lens 1, and camera 2 with lens 2. The 2 rear cameras are controlled to shoot the same object, yielding image 1 captured through lens 1 and image 2 captured through lens 2. Lens 1 stores its pre-calibrated PSF1, and lens 2 stores its pre-calibrated PSF2. This step deconvolves image 1 with PSF1 to obtain image 1', and deconvolves image 2 with PSF2 to obtain image 2'. Image 1' is the second image corresponding to lens 1, and image 2' is the second image corresponding to lens 2. The resolution of image 1' is greatly improved compared with that of image 1, and likewise the resolution of image 2' is greatly improved compared with that of image 2.
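The patent does not specify which deconvolution algorithm step 302 uses; one common choice is Wiener deconvolution in the frequency domain. A minimal sketch under that assumption, for a single-channel image and a small calibrated PSF kernel (the function name, parameters, and regularization constant are illustrative, not taken from the patent):

```python
import numpy as np

def wiener_deconvolve(image, psf, k=0.01):
    """Deconvolve a blurred capture with the lens's calibrated PSF.

    image: 2-D array, the low-resolution (blurred) capture
    psf:   2-D array, the calibrated point spread function kernel
    k:     regularization constant (an estimate of the noise-to-signal
           ratio; prevents division by near-zero frequency components)
    """
    # Embed the PSF in an image-sized array and center it at the origin
    # so the deconvolved result is not spatially shifted.
    padded = np.zeros_like(image, dtype=float)
    ph, pw = psf.shape
    padded[:ph, :pw] = psf
    padded = np.roll(padded, (-(ph // 2), -(pw // 2)), axis=(0, 1))

    H = np.fft.fft2(padded)           # frequency response of the blur
    G = np.fft.fft2(image)
    # Wiener filter: F = G * conj(H) / (|H|^2 + k)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))
```

Blurring a test image with the PSF and then deconvolving it should recover an image substantially closer to the original than the blurred input.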
Step 303, performing primary alignment of image pixels on a plurality of second images corresponding to a plurality of lenses according to spatial position relations among the plurality of lenses of the plurality of cameras;
the spatial position relationship between lens 1 and lens 2 in the example (for instance, the three-dimensional coordinate information of the two lenses) may be obtained, and the two sets of three-dimensional coordinates used to perform an initial alignment of image pixels on image 1' and image 2'. This initial alignment adjusts the positions of pixels in image 1' to obtain image 1″, and adjusts the positions of pixels in image 2' to obtain image 2″.
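The initial alignment of step 303 can be illustrated with a pinhole parallax model. The sketch below assumes a horizontal two-lens baseline and a roughly constant scene depth, so the alignment reduces to a single integer pixel shift; the patent only states that the lenses' three-dimensional coordinates are used, so the parameter names and the depth model here are illustrative:

```python
import numpy as np

def initial_align(img2, baseline_mm, focal_px, depth_mm):
    """Shift the second lens's image toward the first lens's viewpoint.

    For a pinhole camera pair with a horizontal baseline, the disparity
    in pixels is approximately focal_px * baseline_mm / depth_mm.
    Only a positive (rightward) disparity is handled in this sketch.
    """
    disparity = int(round(focal_px * baseline_mm / depth_mm))
    aligned = np.zeros_like(img2)
    if disparity > 0:
        # Shift columns to the right; vacated columns are zero-filled.
        aligned[:, disparity:] = img2[:, :-disparity]
    else:
        aligned = img2.copy()
    return aligned
```

In practice the shift would be computed per lens pair from the stored three-dimensional coordinates rather than passed in directly.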
Step 304, inputting the plurality of primarily aligned second images and the plurality of PSF data of the plurality of lenses to a pixel alignment model trained in advance, so that the pixel alignment model performs secondary alignment of image pixels on the plurality of primarily aligned second images according to the plurality of PSF data, and generates a third image with aligned image pixels.
Although each first image is deconvolved with the PSF of its own lens, the PSFs of different lenses differ to some extent, so it is difficult to align the pixels of the high-resolution images from different lenses using only the three-dimensional coordinates of the lenses. Continuing the example above: because the initial alignment that produced image 1″ and image 2″ did not refer to the accurate PSFs of the two lenses, this step aligns the two images a second time.
Specifically, image 1″ together with PSF1, and image 2″ together with PSF2, are input into a pre-trained pixel alignment model. The pixel alignment model uses PSF1 and PSF2 to perform a secondary alignment of image pixels on image 1″ and image 2″, generating a pixel-aligned image 3. Because the pixels at corresponding positions of image 1″ and image 2″ are aligned after the secondary alignment, the two images obtained are in effect the same image; it is therefore named image 3, the third image.
The image 3 obtained by the image processing method of the embodiment of the present invention is not only a high-resolution image with a resolution higher than the native resolution of the lenses, but also an image whose pixels are precisely aligned.
Generally, the resolution of image 3 is 2 to 8 times the resolution of image 1 or image 2.
The pixel alignment model performs the secondary alignment of the high-resolution image generated by each lens based on the calibrated PSF pre-stored in that lens, so that the edge details of objects in the resulting third image are more natural, and a high-resolution image with clearer edges is output.
In the embodiment of the invention, because the plurality of lenses of the plurality of cameras of the mobile terminal store noise-free PSF data calibrated in advance by the calibration model, the stored noise-free PSF data can be used in real time during shooting. Specifically, the plurality of first images captured by the plurality of cameras of the same object are each deconvolved with the PSF data of the corresponding lens, yielding a plurality of second images with greatly improved resolution. The plurality of second images are then aligned by combining the spatial position relationships of the plurality of cameras with the calibrated PSFs of the plurality of lenses, so that the resulting third image has a resolution far higher than the native resolution of the lenses. The image resolution is thus improved, object edges in the third image are clearer and smoother, and accurate alignment of the image pixels is ensured.
In one embodiment, referring to FIG. 5, a flow diagram of a method of training the pixel alignment model used in step 304 of the FIG. 4 embodiment is shown; this training is performed before step 304.
As shown in fig. 5, the training method involves a model G to be trained and a discriminant model D used to train the model G. Once trained, the model G becomes the pixel alignment model of the embodiment of fig. 4. G is short for generator, a generative model, and D is short for discriminator, a discriminant model. This training scheme is known as adversarial training.
First, a training sample set may be obtained:
the training sample set comprises the initially aligned multi-frame images of the plurality of lenses of the plurality of cameras configured on the mobile terminal (i.e., a plurality of second test images); these frames are the plurality of high-resolution images after initial alignment;
in addition, the training sample set further includes a plurality of accurate PSFs (i.e., noise-free sets of PSF test data) of the plurality of shots that have been calibrated by the calibration model.
In addition, the training sample set also includes, as the training label, a third test image taken by the camera of a test lens having a higher pixel count (or resolution) than the plurality of lenses (corresponding to the image taken by the higher-pixel-count lens in fig. 5).
Of course, the photographic subject corresponding to the third test image is the same as the photographic subject corresponding to the multi-frame image.
The initially aligned multi-frame images of the multiple lenses may be acquired by following steps 301 to 303 of the embodiment of fig. 4. In brief:
Firstly, a plurality of first images captured by the plurality of lenses of a mobile terminal, such as a mobile phone, of the same subject under different scenes, lighting conditions, and other environments are collected. Then, each first image is deconvolved with the calibrated PSF of the corresponding lens to obtain a plurality of high-resolution images. Finally, the plurality of high-resolution images are initially aligned according to the spatial position relationships among the plurality of lenses, yielding the initially aligned multi-frame images of the plurality of lenses.
During training, the first inputs to the model G are the plurality of second test images, that is, the initially aligned multi-frame images corresponding to the plurality of lenses (each lens's low-resolution image is deconvolved with that lens's accurate PSF to obtain a high-resolution image, and the resulting high-resolution images are then initially aligned using the spatial position relationships among the lenses), together with the accurate PSFs of the lenses corresponding to those frames, i.e., the PSF test data;
by inputting the plurality of second test images and the plurality of sets of PSF test data into the model G, a fourth test image output by the model G is obtained; the model G performs secondary pixel alignment on the plurality of second test images based on the plurality of sets of PSF test data to generate the fourth test image. That is, the output of the model G is a high-resolution image after secondary pixel alignment; however, since the model G is still being trained, the secondary alignment is not yet optimal at this point;
therefore, the third test image and the fourth test image are further input into a model D, so as to obtain a determination result, output by the model D, indicating whether the third test image and the fourth test image are the same;
that is, the input of the model D is a high-resolution image of the output quadratic alignment of the model G, and an image taken by a higher pixel camera (the same as the subject of the high-resolution image), and the output result of the model D is a determination result indicating whether or not the high-resolution image input by the model D coincides with the high-resolution image as the classification label (i.e., the image taken by the higher pixel camera);
if the third test image differs from the fourth test image, the model G and the model D are trained against each other according to an adversarial training method, for example by back-propagating adjustments to the parameters of the model G.
Then, after multiple rounds of iterative training, once the model D outputs a determination result indicating that the high-resolution image input to it is consistent with the high-resolution image serving as the classification label (i.e., the image captured by the higher-pixel-count camera), the training of the model G is complete, and the pre-trained pixel alignment model of the embodiment of fig. 4 is obtained.
With the training method of the pixel alignment model provided by the embodiment of the present invention, the network can fully learn how to align image pixels under different environmental conditions when the accurate PSF of each lens is known. The neural network of the pixel alignment model may be a generative deep neural network, which can effectively learn the low-dimensional manifold of object shape edges. By feeding the initially aligned high-resolution images of each lens and the accurate PSFs of the corresponding lenses into the generative deep neural network as conditions, the network (that is, the model G) effectively learns how to align pixels based on this information, and interpolates smoother edges.
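The adversarial training loop above can be illustrated in miniature. The sketch below replaces the images, PSFs, and deep networks with scalars: a one-dimensional generator and a logistic discriminator updated with the standard non-saturating GAN gradients. It shows only the alternating update scheme; all names, hyperparameters, and the toy data distribution are illustrative, not from the patent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_adversarial(real_mean=2.0, steps=2000, lr=0.05, seed=0):
    """Miniature adversarial (GAN-style) training loop.

    The generator G maps noise z to a sample g = a*z + b; the
    discriminator D scores a sample x with sigmoid(w*x + c).  D is
    updated to tell real samples from generated ones, and G is updated
    to fool D.  This is the same alternating scheme used to train the
    pixel alignment model, with scalars standing in for images and PSFs.
    """
    rng = np.random.default_rng(seed)
    a, b = 1.0, 0.0   # generator parameters
    w, c = 0.1, 0.0   # discriminator parameters
    for _ in range(steps):
        z = rng.normal()
        x_real = rng.normal(loc=real_mean, scale=0.1)
        g = a * z + b
        # Discriminator ascent on log D(real) + log(1 - D(fake)).
        d_real = sigmoid(w * x_real + c)
        d_fake = sigmoid(w * g + c)
        w += lr * ((1.0 - d_real) * x_real - d_fake * g)
        c += lr * ((1.0 - d_real) - d_fake)
        # Generator ascent on log D(fake): d/dg log D(g) = (1 - D(g)) * w.
        grad_g = (1.0 - sigmoid(w * g + c)) * w
        a += lr * grad_g * z
        b += lr * grad_g
    return a, b
```

After training, the generator's output distribution drifts toward the "real" data; in the patent's setting, the analogous drift is the model G's output converging to the higher-pixel-count label image.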
Referring to fig. 6, a flowchart of processing an image with PSF-calibrated lenses is shown, combining the lens calibration method of fig. 3 with the image processing method of fig. 4.
As shown in fig. 6, in a lens shipping stage, a PSF calibration model (i.e., the calibration model trained in fig. 1) may be used to calibrate the PSF of a lens, and an accurate PSF may be acquired and stored in hardware of the lens.
The lenses configured for the multiple cameras of the mobile phone are all lenses of which the PSF is calibrated by the PSF calibration model.
When the mobile phone shoots images, a plurality of lenses shoot the same object at the same time to obtain a plurality of frames of low-resolution images;
then, the calibrated PSF of each lens is read from the hardware of the plurality of lenses, and a plurality of high-resolution images corresponding to the plurality of lenses are obtained by deconvolving each low-resolution image with the PSF of the corresponding lens;
then, according to the space positions of the lenses, carrying out primary alignment of pixels on the high-resolution images;
then, the plurality of primarily aligned high-resolution images are input to a pixel alignment model to perform secondary alignment of pixels, so that one aligned high-resolution image is obtained.
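Putting the stages of fig. 6 together, the capture-time flow can be sketched as one function. The regularized inverse filter, the coarse integer shifts, and the callable alignment model below are simplified stand-ins for the patent's components, with illustrative names throughout:

```python
import numpy as np

def process_capture(first_images, psfs, coarse_shifts, alignment_model):
    """Pipeline sketch of fig. 6 (all component names are illustrative).

    first_images:    list of 2-D arrays, one low-resolution image per lens
    psfs:            calibrated PSF kernel per lens (read from lens hardware)
    coarse_shifts:   per-lens (dy, dx) derived from the lens positions
    alignment_model: callable performing the learned secondary alignment
    """
    seconds = []
    for img, psf, (dy, dx) in zip(first_images, psfs, coarse_shifts):
        # Step 1: deconvolve with the calibrated PSF (regularized
        # frequency-domain inverse filter, standing in for step 302).
        padded = np.zeros_like(img, dtype=float)
        ph, pw = psf.shape
        padded[:ph, :pw] = psf
        padded = np.roll(padded, (-(ph // 2), -(pw // 2)), axis=(0, 1))
        H = np.fft.fft2(padded)
        high = np.real(np.fft.ifft2(
            np.fft.fft2(img) * np.conj(H) / (np.abs(H) ** 2 + 1e-3)))
        # Step 2: coarse pixel alignment from the known lens geometry.
        seconds.append(np.roll(high, (dy, dx), axis=(0, 1)))
    # Step 3: learned secondary alignment fuses the stack into one image.
    return alignment_model(seconds, psfs)
```

With an identity-like PSF and a trivial averaging "model", the pipeline should return essentially the input image, which makes the data flow easy to verify.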
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
In correspondence with the method provided by the above embodiment of the present invention, referring to fig. 7, a block diagram of an embodiment of an image processing apparatus according to the present invention is shown, which is applied to a mobile terminal configured with a plurality of cameras, wherein lenses of each camera respectively store noise-free PSF data calibrated by a calibration model in advance, and the apparatus includes:
a control module 701, configured to control the multiple cameras to shoot a same object, so as to generate multiple first images;
an operation module 702, configured to perform deconvolution operation on the plurality of first images and the noise-free PSF data of the corresponding lens, which is calibrated in advance by the calibration model, to generate a second image corresponding to each lens;
a first alignment module 703, configured to perform primary alignment of image pixels on a plurality of second images corresponding to a plurality of lenses according to spatial position relationships between the lenses of the plurality of cameras;
a second alignment module 704, configured to input the plurality of primarily aligned second images and the plurality of PSF data of the plurality of lenses to a pixel alignment model trained in advance, so that the pixel alignment model performs secondary alignment of image pixels on the plurality of primarily aligned second images according to the plurality of PSF data, and generates a third image with aligned image pixels.
Optionally, the apparatus further comprises:
the first acquisition module is used for acquiring a plurality of groups of first image data containing noise of the lens aiming at the lens of each camera;
an input module, configured to input the multiple sets of first image data into a calibration model trained in advance, so that the calibration model performs point spread function PSF calibration on the multiple sets of first image data, and generates noise-free PSF data of the lens, where a type of the lens is consistent with a type of a lens used for calibration by the calibration model;
and the storage module is used for storing the PSF data to the lenses so that the lenses of each camera respectively store the PSF data which are calibrated by the corresponding calibration model and do not contain noise.
Optionally, the first obtaining module includes:
the first acquisition submodule is used for acquiring a plurality of groups of first test images which are obtained by respectively shooting point light sources distributed at different positions each time by a camera to which the lens belongs, wherein the positions of the point light sources are the same as the positions of the point light sources adopted in the training of the calibration model;
and the second obtaining submodule is used for obtaining the matrix sequences of the multiple groups of first test images, wherein the matrix sequences of the multiple groups of first test images are the multiple groups of first image data of the lens, which contain noise.
Optionally, the plurality of positions where the point light sources are respectively located are equidistantly arranged on the same side of the lens and are symmetrical with respect to the center line of the lens.
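The constraint above (point light sources equally spaced on one side of the lens, symmetric about its center line) fully determines the layout once a count and spacing are chosen. A minimal illustrative generator, with all units and names assumed rather than specified by the patent:

```python
def point_source_positions(n, spacing_mm, distance_mm):
    """Return n point-light-source positions, equally spaced along a line
    at distance_mm from the lens and symmetric about the lens center line.
    Each position is (lateral offset from the center line, distance)."""
    offset = (n - 1) / 2.0
    return [((i - offset) * spacing_mm, distance_mm) for i in range(n)]
```

For an odd n, one source lies exactly on the center line, and each remaining source is mirrored by one at the opposite lateral offset.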
Optionally, the apparatus further comprises:
a second obtaining module, configured to obtain a training sample set, where the training sample set includes a plurality of second test images obtained by primarily aligning image pixels of the plurality of lenses, a plurality of sets of PSF test data of the plurality of lenses, the PSF test data being free of noise, and a third test image captured by a test lens having a resolution higher than a resolution of the plurality of lenses, where the second test images and the third test image correspond to a same captured object;
a third obtaining module, configured to input the multiple second test images and the multiple sets of PSF test data into a first neural network model, so as to obtain a fourth test image output by the first neural network, where the first neural network model is configured to perform secondary pixel alignment on the multiple second test images based on the multiple sets of PSF test data, so as to generate the fourth test image;
the fourth obtaining module is used for inputting the third test image and the fourth test image into a second neural network model to obtain a judgment result which is output by the second neural network model and represents whether the third test image and the fourth test image are the same or not;
and the training module is used for performing countermeasure training on the first neural network model and the second neural network model according to a countermeasure training method if the judgment result is that the third test image is different from the fourth test image, until the judgment result output by the second neural network model and indicating that the third test image is the same as the fourth test image, wherein the first neural network model subjected to the countermeasure training is the pixel alignment model which is trained in advance.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
According to still another embodiment of the present invention, there is also provided a terminal including: a memory, a processor and an image processing program stored on the memory and executable on the processor, the image processing program when executed by the processor implementing the steps of the method according to any of the embodiments described above.
According to yet another embodiment of the present invention, there is also provided a computer-readable storage medium having stored thereon an image processing program which, when executed by a processor, implements the steps of the method according to any one of the above embodiments.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The present invention provides a model training method, a lens calibration method, an image processing apparatus, a terminal, and a computer-readable storage medium, which have been described in detail above, wherein specific examples are applied to illustrate the principles and embodiments of the present invention, and the description of the above embodiments is only used to help understanding the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. An image processing method applied to a mobile terminal configured with a plurality of cameras, wherein lenses of each camera respectively store noise-free Point Spread Function (PSF) data calibrated by a calibration model in advance, the method comprising the following steps: acquiring a training sample set, wherein the training sample set comprises a plurality of groups of first images containing noise of the same lens and second PSF data without noise of the lens, and inputting the plurality of groups of first images and the second PSF data into a neural network model for training to obtain a trained calibration model; acquiring multiple groups of first test images which are obtained by respectively shooting point light sources distributed at different positions at each time by a camera to which the lens belongs, wherein the multiple positions of the point light sources are the same as the multiple positions of the point light sources adopted during the training of the calibration model, and acquiring a matrix sequence of the multiple groups of first test images, wherein the matrix sequence of the multiple groups of first test images is multiple groups of first image data of the lens, which contain noise; inputting the multiple groups of first image data into a calibration model which is trained in advance, so that the calibration model performs Point Spread Function (PSF) calibration on the multiple groups of first image data, and generating noise-free PSF data of the lenses, wherein the types of the lenses are consistent with the types of the lenses used for calibration by the calibration model, and the PSF data are stored in the lenses, so that the lenses of each camera are respectively stored with the noise-free PSF data which are calibrated by the corresponding calibration model in advance;
the image processing method comprises the following steps:
controlling the plurality of cameras to shoot the same object to generate a plurality of first images;
performing deconvolution operation on the plurality of first images and the PSF data of the corresponding lenses respectively to generate second images corresponding to each lens;
performing primary alignment of image pixels on a plurality of second images corresponding to a plurality of lenses according to spatial position relations among the lenses of the plurality of cameras;
inputting the plurality of primarily aligned second images and the plurality of PSF data of the plurality of lenses into a pixel alignment model trained in advance, so that the pixel alignment model performs secondary alignment of image pixels on the plurality of primarily aligned second images according to the plurality of PSF data to generate a third image with aligned image pixels.
2. The image processing method of claim 1, wherein the plurality of positions where the point light sources are respectively located are equidistantly arranged on the same side of the lens and are symmetrical with respect to a center line of the lens.
3. The image processing method according to claim 1, wherein before the inputting the plurality of first-aligned second images and the plurality of PSF data of the plurality of shots into a pixel alignment model trained in advance, the method further comprises:
acquiring a training sample set, wherein the training sample set comprises a plurality of second test images of the plurality of lenses after primary alignment of image pixels, a plurality of sets of noise-free PSF test data of the plurality of lenses, and a third test image shot by a test lens with resolution higher than that of the plurality of lenses, wherein the plurality of second test images and the third test image correspond to the same shot object;
inputting the plurality of second test images and the plurality of groups of PSF test data into a first neural network model to obtain a fourth test image output by the first neural network, wherein the first neural network model is used for performing secondary pixel alignment on the plurality of second test images based on the plurality of groups of PSF test data to generate the fourth test image;
inputting the third test image and the fourth test image into a second neural network model to obtain a judgment result which is output by the second neural network model and represents whether the third test image and the fourth test image are the same;
and if the judgment result is that the third test image is different from the fourth test image, performing countermeasure training on the first neural network model and the second neural network model according to a countermeasure training method until the judgment result output by the second neural network model and representing that the third test image is the same as the fourth test image, wherein the first neural network model subjected to countermeasure training is the pixel alignment model which is trained in advance.
4. An image processing apparatus applied to a mobile terminal equipped with a plurality of cameras, wherein lenses of the cameras each store noise-free Point Spread Function (PSF) data calibrated by a calibration model in advance, the image processing apparatus comprising: acquiring a training sample set, wherein the training sample set comprises a plurality of groups of first images containing noise of the same lens and second PSF data without noise of the lens, and inputting the plurality of groups of first images and the second PSF data into a neural network model for training to obtain a trained calibration model; acquiring multiple groups of first test images which are obtained by respectively shooting point light sources distributed at different positions at each time by a camera to which the lens belongs, wherein the multiple positions of the point light sources are the same as the multiple positions of the point light sources adopted during the training of the calibration model, and acquiring a matrix sequence of the multiple groups of first test images, wherein the matrix sequence of the multiple groups of first test images is multiple groups of first image data of the lens, which contain noise; inputting the multiple groups of first image data into a calibration model which is trained in advance, so that the calibration model performs Point Spread Function (PSF) calibration on the multiple groups of first image data, and generating noise-free PSF data of the lenses, wherein the types of the lenses are consistent with the types of the lenses used for calibration by the calibration model, and the PSF data are stored in the lenses, so that the lenses of each camera are respectively stored with the noise-free PSF data which are calibrated by the corresponding calibration model in advance;
the device comprises:
the control module is used for controlling the plurality of cameras to shoot the same object to generate a plurality of first images;
the operation module is used for performing deconvolution operation on the plurality of first images and the PSF data of the corresponding lenses respectively to generate second images corresponding to the lenses;
the first alignment module is used for carrying out primary alignment on image pixels of a plurality of second images corresponding to a plurality of lenses according to the spatial position relation among the lenses of the plurality of cameras;
and the second alignment module is used for inputting the plurality of primarily aligned second images and the plurality of PSF data of the plurality of lenses into a pixel alignment model trained in advance, so that the pixel alignment model performs secondary alignment of image pixels on the plurality of primarily aligned second images according to the plurality of PSF data to generate a third image with aligned image pixels.
5. The image processing apparatus according to claim 4, characterized in that the apparatus further comprises:
a first acquisition module configured to acquire, for the lens of each camera, a plurality of groups of noise-containing first image data of the lens;
an input module configured to input the plurality of groups of first image data into a pre-trained calibration model, so that the calibration model performs point spread function (PSF) calibration on the plurality of groups of first image data and generates noise-free PSF data of the lens, wherein the type of the lens is consistent with the type of lens used for calibration of the calibration model;
and a storage module configured to store the PSF data in the lens, so that the lens of each camera stores noise-free PSF data calibrated by the corresponding calibration model.
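Claim 5's data flow is: noisy per-position measurements in, denoised PSF data out. The patent's calibration model is a trained neural network whose architecture is not disclosed; as a stand-in for that model, this sketch uses simple per-position averaging (which suppresses zero-mean noise) to show the input/output contract only. The unit-sum normalisation is an added assumption, reflecting that a PSF redistributes but does not create energy.

```python
import numpy as np

def calibrate_psf(noisy_groups, model=None):
    """Generate noise-free PSF data from groups of noisy first image data.

    `noisy_groups` has shape (groups, positions, H, W): several repeated
    shots of the same point-source positions. `model` stands in for the
    patent's trained calibration network; the default averages the groups.
    Returns one non-negative, unit-sum PSF matrix per position.
    """
    if model is None:
        model = lambda groups: np.mean(groups, axis=0)
    psf = model(np.asarray(noisy_groups, dtype=float))
    psf = np.clip(psf, 0.0, None)               # a physical PSF is non-negative
    sums = psf.sum(axis=(-2, -1), keepdims=True)
    return psf / np.where(sums > 0, sums, 1.0)  # normalise each position to unit sum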
6. The image processing apparatus according to claim 5, wherein the first acquisition module includes:
the first acquisition submodule is used for acquiring a plurality of groups of first test images which are obtained by respectively shooting point light sources distributed at different positions each time by a camera to which the lens belongs, wherein the positions of the point light sources are the same as the positions of the point light sources adopted in the training of the calibration model;
and the second obtaining submodule is used for obtaining the matrix sequences of the multiple groups of first test images, wherein the matrix sequences of the multiple groups of first test images are the multiple groups of first image data of the lens, which contain noise.
7. The image processing device of claim 6, wherein the plurality of positions where the point light sources are respectively located are equidistantly arranged on the same side of the lens and are symmetrical with respect to a center line of the lens.
8. The image processing apparatus according to claim 4, characterized in that the apparatus further comprises:
a second obtaining module, configured to obtain a training sample set, where the training sample set includes a plurality of second test images obtained by primarily aligning image pixels of the plurality of lenses, a plurality of sets of PSF test data of the plurality of lenses, the PSF test data being free of noise, and a third test image captured by a test lens having a resolution higher than a resolution of the plurality of lenses, where the second test images and the third test image correspond to a same captured object;
a third obtaining module, configured to input the multiple second test images and the multiple sets of PSF test data into a first neural network model, so as to obtain a fourth test image output by the first neural network, where the first neural network model is configured to perform secondary pixel alignment on the multiple second test images based on the multiple sets of PSF test data, so as to generate the fourth test image;
the fourth obtaining module is used for inputting the third test image and the fourth test image into a second neural network model to obtain a judgment result which is output by the second neural network model and represents whether the third test image and the fourth test image are the same or not;
and the training module is used for performing countermeasure training on the first neural network model and the second neural network model according to a countermeasure training method if the judgment result is that the third test image is different from the fourth test image, until the judgment result output by the second neural network model and indicating that the third test image is the same as the fourth test image, wherein the first neural network model subjected to the countermeasure training is the pixel alignment model which is trained in advance.
9. A terminal, comprising: memory, a processor and an image processing program stored on the memory and executable on the processor, the image processing program, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 3.
10. A computer-readable storage medium, characterized in that an image processing program is stored thereon, which when executed by a processor implements the steps of the image processing method according to any one of claims 1 to 3.
CN201910817865.XA 2019-08-30 2019-08-30 Image processing method and device Active CN110475068B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910817865.XA CN110475068B (en) 2019-08-30 2019-08-30 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910817865.XA CN110475068B (en) 2019-08-30 2019-08-30 Image processing method and device

Publications (2)

Publication Number Publication Date
CN110475068A CN110475068A (en) 2019-11-19
CN110475068B true CN110475068B (en) 2021-10-29

Family

ID=68514305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910817865.XA Active CN110475068B (en) 2019-08-30 2019-08-30 Image processing method and device

Country Status (1)

Country Link
CN (1) CN110475068B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117395385B (en) * 2023-12-11 2024-04-26 深圳市博盛医疗科技有限公司 Method, device, equipment and medium for processing acquired image of 3D laparoscope

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875732A (en) * 2018-01-11 2018-11-23 北京旷视科技有限公司 Model training and example dividing method, device and system and storage medium
CN109002769A (en) * 2018-06-22 2018-12-14 深源恒际科技有限公司 A kind of ox face alignment schemes and system based on deep neural network
EP3422699A1 (en) * 2017-06-28 2019-01-02 Samsung Electronics Co., Ltd. Electronic device including camera module
CN110008817A (en) * 2019-01-29 2019-07-12 北京奇艺世纪科技有限公司 Model training, image processing method, device, electronic equipment and computer readable storage medium

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101398533B (en) * 2007-09-25 2010-06-16 财团法人工业技术研究院 Stray light assessment method and system thereof
JP2011128978A (en) * 2009-12-18 2011-06-30 Sony Corp Information processor, information processing method and program
US8620065B2 (en) * 2010-04-09 2013-12-31 The Regents Of The University Of Colorado Methods and systems for three dimensional optical imaging, sensing, particle localization and manipulation
JP2012234393A (en) * 2011-05-02 2012-11-29 Sony Corp Image processing device, image processing method, and program
JP5917054B2 (en) * 2011-09-12 2016-05-11 キヤノン株式会社 Imaging apparatus, image data processing method, and program
US9137526B2 (en) * 2012-05-07 2015-09-15 Microsoft Technology Licensing, Llc Image enhancement via calibrated lens simulation
CN104767930A (en) * 2014-01-03 2015-07-08 三星电机株式会社 Device used for image correction and method
CN103856723B (en) * 2014-02-25 2015-02-11 中国人民解放军国防科学技术大学 PSF fast calibration method based on single-lens imaging
JP2016119532A (en) * 2014-12-19 2016-06-30 キヤノン株式会社 Image processing apparatus, imaging apparatus, image processing method, image processing program, and storage medium
CN106709879B (en) * 2016-12-08 2017-12-15 中国人民解放军国防科学技术大学 A kind of spatial variations point spread function smoothing method that picture is calculated as based on unzoned lens
CN108088660B (en) * 2017-12-15 2019-10-29 清华大学 The point spread function measurement method and system of wide field fluorescence microscope
CN108830805A (en) * 2018-05-25 2018-11-16 北京小米移动软件有限公司 Image processing method, device and readable storage medium storing program for executing, electronic equipment
CN109801215B (en) * 2018-12-12 2020-04-28 天津津航技术物理研究所 Infrared super-resolution imaging method based on countermeasure generation network
CN109949226A (en) * 2019-03-11 2019-06-28 厦门美图之家科技有限公司 A kind of image processing method and calculate equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3422699A1 (en) * 2017-06-28 2019-01-02 Samsung Electronics Co., Ltd. Electronic device including camera module
CN108875732A (en) * 2018-01-11 2018-11-23 北京旷视科技有限公司 Model training and example dividing method, device and system and storage medium
CN109002769A (en) * 2018-06-22 2018-12-14 深源恒际科技有限公司 A kind of ox face alignment schemes and system based on deep neural network
CN110008817A (en) * 2019-01-29 2019-07-12 北京奇艺世纪科技有限公司 Model training, image processing method, device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN110475068A (en) 2019-11-19

Similar Documents

Publication Publication Date Title
KR102227583B1 (en) Method and apparatus for camera calibration based on deep learning
KR102480245B1 (en) Automated generation of panning shots
JP6663040B2 (en) Depth information acquisition method and apparatus, and image acquisition device
US20180070015A1 (en) Still image stabilization/optical image stabilization synchronization in multi-camera image capture
Delbracio et al. Removing camera shake via weighted fourier burst accumulation
CN110536057A (en) Image processing method and device, electronic equipment, computer readable storage medium
US9807372B2 (en) Focused image generation single depth information from multiple images from multiple sensors
CN108063932B (en) Luminosity calibration method and device
KR20210028218A (en) Image processing methods and devices, electronic devices and storage media
CN107800979A (en) High dynamic range video image pickup method and filming apparatus
CN112785507A (en) Image processing method and device, storage medium and terminal
US20200296259A1 (en) Method and apparatus for determining depth value
CN108702450A (en) Stablize the method for image sequence
CN109598764A (en) Camera calibration method and device, electronic equipment, computer readable storage medium
CN113875219B (en) Image processing method and device, electronic equipment and computer readable storage medium
US20220368877A1 (en) Image processing method, image processing apparatus, storage medium, manufacturing method of learned model, and image processing system
CN110493522A (en) Anti-fluttering method and device, electronic equipment, computer readable storage medium
Peng et al. Deep HDR reconstruction of dynamic scenes
CN105335959B (en) Imaging device quick focusing method and its equipment
CN113628134B (en) Image noise reduction method and device, electronic equipment and storage medium
CN109325912B (en) Reflection separation method based on polarized light field and calibration splicing system
CN110475068B (en) Image processing method and device
US9736366B1 (en) Tile-based digital image correspondence
JP2017211982A (en) Face identification system and face identification method
US9196020B2 (en) Systems and methods for digital correction of aberrations produced by tilted plane-parallel plates or optical wedges

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant