CN118247319A - Image alignment method and device, computer readable medium and electronic equipment


Info

Publication number
CN118247319A
Authority
CN
China
Prior art keywords
image
optical flow
value
determining
light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211651771.8A
Other languages
Chinese (zh)
Inventor
李政青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202211651771.8A
Publication of CN118247319A

Landscapes

  • Image Processing (AREA)

Abstract

The disclosure provides an image alignment method and device, a computer readable medium, and electronic equipment, and relates to the technical field of image processing. The method comprises the following steps: acquiring at least two frames of images to be processed; performing image registration processing on the images to be processed to obtain an initial optical flow image between the images to be processed; determining an occlusion region in the initial optical flow image; and restoring the initial optical flow image based on the occlusion region, determining a target optical flow image, and realizing image alignment of the images to be processed through the target optical flow image. The method and the device can smoothly update the optical flow image, effectively avoid inaccurate image alignment caused by the distortion and tearing that occur in the occlusion region of the optical flow image, improve the accuracy of the optical flow image of the images to be processed, further improve the accuracy of the image alignment result, and thereby safeguard the image fusion result of the images to be processed.

Description

Image alignment method and device, computer readable medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image alignment method, an image alignment apparatus, a computer readable medium, and an electronic device.
Background
Image registration refers to the process of aligning or fusing two or more images taken from different sources, at different times, or from different angles. Image alignment technology is widely used across computer vision tasks; in particular, smartphone photography has evolved from single-camera to dual-camera and multi-camera capture, and in a dual-camera or multi-camera scene two or more images must be aligned before they can be further fused.
In actual processing, rapid object motion, camera shake, and the parallax of the same object across different cameras prevent good image alignment, and artifacts such as tearing and distortion often appear, reducing the accuracy of the image alignment result.
Disclosure of Invention
The present disclosure aims to provide an image alignment method, an image alignment apparatus, a computer readable medium, and electronic equipment, so as to reduce artifacts such as tearing and distortion in the image alignment process and effectively improve the accuracy of the image alignment result.
According to a first aspect of the present disclosure, there is provided an image alignment method including:
Acquiring at least two frames of images to be processed;
Performing image registration processing on the images to be processed to obtain initial optical flow images among the images to be processed;
Determining an occlusion region in the initial optical flow image;
Restoring the initial optical flow image based on the occlusion region, determining a target optical flow image, and realizing image alignment of the images to be processed through the target optical flow image.
According to a second aspect of the present disclosure, there is provided an image alignment apparatus comprising:
The image acquisition module is used for acquiring at least two frames of images to be processed;
The optical flow image determining module is used for carrying out image registration processing on the images to be processed to obtain initial optical flow images among the images to be processed;
The occlusion region determining module is used for determining an occlusion region in the initial optical flow image;
And the optical flow image updating module is used for restoring the initial optical flow image based on the occlusion region, determining a target optical flow image, and realizing image alignment of the images to be processed through the target optical flow image.
According to a third aspect of the present disclosure, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the method described above.
According to a fourth aspect of the present disclosure, there is provided an electronic apparatus, comprising:
A processor; and
A memory for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the methods described above.
According to the image alignment method provided by the embodiments of the present disclosure, image registration processing can be performed on at least two frames of images to be processed to obtain an initial optical flow image between the images to be processed; an occlusion region in the initial optical flow image is then determined, the initial optical flow image is restored based on the occlusion region, a target optical flow image is determined, and image alignment of the images to be processed is realized through the target optical flow image. In this way, the occlusion region in the images to be processed can be detected rapidly through the optical flow image, effectively guaranteeing the accuracy of the occlusion region; restoration of the initial optical flow image can then be achieved through the detected occlusion region, improving the accuracy of the target optical flow image and thereby effectively improving the accuracy of the image alignment result.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort. In the drawings:
FIG. 1 illustrates a schematic diagram of an exemplary system architecture to which embodiments of the present disclosure may be applied;
FIG. 2 schematically illustrates a flow diagram of an image alignment method in an exemplary embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow diagram for determining a target optical flow image in an exemplary embodiment of the disclosure;
FIG. 4 schematically illustrates a flow chart of one method of determining a first optical flow value in an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a schematic diagram of one method of determining a first optical flow value in an exemplary embodiment of the present disclosure;
FIG. 6 schematically illustrates a flow diagram for determining occlusion regions in an exemplary embodiment of the present disclosure;
FIG. 7 schematically illustrates a schematic diagram of one method of determining occlusion regions in an exemplary embodiment of the present disclosure;
FIG. 8 schematically illustrates a composition schematic of an image alignment apparatus in an exemplary embodiment of the present disclosure;
fig. 9 shows a schematic diagram of an electronic device to which embodiments of the present disclosure may be applied.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 shows a schematic diagram of a system architecture of an exemplary application environment in which an image alignment method and apparatus of embodiments of the present disclosure may be applied.
Referring to fig. 1, in a possible application scenario, the image alignment method in the embodiments of the present disclosure may be applied to the terminal device 110, and accordingly, the image alignment apparatus may also be disposed in the terminal device 110. Specifically, the terminal device 110 may be provided with a rear camera module and collect images to be processed through the rear camera module, where the images to be processed may include at least two image frames with different angles of view. For example, the rear camera module may include a main camera 111 and a secondary camera 112, so that a first image is captured through the main camera 111 and a second image through the secondary camera 112, the angle of view of the first image being greater than or equal to that of the second image.
The main camera 111 may be a wide-angle lens with an equivalent focal length of about 28mm. Because this focal length is close to the viewing range of the human eye ("what you see is what you shoot"), it is the most frequently used camera on widely used terminal devices. When the operation page of a mobile phone camera is opened, a 1X indicator can generally be seen on the page; the focal segment corresponding to 1X is that of the main camera, whose image quality is the best among all focal segments, making it suitable for shooting portraits, buildings, scenery, and the like.
The secondary camera 112 is a camera that assists or supplements the focal segment or shortcomings of the main camera 111. For example, the secondary camera 112 may be an ultra-wide-angle camera: compared with the main camera 111, the ultra-wide-angle camera has a larger viewing range and can capture a wider picture from the same position, making it suitable for shooting landscapes and architecture; it corresponds to the 0.5X-0.6X focal segment of a mobile phone camera. The secondary camera 112 may also be a telephoto camera. A focal segment of 1X or more is generally called telephoto, and the larger the coefficient in front of the X, the farther the camera can shoot. A telephoto lens can take higher-quality pictures from a distance and capture farther or magnified subjects without the image quality degradation of digital zoom. The telephoto lens compresses the apparent distance between background and foreground, bringing a sense of spatial compression that makes the whole picture fuller; this "compression" is one of the characteristics of the telephoto lens. A telephoto lens also exhibits little deformation, and its weak perspective effect narrows the apparent distance between foreground and background, strengthening the relationship between them and creating some unique visual effects. Of course, the secondary camera may also be another type of camera, including but not limited to an infrared camera, a depth camera, and the like, and this exemplary embodiment is not particularly limited thereto. The secondary camera 112 may be one of the above types of cameras, or a combination of several, and this exemplary embodiment is not limited thereto.
In a possible application scenario, the image alignment method in the embodiments of the present disclosure may be applied to the terminal device 120, and accordingly, the image alignment apparatus may also be disposed in the terminal device 120. Specifically, the terminal device 120 may be provided with a front camera module and collect images to be processed through the front camera module, where the images to be processed may include at least two image frames with different angles of view. For example, the front camera module may include at least a main camera 121 and a secondary camera 122, so that a first image is captured through the main camera 121 and a second image through the secondary camera 122. It is understood that the types and related settings of the main camera 121 and the secondary camera 122 are the same as those of the main camera 111 and the secondary camera 112 of the terminal device 110, and will not be described herein again.
In a possible application scenario, the image alignment method in the embodiments of the present disclosure may be applied to the image capturing system 130, and accordingly, the image alignment apparatus may also be disposed in a terminal device of the image capturing system 130, with the images to be processed collected through the image capturing system 130. Specifically, the image capturing system 130 may include a terminal device having only one camera 131, together with an external camera 132 or another terminal device 133 connected to it through a network (such as a wired or wireless network). The camera 131 may serve as the main camera of the image capturing system 130, and the camera of the external camera 132 or of the other terminal device 133 as the secondary camera, so that a first image is acquired through the camera 131 and a second image through the camera of the external camera 132 or of the other terminal device 133. Of course, the external camera 132 may instead serve as the main camera of the image capturing system 130 and the camera 131 as the secondary camera; this exemplary embodiment is not particularly limited in this respect.
In a possible application scenario, the image alignment method may also be performed by a server connected to the terminal device 110, the terminal device 120 or the camera system 130 via a network, and the image alignment apparatus may be provided in the server. For example, in an exemplary embodiment, the terminal device 110, the terminal device 120, or the image capturing system 130 may collect at least two frames of images to be processed, upload the images to be processed to a server, and after the server generates a target optical flow image or an image alignment result by using the image alignment method provided by the embodiment of the present disclosure, transmit the target optical flow image or the image alignment result to the terminal device 110, the terminal device 120, or the image capturing system 130 for post-processing, and so on.
In one possible technical scheme, a target frame in a video and the frame preceding it are obtained, together with a deblurred map of the preceding frame, where both the target frame and the preceding frame are blurred images; a video deblurring model then processes the input target frame, the preceding frame, and the deblurred map of the preceding frame to obtain a deblurred result image of the target frame. The video deblurring model comprises a feature extraction module, an alignment and occlusion correction module, and a reconstruction and fusion module; it performs occlusion correction while mining temporal relationships during alignment, addressing the occlusion problem encountered when aligning images so as to achieve a better video deblurring effect. However, this scheme learns the occlusion correction module through a deep convolutional network, and the module does not explicitly produce a corrected alignment result for the occlusion region; it only functions together with the subsequent deblurring filter. Its extensibility and transferability are therefore poor, and it is difficult to combine with more traditional fusion algorithms.
At present, the related art lacks an effective and explicit post-processing method for alignment artifacts in the occlusion region of images to be processed, so uneven optical flow transitions readily occur at parallax discontinuities, the optical flow image is inaccurate, and the accuracy of the image alignment result is poor.
Based on one or more of the problems in the related art, the present disclosure provides an image alignment method. The image alignment method and the image alignment apparatus according to exemplary embodiments of the present disclosure are described in detail below, taking a terminal device executing the method as an example.
Fig. 2 shows a schematic flow chart of an image alignment method in the present exemplary embodiment, which may include the following steps S210 to S240:
In step S210, at least two frames of images to be processed are acquired.
In an exemplary embodiment, the images to be processed refer to at least two frames captured by at least two cameras started when the user triggers a capture instruction. Optionally, the images to be processed may include a first image and a second image obtained by capturing the same subject with different cameras, where the different cameras may be at least two cameras started by the terminal device. The at least two cameras may be a combination of a main camera and a secondary camera of the terminal device, or a combination of different types of secondary cameras; for example, they may be a combination of a main camera and an ultra-wide-angle camera, a combination of a main camera and a telephoto camera, or of course a combination of an ultra-wide-angle camera and a telephoto camera.
It can be understood that the at least two frames of images to be processed may also be images of the same subject stored in advance in a storage unit, or images with similar image content transmitted by other terminal devices in a wired or wireless manner; this embodiment does not limit the method of obtaining the at least two frames of images to be processed in any way.
It can be appreciated that the first image may be the image captured by the main camera, i.e., the preview image displayed on the image user interface of the terminal device and visible to the user. The second image may be the image captured by the secondary camera; since at least two cameras are started and two images are acquired at the same time during shooting, while the image user interface of the terminal device can generally display only one image, the second image may not be visible to the user. That is, the acquired second image may be image data used only to assist the first image in operations such as image alignment, image fusion, image stitching, or image segmentation. Of course, in some scenes both the first image and the second image may be displayed in the graphical user interface, and this example embodiment is not particularly limited in this respect.
The angle of view of the first image and that of the second image are generally different; for example, the angle of view of the first image may be larger than that of the second image, although it may also be smaller than or equal to it, as determined by the camera parameters during shooting, and this embodiment is not limited thereto. For ease of understanding and explanation, the image alignment method in this embodiment is explained taking as an example the alignment of a first image that is a wide-angle image captured by the main camera with a second image that is an ultra-wide-angle image captured by the ultra-wide-angle camera.
In step S220, an image registration process is performed on the to-be-processed images, so as to obtain an initial optical flow image between the to-be-processed images.
In an exemplary embodiment, image registration processing refers to the process of aligning or fusing two or more images captured from different sources, at different times, or from different angles. Image registration establishes a correspondence between matching points of two images and aligns them spatially so as to minimize the error, i.e., to reach a consistent proximity measure between the two images. The registration may be based on feature point matching or on gray-scale matching; for example, image registration of the images to be processed may be implemented by an optical flow method, such as the Horn-Schunck algorithm, the Lucas-Kanade algorithm, or DeepFlow, or other types of optical flow algorithms, which this exemplary embodiment does not particularly limit.
The optical flow method finds the correspondence between a previous frame and the current frame by using the temporal changes of pixels in an image sequence and the correlation between adjacent frames, thereby calculating the motion information of objects between adjacent frames. Although the images to be processed may be captured by different cameras at the same moment, optical flow vectors can still arise from the movement of a foreground object in the scene, the movement of the cameras, or the joint movement of both. The initial optical flow image is an image containing the optical flow value of each pixel, obtained after image registration processing is performed on the images to be processed.
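For illustration only, the following minimal Python sketch shows one way step S220 could be realized with OpenCV's Farneback dense optical flow; Farneback is merely one concrete choice among the optical flow algorithms named above, and the function name and parameter values are assumptions, not part of the disclosure.

```python
# Illustrative sketch of step S220: compute a dense initial optical flow image
# between two images to be processed. Farneback is an assumed concrete choice;
# Horn-Schunck, Lucas-Kanade, DeepFlow, etc. would be equally admissible.
import cv2
import numpy as np

def compute_initial_flow(first_image: np.ndarray, second_image: np.ndarray) -> np.ndarray:
    """Return an H x W x 2 array holding the (dx, dy) optical flow value of each pixel."""
    gray1 = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(second_image, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        gray1, gray2, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return flow  # the "initial optical flow image"
```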
In step S230, an occlusion region in the initial optical flow image is determined.
In an exemplary embodiment, the occlusion region refers to an image region of the initial optical flow image in which the optical flow values exhibit occlusion, tearing, distortion, or similar phenomena. For example, the occlusion region can be determined in the initial optical flow image through forward-backward consistency detection, which rests on a prior assumption: in theory, the forward optical flow and the backward optical flow of a static pixel between two images should be equal in magnitude and opposite in direction. Thus, the optical flow values from the first image to the second image can be calculated, then the optical flow values from the second image to the first image, and finally the two sets of optical flow values compared, with the image regions having abnormal optical flow values determined to be occlusion regions. The occlusion region may also be detected by means of local pixel differences of the optical flow; for example, statistics of the optical flow value of the current pixel and the optical flow values of surrounding pixels may be collected, image regions with large optical flow differences identified from those statistics, and such regions used as occlusion regions. Of course, determination of the occlusion region in the initial optical flow image may also be implemented in other ways, which this example embodiment does not particularly limit.
In step S240, the initial optical flow image is restored based on the occlusion region, a target optical flow image is determined, and image alignment of the image to be processed is achieved through the target optical flow image.
In an exemplary embodiment, the target optical flow image is the optical flow image obtained by restoring the occlusion region in the initial optical flow image; the optical flow values of its pixels are clean, with no optical flow occlusion, tearing, distortion, or similar phenomena.
The initial optical flow image can be restored based on the occlusion region. For example, a non-occlusion region can be determined in the initial optical flow image from the occlusion region, and the occlusion region restored using the normal optical flow values of the non-occlusion region that lie close to it. If the area of the occlusion region is small, it can be filled according to the average of the nearby normal optical flow values; when the area of the occlusion region is large, the optical flow values of its pixels can be updated layer by layer from the outside in, by first-order Taylor expansion from the nearby normal optical flow values, until the optical flow values of all pixels in the occlusion region have been updated. Of course, other types of processing that can complete the filling of the occlusion region in the initial optical flow image are also possible; this example implementation is not particularly limited.
After the target optical flow image between the images to be processed is obtained, alignment of two or more images can be achieved according to the optical flow values in the target optical flow image, improving the accuracy of the image alignment result. Computer vision tasks such as image fusion, image segmentation, and image stitching can then be performed on the aligned images, effectively improving their accuracy and precision. For illustration, a sketch of such flow-based warping follows.
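This is a minimal sketch, assuming OpenCV, an H x W x 2 per-pixel (dx, dy) flow layout, and an illustrative function name:

```python
# Illustrative sketch of the final warp in step S240: align the second image
# to the first by remapping each pixel along its optical flow vector.
import cv2
import numpy as np

def align_with_flow(second_image: np.ndarray, target_flow: np.ndarray) -> np.ndarray:
    h, w = target_flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + target_flow[..., 0]).astype(np.float32)
    map_y = (grid_y + target_flow[..., 1]).astype(np.float32)
    return cv2.remap(second_image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```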
Next, the technical contents in step S210 to step S240 will be described in detail.
In an exemplary embodiment, restoration of the initial optical flow image based on the occlusion region may be achieved as follows:
The non-occlusion region may be determined from the occlusion region in the initial optical flow image, and the occlusion region filled in based on the optical flow values in the non-occlusion region to determine the target optical flow image. Here, the non-occlusion region refers to the image region of the initial optical flow image that does not belong to the occlusion region; compared with the occlusion region, the optical flow values of pixels in the non-occlusion region are clean and accurate. The non-occlusion region can be determined by screening the initial optical flow image directly according to the pixel positions in the occlusion region.
Filling of the optical flow values in the occlusion region can be realized through the normal optical flow values of the non-occlusion region, so as to update or recover the optical flow values in the occlusion region. Filling with optical flow values from the non-occlusion region ensures a smooth transition of optical flow values between the occlusion region and the non-occlusion region, which improves the precision and accuracy of the target optical flow image to a certain extent.
Optionally, if the area of the occlusion region is detected to be smaller than or equal to a preset area threshold, the occlusion region may be filled directly according to the normal optical flow values around it in the non-occlusion region; for example, it may be filled according to the average of those normal optical flow values, or according to their maximum or median. For illustration, a sketch of this small-area filling follows.
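This is a minimal sketch, assuming an H x W x 2 flow array and a boolean occlusion mask; the 5-pixel window radius, the mean statistic, and all names are illustrative assumptions:

```python
# Illustrative sketch: fill a small occlusion region with the mean of nearby
# normal (non-occluded) optical flow values. A median or maximum could be
# substituted for the mean, as the text notes.
import numpy as np

def fill_small_region(flow: np.ndarray, occluded: np.ndarray, radius: int = 5) -> np.ndarray:
    out = flow.copy()
    h, w = occluded.shape
    for y, x in zip(*np.nonzero(occluded)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        valid = ~occluded[y0:y1, x0:x1]          # nearby non-occluded pixels
        if valid.any():
            out[y, x] = flow[y0:y1, x0:x1][valid].mean(axis=0)
    return out
```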
Alternatively, if the area of the occlusion region is detected to be greater than the preset area threshold, the occlusion region may be divided into edge pixels and intermediate pixels. Edge pixels are the pixels of the occlusion region adjacent to the non-occlusion region; for example, the edge pixels may be the one layer, or the two layers, of pixels in the occlusion region adjacent to the non-occlusion region, and this example implementation does not limit the selection range of the edge pixels in any way. Intermediate pixels are the pixels in the occlusion region that are not adjacent to the non-occlusion region, or are farther from its edge; after the pixel positions of the edge pixels are determined, the remaining pixels in the occlusion region may be taken as the intermediate pixels.
Specifically, if the area of the occlusion region is detected to be greater than the preset area threshold, filling the occlusion region based on the optical flow values in the non-occlusion region may be implemented through the steps in fig. 3. Referring to fig. 3, the method may specifically include:
Step S310, determining adjacent optical flow values corresponding to the edge pixels in the non-occlusion region, and updating the optical flow values of the edge pixels according to the adjacent optical flow values to obtain a first optical flow value;
Step S320, updating the optical flow values of the intermediate pixels layer by layer from outside to inside based on the first optical flow value, or based on the first optical flow value and the adjacent optical flow values, to obtain a second optical flow value;
Step S330, performing filling processing on the occlusion region according to the first optical flow value and the second optical flow value, and determining the target optical flow image.
Here, an adjacent optical flow value is the anomaly-free optical flow value of a pixel in the non-occlusion region that lies close to the occlusion region; for example, it may be the anomaly-free optical flow value of one or more layers of pixels surrounding the occlusion region in the non-occlusion region, or of a pixel in the non-occlusion region within a certain range of the occlusion region, which this exemplary embodiment does not limit.
The first optical flow value is the new optical flow value obtained by updating the abnormal optical flow value of an edge pixel according to its adjacent optical flow values, and the second optical flow value is the new optical flow value obtained by updating the abnormal optical flow value of an intermediate pixel according to the first optical flow values of the edge pixels. It should be noted that "first" and "second" in "first optical flow value" and "second optical flow value" are used only to distinguish the updated optical flow values corresponding to edge pixels and intermediate pixels; they carry no special meaning and should not impose any special limitation on this exemplary embodiment.
After the first optical flow values of the edge pixels are obtained, the intermediate pixels can be updated based on the first optical flow values. Specifically, if the pixels adjacent to an intermediate pixel on its outer side are only edge pixels, the optical flow value of the intermediate pixel can be updated layer by layer from outside to inside according to the first optical flow values of the edge pixels; if the adjacent pixels on its outer side include both edge pixels and pixels of the non-occlusion region, the optical flow value of the intermediate pixel can be updated layer by layer from outside to inside according to both the first optical flow values of the edge pixels and the adjacent optical flow values in the non-occlusion region.
For example, suppose the occlusion region has three layers of pixels from outside to inside: the outermost layer consists of edge pixels, and the inner two layers of intermediate pixels. The edge pixels on the outermost layer of the occlusion region can first be updated according to the adjacent optical flow values in the non-occlusion region to obtain first optical flow values; the optical flow values of the second layer of pixels are then updated according to the first optical flow values to obtain new optical flow values; and the optical flow values of the third layer of pixels are updated according to the new optical flow values of the second layer. In this way, the second optical flow values of the intermediate pixels are obtained by updating layer by layer from the outside in, as sketched below.
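This is a simplified sketch of the outside-in order only: a plain 3×3 neighbor mean stands in for the weighted first-order Taylor update detailed next, SciPy's binary erosion peels one layer per iteration, and all names are illustrative assumptions.

```python
# Illustrative sketch of the layer-by-layer update: peel the outermost layer
# of the occlusion mask, fill it from already-valid neighbors, and repeat
# until the mask is empty.
import numpy as np
from scipy.ndimage import binary_erosion

def fill_layer_by_layer(flow: np.ndarray, occluded: np.ndarray) -> np.ndarray:
    out, mask = flow.copy(), occluded.copy()
    while mask.any():
        inner = binary_erosion(mask)       # occlusion mask minus its outermost layer
        layer = mask & ~inner              # current layer of edge pixels
        for y, x in zip(*np.nonzero(layer)):
            ys = slice(max(0, y - 1), y + 2)
            xs = slice(max(0, x - 1), x + 2)
            valid = ~mask[ys, xs]          # non-occluded or already-filled neighbors
            if valid.any():
                out[y, x] = out[ys, xs][valid].mean(axis=0)
        mask = inner                       # move one layer inward
    return out
```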
In an exemplary embodiment, updating the optical flow values of the edge pixels according to the adjacent optical flow values to obtain the first optical flow value may be implemented through the steps in fig. 4, which may specifically include:
Step S410, determining at least one adjacent optical flow value in the non-occlusion region according to the pixel position of the edge pixel;
Step S420, determining at least one updated optical flow value of the edge pixel based on the adjacent optical flow values;
Step S430, performing weighting processing on the updated optical flow values to obtain a first optical flow value corresponding to the edge pixel.
For each edge pixel, a plurality of adjacent pixels may be determined based on its pixel position; for example, the 8 surrounding pixels may be determined with the pixel position of a given edge pixel as the center, the pixels belonging to the non-occlusion region selected from those 8 pixels, and their optical flow values taken as the at least one adjacent optical flow value of the edge pixel in the non-occlusion region.
An updated optical flow value is the new optical flow value obtained by updating the optical flow value of an edge pixel using a single adjacent optical flow value. Since an edge pixel generally corresponds to several adjacent optical flow values, several updated optical flow values can be obtained for it; finally, the at least one updated optical flow value can be weighted to obtain the first optical flow value corresponding to the edge pixel.
Alternatively, determining at least one updated optical flow value of an edge pixel based on the adjacent optical flow values may be accomplished as follows:
The original optical flow value of the edge pixel can be obtained, and the optical flow gradient value of the edge pixel determined from the original optical flow value and the adjacent optical flow values; the distance from the adjacent pixel corresponding to each adjacent optical flow value to the edge pixel can then be determined; and an updated optical flow value of the edge pixel is determined from the original optical flow value, the optical flow gradient value, and the distance data. The original optical flow value refers to the abnormal optical flow value of the edge pixel in the occlusion region.
Fig. 5 schematically illustrates a schematic diagram of determining a first optical flow value in an exemplary embodiment of the present disclosure.
Referring to fig. 5, for an image 500 to be processed, the occlusion region 501 (the darker enclosed region in the figure) can be determined. After the occlusion region 501 is obtained, the optical flow pixels in it can be smoothly filled by weighted first-order Taylor expansion. Filling starts from the thick-line frame region at the outermost periphery and proceeds gradually inward, ring by ring: the abnormal pixels of the outermost ring are filled first, then the next ring inward, until the optical flow values of all pixels in the occlusion region 501 have been updated; the arrows indicate the inward update direction at each position.
Specifically, the occlusion region 501 may include edge pixels and intermediate pixels, with the pixels where the occlusion region adjoins the non-occlusion region taken as edge pixels; for example, the edge pixels may be Q1-Q13, and the pixels of the occlusion region other than the edge pixels taken as intermediate pixels. Determining at least one updated optical flow value of an edge pixel based on the adjacent optical flow values can be implemented by first-order Taylor expansion. Suppose the edge pixel Q1 is a pixel in the occlusion region 501; a neighborhood region R_q centered on Q1 is set, such as the 8-neighborhood of a 3×3 window, and the optical flow values of the adjacent pixels P1-P5 of the non-occlusion region inside R_q are taken as the adjacent optical flow values of the edge pixel Q1. For example, the updated optical flow value of the edge pixel Q1 can be calculated by relation (1):

flow_P(Q) = flow(P) + ∇flow(Q) · (Q − P)        (1)

where flow_P(Q) represents the updated optical flow value obtained by updating the original optical flow value of the edge pixel Q1 from the adjacent optical flow value of the adjacent pixel P; flow(P) represents the adjacent optical flow value of the adjacent pixel P; flow(Q) represents the original optical flow value of the edge pixel Q1; ∇flow(Q) represents the optical flow gradient value of the edge pixel Q1 (determined from the original and adjacent optical flow values), including gradient components in the x-direction and the y-direction; and (Q − P) = (Δx, Δy)^T represents the positional offset between the edge pixel Q1 and the adjacent pixel P.
Optionally, one updated optical flow value is obtained for each adjacent pixel: for example, the adjacent pixel P1 and the edge pixel Q1 yield one updated optical flow value through the first-order Taylor expansion of relation (1), and the adjacent pixels P2-P5 each likewise yield an updated optical flow value with the edge pixel Q1. The updated optical flow values may then be weighted to obtain the first optical flow value corresponding to the edge pixel, specifically through the following steps:
The distance from the adjacent pixel corresponding to each adjacent optical flow value to the edge pixel can be determined, and the weight data determined according to a preset correlation mapping relationship and the distance data; the updated optical flow values are then weighted according to the weight data to obtain the first optical flow value corresponding to the edge pixel. The correlation mapping relationship refers to data representing the correspondence between a weight and the distance from an adjacent pixel to an edge pixel, and it may be a linear or a nonlinear mapping. For example, with continued reference to fig. 5, the updated optical flow values may be weighted by relation (2) to obtain the first optical flow value corresponding to the edge pixel:

flow_q = ( Σ_{P∈R_q} w_q(P) · flow_P(Q) ) / ( Σ_{P∈R_q} w_q(P) )        (2)

where flow_q represents the first optical flow value corresponding to the edge pixel Q1, flow_P(Q) represents the updated optical flow value obtained by updating the original optical flow value of the edge pixel Q1 from the adjacent optical flow value of the adjacent pixel P, and w_q(P) represents the weight data corresponding to each updated optical flow value. Specifically, w_q(P) may be expressed, for example, as the inverse of the distance between the two pixels, as in relation (3):

w_q(P) = 1 / sqrt( (x_q − x_p)² + (y_q − y_p)² )        (3)

where x_q and y_q represent the pixel position coordinates of the edge pixel Q1 in the image to be processed, and x_p and y_p represent the pixel position coordinates of the adjacent pixel P in the image to be processed.
Determining the updated optical flow values through first-order Taylor expansion, determining the weight data from the distances between adjacent pixels and edge pixels, and weighting the at least one updated optical flow value of each edge pixel based on the weight data improves the smoothness and accuracy of the updated optical flow values in the occlusion region, guarantees the smoothness of the generated target optical flow image, and improves the accuracy of the image alignment result.
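A minimal sketch of relations (1)-(3) for a single edge pixel follows. Assumptions for illustration: the optical flow gradient is evaluated at the contributing non-occluded neighbor P (a common choice in Taylor-expansion inpainting) rather than at the edge pixel, the weight is the inverse distance as in relation (3), and all names are hypothetical.

```python
# Illustrative sketch: combine one first-order Taylor estimate per adjacent
# non-occluded pixel P into a distance-weighted first optical flow value.
import numpy as np

def taylor_update(flow: np.ndarray, grad: np.ndarray,
                  q: tuple, neighbors: list) -> np.ndarray:
    """flow: H x W x 2 optical flow values; grad: H x W x 2 x 2 where
    grad[y, x, c] = (d flow_c / dx, d flow_c / dy); q = (yq, xq) edge pixel;
    neighbors = non-empty list of (yp, xp) non-occluded pixels in R_q."""
    yq, xq = q
    estimates, weights = [], []
    for yp, xp in neighbors:
        dx, dy = xq - xp, yq - yp                       # offset (Q - P), relation (1)
        est = flow[yp, xp] + grad[yp, xp] @ np.array([dx, dy], dtype=float)
        dist = float(np.hypot(dx, dy))
        estimates.append(est)
        weights.append(1.0 / dist if dist > 0 else 1.0)  # relation (3), assumed form
    w = np.asarray(weights)
    return (w[:, None] * np.asarray(estimates)).sum(axis=0) / w.sum()  # relation (2)
```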
In an exemplary embodiment, the images to be processed may include a first image and a second image; for example, the first image may be an ultra-wide-angle image and the second image a normal wide-angle image. Specifically, determining the occlusion region in the initial optical flow image may be implemented through the steps in fig. 6, which, referring to fig. 6, may specifically include:
Step S610, determining a first initial optical flow image from the first image to the second image, and determining a second initial optical flow image from the second image to the first image;
Step S620, determining an optical flow difference value of the pixels at the same position according to the first initial optical flow image and the second initial optical flow image;
Step S630, determining an occlusion region in the initial optical flow image based on the pixels whose optical flow difference value is greater than or equal to a preset optical flow difference threshold.
The first initial optical flow image is the optical flow image corresponding to the forward optical flow from the first image to the second image, and the second initial optical flow image is the optical flow image corresponding to the backward optical flow from the first image to the second image, that is, the forward optical flow from the second image to the first image. It should be noted that "first" and "second" in "first initial optical flow image" and "second initial optical flow image" only distinguish optical flow images of different directions between the images to be processed; they carry no special meaning and should not impose any special limitation on this exemplary embodiment.
Fig. 7 schematically illustrates a schematic diagram of determining occlusion regions in an exemplary embodiment of the present disclosure.
Referring to fig. 7, a first initial optical flow image 701 from the first image to the second image and a second initial optical flow image 702 from the second image to the first image may be determined; the optical flow difference value of the pixels at the same position is determined according to the first initial optical flow image 701 and the second initial optical flow image 702, and the image area corresponding to the pixels whose optical flow difference value is greater than or equal to the preset optical flow difference threshold is then used as the occlusion region 704 in the initial optical flow image 703. For illustration, a sketch of this forward-backward check follows.
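This is a minimal sketch, assuming OpenCV-style H x W x 2 flow arrays; the threshold value and all names are illustrative assumptions:

```python
# Illustrative sketch of steps S610-S630: forward-backward consistency.
# For a static, unoccluded pixel, the forward flow plus the backward flow
# sampled at the forward-warped position should be close to zero.
import cv2
import numpy as np

def detect_occlusion_fb(flow_fwd: np.ndarray, flow_bwd: np.ndarray,
                        threshold: float = 1.5) -> np.ndarray:
    h, w = flow_fwd.shape[:2]
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (gx + flow_fwd[..., 0]).astype(np.float32)
    map_y = (gy + flow_fwd[..., 1]).astype(np.float32)
    bwd_at_fwd = cv2.remap(flow_bwd.astype(np.float32), map_x, map_y, cv2.INTER_LINEAR)
    diff = np.linalg.norm(flow_fwd + bwd_at_fwd, axis=2)   # optical flow difference value
    return diff >= threshold                               # boolean occlusion mask
```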
Alternatively, determination of the occlusion region in the initial optical flow image may also be achieved as follows: optical flow value statistics of target pixels and of the pixels surrounding each target pixel in the initial optical flow image may be determined, and the occlusion region determined from the target pixels whose optical flow value statistics are greater than or equal to a preset statistical data threshold. The optical flow value statistics may be the average of the optical flow values of the pixels surrounding the target pixel, their variance, the maximum pixel difference between the optical flow value of the target pixel and those of the surrounding pixels, or a gradient value of the optical flow image, and this example embodiment is not limited thereto.
That is, the maximum pixel difference between the optical flow value of a target pixel in the initial optical flow image and the surrounding pixels of its neighborhood (such as a 9×9 range), or the variance, or a gradient value of the initial optical flow image (such as the Sobel operator or the Laplacian operator) can be calculated; the larger the statistic, the more abnormal the region. Abnormal optical flow regions whose statistics are greater than or equal to the preset statistical data threshold are detected through that threshold and used as occlusion regions in the initial optical flow image. For illustration, a sketch of this statistics-based detection follows.
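This is a minimal sketch of the statistics-based variant, using the local variance of each flow channel in a 9×9 window; the window size, the variance statistic, the threshold, and all names are illustrative assumptions:

```python
# Illustrative sketch: mark pixels whose local optical flow variance exceeds a
# preset statistical data threshold as belonging to the occlusion region.
import cv2
import numpy as np

def detect_occlusion_stats(flow: np.ndarray, win: int = 9,
                           threshold: float = 4.0) -> np.ndarray:
    mask = np.zeros(flow.shape[:2], dtype=bool)
    for c in range(2):                                   # x and y flow channels
        ch = flow[..., c].astype(np.float32)
        mean = cv2.boxFilter(ch, -1, (win, win))         # local mean
        mean_sq = cv2.boxFilter(ch * ch, -1, (win, win)) # local mean of squares
        mask |= (mean_sq - mean * mean) >= threshold     # local variance test
    return mask
```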
In summary, in this exemplary embodiment, the image registration process may be performed on at least two frames of to-be-processed images to obtain an initial optical flow image between the to-be-processed images, then, an occlusion area in the initial optical flow image is determined, the initial optical flow image is repaired based on the occlusion area, a target optical flow image is determined, and image alignment of the to-be-processed images is achieved through the target optical flow image. The method has the advantages that the blocking area in the image to be processed can be detected rapidly through the optical flow image, the accuracy of the blocking area is effectively guaranteed, the restoration of the initial optical flow image can be achieved through the detected blocking area, the accuracy of the target optical flow image is improved, and therefore the accuracy of the image alignment result is effectively improved.
In computer vision tasks, particularly in a dual-camera fusion scene, the occlusion region is identified through the optical flow images obtained after registration of the first image and the second image, and the abnormal region is smoothly filled using the optical flow values of the non-occlusion region near the occlusion region. Through the outside-in update strategy and the weighted first-order Taylor expansion algorithm, a smooth transition of the filled optical flow can be realized, reducing the distortion and tearing of the optical flow in the originally abnormal region and thereby improving the presentation of dual-camera fusion.
It is noted that the above-described figures are merely schematic illustrations of processes involved in a method according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Further, referring to fig. 8, in this exemplary embodiment, there is further provided an image alignment apparatus 800, including an image acquisition module 810, an optical flow image determination module 820, an occlusion region determination module 830, and an optical flow image update module 840. Wherein:
the image acquisition module 810 is configured to acquire at least two frames of images to be processed;
The optical flow image determining module 820 is configured to perform image registration processing on the to-be-processed images, so as to obtain initial optical flow images between the to-be-processed images;
the occlusion region determination module 830 is configured to determine an occlusion region in the initial optical flow image;
The optical flow image update module 840 is configured to repair the initial optical flow image based on the occlusion region, determine a target optical flow image, and implement image alignment of the image to be processed through the target optical flow image.
In an exemplary embodiment, the optical flow image update module 840 may be configured to:
Determining a non-occlusion region in the initial optical flow image according to the occlusion region;
And filling the occlusion region based on the optical flow value in the non-occlusion region, and determining the target optical flow image.
In an exemplary embodiment, the occlusion region may include edge pixels and intermediate pixels, and the optical-flow image update module 840 may be configured to:
determining adjacent optical flow values corresponding to the edge pixels in the non-occlusion region, and updating the optical flow values of the edge pixels according to the adjacent optical flow values to obtain first optical flow values;
based on the first optical flow value, or on the first optical flow value and the adjacent optical flow values, updating the optical flow values of the intermediate pixels layer by layer from outside to inside to obtain a second optical flow value;
and filling the occlusion region according to the first optical flow value and the second optical flow value, and determining the target optical flow image.
In an exemplary embodiment, the optical flow image update module 840 may be configured to:
determining at least one adjacent optical flow value in the non-occlusion region according to the pixel position of the edge pixel;
determining at least one updated optical flow value of the edge pixel based on the adjacent optical flow values;
and performing weighting processing on the updated optical flow values to obtain a first optical flow value corresponding to the edge pixel.
In an exemplary embodiment, the optical flow image update module 840 may be configured to:
acquiring an original optical flow value of the edge pixel;
determining an optical flow gradient value of the edge pixel from the original optical flow value and the adjacent optical flow values;
determining the distance from the adjacent pixel corresponding to each adjacent optical flow value to the edge pixel;
and determining an updated optical flow value of the edge pixel through the original optical flow value, the optical flow gradient value, and the distance data.
In an exemplary embodiment, the optical flow image update module 840 may be configured to:
determining the distance from the adjacent pixel corresponding to each adjacent optical flow value to the edge pixel, and determining weight data according to a preset correlation mapping relationship and the distance data;
and weighting the updated optical flow values according to the weight data to obtain a first optical flow value corresponding to the edge pixel.
In an exemplary embodiment, the image to be processed may include a first image and a second image, and the occlusion region determination module 830 may be configured to:
determining a first initial optical flow image of the first image to the second image, and determining a second initial optical flow image of the second image to the first image;
determining an optical flow difference value of the pixels at the same position according to the first initial optical flow image and the second initial optical flow image;
and determining an occlusion region in the initial optical flow image based on the pixels whose optical flow difference value is greater than or equal to a preset optical flow difference threshold.
In an exemplary embodiment, occlusion region determination module 830 may be configured to:
determining optical flow value statistics of target pixels in the initial optical flow image and surrounding pixels of the target pixels;
And determining an occlusion region in the initial optical flow image according to the target pixels of which the optical flow value statistical data are greater than or equal to a preset statistical data threshold value.
The specific details of each module in the above apparatus are already described in the method section, and the details that are not disclosed can be referred to the embodiment of the method section, so that they will not be described in detail.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
Exemplary embodiments of the present disclosure also provide an electronic device. The electronic device may be the above-described terminal device 110, terminal device 120, a device of the image capturing system 130, or the server. In general, the electronic device may include a processor and a memory for storing executable instructions of the processor, the processor being configured to perform the above-described image alignment method via execution of the executable instructions.
The configuration of the electronic device will be exemplarily described below using the mobile terminal 900 of fig. 9 as an example. It will be appreciated by those skilled in the art that the configuration of fig. 9 can also be applied to stationary type devices in addition to components specifically for mobile purposes.
As shown in fig. 9, the mobile terminal 900 may specifically include: processor 901, memory 902, bus 903, mobile communication module 904, antenna 1, wireless communication module 905, antenna 2, display 906, camera module 907, audio module 908, power module 909, and sensor module 910.
The processor 901 may include one or more processing units. For example, the processor 901 may include an AP (Application Processor), a modem processor, a GPU (Graphics Processing Unit), an ISP (Image Signal Processor), a controller, an encoder, a decoder, a DSP (Digital Signal Processor), a baseband processor, and/or an NPU (Neural-network Processing Unit), and the like. The image alignment method in this exemplary embodiment may be performed by the AP, GPU, or DSP; when the method involves neural network related processing, it may be performed by the NPU, which can, for example, load neural network parameters and execute neural network related algorithm instructions.
An encoder may encode (i.e., compress) an image or video to reduce the data size for storage or transmission. A decoder may decode (i.e., decompress) the encoded data of an image or video to recover the image or video data. The mobile terminal 900 may support one or more encoders and decoders, for example for image formats such as JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics), and BMP (Bitmap), and video formats such as MPEG-1, MPEG-2, H.263, H.264, and HEVC (High Efficiency Video Coding).
The processor 901 may form a connection with the memory 902 or other components via the bus 903.
Memory 902 may be used to store computer-executable program code that includes instructions. The processor 901 performs various functional applications of the mobile terminal 900 and data processing by executing instructions stored in the memory 902. The memory 902 may also store application data, such as files that store images, videos, and the like.
The communication functions of the mobile terminal 900 may be implemented by the mobile communication module 904, the antenna 1, the wireless communication module 905, the antenna 2, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 904 may provide a mobile communication solution of 3G, 4G, 5G, etc. applied on the mobile terminal 900. The wireless communication module 905 may provide wireless communication solutions for wireless local area networks, bluetooth, near field communications, etc. applied on the mobile terminal 900.
The display screen 906 is used to implement display functions such as displaying user interfaces, images, video, and the like. The image capturing module 907 is used to implement capturing functions, such as capturing images, videos, and the like. The audio module 908 is used to implement audio functions such as playing audio, capturing speech, etc. The power module 909 is configured to perform power management functions such as charging a battery, powering a device, monitoring a battery status, and the like.
The sensor module 910 may include one or more sensors for implementing corresponding sensing functions. For example, the sensor module 910 may include an inertial sensor for detecting a motion pose of the mobile terminal 900 and outputting inertial sensing data.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification. In some possible implementations, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of this specification, when the program product is run on the terminal device.
It should be noted that the computer readable medium shown in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Furthermore, the program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (11)

1. An image alignment method, comprising:
Acquiring at least two frames of images to be processed;
Performing image registration processing on the images to be processed to obtain initial optical flow images among the images to be processed;
Determining an occlusion region in the initial optical flow image;
Repairing the initial optical flow image based on the occlusion region, determining a target optical flow image, and realizing image alignment of the images to be processed through the target optical flow image.
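Purely for illustration, the following is a minimal Python sketch of the claimed pipeline, assuming OpenCV's Farneback algorithm for the registration step (the claim does not name a specific registration method); occlusion_mask and inpaint_flow are hypothetical helper names, sketched after claims 6 and 7 below.

```python
import cv2
import numpy as np

def align(img_ref, img_src):
    # Steps 1-2: image registration yielding initial optical flow images.
    # Farneback is an assumed choice; the claim names no specific algorithm.
    g_ref = cv2.cvtColor(img_ref, cv2.COLOR_BGR2GRAY)
    g_src = cv2.cvtColor(img_src, cv2.COLOR_BGR2GRAY)
    fwd = cv2.calcOpticalFlowFarneback(g_ref, g_src, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)
    bwd = cv2.calcOpticalFlowFarneback(g_src, g_ref, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)
    # Steps 3-4: detect the occlusion region, repair the flow there, and
    # warp img_src onto img_ref with the repaired (target) flow.
    mask = occlusion_mask(fwd, bwd)        # hypothetical, see claim 7 sketch
    target_flow = inpaint_flow(fwd, mask)  # hypothetical, see claim 6 sketch
    h, w = g_ref.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    return cv2.remap(img_src, xs + target_flow[..., 0],
                     ys + target_flow[..., 1], cv2.INTER_LINEAR)
```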
2. The method of claim 1, wherein the repairing the initial optical flow image based on the occlusion region, determining a target optical flow image, comprises:
Determining a non-occlusion region in the initial optical flow image according to the occlusion region;
And filling the occlusion region based on the optical flow value in the non-occlusion region, and determining the target optical flow image.
3. The method of claim 2, wherein the occlusion region includes edge pixels and intermediate pixels, the filling the occlusion region based on optical flow values in the non-occlusion region, determining the target optical flow image, comprising:
determining adjacent optical flow values corresponding to the edge pixels in the non-occlusion region, and updating the optical flow values of the edge pixels according to the adjacent optical flow values to obtain first optical flow values;
based on the first optical flow value, or the first optical flow value and the adjacent optical flow value, updating the optical flow values of the intermediate pixels layer by layer from outside to inside to obtain a second optical flow value;
And filling the occlusion region according to the first optical flow value and the second optical flow value, and determining the target optical flow image.
4. A method according to claim 3, wherein determining, in the non-occlusion region, adjacent optical flow values corresponding to the edge pixels, and updating the optical flow values of the edge pixels according to the adjacent optical flow values, to obtain a first optical flow value, comprises:
determining at least one adjacent optical flow value in the non-occlusion region according to the pixel position of the edge pixel;
Determining at least one updated optical flow value of the edge pixel based on the adjacent optical flow values;
And weighting the updated optical flow values to obtain a first optical flow value corresponding to the edge pixel.
5. The method of claim 4, wherein the determining at least one updated optical flow value of the edge pixel based on the adjacent optical flow values comprises:
Acquiring an original optical flow value of the edge pixel;
determining an optical flow gradient value for the edge pixel from the original optical flow value and the adjacent optical flow value;
Determining distance data from adjacent pixels corresponding to the adjacent optical flow values to the edge pixels;
and determining the updated optical flow value of the edge pixel through the original optical flow value, the optical flow gradient value, and the distance data.
6. The method of claim 4, wherein the weighting the updated optical flow values to obtain the first optical flow value corresponding to the edge pixel comprises:
determining distance data from the adjacent pixels corresponding to the adjacent optical flow values to the edge pixel, and determining weight data according to a preset mapping relation and the distance data;
and weighting the updated optical flow values according to the weight data to obtain the first optical flow value corresponding to the edge pixel.
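As one rough reading of claims 3 to 6, the sketch below fills the occlusion region layer by layer from the outside in. It simplifies under stated assumptions: the gradient-based update of claim 5 is reduced to a distance-weighted average of the valid neighbouring optical flow values, and inverse distance stands in for the preset mapping relation of claim 6, which the claims do not spell out.

```python
import numpy as np

def inpaint_flow(flow, mask, eps=1e-6):
    """Fill occluded optical flow values layer by layer, edge inward.

    flow: (H, W, 2) array; mask: (H, W) bool, True where occluded.
    """
    flow, mask = flow.copy(), mask.copy()
    h, w = mask.shape
    neighbours = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0)]
    while mask.any():
        layer = []  # edge pixels of the current occlusion boundary
        for y, x in zip(*np.nonzero(mask)):
            vals, wts = [], []
            for dy, dx in neighbours:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                    d = (dy * dy + dx * dx) ** 0.5
                    vals.append(flow[ny, nx])
                    wts.append(1.0 / (d + eps))  # inverse-distance weight
            if vals:  # at least one valid adjacent optical flow value
                wts = np.asarray(wts)[:, None]
                layer.append((y, x,
                              (np.asarray(vals) * wts).sum(0) / wts.sum()))
        if not layer:
            break  # nothing reachable; avoid an infinite loop
        for y, x, v in layer:  # commit the whole layer, then peel inward
            flow[y, x] = v
            mask[y, x] = False
    return flow
```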
7. The method of claim 1, wherein the image to be processed comprises a first image and a second image, the determining occlusion regions in the initial optical flow image comprising:
determining a first initial optical flow image of the first image to the second image, and determining a second initial optical flow image of the second image to the first image;
determining an optical flow differential value of the pixels at the same position according to the first initial optical flow image and the second initial optical flow image;
and determining an occlusion region in the initial optical flow image based on pixels of which the optical flow differential value is greater than or equal to a preset optical flow differential threshold value.
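One plausible implementation of claim 7 is the classic forward-backward consistency check sketched below; thresh = 1.0 pixel is an assumed stand-in for the claimed preset optical flow difference threshold.

```python
import numpy as np

def occlusion_mask(flow_fwd, flow_bwd, thresh=1.0):
    """Flag pixels whose forward flow is not undone by the backward flow."""
    h, w = flow_fwd.shape[:2]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Where each pixel lands under the forward flow, clamped to the image.
    tx = np.clip(np.rint(xs + flow_fwd[..., 0]).astype(int), 0, w - 1)
    ty = np.clip(np.rint(ys + flow_fwd[..., 1]).astype(int), 0, h - 1)
    # For consistently mapped (non-occluded) pixels the forward flow and
    # the backward flow sampled at the landing point should cancel out.
    diff = flow_fwd + flow_bwd[ty, tx]
    return np.linalg.norm(diff, axis=-1) >= thresh
```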
8. The method of claim 1, wherein the determining an occlusion region in the initial optical flow image comprises:
determining optical flow value statistics of target pixels in the initial optical flow image and surrounding pixels of the target pixels;
And determining an occlusion region in the initial optical flow image according to the target pixels of which the optical flow value statistical data are greater than or equal to a preset statistical data threshold value.
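Claim 8 leaves the statistic open; as one assumed choice, the sketch below uses the local variance of the flow in a small window around each target pixel, with the window size and threshold as illustrative parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def occlusion_mask_stats(flow, win=5, thresh=4.0):
    """Flag target pixels whose local flow statistics exceed a threshold."""
    var = np.zeros(flow.shape[:2])
    for c in range(2):  # sum the per-component local variances
        m = uniform_filter(flow[..., c], size=win)
        m2 = uniform_filter(flow[..., c] ** 2, size=win)
        var += m2 - m ** 2
    return var >= thresh
```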
9. An image alignment apparatus, comprising:
The image acquisition module is used for acquiring at least two frames of images to be processed;
The optical flow image determining module is used for carrying out image registration processing on the images to be processed to obtain initial optical flow images among the images to be processed;
The occlusion region determining module is used for determining an occlusion region in the initial optical flow image;
And the optical flow image updating module is used for repairing the initial optical flow image based on the occlusion region, determining a target optical flow image, and realizing image alignment of the image to be processed through the target optical flow image.
10. A computer readable medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the method according to any one of claims 1 to 8.
11. An electronic device, comprising:
A processor; and
A memory for storing executable instructions of the processor;
Wherein the processor is configured to perform the method of any one of claims 1 to 8 via execution of the executable instructions.
CN202211651771.8A 2022-12-21 2022-12-21 Image alignment method and device, computer readable medium and electronic equipment Pending CN118247319A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211651771.8A CN118247319A (en) 2022-12-21 2022-12-21 Image alignment method and device, computer readable medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN118247319A 2024-06-25

Family

ID=91556039




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination