CN113077387A - Image processing method and device


Info

Publication number
CN113077387A
Authority
CN
China
Prior art keywords
attenuation
region
splicing
image
pixel
Prior art date
Legal status
Granted
Application number
CN202110402157.7A
Other languages
Chinese (zh)
Other versions
CN113077387B (en)
Inventor
田仁富
丁红艳
陈磊
刘刚
曾峰
徐鹏
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202110402157.7A
Publication of CN113077387A
Application granted
Publication of CN113077387B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T3/4053 Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image processing method and device. The method includes: acquiring a spliced image based on a first original image and a second original image, where the spliced image includes a splicing line, one side of the splicing line is a first splicing region and the other side is a second splicing region; determining a first attenuation region from the first splicing region, and performing attenuation processing on the first attenuation region based on the size of the first attenuation region, the attenuation speed value corresponding to the first attenuation region, and the distance between each pixel point in the first attenuation region and the splicing line; determining a second attenuation region from the second splicing region, and performing attenuation processing on the second attenuation region based on the size of the second attenuation region, the attenuation speed value corresponding to the second attenuation region, and the distance between each pixel point in the second attenuation region and the splicing line; and generating a target image based on the attenuated first splicing region and the attenuated second splicing region. Through this technical scheme, color differences and brightness differences can be reduced, and a good image splicing effect is achieved.

Description

Image processing method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
In the computer vision field and the image processing field, image stitching means: two or more frames of images with overlapped areas describing the same scene are spliced into a frame of panoramic image or high-resolution image (namely, ultra-wide view angle image). Image registration and image fusion are two main processes of image splicing, in the image registration process, the transformation relation between images needs to be determined, a mathematical model of image coordinate transformation is established, and the images are transformed to the same coordinate system by solving the parameters of the mathematical model. In the process of image fusion, images transformed to the same coordinate system need to be spliced into a panoramic image or a high-resolution image.
Because the photosensitive devices of different cameras have poor consistency, exposure parameters differ between cameras, and so on, images acquired by different cameras can differ significantly (for example, in color and/or brightness). When such images are spliced into a panoramic image or a high-resolution image, the result shows obvious splicing traces, and the image splicing effect is poor.
Disclosure of Invention
The application provides an image processing method, which comprises the following steps:
acquiring a spliced image based on a first original image and a second original image, wherein the spliced image comprises a spliced line, one side of the spliced line is a first spliced area determined based on the first original image, and the other side of the spliced line is a second spliced area determined based on the second original image;
determining a first attenuation region from the first splicing region, and performing attenuation processing on the first attenuation region based on the size of the first attenuation region, the configured attenuation speed value corresponding to the first attenuation region and the distance between the pixel point in the first attenuation region and the splicing line;
determining a second attenuation region from the second splicing region, and performing attenuation processing on the second attenuation region based on the size of the second attenuation region, the configured attenuation speed value corresponding to the second attenuation region and the distance between the pixel point in the second attenuation region and the splicing line;
generating a target image based on the attenuated first splicing region and the attenuated second splicing region, wherein the target image comprises the splicing line, one side of the splicing line is the attenuated first splicing region, and the other side of the splicing line is the attenuated second splicing region.
The present application provides an image processing apparatus, the apparatus including:
the device comprises an acquisition module, a judgment module and a display module, wherein the acquisition module is used for acquiring a spliced image based on a first original image and a second original image, the spliced image comprises a spliced line, one side of the spliced line is a first spliced area determined based on the first original image, and the other side of the spliced line is a second spliced area determined based on the second original image;
the processing module is used for determining a first attenuation region from the first splicing region, and performing attenuation processing on the first attenuation region based on the size of the first attenuation region, the configured attenuation speed value corresponding to the first attenuation region and the distance between the pixel point in the first attenuation region and the splicing line of the splicing region; determining a second attenuation region from the second splicing region, and performing attenuation processing on the second attenuation region based on the size of the second attenuation region, the configured attenuation speed value corresponding to the second attenuation region and the distance between the pixel point in the second attenuation region and the splicing line of the splicing region;
the generation module is used for generating a target image based on the first splicing area after the attenuation processing and the second splicing area after the attenuation processing, the target image comprises a splicing line of the splicing area, one side of the splicing line of the splicing area is the first splicing area after the attenuation processing, and the other side of the splicing line of the splicing area is the second splicing area after the attenuation processing.
The application provides an image processing method, comprising the following steps:
acquiring a spliced image;
and performing smoothing treatment on at least one line of pixel points in the spliced image, wherein the smoothing treatment comprises the following steps:
selecting one pixel point from the line of pixel points as a seam point;
in response to the seam point, performing smoothing processing on a plurality of other pixel points except the seam point based on a target attenuation factor, to generate a plurality of processed other pixel points; when the maximum attenuation factor is greater than 1, the variation between the pixel value of another pixel point after the smoothing processing and its pixel value before the smoothing processing decreases as the difference in transverse coordinate from the seam point increases; wherein the pixel values are luminance values and/or chrominance values.
As can be seen from the above technical solutions, in the embodiments of the present application, a first attenuation region in the first splicing region may be attenuated based on the size of the first attenuation region, the attenuation speed value corresponding to the first attenuation region, and the distance between each pixel point in the first attenuation region and the splicing line. Likewise, a second attenuation region in the second splicing region may be attenuated based on the size of the second attenuation region, the attenuation speed value corresponding to the second attenuation region, and the distance between each pixel point in the second attenuation region and the splicing line. After both attenuation regions are processed, the attenuated first splicing region and the attenuated second splicing region can be spliced into a target image (such as a panoramic image or a high-resolution image). In this way, a first original image and a second original image with large differences (such as color differences and/or brightness differences) can be spliced into a target image without obvious splicing traces: the color differences and brightness differences are reduced, the image splicing effect is good, and a smooth transition is achieved in the target image.
Drawings
FIG. 1 is a schematic flow chart diagram of an image processing method in one embodiment of the present application;
FIGS. 2A-2E are schematic diagrams of stitched images in one embodiment of the present application;
FIG. 3 is a schematic illustration of an attenuation region in one embodiment of the present application;
FIG. 4 is a schematic flow chart diagram of an image processing method according to another embodiment of the present application;
FIG. 5 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 6 is a hardware configuration diagram of an image processing apparatus in an embodiment of the present application.
Detailed Description
The embodiment of the application provides an image processing method, which can be applied to front-end equipment (such as a camera and the like) or back-end equipment (such as a background server and the like). If the image processing method is applied to the front-end equipment, the front-end equipment collects two or more frames of images with overlapped areas and splices the two or more frames of images into a frame of panoramic image or high-resolution image. If the image processing method is applied to the back-end equipment, the front-end equipment collects two or more frames of images with overlapped areas, inputs the two or more frames of images to the back-end equipment, and splices the two or more frames of images into a frame of panoramic image or high-resolution image by the back-end equipment.
If the two frames of images are spliced into one frame of panoramic image or high-resolution image, the two frames of images are recorded as a first original image and a second original image, and the two frames of images are spliced into one frame of panoramic image or high-resolution image by adopting the image processing method of the embodiment. If the multiple frames of images are spliced into one frame of panoramic image or high-resolution image, taking three frames of images (such as image a1, image a2, and image a3) as an example, image a1 and image a2 are recorded as a first original image and a second original image, and image a1 and image a2 are spliced into one frame of image a4 by using the image processing method of the present embodiment. The image a2 and the image a3 are recorded as a first original image and a second original image, and the image a2 and the image a3 are spliced into a frame image a5 by the image processing method of the embodiment. The image a4 and the image a5 are recorded as a first original image and a second original image, and the image a4 and the image a5 are spliced into one frame of panoramic image or high-resolution image by the image processing method of the embodiment.
For convenience of description, in the following embodiments, two frames of images to be stitched are taken as an example for explanation, and the two frames of images are referred to as a first original image and a second original image.
Referring to fig. 1, a flow chart of an image processing method is schematically shown, and the method may include:
step 101, obtaining a stitched image based on a first original image and a second original image, where the stitched image includes a stitching line, one side of the stitching line is a first stitched region determined based on the first original image, and the other side of the stitching line is a second stitched region determined based on the second original image.
For example, a first original image and a second original image may be acquired, where the first original image and the second original image are images acquired by different cameras or the same camera, the acquisition time of the first original image and the acquisition time of the second original image are the same, and the first original image and the second original image are images of the same scene.
Then, the first original image and the second original image are subjected to image registration, in the image registration process, a transformation relation between the first original image and the second original image can be determined, a mathematical model of image coordinate transformation is established based on the transformation relation, the first original image and the second original image are transformed to the same coordinate system by solving parameters of the mathematical model, and the image registration process is not limited.
And then, carrying out image fusion on the first original image and the second original image, and splicing the first original image and the second original image which are transformed to the same coordinate system into a frame of image in the image fusion process without limitation on the image fusion process. For the convenience of distinguishing, the spliced image is recorded as a spliced image, and the spliced image may include a splicing line, a first splicing region and a second splicing region.
If the first original image and the second original image need to be spliced left and right, in the spliced image, the splicing seam line can be a splicing seam vertical line, the left side of the splicing seam line is a first splicing area, and the right side of the splicing seam line is a second splicing area. Or, if the first original image and the second original image need to be stitched up and down, in the stitched image, the patchwork line may be a patchwork transverse line, an upper side of the patchwork line is a first stitching region, and a lower side of the patchwork line is a second stitching region. For convenience of description, in the following embodiments, left-right stitching of the first original image and the second original image is taken as an example.
In one possible embodiment, the first original image and the second original image may have an overlapping region. In that case, a boundary line in the overlapping region of the stitched image is taken as the stitch line; for example, when the stitch line is a vertical line, a vertical boundary line in the overlapping region of the stitched image is taken as the stitch line, that is, the height of the stitch line is the same as the height of the image, and each row of the stitch line has one pixel point. Based on this, the first splicing region is the first original image and the second splicing region is a part of the second original image; or the first splicing region is a part of the first original image and the second splicing region is the second original image; or the first splicing region is a part of the first original image and the second splicing region is a part of the second original image.
For example, referring to fig. 2A, a region b1 in the first original image and a region b2 in the second original image are overlapping regions, that is, a region b1 and a region b2 are pictures of the same physical space, in this application scenario, in a stitched image, a region b1 and a region b2 may be overlapped, and a region where the two are overlapped is an overlapping region, and a boundary line located in the overlapping region may be used as a stitched line.
As shown in fig. 2B or fig. 2C: in fig. 2B, the left side of the stitching line is the first stitching region, which is a part of the first original image, i.e., the overlapping region in the first original image (region b1) has been discarded when region b1 and region b2 overlap; the right side of the stitching line is the second stitching region, which is the second original image, i.e., the overlapping region in the second original image (region b2) is retained when region b1 and region b2 overlap. In fig. 2B, the seam line is located in the leftmost column of region b2; of course, the seam line may be in any column of region b2, and the position of the seam line is not limited.
In fig. 2C, the left side of the stitching line is the first stitching region, which is the first original image, i.e., the overlapping region in the first original image (region b1) is preserved; the right side of the stitching line is the second stitching region, which is a part of the second original image, i.e., the overlapping region in the second original image (region b2) has been discarded. In fig. 2C, the seam line is located in the rightmost column of region b1; of course, the seam line may be in any column of region b1, and the position of the seam line is not limited.
In another possible embodiment, the first original image and the second original image may not have an overlapping region, and based on this, a boundary line in the middle region in the stitched image is taken as a stitch line, for example, when the stitch line is a stitch vertical line, a boundary vertical line in the middle region in the stitched image is taken as a stitch line, that is, the height of the stitch line is the same as the height of the image, and each line of the stitch line has one pixel point. Based on this, the first splicing area is a first original image, and the second splicing area is a second original image.
The middle area of the stitched image may be a single line at the middle position or a plurality of lines at the middle position, and the width on the left side of the middle position in the stitched image matches the width on the right side of the middle position.
Referring to fig. 2D, the first original image and the second original image have no overlapping region, that is, no picture of the same physical space, and the stitched image may be as shown in fig. 2E. The left side of the seam line is the first stitching region, which is the first original image spliced against the seam line, i.e., the last column of the first original image abuts the seam line. The right side of the seam line is the second stitching region, which is the second original image spliced against the seam line, i.e., the first column of the second original image abuts the seam line. Fig. 2E shows the case where the seam line is located at the center of the middle region; the seam line may also be located elsewhere in the middle region, and the position of the seam line is not limited.
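As an informal sketch of the no-overlap case above (fig. 2D and fig. 2E), the following code places a one-pixel-wide seam column between two equal-height originals; the seam pixel values are only initialized here and would be filled in later as the description explains. All function and variable names are illustrative, not from the patent.

```python
import numpy as np

def stitch_no_overlap(left: np.ndarray, right: np.ndarray) -> tuple:
    """Left-right stitch two equal-height grayscale images with a
    one-pixel-wide seam column between them (cf. fig. 2E).
    Returns the stitched image and the seam column index x0."""
    assert left.shape[0] == right.shape[0], "images must have equal height"
    h = left.shape[0]
    seam = np.zeros((h, 1), dtype=left.dtype)  # placeholder seam column
    stitched = np.hstack([left, seam, right])
    x0 = left.shape[1]  # seam column index: first column after the left image
    return stitched, x0

left = np.full((4, 3), 100, dtype=np.uint8)   # stands in for the first original image
right = np.full((4, 2), 200, dtype=np.uint8)  # stands in for the second original image
img, x0 = stitch_no_overlap(left, right)
```

The pixel values on both sides of the seam stay unchanged, matching the description that only the seam column itself is synthesized.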
In the above embodiment, for the stitched image, as shown in fig. 2B, fig. 2C, and fig. 2E, for the first stitched region on the left side of the patched line, the pixel value of each pixel point in the first stitched region may be retained, that is, the pixel value in the region on the left side of the patched line remains unchanged. For the second splicing area on the right side of the splicing line, the pixel value of each pixel point in the second splicing area can be reserved, namely the pixel value of the area on the right side of the splicing line is kept unchanged.
For the pixel value of each pixel point in the patchwork line (namely, the pixel value of each pixel point in the column of the patchwork line, only one pixel point in each row), the following mode can be adopted: and determining the pixel value of each pixel point in the splicing line based on the pixel values of the N pixel points adjacent to the splicing line in the first splicing region and the pixel values of the N pixel points adjacent to the splicing line in the second splicing region, and determining the splicing line based on the pixel value of each pixel point in the splicing line, so that the splicing line in the spliced image can be obtained.
For example, assuming that the stitching line includes pixel c1 to pixel c8 (i.e., the stitching line has 8 rows of pixels, one pixel per row), then for pixel c1 in the stitching line, the pixel value of pixel c1 may be determined based on the pixel values of the N pixels on the left side of and adjacent to pixel c1 (i.e., the N pixels in the same row as pixel c1 at the smallest distances from pixel c1) and the pixel values of the N pixels on the right side of and adjacent to pixel c1. For example, the average of the pixel values of the 2N pixels may be used as the pixel value of pixel c1; alternatively, the median, the maximum, or the minimum of the pixel values of the 2N pixels may be used, and the determination method is not limited. Similarly, the pixel values of c2 to c8 can be obtained, and the pixel values of pixel c1 to pixel c8 may then be combined into the stitching line.
In the above embodiment, the value of N may be configured empirically, such as 3, 5, etc., which is not limited herein.
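The neighbor-based seam rule just described can be sketched as follows. This is an illustrative example (not code from the patent) assuming a grayscale image, a vertical seam at column x0, and the mean variant; np.median, np.max, or np.min would be drop-in replacements for the reduction.

```python
import numpy as np

def fill_seam(img: np.ndarray, x0: int, n: int = 3, reduce=np.mean) -> np.ndarray:
    """For each row, set the seam pixel at column x0 from the N pixels
    to its left and the N pixels to its right (2N values per row)."""
    out = img.copy().astype(np.float64)
    left = out[:, x0 - n:x0]            # N neighbors in the first splicing region
    right = out[:, x0 + 1:x0 + 1 + n]   # N neighbors in the second splicing region
    out[:, x0] = reduce(np.hstack([left, right]), axis=1)
    return out

img = np.zeros((2, 7))
img[:, :3] = 10.0   # left region pixel values
img[:, 4:] = 30.0   # right region pixel values
filled = fill_seam(img, x0=3, n=3)  # seam becomes the mean of 10s and 30s
```

With the mean variant the seam value lands between the two regions, which is what gives the seam column its transitional role.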
In the above embodiment, the first original image and the second original image may be picture-like images, such as images acquired by an intelligent terminal, map images, and the like, and the process of splicing these images may be offline processing, which has low requirements on real-time performance, and therefore, the first original image and the second original image may be spliced in an offline manner. Or, the first original image and the second original image may also be video images, such as each frame of image in a video stream, and the process of splicing these images may be online processing, which has a high requirement on real-time performance, and therefore, the first original image and the second original image may be spliced in an online manner.
Step 102, for a first splicing area (i.e. a splicing area located on the left side of a splicing line) in a spliced image, determining a first attenuation area from the first splicing area, and performing attenuation processing on the first attenuation area based on the size of the first attenuation area, a configured attenuation speed value corresponding to the first attenuation area, and a distance between a pixel point in the first attenuation area and the splicing line to obtain the first splicing area after the attenuation processing.
Illustratively, for a first splicing region in the spliced image, M pixel points adjacent to the splicing line in the first splicing region may be used as a first attenuation region, and then the first attenuation region is determined.
For example, assume that the stitching line includes pixel c1 to pixel c8. For pixel c1 in the stitching line, the M pixels adjacent to pixel c1 in the first splicing region (i.e., the M pixels in the same row as pixel c1 at the smallest distances from pixel c1) belong to the first attenuation region. For pixel c2 in the stitching line, the M pixels adjacent to pixel c2 in the first splicing region belong to the first attenuation region, and so on. Referring to fig. 3, a schematic view of the first attenuation region is shown.
As can be seen from fig. 3, if M is 6, then pixels d11, d12, d13, d14, d15, and d16, which are adjacent to pixel c1 in the first splicing region, belong to the first attenuation region; pixels d21 to d26, which are adjacent to pixel c2, likewise belong to the first attenuation region; and so on, finally yielding the first attenuation region in the first splicing region.
For example, the size of the first attenuation region may include a width value (i.e., the number of horizontal pixels) of the first attenuation region and/or a height value (i.e., the number of vertical pixels) of the first attenuation region. The size of the first attenuation region is a width value of the first attenuation region when the first attenuation region is located on the left or right side of the stitch line, and the size of the first attenuation region is a height value of the first attenuation region when the first attenuation region is located on the upper or lower side of the stitch line. In this embodiment, the width of the first attenuation region is taken as an example.
For example, when M pixel points adjacent to the seam line in the first splicing region are taken as the first attenuation region, the width value of the first attenuation region is M, which indicates that the number of the horizontal pixel points is M.
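Under the same assumptions (vertical seam at column x0, first splicing region on its left), selecting the first attenuation region of width M reduces to a column slice. A minimal sketch, with illustrative names:

```python
import numpy as np

def first_attenuation_region(img: np.ndarray, x0: int, m: int) -> np.ndarray:
    """Return the M columns immediately left of the seam column x0,
    i.e. the first attenuation region of width value M (cf. fig. 3)."""
    assert 0 < m <= x0, "M must fit inside the first splicing region"
    return img[:, x0 - m:x0]

img = np.arange(8 * 10).reshape(8, 10)  # 8 rows; suppose the seam is column 7
region = first_attenuation_region(img, x0=7, m=6)  # columns 1..6 of each row
```

The second attenuation region would be the mirror-image slice on the right side of the seam.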
The attenuation speed value corresponding to the first attenuation region represents a change speed of the attenuation degree value, that is, the change speed of the attenuation degree value is faster as the attenuation speed value is larger, and the change speed of the attenuation degree value is slower as the attenuation speed value is smaller. The attenuation speed value corresponding to the first attenuation region is a configured empirical value, that is, the attenuation speed value corresponding to the first attenuation region may be configured empirically, which is not limited.
The distance between the pixel point in the first attenuation region and the stitching line represents the number of pixel points spaced between the pixel point and the stitching line. For example, referring to fig. 3, the distance between the pixel d11 and the pixel c1 is 1 pixel, and thus the distance between d11 and the patchwork line is 1; the distance between the pixel point d12 and the pixel point c1 is 2 pixel points, so the distance between d12 and the patchwork line is 2, and so on.
In a possible embodiment, after determining a first attenuation region from the first splicing region, the first attenuation region may be attenuated based on the size of the first attenuation region, the attenuation speed value corresponding to the first attenuation region, and the distance between the pixel point and the splicing line in the first attenuation region, so as to obtain the attenuated first splicing region, and the process may include the following steps:
step S11, for each pixel point in the first attenuation region, determining an attenuation degree value corresponding to the pixel point based on the size of the first attenuation region, the attenuation speed value corresponding to the first attenuation region, and the distance between the pixel point and the stitching line. For example, the attenuation level value may be proportional to the size, the attenuation level value may be inversely proportional to the attenuation speed value, and the attenuation level value may be inversely proportional to the distance.
For example, for a pixel (x, y), the attenuation degree value corresponding to the pixel (x, y) can be determined by using formula (1) and formula (2), where x represents the abscissa of the pixel, and y represents the ordinate of the pixel.
Tx,yAbs (x-x 0)/max _ off equation (1)
alphax,y=1–Tx,yFormula ^ k (2)
In formula (1) and formula (2), a pixel point (x, y) represents any one pixel point in the first attenuation region, x0 represents the abscissa of the pixel point in the stitching line, and the abscissa x0 corresponds to the same row as the abscissa x, that is, x represents the x-th pixel point in the y-th row, and x0 represents the x0 pixel point in the y-th row. Obviously, abs (x-x 0) represents the distance between the pixel point (x, y) and the patchline, and abs represents the absolute value.
Rule with max _ off representing the first attenuation regionCun, the width of the first attenuation region. k represents the value of the attenuation speed corresponding to the first attenuation region, and is a configured empirical value, such as 2, 3, 4, etc. alphax,yAnd representing the attenuation degree value corresponding to the pixel point (x, y). T isx,yK represents Tx,yTo the k power of, Tx,yIs an intermediate variable that represents abs (x-x 0)/max _ off, i.e., is determined by abs (x-x 0) and max _ off.
To sum up, for each pixel point (x, y) in the first attenuation region, after knowing the size max _ off of the first attenuation region, the attenuation speed value k corresponding to the first attenuation region, and the distance abs (x-x 0) between the pixel point (x, y) and the patchwork line, the attenuation degree value alpha corresponding to the pixel point (x, y) can be determined based on the formula (1) and the formula (2)x,y. As can be seen from the formulas (1) and (2), the attenuation degree value alphax,yProportional to the size max _ off, the greater max _ off, alphax,yThe larger the max _ off, the smaller alphax,yThe smaller. Attenuation degree value alphax,yInversely proportional to the attenuation velocity value k, the larger k, alphax,yThe smaller the k, the alphax,yThe larger. Attenuation degree value alphax,yInversely proportional to the distance abs (x-x 0), the greater abs (x-x 0) the alphax,yThe smaller the abs (x-x 0), the smaller the alphax,yThe larger.
As can be seen from formula (2), the attenuation degree value alpha_{x,y} is inversely proportional to the attenuation speed value k: the larger the attenuation speed value k is, the faster the change speed of the attenuation degree value alpha_{x,y} is; the smaller the attenuation speed value k is, the slower the change speed of the attenuation degree value alpha_{x,y} is. For example, alpha_{x,y} has a value range of 0 to 1, with a maximum value of 1 and a minimum value of 0, and alpha_{x,y} varies from 1 to 0. Obviously, when k is greater than 1, the larger k is, the faster the change speed of alpha_{x,y} is, and the smaller k is, the slower the change speed of alpha_{x,y} is.
Of course, the formula (1) and the formula (2) are only examples of determining the attenuation degree value, and the determination manner of the attenuation degree value is not limited as long as the attenuation degree value can be determined based on the size of the first attenuation region, the attenuation speed value corresponding to the first attenuation region, and the distance between the pixel point and the stitching line.
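Although formulas (1) and (2) themselves are not reproduced here, the computation of the attenuation degree value can be sketched with a hypothetical closed form that satisfies all three stated proportionalities, namely T_{x,y} = abs(x - x0)/max_off and alpha_{x,y} = (1 - T_{x,y})^k. This exact form is an assumption for illustration, not necessarily the formula of the embodiment:

```python
def attenuation_degree(x, x0, max_off, k):
    """Attenuation degree value alpha_{x,y} for a pixel at abscissa x.

    Hypothetical form consistent with the description: the intermediate
    variable T_{x,y} = abs(x - x0) / max_off, and alpha = (1 - T)**k, so
    alpha falls from 1 at the stitching line to 0 at the region boundary.
    """
    t = min(abs(x - x0) / max_off, 1.0)  # intermediate variable T_{x,y}
    return (1.0 - t) ** k
```

Under this form, a larger max_off stretches the same distance over a wider region (larger alpha), while a larger k makes alpha fall off faster, matching the change-speed behavior attributed to formula (2).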
Step S12, determining, for each pixel point in the first attenuation region, a target attenuation factor corresponding to the pixel point based on the attenuation degree value corresponding to the pixel point. For example, a maximum attenuation factor corresponding to the pixel point is obtained, and a target attenuation factor corresponding to the pixel point is determined based on the attenuation degree value and the maximum attenuation factor, that is, a target attenuation factor corresponding to each pixel point in the first attenuation region is obtained.
For example, obtaining the maximum attenuation factor corresponding to the pixel point may include, but is not limited to:
Acquire a configured maximum attenuation factor corresponding to the pixel point; for example, pre-configure an attenuation factor and take that attenuation factor as the maximum attenuation factor corresponding to each pixel point in the first attenuation region.
Or determining an initial scale factor corresponding to the splicing line based on the first splicing area and the second splicing area, and filtering the initial scale factor to obtain a target scale factor corresponding to the splicing line. A maximum attenuation factor corresponding to the pixel point may then be determined based on the target scale factor.
In a possible implementation manner, in step S12, for each pixel in the first attenuation region, the following steps may be adopted to determine a target attenuation factor corresponding to the pixel:
Step S121, determine an initial scale factor corresponding to the stitching line based on the first splicing region and the second splicing region; for example, determine the initial scale factor R_{x0,y} corresponding to the stitching line using formula (3):
R_{x0,y} = (P_{x0+1,y} + P_{x0+2,y} + … + P_{x0+n,y} + 1) / (P_{x0-1,y} + P_{x0-2,y} + … + P_{x0-n,y} + 1)    formula (3)
For example, a pixel point in the stitching line is denoted as (x0, y), and the initial scale factor corresponding to the pixel point (x0, y) is denoted as R_{x0,y}. x0 is the abscissa of the pixel points in the stitching line; the abscissas of all pixel points in the stitching line are the same, namely x0. y is the ordinate of a pixel point in the stitching line; the ordinates of the pixel points in the stitching line are different, being 1, 2, … in sequence. Referring to FIG. 3, the stitching line includes pixel points c1 to c8, where c1 is (x0, 1), c2 is (x0, 2), …, and c8 is (x0, 8).
In formula (3), the pixel point (x0+1, y) represents the first pixel point on the right side of the pixel point (x0, y), and P_{x0+1,y} represents the pixel value of the pixel point (x0+1, y); the pixel point (x0+2, y) represents the second pixel point on the right side of the pixel point (x0, y), and P_{x0+2,y} represents the pixel value of the pixel point (x0+2, y); …; the pixel point (x0+n, y) represents the n-th pixel point on the right side of the pixel point (x0, y), and P_{x0+n,y} represents the pixel value of the pixel point (x0+n, y). Obviously, the pixel points on the right side of the pixel point (x0, y) are all pixel points in the second splicing region.
In formula (3), the pixel point (x0-1, y) represents the first pixel point on the left side of the pixel point (x0, y), and P_{x0-1,y} represents the pixel value of the pixel point (x0-1, y); the pixel point (x0-2, y) represents the second pixel point on the left side of the pixel point (x0, y), and P_{x0-2,y} represents the pixel value of the pixel point (x0-2, y); …; the pixel point (x0-n, y) represents the n-th pixel point on the left side of the pixel point (x0, y), and P_{x0-n,y} represents the pixel value of the pixel point (x0-n, y). Obviously, the pixel points on the left side of the pixel point (x0, y) are all pixel points in the first splicing region.
In summary, the initial scale factor corresponding to the pixel point (x0, y) in the stitching line can be determined based on the pixel values of the pixel points in the first splicing region and the pixel values of the pixel points in the second splicing region. Of course, formula (3) is merely an example, and the determination manner is not limited as long as the initial scale factor can be determined.
Referring to fig. 3, if y is 1, the initial scale factor corresponding to the pixel point c1 is determined by using formula (3), if y is 2, the initial scale factor corresponding to the pixel point c2 is determined by using formula (3), and so on, the initial scale factor corresponding to each pixel point in the stitching line can be determined by using formula (3).
In formula (3), for any pixel point in the stitching line (i.e., any ordinate y), n pixel points on the left side of the stitching line and n pixel points on the right side of the stitching line are selected to determine the initial scale factor R_{x0,y}, i.e., the initial scale factor of row y. n may be less than x0, where x0 is the abscissa of the pixel points in the stitching line. The 1 added in formula (3) prevents division by zero and prevents the initial scale factor from being 0.
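Formula (3) can be sketched for a single image row as follows; `row` is a hypothetical 1-D sequence of pixel values, and it is assumed that n pixels are available on each side of the stitching line:

```python
def initial_scale_factor(row, x0, n):
    """Initial scale factor R_{x0,y} of formula (3) for one image row.

    Ratio of the summed pixel values of the n pixels to the right of the
    stitching line (second splicing region) to those of the n pixels to
    the left (first splicing region); each sum is incremented by 1 to
    avoid division by zero and a zero scale factor.
    """
    right = sum(row[x0 + 1 : x0 + n + 1]) + 1  # pixels x0+1 .. x0+n
    left = sum(row[x0 - n : x0]) + 1           # pixels x0-n .. x0-1
    return right / left
```

A ratio above 1 means the second splicing region is brighter than the first in this row, so the attenuation factors derived from it will pull the two sides toward each other.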
Step S122, filter the initial scale factor to obtain a target scale factor corresponding to the splicing line.
For example, for a pixel point (x0, y) in the stitching line, after the initial scale factor R_{x0,y} corresponding to the pixel point (x0, y) is obtained, the initial scale factor R_{x0,y} may be determined as the target scale factor corresponding to the pixel point (x0, y); or, the initial scale factor R_{x0,y} may be filtered to obtain the target scale factor corresponding to the pixel point (x0, y). For example, based on the initial scale factors corresponding to the m pixel points above the pixel point (x0, y) (all located on the stitching line) and the initial scale factors corresponding to the m pixel points below the pixel point (x0, y) (all located on the stitching line), the initial scale factor R_{x0,y} corresponding to the pixel point (x0, y) is filtered to obtain the target scale factor T_{x0,y} corresponding to the pixel point (x0, y). For example, the average value of the initial scale factors corresponding to the upper m pixel points, the initial scale factors corresponding to the lower m pixel points, and the initial scale factor R_{x0,y} may be determined as the target scale factor T_{x0,y} corresponding to the pixel point (x0, y).
m is a smoothing radius and can be configured empirically, such as 1, 2, 3, etc.; in the following, m is taken as 1.
Referring to FIG. 3, the stitching line may include pixel points c1 to c8, where c1 is (x0, 1), c2 is (x0, 2), …, and c8 is (x0, 8). When the initial scale factor R_{x0,1} corresponding to the pixel point c1 is filtered, there is no pixel point above the pixel point c1, and the m (i.e., 1) pixel points below the pixel point c1 are the pixel point c2; therefore, the average value of the initial scale factor R_{x0,1} and the initial scale factor R_{x0,2} may be determined, and this average value is the target scale factor T_{x0,1} corresponding to the pixel point (x0, 1).
Similarly, when the initial scale factor R_{x0,2} corresponding to the pixel point c2 is filtered, the m pixel points above the pixel point c2 are the pixel point c1, and the m pixel points below the pixel point c2 are the pixel point c3; therefore, the average value of the initial scale factor R_{x0,1}, the initial scale factor R_{x0,2}, and the initial scale factor R_{x0,3} may be determined, and this average value is the target scale factor T_{x0,2} corresponding to the pixel point (x0, 2).
By analogy, a target scale factor corresponding to each pixel point in the splicing line can be obtained.
To sum up, for the pixel point (x0, y) in the stitching line, the initial scale factor R_{x0,y} corresponding to the pixel point (x0, y) may be smoothly filtered to obtain the target scale factor T_{x0,y} corresponding to the pixel point (x0, y).
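The smoothing of step S122 over the per-row initial scale factors can be sketched as a mean filter of radius m, with the window truncated at the ends of the stitching line as in the pixel point c1 example above:

```python
def smooth_scale_factors(initial, m=1):
    """Filter per-row initial scale factors R_{x0,y} into target scale
    factors T_{x0,y}.

    Each row's value becomes the average of itself and up to m neighbours
    above and below; at the first and last rows the window is truncated,
    so fewer values are averaged (as described for pixel point c1).
    """
    target = []
    for i in range(len(initial)):
        window = initial[max(0, i - m) : i + m + 1]
        target.append(sum(window) / len(window))
    return target
```

The smoothing suppresses row-to-row jitter in the scale factors, which would otherwise show up as horizontal streaks near the stitching line.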
Step S123, for each pixel point (x, y) in the first attenuation region, determine a maximum attenuation factor corresponding to the pixel point (x, y) based on the target scale factor. For example, based on the ordinate of the pixel point (x, y), the pixel point (x0, y) having the same ordinate is determined from the stitching line, and the maximum attenuation factor corresponding to the pixel point (x, y) is determined based on the target scale factor T_{x0,y} corresponding to the pixel point (x0, y).
For example, for each pixel point (x, 1) in the first attenuation region, that is, all pixel points in the first attenuation region where y is 1, the maximum attenuation factor corresponding to the pixel point (x, 1) is determined based on the target scale factor T_{x0,1} corresponding to the pixel point (x0, 1). For each pixel point (x, 2) in the first attenuation region, the maximum attenuation factor corresponding to the pixel point (x, 2) is determined based on the target scale factor T_{x0,2} corresponding to the pixel point (x0, 2), and so on.
Illustratively, determining the maximum attenuation factor corresponding to the pixel point (x, y) based on the target scale factor T_{x0,y} corresponding to the pixel point (x0, y) may include, but is not limited to, the following manner: determine the maximum attenuation factor LDiv_{x,y} corresponding to the pixel point (x, y) using the following formula: LDiv_{x,y} = (1 + T_{x0,y}) / 2. Of course, the above formula is merely an example, and the determination manner is not limited. It should be noted that, for all pixel points in the first attenuation region where y is 1, the maximum attenuation factors are the same, namely LDiv_{x,1}; for all pixel points in the first attenuation region where y is 2, the maximum attenuation factors are the same, namely LDiv_{x,2}; and so on.
Step S124, for each pixel point (x, y) in the first attenuation region, determine a target attenuation factor corresponding to the pixel point (x, y) based on the attenuation degree value corresponding to the pixel point (x, y) and the maximum attenuation factor corresponding to the pixel point (x, y). For example, the target attenuation factor LS_{x,y} may be determined using formula (4):
LS_{x,y} = 1 - alpha_{x,y} + LDiv_{x,y} * alpha_{x,y}    formula (4)
alpha_{x,y} represents the attenuation degree value corresponding to the pixel point (x, y), and its determination manner is described in step S11. LDiv_{x,y} represents the maximum attenuation factor corresponding to the pixel point (x, y), and its determination manner is described in steps S121 to S123, which is not repeated herein. LS_{x,y} represents the target attenuation factor corresponding to the pixel point (x, y).
To this end, in step S12, a target attenuation factor corresponding to each pixel point in the first attenuation region is obtained.
Step S13, for each pixel point in the first attenuation region, determining a target pixel value of the pixel point based on a target attenuation factor corresponding to the pixel point and an original pixel value of the pixel point. For example, the target pixel value of the pixel point may be determined based on a product of the target attenuation factor corresponding to the pixel point and the original pixel value of the pixel point, and for example, the target pixel value may be determined by the following formula (5).
LT_{x,y} = LP_{x,y} * LS_{x,y}    formula (5)
In formula (5), LP_{x,y} represents the original pixel value of the pixel point (x, y), LS_{x,y} represents the target attenuation factor corresponding to the pixel point (x, y), and LT_{x,y} represents the target pixel value of the pixel point (x, y).
Step S14, a first splicing region after attenuation processing is generated based on the target pixel value of each pixel point in the first attenuation region, that is, the first splicing region in the spliced image is the first splicing region after attenuation processing. The first splicing region after the attenuation processing may include a first attenuation region after the attenuation processing and a non-attenuation region, and the first attenuation region after the attenuation processing is composed of a target pixel value of each pixel point in the first attenuation region. And aiming at the unattenuated region, the unattenuated region consists of the original pixel value of each pixel point in the unattenuated region, namely the pixel value of each pixel point in the unattenuated region is kept unchanged.
At this point, attenuation processing is performed on the first mosaic area in the mosaic image, that is, the mosaic image includes the first mosaic area after attenuation processing, but not the first mosaic area before attenuation processing.
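Steps S11 to S14 for the first attenuation region can be sketched end to end as follows. The attenuation degree formula is again the hypothetical (1 - d/max_off)**k form consistent with the stated proportionalities, and `t_scale` is assumed to hold the per-row target scale factors T_{x0,y} from step S122:

```python
import numpy as np

def attenuate_first_region(img, x0, max_off, k, t_scale):
    """Attenuate the max_off pixels to the left of the stitching line x0.

    For each pixel: alpha from the (assumed) degree formula, maximum
    attenuation factor LDiv = (1 + T_{x0,y}) / 2, target attenuation
    factor LS = 1 - alpha + LDiv * alpha (formula (4)), and target pixel
    value LT = LP * LS (formula (5)). Pixels outside the attenuation
    region keep their original values (the non-attenuation region).
    """
    out = img.astype(np.float64).copy()
    for y in range(img.shape[0]):
        ldiv = (1.0 + t_scale[y]) / 2.0          # maximum attenuation factor
        for x in range(x0 - max_off, x0):        # first attenuation region
            alpha = (1.0 - abs(x - x0) / max_off) ** k
            out[y, x] = img[y, x] * (1.0 - alpha + ldiv * alpha)
    return out
```

Because alpha is 1 at the stitching line and 0 at the region boundary, LS runs from LDiv at the line back to 1 at the boundary, which is the smooth transition the embodiment describes.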
Step 103, determining a second attenuation region from the second splicing region for a second splicing region (i.e. the splicing region located on the right side of the splicing line) in the spliced image, and performing attenuation processing on the second attenuation region based on the size of the second attenuation region, the configured attenuation speed value corresponding to the second attenuation region, and the distance between the pixel point in the second attenuation region and the splicing line to obtain the attenuated second splicing region.
Illustratively, for the second splicing region in the spliced image, M pixel points adjacent to the splicing line in the second splicing region may be used as the second attenuation region, thereby determining the second attenuation region.
For example, the size of the second attenuation region may include a width value of the second attenuation region and/or a height value of the second attenuation region. This dimension is a width value when the second attenuation region is located on the left or right side of the stitch line and a height value when the second attenuation region is located on the upper or lower side of the stitch line.
The attenuation speed value corresponding to the second attenuation area represents the change speed of the attenuation degree value, and the attenuation speed value corresponding to the second attenuation area is a configured empirical value. And the distance between the pixel point in the second attenuation region and the splicing line represents the number of the spaced pixel points between the pixel point and the splicing line.
For example, the second attenuation region may be attenuated by the following steps:
step S21, for each pixel point in the second attenuation region, determining an attenuation degree value corresponding to the pixel point based on the size of the second attenuation region, the attenuation speed value corresponding to the second attenuation region, and the distance between the pixel point and the stitching line. For example, the attenuation level value may be proportional to the size, the attenuation level value may be inversely proportional to the attenuation speed value, and the attenuation level value may be inversely proportional to the distance.
The implementation process of step S21 is similar to that of step S11, and is not described herein again.
Step S22, determining, for each pixel point in the second attenuation region, a target attenuation factor corresponding to the pixel point based on the attenuation degree value corresponding to the pixel point. For example, a maximum attenuation factor corresponding to the pixel point is obtained, and a target attenuation factor corresponding to the pixel point is determined based on the attenuation degree value and the maximum attenuation factor, so as to obtain a target attenuation factor corresponding to each pixel point in the second attenuation region.
The process of obtaining the maximum attenuation factor may refer to step S12, and is not described herein again.
In a possible implementation manner, in step S22, for each pixel point in the second attenuation region, the following steps may be adopted to determine a target attenuation factor corresponding to the pixel point:
step S221, determining an initial scale factor corresponding to the splicing line based on the first splicing area and the second splicing area, wherein an implementation process of step S221 is the same as that of step S121, and is not described herein again.
Step S222, filter the initial scale factor to obtain a target scale factor corresponding to the splicing line.
The implementation process of step S222 is the same as the implementation process of step S122, and is not described herein again.
Step S223, for each pixel point (x, y) in the second attenuation region, determine a maximum attenuation factor corresponding to the pixel point (x, y) based on the target scale factor. For example, the maximum attenuation factor corresponding to the pixel point (x, y) is determined based on the target scale factor T_{x0,y} corresponding to the pixel point (x0, y) in the stitching line; determine the maximum attenuation factor RDiv_{x,y} corresponding to the pixel point (x, y) using the following formula: RDiv_{x,y} = (1 + 1/T_{x0,y}) / 2.
The implementation process of step S223 is similar to that of step S123, except that the maximum attenuation factor LDiv_{x,y} corresponding to the pixel point (x, y) in the first attenuation region is replaced by the maximum attenuation factor RDiv_{x,y} corresponding to the pixel point (x, y) in the second attenuation region, and is not repeated herein.
Step S224, for each pixel point (x, y) in the second attenuation region, determine a target attenuation factor corresponding to the pixel point (x, y) based on the attenuation degree value corresponding to the pixel point (x, y) and the maximum attenuation factor corresponding to the pixel point (x, y). For example, determine the target attenuation factor RS_{x,y} using the following formula: RS_{x,y} = 1 - alpha_{x,y} + RDiv_{x,y} * alpha_{x,y}. The implementation process of step S224 is similar to that of step S124, except that the target attenuation factor LS_{x,y} corresponding to the pixel point (x, y) in the first attenuation region is replaced by the target attenuation factor RS_{x,y} corresponding to the pixel point (x, y) in the second attenuation region, and is not repeated herein.
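The second-region factor can be sketched by mirroring formula (4) with RDiv in place of LDiv. Note the symmetry: LDiv averages 1 with T_{x0,y}, while RDiv averages 1 with its reciprocal, so the two sides of the stitching line are pulled toward each other:

```python
def second_region_attenuation_factor(alpha, t_scale):
    """Target attenuation factor RS_{x,y} for the second attenuation
    region: RDiv_{x,y} = (1 + 1 / T_{x0,y}) / 2 and
    RS_{x,y} = 1 - alpha_{x,y} + RDiv_{x,y} * alpha_{x,y}.

    At alpha = 0 (far from the stitching line) RS is 1, so the pixel is
    unchanged; at alpha = 1 (on the line) RS equals RDiv.
    """
    rdiv = (1.0 + 1.0 / t_scale) / 2.0
    return 1.0 - alpha + rdiv * alpha
```

When T_{x0,y} = 1 (the two sides already match), RDiv and LDiv are both 1 and neither region is altered.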
Step S23, for each pixel point in the second attenuation region, determining a target pixel value of the pixel point based on the target attenuation factor corresponding to the pixel point and the original pixel value of the pixel point.
For example, the target pixel value may be determined by the following formula: RT_{x,y} = RP_{x,y} * RS_{x,y}, where RP_{x,y} represents the original pixel value of the pixel point (x, y) and RT_{x,y} represents the target pixel value of the pixel point (x, y).
Step S24, a second splicing region after the attenuation processing is generated based on the target pixel value of each pixel point in the second attenuation region, that is, the second splicing region in the spliced image is the second splicing region after the attenuation processing. The second splicing region after the attenuation processing may include a second attenuation region after the attenuation processing and a non-attenuation region, and the second attenuation region after the attenuation processing is composed of a target pixel value of each pixel point in the second attenuation region. And aiming at the unattenuated region, the unattenuated region consists of the original pixel value of each pixel point in the unattenuated region, namely the pixel value of each pixel point in the unattenuated region is kept unchanged.
At this point, attenuation processing is performed on the second mosaic area in the mosaic image, that is, the mosaic image includes the second mosaic area after attenuation processing, but not the second mosaic area before attenuation processing.
And 104, generating a target image based on the attenuated first splicing area and the attenuated second splicing area, wherein the target image comprises a splicing line, one side of the splicing line is the attenuated first splicing area, and the other side of the splicing line is the attenuated second splicing area.
For example, after the stitched image is obtained, the first stitched region in the stitched image may be attenuated to obtain the attenuated first stitched region, and the second stitched region in the stitched image may be attenuated to obtain the attenuated second stitched region. At this point, the first splicing area after the attenuation processing, the splicing lines in the spliced image, and the second splicing area after the attenuation processing may be combined into one frame of target image, where the target image is a panoramic image or a high-resolution image (i.e., an ultra-wide view angle image).
For example, the execution sequence is only an example given for convenience of description, and in practical applications, the execution sequence between the steps may also be changed, and the execution sequence is not limited. Moreover, in other embodiments, the steps of the respective methods do not have to be performed in the order shown and described herein, and the methods may include more or less steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
According to the technical scheme, after the attenuation processing is performed on the first attenuation area in the first splicing area and the second attenuation area in the second splicing area, the first splicing area after the attenuation processing and the second splicing area after the attenuation processing can be spliced into the target image (such as a panoramic image or a high-resolution image), so that the first original image and the second original image with larger differences (such as color differences and/or brightness differences) are spliced into the target image, the target image does not have larger splicing traces, the color differences and the brightness differences can be improved, the image splicing effect is better, and the smooth transition of the target image is realized.
In a possible embodiment, the first stitched region may include a first low-frequency component and a first high-frequency component, the second stitched region may include a second low-frequency component and a second high-frequency component, and the stitched image may include a first stitched image and a second stitched image, based on which, referring to fig. 4, which is another schematic flow chart of an image processing method, the method may include the following steps:
step 401, obtaining a first mosaic image and a second mosaic image, where one side of a mosaic line of the first mosaic image is a first low-frequency component, the other side of the mosaic line is a second low-frequency component, one side of a mosaic line of the second mosaic image is a first high-frequency component, and the other side of the mosaic line is a second high-frequency component.
Illustratively, the implementation of step 401 may be similar to the implementation of step 101.
For example, a first low-frequency component and a first high-frequency component in the first splicing region may be obtained, and a second low-frequency component and a second high-frequency component in the second splicing region may be obtained. For the sake of convenience of distinction, the low-frequency component in the first stitching region may be referred to as a first low-frequency component (which may also be referred to as a first base layer image or a first blurred layer image), and the high-frequency component in the first stitching region may be referred to as a first high-frequency component (which may also be referred to as a first detail layer image). And, the low-frequency component in the second splicing region may be denoted as a second low-frequency component (which may also be referred to as a second base layer image or a second blurred layer image), and the high-frequency component in the second splicing region may be denoted as a second high-frequency component (which may also be referred to as a second detail layer image).
For example, after the first splicing region is obtained, the first splicing region may be subjected to low-pass filtering processing to obtain a first low-frequency component in the first splicing region. Then, a first high frequency component in the first splicing region is determined based on the first splicing region and the first low frequency component, for example, a difference between the first splicing region and the first low frequency component is determined as the first high frequency component.
For example, after the second splicing region is obtained, low-pass filtering processing may be performed on the second splicing region to obtain a second low-frequency component in the second splicing region. Then, a second high frequency component in the second splicing region is determined based on the second splicing region and the second low frequency component, for example, a difference between the second splicing region and the second low frequency component is determined as the second high frequency component.
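The frequency division of both splicing regions can be sketched with a simple box blur as the low-pass filter; the specific filter is an assumption, since the text only requires low-pass filtering and defines the detail layer as the difference between the region and its base layer:

```python
import numpy as np

def split_frequencies(region, radius=2):
    """Split a splicing region into a low-frequency component (base or
    blurred layer) and a high-frequency component (detail layer).

    Low-pass: a horizontal box blur of the given radius, with the window
    truncated at the image borders. High-pass: region minus low, so the
    two components sum back to the original region exactly.
    """
    region = np.asarray(region, dtype=np.float64)
    low = np.empty_like(region)
    width = region.shape[1]
    for x in range(width):
        lo, hi = max(0, x - radius), min(width, x + radius + 1)
        low[:, x] = region[:, lo:hi].mean(axis=1)
    return low, region - low
```

Applying this to the first and second splicing regions yields the first/second low-frequency and high-frequency components that steps 402 to 405 below attenuate separately.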
In a possible implementation manner, since the high frequency is a main cause causing a ghost and the low frequency is a main cause causing a luminance difference and a chrominance difference, the first splicing region and the second splicing region may be respectively subjected to frequency division processing to obtain a first low-frequency component and a first high-frequency component in the first splicing region and a second low-frequency component and a second high-frequency component in the second splicing region, and then the first low-frequency component and the second low-frequency component are fused and the first high-frequency component and the second high-frequency component are fused, so that the low-frequency component and the high-frequency component are fused differently, and the image can be free from ghosting while the luminance difference and the chrominance difference are improved.
In one possible embodiment, the first low frequency component and the second low frequency component may be stitched into a first stitched image, the first stitched image comprising a first stitch line (i.e. a stitch line in the first stitched image) with the first low frequency component on one side and the second low frequency component on the other side.
And the first high frequency component and the second high frequency component may be stitched into a second stitched image, the second stitched image comprising a second stitch line (i.e. a stitch line in the second stitched image), one side of the second stitch line being the first high frequency component and the other side of the second stitch line being the second high frequency component.
Step 402, for a first low-frequency component in the first stitched image, determining a first attenuation region (hereinafter referred to as a first attenuation region E1) from the first low-frequency component, and performing attenuation processing on the first attenuation region E1 based on the size of the first attenuation region E1 (hereinafter referred to as a first size), the configured attenuation velocity value (hereinafter referred to as a first attenuation velocity value) corresponding to the first attenuation region E1, and the distance between the pixel point in the first attenuation region E1 and the first stitched line, to obtain the attenuated first low-frequency component.
Exemplarily, step 402 is similar to step 102, the first splicing region in step 102 is replaced by the first low-frequency component, and the first attenuation region is denoted as E1, which is not repeated herein.
Step 403, for the first high-frequency component in the second stitched image, determine a first attenuation region (hereinafter referred to as a first attenuation region E2) from the first high-frequency component, and perform attenuation processing on the first attenuation region E2 based on the size (hereinafter referred to as a second size) of the first attenuation region E2, the configured attenuation velocity value (hereinafter referred to as a second attenuation velocity value) corresponding to the first attenuation region E2, and the distance between the pixel point in the first attenuation region E2 and the second stitched line, so as to obtain the attenuated first high-frequency component.
Exemplarily, step 403 is similar to step 102, the first splicing region in step 102 is replaced by the first high-frequency component, and the first attenuation region is denoted as E2, which is not repeated herein.
Step 404, for the second low-frequency component in the first stitched image, determining a second attenuation region (hereinafter referred to as a second attenuation region E3) from the second low-frequency component, and performing attenuation processing on the second attenuation region E3 based on the size (hereinafter referred to as a third size) of the second attenuation region E3, the configured attenuation velocity value (hereinafter referred to as a third attenuation velocity value) corresponding to the second attenuation region E3, and the distance between the pixel point in the second attenuation region E3 and the first stitched line, to obtain the attenuated second low-frequency component.
Exemplarily, step 404 is similar to step 103, the second splicing area in step 103 is replaced by the second low-frequency component, and the second attenuation area is denoted as E3, which is not repeated herein.
Step 405, for the second high-frequency component in the second mosaic image, a second attenuation region (hereinafter referred to as a second attenuation region E4) is determined from the second high-frequency component, and attenuation processing is performed on the second attenuation region E4 based on the size of the second attenuation region E4 (hereinafter referred to as a fourth size), the configured attenuation velocity value (hereinafter referred to as a fourth attenuation velocity value) corresponding to the second attenuation region E4, and the distance between the pixel point in the second attenuation region E4 and the second mosaic line, so as to obtain the attenuated second high-frequency component.
Exemplarily, step 405 is similar to step 103, the second splicing region in step 103 is replaced by the second high-frequency component, and the second attenuation region is denoted as E4, which is not repeated herein.
In the above embodiment, referring to steps 402 and 403, the first size of the first attenuation region E1 may be greater than the second size of the first attenuation region E2, and the first attenuation speed value corresponding to the first attenuation region E1 is less than or equal to the second attenuation speed value corresponding to the first attenuation region E2. Alternatively, the first size of the first attenuation region E1 may be equal to the second size of the first attenuation region E2, and the first attenuation speed value corresponding to the first attenuation region E1 is smaller than the second attenuation speed value corresponding to the first attenuation region E2.
The reason for adopting the above design is as follows, as shown in formula (1), formula (2), and formula (4).

For the low-frequency component, the larger the first size is, the larger alpha_{x,y} is, and the smaller the target attenuation factor LS_{x,y} is (when LDiv_{x,y} is a value greater than 0 and less than 1, this conclusion can be obtained from formula (4)); that is, from the pixel points on the right side of the first attenuation region to the pixel points on the left side, the target attenuation factors increase in sequence, but the change speed of the target attenuation factor is small, so that the low-frequency component is attenuated slowly (i.e., the number of attenuated pixel points is large), the pixel points can transition smoothly, and the low-frequency characteristics of the first splicing region are better reflected. For the high-frequency component, the smaller the second size is, the smaller alpha_{x,y} is, and the larger the target attenuation factor LS_{x,y} is; that is, from the pixel points on the right side of the first attenuation region to the pixel points on the left side, the target attenuation factors increase in sequence, and the change speed of the target attenuation factor is large, so that the high-frequency component is attenuated quickly (the number of attenuated pixel points is small); the high-frequency characteristics of the first splicing region can be retained, but they change quickly and thus do not affect the visual experience. In summary, the first size may be larger and the second size smaller, that is, the first size of the first attenuation region E1 may be greater than the second size of the first attenuation region E2.
For the low-frequency component, the smaller the first attenuation speed value, the larger alpha_{x,y} and the smaller the target attenuation factor LS_{x,y}; that is, when attenuating from the pixel points on the right side of the first attenuation region to those on the left side, the target attenuation factors increase in turn, but their speed of change is small, so the low-frequency component attenuates slowly, the pixel points transition smoothly, and the low-frequency characteristics of the first splicing region are better reflected. For the high-frequency component, the larger the second attenuation speed value, the smaller alpha_{x,y} and the larger the target attenuation factor LS_{x,y}; that is, when attenuating from the pixel points on the right side of the first attenuation region to those on the left side, the target attenuation factors increase in turn and their speed of change is large, so the high-frequency component attenuates quickly, which preserves the high-frequency characteristics of the first splicing region; the high-frequency characteristics change quickly, but this does not affect the visual experience. In summary, the first attenuation speed value may be smaller and the second attenuation speed value larger, that is, the first attenuation speed value corresponding to the first attenuation region E1 may be smaller than the second attenuation speed value corresponding to the first attenuation region E2.
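The trade-off described in the two paragraphs above can be made concrete with a small numeric sketch. The factor form 1 - alpha + Div·alpha mirrors the RS formula quoted later in this description; the linear, clamped attenuation degree value is an assumed stand-in for formulas (1) and (2), which are not reproduced in this excerpt:

```python
def attenuation_degree(distance, size, speed):
    # Assumed degree value alpha: grows with the region size, shrinks
    # with the attenuation speed value and with distance from the
    # stitching line, clamped to [0, 1] (alpha = 1 on the line itself).
    return max(0.0, 1.0 - (speed * distance) / size)

def target_attenuation_factor(alpha, max_factor):
    # 1 - alpha + Div * alpha: equals Div at the stitching line
    # (alpha = 1) and relaxes to 1 far from it (alpha = 0).
    return 1.0 - alpha + max_factor * alpha

# Low frequency: large size, small speed -> slow, smooth transition.
low = [target_attenuation_factor(attenuation_degree(d, 64, 1.0), 0.8)
       for d in range(65)]
# High frequency: small size, large speed -> the factor is back to 1
# within a couple of pixels, preserving high-frequency detail.
high = [target_attenuation_factor(attenuation_degree(d, 8, 4.0), 0.8)
        for d in range(65)]
```

With LDiv_{x,y} = 0.8 (between 0 and 1), the low-frequency factor climbs slowly from 0.8 to 1 across the whole region, while the high-frequency factor reaches 1 after only two pixels, matching the rationale above.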
In the above embodiment, referring to steps 404 and 405, the third size of the second attenuation region E3 may be larger than the fourth size of the second attenuation region E4, and the third attenuation speed value corresponding to the second attenuation region E3 is smaller than or equal to the fourth attenuation speed value corresponding to the second attenuation region E4. Alternatively, the third size of the second attenuation region E3 may be equal to the fourth size of the second attenuation region E4, and the third attenuation speed value corresponding to the second attenuation region E3 is smaller than the fourth attenuation speed value corresponding to the second attenuation region E4.
Similarly, the reason for adopting the above design is as follows: for the low-frequency component of the second splicing region, when attenuating from the pixel points on the left side of the second attenuation region to those on the right side, the target attenuation factors increase in turn, but their speed of change is small, so the low-frequency component attenuates slowly, the pixel points transition smoothly, and the low-frequency characteristics of the second splicing region are better reflected. For the high-frequency component of the second splicing region, when attenuating from the pixel points on the left side of the second attenuation region to those on the right side, the target attenuation factors increase in turn and their speed of change is large, so the high-frequency component attenuates quickly, which preserves the high-frequency characteristics of the second splicing region; the high-frequency characteristics change quickly, but this does not affect the visual experience.
In one possible embodiment, for each of the above attribute values, the first size of the first attenuation region E1 may be equal to the third size of the second attenuation region E3, and the first attenuation speed value corresponding to the first attenuation region E1 may be equal to the third attenuation speed value corresponding to the second attenuation region E3. And, the second size of the first attenuation region E2 may be equal to the fourth size of the second attenuation region E4, and the second attenuation speed value corresponding to the first attenuation region E2 may be equal to the fourth attenuation speed value corresponding to the second attenuation region E4.
Step 406, obtaining a low-frequency stitched image based on the attenuated first low-frequency component and the attenuated second low-frequency component, where one side of a stitched line of the low-frequency stitched image is the low-frequency component (i.e., the attenuated first low-frequency component) obtained by attenuating the first attenuated region of the first low-frequency component, and the other side of the stitched line is the low-frequency component (i.e., the attenuated second low-frequency component) obtained by attenuating the second attenuated region of the second low-frequency component. Step 406 is similar to step 104 and will not be repeated herein.
Step 407, obtaining a high-frequency stitched image based on the attenuated first high-frequency component and the attenuated second high-frequency component, where one side of a stitched line of the high-frequency stitched image is the high-frequency component (i.e., the attenuated first high-frequency component) obtained by attenuating the first attenuated region of the first high-frequency component, and the other side of the stitched line is the high-frequency component (i.e., the attenuated second high-frequency component) obtained by attenuating the second attenuated region of the second high-frequency component. Step 407 is similar to step 104, and will not be repeated herein.
Step 408, fusing the low-frequency stitched image and the high-frequency stitched image to obtain a target image.
For example, the attenuated first low-frequency component in the low-frequency stitched image and the attenuated first high-frequency component in the high-frequency stitched image are fused to obtain the attenuated first splicing region. The attenuated second low-frequency component in the low-frequency stitched image and the attenuated second high-frequency component in the high-frequency stitched image are fused to obtain the attenuated second splicing region. The stitching line in the low-frequency stitched image and the stitching line in the high-frequency stitched image are fused to obtain a fused stitching line. At this point the target image is obtained: it includes the fused stitching line, with the attenuated first splicing region on one side and the attenuated second splicing region on the other side; the target image is a panoramic image or a high-resolution image (i.e., an ultra-wide-angle image).
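The per-band fusion described above reduces to a pixel-wise addition, consistent with the later remark that the processed blurred and detail stitched images are "added and fused". A minimal NumPy sketch (array names are illustrative, not from the patent):

```python
import numpy as np

def fuse(low_freq_stitched, high_freq_stitched):
    # Pixel-wise addition recombines the two bands; because the
    # high-frequency image was formed as original minus blurred,
    # addition restores the full-band target image.
    return low_freq_stitched + high_freq_stitched

low = np.array([[10.0, 20.0], [30.0, 40.0]])   # attenuated low band
high = np.array([[1.0, -2.0], [0.5, 0.0]])     # attenuated high band
target = fuse(low, high)
```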
In the above embodiment, the pixel value of the pixel point may be a luminance value, or the pixel value of the pixel point may be a chromatic value, or the pixel value of the pixel point may be a luminance value and a chromatic value.
According to the technical solution above, in the embodiments of the present application, the image is divided into frequency bands by filtering, and attenuation factors over regions of different sizes are smoothed for the high-frequency and low-frequency bands respectively, thereby realizing ghost-free smoothing of the stitched image. Smoothness of the stitched image can be achieved even when the overlapping region is very small or absent, color difference and luminance difference can be eliminated, and stitching traces caused by registration errors can be eliminated.
Based on the same application concept as the method above, an embodiment of the present application provides another image processing method, which may include: acquiring a stitched image, and smoothing at least one row of pixel points in the stitched image. The process of acquiring the stitched image may refer to step 101. Taking left-right stitching as an example, smoothing may be performed on each row of pixel points in the stitched image, and a specific smoothing process may include:
Step S51: select one pixel point in a row of pixel points as the seam point.
Illustratively, the seam point may be selected as follows: when the stitched images have an overlapping region, a left boundary point or a right boundary point of the overlapping region is selected as the seam point; when the stitched image has no overlapping region, a pixel point located in the middle region of the stitched image is selected as the seam point.
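The selection rule in step S51 can be sketched directly; the function name and the `(start, end)` column-index representation of the overlapping region are illustrative, not from the patent:

```python
def select_seam_point(row_width, overlap=None, use_left_boundary=True):
    # With an overlapping region given as (start, end) column indices,
    # pick one of its boundary points; with no overlap, pick a pixel
    # point in the middle region of the stitched row.
    if overlap is not None:
        start, end = overlap
        return start if use_left_boundary else end
    return row_width // 2
```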
Step S52: in response to the seam point, smooth a plurality of other pixel points except the seam point based on target attenuation factors to generate a plurality of processed other pixel points. Illustratively, the target attenuation factor is defined as a function operation taking the lateral coordinate difference between an other pixel point and the seam point and the maximum attenuation factor as parameters, so that when the maximum attenuation factor is less than 1, the variation between the pixel value of an other pixel point after smoothing and its pixel value before smoothing increases as the lateral coordinate difference increases, and when the maximum attenuation factor is greater than 1, that variation decreases as the lateral coordinate difference increases.
In the above embodiment, the plurality of other pixel points on the left side of the seam point correspond to the same left-side maximum attenuation factor, and the left-side maximum attenuation factor is defined as a function operation obtained by taking the preset smooth radius, the sum of the pixel values of the plurality of other pixel points on the left side of the seam point, and the sum of the pixel values of the plurality of other pixel points on the right side of the seam point as parameters. And a plurality of other pixel points on the right side of the seam point correspond to the same right-side maximum attenuation factor, and the right-side maximum attenuation factor is obtained by function operation with the preset smooth radius, the sum of the pixel values of a plurality of other pixel points on the left side of the seam point and the sum of the pixel values of a plurality of other pixel points on the right side of the seam point as parameters.
Illustratively, when the left-side maximum attenuation factor is greater than 1, the right-side maximum attenuation factor is less than 1. Alternatively, when the left-side maximum attenuation factor is less than 1, the right-side maximum attenuation factor is greater than 1.
In the above embodiment, when the left maximum attenuation factor is greater than 1 and the right maximum attenuation factor is less than 1, the target attenuation factors corresponding to the other pixel points on the left side of the seam point are defined to gradually decrease with increasing distance from the seam point, so that the attenuation ratios of the pixel values of the other pixel points on the left side of the seam point gradually decrease to 1 to the left; and a plurality of target attenuation factors corresponding to a plurality of other pixel points on the right side of the seam point are defined to gradually increase along with the distance from the seam point from small to large, so that the attenuation proportion of the pixel values of the other pixel points on the right side of the seam point is gradually increased to 1 rightward.
In the above embodiment, when the left-side maximum attenuation factor is less than 1 and the right-side maximum attenuation factor is greater than 1, the target attenuation factors corresponding to the other pixel points on the left side of the seam point are defined to gradually increase as the distance from the seam point increases, so that the attenuation proportion of the pixel values of the other pixel points on the left side of the seam point gradually increases to 1 leftward; and the target attenuation factors corresponding to the other pixel points on the right side of the seam point are defined to gradually decrease as the distance from the seam point increases, so that the attenuation proportion of the pixel values of the other pixel points on the right side of the seam point gradually decreases to 1 rightward.
In the above embodiments, the pixel values may be luminance values and/or chrominance values.
In one possible implementation, referring to formula (5), for each pixel point in the first attenuation region on the left side of the seam point, the pixel point may be smoothed based on its target attenuation factor LS_{x,y} to generate a processed pixel point; that is, a plurality of pixel points in the first attenuation region on the left side of the seam point are smoothed to generate a plurality of processed pixel points. Similarly, a plurality of pixel points in the first attenuation region on the right side of the seam point may be smoothed to generate a plurality of processed pixel points.
Referring to formula (4), the target attenuation factor LS_{x,y} of a pixel point is determined by the pixel point's alpha_{x,y} (i.e., its attenuation degree value) and LDiv_{x,y} (i.e., the maximum attenuation factor). As shown in formulas (1) and (2), alpha_{x,y} is determined by the lateral coordinate difference (x - x0) between the pixel point and the seam point; therefore, the target attenuation factor LS_{x,y} of the pixel point is obtained by a function operation taking the lateral coordinate difference (x - x0) between the pixel point and the seam point and the maximum attenuation factor LDiv_{x,y} as parameters.
Illustratively, the plurality of pixel points on the left side of the seam point correspond to the same maximum attenuation factor, namely the left-side maximum attenuation factor, denoted LDiv_{x,y}; the plurality of pixel points on the right side of the seam point correspond to the same maximum attenuation factor, namely the right-side maximum attenuation factor, denoted RDiv_{x,y}.
For the left-side maximum attenuation factor LDiv_{x,y} corresponding to the plurality of pixel points on the left side of the seam point, see steps S121-S123. LDiv_{x,y} is determined by the formula LDiv_{x,y} = (1 + T_{x0,y})/2, and T_{x0,y} is determined by the preset smoothing radius m, the sum of the pixel values of the plurality of pixel points on the left side of the seam point (P_{x0-1,y} + P_{x0-2,y} + … + P_{x0-n,y}), and the sum of the pixel values of the plurality of pixel points on the right side of the seam point (P_{x0+1,y} + P_{x0+2,y} + … + P_{x0+n,y}). Therefore, LDiv_{x,y} is obtained by a function operation taking the preset smoothing radius, the sum of the pixel values of the plurality of other pixel points on the left side of the seam point, and the sum of the pixel values of the plurality of other pixel points on the right side of the seam point as parameters.
For the right-side maximum attenuation factor RDiv_{x,y} corresponding to the plurality of pixel points on the right side of the seam point, see steps S221-S223. RDiv_{x,y} is determined by the formula RDiv_{x,y} = (1 + 1/T_{x0,y})/2, and T_{x0,y} is determined by the preset smoothing radius m, the sum of the pixel values of the plurality of pixel points on the left side of the seam point (P_{x0-1,y} + P_{x0-2,y} + … + P_{x0-n,y}), and the sum of the pixel values of the plurality of pixel points on the right side of the seam point (P_{x0+1,y} + P_{x0+2,y} + … + P_{x0+n,y}). Therefore, RDiv_{x,y} is obtained by a function operation taking the preset smoothing radius, the sum of the pixel values of the plurality of other pixel points on the left side of the seam point, and the sum of the pixel values of the plurality of other pixel points on the right side of the seam point as parameters.
Illustratively, from the above formulas LDiv_{x,y} = (1 + T_{x0,y})/2 and RDiv_{x,y} = (1 + 1/T_{x0,y})/2 it can be seen that if T_{x0,y} is greater than 1, then LDiv_{x,y} is greater than 1 and RDiv_{x,y} is less than 1; or, if T_{x0,y} is less than 1, then LDiv_{x,y} is less than 1 and RDiv_{x,y} is greater than 1. In summary, when the left-side maximum attenuation factor LDiv_{x,y} is greater than 1, the right-side maximum attenuation factor RDiv_{x,y} is less than 1; or, when the left-side maximum attenuation factor LDiv_{x,y} is less than 1, the right-side maximum attenuation factor RDiv_{x,y} is greater than 1.
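The complementary behaviour of LDiv_{x,y} and RDiv_{x,y} can be checked numerically. The excerpt gives the two division formulas but not the exact formula for T_{x0,y}; here T is assumed to be the ratio of the left-side pixel sum to the right-side pixel sum within the smoothing radius, which reproduces the greater-than-1/less-than-1 pairing described above:

```python
def max_attenuation_factors(row, x0, m):
    # T is assumed to be left-sum / right-sum within radius m of the
    # seam point x0; the excerpt only states that T depends on these
    # two sums and on the preset smoothing radius.
    left_sum = sum(row[x0 - m:x0])
    right_sum = sum(row[x0 + 1:x0 + 1 + m])
    t = left_sum / right_sum
    ldiv = (1 + t) / 2        # left-side maximum attenuation factor
    rdiv = (1 + 1 / t) / 2    # right-side maximum attenuation factor
    return ldiv, rdiv
```

For a row that is brighter on the left of the seam, T > 1, so LDiv > 1 and RDiv < 1; swapping the sides reverses both inequalities.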
Illustratively, when the left-side maximum attenuation factor LDiv_{x,y} is greater than 1, as shown in formula (4), LS_{x,y} is proportional to alpha_{x,y}; referring to formulas (1) and (2), alpha_{x,y} is inversely proportional to the lateral coordinate difference (x - x0), and therefore LS_{x,y} is inversely proportional to the lateral coordinate difference (x - x0). Based on this, the target attenuation factors corresponding to the plurality of pixel points on the left side of the seam point gradually decrease as the distance from the seam point increases, so that the attenuation proportion of the pixel values of the pixel points on the left side of the seam point gradually decreases to 1 leftward, i.e., LS_{x,y} gradually decreases to 1 to the left, changing from large to small.
When the right-side maximum attenuation factor RDiv_{x,y} is less than 1, the formula RS_{x,y} = 1 - alpha_{x,y} + RDiv_{x,y} * alpha_{x,y} shows that RS_{x,y} is inversely proportional to alpha_{x,y}; and since alpha_{x,y} is inversely proportional to the lateral coordinate difference (x - x0), RS_{x,y} is proportional to the lateral coordinate difference (x - x0). Based on this, the target attenuation factors corresponding to the plurality of pixel points on the right side of the seam point gradually increase as the distance from the seam point increases, so that the attenuation proportion of the pixel values of the pixel points on the right side of the seam point gradually increases to 1 rightward, i.e., RS_{x,y} gradually increases to 1 to the right, changing from small to large.
Illustratively, when the left-side maximum attenuation factor LDiv_{x,y} is less than 1, LS_{x,y} is inversely proportional to alpha_{x,y}; since alpha_{x,y} is inversely proportional to the lateral coordinate difference (x - x0), LS_{x,y} is proportional to the lateral coordinate difference (x - x0). The attenuation proportion of the pixel values of the plurality of pixel points on the left side of the seam point gradually increases to 1 leftward, i.e., LS_{x,y} gradually increases to 1 to the left, changing from small to large.
When the right-side maximum attenuation factor RDiv_{x,y} is greater than 1, RS_{x,y} is proportional to alpha_{x,y}; since alpha_{x,y} is inversely proportional to the lateral coordinate difference (x - x0), RS_{x,y} is inversely proportional to the lateral coordinate difference (x - x0). Based on this, the target attenuation factors corresponding to the pixel points on the right side of the seam point gradually decrease as the distance from the seam point increases, and the attenuation proportion of the pixel values of the pixel points on the right side of the seam point gradually decreases to 1 rightward, i.e., RS_{x,y} gradually decreases to 1 to the right, changing from large to small.
In summary, when the maximum attenuation factor is less than 1, the variation between the pixel values of the pixel points after smoothing and before smoothing increases as the lateral coordinate difference increases; when the maximum attenuation factor is greater than 1, that variation decreases as the lateral coordinate difference increases. For example, when the left-side maximum attenuation factor LDiv_{x,y} is greater than 1 and the right-side maximum attenuation factor RDiv_{x,y} is less than 1, the variation for the pixel points on the right side increases as the lateral coordinate difference increases, while the variation for the pixel points on the left side decreases as the lateral coordinate difference increases.
When the left-side maximum attenuation factor LDiv_{x,y} is less than 1 and the right-side maximum attenuation factor RDiv_{x,y} is greater than 1, the variation between the pixel values of the pixel points on the left side after smoothing and before smoothing increases as the lateral coordinate difference increases, while the variation for the pixel points on the right side decreases as the lateral coordinate difference increases.
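Putting the pieces together, one row can be smoothed as sketched below. The closed form 1 - alpha + Div·alpha is the RS formula quoted above (and its left-side analogue); the linear, clamped form of the attenuation degree value alpha is an assumption, since formulas (1), (2) and (5) are not reproduced in this excerpt:

```python
def smooth_row(row, x0, m, ldiv, rdiv):
    # Scale every other pixel by its target attenuation factor: the
    # factor equals LDiv (left) or RDiv (right) next to the seam point
    # and relaxes to 1 as the lateral coordinate difference grows.
    out = list(row)
    for x in range(len(row)):
        if x == x0:
            continue
        alpha = max(0.0, 1.0 - abs(x - x0) / m)  # assumed degree value
        div = ldiv if x < x0 else rdiv
        out[x] = row[x] * (1.0 - alpha + div * alpha)
    return out
```

With LDiv < 1 and RDiv > 1, the brighter left side is pulled down and the darker right side is pulled up near the seam, while pixels outside the smoothing radius are left untouched.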
In the above embodiment, the stitched image may include a blurred stitched image generated by filtering the original stitched image (i.e., the first stitched image, where one side of its stitching line is the first low-frequency component and the other side is the second low-frequency component) and a detail stitched image generated by subtracting the blurred stitched image from the original stitched image (i.e., the second stitched image, where one side of its stitching line is the first high-frequency component and the other side is the second high-frequency component); smoothing is performed on at least one row of pixel points in each of the blurred stitched image and the detail stitched image. For example, the processed blurred stitched image and the processed detail stitched image may be added and fused to generate a fused stitched image.
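The blurred/detail decomposition described here can be sketched with a simple box filter; the excerpt does not specify which low-pass filter is used, so the kernel below is illustrative. By construction, adding the two bands back together restores the original image exactly:

```python
import numpy as np

def decompose(image, k=5):
    # Blurred (low-frequency) image via a k x k box filter with edge
    # padding; the detail (high-frequency) image is the residual, so
    # blur + detail reproduces the original stitched image.
    pad = k // 2
    padded = np.pad(image, pad, mode='edge')
    blur = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            blur[i, j] = padded[i:i + k, j:j + k].mean()
    detail = image - blur
    return blur, detail
```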
Based on the same application concept as the method, an image processing apparatus is proposed in the embodiment of the present application, and referring to fig. 5, the image processing apparatus is a schematic structural diagram of the image processing apparatus, and the apparatus may include: an obtaining module 51, configured to obtain a stitched image based on a first original image and a second original image, where the stitched image includes a stitching line, one side of the stitching line is a first stitched region determined based on the first original image, and the other side of the stitching line is a second stitched region determined based on the second original image; a processing module 52, configured to determine a first attenuation region from the first splicing region, and perform attenuation processing on the first attenuation region based on the size of the first attenuation region, a configured attenuation speed value corresponding to the first attenuation region, and a distance between a pixel point in the first attenuation region and the splicing line; determining a second attenuation region from the second splicing region, and performing attenuation processing on the second attenuation region based on the size of the second attenuation region, the configured attenuation speed value corresponding to the second attenuation region and the distance between the pixel point in the second attenuation region and the splicing line; a generating module 53, configured to generate a target image based on the attenuated first splicing region and the attenuated second splicing region, where the target image includes the splicing line, one side of the splicing line is the attenuated first splicing region, and the other side of the splicing line is the attenuated second splicing region.
For example, the processing module 52 is configured to perform attenuation processing on the first attenuation region based on the size of the first attenuation region, the configured attenuation speed value corresponding to the first attenuation region, and the distance between the pixel point in the first attenuation region and the stitch line, specifically:
for each pixel point in the first attenuation region, determining an attenuation degree value corresponding to the pixel point based on the size of the first attenuation region, the configured attenuation speed value corresponding to the first attenuation region and the distance between the pixel point and the splicing line; wherein the attenuation degree value is in direct proportion to the size, the attenuation degree value is in inverse proportion to the attenuation speed value, and the attenuation degree value is in inverse proportion to the distance;
determining a target attenuation factor corresponding to the pixel point based on the attenuation degree value, determining a target pixel value of the pixel point based on the target attenuation factor and an original pixel value of the pixel point, and generating a first splicing region after attenuation processing based on the target pixel value of each pixel point in the first attenuation region.
Illustratively, the first stitched region includes a first low-frequency component and a first high-frequency component, the second stitched region includes a second low-frequency component and a second high-frequency component, the stitched image includes a first stitched image and a second stitched image, one side of a stitching line of the first stitched image is the first low-frequency component, the other side of the stitching line is the second low-frequency component, one side of a stitching line of the second stitched image is the first high-frequency component, and the other side of the stitching line is the second high-frequency component; the generating module 53 is specifically configured to, when generating the target image based on the attenuated first splicing region and the attenuated second splicing region: generating a low-frequency spliced image based on the first spliced image, wherein one side of a spliced line of the low-frequency spliced image is a low-frequency component obtained by attenuating a first attenuation region of a first low-frequency component, and the other side of the spliced line is a low-frequency component obtained by attenuating a second attenuation region of a second low-frequency component;
generating a high-frequency spliced image based on a second spliced image, wherein one side of a spliced line of the high-frequency spliced image is a high-frequency component obtained by attenuating a first attenuation region of a first high-frequency component, and the other side of the spliced line is a high-frequency component obtained by attenuating a second attenuation region of a second high-frequency component;
and fusing the low-frequency spliced image and the high-frequency spliced image to obtain the target image.
Based on the same application concept as the method described above, an image processing apparatus is proposed in the embodiment of the present application, and as shown in fig. 6, the image processing apparatus may include: a processor 61 and a machine-readable storage medium 62, the machine-readable storage medium 62 storing machine-executable instructions executable by the processor 61; the processor 61 is configured to execute machine executable instructions to perform the following steps:
acquiring a spliced image based on a first original image and a second original image, wherein the spliced image comprises a spliced line, one side of the spliced line is a first spliced area determined based on the first original image, and the other side of the spliced line is a second spliced area determined based on the second original image;
determining a first attenuation region from the first splicing region, and performing attenuation processing on the first attenuation region based on the size of the first attenuation region, the configured attenuation speed value corresponding to the first attenuation region and the distance between the pixel point in the first attenuation region and the splicing line;
determining a second attenuation region from the second splicing region, and performing attenuation processing on the second attenuation region based on the size of the second attenuation region, the configured attenuation speed value corresponding to the second attenuation region and the distance between the pixel point in the second attenuation region and the splicing line;
generating a target image based on the attenuated first splicing region and the attenuated second splicing region, wherein the target image comprises the splicing line, one side of the splicing line is the attenuated first splicing region, and the other side of the splicing line is the attenuated second splicing region.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where several computer instructions are stored, and when the computer instructions are executed by a processor, the image processing method disclosed in the above example of the present application can be implemented.
The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), or a similar storage medium, or a combination thereof.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring a spliced image based on a first original image and a second original image, wherein the spliced image comprises a splicing line, one side of the splicing line is a first splicing region determined based on the first original image, and the other side of the splicing line is a second splicing region determined based on the second original image;
determining a first attenuation region from the first splicing region, and performing attenuation processing on the first attenuation region based on the size of the first attenuation region, the configured attenuation speed value corresponding to the first attenuation region and the distance between the pixel point in the first attenuation region and the splicing line;
determining a second attenuation region from the second splicing region, and performing attenuation processing on the second attenuation region based on the size of the second attenuation region, the configured attenuation speed value corresponding to the second attenuation region and the distance between the pixel point in the second attenuation region and the splicing line;
generating a target image based on the attenuated first splicing region and the attenuated second splicing region, wherein the target image comprises the splicing line, one side of the splicing line is the attenuated first splicing region, and the other side of the splicing line is the attenuated second splicing region.
2. The method of claim 1, wherein the attenuating the first attenuation region based on the size of the first attenuation region, the configured attenuation velocity value corresponding to the first attenuation region, and the distance between the pixel point and the stitch line in the first attenuation region comprises:
for each pixel point in the first attenuation region, determining an attenuation degree value corresponding to the pixel point based on the size of the first attenuation region, the configured attenuation speed value corresponding to the first attenuation region and the distance between the pixel point and the splicing line; wherein the attenuation degree value is directly proportional to the size, inversely proportional to the attenuation speed value, and inversely proportional to the distance;
determining a target attenuation factor corresponding to the pixel point based on the attenuation degree value, determining a target pixel value of the pixel point based on the target attenuation factor and an original pixel value of the pixel point, and generating a first splicing region after attenuation processing based on the target pixel value of each pixel point in the first attenuation region.
3. The method of claim 2,
the determining the target attenuation factor corresponding to the pixel point based on the attenuation degree value includes:
acquiring a maximum attenuation factor corresponding to the pixel point;
and determining a target attenuation factor corresponding to the pixel point based on the attenuation degree value and the maximum attenuation factor.
4. The method of claim 3,
the obtaining of the maximum attenuation factor corresponding to the pixel point includes:
acquiring a configured maximum attenuation factor corresponding to the pixel point; alternatively,
and determining an initial scale factor corresponding to the splicing line based on the first splicing area and the second splicing area, filtering the initial scale factor to obtain a target scale factor corresponding to the splicing line, and determining a maximum attenuation factor corresponding to the pixel point based on the target scale factor.
5. The method according to any one of claims 1 to 4,
the first splicing region comprises a first low-frequency component and a first high-frequency component, the second splicing region comprises a second low-frequency component and a second high-frequency component, the spliced images comprise a first spliced image and a second spliced image, one side of a splicing line of the first spliced image is the first low-frequency component, the other side of the splicing line is the second low-frequency component, one side of a splicing line of the second spliced image is the first high-frequency component, and the other side of the splicing line is the second high-frequency component; the generating of the target image based on the first splicing region after the attenuation processing and the second splicing region after the attenuation processing includes:
generating a low-frequency spliced image based on the first spliced image, wherein one side of a spliced line of the low-frequency spliced image is a low-frequency component obtained by attenuating a first attenuation region of a first low-frequency component, and the other side of the spliced line is a low-frequency component obtained by attenuating a second attenuation region of a second low-frequency component;
generating a high-frequency spliced image based on a second spliced image, wherein one side of a spliced line of the high-frequency spliced image is a high-frequency component obtained by attenuating a first attenuation region of a first high-frequency component, and the other side of the spliced line is a high-frequency component obtained by attenuating a second attenuation region of a second high-frequency component;
and fusing the low-frequency spliced image and the high-frequency spliced image to obtain the target image.
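Claim 5's split-then-fuse structure relies on the identity that a signal is the sum of its low- and high-frequency components, so the two components can be attenuated independently and then fused by addition. A box blur is one hypothetical low-pass filter (the claim does not name one); the function name and radius parameter are likewise assumptions:

```python
def split_frequencies(row, radius):
    """Split a 1-D pixel row into low- and high-frequency components
    using a box blur as the low-pass filter (illustrative choice)."""
    w = len(row)
    low = []
    for i in range(w):
        lo, hi = max(i - radius, 0), min(i + radius + 1, w)
        low.append(sum(row[lo:hi]) / (hi - lo))  # windowed mean
    high = [p - l for p, l in zip(row, low)]     # residual detail
    return low, high
```

Because `low[i] + high[i]` reconstructs the original pixel exactly, attenuating each component and summing them yields a fused target image consistent with the claim's description.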
6. The method according to any one of claims 1 to 4,
when the first original image and the second original image do not have an overlapping area, taking a boundary line located in the middle area of the spliced image as the splicing line, wherein the first splicing region is the first original image, and the second splicing region is the second original image;
when the first original image and the second original image have an overlapping area, taking a boundary line of the spliced image located in the overlapping area as the splicing line, wherein the first splicing region is the first original image and the second splicing region is a part of the second original image; or the first splicing region is a part of the first original image and the second splicing region is the second original image; or the first splicing region is a part of the first original image and the second splicing region is a part of the second original image;
wherein, the acquisition mode of the stitch line in the stitching image comprises:
determining the pixel value of each pixel point in the splicing line based on the pixel values of the N pixel points adjacent to the splicing line in the first splicing region and the pixel values of the N pixel points adjacent to the splicing line in the second splicing region, and determining the splicing line based on the pixel value of each pixel point in the splicing line.
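The seam-line construction at the end of claim 6 derives each seam pixel from the N pixels adjacent to the splicing line in each splicing region. The claim does not fix the combining function; a plain average is assumed in this sketch, and all names are illustrative:

```python
def seam_pixel_values(left_region, right_region, n):
    """Compute one seam pixel per row from the n pixels adjacent to the
    splicing line on each side (averaging is an assumed choice)."""
    seam = []
    for left_row, right_row in zip(left_region, right_region):
        neighbors = left_row[-n:] + right_row[:n]  # n nearest from each side
        seam.append(sum(neighbors) / (2 * n))
    return seam
```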
7. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a spliced image based on a first original image and a second original image, wherein the spliced image comprises a splicing line, one side of the splicing line is a first splicing region determined based on the first original image, and the other side of the splicing line is a second splicing region determined based on the second original image;
a processing module, configured to determine a first attenuation region from the first splicing region and perform attenuation processing on the first attenuation region based on the size of the first attenuation region, the configured attenuation speed value corresponding to the first attenuation region and the distance between the pixel point in the first attenuation region and the splicing line; and to determine a second attenuation region from the second splicing region and perform attenuation processing on the second attenuation region based on the size of the second attenuation region, the configured attenuation speed value corresponding to the second attenuation region and the distance between the pixel point in the second attenuation region and the splicing line; and
a generation module, configured to generate a target image based on the first splicing region after the attenuation processing and the second splicing region after the attenuation processing, wherein the target image comprises the splicing line, one side of the splicing line is the first splicing region after the attenuation processing, and the other side of the splicing line is the second splicing region after the attenuation processing.
8. An image processing method, comprising:
acquiring a spliced image;
and performing smoothing processing on at least one line of pixel points in the spliced image, wherein the smoothing processing comprises the following steps:
selecting one pixel point from the line of pixel points as a seam point;
in response to the seam point, performing smoothing processing on a plurality of other pixel points except the seam point based on a target attenuation factor to generate a plurality of processed other pixel points; wherein, when the maximum attenuation factor is greater than 1, the variation between the pixel values of the other pixel points after the smoothing processing and their pixel values before the smoothing processing decreases as the horizontal coordinate difference increases; and wherein the pixel values are luminance values and/or chrominance values.
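The decay property in claim 8 — when the maximum attenuation factor exceeds 1, the change applied to a pixel shrinks as its horizontal distance from the seam point grows — can be sketched with a linearly decaying weight. The weight shape, the smoothing radius, and all names are assumptions; the claim only constrains the monotone falloff:

```python
def smooth_row(row, seam_idx, max_factor, radius):
    """Scale every pixel other than the seam point by a factor that
    decays from max_factor at the seam toward 1.0 at the radius edge."""
    out = list(row)
    for i in range(len(row)):
        offset = abs(i - seam_idx)
        if offset == 0 or offset > radius:
            continue  # the seam point itself is left untouched
        weight = 1.0 - offset / (radius + 1)  # decays with offset
        out[i] = row[i] * (1.0 + (max_factor - 1.0) * weight)
    return out
```

For a constant row and `max_factor=2.0`, the pixel adjacent to the seam changes most and the change shrinks with each further column, matching the claimed behaviour.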
9. The method of claim 8, wherein the seam point is selected as follows:
when the spliced image has an overlapping area, selecting a left boundary point or a right boundary point of the overlapping area as the seam point; and
when the spliced image has no overlapping area, selecting a pixel point in the middle area of the spliced image as the seam point.
10. The method of claim 8,
the plurality of other pixel points on the left side of the seam point correspond to the same left-side maximum attenuation factor, and the left-side maximum attenuation factor is obtained by a function that takes as parameters a preset smoothing radius, the sum of the pixel values of the plurality of other pixel points on the left side of the seam point, and the sum of the pixel values of the plurality of other pixel points on the right side of the seam point;
the plurality of other pixel points on the right side of the seam point correspond to the same right-side maximum attenuation factor, and the right-side maximum attenuation factor is obtained by a function that takes as parameters the preset smoothing radius, the sum of the pixel values of the plurality of other pixel points on the left side of the seam point, and the sum of the pixel values of the plurality of other pixel points on the right side of the seam point;
wherein the right side maximum attenuation factor is less than 1 when the left side maximum attenuation factor is greater than 1, and the right side maximum attenuation factor is greater than 1 when the left side maximum attenuation factor is less than 1.
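One consistent reading of claim 10 is to derive each side's maximum attenuation factor from the sums of pixel values within the smoothing radius of the seam point. The mean-ratio formula below is an assumption, chosen because it automatically satisfies the claim's constraint that when one side's factor is above 1 the other's is below 1 (whenever the two sides differ in brightness):

```python
def max_attenuation_factors(row, seam_idx, radius):
    """Pull both sides toward their common mean: sides darker than the
    mean get a factor > 1, brighter sides a factor < 1 (illustrative)."""
    left = row[max(seam_idx - radius, 0):seam_idx]
    right = row[seam_idx + 1:seam_idx + 1 + radius]
    target = (sum(left) + sum(right)) / (len(left) + len(right))
    left_factor = target / (sum(left) / len(left))
    right_factor = target / (sum(right) / len(right))
    return left_factor, right_factor
```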
CN202110402157.7A 2021-04-14 2021-04-14 Image processing method and device Active CN113077387B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110402157.7A CN113077387B (en) 2021-04-14 2021-04-14 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110402157.7A CN113077387B (en) 2021-04-14 2021-04-14 Image processing method and device

Publications (2)

Publication Number Publication Date
CN113077387A true CN113077387A (en) 2021-07-06
CN113077387B CN113077387B (en) 2023-06-27

Family

ID=76617901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110402157.7A Active CN113077387B (en) 2021-04-14 2021-04-14 Image processing method and device

Country Status (1)

Country Link
CN (1) CN113077387B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101583974A (en) * 2006-12-13 2009-11-18 杜比实验室特许公司 Methods and apparatus for stitching digital images
CN105931188A (en) * 2016-05-06 2016-09-07 安徽伟合电子科技有限公司 Method for image stitching based on mean value duplication removal
CN105957018A (en) * 2016-07-15 2016-09-21 武汉大学 Unmanned aerial vehicle image filtering frequency division jointing method
CN106469444A (en) * 2016-09-20 2017-03-01 天津大学 Eliminate the rapid image fusion method in splicing gap
CN106940877A (en) * 2016-01-05 2017-07-11 富士通株式会社 Image processing apparatus and method
CN107248137A (en) * 2017-04-27 2017-10-13 努比亚技术有限公司 A kind of method and mobile terminal for realizing image procossing
CN109300084A (en) * 2017-07-25 2019-02-01 杭州海康汽车技术有限公司 A kind of image split-joint method, device, electronic equipment and storage medium
CN110782424A (en) * 2019-11-08 2020-02-11 重庆紫光华山智安科技有限公司 Image fusion method and device, electronic equipment and computer readable storage medium
CN112233154A (en) * 2020-11-02 2021-01-15 影石创新科技股份有限公司 Color difference elimination method, device and equipment for spliced image and readable storage medium


Also Published As

Publication number Publication date
CN113077387B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
US10389948B2 (en) Depth-based zoom function using multiple cameras
TWI381719B (en) Full-frame video stabilization with a polyline-fitted camcorder path
CN106447602B (en) Image splicing method and device
US20090052532A1 (en) Automatically identifying edges of moving objects
DE102008059372A1 (en) Bildverzeichnungskorrektur
JP2011060216A (en) Device and method of processing image
CN109509146A (en) Image split-joint method and device, storage medium
JP5225313B2 (en) Image generating apparatus, image generating method, and program
GB2473247A (en) Aligning camera images using transforms based on image characteristics in overlapping fields of view
US20210021756A1 (en) Multi-camera post-capture image processing
US7840070B2 (en) Rendering images based on image segmentation
CN111052737A (en) Apparatus and method for generating image
Ceulemans et al. Robust multiview synthesis for wide-baseline camera arrays
CN109194888B (en) DIBR free viewpoint synthesis method for low-quality depth map
US20180322671A1 (en) Method and apparatus for visualizing a ball trajectory
CN113077387A (en) Image processing method and device
JP3492151B2 (en) Image sharpness processing method and apparatus, and storage medium storing program
Liang et al. Guidance network with staged learning for image enhancement
KR102576700B1 (en) Method and apparatus for virtual viewpoint image synthesis by mixing warped image
CN116228855A (en) Visual angle image processing method and device, electronic equipment and computer storage medium
CN112669355B (en) Method and system for splicing and fusing focusing stack data based on RGB-D super pixel segmentation
CN111709877A (en) Image fusion method for industrial detection
JP3058769B2 (en) 3D image generation method
Cao et al. View Transition based Dual Camera Image Fusion
US11995799B2 (en) Image processing apparatus, image processing method, and non-transitory computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant