CN109314776B - Image processing method, image processing apparatus, and storage medium

Info

Publication number: CN109314776B
Application number: CN201780034126.2A
Authority: CN (China)
Other publications: CN109314776A
Inventor: 阳光
Assignee: Shenzhen A&E Intelligent Technology Institute Co Ltd
Legal status: Active (granted)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects


Abstract

The invention discloses an image processing method, an image processing apparatus, and a storage medium. The image processing method comprises the following steps: acquiring a plurality of High Dynamic Range (HDR) images from multiple viewpoints, wherein the HDR images are obtained by capturing and processing the same target space; and calculating a depth image of the target space from the image data of the HDR images. With this method, the accuracy of the depth information in the calculated depth image can be improved.

Description

Image processing method, image processing apparatus, and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, and a storage medium.
Background
Currently, depth information is usually obtained by directly capturing a related image, such as a color image, with a camera or similar device, and processing the image data of that image with a related algorithm to obtain a depth image, where the depth image contains the depth information of every point on the image.
As applications of depth information become widespread, it is increasingly desirable to obtain the depth information of moving objects in real time. However, images captured of a moving object are easily blurred, and when the lighting changes the exposure may not adapt in time, so image data in highlight or low-light areas is lost; the depth information calculated from such captured images is therefore inaccurate.
Disclosure of Invention
The invention mainly solves the technical problem of providing an image processing method, an image processing device and a storage medium, which can improve the calculation accuracy of depth information.
To solve this technical problem, a first technical solution adopted by the present invention is to provide an image processing method, including: acquiring a plurality of High Dynamic Range (HDR) images from multiple viewpoints, wherein the HDR images are obtained by capturing and processing the same target space; and calculating a depth image of the target space from the image data of the HDR images.
To solve the above technical problem, a second technical solution of the present invention is to provide an image processing apparatus, including a processor and a memory connected to each other. The memory is used to store computer instructions and data. The processor executes the computer instructions to: acquire a plurality of High Dynamic Range (HDR) images from multiple viewpoints, wherein the HDR images are obtained by capturing and processing the same target space; and calculate a depth image of the target space from the image data of the HDR images.
To solve the above technical problem, a third technical solution of the present invention is to provide a nonvolatile storage medium storing computer instructions executable by a processor, where the computer instructions are used to perform the image processing method of the first technical solution.
To solve the above technical problem, a fourth technical solution of the present invention is to provide an image processing method, including: acquiring a plurality of images with different exposure times, wherein the images are captured of the same target space from the same viewpoint; calculating depth information of at least one of the images captured of the target space from the current viewpoint using the previously calculated depth information of the depth image of the target space, and deblurring that image according to the depth information; performing pixel point matching on the deblurred images; and synthesizing the plurality of images according to the pixel point matching result to obtain a High Dynamic Range (HDR) image.
To solve the above technical problem, a fifth technical solution adopted by the present invention is to provide an image processing apparatus, comprising a processor and a memory connected to each other. The memory is used to store computer instructions and data. The processor executes the computer instructions to: acquire a plurality of images with different exposure times, wherein the images are captured of the same target space from the same viewpoint; calculate depth information of at least one of the images captured of the target space from the current viewpoint using the previously calculated depth information of the depth image of the target space, and deblur that image according to the depth information; perform pixel point matching on the deblurred images; and synthesize the plurality of images according to the pixel point matching result to obtain a High Dynamic Range (HDR) image.
To solve the above technical problem, a sixth technical solution of the present invention is to provide a nonvolatile storage medium storing computer instructions executable by a processor, where the computer instructions are used to perform the image processing method of the fourth technical solution.
With the above solutions, the depth image is calculated from HDR images obtained by processing the captured images, rather than from the captured images directly, which can improve the accuracy of the depth information calculated for the target space.
In addition, when synthesizing the HDR image, the captured images can first be deblurred, which improves the sharpness of the HDR image and the accuracy of the HDR image data. Deblurring prevents captured image data from becoming unusable because of lighting, improving the adaptability of image capture to complex lighting; it also removes the influence of motion on image capture, enabling HDR image synthesis and depth calculation that can tolerate a motion state.
Drawings
FIG. 1 is a flow chart of an embodiment of an image processing method of the present invention;
FIG. 2 is a schematic diagram of image acquisition of an application scene shown in FIG. 1;
FIG. 3 is a flow chart of the step S12 shown in FIG. 1 in another embodiment;
FIG. 4 is a schematic diagram of image matching in an application scenario shown in FIG. 3;
FIG. 5 is a flow chart of another embodiment of an image processing method of the present invention;
FIG. 6 is a flowchart of the step S52 shown in FIG. 5 in a further embodiment;
FIG. 7 is a schematic structural diagram of an embodiment of the image processing apparatus of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The solutions provided by the present invention can be applied to scenes in which the image collector and the captured target space are in relative motion. For example, the image collector may be mounted in a moving or reversing vehicle, and the images of the surrounding environment it captures in real time (for example, once every set interval) are processed to obtain depth information of the current surroundings, that is, the distance between the vehicle and surrounding objects. As another example, the captured targets may be vehicles on a road: the image collector is fixed at the roadside and captures images of passing vehicles in real time to calculate their current depth information, that is, the distance between each vehicle and the image collector.
Referring to fig. 1, fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the invention. In this embodiment, the method is used for calculating a depth image of a target space, and is executed by an image processing device, and includes the following steps:
s11: multiple HDR images of multiple viewpoints are acquired.
The multiple HDR (High Dynamic Range) images are obtained by capturing and processing the same target space; that is, the HDR images of the multiple viewpoints overlap to some extent. In this embodiment, the multiple viewpoints capture images simultaneously, and the captured target spaces overlap to some extent. In other embodiments, the HDR images of multiple viewpoints may also be obtained by the same image collector capturing and synthesizing the same target space from different positions at different times.
Specifically, the HDR image of each viewpoint may be synthesized from multiple images captured at that viewpoint with different exposure times. For example, S11 specifically includes: performing pixel point matching on the multiple images with different exposure times of each viewpoint; and synthesizing, according to the pixel point matching result, the multiple images of each viewpoint into an HDR image of that viewpoint. Taking as an example a first image and a second image captured at each viewpoint with two different exposure times, the matched pixel points in the first and second images of each viewpoint are found, and the HDR image data of the viewpoint is calculated from the matched pixel points and the image data in the first and second images.
As shown in fig. 2, image collectors 21 and 22 are set at viewpoints A and B facing the road, and the image processing apparatus acquires in real time a first image a1 and a second image a2 at viewpoint A, and a first image b1 and a second image b2 at viewpoint B. The first images a1 and b1 are captured with a first exposure time t1, and the second images a2 and b2 are captured with a second exposure time t2. The first exposure time differs from the second exposure time; in this embodiment it is longer by Δt. The image processing apparatus matches the two images of each viewpoint separately: the pixel points corresponding to the same spatial point are matched between the first image a1 and the second image a2 of viewpoint A, and likewise between the first image b1 and the second image b2 of viewpoint B. After matching, the image data of each pixel point of the HDR image can be computed with a set synthesis algorithm from the image data of the matched pixel points in the two images a1 and a2 of viewpoint A, yielding one frame of HDR image of viewpoint A; similarly, one frame of HDR image of viewpoint B is computed from the matched pixel points of the two images b1 and b2 of viewpoint B. It can be understood that the pixel matching and HDR synthesis in the above processes use existing techniques, for example grayscale-based template matching algorithms or feature-based matching algorithms; they are not the inventive points of the present invention and are not limited here.
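The following sketch illustrates this per-viewpoint synthesis step with two exposures. It is a minimal stand-in, assuming OpenCV: median-threshold-bitmap alignment substitutes for the pixel point matching, and Mertens exposure fusion substitutes for the unspecified "set synthesis algorithm"; the frame names a1, a2, b1, b2 follow the example above and are hypothetical.

import cv2
import numpy as np

def synthesize_hdr(first_image, second_image):
    """Fuse two differently exposed uint8 BGR frames of one viewpoint into an HDR-like frame."""
    frames = [first_image, second_image]
    # Align the two exposures (stand-in for the per-pixel matching of S11).
    cv2.createAlignMTB().process(frames, frames)
    # Exposure fusion; Mertens fusion needs no camera response curve or exposure times.
    fused = cv2.createMergeMertens().process(frames)
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)

# Hypothetical usage with the frames named above:
# hdr_a = synthesize_hdr(a1, a2)   # viewpoint A, exposures t1 and t2
# hdr_b = synthesize_hdr(b1, b2)   # viewpoint B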
Further, the image processing apparatus may also specifically perform the steps of the image processing method embodiment regarding HDR image composition shown in fig. 5 below to acquire an HDR image of each viewpoint.
S12: and calculating a depth image of the target space by using the image data of the HDR images.
In this embodiment, a depth image of the target space at the capture time is calculated from the image data of multiple viewpoints, where the image data is specifically visual data such as the RGB values, grayscale, and brightness of a color image. For example, the image processing apparatus synthesizes an HDR image for each viewpoint from the images captured there, and calculates a depth image of the target space with a set algorithm from the image data of the HDR images of the different viewpoints. The depth image contains the depth information of every pixel point on it. Reference may be made to conventional depth calculation methods that compute depth information from the image data of different viewpoints.
Compared with an ordinary image, an HDR image provides a greater dynamic range and more image detail, and better reflects the visual effect of the real environment. Calculating the depth image from the HDR images obtained by processing the captured images, rather than from the captured images themselves, can therefore improve the accuracy of depth calculation for the target space.
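As an illustration of computing depth from two viewpoints' HDR images, the sketch below uses standard stereo block matching. It assumes rectified views with known focal length (in pixels) and baseline (in meters); the StereoSGBM matcher is an illustrative stand-in for the patent's unspecified "set algorithm".

import cv2
import numpy as np

def depth_from_hdr_pair(hdr_a, hdr_b, focal_px, baseline_m):
    """Depth map from a rectified pair of HDR frames: depth = f * b / disparity."""
    gray_a = cv2.cvtColor(hdr_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(hdr_b, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    # SGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(gray_a, gray_b).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan   # mark unmatched pixels
    return focal_px * baseline_m / disparity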
In another embodiment, referring to fig. 3 in combination, the step S12 includes the following sub-steps:
s121: and acquiring pixel points of which the matching degree exceeds a preset degree in a plurality of images for synthesizing the HDR image of each viewpoint, and taking the pixel points as robust pixel points of the HDR image of the corresponding viewpoint. Wherein the HDR image of each viewpoint is synthesized from a plurality of images acquired from the viewpoint.
A robust pixel point is a pixel point with high robustness, that is, one with a high matching degree across the multiple images captured at a viewpoint. When pixel point matching is performed on the images captured at each viewpoint, a matching degree can be obtained for the pixel points corresponding to each spatial point. In the images of viewpoints A and B obtained as shown in fig. 2, suppose pixel point e1 of the first image a1 of viewpoint A is found to match pixel point e2 of the second image a2 with a matching degree of 70%, and pixel point f1 of a1 is found to match pixel point f2 of a2 with a matching degree of 30%; in this way the matching degrees of all matched pixel points in a1 and a2 of viewpoint A are obtained, and likewise for the first image b1 and second image b2 of viewpoint B. The matching degrees of the matched pixel points of each viewpoint's two images are compared with a preset degree value (for example, 60%); for the pixel pairs whose matching degree exceeds it (such as e1 and e2), the corresponding pixel point e in the HDR image of that viewpoint is determined to be a robust pixel point.
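A possible reading of this selection step is sketched below, assuming the "matching degree" is a normalized cross-correlation score in [0, 1] between small patches around already-matched pixel pairs; the 60% threshold follows the example above, and the candidate list is assumed to come from the matching step of S11.

import cv2
import numpy as np

def robust_pixels(img1, img2, candidates, patch=7, threshold=0.6):
    """Return the (x, y) points of img1 whose patch NCC score against img2 exceeds the threshold."""
    half = patch // 2
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY).astype(np.float32)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY).astype(np.float32)
    robust = []
    for (x1, y1), (x2, y2) in candidates:   # matched pixel pairs from S11
        p1 = g1[y1 - half:y1 + half + 1, x1 - half:x1 + half + 1]
        p2 = g2[y2 - half:y2 + half + 1, x2 - half:x2 + half + 1]
        if p1.shape != (patch, patch) or p2.shape != (patch, patch):
            continue   # skip pixels too close to the border
        score = cv2.matchTemplate(p1, p2, cv2.TM_CCOEFF_NORMED)[0, 0]
        if score > threshold:
            robust.append((x1, y1))
    return robust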
S122: and matching the robust pixel points among the HDR images of the plurality of viewpoints.
S123: and determining the matching relationship of other pixel points in the HDR images of the multiple viewpoints according to the matching relationship of the robust pixel points and the position relationship between the robust pixel points and other pixel points in the HDR images of the corresponding viewpoints.
S124: and further calculating to obtain a depth image of a target space corresponding to the HDR images of the multiple viewpoints according to the matching relationship of the pixel points among the HDR images of the multiple viewpoints.
In this embodiment, the images of the multiple viewpoints are captured of the same target space, so they at least partly overlap, and a robust pixel point is both highly robust within its viewpoint and clearly captured. By comparing the robust pixel points of different viewpoints' images instead of the whole images, the matching relationships of the surrounding pixel points can therefore be determined quickly, reducing the amount of computation and the computing time for depth information. Accordingly, when calculating the depth information of the target space corresponding to the images of the multiple viewpoints, the matching relationships among the robust pixel points of the viewpoints' HDR images are first computed from the image data of each viewpoint's robust pixel points, and from these the matching relationships among the other pixel points of the HDR images can be computed quickly.
For example, as shown in fig. 4, the robust pixel points h_A, j_A, and k_A of the HDR image of viewpoint A match one-to-one with the robust pixel points h_B, j_B, and k_B of the HDR image of viewpoint B. It can therefore be determined that the pixel points inside the triangular regions 41 and 42 formed by the three matched robust pixel points in the HDR images of viewpoints A and B have matching relationships, and the match between every other pixel point in the triangular regions of the viewpoints can be computed quickly from the image data of the pixel points in each viewpoint's triangular region; for example, pixel point g_A in triangular region 41 of viewpoint A matches g_B in triangular region 42 of viewpoint B.

By the above method, the matching relationship between every matchable pixel point in the HDR images of the multiple viewpoints is obtained. Then, from the image data in the corresponding HDR images of each group of matched pixel points (such as h_A-h_B, j_A-j_B, k_A-k_B, g_A-g_B), the depth information of the spatial point corresponding to that group at the capture time is calculated, and the depth image corresponding to the target space is obtained.
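One plausible reading of this "positional relationship" propagation is sketched below: an affine map estimated from the three matched robust pixel points predicts, for any pixel inside the triangle in viewpoint A, its correspondent in viewpoint B. The patent does not pin down the exact propagation rule, so this is an illustrative assumption.

import cv2
import numpy as np

def propagate_matches(tri_a, tri_b, pixels_in_tri_a):
    """tri_a, tri_b: 3x2 arrays of matched robust pixels (h_A..k_A -> h_B..k_B).
    pixels_in_tri_a: Nx2 array of pixel positions inside the viewpoint-A triangle."""
    affine = cv2.getAffineTransform(np.float32(tri_a), np.float32(tri_b))  # 2x3 matrix
    pts = np.hstack([np.float32(pixels_in_tri_a),
                     np.ones((len(pixels_in_tri_a), 1), np.float32)])
    return pts @ affine.T   # predicted viewpoint-B position g_B for each g_A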
In the embodiment of the invention, when calculating the depth information and synthesizing the depth image, the matching relationships of the robust pixel points, determined with high matching degree in each viewpoint's synthesized HDR image, can be used to determine the matching relationships of the other pixel points positioned relative to them in the HDR images of different viewpoints, without matching pixel points over the whole HDR images, which reduces the amount of computation. Moreover, because the robust pixel points are the most clearly captured pixel points of each viewpoint, matching the HDR images of different viewpoints through them also ensures the accuracy of pixel point matching.
Referring to fig. 5, fig. 5 is a flowchart illustrating an image processing method according to another embodiment of the present invention. In this embodiment, the method is used to synthesize an HDR image and is executed by an image processing device. The method is described by taking the synthesis of an HDR image from two images with different exposure times per viewpoint as an example; it can be understood that synthesis from three or more images with different exposure times proceeds similarly. In this embodiment, the method includes the following steps:
s51: a first image of a first exposure time and a second image of a second exposure time are acquired.
The first image and the second image are captured of the same target space from the same viewpoint. It should be understood that capturing "the same target space" does not simply mean that the content captured in the first and second images is identical; here it means that the image collector is fixed at one viewpoint with an unchanged shooting angle while capturing the target space. When there is no relative motion between the image collector and the target space, the content of the two images is exactly the same; when there is relative motion, as shown in fig. 2, the first and second images are two frames captured by the image collector at successive moments of a target space in relative motion, and because of the different exposure times the two frames record different light data of the target space.
The first exposure time and the second exposure time are two different exposure durations, so the light intensities captured for corresponding spatial points in the first and second images differ. The exposure start points of the two images may be the same or different; for example, the image collector starts at time T0 and exposes for a first exposure time T1 to obtain one frame as the first image, then starts at time T2 and exposes for a second exposure time T3 to obtain one frame as the second image.
S52: and judging whether at least one of the first image and the second image meets a deblurring condition. If yes, go to step S53, otherwise go to step S54.
In different applications it is difficult for the image collector and the target space to be absolutely static: hand shake occurs when images are captured by a hand-held collector, or the collector is mounted in a moving vehicle capturing the environment ahead of or around it, so the captured images, especially those with long exposure times, easily contain blurred parts. Changes in the intensity of the ambient light can also produce overexposed or underexposed areas in the captured image, which may likewise be regarded as blurred parts. Therefore, in this embodiment, after acquiring the first and second images of a viewpoint, the image processing device may perform a deblurring judgment on at least one of them and then deblur accordingly. The image requiring the deblurring judgment (hereinafter, the set image) can be chosen according to the actual application: the first image, the second image, or both. In one embodiment, the image processing device selects only the image with the longer exposure time of the two for the deblurring judgment.
It is to be understood that, in another implementation, the image processing apparatus may also directly perform S53 without performing the determination described in S52. The setting can be specifically carried out according to actual requirements.
In this embodiment, the deblurring condition may be set in relation to the blurring coefficients of the pixel points of the set image. Correspondingly, referring to fig. 6, S52 specifically includes the following sub-steps:
s521: and calculating the depth information of at least one of the first image and the second image by using the calculated depth information of the depth image of the target space, and determining the blurring coefficient of each pixel point of at least one of the first image and the second image according to the depth information.
In this embodiment, images are acquired and the depth image of the target space is calculated once every set time. The depth information of the previously calculated depth image of the target space may be calculated from an image acquired at a previous time, that is, an HDR image of multiple viewpoints obtained by processing an image acquired before the first image is used to calculate the depth image of the target space at the previous acquisition time, which may specifically refer to the embodiment of the depth calculation method described above. The depth image includes depth information of any pixel point thereon.
The image processing device selects at least one of the first and second images as the set image requiring the deblurring judgment; the selection may follow the user's original settings or a preset selection algorithm. The device then calculates the depth information of the set image from the previously obtained depth images. Specifically, it obtains the depth information of the previous depth image and the depth change between the previous depth image and the one before it, and computes from that change the relative motion between the current capture viewpoint and the target space (for example, the speed, position, angle, and distance change of the capture viewpoint relative to the target space, from the depth difference between the two previous depth images and their capture interval). It then computes the depth information of the current set image from the depth information of the previous depth image and the relative motion (for example, the product of the viewpoint's uniform speed relative to the target space and the image capture interval is subtracted from the depth value of each pixel point of the previous depth image to give the depth value of the corresponding pixel point of the current set image). Finally, the device calculates the blurring coefficient of the current set image from the depth information of the set image. In one embodiment, the image processing device may directly preset a first relationship between the depth information of the previous depth image, the depth change between the two previous depth images, and the blurring coefficient, so that after obtaining those two quantities it can calculate the blurring coefficient of each pixel point of the set image from the first relationship. In another embodiment, the device may preset a second relationship between the depth information of the previous depth image, the depth change between the two previous depth images, and the current depth information, together with a third relationship between the current depth information and the blurring coefficient; it then calculates the depth information of each pixel point of the set image from the second relationship and its blurring coefficient from the third. The first, second, and third relationships may be existing correlation algorithms and are not limited here.
For example, as shown in fig. 2, a first image with the first exposure time and a second image with the second exposure time are captured alternately at set intervals at both viewpoint A and viewpoint B, so that the odd frames of each viewpoint are first images and the even frames are second images, and every two adjacent frames of a viewpoint are used to synthesize one HDR image: the first and second frames of viewpoint A give its first HDR image, the second and third frames its second HDR image, and the third and fourth frames its third HDR image, and viewpoint B likewise yields its first, second, and third HDR images. The image processing device calculates depth images of the target space corresponding to the HDR images of the multiple viewpoints; for instance, it calculates a depth image from the first HDR images of viewpoints A and B. Then, in the process of synthesizing the second HDR image, when the deblurring judgment is made on the second and/or third frame, their depth information is calculated, by the predetermined second relationship, from the depth information in the depth image computed from the first HDR images of viewpoints A and B and from the motion information of the viewpoint. The device then determines the blurring coefficient of each pixel point of the second and/or third frame used to synthesize the second HDR image according to the predetermined third relationship.
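The sketch below illustrates one way to realize the second and third relationships, under a constant-velocity assumption along the viewing direction: the per-pixel depth of the current frame is linearly extrapolated from the two previous depth images, and the blurring coefficient grows with the apparent speed and exposure time and shrinks with depth. These formulas are illustrative assumptions; the patent leaves the concrete relationships open.

import numpy as np

def predict_depth(depth_prev, depth_prev2, capture_interval, frame_interval):
    """Second relationship (assumed): linear extrapolation of per-pixel depth."""
    velocity = (depth_prev - depth_prev2) / capture_interval  # per-pixel depth change rate
    return depth_prev + velocity * frame_interval

def blur_coefficient(depth_now, depth_prev, capture_interval, exposure_time):
    """Third relationship (assumed): nearer, faster-moving points smear more during exposure."""
    speed = np.abs(depth_now - depth_prev) / capture_interval
    return speed * exposure_time / np.maximum(depth_now, 1e-6)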
S522: and finding out pixel points with the blurring coefficients larger than a set value in the at least one image.
S523: and when the pixel points of which the blurring coefficients are larger than the set numerical value in the at least one image meet the set pixel point condition, determining that the at least one image meets the deblurring condition, otherwise, determining that the at least one image does not meet the deblurring condition.
The image processing device traverses the blurring coefficient of each pixel point of the set image to find the pixel points whose blurring coefficients are greater than the set value. The deblurring condition may also be set according to the actual application, and the device judges the found pixel points accordingly; it is not limited here. For example, if the deblurring condition is that the concentration degree of the pixel points whose blurring coefficients exceed the set value is greater than a set proportion, the device counts the concentration degree of the found pixel points and, when it is greater than the set proportion, determines that the image satisfies the deblurring condition.
The set value and the set proportion can be adjusted according to actual requirements.
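Assuming "concentration degree" means the local density of blurred pixels, the decision of S522-S523 could be sketched as below: a pixel counts as blurred when its blurring coefficient exceeds the set value, and the image meets the deblurring condition when the densest local window exceeds the set proportion. Both thresholds and the window size are the adjustable parameters mentioned above.

import numpy as np
from scipy.ndimage import uniform_filter

def needs_deblur(blur_coeff, set_value=0.5, set_proportion=0.3, window=31):
    """True when some window x window neighborhood has too high a fraction of blurred pixels."""
    blurred = (blur_coeff > set_value).astype(np.float32)
    density = uniform_filter(blurred, size=window)  # local fraction of blurred pixels
    return bool(density.max() > set_proportion)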
S53: and calculating depth information of at least one of the first image and the second image according to the calculated depth information in the depth image of the target space, and deblurring at least one of the first image and the second image according to the depth information of at least one of the first image and the second image.
Specifically, S53 may include: performing deblurring matched to the depth information on the at least one image (that is, the set image) using the depth information of the previously calculated depth image. Conventional deblurring methods assume that every pixel point in an image has the same depth; when objects at different distances are present in the target space, this assumption easily causes large deblurring errors. The image processing apparatus may therefore determine the depth information of the currently captured set image (the first and/or second image) from the already calculated depth information of the depth image (for example, by directly using the previously calculated depth image's depth information as the set image's depth information, or, as described above, by computing it from the previous depth image and the relative motion between the capture viewpoint and the target space), and, when deblurring with a preset deblurring algorithm such as a point spread function, apply different deblurring to each pixel point of the set image according to its depth information. The depth information of the previously captured images is obtained as described in S52 above.
Continuing the example of S523, the image processing apparatus may deblur only the regions of the set image where the concentration degree of the found pixel points exceeds the set proportion.
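A depth-adaptive deblurring pass of this kind could look like the sketch below: pixels are binned into depth layers, each layer is deconvolved with a motion-blur point spread function whose length scales with that layer's assumed apparent motion, and the results are recomposed. Wiener deconvolution and the horizontal PSF model are illustrative choices; the patent only requires that the deblurring match the depth information.

import numpy as np

def motion_psf(length, size=31):
    """Horizontal motion-blur kernel of roughly the given length, centered in a size x size patch."""
    psf = np.zeros((size, size), np.float32)
    c = size // 2
    psf[c, c - length // 2: c + length // 2 + 1] = 1.0
    return psf / psf.sum()

def wiener_deblur(channel, psf, k=0.01):
    """Frequency-domain Wiener deconvolution with regularization constant k."""
    pad = np.zeros_like(channel, dtype=np.float32)
    ph, pw = psf.shape
    pad[:ph, :pw] = psf
    pad = np.roll(pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))  # center the PSF at the origin
    H = np.fft.fft2(pad)
    G = np.fft.fft2(channel)
    return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + k)))

def deblur_by_depth(gray, depth, n_layers=4, max_psf_len=15):
    """Deblur each depth layer of a grayscale image with a depth-scaled PSF."""
    src = gray.astype(np.float32)
    out = src.copy()
    edges = np.quantile(depth, np.linspace(0.0, 1.0, n_layers + 1))
    for i in range(n_layers):
        mask = (depth >= edges[i]) & (depth <= edges[i + 1])
        # Assumption: nearer layers show more apparent motion, so they get a longer PSF.
        length = max(1, int(round(max_psf_len * (n_layers - i) / n_layers)))
        out[mask] = wiener_deblur(src, motion_psf(length))[mask]
    return out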
S54: and matching pixel points of the first image and the second image.
And matching pixel points of the first image and the second image according to the existing algorithm so as to obtain robust pixel points on the first image and the second image.
S55: and synthesizing the matched first image and second image to obtain an HDR image.
For example, according to a set synthesis algorithm, the image processing device calculates image data of a pixel point in the HDR image by using image data of the pixel point in the first image and the second image, and further obtains a frame of HDR image of the viewpoint.
In this embodiment, the image processing device deblurs the captured images before synthesizing the HDR image, improving the sharpness of the HDR image and the accuracy of the HDR image data; deblurring prevents the captured images from being unusable because of lighting, improving the adaptability of image capture to complex lighting. Further, when depth is calculated from the HDR image, the accuracy of the depth calculation improves as well. Moreover, deblurring removes the influence of motion on image capture, enabling HDR image synthesis and depth calculation that can adapt to a motion state.
The image processing device of the above methods can be applied to a vehicle-mounted system: an image collector mounted on the vehicle captures images of the vehicle's surroundings, and after the image processing device of the vehicle-mounted system acquires them, it executes the above methods to obtain depth information of the surroundings or the corresponding HDR images. Because each frame carries a large amount of image data, the vehicle-mounted system can obtain more effective data by exploiting this big-data advantage of images.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. The image processing apparatus may perform the steps of the methods described above; for details, refer to the method descriptions above, which are not repeated here.
In the present embodiment, the image processing apparatus 70 includes: a processor 71 and a memory 72 connected to the processor 71.
The memory 72 is used for storing computer instructions, data, and the like, which are executed by the processor 71.
The processor 71 executing said computer instructions is operable to perform at least one of the following first and second aspects.
In a first aspect:
the processor 71 acquires a plurality of HDR images of multiple viewpoints, wherein the HDR images are obtained by capturing and processing the same target space;
and calculating a depth image of the target space by using the image data of the HDR images.
Optionally, the processor 71 acquiring a plurality of High Dynamic Range (HDR) images of multiple viewpoints includes: performing pixel point matching on a plurality of images with different exposure times of each viewpoint; and synthesizing, according to the pixel point matching result, the plurality of images of each viewpoint into an HDR image of that viewpoint.
Further, before the pixel point matching is performed on the plurality of images with different exposure times of each viewpoint, the processor 71 may be further configured to: perform deblurring matched to the depth information on at least one of the plurality of images of at least one viewpoint according to the already calculated depth information in the depth image of the target space.
Further, before the deblurring matched to the depth information is performed on at least one of the plurality of images of at least one viewpoint according to the already calculated depth information in the depth image of the target space, the processor 71 is further configured to: calculate the depth information of the at least one image from the calculated depth information of the depth image of the target space, and determine a blurring coefficient for each pixel point of the at least one image according to that depth information; find the pixel points in the image whose blurring coefficients are greater than a set value; and when the pixel points whose blurring coefficients are greater than the set value satisfy a set pixel point condition, perform the deblurring matched to the depth information on at least one of the plurality of images of at least one viewpoint according to the calculated depth information in the depth image of the target space.
Optionally, the processor 71, calculating a depth image of the target space by using the image data of the plurality of HDR images, includes: acquiring pixel points of which the matching degree exceeds a preset degree value in the plurality of images of each viewpoint, and taking the pixel points as robust pixel points of the corresponding viewpoint; matching the robust pixel points among the HDR images of the multiple viewpoints; determining the matching relationship of other pixel points in the HDR images of the multiple viewpoints according to the matching relationship of the robust pixel points and the position relationship between the robust pixel points and other pixel points in the HDR images of the corresponding viewpoints; and further calculating to obtain a depth image of a target space corresponding to the HDR images of the multiple viewpoints according to the matching relationship of the pixel points among the HDR images of the multiple viewpoints.
In a second aspect:
the processor 71 is configured to acquire a plurality of images with different exposure times, where the plurality of images are acquired from the same target space at the same viewpoint;
calculating depth information of at least one image in a plurality of images acquired by the target space from the current viewpoint by using the calculated depth information of the depth image of the target space, and deblurring the at least one image in the plurality of images according to the depth information;
carrying out pixel point matching on the deblurred images;
and synthesizing the plurality of images according to the pixel point matching result to obtain a High Dynamic Range (HDR) image.
Optionally, before deblurring at least one of the plurality of images, the processor 71 is further configured to: determining whether at least one of the plurality of images satisfies a deblurring condition; and if the deblurring condition is met, performing deblurring processing on at least one image in the plurality of images.
Further, the processor 71 determines whether at least one of the plurality of images satisfies a deblurring condition, including: according to the calculated depth information of the depth image of the target space, calculating to obtain the depth information of at least one image in a plurality of images collected by the target space from the current viewpoint, and determining the blurring coefficient of each pixel point of at least one image in the plurality of images according to the depth information; finding out pixel points with blurring coefficients larger than a set value in the at least one image; and when the pixel point with the blurring coefficient larger than the set value in the at least one image meets the set pixel point condition, determining that the at least one image meets the deblurring condition.
Optionally, the processor 71 is further configured to calculate a depth image of the target space according to the image data of the HDR images of multiple viewpoints acquired by the above steps.
Further, the processor 71 calculates a depth image of the target space according to the image data of the HDR image of the plurality of viewpoints acquired by the above steps, and may include: acquiring pixel points of which the matching degree exceeds a preset degree value in the plurality of images for synthesizing the HDR image of each viewpoint, and taking the pixel points as robust pixel points of the corresponding viewpoint; matching the robust pixel points among the HDR images of the plurality of viewpoints obtained in the previous step; calculating and determining the matching relationship of other pixel points in the HDR images of the multiple viewpoints according to the matching relationship of the robust pixel points and the position relationship between the robust pixel points and other pixel points in the HDR images of the corresponding viewpoints; and calculating to obtain the depth image of the target space corresponding to the HDR images of the multiple viewpoints according to the matching relationship of the pixel points among the HDR images of the multiple viewpoints.
Optionally, in combination with the first aspect and/or the second aspect, the image processing apparatus 70 further includes an image collector 73 configured to collect images, for example multiple frames captured at different times of a target space in relative motion with it, and send them to the memory 72; the processor 71 is further configured to obtain the first image and the second image from the memory 72. In an embodiment in which the image processing apparatus is used to calculate a depth image, the image collector 73 may include a first image collector and a second image collector disposed at different viewpoints, each capturing one frame of the same target space at set time intervals.
The present invention also provides a nonvolatile storage medium storing processor-executable computer instructions for performing the above method embodiments; specifically, it may be the memory 72 described above.
With the above arrangements, the image processing apparatus does not calculate depth information directly from the captured images but from the HDR images obtained by processing them, which can improve the accuracy of depth calculation for the target space. Further, during depth calculation, the matching relationships of the robust pixel points, determined with high matching degree in the synthesized HDR images, can be used to determine the matching relationships of the other pixel points positioned relative to them in the HDR images of different viewpoints, without matching pixel points over the whole HDR images, which reduces the amount of depth computation. In addition, when synthesizing the HDR image, the captured images can first be deblurred, improving the sharpness of the HDR image and the accuracy of the HDR image data; deblurring prevents captured image data from being unusable because of lighting, improving the adaptability of image capture to complex lighting, and removes the influence of motion on image capture, thereby enabling HDR image synthesis and depth calculation that can tolerate a motion state.
In the several embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (20)

1. An image processing method, comprising:
acquiring a plurality of High Dynamic Range (HDR) images of multiple viewpoints, wherein the HDR images are obtained by capturing and processing the same target space;
calculating a depth image of the target space by using the image data of the HDR images;
wherein the acquiring the plurality of HDR images of the multiple viewpoints comprises:
performing deblurring processing matched with the depth information on at least one image in the plurality of images of at least one viewpoint according to the calculated depth information in the depth image of the target space;
matching pixel points of a plurality of images with different exposure time of each viewpoint;
synthesizing the plurality of images of each viewpoint into an HDR image of the viewpoint according to the pixel point matching result;
and/or, the calculating the depth image of the target space by using the image data of the HDR images comprises:
acquiring pixel points of which the matching degree exceeds a preset degree value in the plurality of images of each viewpoint, and taking the pixel points as robust pixel points of the corresponding viewpoint;
matching the robust pixel points among the HDR images of the multiple viewpoints;
determining the matching relationship of other pixel points in the HDR images of the multiple viewpoints according to the matching relationship of the robust pixel points and the position relationship between the robust pixel points and other pixel points in the HDR images of the corresponding viewpoints;
and further calculating to obtain a depth image of a target space corresponding to the HDR images of the multiple viewpoints according to the matching relationship of the pixel points among the HDR images of the multiple viewpoints.
2. The method of claim 1, wherein before the performing, according to the calculated depth information in the depth image of the target space, deblurring processing matched with the depth information on at least one of the plurality of images of at least one viewpoint, the method further comprises:
calculating the depth information of the at least one image according to the calculated depth information of the depth image of the target space, and determining a blurring coefficient of each pixel point of the at least one image according to the depth information of the at least one image;
finding out pixel points with blurring coefficients larger than a set value in the at least one image;
and when the pixel point with the blurring coefficient larger than the set numerical value meets the set pixel point condition, performing deblurring processing matched with the depth information on at least one image in the plurality of images of at least one viewpoint according to the calculated depth information in the depth image of the target space.
3. The method of claim 1, wherein the plurality of images are multi-frame images acquired by an image acquisition device at different time instants for a target space with relative motion.
4. An image processing apparatus, comprising a processor, a memory, connected to each other;
the memory is used for storing computer instructions and data;
the processor executing the computer instructions to:
acquiring a plurality of High Dynamic Range (HDR) images of multiple viewpoints, wherein the HDR images are obtained by capturing and processing the same target space;
calculating a depth image of the target space by using the image data of the HDR images;
wherein the processor acquiring a plurality of High Dynamic Range (HDR) images of multiple viewpoints comprises:
performing deblurring processing matched with the depth information on at least one image in the plurality of images of at least one viewpoint according to the calculated depth information in the depth image of the target space;
matching pixel points of a plurality of images with different exposure time of each viewpoint;
synthesizing the plurality of images of each viewpoint into an HDR image of the viewpoint according to the pixel point matching result;
and/or, the processor calculating a depth image of the target space using the image data of the plurality of HDR images comprises:
acquiring pixel points of which the matching degree exceeds a preset degree value in the plurality of images of each viewpoint, and taking the pixel points as robust pixel points of the corresponding viewpoint;
matching the robust pixel points among the HDR images of the multiple viewpoints;
determining the matching relationship of other pixel points in the HDR images of the multiple viewpoints according to the matching relationship of the robust pixel points and the position relationship between the robust pixel points and other pixel points in the HDR images of the corresponding viewpoints;
and further calculating to obtain a depth image of a target space corresponding to the HDR images of the multiple viewpoints according to the matching relationship of the pixel points among the HDR images of the multiple viewpoints.
5. The image processing apparatus according to claim 4, wherein before the deblurring processing matched with the depth information is performed on at least one of the plurality of images of at least one viewpoint according to the already calculated depth information in the depth image of the target space, the processor is further configured to:
calculating the depth information of the at least one image according to the calculated depth information of the depth image of the target space, and determining a blurring coefficient of each pixel point of the at least one image according to the depth information of the at least one image;
finding out pixel points with blurring coefficients larger than a set value in the image;
and when the pixel point with the blurring coefficient larger than the set numerical value meets the set pixel point condition, performing deblurring processing matched with the depth information on at least one image in the plurality of images of at least one viewpoint according to the calculated depth information in the depth image of the target space.
6. The image processing apparatus of claim 4, further comprising an image collector for collecting the plurality of images at different times for a target space having relative motion thereto.
7. A non-volatile storage medium having stored thereon computer instructions executable by a processor to perform the image processing method of any one of claims 1 to 3.
8. An image processing method, comprising:
acquiring a plurality of images with different exposure times, wherein the images are acquired from the same target space at the same viewpoint;
calculating depth information of at least one image in a plurality of images acquired by the target space from the current viewpoint by using the calculated depth information of the depth image of the target space, and deblurring the at least one image in the plurality of images according to the depth information;
carrying out pixel point matching on the deblurred images;
and synthesizing the plurality of images according to the pixel point matching result to obtain a High Dynamic Range (HDR) image.
9. The method of claim 8, prior to deblurring at least one of the plurality of images, further comprising:
determining whether at least one of the plurality of images satisfies a deblurring condition;
and if the deblurring condition is met, performing deblurring processing on at least one image in the plurality of images.
10. The method of claim 9, wherein determining whether at least one of the plurality of images satisfies the deblurring condition comprises:
calculating, according to the calculated depth information of the depth image of the target space, the depth information of at least one of the plurality of images collected of the target space from the current viewpoint, and determining the blurring coefficient of each pixel point of the at least one image according to that depth information;
finding the pixel points in the at least one image whose blurring coefficients are larger than a set value;
and, when the pixel points with blurring coefficients larger than the set value in the at least one image satisfy the set pixel point condition, determining that the at least one image satisfies the deblurring condition.
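The deblurring pass itself ("deblurring matched with the depth information" in claims 8-10 and 15-16) could then look like the sketch below: pixels are binned into depth layers and each layer is deconvolved with a blur kernel sized by its blurring coefficient. The disk kernel, the Wiener deconvolution, and the px_per_coef scale factor are assumed stand-ins; the patent does not prescribe a deconvolution method.

import numpy as np

def disk_kernel(radius_px):
    # Circular averaging kernel approximating a defocus point spread.
    r = max(int(round(radius_px)), 1)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = (x * x + y * y <= r * r).astype(np.float32)
    return k / k.sum()

def wiener_deblur(img, kernel, snr=100.0):
    # Frequency-domain Wiener deconvolution; the kernel is embedded at the
    # top-left and rolled so its center sits at the origin, which avoids a
    # spatial shift in the restored image.
    pad = np.zeros(img.shape, np.float32)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    K = np.fft.rfft2(pad)
    H = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    return np.fft.irfft2(np.fft.rfft2(img.astype(np.float32)) * H, img.shape)

def depth_matched_deblur(img, coef, px_per_coef=1e5, n_layers=4):
    # Bin pixels into layers by blurring coefficient and deblur each layer
    # with a kernel scaled to that layer's mean coefficient.
    out = img.astype(np.float32).copy()
    edges = np.quantile(coef, np.linspace(0.0, 1.0, n_layers + 1))
    for lo, hi in zip(edges[:-1], edges[1:]):
        radius = px_per_coef * (lo + hi) / 2.0
        if radius < 0.5:
            continue                       # layer is in focus, leave it be
        layer = (coef >= lo) & (coef <= hi)
        out[layer] = wiener_deblur(img, disk_kernel(radius))[layer]
    return out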
11. The method of claim 8, further comprising:
and calculating the depth image of the target space according to the image data of the HDR images of a plurality of viewpoints acquired through the foregoing steps.
12. The method as claimed in claim 11, wherein calculating the depth image of the target space from the image data of the HDR images of the plurality of viewpoints comprises:
acquiring, in the plurality of images used to synthesize the HDR image of each viewpoint, pixel points whose matching degree exceeds a preset degree value, and taking these pixel points as robust pixel points of the corresponding viewpoint;
matching the robust pixel points among the HDR images of the plurality of viewpoints;
determining the matching relationship of the other pixel points in the HDR images of the multiple viewpoints according to the matching relationship of the robust pixel points and the positional relationship between the robust pixel points and the other pixel points in the HDR images of the corresponding viewpoints;
and calculating the depth image of the target space corresponding to the HDR images of the multiple viewpoints according to the matching relationship of the pixel points among the HDR images of the multiple viewpoints.
13. The method of claim 8, wherein the plurality of images are multi-frame images acquired by an image acquisition device at different times for a target space that is in relative motion with respect to the device.
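As one illustration of the capture setup in claims 6, 13 and 19, the loop below brackets exposures on a single camera while the scene moves relative to it. Whether cv2.CAP_PROP_EXPOSURE and the 0.25 auto-exposure toggle take effect is driver-specific (a common V4L2/UVC convention), so treat this as a best-effort sketch rather than a portable recipe.

import cv2

def capture_bracket(device=0, exposures=(-6, -4, -2)):
    # Grab one frame per exposure setting; the frames are taken at different
    # times, so a moving scene yields the multi-frame input described in
    # claim 13. Exposure units here are driver-specific.
    cap = cv2.VideoCapture(device)
    cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)   # manual mode on many UVC drivers
    frames = []
    for e in exposures:
        cap.set(cv2.CAP_PROP_EXPOSURE, e)
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames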
14. An image processing apparatus, comprising a processor and a memory connected to each other;
the memory is configured to store computer instructions and data;
the processor executes the computer instructions to:
acquire a plurality of images with different exposure times, wherein the images are acquired of the same target space from the same viewpoint;
calculate, using the calculated depth information of the depth image of the target space, depth information of at least one of the plurality of images acquired of the target space from the current viewpoint, and deblur the at least one image according to that depth information;
perform pixel point matching on the deblurred images;
and synthesize the plurality of images according to the pixel point matching result to obtain a high dynamic range (HDR) image.
15. The image processing apparatus of claim 14, wherein, prior to deblurring at least one of the plurality of images, the processor is further configured to:
determine whether the at least one image satisfies a deblurring condition;
and, if the deblurring condition is satisfied, perform the deblurring process on the at least one image.
16. The image processing apparatus of claim 15, wherein the processor determining whether at least one of the plurality of images satisfies the deblurring condition comprises:
calculating, according to the calculated depth information of the depth image of the target space, the depth information of at least one of the plurality of images collected of the target space from the current viewpoint, and determining the blurring coefficient of each pixel point of the at least one image according to that depth information;
finding the pixel points in the at least one image whose blurring coefficients are larger than a set value;
and, when the pixel points with blurring coefficients larger than the set value in the at least one image satisfy the set pixel point condition, determining that the at least one image satisfies the deblurring condition.
17. The image processing apparatus of claim 14, wherein the processor is further configured to:
calculate the depth image of the target space according to the image data of the HDR images of a plurality of viewpoints acquired through the foregoing steps.
18. The image processing apparatus as claimed in claim 17, wherein the processor calculating the depth image of the target space from the image data of the HDR images of the plurality of viewpoints comprises:
acquiring, in the plurality of images used to synthesize the HDR image of each viewpoint, pixel points whose matching degree exceeds a preset degree value, and taking these pixel points as robust pixel points of the corresponding viewpoint;
matching the robust pixel points among the HDR images of the plurality of viewpoints;
determining the matching relationship of the other pixel points in the HDR images of the multiple viewpoints according to the matching relationship of the robust pixel points and the positional relationship between the robust pixel points and the other pixel points in the HDR images of the corresponding viewpoints;
and calculating the depth image of the target space corresponding to the HDR images of the multiple viewpoints according to the matching relationship of the pixel points among the HDR images of the multiple viewpoints.
19. The image processing apparatus of claim 14, further comprising an image collector configured to collect the plurality of images at different times for a target space that is in relative motion with respect to the apparatus.
20. A non-volatile storage medium having stored thereon computer instructions executable by a processor to perform the image processing method of any one of claims 8 to 13.
CN201780034126.2A 2017-05-17 2017-05-17 Image processing method, image processing apparatus, and storage medium Active CN109314776B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/084736 WO2018209603A1 (en) 2017-05-17 2017-05-17 Image processing method, image processing device, and storage medium

Publications (2)

Publication Number Publication Date
CN109314776A (en) 2019-02-05
CN109314776B (en) 2021-02-26

Family

ID=64273047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780034126.2A Active CN109314776B (en) 2017-05-17 2017-05-17 Image processing method, image processing apparatus, and storage medium

Country Status (2)

Country Link
CN (1) CN109314776B (en)
WO (1) WO2018209603A1 (en)

Also Published As

Publication number Publication date
WO2018209603A1 (en) 2018-11-22
CN109314776A (en) 2019-02-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 518063 23 Floor (Room 2303-2306) of Desai Science and Technology Building, Yuehai Street High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province
Applicant after: Shenzhen AANDE Intelligent Technology Research Institute Co., Ltd.
Address before: 518104 Shajing Industrial Co., Ltd. No. 3 Industrial Zone, Hexiang Road, Shajing Street, Baoan District, Shenzhen City, Guangdong Province
Applicant before: Shenzhen AANDE Intelligent Technology Research Institute Co., Ltd.
GR01 Patent grant