CN114359055B - Image splicing method and related device for multi-camera shooting screen body - Google Patents

Image splicing method and related device for multi-camera shooting screen body Download PDF

Info

Publication number
CN114359055B
CN114359055B CN202210274472.0A CN202210274472A CN114359055B CN 114359055 B CN114359055 B CN 114359055B CN 202210274472 A CN202210274472 A CN 202210274472A CN 114359055 B CN114359055 B CN 114359055B
Authority
CN
China
Prior art keywords
image
camera
gray scale
spliced
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210274472.0A
Other languages
Chinese (zh)
Other versions
CN114359055A (en
Inventor
毛建旭
张耀
王耀南
刘彩苹
朱青
张辉
刘敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202210274472.0A priority Critical patent/CN114359055B/en
Publication of CN114359055A publication Critical patent/CN114359055A/en
Application granted granted Critical
Publication of CN114359055B publication Critical patent/CN114359055B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image splicing method and a related device for a multi-camera shooting screen body, which are used for reducing the difficulty in detecting or compensating defects of the screen body to be detected. The method comprises the following steps: providing a first and a second camera; shooting a screen body to be detected through a first camera and a second camera to generate a first image sequence and a second image sequence; generating first luminance data, second luminance data, and third luminance data by a luminance meter; calculating a first, a second, a third and a third gray sequence; generating a first luminance grayscale conversion coefficient, a second luminance grayscale conversion coefficient, a third luminance grayscale conversion coefficient and a fourth luminance grayscale conversion coefficient; displaying a target detection picture through a screen body to be detected, and shooting through a first camera and a second camera to generate a first image to be spliced and a second image to be spliced; performing gray correction on the first image to be spliced and the second image to be spliced; and splicing the first image to be spliced and the second image to be spliced to generate a splicing result picture.

Description

Image splicing method and related device for multi-camera shooting screen body
Technical Field
The embodiment of the application relates to the field of display screen detection, in particular to an image splicing method and a related device for a multi-camera shooting screen body.
Background
With the continuous development of information Display technology, the Display screen (OLED) is gradually replacing the conventional LCD by virtue of its advantages of self-luminescence, flexibility, wide viewing angle, fast response speed, simple process, etc., and is rapidly and deeply applied to various fields of modern society.
However, as the market demands for the display quality of the display screen to be higher and higher, the appearance design requirements are also more and more diversified, and the shipment volume and the appearance design requirements of the display screens of electronic products such as mobile phone screens, tablet computer screens, notebook computer screens, desktop computer screens and the like are also higher and higher, for example: bang screen, water drop screen, large-curvature OLED display screen (curved screen), etc. In a defect compensation system (De-Mura system) of an oversized LCD display screen, because the size of the display screen is too large (generally more than or equal to 100 inches), if a single camera is used for imaging the whole display screen, the working distance of the camera is too large, the imaging effect of a screen body area far away from the center position of the camera is the best, but the imaging effect of the screen body area far away from the center position of the camera is deteriorated, and the quality of subsequent De-Mura data processing and compensation is further influenced. Examples are as follows: a 50mm lens is matched with a 151M industrial camera to form a sampling camera, when a 100-inch 8KLCD display screen is photographed, the working distance of the sampling camera is about 3.9 meters, and the photographing range of the sampling camera can cover the whole LCD display screen; when the 130 cun 8KLCD screen is shot, the working distance of the sampling camera is about 5.1 meters, and the shooting range of the sampling camera can cover the whole LCD display screen. The working distance is greater if a longer focal length lens is used. This can bring very high requirement for the volume production equipment height or the width of super large-size LCD display screen, and current LCD display screen workshop height and handling line technology also are difficult to satisfy this kind of requirement. Therefore, in order to shorten the working distance requirement of the oversized LCD display screen on the sampling camera in the defect compensation system, the oversized LCD display screen is usually imaged by using a dual-camera or multi-camera splicing mode, that is, only a partial area of the screen body to be detected is shot by a single sampling camera, after the specification area of the screen body to be detected is shot by a plurality of sampling cameras, the shot images are spliced, and then subsequent De-Mura data processing and compensation are performed.
However, due to differences in photoelectric conversion effects of different sampling cameras with chips of the same type, even when camera parameters such as the same exposure time and gain are used, the gray scales of the image of a part of the screen body photographed by each sampling camera are also different, and after image splicing is performed, errors are introduced into the De-Mura data generation process, so that new Mura defects are generated after the screen body to be detected is subjected to defect compensation processing, and the difficulty in detecting or compensating the defects of the screen body to be detected is increased.
Disclosure of Invention
The application provides a method for splicing images of multi-camera shooting screens in a first aspect, which comprises the following steps:
setting a first camera and a second camera on the defect compensation system;
displaying a group of detection pictures through the screen body to be detected, and shooting through a first camera and a second camera to generate a first image sequence and a second image sequence, wherein the detection pictures in the group of detection pictures comprise a first 0 gray scale region corresponding to the center position of the first camera, a second 0 gray scale region corresponding to the center position of the first camera and a third 0 gray scale region corresponding to the center position of the screen body to be detected;
detecting a first 0 gray scale area, a second 0 gray scale area and a third 0 gray scale area through a luminance meter to generate first luminance data, second luminance data and third luminance data;
calculating a first gray sequence of a first 0 gray scale region and a third gray sequence of a third 0 gray scale region in the first image sequence;
calculating a second gray sequence of a second 0 gray scale region and a fourth gray sequence of the second 0 gray scale region in the second image sequence;
generating a first luminance grayscale conversion coefficient according to the first grayscale sequence and the first luminance data;
generating a second brightness gray scale conversion coefficient according to the third gray scale sequence and the third brightness data;
generating a third luminance gray scale conversion coefficient according to the second gray scale sequence and the second luminance data;
generating a fourth luminance grayscale conversion coefficient according to the fourth grayscale sequence and the third luminance data;
displaying a target detection picture through a screen body to be detected, and shooting through a first camera and a second camera to generate a first image to be spliced and a second image to be spliced;
performing gray correction on the first image to be spliced and the second image to be spliced through the first brightness gray conversion coefficient, the second brightness gray conversion coefficient, the third brightness gray conversion coefficient and the fourth brightness gray conversion coefficient;
and splicing the first image to be spliced and the second image to be spliced to generate a splicing result graph.
Optionally, after the target detection picture is displayed by the screen body to be detected and is shot by the first camera and the second camera to generate the first image to be stitched and the second image to be stitched, before performing gray level correction on the first image to be stitched and the second image to be stitched through the first luminance gray level conversion coefficient, the second luminance gray level conversion coefficient, the third luminance gray level conversion coefficient and the fourth luminance gray level conversion coefficient, the image stitching method further includes:
constructing a calibration dot matrix detection picture, and displaying the calibration dot matrix detection picture through a screen body to be detected;
respectively shooting a screen body to be detected through a first camera and a second camera to generate a first distorted dot matrix image and a second distorted dot matrix image;
respectively generating a first distorted dot matrix coordinate and a second distorted dot matrix coordinate according to the first distorted dot matrix image and the second distorted dot matrix image;
generating a first undistorted coordinate and a second undistorted coordinate according to the first distorted dot matrix coordinate and the second distorted dot matrix coordinate;
generating a first correction coefficient matrix according to the first distorted dot matrix coordinates and the first undistorted coordinates;
generating a second correction coefficient matrix according to the second distorted dot matrix dot coordinates and the second undistorted coordinates;
creating a first null image and a second null image;
determining coordinates in the first empty image, and calculating floating point number distortion coordinates according to the first correction coefficient matrix;
assigning gray scale information on the first image to be spliced to a coordinate in the first empty image according to the floating point number distortion coordinate and a preset formula;
determining invalid coordinates in the first aerial image;
performing gray scale assignment on the first empty image by combining the first image to be spliced according to the mode, assigning the invalid coordinate to 0 gray scale, and determining the first empty image as the first image to be spliced after geometric distortion correction;
according to the method, the second correction coefficient matrix and the first image to be spliced are combined to carry out gray scale assignment on the second null image, the invalid coordinate is assigned with 0 gray scale, and the second null image is determined as the second image to be spliced after geometric distortion correction.
Optionally, after performing gray-scale correction on the first image to be stitched and the second image to be stitched through the first luminance gray-scale conversion coefficient, the second luminance gray-scale conversion coefficient, the third luminance gray-scale conversion coefficient, and the fourth luminance gray-scale conversion coefficient, the image stitching method further includes, before stitching the first image to be stitched and the second image to be stitched to generate a stitching result image:
and cutting the 0 gray scale area in the first image to be spliced and the second image to be spliced.
Optionally, generating a first undistorted coordinate and a second undistorted coordinate according to the first distorted dot matrix coordinate and the second distorted dot matrix coordinate includes:
acquiring an ideal pixel ratio of a defect compensation system, wherein the ideal pixel ratio is the ratio of the number of pixels of an image to the number of physical pixels of a display screen;
determining a central coordinate of a dot matrix point of a calibration dot matrix detection picture according to the first distorted dot matrix point coordinate and the second distorted dot matrix point coordinate;
and calculating according to the ideal pixel ratio, the central coordinates of the lattice points, the coordinates of the first distorted lattice points and the coordinates of the second distorted lattice points to generate a first undistorted coordinate and a second undistorted coordinate.
Optionally, the first image to be stitched and the second image to be stitched are stitched to generate a stitching result graph, which includes:
selecting a row of lattice points positioned at the center of the screen body to be detected from the calibration lattice detection picture as a reference lattice point row;
calculating corresponding coordinates of the reference dot matrix point array on the first image to be spliced and the second image to be spliced, and splicing lines on the first image to be spliced and the second image to be spliced;
respectively determining a first overlapping area and a second overlapping area on the first image to be stitched and the second image to be stitched according to the splicing seam;
performing pixel fusion on the first overlapping area and the corresponding overlapping area in the second image to be spliced;
performing pixel fusion on the second overlapping area and the corresponding overlapping area in the first image to be spliced;
and splicing the first image to be spliced and the second image to be spliced according to the splicing line.
Optionally, after the first camera and the second camera are arranged on the defect compensation system, before a group of detection pictures are displayed by the screen to be detected and the first image sequence and the second image sequence are generated by shooting with the first camera and the second camera, the image stitching method further includes:
fixing the aperture focal length of the lens of the first camera under the condition of meeting the working mode of the defect compensation system, and enabling the standard uniform surface light to be close to the front end of the lens of the first camera;
fixing the aperture focal length of the lens of the second camera under the condition of meeting the working mode of the defect compensation system, and enabling the standard uniform surface light to be close to the front end of the lens of the second camera;
and performing flat field correction on the first camera and the second camera, and saving the flat field correction data of the first camera and the second camera to a camera firmware.
Optionally, after a group of detection pictures is displayed through the screen body to be detected, and a first image sequence and a second image sequence are generated through shooting of the first camera and the second camera, the image stitching method further includes:
and carrying out noise reduction processing on the generated image in the defect compensation system, wherein the noise reduction processing comprises inherent noise reduction, time domain filtering and mean value filtering.
This application second aspect provides an image splicing apparatus of screen body is shot to polyphaser, includes:
a setting unit for setting the first camera and the second camera on the defect compensation system;
the acquisition unit is used for displaying a group of detection pictures through the screen body to be detected, and generating a first image sequence and a second image sequence through shooting of a first camera and a second camera, wherein the detection pictures in the group of detection pictures comprise a first 0 gray scale region corresponding to the center position of the first camera, a second 0 gray scale region corresponding to the center position of the first camera and a third 0 gray scale region corresponding to the center position of the screen body to be detected;
the detection unit is used for detecting a first 0 gray scale area, a second 0 gray scale area and a third 0 gray scale area through a luminance meter to generate first luminance data, second luminance data and third luminance data;
the first calculation unit is used for calculating a first gray sequence of a first 0 gray scale area and a third gray sequence of a third 0 gray scale area in the first image sequence;
the second calculation unit is used for calculating a second gray sequence of a second 0 gray scale region and a fourth gray sequence of the second 0 gray scale region in the second image sequence;
a first generation unit configured to generate a first luminance grayscale conversion coefficient from the first grayscale sequence and the first luminance data;
a second generation unit configured to generate a second luminance gradation conversion coefficient from the third gradation sequence and the third luminance data;
a third generation unit configured to generate a third luminance grayscale conversion coefficient from the second grayscale sequence and the second luminance data;
a fourth generation unit configured to generate a fourth luminance grayscale conversion coefficient based on the fourth grayscale sequence and the third luminance data;
the fifth generation unit is used for displaying a target detection picture through the screen body to be detected, shooting through the first camera and the second camera and generating a first image to be spliced and a second image to be spliced;
the gray correction unit is used for carrying out gray correction on the first image to be spliced and the second image to be spliced through the first brightness gray conversion coefficient, the second brightness gray conversion coefficient, the third brightness gray conversion coefficient and the fourth brightness gray conversion coefficient;
and the splicing unit is used for splicing the first image to be spliced and the second image to be spliced to generate a splicing result picture.
Optionally, the image stitching device further includes:
the construction unit is used for constructing a calibrated dot matrix detection picture and displaying the calibrated dot matrix detection picture through the screen body to be detected;
the shooting unit is used for respectively shooting the screen body to be detected through the first camera and the second camera to generate a first distorted dot matrix image and a second distorted dot matrix image;
a sixth generating unit, configured to generate a first distorted dot coordinate and a second distorted dot coordinate according to the first distorted dot image and the second distorted dot image;
a seventh generating unit, configured to generate a first undistorted coordinate and a second undistorted coordinate according to the first distorted dot matrix coordinate and the second distorted dot matrix coordinate;
an eighth generating unit, configured to generate a first correction coefficient matrix according to the first distorted dot matrix coordinates and the first undistorted coordinates;
a ninth generating unit configured to generate a second correction coefficient matrix from the second distorted dot matrix coordinates and the second undistorted coordinates;
a creating unit configured to create a first null image and a second null image;
the first determining unit is used for determining coordinates in the first empty image and calculating the floating point number distortion coordinates according to the first correction coefficient matrix;
the assignment unit is used for assigning the gray scale information on the first image to be spliced to the coordinates in the first empty image according to the floating point number distortion coordinates and a preset formula;
a second determination unit configured to determine an invalid coordinate in the first empty image;
the first distortion correction unit is used for carrying out gray scale assignment on the first empty image by combining the first image to be spliced according to the mode, assigning the invalid coordinate with 0 gray scale, and determining the first empty image as the first image to be spliced after geometric distortion correction;
and the second distortion correction unit is used for carrying out gray scale assignment on the second null image by combining the second correction coefficient matrix and the first image to be spliced according to the method, assigning the invalid coordinate with 0 gray scale, and determining the second null image as the second image to be spliced after geometric distortion correction.
Optionally, the image stitching device further includes:
and the cutting unit is used for cutting the 0 gray scale area in the first image to be spliced and the second image to be spliced.
Optionally, the seventh generating unit specifically includes:
acquiring an ideal pixel ratio of a defect compensation system, wherein the ideal pixel ratio is the ratio of the number of pixels of an image to the number of physical pixels of a display screen;
determining a central coordinate of a dot matrix point of a calibration dot matrix detection picture according to the first distorted dot matrix point coordinate and the second distorted dot matrix point coordinate;
and calculating according to the ideal pixel ratio, the central coordinates of the lattice points, the coordinates of the first distorted lattice points and the coordinates of the second distorted lattice points to generate a first undistorted coordinate and a second undistorted coordinate.
Optionally, the splicing unit specifically includes:
selecting a row of dot matrix points positioned at the central position of a screen body to be detected from a calibration dot matrix detection picture as a reference dot matrix point row;
calculating corresponding coordinates of the reference dot matrix point array on the first image to be spliced and the second image to be spliced, and splicing lines on the first image to be spliced and the second image to be spliced;
respectively determining a first overlapping area and a second overlapping area on the first image to be stitched and the second image to be stitched according to the splicing seam;
performing pixel fusion on the first overlapping area and the corresponding overlapping area in the second image to be spliced;
performing pixel fusion on the second overlapping area and the corresponding overlapping area in the first image to be spliced;
and splicing the first image to be spliced and the second image to be spliced according to the splicing line.
Optionally, the image stitching device further includes:
the first fixing unit is used for fixing the aperture focal length of the lens of the first camera under the condition of meeting the working mode of the defect compensation system and enabling the standard uniform surface light to be close to the front end of the lens of the first camera;
the second fixing unit is used for fixing the aperture focal length of the lens of the second camera under the condition of meeting the working mode of the defect compensation system and enabling the standard uniform surface light to be close to the front end of the lens of the second camera;
and the flat field correction unit is used for carrying out flat field correction on the first camera and the second camera and saving the flat field correction data of the first camera and the second camera to the camera firmware.
Optionally, the image stitching device further includes:
and the noise reduction processing unit is used for performing noise reduction processing on the generated image in the defect compensation system, wherein the noise reduction processing comprises inherent noise reduction, time domain filtering and mean value filtering.
A third aspect of the present application provides an electronic device comprising:
the device comprises a processor, a memory, an input and output unit and a bus;
the processor is connected with the memory, the input and output unit and the bus;
the memory holds a program that is called by the processor to perform any of the optional image stitching methods as described in the first aspect and the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium having a program stored thereon, the program, when executed on a computer, performing any of the optional image stitching methods of the first aspect and the first aspect.
According to the technical scheme, the embodiment of the application has the following advantages:
the method comprises the steps of firstly, arranging a first camera and a second camera for shooting a large-scale screen body on a defect compensation system, wherein the shooting ranges of the first camera and the second camera are overlapped and both comprise the central position of the screen body to be detected. And shooting the screen body to be detected by the first camera and the second camera to generate a first image sequence and a second image sequence. And detecting the first 0 gray scale area, the second 0 gray scale area and the third 0 gray scale area by using a luminance meter to generate first luminance data, second luminance data and third luminance data. After calculating the average value of the gray scales of the first image sequence and the second image sequence, calculating a brightness gray scale conversion coefficient as a reference, wherein the brightness gray scale conversion coefficient can perform brightness gray scale adjustment on the shot images of different cameras so as to reduce the difference of gray scale information in the overlapped area of the two cameras.
And then, displaying a picture needing to be subjected to defect compensation target detection through the screen body to be detected, and shooting through the first camera and the second camera to generate a first image to be spliced and a second image to be spliced. The gray information of the first image to be spliced and the second image to be spliced is adjusted only through the luminance gray conversion coefficient, and the difference of the gray information of the two images is reduced. And finally, splicing the first image to be spliced and the second image to be spliced to generate a splicing result graph, wherein the difference of the gray information of the splicing result graph is small, and the splicing result graph is used for defect compensation processing, so that the defect compensation condition of the new Mura is reduced, and the defect detection or compensation difficulty of the screen body to be detected is reduced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1 is a schematic diagram of an embodiment of an image stitching method for a multi-camera shooting screen body according to the present application;
FIG. 2 is a schematic view of a test frame of the image stitching method for a multi-camera shooting screen body according to the present application;
3-1, 3-2, 3-3, 3-4 and 3-5 are schematic diagrams of another embodiment of the image stitching method of the multi-camera shooting screen body of the application;
FIG. 4 is a schematic view of an embodiment of an image stitching apparatus for a multi-camera shooting screen according to the present application;
FIG. 5 is a schematic diagram of another embodiment of an image stitching apparatus for a multi-camera shooting screen according to the present application;
fig. 6 is a schematic diagram of an embodiment of an electronic device of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to" determining "or" in response to detecting ". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In the prior art, due to the fact that photoelectric conversion effects of different sampling cameras with chips of the same type are different, even if camera parameters such as the same exposure time and gain are used, the gray scales of partial screen images shot by each sampling camera are also different, after image splicing is carried out, errors are introduced into the De-Mura data generation process, new Mura defects are generated after the screen to be detected is subjected to defect compensation processing, and the difficulty in detecting or compensating the defects of the screen to be detected is increased.
Based on the above, the application discloses an image splicing method and a related device for a multi-camera shooting screen body, which are used for reducing the difficulty in detecting or compensating defects of the screen body to be detected.
The technical solutions in the present application will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The method of the present application may be applied to a server, a device, a terminal, or other devices with logic processing capability, and the present application is not limited thereto. For convenience of description, the following description will be given taking the execution body as an example.
Referring to fig. 1, the present application provides an embodiment of an image stitching method for a multi-camera screen, including:
101. setting a first camera and a second camera on the defect compensation system;
in this embodiment, a first camera and a second camera need to be disposed on the defect compensation system, where a shooting range of the first camera covers a partial area of the screen to be detected, a shooting range of the second camera covers a partial screen area of the screen to be detected, coverage areas of the first camera and the second camera overlap, and the subsequent splicing is performed in the overlapping area. It should be noted that, there may be more than 2 sampling cameras on the defect compensation system, and the present embodiment may splice any two sampling cameras having an overlapping area.
The defect compensation system (De-Mura system) is used for carrying out Mura defect compensation on the screen body, is mainly in a darkroom environment, places the screen body to be detected on the De-Mura system, and shoots through the sampling camera and the lens to obtain an image for carrying out Mura defect compensation.
The defect compensation system is provided with a first camera and a second camera, the shooting range of the first camera and the shooting range of the second camera cannot cover a whole screen body to be detected respectively, and the shooting ranges of the first camera and the second camera are overlapped.
The first camera and the second camera are symmetrically distributed on two sides of the screen body to be detected in the length direction, can also be two sides of the screen body to be detected in the width direction, and can also be not symmetrically distributed, and only the shooting ranges of the first camera and the second camera are added to cover the whole screen body to be detected.
The first camera and the second camera in the embodiment are sampling cameras, and both comprise a camera part and a detachable lens part, and the camera part and the lens part can be replaced and adjusted according to different application scenes and parameters of the display screen to be detected. For example: a50 mm lens is matched with a 151M industrial camera.
It should be noted that, this embodiment is not limited to use only 2 sampling cameras for shooting, when the size of the screen to be detected is large, more than 2 sampling cameras may be used to respectively shoot partial areas of the screen to be detected, with the increase of the sampling cameras, the area that each sampling camera can shoot is smaller, but the better the grey scale information that is shot is, and the number of the specific sampling cameras needs to be determined according to different application scenarios and parameters of the display screen to be detected.
102. Displaying a group of detection pictures through the screen body to be detected, and generating a first image sequence and a second image sequence through shooting of a first camera and a second camera, wherein the detection pictures in the group of detection pictures comprise a first 0 gray scale area corresponding to the center position of the first camera, a second 0 gray scale area corresponding to the center position of the first camera and a third 0 gray scale area corresponding to the center position of the screen body to be detected
In this embodiment, the terminal displays a group of detection pictures through the screen body to be detected, the group of detection pictures includes k detection pictures, k is an integer greater than 0, the detection pictures include a first 0 gray scale region, a second 0 gray scale region and a third 0 gray scale region, the first 0 gray scale region corresponds to a central position of the first camera, the second 0 gray scale region corresponds to a central position of the second camera, the third 0 gray scale region corresponds to a central position of the screen body to be detected, gray scale values of remaining regions of the detection pictures except the three 0 gray scale regions are the same, and gray scale values of remaining regions of any two detection pictures in the group of detection pictures are different. The first 0 gray scale area, the second 0 gray scale area and the third 0 gray scale area are used for determining the center positions, the center positions of the first camera and the display screen to be detected are included in the shooting range of the first camera, and the center positions of the second camera and the display screen to be detected are included in the shooting range of the second camera.
Specifically, the terminal acquires a group of detection pictures, the group of detection pictures comprises a plurality of detection pictures, each detection picture has three regions with a gray scale value of 0, namely a first 0 gray scale region, a second 0 gray scale region and a third 0 gray scale region, the first 0 gray scale region corresponds to the central position of the first camera (the optical axis position of the first camera), the second 0 gray scale region corresponds to the central position of the second camera (the optical axis position of the second camera), and the third 0 gray scale region corresponds to the central position of the screen body to be detected. The gray scale regions except for the first 0 gray scale region, the second 0 gray scale region and the third 0 gray scale region have the same gray scale value, but the gray scale values of the rest regions of any two detection pictures are different.
Referring to fig. 2, a gray level on a black circle in a detected picture is 0, three black circles are respectively a first 0 gray level region corresponding to a center position of a first camera, a third 0 gray level region corresponding to a center position of a screen to be detected and a second 0 gray level region corresponding to a center position of a second camera from left to right, pixel points on edges of the three black circles are all 0 gray levels, gray levels in the circles are the same as those in other regions outside the circles and are uniformly g1, g1 is a gray level value in a group of gray levels g, the number of gray levels in the group of gray levels g is k, corresponding to k detected pictures is represented as { g1, g2, g3, …, gk }, and the value of the group of gray levels g is greater than 0 and smaller than 255.
The terminal inputs the group of detection pictures into the screen body to be detected, so that the detection pictures are displayed on the screen body to be detected one by one.
After the detection picture is input into the screen body to be detected, the terminal controls the first camera to shoot each detection picture displayed by the screen body to be detected, a first image sequence is generated, the first image sequence is expressed as { I11, I12, I13, … and I1k }, similarly, the second camera shoots each detection picture displayed by the screen body to be detected, and a second image sequence is generated, and the second image sequence is expressed as { I21, I22, I23, … and I2k }. Before shooting, the terminal sets the relevant parameters of the first camera and the second camera as identical as possible, for example, setting the exposure time to both T0.
103. Detecting a first 0 gray scale area, a second 0 gray scale area and a third 0 gray scale area through a luminance meter to generate first luminance data, second luminance data and third luminance data;
the terminal detects the brightness of each detection picture displayed by a screen body to be detected by using a brightness meter, specifically, brightness detection is respectively carried out on a first 0-gray scale area, a second 0-gray scale area and a third 0-gray scale area in each detection picture, first brightness data corresponding to the first 0-gray scale area are generated, the first brightness data are expressed as { Lo11, Lo12, Lo13, … and Lo1k }, second brightness data corresponding to the second 0-gray scale area are generated, the second brightness data are expressed as { Lo21, Lo22, Lo23, … and Lo2k }, third brightness data corresponding to the third 0-gray scale area (namely the central position of the screen body to be detected) are generated, and the third brightness data are expressed as { Lo31, Lo32, Lo33, … and Lo3k }.
104. Calculating a first gray sequence of a first 0 gray scale region and a third gray sequence of a third 0 gray scale region in the first image sequence;
105. calculating a second gray sequence of a second 0 gray scale region and a fourth gray sequence of the second 0 gray scale region in the second image sequence;
in this embodiment, the terminal calculates a first gray sequence, a second gray sequence, a third gray sequence and a fourth gray sequence according to the first image sequence and the second image sequence, where the first gray sequence is a gray mean data set of a first 0 gray scale region in the first image sequence, the second gray sequence is a gray mean data set of a second 0 gray scale region in the second image sequence, the third gray sequence is a gray mean data set of a third 0 gray scale region in the first image sequence, and the fourth gray sequence is a gray mean data set of a third 0 gray scale region in the second image sequence;
specifically, the terminal obtains the average value of the pixel gray scales in the first 0 gray scale region and the third 0 gray scale region according to the first image sequence { I11, I12, I13, …, I1k }, obtains the first gray scale sequence { Vo11, Vo12, Vo13, …, Vo1k } corresponding to the first 0 gray scale region in the first image sequence and obtains the third gray scale sequence { Vo31, Vo32, Vo33, …, Vo 6853 } corresponding to the third 0 gray scale region in the first image sequence and the third luminance data { Lo11, Vo12, Vo13, …, Vo1k } corresponding to the third 0 gray scale region in the first image sequence.
The terminal finds a mean value of gray levels of pixels in the second 0-gray scale region and the third 0-gray scale region from the second image series { I21, I22, I23, …, I2k }, obtains a second gray level series { Wo21, Wo22, Wo23, …, Wo2k } corresponding to the second 0-gray scale region in the second image series with the second luminance data { Lo21, Lo22, Lo23, …, Lo2k }, and obtains a fourth gray level series { Wo31, Wo32, Wo33, …, Wo3k } corresponding to the third 0-gray scale region in the second image series with the third luminance data { Lo31, Lo32, Lo33, …, Lo3k }.
106. Generating a first luminance grayscale conversion coefficient according to the first grayscale sequence and the first luminance data;
107. generating a second brightness gray scale conversion coefficient according to the third gray scale sequence and the third brightness data;
108. generating a third luminance gray scale conversion coefficient according to the second gray scale sequence and the second luminance data;
109. generating a fourth luminance grayscale conversion coefficient according to the fourth grayscale sequence and the third luminance data;
in this embodiment, the terminal performs the first gray-scale sequence, the second gray-scale sequence, the third gray-scale sequence, and the fourth gray-scale sequence, the first brightness data, the second brightness data and the third brightness data generate a first brightness gray scale conversion coefficient, a second brightness gray scale conversion coefficient, a third brightness gray scale conversion coefficient and a fourth brightness gray scale conversion coefficient, the first brightness gray scale conversion coefficient is a photoelectric conversion relation parameter generated after straight line fitting is carried out on the first gray scale sequence and the first brightness data, the third brightness gray scale conversion coefficient is a photoelectric conversion relation parameter generated after straight line fitting is carried out on the second gray scale sequence and the second brightness data, the second brightness gray scale conversion coefficient is a photoelectric conversion relation parameter generated after straight line fitting is carried out on the third gray scale sequence and the third brightness data, and the fourth brightness gray scale conversion coefficient is a photoelectric conversion relation parameter generated after straight line fitting is carried out on the fourth gray scale sequence and the third brightness data.
Specifically, the terminal generates a first luminance grayscale conversion coefficient, a second luminance grayscale conversion coefficient, a third luminance grayscale conversion coefficient, and a fourth luminance grayscale conversion coefficient according to the first grayscale sequence, the second grayscale sequence, the third grayscale sequence, the fourth grayscale sequence, the first luminance data, the second luminance data, and the third luminance data, and specifically:
the terminal performs line fitting using the first luminance data { Lo11, Lo12, Lo13, …, Lo1k } and the first gray-scale sequence { Vo11, Vo12, Vo13, …, Vo1k } to obtain a photoelectric conversion relationship of the first camera at the center position of the first camera: v = h1 × L, where h1 is the first luminance grayscale conversion coefficient.
And the terminal performs straight line fitting by using the third brightness data { Lo31, Lo32, Lo33, …, Lo3k } and the third gray sequence { Vo31, Vo32, Vo33, …, Vo3k } to obtain the photoelectric conversion relation of the first camera at the central position of the screen to be detected: v = r1 × L, where r1 is the second luminance grayscale conversion coefficient.
The terminal performs line fitting using the second luminance series { Lo21, Lo22, Lo23, …, Lo2k } and the second grayscale series { Wo21, Wo22, Wo23, …, Wo2k } to obtain a photoelectric conversion relationship of the second camera at the center position of the second camera: w = h2 × L, where h2 is the third luminance grayscale conversion coefficient.
The terminal performs straight line fitting using the third luminance data { Lo31, Lo32, Lo33, …, Lo3k } and the fourth grayscale sequence { Wo31, Wo32, Wo33, …, Wo3k } to obtain a photoelectric conversion relationship of the second camera at the center position of the screen to be detected: w = r2 × L, where r2 is a fourth luminance grayscale conversion coefficient.
110. Displaying a target detection picture through a screen body to be detected, and shooting through a first camera and a second camera to generate a first image to be spliced and a second image to be spliced;
in this embodiment, a terminal generates a first image to be spliced and a second image to be spliced through a first camera and a second camera, the first image to be spliced is an image shot by the first camera after a target detection picture is displayed on a screen to be detected, the second image to be spliced is an image shot by the second camera after the target detection picture is displayed on the screen to be detected, and the target detection picture is a gray-scale picture used for performing defect compensation processing on the screen to be detected;
the terminal generates a first image to be spliced and a second image to be spliced through a first camera and a second camera, specifically, a target detection picture is obtained, the target detection picture is a gray-scale picture used for carrying out defect compensation processing on a screen body to be detected, at the moment, the first camera and the second camera are used for respectively shooting the screen body to be detected, and the first image to be spliced and the second image to be spliced can be obtained.
111. Performing gray correction on the first image to be spliced and the second image to be spliced through the first brightness gray conversion coefficient, the second brightness gray conversion coefficient, the third brightness gray conversion coefficient and the fourth brightness gray conversion coefficient;
in this embodiment, the terminal performs gray level correction on the first image to be stitched and the second image to be stitched through the first luminance gray level conversion coefficient h1, the second luminance gray level conversion coefficient r1, the third luminance gray level conversion coefficient h2, and the fourth luminance gray level conversion coefficient r2, and the specific correction process is as follows:
taking h = (h1+ h2)/2, and carrying out image splicing on the first image I to be spliced according to the following formula1jCarrying out gray level correction to obtain a corrected first image I to be spliced1h
Figure 850693DEST_PATH_IMAGE001
Take h= (h1+ h2)/2, for the second image to be stitched I according to the following formula2jCarrying out gray level correction to obtain a corrected second image I to be spliced2h
Figure 967554DEST_PATH_IMAGE002
Wherein, I1j(x, y) is gray information of the first image to be spliced at the coordinate (x, y), I1hFor the corrected first image to be stitched, I2j(x, y) is the gray information of the second image to be spliced at the coordinate (x, y), I2hThe corrected second image to be spliced is obtained. Through the gray scale correction of the first luminance gray scale conversion coefficient h1, the second luminance gray scale conversion coefficient r1, the third luminance gray scale conversion coefficient h2 and the fourth luminance gray scale conversion coefficient r2, the difference generated by the photoelectric conversion effect of the cameras between the first camera and the second camera can be reduced, and therefore the difference of the gray scales of partial screen images shot by each sampling camera is reduced.
112. And splicing the first image to be spliced and the second image to be spliced to generate a splicing result graph.
And the terminal carries out image splicing on the corrected first image to be spliced and the second image to be spliced to generate a splicing result graph.
In this embodiment, first, a first camera and a second camera for shooting a large-scale screen are arranged on a defect compensation system, where shooting ranges of the first camera and the second camera have overlapped regions and both include a center position of a screen to be detected. And shooting the screen body to be detected by the first camera and the second camera to generate a first image sequence and a second image sequence. And detecting the first 0 gray scale area, the second 0 gray scale area and the third 0 gray scale area by using a luminance meter to generate first luminance data, second luminance data and third luminance data. After calculating the average value of the gray scales of the first image sequence and the second image sequence, calculating a brightness gray scale conversion coefficient as a reference, wherein the brightness gray scale conversion coefficient can perform brightness gray scale adjustment on the shot images of different cameras so as to reduce the difference of gray scale information in the overlapped area of the two cameras.
And then, displaying a picture needing to be subjected to defect compensation target detection through the screen body to be detected, and shooting through the first camera and the second camera to generate a first image to be spliced and a second image to be spliced. The gray information of the first image to be spliced and the second image to be spliced is adjusted only through the brightness gray conversion coefficient, and the difference of the gray information of the two images is reduced. And finally, splicing the first image to be spliced and the second image to be spliced to generate a splicing result graph, wherein the difference of the gray information of the splicing result graph is small, and the splicing result graph is used for defect compensation processing, so that the defect compensation condition of the new Mura is reduced, and the defect detection or compensation difficulty of the screen body to be detected is reduced.
Referring to fig. 3-1, 3-2, 3-3, 3-4 and 3-5, the present application provides another embodiment of an image stitching method for a multi-camera shooting screen, including:
301. fixing the aperture focal length of the lens of the first camera under the condition of meeting the working mode of the defect compensation system, and enabling the standard uniform surface light to be close to the front end of the lens of the first camera;
302. fixing the aperture focal length of the lens of the second camera under the condition of meeting the working mode of the defect compensation system, and enabling the standard uniform surface light to be close to the front end of the lens of the second camera;
303. performing flat field correction on the first camera and the second camera, and storing the flat field correction data of the first camera and the second camera to a camera firmware;
the terminal needs to perform flat field correction on the sampling camera, and the specific mode is as follows:
the method comprises the steps of firstly fixing the aperture focal length of a lens of a first camera under the condition of meeting the working mode of a defect compensation system, enabling standard uniform surface light to be close to the front end of the lens of the first camera, then fixing the aperture focal length of a lens of a second camera under the condition of meeting the working mode of the defect compensation system, enabling the standard uniform surface light to be close to the front end of the lens of the second camera, carrying out Flat Field Correction (FFC) on the first camera and the second camera, storing Flat Field Correction data of the first camera and the second camera to a camera firmware, and acquiring subsequent images under the condition of starting the FFC function of the cameras.
304. Setting a first camera and a second camera on the defect compensation system;
305. displaying a group of detection pictures through a screen body to be detected, and generating a first image sequence and a second image sequence through shooting of a first camera and a second camera, wherein the detection pictures in the group of detection pictures comprise a first 0 gray scale region corresponding to the center position of the first camera, a second 0 gray scale region corresponding to the center position of the first camera and a third 0 gray scale region corresponding to the center position of the screen body to be detected;
306. detecting a first 0 gray scale area, a second 0 gray scale area and a third 0 gray scale area through a luminance meter to generate first luminance data, second luminance data and third luminance data;
steps 304 to 306 in this embodiment are similar to steps 101 to 103 in the previous embodiment, and are not described again here.
307. Performing noise reduction processing on the generated image in the defect compensation system, wherein the noise reduction processing comprises inherent noise subtraction, time domain filtering and mean value filtering;
because various noises exist in the process of acquiring images by the sampling camera, in order to ensure the accuracy of data acquisition of the De-Mura system, the acquired images of the first camera and the second camera are subjected to noise reduction treatment, wherein the noise reduction treatment specifically comprises inherent noise subtraction, time domain filtering and mean value filtering. Taking the first camera as an example:
firstly, the intrinsic noise is required to be subtracted, and the processing method comprises the following steps: covering the first camera by using a cover, acquiring N black images, wherein N is an integer larger than 1, correspondingly averaging the N black images according to pixels to obtain an intrinsic noise image of the first camera, and subtracting the intrinsic noise image after all images are acquired. In this embodiment, the value of N is greater than 100.
Then, time-domain filtering is required, and the processing method comprises the following steps: and taking M images with the inherent noise subtracted, wherein M is an integer larger than 1, and averaging the M images correspondingly according to pixels to obtain a noise reduction image after time domain filtering. In this embodiment, M is 3 or more.
And finally, mean filtering is required, and the processing method comprises the following steps: and filtering the noise-reduced image generated after the time-domain filtering by using a 3 multiplied by 3 mean value filtering window to obtain the noise-reduced image.
It should be noted that the noise reduction processing is performed after the first camera or the second camera takes a picture, and except that the noise reduction processing is performed when the first image sequence and the second image sequence are generated in step 306, the noise reduction processing may be performed mainly when the sampling camera takes an image, so as to improve the accuracy of the data acquisition of the De-Mura system.
308. Calculating a first gray sequence of a first 0 gray scale region and a third gray sequence of a third 0 gray scale region in the first image sequence;
309. calculating a second gray sequence of a second 0 gray scale region and a fourth gray sequence of the second 0 gray scale region in the second image sequence;
310. generating a first luminance grayscale conversion coefficient according to the first grayscale sequence and the first luminance data;
311. generating a second brightness gray scale conversion coefficient according to the third gray scale sequence and the third brightness data;
312. generating a third luminance gray scale conversion coefficient according to the second gray scale sequence and the second luminance data;
313. generating a fourth luminance grayscale conversion coefficient according to the fourth grayscale sequence and the third luminance data;
314. displaying a target detection picture through a screen body to be detected, and shooting through a first camera and a second camera to generate a first image to be spliced and a second image to be spliced;
steps 308 to 314 in this embodiment are similar to steps 104 to 110 in the previous embodiment, and are not described again here.
315. Constructing a calibration dot matrix detection picture, and displaying the calibration dot matrix detection picture through a screen body to be detected;
316. respectively shooting a screen body to be detected through a first camera and a second camera to generate a first distorted dot matrix image and a second distorted dot matrix image;
317. respectively generating a first distorted dot matrix coordinate and a second distorted dot matrix coordinate according to the first distorted dot matrix image and the second distorted dot matrix image;
318. acquiring an ideal pixel ratio of a defect compensation system, wherein the ideal pixel ratio is the ratio of the number of pixels of an image to the number of physical pixels of a display screen;
319. determining a central coordinate of a dot matrix point of a calibration dot matrix detection picture according to the first distorted dot matrix point coordinate and the second distorted dot matrix point coordinate;
320. calculating according to the ideal pixel ratio, the central coordinates of the lattice points, the coordinates of the first distorted lattice points and the coordinates of the second distorted lattice points to generate a first undistorted coordinate and a second undistorted coordinate;
321. generating a first correction coefficient matrix according to the first distorted dot matrix coordinates and the first undistorted coordinates;
322. generating a second correction coefficient matrix according to the second distorted dot matrix coordinates and the second undistorted coordinates;
323. creating a first null image and a second null image;
324. determining coordinates in the first empty image, and calculating distortion coordinates of floating point numbers according to the first correction coefficient matrix;
325. assigning gray scale information on the first image to be spliced to a coordinate in the first empty image according to the floating point number distortion coordinate and a preset formula;
326. determining invalid coordinates in the first aerial image;
327. performing gray scale assignment on the first empty image by combining the first image to be spliced according to the mode, assigning the invalid coordinate to 0 gray scale, and determining the first empty image as the first image to be spliced after geometric distortion correction;
328. according to the method, gray scale assignment is carried out on the second null image by combining the second correction coefficient matrix and the first image to be spliced, the invalid coordinate is assigned with 0 gray scale, and the second null image is determined as the second image to be spliced after geometric distortion correction;
the terminal acquires a first image to be spliced and a second image to be spliced which need to be corrected, because the installation accuracy and the pose between the sampling cameras are different, the lens distortion and the perspective distortion generated by partial screen images shot by each sampling camera are different, the geometric distortion correction needs to be carried out on the first image to be spliced and the second image to be spliced, and the specific mode is as follows:
firstly, a terminal constructs a calibration dot matrix detection picture, and displays the calibration dot matrix detection picture through a screen body to be detected, wherein the calibration dot matrix detection picture is provided with a calibration grid, the dot matrix in the calibration grid adopts circular white dots with preset pixel radius, the number of the dot matrix is P multiplied by Q (rows multiplied by columns), and P and Q are integers more than 2.
For example: the dot matrix in the calibration grid adopts circular white dots, the diameter of the dot matrix is more than or equal to 10 pixels, and the number of the dot matrix is not less than 20 multiplied by 15 for a 4:3 standard LCD display screen.
And then the terminal needs to shoot the screen body to be detected through the first camera and the second camera respectively to generate a first distorted dot matrix image I1P and a second distorted dot matrix image I2P, and noise reduction processing can be performed after the generation. The terminal calculates the center coordinates of each lattice point in the first distorted lattice point image I1P to generate the coordinates of the first distorted lattice points
Figure 131819DEST_PATH_IMAGE003
The terminal calculates the center coordinates of each lattice point in the second distorted lattice point image I2P to generate the coordinates of the second distorted lattice point
Figure 107865DEST_PATH_IMAGE004
And then the terminal respectively generates a first distorted dot coordinate and a second distorted dot coordinate according to the first distorted dot image and the second distorted dot image, determines a dot center coordinate of a calibration dot detection picture according to the first distorted dot coordinate and the second distorted dot coordinate, and calculates according to the ideal pixel ratio, the dot center coordinate, the first distorted dot coordinate and the second distorted dot coordinate to generate a first undistorted coordinate and a second undistorted coordinate. The specific generation method of the first undistorted coordinate and the second undistorted coordinate is as follows:
assuming that the ideal pixel Ratio is MR (Mapping Ratio: the number of pixels of the image and the number of physical pixels of the display screen), the physical pixel resolution of the screen body to be detected is Width × Height.
Figure 117410DEST_PATH_IMAGE005
Wherein, cx1P、cy1P、cx2PAnd cy2PAll are the lattice point coordinates in the calibration lattice detection picture, which are known items,
Figure 121400DEST_PATH_IMAGE006
is the first undistorted coordinate and is,
Figure 925408DEST_PATH_IMAGE007
is the second undistorted coordinate.
The terminal generates a first correction coefficient matrix according to the first distorted dot matrix coordinates and the first undistorted coordinates, and the terminal generates a second correction coefficient matrix according to the second distorted dot matrix coordinates and the second undistorted coordinates, and the specific mode is as follows:
the terminal establishes a relationship between distorted coordinates and undistorted coordinates using a 2-dimensional polynomial model, wherein the 2-dimensional polynomial model is represented by the following equation. For simplicity of calculation, the order n =1 is taken in the formula.
Figure 388750DEST_PATH_IMAGE008
In order to obtain a correction coefficient matrix (distortion model coefficient set) of each lattice point, 3 points on the right side, the lower side and the right lower side of the lattice point to be obtained are obtained, undistorted coordinates and distorted coordinates of the points are substituted into the formula to form a linear equation set, and coefficients a and b are obtained by solving, wherein (x, y) in the formula represents undistorted coordinates (first undistorted coordinates and second undistorted coordinates), and (u, v) represents distorted coordinates (including first distorted lattice point coordinates and second distorted lattice point coordinates) at the moment.
To finally obtain
Figure 326619DEST_PATH_IMAGE009
To
Figure 559018DEST_PATH_IMAGE010
Coefficient of (3) dimensional matrix
Figure 799506DEST_PATH_IMAGE011
I.e., the first correction coefficient matrix,
Figure 609199DEST_PATH_IMAGE012
to
Figure 226125DEST_PATH_IMAGE013
Coefficient of (3) dimensional matrix
Figure 313030DEST_PATH_IMAGE014
I.e. the second correction coefficient matrix.
The terminal creates a first null image and a second null image on which coordinates exist, specifically two first null images I1c and second null images I2c with a resolution of (MR × Width +8MR) × (MR × Height +8 MR). And then the terminal determines the coordinates in the first empty image, and selects the coefficient vector of the lattice point closest to the coordinates in the first correction coefficient matrix to calculate the floating point number distortion coordinates. The specific modes are as follows:
the terminal traverses the coordinates (x, y) of the first empty image I1c in the first correction coefficient matrix
Figure 849054DEST_PATH_IMAGE015
Selecting the coefficient vector of the lattice point nearest to the coordinate (x, y)
Figure 21409DEST_PATH_IMAGE016
Substituting the following equation to calculate the floating-point distortion coordinate (u)1,v1)。
Figure 176447DEST_PATH_IMAGE017
The terminal determines a coordinate which is a negative value or exceeds the resolution of the first image to be spliced in the floating point number distorted coordinates in the first empty image as an invalid coordinate, and if the floating point number distorted coordinates have the negative value or exceed the resolution of the first camera picture, the floating point number distorted coordinates (u) are displayed1,v1) The value is (-1, -1), i.e. the invalid coordinate is set.
Distortion coordinates (u) using floating point numbers1,v1) And taking data from a first image to be spliced shot by a first camera according to the following formula, and assigning the data to a first empty image. If the floating point number distortion coordinate is (-1, -1), then the corresponding pixel value on the corresponding first null image is assigned a value of 0.
Figure 383437DEST_PATH_IMAGE018
Wherein m = [ u ]],n=[v],[]The rounding calculation of the order of (b) is carried out,
Figure 854476DEST_PATH_IMAGE019
is the pixel value of the first null image at coordinates (x, y).
And the terminal performs gray scale assignment on the first empty image by combining the first image to be spliced according to the mode, assigns the invalid coordinate with 0 gray scale, and determines the first empty image as the first image to be spliced after geometric distortion correction.
And the terminal performs gray scale assignment on the second null image according to the method by combining the second correction coefficient matrix and the first image to be spliced, assigns the invalid coordinate with 0 gray scale, and determines the second null image as the second image to be spliced after geometric distortion correction.
329. Performing gray correction on the first image to be spliced and the second image to be spliced through the first brightness gray conversion coefficient, the second brightness gray conversion coefficient, the third brightness gray conversion coefficient and the fourth brightness gray conversion coefficient;
step 329 in this embodiment is similar to step 111 in the previous embodiment, and is not described herein again.
330. Cutting 0 gray scale areas in the first image to be spliced and the second image to be spliced;
and the terminal cuts the 0 gray scale area in the first image to be spliced and the second image to be spliced, and removes the invalid area.
331. Selecting a row of lattice points positioned at the center of the screen body to be detected from the calibration lattice detection picture as a reference lattice point row;
332. calculating corresponding coordinates of the reference dot matrix point array on the first image to be spliced and the second image to be spliced, and splicing lines on the first image to be spliced and the second image to be spliced;
333. respectively determining a first overlapping area and a second overlapping area on the first image to be stitched and the second image to be stitched according to the splicing seam;
334. performing pixel fusion on the first overlapping area and the corresponding overlapping area in the second image to be spliced;
335. performing pixel fusion on the second overlapping area and the corresponding overlapping area in the first image to be spliced;
336. and splicing the first image to be spliced and the second image to be spliced according to the splicing line.
The terminal selects a row of lattice points positioned at the center of the screen body to be detected from the calibrated lattice detection picture as a reference lattice point row, calculates the corresponding coordinates of the reference lattice point row on the first image to be spliced and the second image to be spliced, generates a splicing line on the first image to be spliced and the second image to be spliced, and respectively determines a first overlapping area and a second overlapping area on the first image to be spliced and the second image to be spliced according to the splicing line.
And finally, the terminal splices the first image to be spliced and the second image to be spliced according to the splicing line.
Assuming that the pixel width of the overlapping area of the images is e,the overlap region has an x-coordinate range of (x)s,xe) In order to avoid excessive unevenness in the splicing process, a pixel fusion strategy described by the following formula is adopted:
Figure 248549DEST_PATH_IMAGE020
Figure 207277DEST_PATH_IMAGE021
fusing coordinates of pixels in coordinates (x, y), I, for overlapping regions1h(x, y) is the gray information of the first image to be stitched at the coordinate (x, y), I2hAnd (x, y) is the gray information of the second image to be spliced at the coordinate (x, y).
In this embodiment, first, the terminal fixes the aperture focal length of the lens of the first camera under the condition that the defect compensation system operation mode is satisfied, attaches the standard uniform surface light to the front end of the lens of the first camera, fixes the aperture focal length of the lens of the second camera under the condition that the defect compensation system operation mode is satisfied, attaches the standard uniform surface light to the front end of the lens of the second camera, and then performs flat field correction on the first camera and the second camera, and stores the flat field correction data of the first camera and the second camera to the camera firmware, thereby completing the flat field correction.
The terminal sets a first camera and a second camera on the defect compensation system. The terminal displays a group of detection pictures through the screen body to be detected, and generates a first image sequence and a second image sequence through shooting of a first camera and a second camera, wherein the detection pictures in the group of detection pictures comprise a first 0 gray scale region corresponding to the center position of the first camera, a second 0 gray scale region corresponding to the center position of the first camera and a third 0 gray scale region corresponding to the center position of the screen body to be detected. At this time, the terminal detects the first 0 gray scale region, the second 0 gray scale region and the third 0 gray scale region through the luminance meter, and generates first luminance data, second luminance data and third luminance data. And the terminal performs noise reduction processing on the generated image in the defect compensation system, wherein the noise reduction processing comprises inherent noise reduction, time domain filtering and mean value filtering. Then, the terminal calculates a first gray sequence of a first 0 gray scale region and a third gray sequence of a third 0 gray scale region in the first image sequence, and the terminal calculates a second gray sequence of a second 0 gray scale region and a fourth gray sequence of the second 0 gray scale region in the second image sequence. The terminal generates a second brightness gray scale conversion coefficient according to the third gray scale sequence and the third brightness data, generates a third brightness gray scale conversion coefficient according to the second gray scale sequence and the second brightness data, and generates a fourth brightness gray scale conversion coefficient according to the fourth gray scale sequence and the third brightness data. The terminal displays a target detection picture through the screen body to be detected, and generates a first image to be spliced and a second image to be spliced through shooting by the first camera and the second camera.
And then, a set distortion correction link needs to be entered, the terminal constructs a calibration dot matrix detection picture, displays the calibration dot matrix detection picture through the screen body to be detected, and respectively shoots the screen body to be detected through the first camera and the second camera to generate a first distorted dot matrix image and a second distorted dot matrix image. And the terminal respectively generates a first distorted dot matrix coordinate and a second distorted dot matrix coordinate according to the first distorted dot matrix image and the second distorted dot matrix image. And the terminal acquires an ideal pixel ratio of the defect compensation system, wherein the ideal pixel ratio is the ratio of the pixel number of the image to the physical pixel number of the display screen, and the terminal determines the central coordinate of the dot matrix of the calibration dot matrix detection picture according to the first distorted dot matrix coordinate and the second distorted dot matrix coordinate. And finally, the terminal calculates according to the ideal pixel ratio, the central coordinates of the lattice points, the coordinates of the first distorted lattice points and the coordinates of the second distorted lattice points to generate first undistorted coordinates and second undistorted coordinates. And the terminal generates a first correction coefficient matrix according to the first distorted dot matrix coordinates and the first undistorted coordinates, and generates a second correction coefficient matrix according to the second distorted dot matrix coordinates and the second undistorted coordinates.
The terminal creates a first null image and a second null image. And the terminal determines the coordinates in the first empty image and calculates the distortion coordinates of the floating point number according to the first correction coefficient matrix. And the terminal assigns the gray scale information on the first image to be spliced to the coordinates in the first empty image according to the floating point number distortion coordinates and a preset formula. The terminal determines invalid coordinates in the first empty image. And the terminal performs gray scale assignment on the first empty image by combining the first image to be spliced according to the mode, assigns the invalid coordinate with 0 gray scale, and determines the first empty image as the first image to be spliced after geometric distortion correction. And the terminal performs gray scale assignment on the second null image according to the method by combining the second correction coefficient matrix and the first image to be spliced, assigns the invalid coordinate with 0 gray scale, and determines the second null image as the second image to be spliced after geometric distortion correction. Thereby reducing geometric distortion caused by differences in the installation accuracy and pose among the sampling cameras.
And the terminal performs gray level correction on the first image to be spliced and the second image to be spliced through the first brightness gray level conversion coefficient, the second brightness gray level conversion coefficient, the third brightness gray level conversion coefficient and the fourth brightness gray level conversion coefficient to complete the adjustment of the gray level information. And the terminal cuts the 0 gray scale area in the first image to be spliced and the second image to be spliced, and removes the useless area. And finally, selecting a row of dot matrix points positioned at the center of the screen body to be detected from the calibrated dot matrix detection picture by the terminal as a reference dot matrix point row, calculating the corresponding coordinates of the reference dot matrix point row on the first image to be spliced and the second image to be spliced by the terminal, and splicing lines on the generated first image to be spliced and the generated second image to be spliced. The terminal respectively determines a first overlapping area and a second overlapping area on a first image to be spliced and a second image to be spliced according to a splicing line, the terminal performs pixel fusion on the corresponding overlapping areas in the first overlapping area and the second image to be spliced, the terminal performs pixel fusion on the corresponding overlapping areas in the second overlapping area and the first image to be spliced, and the terminal splices the first image to be spliced and the second image to be spliced according to the splicing line.
According to the method and the device, the gray information difference of the splicing result graph is small through the luminance gray conversion coefficient, and the splicing result graph is used for defect compensation processing, so that the defect compensation condition of a new Mura is reduced, and the difficulty in detecting or compensating the defects of the screen body to be detected is reduced.
Secondly, the screen body to be detected is aligned and calculated at the center position of the display screen in the P-gamma process, the compensation amount calculation of De-Mura data is also aligned and calculated according to the brightness at the center position of the screen body to be detected, the center position of each sampling camera is not the center position of the screen body to be detected in the multi-camera splicing imaging mode, the center position of the display screen to be detected needs to be determined again, in the embodiment of the application, the center position of the screen body to be detected can be accurately positioned by setting the center position of the screen body to be detected on a detection picture, and the multi-camera splicing imaging mode improves the adjusting effect of each screen body in the adjusting process.
Secondly, because the installation accuracy and the pose among the sampling cameras are different, the perspective distortion generated by partial screen body images shot by each sampling camera is different, and the situation of distortion generated by the difference of the installation accuracy and the pose among the sampling cameras is reduced by carrying out geometric distortion correction in the embodiment of the application.
Secondly, the sampling camera is normally corrected before detection, and noise of the device and environmental noise are processed, so that the image obtained in the shooting process is more accurate.
Referring to fig. 4, the present application provides an embodiment of an image stitching apparatus for a multi-camera screen, including:
a setting unit 401 for setting a first camera and a second camera on the defect compensation system;
the acquisition unit 402 is configured to display a group of detection pictures through the screen body to be detected, and generate a first image sequence and a second image sequence through shooting of a first camera and a second camera, where a detection picture in the group of detection pictures includes a first 0 gray scale region corresponding to a center position of the first camera, a second 0 gray scale region corresponding to the center position of the first camera, and a third 0 gray scale region corresponding to the center position of the screen body to be detected;
a detection unit 403, configured to detect a first 0 gray scale region, a second 0 gray scale region, and a third 0 gray scale region through a luminance meter, and generate first luminance data, second luminance data, and third luminance data;
a first calculating unit 404, configured to calculate a first gray sequence of a first 0 gray scale region and a third gray sequence of a third 0 gray scale region in the first image sequence;
a second calculating unit 405, configured to calculate a second gray scale sequence of a second 0 gray scale region and a fourth gray scale sequence of the second 0 gray scale region in the second image sequence;
a first generating unit 406 configured to generate a first luminance grayscale conversion coefficient from the first grayscale sequence and the first luminance data;
a second generating unit 407 for generating a second luminance gradation conversion coefficient from the third gradation sequence and the third luminance data;
a third generating unit 408 for generating a third luminance grayscale conversion coefficient from the second grayscale sequence and the second luminance data;
a fourth generating unit 409 for generating a fourth luminance grayscale conversion coefficient from the fourth grayscale sequence and the third luminance data;
a fifth generating unit 410, configured to display a target detection picture through a screen to be detected, and generate a first image to be stitched and a second image to be stitched by shooting with the first camera and the second camera;
the gray scale correction unit 411 is configured to perform gray scale correction on the first image to be stitched and the second image to be stitched through the first luminance gray scale conversion coefficient, the second luminance gray scale conversion coefficient, the third luminance gray scale conversion coefficient, and the fourth luminance gray scale conversion coefficient;
and the splicing unit 412 is configured to splice the first image to be spliced and the second image to be spliced, and generate a splicing result map.
Referring to fig. 5, the present application provides another embodiment of an image stitching apparatus for a multi-camera screen, including:
a first fixing unit 501, configured to fix a diaphragm focal length of a lens of the first camera under a condition that a defect compensation system operating mode is satisfied, and attach standard uniform surface light to a front end of the lens of the first camera;
a second fixing unit 502, configured to fix a diaphragm focal length of a lens of the second camera under a condition that a working mode of the defect compensation system is satisfied, and attach the standard uniform surface light to a front end of the lens of the second camera;
a flat field correction unit 503, configured to perform flat field correction on the first camera and the second camera, and store the flat field correction data of the first camera and the second camera in a camera firmware;
a setting unit 504 for setting a first camera and a second camera on the defect compensation system;
the acquisition unit 505 is configured to display a group of detection pictures through the screen body to be detected, and generate a first image sequence and a second image sequence through shooting by a first camera and a second camera, where a detection picture in the group of detection pictures includes a first 0 gray scale region corresponding to a center position of the first camera, a second 0 gray scale region corresponding to the center position of the first camera, and a third 0 gray scale region corresponding to the center position of the screen body to be detected;
a noise reduction unit 506, configured to perform noise reduction processing on the generated image in the defect compensation system, where the noise reduction processing includes inherent noise subtraction, time domain filtering, and mean value filtering;
a detection unit 507, configured to detect a first 0 gray scale region, a second 0 gray scale region, and a third 0 gray scale region through a luminance meter, and generate first luminance data, second luminance data, and third luminance data;
a first calculating unit 508, configured to calculate a first gray sequence of a first 0 gray scale region and a third gray sequence of a third 0 gray scale region in the first image sequence;
a second calculating unit 509, configured to calculate a second gray scale sequence of a second 0 gray scale region and a fourth gray scale sequence of the second 0 gray scale region in the second image sequence;
a first generating unit 510 for generating a first luminance grayscale conversion coefficient from the first grayscale sequence and the first luminance data;
a second generating unit 511 configured to generate a second luminance gradation conversion coefficient from the third gradation sequence and the third luminance data;
a third generating unit 512, configured to generate a third luminance grayscale conversion coefficient according to the second grayscale sequence and the second luminance data;
a fourth generating unit 513 configured to generate a fourth luminance grayscale conversion coefficient from the fourth grayscale sequence and the third luminance data;
a fifth generating unit 514, configured to display a target detection picture through the screen to be detected, and generate a first image to be stitched and a second image to be stitched by shooting with the first camera and the second camera;
the construction unit 515 is configured to construct a calibrated dot matrix detection picture, and display the calibrated dot matrix detection picture through the screen body to be detected;
a shooting unit 516, configured to respectively shoot the screen body to be detected through the first camera and the second camera, and generate a first distorted dot array image and a second distorted dot array image;
a sixth generating unit 517, configured to generate a first distorted dot coordinate and a second distorted dot coordinate according to the first distorted dot image and the second distorted dot image;
a seventh generating unit 518, configured to generate a first undistorted coordinate and a second undistorted coordinate according to the first distorted dot matrix coordinate and the second distorted dot matrix coordinate;
optionally, the seventh generating unit 518 specifically includes:
acquiring an ideal pixel ratio of a defect compensation system, wherein the ideal pixel ratio is the ratio of the number of pixels of an image to the number of physical pixels of a display screen;
determining a central coordinate of a dot matrix point of a calibration dot matrix detection picture according to the first distorted dot matrix point coordinate and the second distorted dot matrix point coordinate;
and calculating according to the ideal pixel ratio, the central coordinates of the lattice points, the coordinates of the first distorted lattice points and the coordinates of the second distorted lattice points to generate a first undistorted coordinate and a second undistorted coordinate.
An eighth generating unit 519, configured to generate a first correction coefficient matrix according to the first distorted dot matrix coordinates and the first undistorted coordinates;
a ninth generating unit 520, configured to generate a second correction coefficient matrix according to the second distorted dot matrix coordinates and the second undistorted coordinates;
a creating unit 521 for creating a first null image and a second null image;
a first determining unit 522, configured to determine coordinates in the first empty image, and calculate floating-point number distortion coordinates according to the first correction coefficient matrix;
an assigning unit 523, configured to assign gray scale information on the first image to be stitched to a coordinate in the first empty image according to the floating point number distortion coordinate and a preset formula;
a second determining unit 524, configured to determine invalid coordinates in the first empty image;
the first distortion correcting unit 525 is configured to perform gray scale assignment on the first empty image in combination with the first image to be stitched according to the above manner, assign a 0 gray scale to the invalid coordinate, and determine the first empty image as the first image to be stitched after geometric distortion correction;
the second distortion correcting unit 526 is configured to perform gray scale assignment on the second null image according to the above method in combination with the second correction coefficient matrix and the first image to be stitched, assign a gray scale of 0 to the invalid coordinate, and determine the second null image as the second image to be stitched after geometric distortion correction;
the gray correction unit 527 is used for performing gray correction on the first image to be spliced and the second image to be spliced through the first brightness gray conversion coefficient, the second brightness gray conversion coefficient, the third brightness gray conversion coefficient and the fourth brightness gray conversion coefficient;
the clipping unit 528 is configured to clip 0 gray scale regions in the first image to be stitched and the second image to be stitched;
and the splicing unit 529 is configured to splice the first image to be spliced and the second image to be spliced, and generate a splicing result map.
Optionally, the splicing unit 529 specifically includes:
selecting a row of lattice points positioned at the center of the screen body to be detected from the calibration lattice detection picture as a reference lattice point row;
calculating corresponding coordinates of the reference dot matrix point array on the first image to be spliced and the second image to be spliced, and splicing lines on the first image to be spliced and the second image to be spliced;
respectively determining a first overlapping area and a second overlapping area on the first image to be stitched and the second image to be stitched according to the splicing seam;
performing pixel fusion on the first overlapping area and the corresponding overlapping area in the second image to be spliced;
performing pixel fusion on the second overlapping area and the corresponding overlapping area in the first image to be spliced;
and splicing the first image to be spliced and the second image to be spliced according to the splicing line.
Referring to fig. 6, the present application provides an electronic device, including:
a processor 601, a memory 602, an input-output unit 603, and a bus 604.
The processor 601 is connected to a memory 602, an input/output unit 603 and a bus 604.
The memory 601 holds a program, and the processor 601 calls the program to perform the image stitching method as in fig. 1 and 3-1, 3-2, 3-3, 3-4, and 3-5.
The present application provides a computer-readable storage medium having a program stored thereon, the program, when executed on a computer, performing an image stitching method as in fig. 1 and 3-1, 3-2, 3-3, 3-4, and 3-5.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.

Claims (10)

1. An image splicing method for a multi-camera shooting screen body is characterized by comprising the following steps:
setting a first camera and a second camera on the defect compensation system;
displaying a group of detection pictures through a screen body to be detected, and shooting through the first camera and the second camera to generate a first image sequence and a second image sequence, wherein the detection pictures in the group of detection pictures comprise a first 0 gray scale region corresponding to the center position of the first camera, a second 0 gray scale region corresponding to the center position of the first camera and a third 0 gray scale region corresponding to the center position of the screen body to be detected;
detecting the first 0 gray scale area, the second 0 gray scale area and the third 0 gray scale area through a luminance meter to generate first luminance data, second luminance data and third luminance data;
calculating a first gray sequence of the first 0 gray scale region and a third gray sequence of the third 0 gray scale region in the first image sequence;
calculating a second gray scale sequence of the second 0 gray scale region and a fourth gray scale sequence of the second 0 gray scale region in the second image sequence;
generating a first luminance grayscale conversion coefficient according to the first grayscale sequence and the first luminance data;
generating a second luminance grayscale conversion coefficient according to the third grayscale sequence and the third luminance data;
generating a third luminance grayscale conversion coefficient according to the second grayscale sequence and the second luminance data;
generating a fourth luminance grayscale conversion coefficient according to the fourth grayscale sequence and the third luminance data;
displaying a target detection picture through the screen body to be detected, and shooting through the first camera and the second camera to generate a first image to be spliced and a second image to be spliced;
performing gray correction on the first image to be spliced and the second image to be spliced through the first brightness gray conversion coefficient, the second brightness gray conversion coefficient, the third brightness gray conversion coefficient and the fourth brightness gray conversion coefficient;
and splicing the first image to be spliced and the second image to be spliced to generate a splicing result graph.
2. The image stitching method according to claim 1, wherein after the target detection picture is displayed by the screen body to be detected and the first image to be stitched and the second image to be stitched are generated by the first camera and the second camera, before the first image to be stitched and the second image to be stitched are subjected to the gray scale correction by the first luminance grayscale conversion coefficient, the second luminance grayscale conversion coefficient, the third luminance grayscale conversion coefficient and the fourth luminance grayscale conversion coefficient, the image stitching method further comprises:
constructing a calibration dot matrix detection picture, and displaying the calibration dot matrix detection picture through the screen body to be detected;
shooting the screen body to be detected through the first camera and the second camera respectively to generate a first distorted dot matrix image and a second distorted dot matrix image;
respectively generating a first distorted dot matrix coordinate and a second distorted dot matrix coordinate according to the first distorted dot matrix image and the second distorted dot matrix image;
generating a first undistorted coordinate and a second undistorted coordinate according to the first distorted dot matrix coordinate and the second distorted dot matrix coordinate;
generating a first correction coefficient matrix according to the first distorted dot matrix coordinates and the first undistorted coordinates;
generating a second correction coefficient matrix according to the second distorted dot matrix coordinates and the second undistorted coordinates;
creating a first null image and a second null image;
determining coordinates in the first empty image, and calculating floating point number distortion coordinates according to a first correction coefficient matrix;
assigning gray scale information on the first image to be spliced to the coordinates in the first empty image according to the floating point number distortion coordinates and a preset formula;
determining invalid coordinates in the first aerial image;
performing gray scale assignment on the first empty image by combining the first image to be spliced according to the mode, assigning the invalid coordinate to 0 gray scale, and determining the first empty image as the first image to be spliced after geometric distortion correction;
according to the method, gray scale assignment is carried out on the second null image by combining the second correction coefficient matrix and the first image to be spliced, the invalid coordinate is assigned with 0 gray scale, and the second null image is determined as the second image to be spliced after geometric distortion correction.
3. The image stitching method according to claim 2, wherein after the performing the gamma correction on the first image to be stitched and the second image to be stitched through the first luminance/grayscale conversion coefficient, the second luminance/grayscale conversion coefficient, the third luminance/grayscale conversion coefficient and the fourth luminance/grayscale conversion coefficient, and before the stitching the first image to be stitched and the second image to be stitched to generate a stitching result map, the image stitching method further comprises:
and cutting the 0 gray scale area in the first image to be spliced and the second image to be spliced.
4. The image stitching method of claim 2, wherein generating first undistorted coordinates and second undistorted coordinates from the first distorted lattice point coordinates and the second distorted lattice point coordinates comprises:
acquiring an ideal pixel ratio of the defect compensation system, wherein the ideal pixel ratio is the ratio of the number of pixels of an image to the number of physical pixels of a display screen;
determining the central coordinates of the lattice points of the calibration lattice detection picture according to the first distorted lattice point coordinates and the second distorted lattice point coordinates;
and calculating according to the ideal pixel ratio, the central coordinates of the lattice points, the coordinates of the first distorted lattice points and the coordinates of the second distorted lattice points to generate a first undistorted coordinate and a second undistorted coordinate.
5. The image stitching method according to claim 2, wherein the stitching the first image to be stitched and the second image to be stitched to generate a stitching result map includes:
selecting a row of lattice points positioned at the center of the screen body to be detected from the calibration lattice detection picture as a reference lattice point row;
calculating the corresponding coordinates of the reference dot matrix point array on the first image to be spliced and the second image to be spliced, and generating a splicing line on the first image to be spliced and the second image to be spliced;
respectively determining a first overlapping area and a second overlapping area on the first image to be stitched and the second image to be stitched according to the splicing seam;
performing pixel fusion on the first overlapping area and the corresponding overlapping area in the second image to be spliced;
performing pixel fusion on the second overlapping area and the corresponding overlapping area in the first image to be spliced;
and splicing the first image to be spliced and the second image to be spliced according to the splicing line.
6. The image stitching method according to any one of claims 1 to 5, wherein after the first camera and the second camera are arranged on the defect compensation system, before the set of detection pictures is displayed by the screen body to be detected and the first image sequence and the second image sequence are generated by shooting of the first camera and the second camera, the image stitching method further comprises:
fixing the aperture focal length of the lens of the first camera under the condition of meeting the working mode of the defect compensation system, and enabling standard uniform surface light to be close to the front end of the lens of the first camera;
fixing the aperture focal length of the lens of the second camera under the condition of meeting the working mode of the defect compensation system, and enabling standard uniform surface light to be close to the front end of the lens of the second camera;
and performing flat field correction on the first camera and the second camera, and saving the flat field correction data of the first camera and the second camera to camera firmware.
7. The image stitching method according to any one of claims 1 to 5, wherein after the displaying of a group of detection pictures by the screen to be detected and the generation of the first image sequence and the second image sequence by the shooting of the first camera and the second camera, the image stitching method further comprises:
and performing noise reduction processing on the generated image in the defect compensation system, wherein the noise reduction processing comprises inherent noise reduction, time domain filtering and mean value filtering.
8. The utility model provides an image splicing apparatus of multicamera shooting screen body which characterized in that includes:
a setting unit for setting the first camera and the second camera on the defect compensation system;
the acquisition unit is used for displaying a group of detection pictures through the screen body to be detected, and generating a first image sequence and a second image sequence through shooting of the first camera and the second camera, wherein the detection pictures in the group of detection pictures comprise a first 0 gray scale region corresponding to the center position of the first camera, a second 0 gray scale region corresponding to the center position of the first camera and a third 0 gray scale region corresponding to the center position of the screen body to be detected;
the detection unit is used for detecting the first 0 gray scale area, the second 0 gray scale area and the third 0 gray scale area through a luminance meter to generate first luminance data, second luminance data and third luminance data;
the first calculation unit is used for calculating a first gray sequence of the first 0 gray scale region and a third gray sequence of the third 0 gray scale region in the first image sequence;
the second calculation unit is used for calculating a second gray scale sequence of the second 0 gray scale region and a fourth gray scale sequence of the second 0 gray scale region in the second image sequence;
a first generation unit configured to generate a first luminance grayscale conversion coefficient from the first grayscale sequence and the first luminance data;
a second generation unit configured to generate a second luminance gradation conversion coefficient from the third gradation sequence and the third luminance data;
a third generating unit configured to generate a third luminance grayscale conversion coefficient according to the second grayscale sequence and the second luminance data;
a fourth generation unit configured to generate a fourth luminance grayscale conversion coefficient from the fourth grayscale sequence and the third luminance data;
the fifth generating unit is used for displaying a target detection picture through the screen body to be detected, and generating a first image to be spliced and a second image to be spliced through shooting by the first camera and the second camera;
the gray correction unit is used for carrying out gray correction on the first image to be spliced and the second image to be spliced through the first brightness gray conversion coefficient, the second brightness gray conversion coefficient, the third brightness gray conversion coefficient and the fourth brightness gray conversion coefficient;
and the sixth generating unit is used for splicing the first image to be spliced and the second image to be spliced to generate a splicing result graph.
9. An electronic device, comprising:
the device comprises a processor, a memory, an input and output unit and a bus;
the processor is connected with the memory, the input and output unit and the bus;
the memory holds a program that the processor calls to execute the image stitching method according to any one of claims 1 to 7.
10. A computer-readable storage medium having a program stored thereon, the program, when executed on a computer, performing the image stitching method according to any one of claims 1 to 7.
CN202210274472.0A 2022-03-21 2022-03-21 Image splicing method and related device for multi-camera shooting screen body Active CN114359055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210274472.0A CN114359055B (en) 2022-03-21 2022-03-21 Image splicing method and related device for multi-camera shooting screen body

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210274472.0A CN114359055B (en) 2022-03-21 2022-03-21 Image splicing method and related device for multi-camera shooting screen body

Publications (2)

Publication Number Publication Date
CN114359055A CN114359055A (en) 2022-04-15
CN114359055B true CN114359055B (en) 2022-05-31

Family

ID=81095237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210274472.0A Active CN114359055B (en) 2022-03-21 2022-03-21 Image splicing method and related device for multi-camera shooting screen body

Country Status (1)

Country Link
CN (1) CN114359055B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117078677B (en) * 2023-10-16 2024-01-30 江西天鑫冶金装备技术有限公司 Defect detection method and system for starting sheet

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011010230A (en) * 2009-06-29 2011-01-13 Toshiba Teli Corp Luminance correction circuit for imaging device
JP2012238025A (en) * 2012-08-13 2012-12-06 Nlt Technologies Ltd Driving method for backlight of liquid crystal display, device therefor and liquid crystal display
CN103258321A (en) * 2013-05-14 2013-08-21 杭州海康希牧智能科技有限公司 Image stitching method
US8736685B1 (en) * 2013-12-11 2014-05-27 Anritsu Company Systems and methods for measuring brightness response of a camera operating in automatic exposure mode
CN104882098A (en) * 2015-06-08 2015-09-02 广东威创视讯科技股份有限公司 Image correction method based on LED splicing display screen and image sensor
CN105244007A (en) * 2015-10-30 2016-01-13 青岛海信电器股份有限公司 Method and device for generating gray scale correction table of curved surface display screen
CN105427822A (en) * 2015-12-29 2016-03-23 深圳市华星光电技术有限公司 Gray-scale compensation data resetting device and method
WO2017024787A1 (en) * 2015-08-12 2017-02-16 青岛海信电器股份有限公司 Image correction method and device
JP2017142408A (en) * 2016-02-12 2017-08-17 日本電信電話株式会社 Information presenting system, information presenting method, and data structure
CN107221306A (en) * 2017-06-29 2017-09-29 上海顺久电子科技有限公司 Method, device and the display device of brightness of image in correction splicing device screen
CN108508022A (en) * 2018-03-28 2018-09-07 苏州巨能图像检测技术有限公司 Polyphaser joining image-forming detection method
JP2019004230A (en) * 2017-06-12 2019-01-10 キヤノン株式会社 Image processing device and method, and imaging apparatus
CN111402825A (en) * 2020-03-31 2020-07-10 浙江宇视科技有限公司 Screen correction method, device and system and logic board

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9286941B2 (en) * 2001-05-04 2016-03-15 Legend3D, Inc. Image sequence enhancement and motion picture project management system
KR102552012B1 (en) * 2018-12-26 2023-07-05 주식회사 엘엑스세미콘 Mura compensation system
US11158056B2 (en) * 2019-06-26 2021-10-26 Intel Corporation Surround camera system with seamless stitching for arbitrary viewpoint selection

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011010230A (en) * 2009-06-29 2011-01-13 Toshiba Teli Corp Luminance correction circuit for imaging device
JP2012238025A (en) * 2012-08-13 2012-12-06 Nlt Technologies Ltd Driving method for backlight of liquid crystal display, device therefor and liquid crystal display
CN103258321A (en) * 2013-05-14 2013-08-21 杭州海康希牧智能科技有限公司 Image stitching method
US8736685B1 (en) * 2013-12-11 2014-05-27 Anritsu Company Systems and methods for measuring brightness response of a camera operating in automatic exposure mode
CN104882098A (en) * 2015-06-08 2015-09-02 广东威创视讯科技股份有限公司 Image correction method based on LED splicing display screen and image sensor
WO2017024787A1 (en) * 2015-08-12 2017-02-16 青岛海信电器股份有限公司 Image correction method and device
CN105244007A (en) * 2015-10-30 2016-01-13 青岛海信电器股份有限公司 Method and device for generating gray scale correction table of curved surface display screen
CN105427822A (en) * 2015-12-29 2016-03-23 深圳市华星光电技术有限公司 Gray-scale compensation data resetting device and method
JP2017142408A (en) * 2016-02-12 2017-08-17 日本電信電話株式会社 Information presenting system, information presenting method, and data structure
JP2019004230A (en) * 2017-06-12 2019-01-10 キヤノン株式会社 Image processing device and method, and imaging apparatus
CN107221306A (en) * 2017-06-29 2017-09-29 上海顺久电子科技有限公司 Method, device and the display device of brightness of image in correction splicing device screen
CN108508022A (en) * 2018-03-28 2018-09-07 苏州巨能图像检测技术有限公司 Polyphaser joining image-forming detection method
CN111402825A (en) * 2020-03-31 2020-07-10 浙江宇视科技有限公司 Screen correction method, device and system and logic board

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
video completion by motion field transfer;Shiratori等;《IEEE》;20061231;17-22 *
基于数字摄像技术测量气象能见度――双亮度差方法和试验研究;吕伟涛等;《大气科学》;20040708(第04期);80-91 *
拼接显示墙颜色自动校正***设计;李静等;《微型机与应用》;20171231(第01期);101-102、106 *

Also Published As

Publication number Publication date
CN114359055A (en) 2022-04-15

Similar Documents

Publication Publication Date Title
TWI511122B (en) Calibration method and system to correct for image distortion of a camera
US7529424B2 (en) Correction of optical distortion by image processing
US7453502B2 (en) Lens shading algorithm
KR20190004699A (en) Mura phenomenon compensation method
US8463068B2 (en) Methods, systems and apparatuses for pixel value correction using multiple vertical and/or horizontal correction curves
EP2031861A2 (en) Method of correcting image distortion and apparatus for processing image using the method
WO2005002240A1 (en) Method for calculating display characteristic correction data, program for calculating display characteristic correction data, and device for calculating display characteristic correction data
CN103200409B (en) Color correction method of multi-projector display system
US10839729B2 (en) Apparatus for testing display panel and driving method thereof
JP2009171008A (en) Color reproduction apparatus and color reproduction program
CN105049734A (en) License camera capable of giving shooting environment shooting prompt and shooting environment detection method
JP6461426B2 (en) Brightness adjusting apparatus and method, image display system, program, and recording medium
CN114757853B (en) Method and system for acquiring flat field correction function and flat field correction method and system
CN114359055B (en) Image splicing method and related device for multi-camera shooting screen body
JP5986461B2 (en) Image processing apparatus, image processing method, program, and storage medium
CN113920037B (en) Endoscope picture correction method, device, correction system and storage medium
CN113542709B (en) Projection image brightness adjusting method and device, storage medium and projection equipment
JP5240517B2 (en) Car camera calibration system
KR100645634B1 (en) Automatic correction method and apparatus for lens shading
CN108055487B (en) Method and system for consistent correction of image sensor array nonuniformity
CN113114975B (en) Image splicing method and device, electronic equipment and storage medium
JP5446285B2 (en) Image processing apparatus and image processing method
CN115100078B (en) Method and related device for correcting and filling dot matrix coordinates in curved screen image
CN113542708B (en) Projection surface parameter confirmation method and device, storage medium and projection equipment
CN114792288B (en) Curved screen image gray scale correction method and related device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant