CN115442591B - Camera quality testing method, system, electronic device and storage medium - Google Patents

Camera quality testing method, system, electronic device and storage medium

Info

Publication number
CN115442591B
Authority
CN
China
Prior art keywords
area
depth
camera
point
central
Prior art date
Legal status
Active
Application number
CN202211373241.1A
Other languages
Chinese (zh)
Other versions
CN115442591A (en)
Inventor
刘祺昌
王海彬
李东洋
化雪诚
户磊
Current Assignee
Anhui Lushenshi Technology Co ltd
Original Assignee
Hefei Dilusense Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hefei Dilusense Technology Co Ltd
Priority to CN202211373241.1A
Publication of CN115442591A
Application granted
Publication of CN115442591B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N17/002: Diagnosis, testing or measuring for television systems or their details for television cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiments of the present application relate to the technical field of machine vision and disclose a camera quality testing method, system, electronic device and storage medium. The method comprises the following steps: in a depth map obtained by shooting a preset detection board with a camera to be tested, determining a first depth area corresponding to the central area of the detection board and a second depth area corresponding to the peripheral area of the detection board; determining the relative precision of the camera to be tested according to the depth values of the points in the first depth area; determining the absolute precision of the camera to be tested according to the depth values of the points in the second depth area and the real distance between the camera to be tested and the peripheral area; and reworking the camera to be tested if at least one of the relative precision and the absolute precision is smaller than a preset precision threshold.

Description

Camera quality testing method, system, electronic device and storage medium
Technical Field
The embodiment of the application relates to the technical field of machine vision, in particular to a camera quality testing method, a camera quality testing system, an electronic device and a storage medium.
Background
With the rapid development of machine vision technology, simple two-dimensional images can no longer meet the demands of production and daily life, and various technologies based on three-dimensional images have matured rapidly. Compared with a two-dimensional image, a three-dimensional image has the important characteristic of carrying depth information, and is widely used in fields such as security monitoring, consumer electronics, smart homes, motion-sensing games, and transportation and distribution. Depth information and three-dimensional images of a target object can be acquired with a structured light camera. The core components of a structured light camera include a speckle projector and an infrared lens: the speckle projector projects a structured light pattern onto the target object, and the infrared lens captures the projection of the structured light pattern on the target object to obtain an infrared image; depth recovery and three-dimensional calculation are then performed according to the deformation of the light structure, and the depth information and three-dimensional image of the target object are finally obtained.
A structured light camera is a very precise instrument and requires strict quality control during production to ensure that every camera leaving the factory has high depth recovery accuracy. In the industry, technicians generally operate each outgoing structured light camera one by one to take pictures and judge its quality based on the quality of the captured depth map. This quality testing process is time-consuming, requires a large amount of labor, and relies heavily on subjective judgment, so its efficiency and accuracy are low.
Disclosure of Invention
An object of the embodiments of the present application is to provide a camera quality testing method, system, electronic device, and storage medium, which can perform quality testing on a structured light camera highly automatically, objectively, and efficiently, and effectively improve the production efficiency and factory quality of the structured light camera.
In order to solve the above technical problem, an embodiment of the present application provides a camera quality testing method, including the following steps: in a depth map obtained by shooting a preset detection board with a camera to be tested, determining a first depth area corresponding to the central area of the detection board and a second depth area corresponding to the peripheral area of the detection board, wherein the central area contains a plurality of white patterns and black patterns arranged in an array at intervals, the peripheral area is a white plane, all the patterns are higher than the white plane, and the height differences between different patterns and the white plane are different; determining the relative precision of the camera to be tested according to the depth values of the points in the first depth area; determining the absolute precision of the camera to be tested according to the depth values of the points in the second depth area and the real distance between the camera to be tested and the peripheral area; and reworking the camera to be tested if at least one of the relative precision and the absolute precision is smaller than a preset precision threshold.
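As a compact illustration of the final decision rule of this method, a minimal Python sketch is given below; the function and argument names are assumptions, not part of the claimed implementation.

```python
def quality_verdict(relative_precision_value, absolute_precision_value, precision_threshold):
    """Final pass/rework decision of the claimed method: the camera to be tested is
    reworked if at least one of the two precision values is smaller than the threshold."""
    if (relative_precision_value < precision_threshold
            or absolute_precision_value < precision_threshold):
        return "rework"
    return "pass"
```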
An embodiment of the present application further provides a camera quality testing system, including: the detection board, which comprises a central area and a peripheral area, wherein the central area is provided with a plurality of white patterns and black patterns arranged in an array at intervals, the peripheral area is a white plane, all the patterns are higher than the white plane, and the height differences between different patterns and the white plane are different; a shooting module, configured to shoot the detection board with a camera to be tested to obtain a depth map; a positioning module, configured to determine, in the depth map, a first depth area corresponding to the central area of the detection board and a second depth area corresponding to the peripheral area of the detection board; a relative precision calculation module, configured to determine the relative precision of the camera to be tested according to the depth values of the points in the first depth area; an absolute precision calculation module, configured to determine the absolute precision of the camera to be tested according to the depth values of the points in the second depth area and the real distance between the camera to be tested and the peripheral area; and a judging module, configured to judge whether at least one of the relative precision and the absolute precision is smaller than a preset precision threshold, and to have the camera to be tested reworked when this is the case.
An embodiment of the present application further provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the camera quality testing method described above.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program, which when executed by a processor implements the above-mentioned camera quality testing method.
With the camera quality testing method, system, electronic device and storage medium provided by the embodiments of the present application, the camera to be tested is first called to shoot a preset detection board to obtain a depth map. The preset detection board comprises a central area and a peripheral area: the central area is provided with a plurality of white patterns and black patterns arranged in an array at intervals, the peripheral area is a white plane, all the patterns in the central area are higher than the white plane, and the height differences between different patterns and the white plane are different. After the depth map is obtained, a first depth area corresponding to the central area of the detection board and a second depth area corresponding to the peripheral area of the detection board are determined in the depth map. The relative precision of the camera to be tested is determined according to the depth values of the points in the first depth area, and the absolute precision of the camera to be tested is determined according to the depth values of the points in the second depth area and the real distance between the camera to be tested and the peripheral area of the detection board. If at least one of the relative precision and the absolute precision is smaller than a preset precision threshold, the camera to be tested is reworked. When structured light cameras undergo factory quality testing in the industry, technicians inspect each camera according to the depth map it captures; the testing process is highly subjective, and its efficiency and accuracy are low. In the embodiments of the present application, by contrast, the camera to be tested shoots a customized detection board whose central area serves as the relative precision detection area and whose peripheral area serves as the absolute precision detection area. From the depth map that the camera to be tested captures of the detection board, the testing system can scientifically and accurately calculate the relative precision and the absolute precision of the camera, both of which are good measures of its quality. When either of them is smaller than the precision threshold, the camera to be tested fails the quality test. The quality of structured light cameras is thus tested in a highly automated, objective and efficient manner, which effectively improves their production efficiency and delivery quality.
In addition, determining, in a depth map obtained by shooting a preset detection board with a camera to be tested, a first depth area corresponding to the central area of the detection board and a second depth area corresponding to the peripheral area of the detection board includes: acquiring an infrared image and a depth map obtained by shooting the detection board with the camera to be tested; determining, in the infrared image, a first infrared region corresponding to the central area and an infrared detection board region corresponding to the entire detection board; determining the first depth area corresponding to the central area in the depth map according to the position coordinates of the first infrared region; determining a depth detection board region corresponding to the entire detection board in the depth map according to the position coordinates of the infrared detection board region; and subtracting the first depth area from the depth detection board region to obtain the second depth area corresponding to the peripheral area. Directly performing template matching in the depth map to locate the region corresponding to the detection board would involve a large amount of matching computation with low matching precision. During the quality test, the camera to be tested therefore captures an infrared image and a depth map at the same time, and the two are naturally pixel-aligned; the server locates the regions corresponding to the central and peripheral areas of the detection board in the infrared image, and the first and second depth areas can then be determined from the position coordinates. The matching computation is small and the matching precision is high, which further improves the accuracy of the camera quality test.
In addition, the central area is rectangular, and the plurality of white patterns and black patterns arranged in an array at intervals are specifically white rectangular blocks and black rectangular blocks arranged in an array at intervals: a first white rectangular block is located in the upper left part of the central area, a first black rectangular block in the upper right part, a second black rectangular block in the lower left part, and a second white rectangular block in the lower right part. Determining, in the infrared image, the first infrared region corresponding to the central area and the infrared detection board region corresponding to the entire detection board includes: determining, within a first preset search range centered on the center point of the infrared image, the central homonymous point with the highest matching degree with the center point of the central area; determining the infrared detection board region corresponding to the entire detection board in the infrared image according to a preset search step, with the central homonymous point as the center; determining a conversion ratio according to the size of the infrared detection board region and the size of the detection board; and determining the first infrared region corresponding to the central area in the infrared image according to the center point of the central area, the upper right and lower right corner points of the first black rectangular block, the upper left and lower left corner points of the second black rectangular block, the central homonymous point, and the conversion ratio.
In addition, determining the first infrared region corresponding to the central area in the infrared image according to the center point of the central area, the upper right and lower right corner points of the first black rectangular block, the upper left and lower left corner points of the second black rectangular block, the central homonymous point, and the conversion ratio includes: determining a second search range in the infrared detection board region according to the center point of the central area, the upper right and lower right corner points of the first black rectangular block, the central homonymous point, and the conversion ratio; determining, within the second search range, a first homonymous point with the highest matching degree with the upper right corner point of the first black rectangular block and a second homonymous point with the highest matching degree with its lower right corner point; determining a third search range in the infrared detection board region according to the center point of the central area, the upper left and lower left corner points of the second black rectangular block, the central homonymous point, and the conversion ratio; determining, within the third search range, a third homonymous point with the highest matching degree with the upper left corner point of the second black rectangular block and a fourth homonymous point with the highest matching degree with its lower left corner point; and determining the first infrared region corresponding to the central area according to the first, second, third and fourth homonymous points. A first infrared region determined by ratio conversion alone is not the most accurate and may contain errors, whereas the upper right and lower right corner points of the first black rectangular block and the upper left and lower left corner points of the second black rectangular block carry rich features. A rough search range is therefore first determined by ratio conversion, and matching is then performed within that range, so that the first infrared region corresponding to the central area is determined more scientifically and accurately, further improving the camera quality detection effect.
In addition, after determining the central homonymous point with the highest matching degree with the center point of the central area, the method further includes: calculating the distance between the center point of the infrared image and the central homonymous point; judging whether the distance is larger than a preset displacement threshold; reworking the camera to be tested if the distance is larger than the displacement threshold; and translating the camera to be tested on the clamp according to the distance if the distance is smaller than or equal to the displacement threshold. In the ideal state the center point of the infrared image and the central homonymous point coincide; if they do not, the camera to be tested has probably been displaced on the clamp and a translation is needed to correct its pose. If the distance between the two points is very large, the camera to be tested is improperly assembled and its quality is substandard, so it is reworked directly; this prevents cameras with serious quality problems from wasting quality detection resources and further improves the efficiency of quality detection.
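A compact sketch of this displacement check follows, assuming the two points are pixel coordinates; the function name and the way the correction is reported are illustrative assumptions.

```python
import math

def check_displacement(ir_center, central_homonymous_point, displacement_threshold):
    # Offset between the infrared-image center and the matched central homonymous point.
    dx = central_homonymous_point[0] - ir_center[0]
    dy = central_homonymous_point[1] - ir_center[1]
    distance = math.hypot(dx, dy)
    if distance > displacement_threshold:
        return "rework", None            # assembly unqualified, quality substandard
    return "translate", (dx, dy)         # translate the camera on the clamp by this offset
```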
In addition, after determining the first infrared region corresponding to the central area, the method further includes: calculating the difference between the vertical coordinates of the first and second homonymous points, or between those of the third and fourth homonymous points; judging whether the vertical coordinate difference is larger than a preset rotation threshold; reworking the camera to be tested if the difference is larger than the rotation threshold; and rotating the camera to be tested about the z axis on the clamp according to the difference if it is smaller than or equal to the rotation threshold. In the ideal state, the vertical coordinates of the first and second homonymous points, and of the third and fourth homonymous points, are identical; if they are not, the camera to be tested has probably rotated about the z axis on the clamp and needs to be rotated about the z axis to correct its pose. If the vertical coordinate difference is too large, the camera to be tested is improperly assembled and its quality is substandard, so it is reworked directly; this prevents cameras with serious quality problems from wasting quality detection resources and further improves the efficiency of quality detection.
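The corresponding rotation check can be sketched in the same spirit; which pair of homonymous points is passed in is left to the caller, and the names are illustrative assumptions.

```python
def check_z_rotation(point_a, point_b, rotation_threshold):
    # point_a / point_b: the first and second (or third and fourth) homonymous points (x, y).
    vertical_difference = abs(point_a[1] - point_b[1])
    if vertical_difference > rotation_threshold:
        return "rework", None                    # assembly unqualified, quality substandard
    return "rotate_z", vertical_difference       # rotate the camera about the z axis accordingly
```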
In addition, after determining the central homonymous point with the highest matching degree with the center point of the central area, the method further includes: determining, in the depth map, a tilt detection reference point with the same coordinates as the central homonymous point; determining on the depth map a first, a second, a third and a fourth tilt detection area of equal size and equal distance from the tilt detection reference point, wherein the first tilt detection area lies above the reference point, the second below it, the third to its left and the fourth to its right; calculating the first depth value mean of the first tilt detection area, the second depth value mean of the second, the third depth value mean of the third, and the fourth depth value mean of the fourth; calculating a first depth value difference between the first and second depth value means and a second depth value difference between the third and fourth depth value means; reworking the camera to be tested if at least one of the first and second depth value differences is larger than a preset tilt threshold; and rotating the camera to be tested about the x axis and/or the y axis on the clamp according to the two differences if both are smaller than or equal to the tilt threshold. In the ideal state there is no difference between the depth value means; if there is, the camera to be tested has tilted on the clamp, that is, rotated about the x axis and/or the y axis, and needs to be rotated accordingly to correct its pose. If a depth value mean difference is too large, the camera to be tested is improperly assembled and its quality is substandard, so it is reworked directly; this prevents cameras with serious quality problems from wasting quality detection resources and further improves the efficiency of quality detection.
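A sketch of the tilt check is shown below, assuming the depth map is a numpy array; the placement offset and window size of the four tilt detection areas are illustrative assumptions.

```python
import numpy as np

def check_tilt(depth_map, reference_point, offset, half_window, tilt_threshold):
    x, y = reference_point                        # same coordinates as the central homonymous point

    def region_mean(cx, cy):
        return float(np.mean(depth_map[cy - half_window:cy + half_window,
                                       cx - half_window:cx + half_window]))

    first_mean = region_mean(x, y - offset)       # first tilt detection area (above)
    second_mean = region_mean(x, y + offset)      # second tilt detection area (below)
    third_mean = region_mean(x - offset, y)       # third tilt detection area (left)
    fourth_mean = region_mean(x + offset, y)      # fourth tilt detection area (right)

    first_difference = abs(first_mean - second_mean)
    second_difference = abs(third_mean - fourth_mean)
    if first_difference > tilt_threshold or second_difference > tilt_threshold:
        return "rework", None
    return "rotate_xy", (first_difference, second_difference)   # rotate about x and/or y on the clamp
```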
In addition, determining the relative precision of the camera to be tested according to the depth values of the points in the first depth area includes: calculating the mean depth value of the region corresponding to each pattern in the first depth area; randomly selecting the region corresponding to one pattern as a compensation reference region and taking its mean depth value as the compensation reference value; compensating the depth values of the points in the region corresponding to each pattern according to the difference between the compensation reference value and the mean depth value of that region; merging the compensation reference region and the compensated regions corresponding to the other patterns into a relative precision detection region; and calculating a first standard deviation of the relative precision detection region and taking three times the first standard deviation as the relative precision of the camera to be tested. Determining the relative precision by way of depth compensation makes the result more accurate and reliable, so that the quality of the camera to be tested is better measured.
In addition, determining the absolute precision of the camera to be tested according to the depth values of the points in the second depth area and the real distance between the camera to be tested and the peripheral area includes: calculating a third depth value difference between the depth value of each point in the second depth area and the real distance; and calculating a second standard deviation from the third depth value differences and taking three times the second standard deviation as the absolute precision of the camera to be tested.
Additionally, the precision threshold is determined by: acquiring the focal length and the baseline length of the camera to be tested; and determining the precision threshold according to the real distance, the focal length of the camera to be tested, and the baseline length, which can be expressed by the following formula: C1 = Z^2 / (F × L), where Z is the real distance, F is the focal length of the camera to be tested, L is the baseline length of the camera to be tested, and C1 is the precision threshold. Since the specific circumstances of different cameras differ, testing them all against a single uniform precision threshold would be unscientific and inconsistent with reality. In the embodiments of the present application the precision threshold is determined adaptively, that is, a precision threshold matching the actual situation of each camera to be tested is determined for that camera, which further improves the camera quality detection effect.
In addition, after determining the absolute precision of the camera to be tested, the method further includes: calculating the depth value of each pixel in the first depth area and the second depth area; determining an effective depth range according to the real distance and a preset effective threshold; taking the pixels in the first and second depth areas whose depth values fall outside the effective depth range as void points and counting them; calculating the void rate of the depth map from the number of void points and the total number of pixels in the first and second depth areas; and reworking the camera to be tested if the void rate is greater than a preset void rate threshold. The camera quality detection items thus also include a void rate check: if the void rate of the depth map shot by the camera to be tested is too high, the assembly is unqualified and the quality is substandard, and the camera can leave the factory only after rework and repair. This further improves the camera quality detection effect as well as the production efficiency and delivery quality of structured light cameras.
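A minimal void-rate sketch is given below, assuming the effective depth range is a symmetric band around the real distance; that band and the array-based representation of the two depth areas are assumptions for illustration.

```python
import numpy as np

def void_rate(first_depth_area, second_depth_area, true_distance, effective_threshold):
    depths = np.concatenate([np.ravel(first_depth_area), np.ravel(second_depth_area)])
    lower = true_distance - effective_threshold   # effective depth range (assumed symmetric)
    upper = true_distance + effective_threshold
    void_points = np.count_nonzero((depths < lower) | (depths > upper))
    return void_points / depths.size              # compare against the preset void rate threshold
```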
Drawings
One or more embodiments are illustrated by the corresponding figures in the drawings, which are not meant to be limiting.
FIG. 1 is a flow chart of a camera quality testing method provided by an embodiment of the present application;
FIG. 2 is a top view of a sensing plate provided by one embodiment of the present application;
FIG. 3 is a side view of a sensing plate provided by one embodiment of the present application;
FIG. 4 is a perspective view of a detector board provided by one embodiment of the present application;
FIG. 5 is a flow chart illustrating determining relative accuracy of a camera under test based on depth values of points in a first depth zone according to an embodiment of the present application;
FIG. 6 is a flowchart for determining the absolute accuracy of the camera under test according to the depth values of the points in the second depth area and the real distance between the camera under test and the peripheral area, in an embodiment of the present application;
fig. 7 is a flowchart of determining a first depth area corresponding to a central area of a pickup plate and a second depth area corresponding to a peripheral area of the pickup plate in a depth map obtained by a camera to be tested by photographing a preset pickup plate, according to an embodiment of the present application;
FIG. 8 is a flow chart of the determination of a first infrared region in an infrared map corresponding to a center region, and an infrared pickup plate region corresponding to an entire pickup plate in one embodiment of the present application;
fig. 9 is a flowchart of determining a first infrared region corresponding to a central region in an infrared image according to a central point of the central region, upper right corner points and lower right corner points of a first black rectangular block, upper left corner points and lower left corner points of a second black rectangular block, a central homonymy point, and a conversion ratio in an embodiment of the present application;
FIG. 10 is a schematic illustration of a second search scope and a third search scope in an embodiment of the present application;
FIG. 11 is a flowchart illustrating displacement gesture detection of a camera under test based on a center homonym point in an embodiment of the present application;
FIG. 12 is a flowchart illustrating a rotational gesture detection of a camera under test based on a first homologous point, a second homologous point, a third homologous point, and a fourth homologous point, according to an embodiment of the present application;
FIG. 13 is a flow chart illustrating tilt gesture detection for a camera under test according to an embodiment of the present application;
FIG. 14 is a flowchart illustrating a void rate detection performed by a camera under test according to an embodiment of the present application;
FIG. 15 is a schematic diagram of a machine quality testing system provided by another embodiment of the present application;
fig. 16 is a schematic structural diagram of an electronic device according to another embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the embodiments of the present application are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate, however, that numerous technical details are set forth in the embodiments in order to provide a better understanding of the present application, and that the technical solution claimed in the present application can still be implemented without these technical details and with various changes and modifications based on the following embodiments. The division into embodiments below is for convenience of description and should not constitute any limitation on the specific implementation of the present application; the embodiments may be combined with and refer to each other where there is no contradiction.
An embodiment of the present application relates to a method for testing camera quality, which is applied to an electronic device, where the electronic device may be a terminal or a server, and the electronic device in this embodiment and the following embodiments are described by taking the server as an example.
The specific flow of the camera quality testing method of the embodiment may be as shown in fig. 1, and includes:
step 101, in a depth map obtained by shooting a preset detection plate by a camera to be detected, determining a first depth area corresponding to a central area of the detection plate and a second depth area corresponding to a peripheral area of the detection plate.
Specifically, the camera quality testing method provided by the present application relies on an important tool: a preset, customized detection board. The detection board comprises a central area and a peripheral area; the central area is located at the center of the board, and the peripheral area is the board minus the central area. The central area is provided with a plurality of white patterns and black patterns arranged in an array at intervals, while the peripheral area is a white plane. The central area is higher than the peripheral area as a whole, that is, all the patterns in the central area are higher than the peripheral area, and the height differences between the different patterns in the central area and the white plane of the peripheral area are different.
In one example, the detector plate is rectangular, the central area of the detector plate is embodied as two white rectangular blocks and two black rectangular blocks arranged at intervals in an array, the top view of the detector plate is shown in fig. 2, the side view of the detector plate is shown in fig. 3, the perspective view of the detector plate is shown in fig. 4, the central area of the detector plate is clockwise arranged from the top left as a first white rectangular block, a first black rectangular block, a second white rectangular block and a second black rectangular block, the first white rectangular block is the highest, the first black rectangular block is the second highest, the second black rectangular block is the third highest, and the second white rectangular block is the lowest.
It should be noted that the peripheral area of the detection board is a white plane, and when the background is a white wall, it is difficult for the server to perform area search and scale search quickly and accurately, so that a black triangle having no height difference with the peripheral area is disposed in the peripheral area of the detection board shown in fig. 2, and the black triangle is used to assist in performing area search and scale search, and improve the speed and accuracy of search.
In the specific implementation, the server calls the camera to be detected to shoot the detection board, so that a depth map shot by the camera to be detected on the detection board is obtained, and a first depth area corresponding to a central area of the detection board and a second depth area corresponding to a peripheral area of the detection board are determined in the depth map by taking a design map of the detection board as a template.
In one example, when the server determines the second depth area corresponding to the peripheral area of the detection plate in the depth map, the server may first determine a depth detection plate area corresponding to the entire detection plate in the depth map, and subtract the first depth area from the depth detection plate area to obtain the second depth area corresponding to the peripheral area of the detection plate.
In one example, when the camera to be tested shoots, what lies behind the detection board is actually an open background area, such as a distant white wall. The server can therefore detect the depth detection board region corresponding to the entire detection board based on the gradient changes of the depth values in the depth map, which requires computing, from the depth value of each point in the depth map, the gradient of the depth values at each point in each direction.
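As a rough illustration of this step, the per-point depth gradients can be computed with numpy as sketched below; the gradient-magnitude threshold used to separate the board from the distant background is an assumed parameter.

```python
import numpy as np

def board_mask_from_gradients(depth_map, edge_threshold):
    # Depth gradients in the y and x directions at every point.
    gy, gx = np.gradient(depth_map.astype(np.float32))
    magnitude = np.hypot(gx, gy)
    # Large jumps in depth mark the transition from the far background (e.g. a
    # distant white wall) to the detection board; edge_threshold is an assumption.
    return magnitude > edge_threshold
```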
And 102, determining the relative precision of the camera to be detected according to the depth values of all points in the first depth area.
In a specific implementation, a central area of the detection plate is a relative accuracy detection area, and after the server determines a first depth area corresponding to the central area of the detection plate in the depth map, the server may determine the relative accuracy of the camera to be detected according to depth values of points in the first depth area.
In one example, the server may calculate a mean value of the depth values of the first depth area, and calculate absolute values of differences between the depth values of the respective points and the mean value of the depth values, respectively, and calculate a standard deviation based on the absolute values, and take three times the standard deviation as the relative accuracy of the camera to be measured.
In an example, the server determines the relative accuracy of the camera to be measured according to the depth values of the points in the first depth area, which may be implemented by the sub-steps shown in fig. 5, specifically including:
and a substep 1021, calculating the mean value of the depth values of the areas corresponding to the patterns in the first depth area.
In one example, as shown in fig. 2, the central area of the detection board includes a first white rectangular block, a first black rectangular block, a second white rectangular block, and a second black rectangular block, and the server calculates a mean depth value of the first white rectangular block, a mean depth value of the first black rectangular block, a mean depth value of the second white rectangular block, and a mean depth value of the second black rectangular block, respectively.
In the sub-step 1022, a region corresponding to a pattern is randomly selected as a compensation reference region, and the mean value of the depth values of the compensation reference region is used as a compensation reference value.
Specifically, because each pattern in the central area has a different height difference from the peripheral area, it would not be scientific or reasonable to calculate the relative precision of the camera to be tested by directly taking the whole first depth area as the reference. The present application therefore performs depth compensation on the region corresponding to each pattern: the region corresponding to one pattern is randomly selected as the compensation reference region, and the mean depth value of the compensation reference region is taken as the compensation reference value.
In one example, as shown in fig. 2, the detection board selects an area corresponding to the first black rectangular block as a compensation reference area, and then the area corresponding to the first white rectangular block, the area corresponding to the second black rectangular block, and the area corresponding to the second white rectangular block are compensation required areas.
And a substep 1023 of compensating the depth values of the points in the area corresponding to the patterns according to the difference between the compensation reference value and the average value of the depth values of the area corresponding to the patterns.
Specifically, after the server determines the compensation reference area and the compensation reference value, the server may compensate the depth values of the points in the area corresponding to each pattern according to the difference between the compensation reference value and the average value of the depth values of the area corresponding to each pattern.
In one example, the criterion of the depth compensation is to offset each region by exactly its difference from the reference. The first black rectangular block is higher than the second white rectangular block and the second black rectangular block, and lower than the first white rectangular block. The depth compensation therefore subtracts, from the depth value of each point in the region corresponding to the first white rectangular block, the absolute value of the difference between the mean depth value of the first white rectangular block and that of the first black rectangular block; adds, to the depth value of each point in the region corresponding to the second white rectangular block, the absolute value of the difference between the mean depth value of the second white rectangular block and that of the first black rectangular block; and adds, to the depth value of each point in the region corresponding to the second black rectangular block, the absolute value of the difference between the mean depth value of the second black rectangular block and that of the first black rectangular block.
And a substep 1024 of combining the compensation reference region and the regions corresponding to the compensated patterns other than the compensation reference region into a relative accuracy detection region.
Specifically, the depth values of the respective areas after the depth compensation are not much different from the depth values of the respective points in the compensation reference area, and may be regarded as an entire area, and the server merges the compensation reference area and the areas corresponding to the respective patterns after the compensation except the compensation reference area to obtain the relative accuracy detection area.
And a substep 1025 of calculating a first standard deviation of the relative accuracy detection area, and taking the three times of the first standard deviation as the relative accuracy of the camera to be detected.
Specifically, the first standard deviation of the relative accuracy detection area may be denoted as σ1, and the relative accuracy of the camera to be measured may be denoted as 3σ1.
In the concrete implementation, the relative precision of the camera to be measured is determined by adopting a depth compensation mode, so that the determined relative precision is more accurate and reliable, and the quality of the camera to be measured is better measured.
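Sub-steps 1021 to 1025 can be summarized by the following sketch, assuming each pattern region of the first depth area is available as a flat numpy array of depth values and that the first region in the list is the randomly chosen compensation reference.

```python
import numpy as np

def relative_precision(pattern_regions):
    # Mean depth value of the region corresponding to each pattern (sub-step 1021).
    means = [float(np.mean(region)) for region in pattern_regions]
    reference_value = means[0]                    # compensation reference value (sub-step 1022)

    # Offset every point by the reference-minus-mean difference (sub-step 1023),
    # which brings all regions onto the reference level.
    compensated = [region + (reference_value - mean)
                   for region, mean in zip(pattern_regions, means)]

    merged = np.concatenate(compensated)          # relative precision detection region (sub-step 1024)
    sigma1 = float(np.std(merged))                # first standard deviation (sub-step 1025)
    return 3.0 * sigma1
```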
And 103, determining the absolute accuracy of the camera to be detected according to the depth values of all points in the second depth area and the real distance between the camera to be detected and the peripheral area.
Specifically, the peripheral area of the board is an absolute accuracy detection area, and after the server determines the second depth area in the depth map, the server can calculate the absolute accuracy of the camera to be detected according to the depth values of the points in the second depth area and the real distance between the camera to be detected and the peripheral area.
In one example, the actual distance between the camera under test and the peripheral area is measured by a laser rangefinder.
In an example, the server determines the absolute accuracy of the camera to be measured according to the depth values of the points in the second depth area and the real distance between the camera to be measured and the peripheral area, and the determining may be implemented through sub-steps as shown in fig. 6, which specifically include:
and a substep 1031 of calculating third depth value differences between the depth values of the respective points in the second depth area and the true distances, respectively.
Specifically, in the ideal case the depth values of the points in the second depth area would all be identical and equal to the real distance, but this is not so in actual shooting. The present application therefore calculates the third depth value difference between the depth value of each point in the second depth area and the real distance, and determines the absolute precision of the camera to be tested based on these differences, which measures the quality of the camera to be tested well.
And a substep 1032 of calculating a second standard deviation from the third depth value differences, and taking three times the second standard deviation as the absolute precision of the camera to be measured.
Specifically, the second standard deviation may be denoted as σ2, and the absolute precision of the camera to be measured may be denoted as 3σ2.
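The two sub-steps can be sketched as follows; treating the second standard deviation as the standard deviation of the per-point differences is the reading assumed here.

```python
import numpy as np

def absolute_precision(second_depth_area, true_distance):
    # Third depth value differences between each point and the real distance (sub-step 1031).
    differences = np.ravel(second_depth_area).astype(np.float64) - true_distance
    sigma2 = float(np.std(differences))           # second standard deviation (sub-step 1032)
    return 3.0 * sigma2
```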
And 104, if at least one of the relative precision and the absolute precision is smaller than a preset precision threshold, reworking the camera to be tested.
In specific implementation, after respectively calculating the relative precision and the absolute precision of the camera to be detected, the server may determine whether at least one of the relative precision and the absolute precision of the camera to be detected is smaller than a preset precision threshold, if at least one of the relative precision and the absolute precision of the camera to be detected is smaller than the precision threshold, it indicates that the quality of the camera to be detected is unqualified, and rework and reinstallation are required, and if both the relative precision and the absolute precision of the camera to be detected are greater than or equal to the precision threshold, it indicates that the quality of the camera to be detected reaches the standard, and the camera to be detected can leave the factory, where the preset precision threshold may be set by a person skilled in the art according to actual needs.
In one example, the server may obtain the focal length and the baseline length of the camera to be tested, determine the real distance between the camera to be tested and the peripheral area, and then determine the precision threshold according to the real distance, the focal length and the baseline length, which may be realized by the following formula: C1 = Z^2 / (F × L), where Z is the real distance between the camera to be tested and the peripheral area, F is the focal length of the camera to be tested, L is the baseline length of the camera to be tested, and C1 is the determined precision threshold. The specific circumstances of different cameras differ, and testing them all against a single uniform precision threshold would be unscientific and inconsistent with reality; the precision threshold can therefore be determined adaptively, that is, a precision threshold matching the actual situation of each camera to be tested is determined, which can further improve the camera quality detection effect.
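As an illustration, the adaptive threshold can be computed as sketched below; the unit conventions (e.g. millimetres for Z and L, pixels for F) are assumptions and must simply be kept consistent by the caller.

```python
def precision_threshold(real_distance, focal_length, baseline_length):
    # C1 = Z^2 / (F * L); Z is the real distance, F the focal length, L the baseline length.
    return (real_distance ** 2) / (focal_length * baseline_length)
```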
In this embodiment, the server first calls the camera to be tested to shoot a preset detection board to obtain a depth map. The preset detection board comprises a central area and a peripheral area: the central area contains a plurality of white patterns and black patterns arranged in an array at intervals, the peripheral area is a white plane, all the patterns in the central area are higher than the white plane, and the height differences between different patterns and the white plane are different. After the depth map is obtained, a first depth area corresponding to the central area of the detection board and a second depth area corresponding to the peripheral area of the detection board are determined in the depth map. The relative precision of the camera to be tested is determined according to the depth values of the points in the first depth area, and the absolute precision is determined according to the depth values of the points in the second depth area and the real distance between the camera to be tested and the peripheral area of the detection board. If at least one of the relative precision and the absolute precision is smaller than a preset precision threshold, the camera to be tested is reworked. When structured light cameras undergo factory quality testing in the industry, technicians inspect each camera according to the depth map it captures, so the testing process is highly subjective and its efficiency and accuracy are low. In this embodiment, by contrast, the camera to be tested shoots a customized detection board whose central area serves as the relative precision detection area and whose peripheral area serves as the absolute precision detection area. From the depth map the camera captures of the detection board, the testing system can scientifically and accurately calculate the relative precision and absolute precision of the camera to be tested, both of which measure its quality well; when either of them is smaller than the precision threshold, the camera fails the quality test. The quality of structured light cameras is thus tested in a highly automated, objective and efficient manner, which effectively improves their production efficiency and delivery quality.
In an embodiment, the server determines, in a depth map obtained by shooting a preset detection board by a camera to be detected, a first depth area corresponding to a central area of the detection board and a second depth area corresponding to a peripheral area of the detection board, which may be implemented by the steps shown in fig. 7, and specifically includes:
step 201, acquiring an infrared image and a depth image obtained by shooting a detection plate by a camera to be detected.
Specifically, when the server calls the camera to be measured to shoot the detection plate, the infrared image and the depth image can be obtained through simultaneous shooting, and the infrared image and the depth image shot by the camera to be measured are naturally aligned in pixels.
In step 202, a first infrared region corresponding to the central region and an infrared pickup plate region corresponding to the entire pickup plate are determined in the infrared map.
In a specific implementation, determining the first depth area and the second depth area directly in the depth map can only rely on changes in the depth value gradient, which involves a very large amount of computation and is not accurate enough. Determining the first infrared region corresponding to the central area of the detection board and the infrared detection board region corresponding to the entire detection board in the infrared image, by contrast, only requires an ordinary matching algorithm; the amount of computation is greatly reduced and the accuracy is greatly improved.
Step 203, a first depth area corresponding to the central area is determined in the depth map according to the position coordinates of the first infrared area.
Specifically, because the natural pixels of the infrared image and the depth image are aligned, the server can determine the first depth area corresponding to the central area only by finding the corresponding position in the depth image according to the position coordinates of the first infrared area.
And step 204, determining a depth detection plate area corresponding to the whole detection plate in the depth map according to the position coordinates of the infrared detection plate area.
Specifically, because the natural pixels of the infrared image and the depth image are aligned, the server can determine the depth detection plate area corresponding to the whole detection plate by only finding the corresponding position in the depth image according to the position coordinates of the infrared detection plate area.
Step 205, subtracting the first depth area from the depth detection board area to obtain a second depth area corresponding to the peripheral area.
Specifically, since the depth detection board region actually consists of the first depth area and the second depth area, the second depth area corresponding to the peripheral area of the detection board can be obtained by subtracting the first depth area from the depth detection board region.
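Since the infrared image and the depth map are pixel-aligned, the region bookkeeping of steps 203 to 205 reduces to simple mask arithmetic, as in the sketch below; the rectangle representation (x, y, w, h) of the regions located in the infrared image is an assumption.

```python
import numpy as np

def split_depth_regions(depth_map, board_rect, center_rect):
    bx, by, bw, bh = board_rect                   # infrared detection board region coordinates
    cx, cy, cw, ch = center_rect                  # first infrared region (central area) coordinates

    board_mask = np.zeros(depth_map.shape, dtype=bool)
    board_mask[by:by + bh, bx:bx + bw] = True
    center_mask = np.zeros(depth_map.shape, dtype=bool)
    center_mask[cy:cy + ch, cx:cx + cw] = True

    first_depth_area = depth_map[center_mask]                  # step 203
    second_depth_area = depth_map[board_mask & ~center_mask]   # steps 204-205
    return first_depth_area, second_depth_area
```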
In this embodiment, directly performing template matching in the depth map to determine the region corresponding to the detection board would involve a large amount of matching computation with low matching precision. During the quality test, the camera to be tested therefore captures an infrared image and a depth map at the same time, and the two are naturally pixel-aligned; the server determines the regions corresponding to the central and peripheral areas of the detection board in the infrared image, and the first and second depth areas can then be determined from the position coordinates. The matching computation is small and the matching precision is high, which further improves the accuracy of the camera quality test.
In one embodiment, the central area of the detection board is rectangular, and the plurality of white patterns and black patterns arranged in an array at intervals are specifically two white rectangular blocks and two black rectangular blocks arranged in an array at intervals: the first white rectangular block is located in the upper left part of the central area, the first black rectangular block in the upper right part, the second black rectangular block in the lower left part, and the second white rectangular block in the lower right part. The top view of the detection board of this embodiment may be as shown in fig. 2. The server determines the first infrared region corresponding to the central area and the infrared detection board region corresponding to the entire detection board in the infrared image, which may be implemented through the steps shown in fig. 8, specifically including:
step 301, determining a central homonymous point with the highest matching degree with the central point of the central area in a first preset search range taking the central point of the infrared image as the center.
In specific implementation, in order to further reduce the amount of calculation and increase the speed of camera quality detection, the server does not need to perform matching calculation on all pixel points of the central area of the detection board, and only needs to match some key points, where the most important key point is the central point of the central area, and theoretically, the homonymous point of the central area on the infrared image should be very close to the central point of the infrared image, so the server performs matching search within a first preset search range taking the central point of the infrared image as the center, and determines the central homonymous point with the highest matching degree with the central point of the central area, where the first preset search range can be set by a person skilled in the art according to actual needs.
In one example, the size of the first preset search range is 2 times the size of the central area of the pickup plate.
In one example, among the points of the central area of the detection board, the one whose homonymous point can be found in the infrared image with the highest matching degree is often the upper left corner point of the central area; that is, the homonymous point of the upper left corner point of the central area is easily determined in the infrared image, and the server can apply the corresponding displacement to it to obtain the central homonymous point.
And step 302, determining an infrared detection plate area corresponding to the whole detection plate in an infrared image according to a preset search step length by taking the central same-name point as a center.
In a specific implementation, influenced by the focal length of the camera to be tested and its distance from the detection board, the size of the infrared detection board area in the infrared image usually differs from the size of the detection board, so the infrared detection board area cannot be determined in the infrared image simply from the size of the detection board and further matching calculation is still needed. The server therefore takes the central homonymous point as the center and determines, step by step according to a preset search step, the infrared detection board area corresponding to the whole detection board in the infrared image. The preset search step can be set by a person skilled in the art according to actual needs.
In one example, the server may first search with a larger search step, that is, over a larger range, and then search again with a smaller search step; two passes are generally enough to determine the infrared detection board area with high accuracy.
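The coarse-to-fine search of step 302 might be organized as in the following sketch, where candidate board regions centered on the central homonymous point are scored by a caller-supplied function (for example, correlation against a board template). The step sizes, names, and scoring interface are assumptions made only for illustration.

```python
def locate_board_region(score_fn, center_xy, half_size_range,
                        coarse_step=16, fine_step=2):
    """Hypothetical two-pass search: coarse step over the full size range,
    then a fine step around the coarse optimum."""
    lo, hi = half_size_range  # candidate half-widths/half-heights in pixels

    def best_over(candidates):
        return max(candidates, key=lambda wh: score_fn(center_xy, wh[0], wh[1]))

    # Pass 1: coarse search step over the whole range.
    coarse = [(w, h) for w in range(lo, hi, coarse_step)
                     for h in range(lo, hi, coarse_step)]
    w0, h0 = best_over(coarse)

    # Pass 2: fine search step in the neighbourhood of the coarse optimum.
    fine = [(w, h) for w in range(max(lo, w0 - coarse_step), w0 + coarse_step, fine_step)
                   for h in range(max(lo, h0 - coarse_step), h0 + coarse_step, fine_step)]
    w1, h1 = best_over(fine)

    cx, cy = center_xy
    return (cx - w1, cy - h1, cx + w1, cy + h1)  # region (x0, y0, x1, y1)
```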
Step 303, determining a conversion ratio according to the size of the infrared detection board region and the size of the detection board.
Specifically, the server divides the size of the infrared detection board area by the size of the detection board to obtain the conversion ratio, which may be smaller than, larger than, or equal to 1.
Step 304, determining a first infrared region corresponding to the central region in the infrared image according to the central point of the central region, the upper right corner point and the lower right corner point of the first black rectangular block, the upper left corner point and the lower left corner point of the second black rectangular block, the central homonymous point and the conversion ratio.
Specifically, after the server determines the conversion ratio, the server may determine the first infrared region corresponding to the central region in the infrared image according to the central point of the central region, the upper right corner point and the lower right corner point of the first black rectangular block, the upper left corner point and the lower left corner point of the second black rectangular block, the central homonymy point, and the conversion ratio.
In one example, the server may obtain a distance between the center homologous point and the homologous point of the upper right corner point of the first black rectangular block according to the distance between the center point of the center region and the upper right corner point of the first black rectangular block and the conversion ratio, so as to locate the homologous point of the upper right corner point of the first black rectangular block in the infrared image.
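A minimal sketch of the scaling idea in steps 303 and 304, assuming the conversion ratio is applied separately to the horizontal and vertical offsets between the board center and a corner; all names are illustrative.

```python
def predict_corner(center_homonymous, board_center, board_corner,
                   ir_region_size, board_size):
    """Predict a corner's rough homonymous point from the central homonymous
    point and the conversion ratio (infrared board size / physical board size)."""
    # Conversion ratio may be <1, >1 or ==1 depending on focal length and distance.
    ratio_x = ir_region_size[0] / board_size[0]
    ratio_y = ir_region_size[1] / board_size[1]
    # Offset of the corner from the board center, scaled into image pixels.
    dx = (board_corner[0] - board_center[0]) * ratio_x
    dy = (board_corner[1] - board_center[1]) * ratio_y
    return center_homonymous[0] + dx, center_homonymous[1] + dy
```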
In this embodiment, the server does not need to find the infrared detection board area corresponding to the whole detection board in the infrared image by pixel-by-pixel matching. It first finds the homonymous point of the central area in the infrared image and then gradually expands the match to determine the infrared detection board area; the first infrared area corresponding to the central area is likewise determined with the help of the conversion ratio. The whole positioning process is fast and the amount of calculation is small, which further improves the speed and efficiency of camera quality testing.
In an embodiment, the server determines, according to a center point of the center region, an upper right corner point and a lower right corner point of the first black rectangular block, an upper left corner point and a lower left corner point of the second black rectangular block, a center homonymy point, and a conversion ratio, a first infrared region corresponding to the center region in an infrared image, which may be implemented through the steps shown in fig. 9, and specifically includes:
Step 401, determining a second search range in the infrared detection board area according to the central point of the central area, the upper right corner point and the lower right corner point of the first black rectangular block, the central homonymous point and the conversion ratio.
Specifically, corner points obtained purely through scaling often carry a large error, so the server determines a second search range in the infrared detection board area according to the central point of the central area, the upper right corner point and the lower right corner point of the first black rectangular block, the central homonymous point and the conversion ratio, and then locates the corner points more accurately through a matching search within that range.
In one example, the second search range may be as shown by the dashed area in fig. 10, and to better illustrate the second search range, the first black rectangular block is represented by a dot-filled rectangular block in fig. 10.
Step 402, determining, within the second search range, a first homonymous point with the highest matching degree with the upper right corner point of the first black rectangular block and a second homonymous point with the highest matching degree with the lower right corner point of the first black rectangular block.
Specifically, after the server determines the second search range, it may determine within this range a first homonymous point with the highest matching degree with the upper right corner point of the first black rectangular block and a second homonymous point with the highest matching degree with the lower right corner point of the first black rectangular block.
Step 403, determining a third search range in the infrared detection board area according to the central point of the central area, the upper left corner point and the lower left corner point of the second black rectangular block, the central homonymous point and the conversion ratio.
In one example, the third search range may be as shown by the dotted area in fig. 10; to better illustrate the third search range, the second black rectangular block is represented by a dot-filled rectangular block in fig. 10.
Step 404, determining, within the third search range, a third homonymous point with the highest matching degree with the upper left corner point of the second black rectangular block and a fourth homonymous point with the highest matching degree with the lower left corner point of the second black rectangular block.
Step 405, determining a first infrared region corresponding to the central region according to the first homonymous point, the second homonymous point, the third homonymous point and the fourth homonymous point.
Specifically, after the server determines the first, second, third and fourth homonymous points, it may extend outward from these four points to obtain the first infrared region corresponding to the central region.
In this embodiment, it is considered that the first infrared region determined directly by scaling is not the most accurate and may carry a certain error, while the upper right and lower right corner points of the first black rectangular block and the upper left and lower left corner points of the second black rectangular block have rich features. The method therefore first determines a rough search range through scaling and then performs the matching calculation within that range, so that the first infrared region corresponding to the central region is determined more scientifically and accurately, further improving the quality detection effect of the camera.
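Under the same assumptions as the earlier matching sketch, the corner refinement of steps 401-405 could look like the following: the scaled prediction seeds a small search window (the "second" or "third" search range), and template matching inside that window picks the homonymous point with the highest matching degree. The window margin, names, and use of OpenCV are illustrative assumptions.

```python
import cv2

def refine_corner(ir_image, predicted_xy, corner_template, margin=12):
    """Refine a scaled corner prediction by matching a small corner patch
    (corner_template) inside a window around the prediction."""
    th, tw = corner_template.shape[:2]
    px, py = int(predicted_xy[0]), int(predicted_xy[1])
    x0 = max(0, px - tw // 2 - margin)
    y0 = max(0, py - th // 2 - margin)
    x1 = min(ir_image.shape[1], px + tw // 2 + margin)
    y1 = min(ir_image.shape[0], py + th // 2 + margin)
    roi = ir_image[y0:y1, x0:x1]

    scores = cv2.matchTemplate(roi, corner_template, cv2.TM_CCOEFF_NORMED)
    _, _, _, loc = cv2.minMaxLoc(scores)
    # Return the refined corner position in full-image coordinates.
    return x0 + loc[0] + tw // 2, y0 + loc[1] + th // 2
```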
In an embodiment, after determining the central homonymous point with the highest matching degree with the central point of the central area, the server may perform displacement posture detection on the camera to be detected based on the central homonymous point through the steps shown in fig. 11, which specifically includes:
step 501, calculating the distance between the center point of the infrared image and the point with the same name as the center.
Step 502, determining whether the distance is greater than a preset shift threshold, if so, executing step 503, otherwise, executing step 504.
Step 503, reworking the camera to be tested.
Step 504, translating the camera to be tested on the fixture according to the distance.
In a specific implementation, in an ideal state the central point of the infrared image and the central homonymous point should coincide. If they do not coincide, the camera to be tested may have shifted on the fixture and a translation adjustment is required to correct its posture. The server may calculate the distance between the central point of the infrared image and the central homonymous point and judge whether the distance is greater than a preset shift threshold, which can be set by a person skilled in the art according to actual needs. If the distance is greater than the preset shift threshold, that is, the central point of the infrared image and the central homonymous point are very far apart, the camera to be tested is improperly assembled and does not meet the quality standard, so it is reworked directly; this avoids wasting quality detection resources on cameras with serious quality problems and further improves quality detection efficiency. If the distance is less than or equal to the preset shift threshold, the server only needs to translate the camera to be tested on the fixture according to the distance, thereby ensuring that subsequent quality tests proceed normally.
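A minimal sketch of this displacement check, assuming the distance is the Euclidean pixel distance between the infrared image center and the central homonymous point; the threshold value and the returned actions are placeholders rather than the patent's fixture-control interface.

```python
import math

def check_displacement(ir_shape, center_homonymous, shift_threshold_px):
    """Steps 501-504: compare the center-to-homonymous-point distance with a
    preset shift threshold and decide between rework and fixture translation."""
    cy, cx = ir_shape[0] / 2.0, ir_shape[1] / 2.0
    dx = center_homonymous[0] - cx
    dy = center_homonymous[1] - cy
    dist = math.hypot(dx, dy)
    if dist > shift_threshold_px:
        return "rework"                        # assembly out of tolerance
    return ("translate_on_fixture", dx, dy)    # small offset: translate and continue
```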
In an embodiment, after determining the first infrared region corresponding to the central region, the server may perform rotation gesture detection on the camera to be detected based on the first homologous point, the second homologous point, the third homologous point, and the fourth homologous point through the steps shown in fig. 12, which specifically includes:
step 601, calculating a difference value of vertical coordinates between the first homonymous point and the second homonymous point, or calculating a difference value of vertical coordinates between the third homonymous point and the fourth homonymous point.
Step 602, determining whether the ordinate difference is greater than a preset rotation threshold, if so, performing step 603, otherwise, performing step 604.
Step 603, reworking the camera to be tested.
Step 604, rotating the camera to be tested on the fixture along the z-axis direction according to the vertical coordinate difference.
In a specific implementation, in an ideal state the vertical coordinate of the first homonymous point and that of the second homonymous point, or the vertical coordinate of the third homonymous point and that of the fourth homonymous point, should be the same. If they differ, the camera to be tested may have rotated on the fixture along the z-axis and a rotation along the z-axis is required to correct its posture. The server may calculate the vertical coordinate difference between the first and second homonymous points, or between the third and fourth homonymous points, and judge whether this difference is greater than a preset rotation threshold, which can be set by a person skilled in the art according to actual needs. If the difference is greater than the preset rotation threshold, the camera to be tested is improperly assembled and does not meet the quality standard, so it is reworked directly; this avoids wasting quality testing resources on cameras with serious quality problems and further improves quality testing efficiency. If the difference is less than or equal to the preset rotation threshold, the server only needs to rotate the camera to be tested on the fixture along the z-axis direction according to the vertical coordinate difference, thereby ensuring that subsequent quality tests proceed normally.
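A corresponding sketch of the z-axis rotation check, comparing the vertical coordinates of the two homonymous points exactly as the description states; the threshold and the returned actions are placeholders.

```python
def check_rotation(point_a, point_b, rotation_threshold_px):
    """Steps 601-604: point_a/point_b are e.g. the first and second homonymous
    points (or the third and fourth) as (x, y) pixel coordinates."""
    dy = abs(point_a[1] - point_b[1])   # vertical coordinate difference
    if dy > rotation_threshold_px:
        return "rework"                 # assembly out of tolerance
    return ("rotate_along_z", dy)       # small deviation: rotate on the fixture
```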
In an embodiment, after determining the central homonymous point with the highest matching degree with the central point of the central area, the server may perform tilt gesture detection on the camera to be detected through the steps shown in fig. 13, which specifically includes:
step 701, determining an inclination detection reference point with the same coordinate as the center homonymous point in the depth map.
Specifically, in addition to precision detection, the server can also perform tilt posture detection on the camera to be tested, which is carried out with the depth map as the reference. The server first determines the point in the depth map with the same coordinates as the central homonymous point and takes it as the tilt detection reference point.
Step 702, determining, on the depth map, a first tilt detection area, a second tilt detection area, a third tilt detection area and a fourth tilt detection area that have equal areas and are equidistant from the tilt detection reference point.
In a specific implementation, after the server determines the tilt detection reference point, a first tilt detection region, a second tilt detection region, a third tilt detection region and a fourth tilt detection region which are equal in area and equal in distance from the tilt detection reference point may be determined on the depth map, where the first tilt detection region is located above the tilt detection reference point, the second tilt detection region is located below the tilt detection reference point, the third tilt detection region is located on the left side of the tilt detection reference point, and the fourth tilt detection region is located on the right side of the tilt detection reference point.
Step 703, calculating a first depth value mean of the first tilt detection area, a second depth value mean of the second tilt detection area, a third depth value mean of the third tilt detection area, and a fourth depth value mean of the fourth tilt detection area.
Step 704, a first depth value difference between the first depth value mean and the second depth value mean, and a second depth value difference between the third depth value mean and the fourth depth value mean are calculated.
Specifically, after the server determines the first, second, third and fourth tilt detection areas, it may calculate the first depth value mean of the first tilt detection area, the second depth value mean of the second tilt detection area, the third depth value mean of the third tilt detection area and the fourth depth value mean of the fourth tilt detection area, and then compute the first depth value difference between the first and second depth value means and the second depth value difference between the third and fourth depth value means. The first depth value difference is used to detect whether the camera to be tested is tilted along the x-axis, and the second depth value difference is used to detect whether it is tilted along the y-axis.
Step 705, determining whether at least one of the first depth value difference and the second depth value difference is greater than a preset tilt threshold, if so, executing step 706, otherwise, executing step 707.
Step 706, reworking the camera to be tested.
Step 707, rotating the camera to be tested on the fixture along the x-axis direction and/or the y-axis direction according to the first depth value difference and the second depth value difference.
In a specific implementation, in an ideal state there should be no difference between the depth value means of the first and second tilt detection areas, or between those of the third and fourth tilt detection areas. If a difference exists, the camera to be tested has rotated obliquely on the fixture, that is, it has rotated along the x-axis and/or the y-axis, and its posture needs to be corrected by rotating along the x-axis and/or the y-axis. After calculating the first depth value difference and the second depth value difference, the server may judge whether at least one of them is greater than a preset tilt threshold, which can be set by a person skilled in the art according to actual needs. If at least one of the two differences is greater than the preset tilt threshold, the camera to be tested is improperly assembled and does not meet the quality standard, so it is reworked directly; this avoids wasting quality detection resources on cameras with serious quality problems and further improves quality detection efficiency. Otherwise, the server rotates the camera to be tested on the fixture along the x-axis direction and/or the y-axis direction according to the two differences, so that subsequent quality tests can proceed normally.
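The tilt check of steps 701-707 might be sketched as follows, with the four equal-area regions taken as square windows above, below, left and right of the reference point. The window size, offset, threshold, and returned actions are assumed values, not the patent's parameters.

```python
import numpy as np

def check_tilt(depth_map, ref_xy, half=20, offset=60, tilt_threshold=2.0):
    """Steps 701-707: compare depth means of four regions around the tilt
    detection reference point to detect x-axis / y-axis tilt."""
    x, y = int(ref_xy[0]), int(ref_xy[1])

    def region_mean(cx, cy):
        patch = depth_map[cy - half:cy + half, cx - half:cx + half]
        return float(np.mean(patch))

    up    = region_mean(x, y - offset)   # first tilt detection area (above)
    down  = region_mean(x, y + offset)   # second (below)
    left  = region_mean(x - offset, y)   # third (left)
    right = region_mean(x + offset, y)   # fourth (right)

    d1 = abs(up - down)      # indicator of tilt along the x-axis
    d2 = abs(left - right)   # indicator of tilt along the y-axis
    if d1 > tilt_threshold or d2 > tilt_threshold:
        return "rework"
    return ("rotate_along_x_and_or_y", d1, d2)
```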
In an embodiment, after determining the absolute accuracy of the camera to be detected, the server may perform void rate detection on the camera to be detected through the steps shown in fig. 14, which specifically includes:
step 801, calculating the depth value of each pixel point in the first depth area and the second depth area.
Step 802, determining an effective depth range according to the real distance and a preset effective threshold.
Specifically, although precision detection already evaluates the quality of the camera to be tested well, the server can also perform void rate detection on it. First, the void points in the first depth area and the second depth area need to be identified, and the criterion for identifying a void point is the effective depth range. The server determines the effective depth range according to the real distance between the camera to be tested and the peripheral area and a preset effective threshold, where the preset effective threshold can be set by a person skilled in the art according to actual needs.
In one example, the preset valid threshold and the preset precision threshold are equal in value.
Step 803, taking the pixel points in the first depth area and the second depth area whose depth values are not within the effective depth range as void points, and determining the number of void points.
Specifically, the server may traverse each pixel point in the first depth area and the second depth area, judge whether its depth value lies within the effective depth range, treat every pixel point whose depth value does not lie within the effective depth range as a void point, and count the number of void points.
Step 804, calculating the void rate of the depth map according to the number of void points and the total number of pixel points in the first depth area and the second depth area.
In specific implementation, the number of the void points is divided by the total number of the pixel points in the first depth area and the second depth area, so that the void rate of the depth map can be obtained.
Step 805, if the void ratio is greater than the preset void threshold, reworking the camera to be tested.
In a specific implementation, after the server determines the void rate of the depth map, it may judge whether the void rate is greater than a preset void threshold. If it is, the quality of the camera to be tested does not meet the standard and rework is required; if the void rate is less than or equal to the preset void threshold, the quality of the camera to be tested meets the standard and the camera can leave the factory.
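A compact sketch of the void-rate computation of steps 801-805. The effective depth range is taken here as real distance ± the preset effective threshold, which is one plausible reading of step 802 and is an assumption; names are illustrative.

```python
import numpy as np

def void_rate(depth_area_1, depth_area_2, real_distance, valid_threshold):
    """Return the void rate over the first and second depth areas."""
    depths = np.concatenate([np.ravel(depth_area_1), np.ravel(depth_area_2)])
    lo, hi = real_distance - valid_threshold, real_distance + valid_threshold
    # Void points: depth values outside the effective depth range.
    void_points = np.count_nonzero((depths < lo) | (depths > hi))
    return void_points / depths.size

# Usage: rework the camera if void_rate(...) > preset_void_threshold.
```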
In this embodiment, considering that the specific situations of different cameras to be tested differ, using a single uniform precision threshold for detection would be unscientific and inconsistent with reality. The precision threshold is therefore determined adaptively, that is, a precision threshold matching the actual situation of each camera to be tested is determined for that camera, thereby further improving the camera quality detection effect.
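For reference, the adaptive threshold relation given in claim 10 below, C1 = Z²/(F × L), can be evaluated directly; the unit convention in the comment is an assumption, not stated by the patent.

```python
def precision_threshold(real_distance_z, focal_length_f, baseline_l):
    """C1 = Z^2 / (F * L): Z is the real camera-to-board distance, F the focal
    length and L the baseline length of the camera under test. Units must be
    kept consistent with the camera calibration (an assumption here)."""
    return (real_distance_z ** 2) / (focal_length_f * baseline_l)
```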
The steps of the above methods are divided only for clarity of description. In implementation, steps may be combined into one step or a single step may be split into multiple steps; as long as the same logical relationship is preserved, such variations fall within the protection scope of this patent. Adding insignificant modifications to the algorithm or process, or introducing insignificant design changes, without altering the core design of the algorithm or process, also falls within the protection scope of the patent.
Another embodiment of the present application relates to a camera quality testing system. The implementation details of the camera quality testing system of this embodiment are described below; they are provided only for ease of understanding and are not necessary for implementing the present solution. A schematic diagram of the camera quality testing system of this embodiment may be as shown in fig. 15, and the system includes:
the detection plate 901, the detection plate 901 includes a central area and a peripheral area, the central area is a plurality of white patterns and black patterns which are arrayed at intervals, the peripheral area is a white plane, the patterns in the central area are all higher than the white plane, and the height difference between different patterns and the white plane is different.
The shooting module 902 is configured to shoot the detection plate 901 with the camera to be tested to obtain a depth map.
The positioning module 903 is configured to determine, in the depth map, a first depth area corresponding to the central area of the detection plate 901 and a second depth area corresponding to the peripheral area of the detection plate 901.
And a relative accuracy calculating module 904, configured to determine the relative accuracy of the camera to be measured according to the depth values of the points in the first depth area.
And the absolute precision calculating module 905 is configured to determine the absolute precision of the camera to be detected according to the depth values of the points in the second depth area and the real distance between the camera to be detected and the peripheral area.
The judging module 906 is configured to judge whether at least one of the relative accuracy and the absolute accuracy is smaller than a preset accuracy threshold, and rework the camera to be tested when at least one of the relative accuracy and the absolute accuracy is smaller than the preset accuracy threshold.
It should be noted that, all the modules involved in this embodiment are logic modules, and in practical application, one logic unit may be one physical unit, may also be a part of one physical unit, and may also be implemented by a combination of multiple physical units. In addition, in order to highlight the innovative part of the present application, a unit that is not so closely related to solving the technical problem proposed by the present application is not introduced in the present embodiment, but this does not indicate that there is no other unit in the present embodiment.
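For illustration only, the modules listed above could be composed as in the following sketch; the method names and wiring are assumed placeholders, not the patent's reference implementation.

```python
class CameraQualityTestSystem:
    """Minimal assumed composition of modules 902-906 around detection plate 901."""

    def __init__(self, shooting, positioning, rel_precision, abs_precision, judge):
        self.shooting = shooting            # shooting module 902
        self.positioning = positioning      # positioning module 903
        self.rel_precision = rel_precision  # relative accuracy calculating module 904
        self.abs_precision = abs_precision  # absolute accuracy calculating module 905
        self.judge = judge                  # judging module 906

    def run(self, camera, real_distance, precision_threshold):
        depth_map = self.shooting.capture(camera)
        first_area, second_area = self.positioning.locate(depth_map)
        relative = self.rel_precision.compute(first_area)
        absolute = self.abs_precision.compute(second_area, real_distance)
        # Judging module: rework if either precision is below the threshold.
        return self.judge.decide(relative, absolute, precision_threshold)
```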
Another embodiment of the present application relates to an electronic device, as shown in fig. 16, including: at least one processor 1001; and a memory 1002 communicatively coupled to the at least one processor 1001; the memory 1002 stores instructions executable by the at least one processor 1001, and the instructions are executed by the at least one processor 1001, so that the at least one processor 1001 can execute the camera quality testing method in the above embodiments.
Where the memory and processor are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the bus connecting together various circuits of the memory and the processor or processors. The bus may also connect various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor is transmitted over a wireless medium via an antenna, which further receives the data and transmits the data to the processor.
The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. While the memory may be used to store data used by the processor in performing operations.
Another embodiment of the present application relates to a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
That is, as can be understood by those skilled in the art, all or part of the steps in the method for implementing the embodiments described above may be implemented by a program instructing related hardware, where the program is stored in a storage medium and includes several instructions to enable a device (which may be a single chip, a chip, or the like) or a processor (processor) to execute all or part of the steps of the method described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the present application, and that various changes in form and details may be made therein without departing from the spirit and scope of the present application in practice.

Claims (14)

1. A method for testing camera quality, comprising:
determining a first depth area corresponding to a central area of a detection plate and a second depth area corresponding to a peripheral area of the detection plate in a depth map obtained by shooting a preset detection plate by a camera to be detected; the central area is a plurality of white patterns and black patterns which are arrayed at intervals, the peripheral area is a white plane, the patterns are all higher than the white plane, and the height difference between different patterns and the white plane is different;
determining the relative precision of the camera to be detected according to the depth values of all points in the first depth area;
determining the absolute precision of the camera to be detected according to the depth values of all points in the second depth area and the real distance between the camera to be detected and the peripheral area;
if at least one of the relative precision and the absolute precision is smaller than a preset precision threshold value, reworking the camera to be tested;
the detection plate comprises a detection plate and is characterized in that a black triangular block is arranged in a peripheral area of the detection plate, no height difference exists between the black triangular block and the peripheral area, and the black triangular block is used for assisting in determining the first depth area and the second depth area.
2. The camera quality testing method according to claim 1, wherein the determining, in a depth map obtained by shooting a preset detection plate with the camera to be tested, a first depth area corresponding to a central area of the detection plate and a second depth area corresponding to a peripheral area of the detection plate comprises:
acquiring an infrared image and a depth image which are obtained by shooting the detection plate by the camera to be detected;
determining a first infrared region corresponding to the central region and an infrared detection plate region corresponding to the whole detection plate in the infrared image;
determining the first depth area corresponding to the central area in the depth map according to the position coordinate of the first infrared area;
determining a depth detection plate area corresponding to the whole detection plate in the depth map according to the position coordinates of the infrared detection plate area;
and subtracting the first depth area from the depth detection plate area to obtain a second depth area corresponding to the peripheral area.
3. The camera quality testing method according to claim 2, wherein the central area is rectangular, the plurality of arrays of spaced white patterns and black patterns are specifically two white rectangular blocks and two black rectangular blocks, the first white rectangular block is located in an upper left area of the central area, the first black rectangular block is located in an upper right area of the central area, the second black rectangular block is located in a lower left area of the central area, and the second white rectangular block is located in a lower right area of the central area;
the determining a first infrared region corresponding to the center region in the infrared map and an infrared detection panel region corresponding to the entire detection panel includes:
determining a central homonymous point with the highest matching degree with the central point of the central area in a first preset search range taking the central point of the infrared image as a center;
determining an infrared detection plate area corresponding to the whole detection plate in the infrared image according to a preset search step by taking the central homonymy point as a center;
determining a conversion ratio according to the size of the infrared detection plate area and the size of the detection plate;
and determining a first infrared region corresponding to the central region in the infrared image according to the central point of the central region, the upper right corner point and the lower right corner point of the first black rectangular block, the upper left corner point and the lower left corner point of the second black rectangular block, the central homonymy point and the conversion ratio.
4. The method for testing the quality of the camera according to claim 3, wherein the determining a first infrared region corresponding to the central region in the infrared image according to the central point of the central region, the upper right corner point and the lower right corner point of the first black rectangular block, the upper left corner point and the lower left corner point of the second black rectangular block, the central homonymy point and the conversion ratio comprises:
determining a second search range in the infrared detection plate area according to the central point of the central area, the upper right corner point and the lower right corner point of the first black rectangular block, the central homonymy point and the conversion ratio;
determining a first homonymous point with the highest matching degree with the upper right corner point of the first black rectangular block and a second homonymous point with the highest matching degree with the lower right corner point of the first black rectangular block in the second search range;
determining a third search range in the infrared detection plate area according to the central point of the central area, the upper left corner point and the lower left corner point of the second black rectangular block, the central homonymy point and the conversion ratio;
determining a third homonymous point with the highest matching degree with the upper left corner point of the second black rectangular block and a fourth homonymous point with the highest matching degree with the lower left corner point of the second black rectangular block in the third search range;
and determining a first infrared region corresponding to the central region according to the first homologous point, the second homologous point, the third homologous point and the fourth homologous point.
5. The camera quality testing method according to claim 3, further comprising, after the determining the central homonymous point having the highest matching degree with the central point of the central region:
calculating the distance between the central point of the infrared image and the central homonymous point;
judging whether the distance is larger than a preset displacement threshold value or not;
if the distance is larger than the displacement threshold value, reworking the camera to be tested;
and if the distance is smaller than or equal to the displacement threshold, translating the camera to be detected on the clamp according to the distance.
6. The camera quality testing method according to claim 4, further comprising, after the determining the first infrared region corresponding to the central region:
calculating a difference value of vertical coordinates between the first homologous point and the second homologous point, or calculating a difference value of vertical coordinates between the third homologous point and the fourth homologous point;
judging whether the difference value of the vertical coordinates is larger than a preset rotation threshold value or not;
if the difference value of the vertical coordinates is larger than the rotation threshold value, reworking the camera to be tested;
and if the vertical coordinate difference value is smaller than or equal to the rotation threshold value, rotating the camera to be detected on the clamp along the z-axis direction according to the vertical coordinate difference value.
7. The camera quality testing method according to claim 3, further comprising, after the determining the central homonymous point having the highest matching degree with the central point of the central region:
determining a tilt detection reference point in the depth map having the same coordinates as the center homologous point;
respectively determining a first inclination detection area, a second inclination detection area, a third inclination detection area and a fourth inclination detection area which are equal in area and distance from the inclination detection reference point on the depth map; wherein the first tilt detection area is located above the tilt detection reference point, the second tilt detection area is located below the tilt detection reference point, the third tilt detection area is located on the left side of the tilt detection reference point, and the fourth tilt detection area is located on the right side of the tilt detection reference point;
calculating a mean value of first depth values of the first tilt detection area, a mean value of second depth values of the second tilt detection area, a mean value of third depth values of the third tilt detection area, and a mean value of fourth depth values of the fourth tilt detection area;
calculating a first depth value difference value between the first depth value mean value and the second depth value mean value and a second depth value difference value between the third depth value mean value and the fourth depth value mean value;
if at least one of the first depth value difference and the second depth value difference is larger than a preset inclination threshold, reworking the camera to be tested;
and if the first depth value difference value and the second depth value difference value are both smaller than or equal to the inclination threshold value, rotating the camera to be detected on a fixture along the x-axis direction and/or the y-axis direction according to the first depth value difference value and the second depth value difference value.
8. The method for testing the quality of the camera according to any one of claims 1 to 7, wherein the determining the relative accuracy of the camera to be tested according to the depth values of the points in the first depth area comprises:
respectively calculating the average value of the depth values of the areas corresponding to the patterns in the first depth area;
randomly selecting an area corresponding to one pattern as a compensation reference area, and taking the depth value average value of the compensation reference area as a compensation reference value;
compensating the depth values of all points in the area corresponding to each pattern according to the difference between the compensation reference value and the average value of the depth values of the area corresponding to each pattern;
combining the compensation reference region and the regions corresponding to the compensated patterns except the compensation reference region into a relative precision detection region;
and calculating a first standard deviation of the relative precision detection area, and taking three times of the first standard deviation as the relative precision of the camera to be detected.
9. The method for testing the quality of the camera according to any one of claims 1 to 7, wherein the determining the absolute accuracy of the camera to be tested according to the depth values of the points in the second depth area and the real distance between the camera to be tested and the peripheral area comprises:
respectively calculating a third depth value difference value between the depth value of each point in the second depth area and the real distance;
and calculating a second standard deviation according to each third depth value difference value, and taking three times of the second standard deviation as the absolute precision of the camera to be tested.
10. The camera quality testing method according to any one of claims 1 to 7, wherein the accuracy threshold is determined by:
acquiring the focal length and the base length of the camera to be detected;
determining the precision threshold according to the real distance, the focal length of the camera to be detected and the length of a base line;
and the determining the precision threshold according to the real distance, the focal length of the camera to be detected and the baseline length is represented by the following formula:
C1 = Z² / (F × L)
wherein Z is the real distance, F is the focal length of the camera to be detected, L is the baseline length of the camera to be detected, and C1 is the precision threshold.
11. The camera quality testing method according to any one of claims 1 to 7, further comprising, after the determining the absolute accuracy of the camera under test:
calculating the depth value of each pixel point in the first depth area and the second depth area;
determining an effective depth range according to the real distance and a preset effective threshold;
taking the pixel points in the first depth area and the second depth area whose depth values are not within the effective depth range as void points, and determining the number of the void points;
calculating the void rate of the depth map according to the number of the void points and the total number of pixel points in the first depth area and the second depth area;
and if the void rate is greater than a preset void threshold value, reworking the camera to be tested.
12. A camera quality testing system, comprising:
the detection board comprises a central area and a peripheral area, wherein the central area is a plurality of white patterns and black patterns which are arrayed at intervals, the peripheral area is a white plane, the patterns are all higher than the white plane, and the height difference between different patterns and the white plane is different;
the shooting module is used for shooting the detection plate through a camera to be detected to obtain a depth map;
the positioning module is used for determining a first depth area corresponding to the central area of the detection plate and a second depth area corresponding to the peripheral area of the detection plate in the depth map;
the relative precision calculation module is used for determining the relative precision of the camera to be detected according to the depth values of all points in the first depth area;
the absolute precision calculation module is used for determining the absolute precision of the camera to be detected according to the depth values of all points in the second depth area and the real distance between the camera to be detected and the peripheral area;
the judgment module is used for judging whether at least one of the relative precision and the absolute precision is smaller than a preset precision threshold value or not, and reworking the camera to be tested when at least one of the relative precision and the absolute precision is smaller than the preset precision threshold value;
the detection plate comprises a detection plate and is characterized in that a black triangular block is arranged in a peripheral area of the detection plate, no height difference exists between the black triangular block and the peripheral area, and the black triangular block is used for assisting in determining the first depth area and the second depth area.
13. An electronic device, comprising:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the camera quality testing method of any one of claims 1 to 11.
14. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the camera quality testing method of any one of claims 1 to 11.