CN114140521A - Method, device and system for identifying projection position and storage medium - Google Patents

Method, device and system for identifying projection position and storage medium

Info

Publication number
CN114140521A
CN114140521A (application CN202010924130.XA)
Authority
CN
China
Prior art keywords
projection
points
point
area
pixel
Prior art date
Legal status
Pending
Application number
CN202010924130.XA
Other languages
Chinese (zh)
Inventor
贾坤
王霖
唐泽达
赵振宇
李屹
Current Assignee
Shenzhen Appotronics Corp Ltd
Original Assignee
Appotronics Corp Ltd
Priority date
Filing date
Publication date
Application filed by Appotronics Corp Ltd filed Critical Appotronics Corp Ltd
Priority to CN202010924130.XA (CN114140521A)
Priority to PCT/CN2021/116345 (WO2022048617A1)
Publication of CN114140521A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 - Details of colour television systems
    • H04N9/12 - Picture reproducers
    • H04N9/31 - Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 - Details of colour television systems
    • H04N9/12 - Picture reproducers
    • H04N9/31 - Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141 - Constructional details thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Transforming Electric Information Into Light Information (AREA)

Abstract

The application discloses a method, a device, a system and a storage medium for identifying a projection position. The method comprises: acquiring an image to be processed, where the image to be processed comprises a screen area and a projection area; selecting a point in the projection area as a ray starting point; searching along a plurality of preset directions from the ray starting point to obtain boundary points of the screen area and boundary points of the projection area, where two adjacent preset directions are separated by a first preset angle; screening the boundary points to obtain near-corner points; and obtaining the corner points of the screen area and of the projection area from the near-corner points, taking the corner point coordinates of the screen area as the position of the projection screen and the corner point coordinates of the projection area as the position of the projection area. In this way, identification speed is increased, interference resistance is strong, and identification accuracy is high.

Description

Method, device and system for identifying projection position and storage medium
Technical Field
The present application relates to the field of projection technology, and in particular to a method, an apparatus, a system and a storage medium for identifying a projection position.
Background
As laser televisions become popular in the market, they remain difficult to install, and adjusting the projection picture so that it exactly matches the screen is very hard. Even professional installers must spend a great deal of time adjusting the projector position, so work efficiency is low; and when an accidental knock in daily use shifts the projector position, it must be readjusted, which greatly harms the user experience.
To solve the above problems, common solutions include: first, using software to apply four-point correction to the projection picture so that it fits the screen; second, projecting a specific test chart, having a professional measure it, and manually entering the measured data into a projector position adjustment device to correct the projector position; third, placing arrays of photosensitive devices at the four corners of the screen, sensing the shape of the projection picture with the arrays, matching the projection picture against various preset cases requiring adjustment, and driving the projector position adjustment equipment according to the matching result. Each approach has drawbacks. Four-point correction degrades the quality of the projection picture, introduces a degree of picture distortion, and affects long-term viewing. Manually entering measured data into the position adjustment device suffers from inaccurate measurement, low precision, inconvenient operation and poor usability. Placing photosensor arrays at the four corners of the screen increases cost and spoils the appearance. In addition, the conventional corner identification algorithm SUSAN (Smallest Univalue Segment Assimilating Nucleus) is not suitable for processing large images: although it is insensitive to local noise, it responds strongly to other objects in the image, making post-screening extremely difficult.
Disclosure of Invention
The application provides a method, a device, a system and a storage medium for identifying a projection position, which improve identification speed while offering strong interference resistance and high identification accuracy.
In order to solve the above technical problem, one technical solution adopted by the present application is to provide a method for identifying a projection position, the method comprising: acquiring an image to be processed, where the image to be processed comprises a screen area and a projection area; selecting a point in the projection area as a ray starting point; searching along a plurality of preset directions from the ray starting point to obtain boundary points of the screen area and boundary points of the projection area, where two adjacent preset directions are separated by a first preset angle; screening the boundary points to obtain near-corner points; and obtaining the corner points of the screen area and of the projection area from the near-corner points, taking the corner point coordinates of the screen area as the position of the projection screen and the corner point coordinates of the projection area as the position of the projection area.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide a projection position recognition apparatus comprising a memory and a processor connected to each other, where the memory is used for storing a computer program which, when executed by the processor, implements the above method for identifying a projection position.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide a projector position adjustment system comprising a projection position identification device, where the projection position identification device is the above projection position recognition apparatus.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide a computer-readable storage medium for storing a computer program which, when executed by a processor, implements the above method for identifying a projection position.
Through the above scheme, the beneficial effects of the application are as follows. An image to be processed is first acquired, a point in the projection area is selected as a ray starting point, and searching from the ray starting point along a plurality of preset directions yields the boundary points of the screen area and the projection area. Screening the acquired boundary points gives the corresponding near-corner points, from which the corner points of the screen area and of the projection area are obtained, so that the position of the projection screen and the position of the projection area are identified automatically. This solves the inconvenience of measuring the position of the projected picture in a projector position adjustment system: the accurate positions of the projection screen and the projection area in the image can be identified, the method can be applied to projector position adjustment systems and equipment, no manual measurement or manual positioning is needed, identification is fast, and user operation is simplified. In addition, because the ray starting point lies inside the projection screen, objects outside the projection screen cannot interfere with the identification process, so interference resistance is strong and identification accuracy is improved.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort. Wherein:
FIG. 1 is a schematic flowchart illustrating an embodiment of a method for identifying a projection position provided herein;
FIG. 2 is a schematic illustration of a projection area, a screen area, and rays in the embodiment shown in FIG. 1;
FIG. 3 is a schematic view of a projected area and a screen area after a correction position in the embodiment shown in FIG. 1;
FIG. 4 is a schematic flow chart diagram illustrating another embodiment of a method for identifying a projection location provided herein;
FIG. 5 is a schematic illustration of the projection area, screen area, and rays in the embodiment shown in FIG. 4;
FIG. 6 is a schematic diagram of boundary lines and pixels in the embodiment shown in FIG. 4;
FIG. 7 is a schematic diagram of boundary points on the projection area in the embodiment shown in FIG. 4;
FIG. 8 is a schematic view of a near corner point on the projection area in the embodiment shown in FIG. 4;
FIG. 9 is a schematic diagram of four edges and corner points corresponding to the projected area in the embodiment shown in FIG. 4;
FIG. 10 is a schematic structural diagram of an embodiment of a projection position recognition apparatus provided in the present application;
FIG. 11 is a schematic diagram of an embodiment of a projector position adjustment system provided herein;
FIG. 12 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a method for identifying a projection position according to the present application, the method including:
step 11: and acquiring an image to be processed.
The projection screen can be photographed with a camera device (such as a mobile phone) to obtain a corresponding image to be processed. The image to be processed comprises a screen area and a projection area: the screen area is the area where the projection screen is located, and the projection area is the area where the picture displayed on the projection screen is located. The method of this embodiment can be applied to a projection display system comprising a projector and a projection screen; the projection screen can belong to a laser television, whose screen is a light-rejecting screen that displays black when no picture is projected, with a correspondingly small gray value.
Further, the projected picture can be a white-field test chart; after projection, the laser television displays white, with a correspondingly large gray value. The laser television can be hung on a wall or a bracket; its background wall is usually neither darker than the light-rejecting screen nor whiter than the white field, so its gray value lies between the two.
It can be understood that, since the condition of the background wall cannot be predicted, analysis becomes difficult if the picture is projected outside the projection screen; the projection area is therefore kept inside the projection screen, and a reminder interface can be provided to prompt the user to project all four sides of the projection area onto the projection screen.
Step 12: and selecting a point from the projection area as a ray starting point.
After the image to be processed is obtained, a pixel point can be selected in the area corresponding to the projection area as the ray starting point; that is, the ray starting point falls within the area where the projection area is located. For example, as shown in fig. 2, the screen area is denoted I1 and the coordinates of the pixel point at its upper-left corner may be denoted (0, 0); the projection area is denoted I2. The center of the screen area I1 may be selected as the ray starting point P, whose coordinates are denoted (Px, Py). The rays from point P are denoted Ri (1 ≤ i ≤ m), where m is the number of rays.
Step 13: and searching along a plurality of preset directions based on the ray starting point to obtain a boundary point.
360° can be divided into a preset number of preset directions, with two adjacent preset directions separated by a first preset angle; that is, the product of the preset number and the first preset angle is 360°. For example, as shown in fig. 2, the angle between ray R1 and ray R2 is the first preset angle, denoted α. For each preset direction, pixel points can be searched along that direction; since the pixel values of the projection area and the screen area differ greatly, a large change in pixel value indicates a pixel point at the boundary (i.e. a boundary point), thus realizing the search for boundary points.
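As a sketch of how the preset directions could be generated, the helper below (the name and API are illustrative, not from the patent) divides 360° by the first preset angle α:

```python
import math

def ray_directions(alpha_deg):
    """Unit direction vectors for rays spaced alpha_deg apart; with
    alpha_deg = 10 this yields the 36 rays used later in the description."""
    n = round(360.0 / alpha_deg)
    return [(math.cos(math.radians(i * alpha_deg)),
             math.sin(math.radians(i * alpha_deg)))
            for i in range(n)]
```

Each direction vector can then be scanned from the ray starting point P until a large pixel-value change is found.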
Step 14: and screening the boundary points to obtain near-corner points.
After a plurality of boundary points are obtained, it can be judged whether all preset directions have been searched. If so, then to increase the accuracy of corner detection, near-corner points can be screened out of the boundary points; a near-corner point is a pixel point whose distance to the boundary of the projection area or the screen area is within a preset range. For example, as shown in fig. 2, the preset range is an interval near 0, and the distance between pixel point C and the boundary of projection area I2 is within the preset range, so C is a near-corner point. If not all preset directions have been searched, step 13 continues to be executed until they have.
Step 15: and obtaining the corner points of the screen area and the projection area by using the near corner points, taking the corner point coordinates of the screen area as the position of the projection screen, and taking the corner point coordinates of the projection area as the position of the projection area.
After the near-corner points of the screen area are detected, they can be processed to obtain the positions of the four corner points, thereby determining the position of the projection screen; similarly, the near-corner points of the projection area can be processed to obtain its four corner points and thus the position of the projection area. Once the position of the projection screen and the position of the projection area are determined, the projector can be controlled according to their relative positions so that the size and position of the projected picture match: the projection area is slightly smaller than the projection screen, and each boundary of the projection area is parallel to the corresponding boundary of the projection screen. For example, as shown in fig. 3, the screen area I1 and the projection area I2 share the same center, and the four sides of screen area I1 are parallel to the corresponding sides of projection area I2.
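As an illustration of checking the relative position of the two areas once their corner points are known, the sketch below compares quadrilateral centroids; this is a simplification of the matching described above, and the function is hypothetical, not from the patent:

```python
def center_offset(screen_corners, proj_corners):
    """Offset between the centroids of the screen and projection
    quadrilaterals; (0, 0) means the two areas are concentric, as in the
    adjusted state of fig. 3."""
    def centroid(pts):
        return (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))
    sx, sy = centroid(screen_corners)
    px, py = centroid(proj_corners)
    return (px - sx, py - sy)
```

A real adjustment system would also compare side lengths and edge directions, not just centers.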
This embodiment provides a method for identifying a projection position. An image to be processed is first acquired; a point in the projection area is selected as a ray starting point; searching from the ray starting point along a plurality of preset directions yields boundary points; the acquired boundary points are screened to obtain the corresponding near-corner points; the near-corner points give the corner points of the screen area and of the projection area; and the corresponding corner point coordinates are taken as the position of the projection screen and the position of the projection area. The four sides of the projection screen and the four sides of the projection area are thus identified automatically, with high precision and high speed. This improves the experience of using a laser television and, in cooperation with a projector position adjustment device, completes the adaptation of the projection picture to the projection screen, improving the work efficiency of installation engineers; the scheme can also be applied to services such as laser video walls or cinemas.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating another embodiment of a method for identifying a projection position according to the present application, the method including:
step 41: and acquiring an image to be processed.
After the image to be processed is acquired, if it is a color image or a depth image, it may be converted into a grayscale image for convenience of processing. Specifically, the following formula can be used to gray the image to be processed:
G = C1*G_red + C2*G_green + C3*G_blue
where G is the pixel value after graying; G_red, G_green and G_blue are the red, green and blue channel values of the pixel before processing; and C1, C2 and C3 are the red, green and blue channel coefficients, respectively. Further, the gray value of each pixel in the grayscale image ranges from 0 to 255, where 0 represents black and 255 represents white; C1, C2 and C3 may be 0.2989, 0.5870 and 0.1140, respectively.
In other embodiments, if the background wallpaper of the projection screen is blue, the coefficients C1, C2, and C3 in the graying formula may be 0, and 1, respectively, so as to maximize the grayscale difference of the projection area, the screen area, and the background area.
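A minimal sketch of the weighted-sum graying above, in pure Python and one pixel at a time (a real implementation would vectorize over the whole image):

```python
def to_gray(r, g, b, c1=0.2989, c2=0.5870, c3=0.1140):
    """G = C1*G_red + C2*G_green + C3*G_blue; the defaults are the standard
    luminance weights given in the description.  Passing c1=0, c2=0, c3=1
    reproduces the blue-background variant described above."""
    return c1 * r + c2 * g + c3 * b
```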
Step 42: and selecting a point from the projection area as a ray starting point.
This step is the same as step 12 in the above embodiment, and is not described herein again.
Step 43: searching along each preset direction from the starting point of the ray, calculating a pixel difference value between two adjacent pixel points which are away from a preset step length in the preset direction, and recording the pixel difference value as a first pixel difference value.
As shown in fig. 5, the screen area is denoted I1 and the projection area I2. With P as the ray starting point, a ray R is taken in a preset direction; the angle between ray R and the horizontal axis is θ, the coordinates of point P are (Px, Py), and the preset step length is denoted d, i.e. the search advances along ray R in steps of d. The point P1 on ray R at a distance d from point P then has coordinates (Px - d*cosθ, Py - d*sinθ), and its pixel value is denoted G_P1; the point at a distance 2*d from point P is denoted P2, with pixel value G_P2. The pixel difference between point P1 and point P2 is ΔG_{P2-P1} = G_P2 - G_P1.
Step 44: and judging whether the first pixel difference value is larger than a first preset value or not.
The gray difference threshold (i.e. the first preset value) can be designed in advance and denoted ΔGc. To determine the specific position of the transition region, as shown in fig. 5, it can be judged whether the pixel difference between two adjacent sampling points spaced a distance d apart along the direction of ray R is greater than the first preset value.
Step 45: and if the first pixel difference value is not the first preset value, recording an area between two adjacent pixel points in the preset direction as a transition area.
Taking the projection area as an example, as shown in fig. 6, when ΔG_{Pn-Pn-1} = G_Pn - G_Pn-1 > ΔGc (1 ≤ n ≤ h, where h is the number of sampling points obtained when sampling pixel points on ray R with step length d), it can be judged that the boundary of the projection area passes through the region between point Pn-1 and point Pn; this region is recorded as a transition region, and it contains a plurality of pixel points.
In a specific embodiment, the preset step length may be set to 10 pixels and the first preset angle to 10°, i.e. one ray is taken every 10° for 36 rays in total, and the first preset value ΔGc is 30.
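Steps 43 to 45 can be sketched as a single ray-marching loop. The function below is illustrative rather than the patent's implementation: it uses the absolute gray difference where the description writes the signed difference, and represents the image as a plain list of rows.

```python
import math

def find_transition(gray, start, theta_deg, d=10, delta_gc=30):
    """March along a ray from `start` at angle theta, sampling every d
    pixels; return the pair of sample points bounding the first jump whose
    absolute gray difference exceeds delta_gc, or None if the ray leaves
    the image first.  Defaults follow the example parameters (d = 10,
    delta_gc = 30)."""
    h, w = len(gray), len(gray[0])
    px, py = start
    dx = math.cos(math.radians(theta_deg))
    dy = math.sin(math.radians(theta_deg))
    prev_pt, prev_val = (float(px), float(py)), gray[int(py)][int(px)]
    n = 1
    while True:
        x, y = px + n * d * dx, py + n * d * dy
        ix, iy = int(round(x)), int(round(y))
        if not (0 <= ix < w and 0 <= iy < h):
            return None
        val = gray[iy][ix]
        if abs(val - prev_val) > delta_gc:
            return prev_pt, (x, y)  # transition region lies between these
        prev_pt, prev_val = (x, y), val
        n += 1
```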
Step 46: and screening the pixel points in the transition region to obtain boundary points.
The pixel points between point Pn-1 and point Pn are all pixel points in the transition region; these pixel points can be screened to select the best one as the boundary point.
Specifically, the absolute difference between the pixel value of each pixel point in the transition region and a preset pixel value can be computed and recorded as a second pixel difference value; the pixel point corresponding to the smallest of all the second pixel difference values is then taken as a boundary point of the projection area.
In one embodiment, the preset pixel value is Ga = (G_Pn + G_Pn-1) / 2, and the merit function may be f(Pi) = |G_Pi - Ga|, where G_Pi is the pixel value of point Pi (1 ≤ i ≤ h); the pixel point at which the merit function attains its minimum is determined and judged to fall on the boundary of the projection area, as shown in fig. 6, where C is the boundary of the projection area.
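A sketch of the merit function above, assuming Ga is the midpoint of the two endpoint gray values and taking the gray values of the transition-region pixels as input (the helper name is illustrative):

```python
def best_boundary_index(values):
    """values: gray values of the pixels from P_{n-1} to P_n inclusive.
    Ga is the mid-level (G_{P_{n-1}} + G_{P_n}) / 2; the index minimising
    f(i) = |G_i - Ga| is judged to lie on the boundary."""
    ga = (values[0] + values[-1]) / 2.0
    return min(range(len(values)), key=lambda i: abs(values[i] - ga))
```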
In the above manner, by making m rays through point P, m boundary points on the boundary of the projection area can be obtained, denoted {Pm}, as shown in fig. 7.
Step 47: calculating the angle formed by every three adjacent boundary points of the projection area, so as to classify all the boundary points of the projection area as either near-corner points or common points.
The angle formed by a common point and its two adjacent boundary points on the projection area equals a second preset angle, while the angle formed by a near-corner point and its two adjacent boundary points is smaller than the second preset angle. Specifically, the second preset angle is 180°, and the m acquired boundary points are filtered accordingly. As shown in fig. 7, the six boundary points Pm1 to Pm6 fall into two categories: points Pm3 and Pm4 are near-corner points, while points Pm1, Pm2, Pm5 and Pm6 are common points. A common point is characterized by forming an angle of 180° with its adjacent boundary points, such as angle Pm1Pm2Pm3 in fig. 8; a near-corner point forms an angle much smaller than 180° with its adjacent boundary points, such as angle Pm2Pm3Pm4 in fig. 8. Traversing all boundary points in {Pm}, eight near-corner points can be found on the quadrilateral of fig. 8: PA1-PA2, PB1-PB2, PC1-PC2 and PD1-PD2, as shown in fig. 9.
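The near-corner/common-point test reduces to measuring the angle at each boundary point with its two neighbours. A sketch follows; the helper names are illustrative, and a tolerance is added because sampled boundary points are never exactly collinear:

```python
import math

def angle_at(p_prev, p, p_next):
    """Angle in degrees at boundary point p, formed with its two
    neighbouring boundary points."""
    v1 = (p_prev[0] - p[0], p_prev[1] - p[1])
    v2 = (p_next[0] - p[0], p_next[1] - p[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def is_near_corner(p_prev, p, p_next, second_preset_angle=180.0, tol=10.0):
    """Common points lie on a straight edge (angle close to 180 degrees);
    near-corner points form a clearly smaller angle."""
    return angle_at(p_prev, p, p_next) < second_preset_angle - tol
```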
Step 48: connecting two adjacent near-corner points in the projection area to obtain four straight lines; calculating the intersection points of the four straight lines and taking these intersection points as the corner points of the projection area.
For the eight near-corner points obtained for the projection area, adjacent pairs of near-corner points are connected to obtain four straight lines, and the intersection points of these four straight lines are calculated to obtain the corner points A-D of the projection area, as shown in fig. 9.
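A sketch of intersecting two lines, each given by two near-corner points, using the standard two-point determinant formula (this is illustrative, not code from the patent):

```python
def line_intersection(a1, a2, b1, b2):
    """Intersection of the line through a1, a2 with the line through
    b1, b2; returns None if the lines are parallel."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = a1, a2, b1, b2
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if den == 0:
        return None
    d1 = x1 * y2 - y1 * x2
    d2 = x3 * y4 - y3 * x4
    px = (d1 * (x3 - x4) - (x1 - x2) * d2) / den
    py = (d1 * (y3 - y4) - (y1 - y2) * d2) / den
    return (px, py)
```

Applying this to the four edge lines pairwise yields the four corner points of the quadrilateral.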
Step 49: and adjusting the position of the projector based on the position of the projection screen and the position of the projection area, so that the shape of the projection area projected by the projector is matched with the shape of the projection screen.
The position of the screen area is identified in the same way as the position of the projection area: the screen area can be processed according to the above steps to obtain the corner points of the screen area, and the corner point coordinates of the screen area are taken as the position of the projection screen.
It can be understood that corner identification for the projection screen and for the projection area can be performed simultaneously. To distinguish whether a detected corner belongs to the projection screen or the projection area, the pixel distributions of the screen area and the projection area can be used. Specifically, moving from the ray starting point along a preset direction, the image to be processed changes from white to black: within the projection area the pixel values are larger gray values, and continuing along the preset direction into the screen area the pixel values become smaller gray values. That is, for the projection area the pixel values near the boundary decrease along the preset direction, while for the screen area the pixel values near the boundary increase. It can therefore be judged whether, of two adjacent pixel points in the preset direction, the one closer to the ray starting point has the larger pixel value. If the pixel point closer to the ray starting point has the larger pixel value, the transition area corresponds to the projection area, and the pixel points in it are boundary points of the projection area; if the pixel point closer to the ray starting point has the smaller pixel value, the transition area corresponds to the screen area, and the pixel points in it are boundary points of the projection screen.
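The sign test described above can be sketched as a one-line classifier; the helper is hypothetical and assumes the white-projection / dark-screen gray ordering stated in the description:

```python
def classify_transition(g_near, g_far):
    """g_near: sample nearer the ray start; g_far: the next sample outward.
    A drop in gray value marks the projection-area boundary (white picture
    to dark screen); a rise marks the screen boundary (dark screen to
    brighter background wall)."""
    return "projection" if g_near > g_far else "screen"
```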
It can be understood that, since an object of the same size appears at different sizes in the image depending on the shooting position, perspective correction can be used to correct the distortion caused by the shooting position and angle, thereby obtaining the true position of the projection screen and the true position of the projection area.
This embodiment provides a ray-based method: only an image containing the projection screen and the projection area needs to be acquired; the edge lines of the projection screen and the projection area are located from pixel values; each vertex coordinate of the polygon is obtained by intersecting the edge lines; and from the vertex coordinates the length of each side and the angle at each corner are obtained. This yields a perspective view under the photographer's viewing angle, and a perspective correction algorithm then gives the true position of the projection screen and of the projection area. One boundary point can be searched on each ray, and each edge of the polygon needs only two reliable boundary points to determine its line, so robustness is strong. Compared with manual measurement and manual positioning, position identification is more accurate: when the projection screen is 2240 mm wide, each pixel of a 1920 x 1080 picture represents a real distance of less than 1.2 mm, giving high precision. Only a photograph is required, the captured image can be processed fully automatically, and operation is simple and convenient; the calculation involves few steps and runs fast. In addition, since the ray starting point is inside the polygon, objects outside the projection screen cannot interfere with the identification process, so interference resistance is strong.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of a projection position recognition apparatus 100 provided in the present application; the projection position recognition apparatus 100 includes a memory 101 and a processor 102 connected to each other, where the memory 101 is used for storing a computer program which, when executed by the processor 102, implements the method for identifying a projection position in the foregoing embodiments.
This embodiment provides a projection position recognition device 100 that can automatically recognize the accurate sizes of the four sides of the projection screen and the four sides of the projection area in the picture captured by the camera. It offers convenient operation, accurate measurement, high precision and high speed, thereby enabling the projector position adjustment device to quickly adapt the projection picture to the projection screen.
Referring to fig. 11, fig. 11 is a schematic structural diagram of an embodiment of a projector position adjustment system provided in the present application. The projector position adjustment system 110 includes a projection position identification device 111, which is the projection position identification device of the foregoing embodiment.
Referring to fig. 12, fig. 12 is a schematic structural diagram of an embodiment of a computer-readable storage medium 120 provided by the present application, where the computer-readable storage medium 120 is used for storing a computer program 121, and the computer program 121, when being executed by a processor, is used for implementing the method for identifying a projection position in the foregoing embodiment.
The computer-readable storage medium 120 may be a server, a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules or units is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above embodiments are merely examples and are not intended to limit the scope of the present application. All equivalent modifications of structure or process made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, are likewise included in the scope of protection of the present application.

Claims (12)

1. A method of identifying a projection location, comprising:
acquiring an image to be processed, wherein the image to be processed comprises a screen area and a projection area;
selecting a point from the projection area as a ray starting point;
searching along a plurality of preset directions based on the ray starting point to obtain boundary points, wherein two adjacent preset directions are separated by a first preset angle;
screening the boundary points to obtain near-angle points;
and obtaining the corner points of the screen area and the projection area by using the near corner points, taking the corner point coordinates of the screen area as the position of the projection screen, and taking the corner point coordinates of the projection area as the position of the projection area.
2. The method of claim 1, wherein the step of searching along a plurality of preset directions based on the ray starting point to obtain the boundary point comprises:
searching along each preset direction from the starting point of the ray, calculating a pixel difference value between two adjacent pixel points which are away from a preset step length in the preset direction, and recording the pixel difference value as a first pixel difference value;
judging whether the first pixel difference value is larger than a first preset value or not;
if so, recording an area between two adjacent pixel points in the preset direction as a transition area;
and screening the pixel points of the transition region to obtain the boundary points.
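As a toy, one-dimensional reading of the search in claim 2 (illustrative only and not part of the claims; the pixel values, step size, and threshold are assumed, since the claim leaves the preset step length and first preset value unspecified), the sketch below steps outward along one ray direction and flags a transition region where the pixel difference between two samples one step apart exceeds the threshold:

```python
# Step along one ray direction; a transition region is the span between two
# samples (a preset step apart) whose pixel difference exceeds the threshold.

def find_transition(pixels, start, step, threshold):
    """Return (i, i + step) bracketing the first large jump, or None."""
    i = start
    while i + step < len(pixels):
        if abs(pixels[i + step] - pixels[i]) > threshold:
            return i, i + step
        i += step
    return None

# Bright projection area (200) falling off to a dark background (20).
row = [200] * 10 + [110] + [20] * 10
print(find_transition(row, start=0, step=2, threshold=50))  # -> (8, 10)
```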
3. The method for identifying a projection position according to claim 2, wherein the step of filtering the pixel points in the transition region to obtain the boundary point comprises:
calculating the absolute difference between the pixel value of each pixel point in the transition region and a preset pixel value, and recording the absolute difference as a second pixel difference value;
and marking the pixel point corresponding to the second pixel difference value with the minimum median value of all the second pixel difference values as the boundary point.
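An illustrative reading of claim 3 (not part of the claims; the reference value 110 is an assumed midpoint between foreground and background gray levels): within the transition region, the boundary point is the pixel whose value is closest to the preset pixel value.

```python
# Pick the boundary point as the transition-region pixel with the smallest
# absolute difference from a preset reference value. Values are illustrative.

def boundary_point(pixels, region, preset):
    lo, hi = region
    return min(range(lo, hi + 1), key=lambda i: abs(pixels[i] - preset))

row = [200, 200, 180, 115, 40, 20, 20]
print(boundary_point(row, region=(1, 5), preset=110))  # -> 3 (value 115)
```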
4. The method of identifying a projection location of claim 2, further comprising:
judging whether, of two adjacent pixel points in the preset direction, the pixel point closer to the ray starting point has a larger pixel value than the other pixel point;
if so, the transition area is the transition area corresponding to the projection area;
and if not, the transition area is the transition area corresponding to the screen area.
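Claim 4 can be read as a brightness comparison (an illustrative sketch, not part of the claims; the sample values are assumed): if the sample nearer the ray start is brighter, the ray has just left the bright projection area, otherwise the transition belongs to the screen area.

```python
# Classify a transition by comparing the two samples that bracket it:
# inner sample brighter -> projection-area boundary; otherwise screen-area.

def transition_kind(inner_pixel, outer_pixel):
    return "projection" if inner_pixel > outer_pixel else "screen"

print(transition_kind(200, 40))  # -> projection
print(transition_kind(40, 90))   # -> screen
```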
5. The method of claim 1, wherein the step of filtering the boundary points to obtain near-corner points comprises:
calculating the angle formed by every three adjacent boundary points of the projection area, so as to classify all the boundary points of the projection area into the near-angle points or common points;
the included angle formed by the common point and two adjacent boundary points in the projection area is a second preset angle, and the included angle formed by the near-angle point and the two adjacent boundary points in the projection area is smaller than the second preset angle.
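The classification in claim 5 can be sketched as follows (illustrative only; the 170-degree threshold stands in for the unspecified second preset angle, and the square of boundary points is made up): points on a straight edge subtend an angle close to 180 degrees with their neighbours, while near-corner points subtend a noticeably smaller angle.

```python
import math

# Classify each boundary point by the angle its two neighbours form at it:
# close to 180 degrees -> common point on an edge; smaller -> near-corner.

def angle_at(p, a, b):
    """Angle in degrees at p formed by rays p->a and p->b."""
    v1 = (a[0] - p[0], a[1] - p[1])
    v2 = (b[0] - p[0], b[1] - p[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / n))))

def classify(points, straight=170.0):  # threshold is an assumed value
    out = []
    for i, p in enumerate(points):
        a, b = points[i - 1], points[(i + 1) % len(points)]
        out.append("near-corner" if angle_at(p, a, b) < straight else "common")
    return out

# Eight boundary points around a square: corners alternate with edge midpoints.
pts = [(0, 0), (2, 0), (4, 0), (4, 2), (4, 4), (2, 4), (0, 4), (0, 2)]
print(classify(pts))
```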
6. The method of identifying a projection location of claim 1, further comprising:
and adjusting the position of the projector based on the position of the projection screen and the position of the projection area, so that the shape of the projection area projected by the projector is matched with the shape of the projection screen.
7. The method for identifying a projection position according to claim 1, wherein the step of obtaining the corner points of the projection area using the near corner points comprises:
connecting two adjacent near-angle points in the projection area to obtain four straight lines;
and calculating the intersection points of the four straight lines, and taking the intersection points of the four straight lines as the corner points of the projection area.
8. The method of claim 1, wherein the step of selecting a point from the projection region as a starting point of the ray comprises:
carrying out graying processing on the image to be processed by using the following formula:
G = C1*G_red + C2*G_green + C3*G_blue
where G is the processed pixel value; G_red, G_green and G_blue are respectively the pixel values of the red, green and blue channels of the image to be processed before processing; and C1, C2 and C3 are respectively the coefficients corresponding to the red channel, the green channel and the blue channel.
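A minimal sketch of the graying formula in claim 8 (the claim leaves C1 to C3 unspecified; the values below are the common ITU-R BT.601 luma weights, used here only as an illustrative choice):

```python
# Weighted grayscale conversion G = C1*R + C2*G + C3*B.
# Coefficients are the BT.601 luma weights, an assumed example choice.

C1, C2, C3 = 0.299, 0.587, 0.114

def to_gray(r, g, b):
    return C1 * r + C2 * g + C3 * b

print(round(to_gray(255, 255, 255)))  # pure white -> 255
print(round(to_gray(255, 0, 0)))      # pure red   -> 76
```

Because C1 + C2 + C3 = 1, the output stays within the input value range.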
9. The method of claim 1, wherein the step of screening the boundary points to obtain near-corner points comprises:
judging whether all the preset directions are searched;
if so, screening the boundary points to obtain the near angular points;
if not, continuing to execute the step of searching along a plurality of preset directions based on the ray starting point to obtain the boundary point of the screen area and the boundary point of the projection area.
10. A projection location identification device, characterized by comprising a memory and a processor connected to each other, wherein the memory is configured to store a computer program, which when executed by the processor is configured to implement the method of identifying a projection location according to any one of claims 1-9.
11. A projector position adjustment system characterized by comprising a projection position recognition device according to claim 10.
12. A computer-readable storage medium for storing a computer program, characterized in that the computer program, when being executed by a processor, is adapted to carry out the method of identifying a projection position of any of claims 1-9.
CN202010924130.XA 2020-09-04 2020-09-04 Method, device and system for identifying projection position and storage medium Pending CN114140521A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010924130.XA CN114140521A (en) 2020-09-04 2020-09-04 Method, device and system for identifying projection position and storage medium
PCT/CN2021/116345 WO2022048617A1 (en) 2020-09-04 2021-09-03 Method, device, and system for recognizing projection position, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010924130.XA CN114140521A (en) 2020-09-04 2020-09-04 Method, device and system for identifying projection position and storage medium

Publications (1)

Publication Number Publication Date
CN114140521A 2022-03-04

Family

ID=80438486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010924130.XA Pending CN114140521A (en) 2020-09-04 2020-09-04 Method, device and system for identifying projection position and storage medium

Country Status (2)

Country Link
CN (1) CN114140521A (en)
WO (1) WO2022048617A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114862664A (en) * 2022-06-14 2022-08-05 广东宏石激光技术股份有限公司 Pipe characteristic identification method and equipment based on end face projection and storage medium
CN115190281B (en) * 2022-06-30 2024-01-02 海宁奕斯伟集成电路设计有限公司 Device and method for adjusting projection position of projector
CN117315664B (en) * 2023-09-18 2024-04-02 山东博昂信息科技有限公司 Scrap steel bucket number identification method based on image sequence
CN117075077B (en) * 2023-10-18 2023-12-15 北京中科睿信科技有限公司 Rapid calculation method, device and storage medium for radar scattering cross-sectional area

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3911456B2 (en) * 2002-08-12 2007-05-09 オリンパス株式会社 Multi-projection system and correction data acquisition method in multi-projection system
CN101750883B (en) * 2008-12-11 2011-02-09 北京大学 Method and device for detecting angular point of screened image
CN104751458B (en) * 2015-03-23 2017-08-25 华南理工大学 A kind of demarcation angular-point detection method based on 180 ° of rotation operators
CN109257582B (en) * 2018-09-26 2020-12-04 海信视像科技股份有限公司 Correction method and device for projection equipment
CN111290582B (en) * 2020-02-29 2021-09-21 华南理工大学 Projection interaction area positioning method based on improved linear detection

Also Published As

Publication number Publication date
WO2022048617A1 (en) 2022-03-10

Similar Documents

Publication Publication Date Title
CN114140521A (en) Method, device and system for identifying projection position and storage medium
EP3158532B1 (en) Local adaptive histogram equalization
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
US9420276B2 (en) Calibration of light-field camera geometry via robust fitting
US8508580B2 (en) Methods, systems, and computer-readable storage media for creating three-dimensional (3D) images of a scene
US10762655B1 (en) Disparity estimation using sparsely-distributed phase detection pixels
US9756303B1 (en) Camera-assisted automatic screen fitting
US9692958B2 (en) Focus assist system and method
US8811751B1 (en) Method and system for correcting projective distortions with elimination steps on multiple levels
TWI424361B (en) Object tracking method
US8897600B1 (en) Method and system for determining vanishing point candidates for projective correction
Zheng et al. Single-image vignetting correction from gradient distribution symmetries
WO2018094648A1 (en) Guiding method and device for photography composition
CN105812790B (en) Method for evaluating verticality between photosensitive surface and optical axis of image sensor and optical test card
WO2018228466A1 (en) Focus region display method and apparatus, and terminal device
US8913836B1 (en) Method and system for correcting projective distortions using eigenpoints
CN112272292A (en) Projection correction method, apparatus and storage medium
US9336607B1 (en) Automatic identification of projection surfaces
CN113313626A (en) Image processing method, image processing device, electronic equipment and storage medium
CN117061868A (en) Automatic photographing device based on image recognition
CN116337412A (en) Screen detection method, device and storage medium
CN115983304A (en) Two-dimensional code dynamic adjustment method and device, electronic equipment and storage medium
CN117115488B (en) Water meter detection method based on image processing
CN106324976B (en) Test macro and test method
WO2022252007A1 (en) Distance measurement method, distance measurement apparatus, and computer-program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination