CN107277380B - Zooming method and device - Google Patents


Info

Publication number
CN107277380B
CN107277380B (application CN201710702359.7A)
Authority
CN
China
Prior art keywords
projection
image
feature point
feature points
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710702359.7A
Other languages
Chinese (zh)
Other versions
CN107277380A (en)
Inventor
钟波 (Zhong Bo)
肖适 (Xiao Shi)
刘志明 (Liu Zhiming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Jimi Technology Co Ltd
Original Assignee
Chengdu Jimi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Jimi Technology Co Ltd filed Critical Chengdu Jimi Technology Co Ltd
Priority to CN201710702359.7A priority Critical patent/CN107277380B/en
Publication of CN107277380A publication Critical patent/CN107277380A/en
Application granted granted Critical
Publication of CN107277380B publication Critical patent/CN107277380B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/12 Picture reproducers
    • H04N9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141 Constructional details thereof
    • H04N9/317 Convergence or focusing systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/12 Picture reproducers
    • H04N9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179 Video signal processing therefor
    • H04N9/3185 Geometric adjustment, e.g. keystone or convergence
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/12 Picture reproducers
    • H04N9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179 Video signal processing therefor
    • H04N9/3188 Scale or resolution adjustment

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Projection Apparatus (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

An embodiment of the invention provides a zooming method and device, applied to a projection device connected to a camera device, where a coded information image containing a feature point set is prestored in the projection device. The method comprises: projecting the coded information image and receiving, from the camera device, a shot image of the projected coded information image; extracting a feature point set from the shot image and matching it against the feature point set of the coded information image; calculating a projection depth value from the successfully matched feature points; and adjusting the size of the projected coded information image according to the projection depth value, then sizing the projection picture accordingly. In this zooming scheme, the projection depth value, namely the distance between the projection surface and the projection device, is obtained through feature point matching, and an appropriate projection picture size is determined from that distance. The method thus adjusts the projected picture size automatically, saving time and making the process convenient and fast.

Description

Zooming method and device
Technical Field
The invention relates to the technical field of projection, in particular to a zooming method and a zooming device.
Background
The size of the picture projected by existing projection electronic equipment depends on the distance between the equipment and the projection wall surface: the longer the distance, the larger the projected picture, and an overly large picture degrades projection quality. In use, if the projection device can only be placed far from the wall but a moderate picture size is needed, the size must be adjusted manually, which is a cumbersome and time-consuming process. A scheme that automatically adjusts the size of the projection picture according to the distance between the projection device and the projection wall surface is therefore particularly important.
Disclosure of Invention
Accordingly, the present invention is directed to a zooming method and apparatus that solve the above problems.
A preferred embodiment of the present invention provides a zooming method applied to a projection apparatus, where the projection apparatus is connected to an image capturing apparatus, and a coded information image including a feature point set is prestored in the projection apparatus, and the method includes:
projecting the coded information image and receiving a shot image of the projected coded information image sent by the camera equipment;
extracting a feature point set in the shot image, and matching each feature point in the extracted feature point set of the shot image with each feature point in the feature point set of the coded information image;
calculating to obtain a projection depth value according to the successfully matched feature points in the shot image and the feature points in the coded information image;
and adjusting the size of the projected coded information image according to the projection depth value, and adjusting the size of a projection picture of the projection equipment according to the size.
Further, the step of calculating the projection depth value according to the successfully matched feature points in the captured image and the successfully matched feature points in the encoded information image includes:
calculating to obtain a plurality of depth values according to the plurality of groups of feature points successfully matched;
and calculating the average value of the depth values, and taking the average value as a projection depth value.
Further, the method further comprises:
acquiring correction parameters of the camera equipment, wherein the correction parameters comprise a rotation matrix and a translation matrix;
the step of calculating a projection depth value according to the successfully matched feature points in the shot image and the feature points in the encoded information image includes:
obtaining the coordinate values of the feature points in the shot image and the coordinate values of the feature points in the coded information image which are successfully matched;
and calculating according to a preset depth calculation formula, the coordinate values of the feature points in the shot image, the coordinate values of the feature points in the coded information image, the rotation matrix and the translation matrix which are successfully matched to obtain a projection depth value.
Further, the preset depth calculation formula is as follows:
Figure BDA0001380646710000031
Figure BDA0001380646710000032
wherein x is the distance between the projection device and the projected image in the x-axis direction; y is the distance in the y-axis direction; z is the distance in the z-axis direction, namely the projection depth value; x_i and y_i are the abscissa and ordinate values of a successfully matched feature point in the shot image; x_g and y_g are the abscissa and ordinate values of the corresponding successfully matched feature point in the coded information image; f is the focal length of the camera equipment; f' is the focal length of the projection equipment; t_x and t_z are the translation parameters of the x-axis and z-axis in the translation matrix; and r_4 to r_9 are parameter values in the rotation matrix.
Further, the projecting device pre-stores a relationship between a projected depth value and a size of the projected image, and the step of adjusting the size of the projected encoded information image according to the projected depth value includes:
searching the relation between the pre-stored projection depth value and the projection image size, and obtaining the projection image size corresponding to the calculated projection depth value;
and adjusting the size of the projected coded information image according to the searched projected image size.
Another preferred embodiment of the present invention further provides a zoom apparatus applied to a projection device, where the projection device is connected to an image capturing device, and an encoded information image including a feature point set is prestored in the projection device, and the zoom apparatus includes:
the projection module is used for projecting the coded information image and receiving a shot image of the projected coded information image sent by the camera equipment;
the matching module is used for extracting a feature point set in the shot image and matching each feature point in the extracted feature point set of the shot image with each feature point in the feature point set of the coded information image;
the computing module is used for computing to obtain a projection depth value according to the feature points in the shot image and the feature points in the coded information image which are successfully matched;
and the adjusting module is used for adjusting the size of the projected coded information image according to the projection depth value and adjusting the size of a projection picture of the projection equipment according to the size.
Further, the feature points which can be successfully matched in the shot image and the coded information image comprise a plurality of groups, and the computing module comprises a first computing unit and a second computing unit;
the first calculation unit is used for calculating a plurality of depth values according to the plurality of groups of feature points successfully matched;
the second calculation unit is configured to calculate an average value of the plurality of depth values, and use the average value as a projection depth value.
Further, the zoom apparatus further comprises a correction parameter obtaining module, wherein the correction parameter obtaining module is configured to obtain correction parameters of the image capturing device, and the correction parameters include a rotation matrix and a translation matrix;
the calculation module also comprises a coordinate value acquisition unit and a projection depth value calculation unit;
the coordinate value acquisition unit is used for acquiring coordinate values of the feature points in the shot image and coordinate values of the feature points in the coded information image which are successfully matched;
and the projection depth value calculation unit is used for calculating to obtain a projection depth value according to a preset depth calculation formula according to the coordinate values of the feature points in the shot image, the coordinate values of the feature points in the coded information image, the rotation matrix and the translation matrix which are successfully matched.
Further, the preset depth calculation formula is as follows:
Figure BDA0001380646710000051
Figure BDA0001380646710000052
wherein x is the distance between the projection device and the projected image in the x-axis direction; y is the distance in the y-axis direction; z is the distance in the z-axis direction, namely the projection depth value; x_i and y_i are the abscissa and ordinate values of a successfully matched feature point in the shot image; x_g and y_g are the abscissa and ordinate values of the corresponding successfully matched feature point in the coded information image; f is the focal length of the camera equipment; f' is the focal length of the projection equipment; t_x and t_z are the translation parameters of the x-axis and z-axis in the translation matrix; and r_4 to r_9 are parameter values in the rotation matrix.
Furthermore, the projection device prestores a relationship between a projection depth value and a projection image size, and the adjusting module comprises a searching unit and an adjusting unit;
the searching unit is used for searching the relation between the pre-stored projection depth value and the projection image size to obtain the projection image size corresponding to the calculated projection depth value;
the adjusting unit is used for adjusting the size of the projected coded information image according to the size of the searched projected image.
The zooming method and device provided by the embodiments of the invention are applied to a projection device connected to a camera device. The method projects a prestored coded information image and receives the shot image of the projected coded information image taken by the camera device. A feature point set is extracted from the shot image and matched against the feature point set of the coded information image; a projection depth value is calculated from the successfully matched feature points; and the size of the projection picture of the projection device is adjusted according to that depth value. In this zooming scheme, the projection depth value, namely the distance between the projection surface and the projection device, is calculated by matching the feature point set of the camera device's shot image against the feature point set of the projected coded information image. An appropriate projection picture size is determined from this distance, so the picture size is adjusted automatically, saving time and making the process convenient and fast.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should therefore not be considered as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a block diagram of a zoom system according to a preferred embodiment of the present invention.
Fig. 2 is a schematic structural block diagram of a projection apparatus according to a preferred embodiment of the present invention.
Fig. 3 is a flowchart of a zooming method according to a preferred embodiment of the invention.
Fig. 4 is a flowchart of the substeps of step S105 in fig. 3.
Fig. 5 is another flowchart of the substeps of step S105 in fig. 3.
Fig. 6 is a flowchart of the substeps of step S107 in fig. 3.
Fig. 7 is a functional block diagram of a zoom apparatus according to a preferred embodiment of the present invention.
FIG. 8 is a functional block diagram of a computing module according to a preferred embodiment of the present invention.
Fig. 9 is a functional block diagram of an adjusting module according to a preferred embodiment of the present invention.
Reference numerals: 10-a zoom system; 100-a projection device; 110-a zoom device; 111-a projection module; 112-a matching module; 113-a calculation module; 1131-a coordinate value acquisition unit; 1132-a projection depth value calculation unit; 1133-a first calculation unit; 1134-a second calculation unit; 114-an adjustment module; 1141-a lookup unit; 1142-an adjusting unit; 115-a correction parameter acquisition module; 120-a processor; 130-a memory; 200-an image pickup apparatus.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that like reference numerals and letters refer to like items in the following figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Meanwhile, in the description of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "disposed," and "connected" are to be construed broadly: for example, as fixedly connected, detachably connected, or integrally connected; as mechanically or electrically connected; as directly connected or indirectly connected through intervening media; or as an internal connection between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
As shown in fig. 1, the zoom system 10 according to the preferred embodiment of the present invention includes a projection apparatus 100 and an image pickup apparatus 200 disposed in front of the projection apparatus 100. The projection apparatus 100 is connected to the image pickup apparatus 200 and can acquire image information captured by the image pickup apparatus 200. The two apparatuses may be connected by wire or wirelessly; this embodiment does not limit the connection manner.
Referring to fig. 2, a schematic block diagram of a projection apparatus 100 according to an embodiment of the present invention is shown, where the projection apparatus 100 includes a zoom device 110, a processor 120, and a memory 130. Wherein, the memory 130 is electrically connected with the processor 120 directly or indirectly to realize the data transmission or interaction. The projection device 100 includes at least one software functional module, which may be stored in the memory 130 in the form of software or firmware or solidified in the operating system of the projection device 100. The processor 120 is configured to execute executable modules stored in the memory 130, such as software functional modules or computer programs included in the projection device 100.
Referring to fig. 3, a flowchart of a zooming method applied to the projection apparatus 100 according to a preferred embodiment of the invention is shown. It should be noted that the method provided by the present invention is not limited by the specific sequence shown in fig. 3 and described below. The respective steps shown in fig. 3 will be described in detail below.
Step S101 of projecting the encoded information image and receiving a captured image of the projected encoded information image transmitted by the image pickup apparatus 200.
In the present embodiment, the image pickup apparatus 200 is disposed in front of the projection apparatus 100 and can capture the image projected by the projection apparatus 100 onto the projection surface. Optionally, an image carrying encoded information is produced in advance and stored in the projection apparatus 100; this encoded information image contains a feature point set comprising a plurality of feature points. In a specific implementation, the projection apparatus 100 receives a projection instruction and, after starting its distance measurement function, projects the prestored encoded information image onto the projection surface. The image pickup apparatus 200 captures the projected encoded information image and transmits the captured image to the projection apparatus 100.
Step S103 is to extract a feature point set in the captured image, and match each feature point in the extracted feature point set of the captured image with each feature point in the feature point set of the encoded information image.
And step S105, calculating to obtain a projection depth value according to the feature points in the successfully matched shot image and the feature points in the coding information image.
Optionally, in this embodiment, the projection apparatus 100 extracts the feature point set of the captured image using a preset feature point extraction algorithm, such as SURF or SIFT. The feature point set comprises a plurality of feature points. Since feature point extraction with the SURF or SIFT algorithm is a conventional technique in the prior art, it is not described further in this embodiment. The projection apparatus 100 then matches the extracted feature point set of the captured image against the feature point set of the prestored encoded information image.
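To illustrate the matching step concretely, the following is a minimal pure-Python sketch of nearest-neighbour descriptor matching with a ratio test, which discards ambiguous correspondences; it is not the SURF/SIFT implementation itself, and the toy descriptors are invented for illustration only:

```python
import math

def match_features(desc_a, desc_b, ratio=0.75):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b.

    A match is kept only when the nearest distance is clearly smaller than
    the second-nearest (Lowe's ratio test), which rejects ambiguous points.
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    matches = []
    for i, da in enumerate(desc_a):
        # Distances from descriptor da to every descriptor in desc_b.
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy 2-D descriptors standing in for SURF/SIFT feature vectors.
captured = [(0.0, 1.0), (5.0, 5.0), (9.0, 2.0)]
encoded  = [(0.1, 1.1), (9.2, 2.1), (4.0, 9.0)]
print(match_features(captured, encoded))  # the middle point is ambiguous and dropped
```

The surviving pairs are the "successfully matched" feature points from which the depth values are later computed.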
Optionally, the coordinate analysis is performed on the feature points in the feature point set in the extracted captured image, and each feature point in the feature point set in the captured image is matched with each feature point in the encoded information image one by one. And finding out the characteristic points which can be successfully matched with the characteristic points in the characteristic point set of the coded information image in the shot image.
In this embodiment, the projection apparatus 100 calculates the projection depth value according to the feature points in the captured image and the feature points in the encoded information image, which are successfully matched.
Optionally, in this embodiment, the zooming method further includes the following steps:
correction parameters of the image capturing apparatus 200 are acquired, the correction parameters including a rotation matrix and a translation matrix.
In the present embodiment, the image pickup apparatus 200 needs to be corrected (calibrated). It should be understood that, for an image of a specific object captured by the image pickup apparatus 200, the position of the object in the real physical world can be described by a rotation and a translation of the coordinate system of the image pickup apparatus 200. The projection apparatus obtains the correction parameters of the image pickup apparatus 200, including a rotation matrix and a translation matrix, for subsequent use in the projection depth value calculation.
It should be understood that, in correcting the image pickup apparatus 200, a rotation in three-dimensional space may be decomposed into two-dimensional rotations about the respective coordinate axes. If rotations by the angles α, β, θ are applied about the x, y and z axes in turn, the total rotation matrix r is the product of the three matrices rx(α), ry(β) and rz(θ), which are as follows:
rx(α) = [[1, 0, 0], [0, cosα, -sinα], [0, sinα, cosα]]
ry(β) = [[cosβ, 0, sinβ], [0, 1, 0], [-sinβ, 0, cosβ]]
rz(θ) = [[cosθ, -sinθ, 0], [sinθ, cosθ, 0], [0, 0, 1]]
Since r = rz(θ)ry(β)rx(α), r can be obtained by multiplying out the three matrices; its nine elements are denoted r1 to r9:
r = [[r1, r2, r3], [r4, r5, r6], [r7, r8, r9]]
in correcting the image pickup apparatus 200, the translation matrix is used to indicate movement from the origin of one coordinate system to the origin of the other coordinate system. Wherein the translation matrix is represented as follows:
t = [tx, ty, tz]^T
wherein, tx,ty,tzThe translation amounts in the x-axis, y-axis and z-axis directions are respectively indicated.
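As a numerical check of this decomposition, the following NumPy sketch builds the three axis rotations, forms the product r = rz(θ)ry(β)rx(α), and verifies that the result is a proper rotation matrix; the sample angles are arbitrary, and the row-major element layout here is the conventional one and may differ from the patent's own indexing:

```python
import numpy as np

def rx(a):  # rotation by angle a about the x-axis
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def ry(b):  # rotation by angle b about the y-axis
    return np.array([[ np.cos(b), 0, np.sin(b)],
                     [ 0,         1, 0        ],
                     [-np.sin(b), 0, np.cos(b)]])

def rz(t):  # rotation by angle t about the z-axis
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0,          0,         1]])

alpha, beta, theta = 0.1, 0.2, 0.3          # sample angles (radians)
r = rz(theta) @ ry(beta) @ rx(alpha)        # total rotation r = rz(θ)·ry(β)·rx(α)

# A proper rotation matrix is orthonormal with determinant +1.
assert np.allclose(r @ r.T, np.eye(3))
assert np.isclose(np.linalg.det(r), 1.0)
```

Because r is orthonormal, its inverse is simply its transpose, which is why calibration pipelines store only r and t.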
Referring to fig. 4, in the present embodiment, the step S105 may include two substeps, i.e., a step S1051 and a step S1053.
Step S1051, obtaining coordinate values of the feature points in the captured image and coordinate values of the feature points in the encoded information image that are successfully matched.
And step S1053, calculating to obtain a projection depth value according to the coordinate values of the feature points in the successfully matched shot image, the coordinate values of the feature points in the coded information image, the rotation matrix and the translation matrix according to a preset depth calculation formula.
In this embodiment, after the feature points of the captured image that successfully match feature points of the encoded information image have been found, the coordinate values of each matched pair are obtained, expressed as (x_i, y_i) and (x_g, y_g) respectively. The projection apparatus 100 then calculates the projection depth value according to the preset depth calculation formula, combining the rotation matrix and the translation matrix from the correction parameters of the image pickup apparatus 200.
Optionally, in this embodiment, the preset depth calculation formula is as follows:
Figure BDA0001380646710000121
Figure BDA0001380646710000122
where x is the distance between the projection apparatus 100 and the projected image in the x-axis direction; y is the distance in the y-axis direction; z is the distance in the z-axis direction, i.e. the projection depth value; x_i and y_i are the abscissa and ordinate values of a successfully matched feature point in the captured image; x_g and y_g are the abscissa and ordinate values of the corresponding successfully matched feature point in the encoded information image; f is the focal length of the image pickup apparatus 200; f' is the focal length of the projection apparatus 100; t_x and t_z are the translation parameters of the x-axis and z-axis in the translation matrix; and r_4 to r_9 are parameter values in the rotation matrix.
Combining the above formulas gives:
r4=sinθsinβcosα-cosθsinα
r5=sinθsinβsinα+cosθcosα
r6=sinθcosβ
r7=cosθsinβcosα+sinθsinα
r8=cosθsinβsinα-sinθcosα
r9=cosθcosβ
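The depth-formula images themselves (the "Figure BDA…" placeholders above) are not reproduced in this text. As a hedged reconstruction only, a standard projector-camera triangulation consistent with the variables defined above (camera focal length f, projector focal length f', rotation elements r1 to r9, translations t_x and t_z) would take the following form; the patent's actual formula and its element indexing may differ:

```latex
% Camera pinhole model for a point P = (x, y, z) in the camera frame:
%   x_i = f x / z,  y_i = f y / z.
% Projector frame: P' = rP + t, with x_g = f' x' / z', so
\[
\frac{x_g}{f'} \;=\; \frac{r_1 x + r_2 y + r_3 z + t_x}{r_7 x + r_8 y + r_9 z + t_z}.
\]
% Substituting x = x_i z / f and y = y_i z / f and solving for z:
\[
z \;=\; \frac{f' t_x - x_g t_z}
{\,x_g\!\left(r_7\frac{x_i}{f} + r_8\frac{y_i}{f} + r_9\right)
 - f'\!\left(r_1\frac{x_i}{f} + r_2\frac{y_i}{f} + r_3\right)},
\qquad x = \frac{x_i z}{f}, \quad y = \frac{y_i z}{f}.
\]
```

Each matched pair (x_i, y_i), (x_g, y_g) thus yields one estimate of z, which is why the next substeps aggregate several depth values.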
it should be understood that, in the present embodiment, a plurality of feature points are included in the feature point set of the captured image, and a plurality of feature points are included in the feature point set of the encoded information image. The feature points which can be successfully matched between the two feature points comprise a plurality of groups. Referring to fig. 5, in the present embodiment, step S105 may further include two substeps, step S1055 and step S1057.
Step S1055, calculating a plurality of depth values according to the plurality of groups of feature points successfully matched.
Step S1057, calculating an average value of the plurality of depth values, and taking the average value as a projection depth value.
In this embodiment, a depth value can be calculated from each successfully matched group of feature points, so a plurality of depth values are obtained from the plurality of matched groups. These depth values can then be analyzed and compared to derive an optimal value, which is taken as the final projection depth value.
The optimal value may be the average of the depth values. Alternatively, the average may be computed and the measured depth value closest to it taken as the optimal value, i.e. as the final projection depth value. Other ways of obtaining the optimal value may of course be used; this embodiment is not particularly limited in this respect, and the choice can be made as required.
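The two candidate strategies just described can be sketched in plain Python as follows; the sample depth values are invented for illustration:

```python
def depth_by_mean(depths):
    """Projection depth as the plain average of per-group depth values."""
    return sum(depths) / len(depths)

def depth_closest_to_mean(depths):
    """Projection depth as the measured value nearest the average,
    so the result is always one of the actual measurements."""
    mean = sum(depths) / len(depths)
    return min(depths, key=lambda d: abs(d - mean))

# Depth estimates (metres) from five matched feature-point groups; one outlier.
depths = [2.48, 2.52, 2.50, 2.47, 2.95]
print(depth_by_mean(depths))
print(depth_closest_to_mean(depths))
```

Note that the closest-to-mean variant is less distorted by a single mismatched feature pair than the plain average, which is one reason an implementation might prefer it.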
Step S107, the size of the projected encoded information image is adjusted according to the projected depth value, and the size of the projection screen of the projection apparatus 100 is adjusted according to the size.
Referring to fig. 6, in the present embodiment, the step S107 may include two substeps, i.e., a step S1071 and a step S1073.
Step S1071, finds a relationship between the pre-stored projection depth value and the size of the projected image, and obtains the size of the projected image corresponding to the calculated projection depth value.
Step S1073, adjusting the size of the projected encoded information image according to the searched projected image size.
In this embodiment, the projection apparatus 100 also prestores a relationship between projection depth values and projection image sizes, i.e. each projection depth value has a projection image size adapted to it. After obtaining the current projection depth value, i.e. the distance between the projection apparatus 100 and the projected image, the projection apparatus 100 looks up this prestored relationship to obtain the projection image size adapted to the current depth. It then adjusts the size of the projected encoded information image accordingly and uses the adjusted size as the reference size for projecting other images; that is, the projection picture of the projection apparatus 100 is sized according to the adjusted encoded information image. This arrangement avoids the poor projection quality caused by a mismatch between the distance from the projection apparatus 100 to the projection surface and the size of the projected image, and also avoids the cumbersome, time-consuming process of manual adjustment.
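One way to realize such a prestored relationship is a sorted breakpoint table queried by binary search; this is a sketch only, and the depth breakpoints and picture sizes below are hypothetical values, not figures from the patent:

```python
import bisect

# Hypothetical calibration table: projection depth (m) -> picture diagonal (inches).
DEPTHS = [1.0, 1.5, 2.0, 2.5, 3.0]
SIZES  = [40,  60,  80,  100, 120]

def picture_size_for_depth(depth):
    """Return the stored size whose depth breakpoint is nearest below
    (or equal to) the measured depth, clamped to the table's range."""
    i = bisect.bisect_right(DEPTHS, depth) - 1
    i = max(0, min(i, len(SIZES) - 1))
    return SIZES[i]

print(picture_size_for_depth(2.2))  # falls in the [2.0, 2.5) bucket
```

Clamping at both ends keeps the lookup well defined even when the measured depth lies outside the calibrated range.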
Referring to fig. 7, a functional block diagram of a zoom apparatus 110 applied to the projection apparatus 100 according to a preferred embodiment of the invention is shown. The zoom apparatus 110 includes a projection module 111, a matching module 112, a calculation module 113, and an adjustment module 114.
The projection module 111 is configured to project the encoded information image and receive a captured image of the projected encoded information image transmitted by the image capturing apparatus 200. Specifically, the projection module 111 can be used to execute step S101 shown in fig. 3, and the detailed description of step S101 can be referred to for a specific operation method.
The matching module 112 is configured to extract a feature point set in the captured image, and match each feature point in the extracted feature point set of the captured image with each feature point in the feature point set of the encoded information image. Specifically, the matching module 112 may be configured to execute step S103 shown in fig. 3, and the detailed description of step S103 may be referred to for a specific operation method.
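The patent does not name a particular feature detector or descriptor. As a hedged sketch, assuming descriptor vectors have already been extracted from both images, the matching performed by the matching module 112 could be brute-force nearest-neighbour matching with a ratio test (the `ratio` threshold is an assumption, not taken from the patent):

```python
import numpy as np

def match_features(desc_shot: np.ndarray, desc_coded: np.ndarray,
                   ratio: float = 0.75) -> list:
    """Brute-force nearest-neighbour matching with a ratio test.

    desc_shot  : (N, D) descriptors extracted from the captured image
    desc_coded : (M, D) descriptors of the pre-stored encoded information image
    Returns (i, j) index pairs of successfully matched feature points.
    """
    matches = []
    for i, d in enumerate(desc_shot):
        dists = np.linalg.norm(desc_coded - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if the best distance is clearly smaller than the
        # runner-up, which filters out ambiguous matches.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```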
The calculating module 113 is configured to calculate a projection depth value according to the feature points in the successfully matched shot image and the feature points in the encoded information image. Specifically, the calculating module 113 may be configured to execute step S105 shown in fig. 3, and the detailed description of step S105 may be referred to for a specific operation method.
The adjusting module 114 is configured to adjust the size of the projected encoded information image according to the projected depth value, and adjust the size of the projection picture of the projection apparatus 100 according to the size. Specifically, the adjusting module 114 can be used to execute step S107 shown in fig. 3, and the detailed description of step S107 can be referred to for a specific operation method.
In this embodiment, the zoom apparatus 110 further includes a correction parameter acquiring module 115, and the correction parameter acquiring module 115 is configured to acquire correction parameters of the image capturing apparatus 200, where the correction parameters include a rotation matrix and a translation matrix.
Optionally, referring to fig. 8, the calculating module 113 further includes a coordinate value obtaining unit 1131 and a projected depth value calculating unit 1132.
The coordinate value obtaining unit 1131 is configured to obtain coordinate values of the feature points in the captured image and coordinate values of the feature points in the encoded information image, which are successfully matched. Specifically, the coordinate value acquiring unit 1131 may be configured to execute step S1051 shown in fig. 4, and a specific operation method may refer to the detailed description of step S1051.
The projection depth value calculating unit 1132 is configured to calculate, according to a preset depth calculation formula, a projection depth value according to the coordinate values of the feature points in the captured image, the coordinate values of the feature points in the encoded information image, and the rotation matrix and the translation matrix that are successfully matched. Specifically, the projected depth value calculating unit 1132 may be used to perform step S1053 shown in fig. 4, and a specific operation method may refer to the detailed description of step S1053.
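The patent's closed-form depth formula appears only in the equation images and is not reproduced here. As an illustrative substitute that uses the same inputs (matched coordinate values, the two focal lengths, and the rotation and translation matrices), a generic linear DLT-style triangulation can recover the depth z; this is a standard technique and not necessarily the patent's exact formula.

```python
import numpy as np

def triangulate_depth(pt_shot, pt_coded, f_cam, f_proj, R, t):
    """Least-squares triangulation of one matched feature-point pair.

    pt_shot  : (x_i, y_i) coordinates of the feature point in the captured image
    pt_coded : (x_g, y_g) coordinates in the encoded information image
    f_cam    : focal length f of the image pickup device
    f_proj   : focal length f' of the projection device
    R, t     : 3x3 rotation matrix and 3-vector translation (the patent's
               correction parameters)

    Returns z, the depth along the projection axis.  Generic DLT sketch,
    not the patent's closed-form expression.
    """
    K_cam = np.diag([f_cam, f_cam, 1.0])
    K_proj = np.diag([f_proj, f_proj, 1.0])
    # Camera at the origin; projector posed by [R | t].
    P_cam = K_cam @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_proj = K_proj @ np.hstack([R, np.asarray(t, float).reshape(3, 1)])

    # Standard DLT system A X = 0 built from x cross (P X) = 0.
    x_i, y_i = pt_shot
    x_g, y_g = pt_coded
    A = np.array([
        x_i * P_cam[2] - P_cam[0],
        y_i * P_cam[2] - P_cam[1],
        x_g * P_proj[2] - P_proj[0],
        y_g * P_proj[2] - P_proj[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    X = X[:3] / X[3]   # de-homogenise
    return X[2]        # z component = projection depth value
```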
In this embodiment, the feature points that can be successfully matched in the captured image and the encoded information image include multiple groups, and the calculating module 113 may further include a first calculating unit 1133 and a second calculating unit 1134.
The first calculating unit 1133 is configured to calculate a plurality of depth values according to the plurality of groups of feature points successfully matched. Specifically, the first computing unit 1133 may be configured to execute step S1055 shown in fig. 5, and the detailed description of step S1055 may be referred to for a specific operation method.
The second calculation unit 1134 is configured to calculate an average value of the depth values, and use the average value as a projection depth value. Specifically, the second calculating unit 1134 can be used to execute step S1057 shown in fig. 5, and the detailed description of step S1057 can be referred to for a specific operation method.
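The averaging performed by the second calculation unit 1134 is plain arithmetic over the per-group depth values; a minimal sketch:

```python
def average_projection_depth(depth_values):
    """Average the depth values computed from the groups of successfully
    matched feature points into a single projection depth value."""
    if not depth_values:
        raise ValueError("no successfully matched feature points")
    return sum(depth_values) / len(depth_values)
```

Averaging over all matched groups damps the noise contributed by any single feature-point pair.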
In the present embodiment, the projection apparatus 100 pre-stores a relationship between a projection depth value and a projection image size, referring to fig. 9, the adjusting module 114 includes a searching unit 1141 and an adjusting unit 1142.
The searching unit 1141 is configured to search a relationship between a pre-stored projection depth value and a size of the projection image, and obtain a size of the projection image corresponding to the calculated projection depth value. Specifically, the search unit 1141 may be configured to execute step S1071 shown in fig. 6, and the detailed description of step S1071 may be referred to for a specific operation method.
The adjusting unit 1142 is configured to adjust the size of the projected encoded information image according to the searched size of the projected image. Specifically, the adjusting unit 1142 may be configured to execute step S1073 shown in fig. 6, and the detailed description of the step S1073 may be referred to for a specific operation method.
In summary, the zooming method and apparatus provided by the embodiments of the invention are applied to a projection apparatus 100 connected to an image pickup apparatus 200. The projection apparatus 100 projects a pre-stored encoded information image and receives a captured image of the projected encoded information image transmitted by the image pickup apparatus 200; it then extracts a feature point set from the captured image, matches it against the feature point set of the encoded information image, calculates a projection depth value from the successfully matched feature points, and adjusts the size of the projected encoded information image according to that value. In other words, the zoom scheme calculates the projection depth value, i.e., the distance between the projection surface and the projection apparatus 100, by matching the feature point set of the image captured by the image pickup apparatus 200 with the feature point set of the projected encoded information image, and determines a suitable projection picture size from that distance. The method can thus adjust the picture size automatically according to the projection depth value, saving time and making the process convenient.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (6)

1. A zooming method applied to a projection apparatus connected to an image pickup apparatus in which an encoded information image including a feature point set is prestored, comprising:
projecting the coded information image and receiving a shot image of the projected coded information image sent by the camera equipment;
extracting a feature point set in the shot image, and matching each feature point in the extracted feature point set of the shot image with each feature point in the feature point set of the coded information image;
calculating to obtain a projection depth value according to the successfully matched feature points in the shot image and the feature points in the coded information image;
adjusting the size of the projected coded information image according to the projection depth value, and adjusting the size of a projection picture of the projection equipment according to the size;
the method further comprises the following steps:
acquiring correction parameters of the camera equipment, wherein the correction parameters comprise a rotation matrix and a translation matrix;
the step of calculating a projection depth value according to the successfully matched feature points in the shot image and the feature points in the encoded information image includes:
obtaining the coordinate values of the feature points in the shot image and the coordinate values of the feature points in the coded information image which are successfully matched;
calculating according to a preset depth calculation formula, the coordinate values of the feature points in the shot image, the coordinate values of the feature points in the coded information image, the rotation matrix and the translation matrix which are successfully matched, so as to obtain a projection depth value;
the preset depth calculation formula is as follows:
Figure FDA0002670353860000021
Figure FDA0002670353860000022
wherein x is the distance between the projection device and the projection image in the x-axis direction, y is the distance in the y-axis direction, and z is the distance in the z-axis direction, i.e., the projection depth value; x_i and y_i are the abscissa and ordinate values of the successfully matched feature point in the captured image; x_g and y_g are the abscissa and ordinate values of the successfully matched feature point in the encoded information image; f is the focal length of the image pickup device; f' is the focal length of the projection device; t_x is the translation parameter of the x-axis in the translation matrix; t_z is the translation parameter of the z-axis in the translation matrix; and r_4 to r_9 are the parameter values in the rotation matrix.
2. The zooming method according to claim 1, wherein the successfully matched feature points in the captured image and the encoded information image include a plurality of groups, and the step of calculating the projection depth values based on the successfully matched feature points in the captured image and the successfully matched feature points in the encoded information image comprises:
calculating to obtain a plurality of depth values according to the plurality of groups of feature points successfully matched;
and calculating the average value of the depth values, and taking the average value as a projection depth value.
3. The zooming method of claim 1, wherein the projection device has a pre-stored relationship between projected depth values and projected image sizes, and wherein the step of adjusting the size of the projected encoded information image according to the projected depth values comprises:
searching the relation between the pre-stored projection depth value and the projection image size, and obtaining the projection image size corresponding to the calculated projection depth value;
and adjusting the size of the projected coded information image according to the searched projected image size.
4. A zoom apparatus applied to a projection device connected to an image pickup device, in which an encoded information image including a feature point set is prestored, the zoom apparatus comprising:
the projection module is used for projecting the coded information image and receiving a shot image of the projected coded information image sent by the camera equipment;
the matching module is used for extracting a feature point set in the shot image and matching each feature point in the extracted feature point set of the shot image with each feature point in the feature point set of the coded information image;
the computing module is used for computing to obtain a projection depth value according to the feature points in the shot image and the feature points in the coded information image which are successfully matched;
the adjusting module is used for adjusting the size of the projected coded information image according to the projection depth value and adjusting the size of a projection picture of the projection equipment according to the size;
the zoom device further comprises a correction parameter acquisition module, wherein the correction parameter acquisition module is used for acquiring correction parameters of the camera equipment, and the correction parameters comprise a rotation matrix and a translation matrix;
the calculation module also comprises a coordinate value acquisition unit and a projection depth value calculation unit;
the coordinate value acquisition unit is used for acquiring coordinate values of the feature points in the shot image and coordinate values of the feature points in the coded information image which are successfully matched;
the projection depth value calculation unit is used for calculating to obtain a projection depth value according to a preset depth calculation formula according to the coordinate values of the feature points in the shot image, the coordinate values of the feature points in the coded information image, the rotation matrix and the translation matrix which are successfully matched;
the preset depth calculation formula is as follows:
Figure FDA0002670353860000041
Figure FDA0002670353860000042
wherein x is the distance between the projection device and the projection image in the x-axis direction, y is the distance in the y-axis direction, and z is the distance in the z-axis direction, i.e., the projection depth value; x_i and y_i are the abscissa and ordinate values of the successfully matched feature point in the captured image; x_g and y_g are the abscissa and ordinate values of the successfully matched feature point in the encoded information image; f is the focal length of the image pickup device; f' is the focal length of the projection device; t_x is the translation parameter of the x-axis in the translation matrix; t_z is the translation parameter of the z-axis in the translation matrix; and r_4 to r_9 are the parameter values in the rotation matrix.
5. The zoom device according to claim 4, wherein the feature points that can be successfully matched in the captured image and the encoded information image include a plurality of sets, and the calculation module includes a first calculation unit and a second calculation unit;
the first calculation unit is used for calculating a plurality of depth values according to the plurality of groups of feature points successfully matched;
the second calculation unit is configured to calculate an average value of the plurality of depth values, and use the average value as a projection depth value.
6. The zoom apparatus according to claim 4, wherein the projection device pre-stores a relationship between a projection depth value and a projection image size, and the adjustment module comprises a search unit and an adjustment unit;
the searching unit is used for searching the relation between the pre-stored projection depth value and the projection image size to obtain the projection image size corresponding to the calculated projection depth value;
the adjusting unit is used for adjusting the size of the projected coded information image according to the size of the searched projected image.
CN201710702359.7A 2017-08-16 2017-08-16 Zooming method and device Active CN107277380B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710702359.7A CN107277380B (en) 2017-08-16 2017-08-16 Zooming method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710702359.7A CN107277380B (en) 2017-08-16 2017-08-16 Zooming method and device

Publications (2)

Publication Number Publication Date
CN107277380A CN107277380A (en) 2017-10-20
CN107277380B true CN107277380B (en) 2020-10-30

Family

ID=60080118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710702359.7A Active CN107277380B (en) 2017-08-16 2017-08-16 Zooming method and device

Country Status (1)

Country Link
CN (1) CN107277380B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108961855B (en) * 2018-05-04 2021-02-19 何战涛 Portable early education equipment and use method thereof
CN110087049A (en) * 2019-05-27 2019-08-02 广州市讯码通讯科技有限公司 Automatic focusing system, method and projector
CN110491316A (en) * 2019-07-08 2019-11-22 青岛小鸟看看科技有限公司 A kind of projector and its method for controlling projection

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102841767A (en) * 2011-06-22 2012-12-26 华为终端有限公司 Multi-projection splicing geometric correcting method and device
CN104778658A (en) * 2015-04-01 2015-07-15 北京理工大学 Full-automatic geometric mosaic correction method for images projected by multiple projectors
CN106204574A (en) * 2016-07-07 2016-12-07 兰州理工大学 Camera pose self-calibrating method based on objective plane motion feature
CN106507084A (en) * 2016-10-18 2017-03-15 安徽协创物联网技术有限公司 A kind of panorama camera array multi-view image bearing calibration

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
CN103686107B (en) * 2013-12-13 2017-01-25 华为技术有限公司 Processing method and device based on projected image
CN103838437B (en) * 2014-03-14 2017-02-15 重庆大学 Touch positioning control method based on projection image
JP2016019194A (en) * 2014-07-09 2016-02-01 株式会社東芝 Image processing apparatus, image processing method, and image projection device
CN104090664B (en) * 2014-07-29 2017-03-29 广景科技有限公司 A kind of interactive projection method, apparatus and system
US9625719B2 (en) * 2014-09-04 2017-04-18 Yazaki Corporation Projection display device
US9674504B1 (en) * 2015-12-22 2017-06-06 Aquifi, Inc. Depth perceptive trinocular camera system
CN105554486A (en) * 2015-12-22 2016-05-04 Tcl集团股份有限公司 Projection calibration method and device
CN105979234B (en) * 2016-06-13 2019-03-19 Tcl集团股份有限公司 A kind of method and projection arrangement of projection image correction
CN106683134B (en) * 2017-01-25 2019-12-31 触景无限科技(北京)有限公司 Image calibration method and device for desk lamp
CN106973275A (en) * 2017-03-22 2017-07-21 北京小米移动软件有限公司 The control method and device of projector equipment

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN102841767A (en) * 2011-06-22 2012-12-26 华为终端有限公司 Multi-projection splicing geometric correcting method and device
CN104778658A (en) * 2015-04-01 2015-07-15 北京理工大学 Full-automatic geometric mosaic correction method for images projected by multiple projectors
CN106204574A (en) * 2016-07-07 2016-12-07 兰州理工大学 Camera pose self-calibrating method based on objective plane motion feature
CN106507084A (en) * 2016-10-18 2017-03-15 安徽协创物联网技术有限公司 A kind of panorama camera array multi-view image bearing calibration

Non-Patent Citations (1)

Title
Zhu Bo, "Research on an Intelligent Projection System Based on an Adaptive Correction Method for Projected Images" (《基于投影图像自适应校正方法的智能投影***研究》), China Doctoral Dissertations Full-text Database (《中国博士学位论文全文数据库》), 2014-06-15, full text *

Also Published As

Publication number Publication date
CN107277380A (en) 2017-10-20

Similar Documents

Publication Publication Date Title
EP3252715B1 (en) Two-camera relative position calculation system, device and apparatus
CN111243035B (en) Camera calibration method and device, electronic equipment and computer-readable storage medium
CN107277380B (en) Zooming method and device
US20120127276A1 (en) Image retrieval system and method and computer product thereof
US9613404B2 (en) Image processing method, image processing apparatus and electronic device
JP2019004451A (en) Method, apparatus, device, and computer-readable storage medium for processing panorama video
JP2018092580A5 (en) Image processing apparatus, image processing method, and program
JP2014127151A5 (en)
WO2013190862A1 (en) Image processing device and image processing method
CN106570899B (en) Target object detection method and device
US20150147047A1 (en) Simulating tracking shots from image sequences
CN105809664B (en) Method and device for generating three-dimensional image
US20120162387A1 (en) Imaging parameter acquisition apparatus, imaging parameter acquisition method and storage medium
CN107256388B (en) Method and device for acquiring front face image
CN109443305B (en) Distance measuring method and device
CN113190120B (en) Pose acquisition method and device, electronic equipment and storage medium
KR20150031085A (en) 3D face-modeling device, system and method using Multiple cameras
CN115564842A (en) Parameter calibration method, device, equipment and storage medium for binocular fisheye camera
JP2019020778A5 (en)
JP2021106025A5 (en)
CN111741223B (en) Panoramic image shooting method, device and system
CN111712857A (en) Image processing method, device, holder and storage medium
CN112802112B (en) Visual positioning method, device, server and storage medium
US20140307053A1 (en) Method of prompting proper rotation angle for image depth establishing
JP6489655B2 (en) Camera calibration apparatus, method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 610000 Tianfu Software Park Area A, 1129 Century City Road, Chengdu High-tech Zone, Sichuan Province

Applicant after: Chengdu Jimi Technology Co., Ltd.

Address before: No. 1129 Tianfu Software Park A District Century City high tech Zone of Chengdu City, Sichuan Province Road 610000 7 5 storey building No. 501

Applicant before: CHENGDU XGIMI TECHNOLOGY CO., LTD.

GR01 Patent grant