CN116016878A - Artificial intelligent following type projection sand table and projection method thereof - Google Patents

Artificial intelligent following type projection sand table and projection method thereof

Info

Publication number
CN116016878A
Authority
CN
China
Prior art keywords
projection
observer
data
sand table
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211687548.9A
Other languages
Chinese (zh)
Inventor
崔龙竹
刘毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Wenbo Intelligent Technology Co ltd
Original Assignee
Beijing Wenbo Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Wenbo Intelligent Technology Co ltd filed Critical Beijing Wenbo Intelligent Technology Co ltd
Priority to CN202211687548.9A priority Critical patent/CN116016878A/en
Publication of CN116016878A publication Critical patent/CN116016878A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application discloses an artificial intelligent following type projection sand table and a projection method thereof, belonging to the technical field of digital sand tables. The sand table comprises a data acquisition device, a data processing terminal and a projector; the data processing terminal is connected with the data acquisition device, and the projector is connected with the data processing terminal. The data acquisition device is used for acquiring the projection area of the sand table, observer data information and distance information between the observer and the sand table; the data processing terminal is used for processing the observer data information and the distance information, calculating projection data by using a projection display prediction model, and transmitting the projection data to the projector; the projector is used for adjusting its projection parameters according to the projection data and projecting an image. According to this scheme, sensors in the sand table cooperate with corresponding analysis algorithms to process the data, so that the direction and effect of the sand table's projection display can be dynamically adjusted based on changes in the observer's position.

Description

Artificial intelligent following type projection sand table and projection method thereof
Technical Field
The application belongs to the technical field of digital sand tables, and particularly relates to an artificial intelligent following type projection sand table and a projection method thereof.
Background
The existing sand table projection geographic information mode is that geographic information is projected into the sand table through a projector.
The projection effect of a sand table projector is strongly correlated with the viewer's position: the viewer can recognize the content correctly only at a set position, and if the viewer moves, the observed content shifts, greatly reducing comprehension of the displayed content. As shown in fig. 2, the viewer sees the display text correctly only at position A; at positions B, C and D the content appears offset to some degree. For example, the content seen at position B is offset by 90°, greatly affecting the user's experience.
Disclosure of Invention
Therefore, the application provides an artificial intelligence following type projection sand table and a projection method thereof, which address the problem that existing projection sand tables are limited to a fixed viewer position and cannot dynamically adjust projection data, resulting in a poor viewing effect.
In order to achieve the above purpose, the present application adopts the following technical scheme:
in a first aspect, the present application provides an artificial intelligence follow-up projection sand table comprising:
the data acquisition device is used for acquiring the projection area of the sand table, the data information of the observer and the distance information between the observer and the sand table;
the data processing terminal is connected with the data acquisition device and is used for carrying out data processing on the data information and the distance information of the observer, calculating projection data by using a projection display prediction model and transmitting the projection data to the projector;
and the projector is connected with the data processing terminal and is used for adjusting the device's projection parameters according to the projection data and projecting an image.
Further, the data acquisition device comprises a binocular camera, a depth camera and an eye movement sensor arranged around the sand table in preset data acquisition azimuths, and the data processing terminal is connected with the binocular camera, the depth camera and the eye movement sensor respectively. The binocular camera is used for acquiring picture images within a preset data acquisition azimuth range at a first preset acquisition frequency, identifying contour image information of observers within that range by using a person detection and recognition algorithm, and acquiring the projection area of the sand table; the depth camera is used for acquiring distance information between observers and the sand table within the preset data acquisition azimuth range at the first preset acquisition frequency; the eye movement sensor is used for capturing human eye data of observers within the preset data acquisition azimuth range.
Further, the data processing terminal specifically comprises an observer data processing module, a projection display prediction module and a projection output module;
the observer data processing module is used for performing person modeling on the observer data information and the distance information between the observer and the sand table to obtain actual observer information, and for determining the preset data acquisition azimuth with the largest number of observers;
the projection display prediction module is used for processing the actual observer information and the projection area of the sand table respectively by using the projection display prediction model to obtain a new projection direction and the new projection area after the projection direction is changed, and for calculating the optimal projection data in combination with the distance information between the observers and the sand table;
the projection output module is used for transmitting projection data to the projector.
Further, the preset data acquisition azimuths include four preset data acquisition azimuths of equal angular extent, namely a first preset data acquisition azimuth, a second preset data acquisition azimuth, a third preset data acquisition azimuth and a fourth preset data acquisition azimuth.
In a second aspect, the present application provides a projection method of an artificial intelligence follow-up projection sand table, which is applied to the artificial intelligence follow-up projection sand table provided in the first aspect, and the projection method includes:
s1: identifying observer data, and acquiring a projection area of the sand table, data information of the observer and distance information between the observer and the sand table by using a data acquisition device;
s2: observer data processing, in which the observer data information and the distance information are processed, projection data are calculated by using the projection display prediction model, and the projection data are transmitted to the projector;
s3: projection display control, in which the device's projection parameters are adjusted according to the projection data and an image is projected.
Further, the step S1 specifically includes: firstly, the binocular camera acquires picture images within the preset data acquisition azimuth range at the first preset acquisition frequency, identifies contour image information of observers within that range by using a person detection and recognition algorithm, and simultaneously acquires the projection area of the sand table; then, the depth camera acquires distance information between observers and the sand table within the preset data acquisition azimuth range at the first preset acquisition frequency; finally, the eye movement sensor captures human eye data of observers within the preset data acquisition azimuth range.
Further, the step S2 specifically includes the following substeps:
s201: performing person modeling on the observer data information and the distance information between the observers and the sand table to obtain actual observer information, and determining the preset data acquisition azimuth with the largest number of observers;
s202: the projection display prediction model is utilized to respectively process the information of an actual observer and the projection area of the sand table, a new projection direction and a new projection area after the projection direction is changed are obtained, and the optimal projection data is calculated by combining the distance information between the observer and the sand table;
s203: the optimal projection data is transmitted to the projector.
Further, the substep S201 specifically includes:
s2011: firstly, combining contour image information and distance information of observers in each preset data acquisition azimuth range, and drawing all observer images around a sand table, wherein each observer image comprises the contour information of the identified observer and the position of the identified observer;
s2012: based on the human eye data of observers within each preset data acquisition azimuth range, performing human eye data analysis that treats observers who are not watching the sand table as false observers, so as to obtain the actual number of observers within each preset data acquisition azimuth range;
s2013: selecting any one preset data acquisition azimuth as a starting point, numbering the actual observers within each preset data acquisition azimuth range in a clockwise or anticlockwise direction, and comparing the actual numbers of observers within each preset data acquisition azimuth range to obtain the preset data acquisition azimuth with the largest number of observers;
s2014: the data from steps S2011 to S2013 are transmitted as observer information to the projection display prediction model.
Further, the substep S202 specifically includes:
s2021: calculating actual projection area data of the projector by utilizing a projection area algorithm based on the projection area of the sand table, and transmitting the actual projection area data to a projection display prediction model;
s2022: the projection display prediction model imports observer information and projection area data, and carries out data cleaning on the observer information and the projection area data according to a preset data cleaning rule;
s2023: the projection display prediction model judges a new projection direction according to a preset data acquisition direction with the largest number of people in the observer information, and calculates a new projection area based on the new projection direction and projection area data;
s2024: the projection display prediction model calculates the optimal projection parameters based on the new projection direction in combination with the actual number of observers and the observer positions in the observer information, wherein the projection parameters comprise the projection offset angle, the projection picture resolution and the projection size.
The application adopts the technical scheme, possesses following beneficial effect at least:
the artificial intelligent following type projection sand table comprises a data acquisition device, a data processing terminal and a projector; the data processing terminal is connected with the data acquisition device, and the projector is connected with the data processing terminal; the data acquisition device is used for acquiring the projection area of the sand table, the data information of the observer and the distance information between the observer and the sand table; the data processing terminal is used for carrying out data processing on the observer data information and the distance information, calculating projection data by using a projection display prediction model and transmitting the projection data to the projector; the projector is used for projecting an image according to projection parameters of the projection data adjusting device. Under the setting, the data acquisition device recognizes the observer and the position parameter thereof through the sensor, then the data processing terminal uses the position of the observer as a display reference plane, calculates the offset angle (projection direction) and the projection display area of the projection picture by using the existing projection display prediction model, transmits the calculated offset angle (projection direction) and the projection display area to the projector, and finally controls the projector to perform projection display. According to the method and the device, the sensor in the sand table is matched with the corresponding analysis algorithm to perform data processing, the direction and the effect of projection display of the sand table can be dynamically adjusted based on the position change of an observer, and the problem that the existing projection sand table is limited by the position of the observer and cannot dynamically adjust projection data to cause poor watching effect is solved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an artificial intelligence following type projection sand table according to an exemplary embodiment;
FIG. 2 is a schematic diagram illustrating a sand table projection of the related art according to one embodiment;
FIG. 3 is a schematic diagram of the device layout of an artificial intelligence following type projection sand table according to an example embodiment;
FIG. 4 is a flowchart of a projection method, according to an exemplary embodiment;
FIG. 5 is a schematic view of an observer image around a sand table, according to an example embodiment;
FIG. 6 is a schematic diagram of false-observer identification around a sand table, according to an exemplary embodiment;
FIG. 7 is a schematic view of an actual observer image around a sand table, according to an exemplary embodiment;
FIG. 8 is a schematic diagram showing actual observer numbering around a sand table according to an exemplary embodiment;
FIG. 9 is a schematic diagram showing a new projection direction of a sand table according to an example embodiment;
FIG. 10 illustrates a diagram of a new projected area of a sand table, according to an exemplary embodiment;
in fig. 1: 1-data acquisition device, 2-data processing terminal, 3-projecting apparatus.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail below. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort fall within the scope of protection of the present application.
Referring to fig. 1, fig. 1 is a schematic diagram of an artificial intelligence following type projection sand table, as shown in fig. 1, including:
the data acquisition device 1 is used for acquiring the projection area of the sand table, the data information of the observer and the distance information between the observer and the sand table;
the data processing terminal 2 is connected with the data acquisition device 1 and is used for carrying out data processing on the observer data information and the distance information, calculating projection data by using a projection display prediction model and sending the projection data to the projector 3;
and a projector 3 connected to the data processing terminal 2 and used for adjusting the device's projection parameters according to the projection data and projecting an image.
Further, in one embodiment, the data acquisition device 1 comprises a binocular camera, a depth camera and an eye movement sensor arranged around the sand table in the preset data acquisition azimuths, and the data processing terminal 2 is connected with the binocular camera, the depth camera and the eye movement sensor respectively. The binocular camera is used for acquiring picture images within a preset data acquisition azimuth range at a first preset acquisition frequency, identifying contour image information of observers within that range by using a person detection and recognition algorithm, and acquiring the projection area of the sand table; the depth camera is used for acquiring distance information between observers and the sand table within the preset data acquisition azimuth range at the first preset acquisition frequency; the eye movement sensor is used for capturing human eye data of observers within the preset data acquisition azimuth range.
The binocular camera is used for identifying images of observers and their distance from the sand table. By capturing images and applying corresponding algorithms, the binocular camera can perform person recognition, and it can judge distance from the image difference (parallax) that the two cameras present for the same object, on the same principle as human binocular vision: the smaller the parallax, the farther the object; the larger the parallax, the closer the object. The depth camera is used for acquiring distance information between observers and the sand table, in accordance with the parallax of the binocular camera. The eye movement sensor identifies how many people are currently observing the sand table content by capturing human eye data, so as to obtain the actual number of observers; this prevents the data processing terminal 2 from misjudging when people gather on one side of the sand table for some reason but are not actually observing its content. Finally, the three sensors send the collected data to the observer data processing module of the data processing terminal 2 for processing.
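As an illustration of the parallax principle described above, the standard stereo relation Z = f·B/d converts disparity into distance. The function name and camera parameters below are illustrative assumptions, not values from the application:

```python
def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Stereo depth estimate: Z = f * B / d.

    As noted above, a smaller disparity means the object is farther
    away, and a larger disparity means it is closer.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible object")
    return focal_length_px * baseline_m / disparity_px


# Illustrative values: 700 px focal length, 0.12 m camera baseline.
near = depth_from_disparity(700.0, 0.12, 42.0)   # large disparity -> 2.0 m
far = depth_from_disparity(700.0, 0.12, 10.5)    # small disparity -> 8.0 m
```

With a fixed focal length and baseline, the larger disparity (42 px) yields the smaller distance, matching the rule stated above.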
Referring to fig. 3, in the present application the area around the sand table is divided into four preset data acquisition azimuths of equal angular extent, namely a first, second, third and fourth preset data acquisition azimuth, corresponding to the positions A, B, C and D where observers are located, and a data acquisition device 1 is then arranged in each of the four azimuths.
Further, in one embodiment, the data processing terminal 2 specifically includes an observer data processing module, a projection display prediction module, and a projection output module;
the observer data processing module is used for performing person modeling on the observer data information and the distance information between the observer and the sand table to obtain actual observer information, and for determining the preset data acquisition azimuth with the largest number of observers;
the projection display prediction module is used for processing the actual observer information and the projection area of the sand table respectively by using the projection display prediction model to obtain a new projection direction and the new projection area after the projection direction is changed, and for calculating the optimal projection data in combination with the distance information between the observers and the sand table;
the projection output module is used for transmitting projection data to the projector 3.
Referring to fig. 4, the present application provides a projection method of an artificial intelligence follow-up projection sand table, which is applied to the artificial intelligence follow-up projection sand table provided in the first aspect, and the projection method includes:
s1: identifying observer data, and acquiring the projection area of the sand table, the data information of the observer and the distance information between the observer and the sand table by using the data acquisition device 1;
s2: observer data processing, in which the observer data information and the distance information are processed, projection data are calculated by using the projection display prediction model, and the projection data are transmitted to the projector 3;
s3: projection display control, in which the device's projection parameters are adjusted according to the projection data and an image is projected.
Further, in one embodiment, the step S1 specifically includes: firstly, the binocular camera acquires picture images within the preset data acquisition azimuth range at the first preset acquisition frequency, identifies contour image information of observers within that range by using a person detection and recognition algorithm, and simultaneously acquires the projection area of the sand table; then, the depth camera acquires distance information between observers and the sand table within the preset data acquisition azimuth range at the first preset acquisition frequency; finally, the eye movement sensor captures human eye data of observers within the preset data acquisition azimuth range.
Further, in one embodiment, the step S2 specifically includes the following substeps:
s201: performing person modeling on the observer data information and the distance information between the observers and the sand table to obtain actual observer information, and determining the preset data acquisition azimuth with the largest number of observers;
s202: the projection display prediction model is utilized to respectively process the information of an actual observer and the projection area of the sand table, a new projection direction and a new projection area after the projection direction is changed are obtained, and the optimal projection data is calculated by combining the distance information between the observer and the sand table;
s203: the optimum projection data is transmitted to the projector 3.
The purpose of step S201 is to collect the sensor information from the four azimuths; the observer data processing module in the data processing terminal 2 then performs modeling based on this information, as follows:
S2011: firstly, the contour image information and distance information of observers within each preset data acquisition azimuth range, acquired by the binocular camera and the depth camera, are combined to draw all observer images around the sand table, wherein each observer image comprises the identified observer's contour information and position. As shown in fig. 5, the circular points represent the observers around the sand table identified by the binocular camera.
S2012: based on the human eye data of observers within each preset data acquisition azimuth range captured by the eye movement sensor, human eye data analysis is performed that treats observers who are not watching the sand table as false observers. As shown in fig. 6, the rectangular points represent the false observers identified by the eye movement sensor, i.e. persons who are not watching the sand table. As shown in fig. 7, a correction is then made in combination with the data acquired by the eye movement sensor to exclude the false observers and obtain the actual number of observers within each preset data acquisition azimuth range.
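The false-observer exclusion described in this step can be sketched as a simple filter over per-person gaze flags. The `Observer` record and its field names below are assumptions for illustration, not the application's data format:

```python
from dataclasses import dataclass


@dataclass
class Observer:
    azimuth: str            # "East", "South", "West" or "North"
    distance_m: float       # from the depth camera
    gazing_at_table: bool   # from the eye movement sensor


def actual_observers(candidates: list) -> list:
    """Exclude false observers: people detected near the sand table
    whose eye data shows they are not watching it."""
    return [o for o in candidates if o.gazing_at_table]


detected = [
    Observer("East", 1.8, True),
    Observer("East", 2.1, False),   # near the table but not watching it
    Observer("South", 2.5, True),
]
real = actual_observers(detected)   # the non-watching entry is dropped
```

This keeps only observers whose gaze is on the table, avoiding the misjudgment scenario described above where people cluster on one side without watching.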
S2013: any one preset data acquisition azimuth is selected as a starting point, the actual observers within each preset data acquisition azimuth range are numbered in a clockwise or anticlockwise direction, and the actual numbers of observers within each preset data acquisition azimuth range are compared to obtain the preset data acquisition azimuth with the largest number of observers. As shown in fig. 8, the data processing terminal 2 numbers the observers in the clockwise direction starting from the lower left corner of the sand table. A data table is then drawn up by combining the information of each observer, in which the observers' distance information is represented by D1 to D4 and the preset data acquisition azimuths of the sand table are represented by the geographical azimuths East, West, South and North, finally yielding the following data table:
table 1 viewer information table
[Table 1 appears as an image in the original document; per the surrounding text, it lists each numbered observer together with that observer's geographical azimuth (East, West, South or North) and distance (D1 to D4).]
The azimuth with the largest number of people around the sand table is obtained by comparing the number of people in each azimuth.
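The numbering and per-azimuth comparison just described can be sketched as follows. The clockwise order starting from the lower-left corner and the sample distances are assumptions for illustration:

```python
from collections import Counter

# Assumed clockwise order starting from the lower-left corner of the table.
CLOCKWISE_AZIMUTHS = ["South", "West", "North", "East"]


def number_observers_and_find_busiest(observers_by_azimuth: dict):
    """Number the actual observers clockwise, then compare the
    per-azimuth counts to find the azimuth with the most observers."""
    table = []
    number = 0
    for azimuth in CLOCKWISE_AZIMUTHS:
        for distance in observers_by_azimuth.get(azimuth, []):
            number += 1
            table.append({"no": number, "azimuth": azimuth, "distance": distance})
    counts = Counter(row["azimuth"] for row in table)
    busiest = counts.most_common(1)[0][0] if counts else None
    return table, busiest


# Illustrative distances per azimuth, playing the role of D1..D4.
rows, busiest = number_observers_and_find_busiest(
    {"East": [1.8], "South": [2.5, 2.2], "West": [3.0]}
)
# busiest == "South"; rows are numbered 1..4 in clockwise order
```

The returned list mirrors Table 1 (observer number, azimuth, distance), and `busiest` is the azimuth toward which the projection will later be turned.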
S2014: finally, the data from steps S2011 to S2013 are transmitted as observer information to the projection display prediction model for subsequent processing.
Further, in one embodiment, in step S202 the data processing terminal 2 runs the projection display prediction model on the observer information. The purpose of this process is to calculate the projection parameters, including the projection direction, projection size and projection resolution, from the projection display area data and the observer data by means of the projection display prediction model. The process is specifically as follows:
S2021: based on the projection area of the sand table acquired by the binocular camera, the actual projection area data of the projector 3 are calculated by using a projection area algorithm and sent to the projection display prediction model. The projection display prediction model is implemented following patent application CN201910239617.1, which discloses a laser holographic projection sand table manufacturing method. The projection area algorithm uses an existing projector projection-area calculation method: since the actual size of the sand table and the parameters of the projector 3 are known, once the projection area of the sand table has been acquired, the corresponding projection area data can be calculated from the actual size of the sand table and the size of the sand table in the acquired image.
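The application cites an existing projection-area method without detailing it, so here is a hedged sketch of the scale computation the paragraph implies: the table's known physical width and its measured width in the captured frame give an image scale, which converts pixel regions into physical projection area. All numeric values below are illustrative assumptions:

```python
def metres_per_pixel(actual_width_m: float, table_width_px: float) -> float:
    """Image scale: the known physical width of the sand table divided
    by its measured width in the captured frame."""
    return actual_width_m / table_width_px


def projection_area_m2(region_w_px: float, region_h_px: float,
                       scale_m_per_px: float) -> float:
    """Convert a rectangular pixel region of the frame into a physical
    projection area in square metres."""
    return (region_w_px * scale_m_per_px) * (region_h_px * scale_m_per_px)


scale = metres_per_pixel(2.0, 800.0)            # 2 m wide table -> 0.0025 m/px
area = projection_area_m2(800.0, 400.0, scale)  # 2.0 m x 1.0 m = 2.0 m^2
```

Only the two widths need to be measured; everything else follows from the scale, which matches the claim that the known table size plus its size in the image suffice.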
S2022: the projection display prediction model imports the observer information and the projection area data, cleans them according to a preset data cleaning rule, and judges whether each feature value in the data is abnormal, where the abnormalities include missing values, duplicate values and outliers; corresponding exception handling methods are then applied, such as mean imputation or clustering-based multiple-imputation prediction for missing values, de-duplication for repeated values, and removal for outliers.
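A minimal sketch of the cleaning rule described here, applied to a list of depth readings. The range limit and the ordering of the three passes are assumptions for illustration, not the application's preset rule:

```python
def clean_distances(distances: list, max_range_m: float = 10.0) -> list:
    """Apply the three checks named above to a list of depth readings:
    outlier removal, de-duplication, and mean imputation of missing
    (None) values."""
    # 1. Outlier handling: keep only plausible depth readings.
    kept = [d for d in distances if d is None or 0.0 < d <= max_range_m]
    # 2. Duplicate handling: drop repeated values, preserving order.
    deduped = []
    for d in kept:
        if d is None or d not in deduped:
            deduped.append(d)
    # 3. Missing-value handling: mean imputation over the known values.
    known = [d for d in deduped if d is not None]
    mean = sum(known) / len(known) if known else 0.0
    return [mean if d is None else d for d in deduped]


# 50.0 is dropped as an outlier, the repeated 3.0 is de-duplicated,
# and the missing reading is imputed with the mean of 1.0 and 3.0.
cleaned = clean_distances([1.0, None, 3.0, 3.0, 50.0])   # [1.0, 2.0, 3.0]
```

Running outlier removal before imputation keeps the out-of-range reading from distorting the imputed mean, which is one reasonable ordering of the three handling methods named in the text.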
S2023: after the data cleaning is completed, as shown in fig. 9, the projection display prediction model judges a new projection direction according to a preset data acquisition direction with the largest number of people in the observer information. Then, as shown in fig. 10, the projection display prediction model calculates a new projection area based on the new projection direction and the projection area data.
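One simplified way to read the "new projection area" computation: a 90° or 270° change of projection direction swaps the width and height of the projected region, while 0° or 180° leaves them unchanged. This is an assumed simplification for illustration, not the application's algorithm:

```python
def new_projection_region(width_m: float, height_m: float,
                          offset_deg: int) -> tuple:
    """Rotate the projection region by the offset angle: quarter-turn
    offsets swap width and height; half or zero turns keep them."""
    if offset_deg % 180 == 90:
        return height_m, width_m
    return width_m, height_m


quarter = new_projection_region(2.0, 1.0, 90)    # (1.0, 2.0)
half = new_projection_region(2.0, 1.0, 180)      # (2.0, 1.0)
```

Since the four preset azimuths are 90° apart, every direction change produced by this system falls into one of these two cases.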
S2024: the projection display prediction model calculates the optimal projection parameters based on the new projection direction in combination with the actual number of observers and the observer positions in the observer information, wherein the projection parameters comprise the projection offset angle, the projection picture resolution and the projection size.
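The parameter calculation in this step can be sketched as follows. The azimuth-to-angle mapping and the distance-based size scaling are illustrative assumptions, since the application does not disclose the model's formulas:

```python
# Assumed mapping from the four geographical azimuths to projection angles.
AZIMUTH_ANGLE = {"East": 0, "South": 90, "West": 180, "North": 270}


def projection_parameters(current_azimuth: str, target_azimuth: str,
                          observer_distances_m: list,
                          base_size_m: float = 1.0,
                          resolution: tuple = (1920, 1080)) -> dict:
    """Derive the projection offset angle from the change of azimuth,
    and scale the projected size with the mean observer distance."""
    offset = (AZIMUTH_ANGLE[target_azimuth]
              - AZIMUTH_ANGLE[current_azimuth]) % 360
    mean_dist = sum(observer_distances_m) / len(observer_distances_m)
    size = base_size_m * max(1.0, mean_dist / 2.0)  # assumed scaling rule
    return {"offset_deg": offset, "resolution": resolution, "size_m": size}


params = projection_parameters("East", "South", [2.0, 4.0])
# turning from East to South gives a 90-degree offset; the mean
# observer distance of 3.0 m enlarges the projection to 1.5 m
```

The returned dictionary carries exactly the three parameters this step names: the offset angle, the picture resolution and the projection size.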
Further, the projector 3 adjusts its projection according to the projection parameters: first, the projector 3 receives the projection output parameters provided by the data processing terminal 2; then the projector 3 parses the parameters and projects the image based on the parsed parameters.
The method has the advantage of solving the problem that the viewing position of the observer is limited by the projection direction. Under the scene of the sand table projecting the geographic information, the sensor in the sand table is matched with a corresponding analysis algorithm to perform data processing, so that the direction and effect of the projection display of the sand table can be dynamically adjusted based on the position change of an observer.
It is to be understood that the same or similar parts in the above embodiments may be referred to each other, and that in some embodiments, the same or similar parts in other embodiments may be referred to.
It should be noted that in the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present application, unless otherwise indicated, the meaning of "plurality", "multiple" means at least two.
It will be understood that when an element is referred to as being "mounted" or "disposed" on another element, it can be directly on the other element or intervening elements may also be present; when an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may be present, and further, as used herein, connection may comprise a wireless connection; the use of the term "and/or" includes any and all combinations of one or more of the associated listed items.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending upon the functionality involved, as would be understood by those skilled in the art to which the embodiments of the present application pertain.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques, which are well known in the art: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the application; changes, modifications, substitutions, and variations may be made to the above embodiments by those of ordinary skill in the art within the scope of the application.

Claims (9)

1. An artificial intelligence following projection sand table, characterized by comprising:
the data acquisition device is used for acquiring the projection area of the sand table, the data information of the observer and the distance information between the observer and the sand table;
the data processing terminal is connected with the data acquisition device and is used for carrying out data processing on the data information and the distance information of the observer, calculating projection data by using a projection display prediction model and transmitting the projection data to the projector;
and the projector is connected with the data processing terminal and is used for adjusting the projection parameters of the device according to the projection data and projecting an image.
2. The artificial intelligence following projection sand table according to claim 1, wherein the data acquisition device comprises a binocular camera, a depth camera and an eye tracker sensor which are arranged around the sand table in preset data acquisition orientations, and the data processing terminal is respectively connected with the binocular camera, the depth camera and the eye tracker sensor; the binocular camera is used for acquiring picture images within a preset data acquisition orientation range according to a first preset acquisition frequency, identifying contour image information of observers within the preset data acquisition orientation range by using a person detection and recognition algorithm, and acquiring the projection area of the sand table; the depth camera is used for acquiring distance information between the observers and the sand table within the preset data acquisition orientation range according to the first preset acquisition frequency; the eye tracker sensor is used for capturing human eye data of the observers within the preset data acquisition orientation range.
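The three sensors in claim 2 each report per-observer data for one acquisition orientation. The following is a minimal sketch of fusing those streams into per-observer records; the record fields and function names are illustrative assumptions, since the claim specifies only what each sensor collects, not the data layout:

```python
from dataclasses import dataclass

@dataclass
class ObserverRecord:
    contour_id: int    # index of the contour found by the binocular camera
    distance_m: float  # distance to the sand table reported by the depth camera
    gazing: bool       # eye-tracker flag: observer is looking at the sand table

def fuse_sensor_data(contour_ids, distances_m, gaze_flags):
    """Pair up the per-observer outputs of the three sensors for one
    preset data acquisition orientation (hypothetical fusion step)."""
    if not (len(contour_ids) == len(distances_m) == len(gaze_flags)):
        raise ValueError("sensor streams report different observer counts")
    return [ObserverRecord(c, d, g)
            for c, d, g in zip(contour_ids, distances_m, gaze_flags)]
```

A record list like this would be produced once per orientation per acquisition cycle and handed to the data processing terminal.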
3. The artificial intelligence following projection sand table according to claim 1, wherein the data processing terminal specifically comprises an observer data processing module, a projection display prediction module and a projection output module;
the observer data processing module is used for carrying out person modeling processing on the observer data information and the distance information between the observer and the sand table to obtain actual observer information, and for determining the preset data acquisition orientation with the largest number of observers;
the projection display prediction module is used for processing the actual observer information and the projection area of the sand table with the projection display prediction model to obtain a new projection direction and the new projection area after the projection direction is changed, and for calculating the optimal projection data in combination with the distance information between the observers and the sand table;
the projection output module is used for transmitting projection data to the projector.
4. The artificial intelligence following projection sand table according to claim 2, wherein the preset data acquisition orientations comprise four preset data acquisition orientations spaced at equal azimuth angles, namely a first preset data acquisition orientation, a second preset data acquisition orientation, a third preset data acquisition orientation and a fourth preset data acquisition orientation.
5. A projection method of an artificial intelligence following type projection sand table, applied to the artificial intelligence following type projection sand table according to any one of claims 1 to 4, characterized in that the projection method comprises the following steps:
s1: identifying observer data, and acquiring a projection area of the sand table, data information of the observer and distance information between the observer and the sand table by using a data acquisition device;
s2: the observer data processing is carried out, the observer data information and the distance information are processed, projection data are calculated by using a projection display prediction model, and the projection data are transmitted to a projector;
s3: projection display control, in which projection parameters of the projection data adjusting device are adjusted to project an image.
6. The method for projecting an artificial intelligence following type projection sand table according to claim 5, wherein step S1 specifically comprises: firstly, the binocular camera acquires picture images within a preset data acquisition orientation range according to a first preset acquisition frequency, identifies contour image information of observers within the preset data acquisition orientation range by using a person detection and recognition algorithm, and simultaneously acquires the projection area of the sand table; then, the depth camera acquires distance information between the observers and the sand table within the preset data acquisition orientation range according to the first preset acquisition frequency; finally, the eye tracker sensor captures human eye data of the observers within the preset data acquisition orientation range.
7. The method for projecting an artificial intelligence following type projection sand table according to claim 5, wherein step S2 specifically comprises the following substeps:
s201: carrying out character modeling processing on the data information of the observer and the distance information between the data information and the sand table to obtain actual observer information, and analyzing a preset data acquisition azimuth with the maximum number of observers;
s202: the projection display prediction model is utilized to respectively process the information of an actual observer and the projection area of the sand table, a new projection direction and a new projection area after the projection direction is changed are obtained, and the optimal projection data is calculated by combining the distance information between the observer and the sand table;
s203: the optimal projection data is transmitted to the projector.
8. The method for projecting an artificial intelligence following type projection sand table according to claim 7, wherein substep S201 specifically comprises:
s2011: firstly, combining contour image information and distance information of observers in each preset data acquisition azimuth range, and drawing all observer images around a sand table, wherein each observer image comprises the contour information of the identified observer and the position of the identified observer;
s2022: based on human eye data of observers in each preset data acquisition azimuth range, carrying out human eye data analysis by taking the observers which do not watch the sand table as false observers to obtain actual observer numbers in each preset data acquisition azimuth range;
s2023: selecting any one preset data acquisition azimuth as a starting point, numbering actual observers in each preset data acquisition azimuth range according to a clockwise direction or a anticlockwise direction, and comparing the actual number of the observers in each preset data acquisition azimuth range to obtain the preset data acquisition azimuth with the maximum number of the observers;
s2024: the data in steps S2021 to S2023 are transmitted as observer information to the projection display prediction model.
9. The method for projecting an artificial intelligence following type projection sand table according to claim 7, wherein substep S202 specifically comprises:
s2021: calculating actual projection area data of the projector by utilizing a projection area algorithm based on the projection area of the sand table, and transmitting the actual projection area data to a projection display prediction model;
s2022: the projection display prediction model imports observer information and projection area data, and carries out data cleaning on the observer information and the projection area data according to a preset data cleaning rule;
s2023: the projection display prediction model judges a new projection direction according to a preset data acquisition direction with the largest number of people in the observer information, and calculates a new projection area based on the new projection direction and projection area data;
s2024: the projection display prediction model calculates optimal projection parameters based on the new projection direction and combined with the actual number of observers and the positions of the observers in the observer information, wherein the projection parameters comprise the offset angle, the resolution of a projection picture and the projection size of the projection.
CN202211687548.9A 2022-12-27 2022-12-27 Artificial intelligent following type projection sand table and projection method thereof Pending CN116016878A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211687548.9A CN116016878A (en) 2022-12-27 2022-12-27 Artificial intelligent following type projection sand table and projection method thereof


Publications (1)

Publication Number Publication Date
CN116016878A true CN116016878A (en) 2023-04-25

Family

ID=86031194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211687548.9A Pending CN116016878A (en) 2022-12-27 2022-12-27 Artificial intelligent following type projection sand table and projection method thereof

Country Status (1)

Country Link
CN (1) CN116016878A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117078885A (en) * 2023-09-07 2023-11-17 广州市创佳建筑模型有限公司 Building model spliced sand table coordinate and posture guiding system based on feature recognition



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination