CN106970709B - 3D interaction method and device based on holographic imaging - Google Patents

3D interaction method and device based on holographic imaging

Info

Publication number
CN106970709B
CN106970709B (application CN201710204357.5A)
Authority
CN
China
Prior art keywords
preset
image
image information
detected
sensing equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710204357.5A
Other languages
Chinese (zh)
Other versions
CN106970709A (en
Inventor
张春光
顾开宇
李应樵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Wanwei Display Technology Co Ltd
Original Assignee
Ningbo Wanwei Display Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Wanwei Display Technology Co Ltd filed Critical Ningbo Wanwei Display Technology Co Ltd
Priority to CN201710204357.5A priority Critical patent/CN106970709B/en
Publication of CN106970709A publication Critical patent/CN106970709A/en
Application granted granted Critical
Publication of CN106970709B publication Critical patent/CN106970709B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Holography (AREA)

Abstract

The invention provides a 3D interaction method and device based on holographic imaging, belonging to the technical field of holographic interaction. The method comprises the following steps: acquiring image information to be detected in a preset area; when the image information to be detected meets a preset standard, turning on a holographic imaging projector; projecting a pre-stored image file to a preset destination position; when the projection of the image file is detected at the destination position, turning on the motion sensing device; acquiring a behavior gesture of a user based on the motion sensing device; and controlling the projection to complete the action corresponding to the behavior gesture. The application determines whether a user is in the preset area by acquiring image information in that area and matching it against human body image information, thereby confirming whether an object appearing in the preset area is a person. When a person appears, the holographic imaging projector is turned on, so that the user feels as though suddenly entering a 3D scene, bringing an unexpected visual impact.

Description

3D interaction method and device based on holographic imaging
Technical Field
The invention relates to the technical field of holographic interaction, in particular to a 3D interaction method and device based on holographic imaging.
Background
With the continuous development of science and technology, more and more devices support 3D interaction. However, the conventional practice is that the user actively turns on the device to interact or to enter virtual reality. Because the interaction is initiated by the user, the user never enters virtual reality or other 3D interactions unprepared, which greatly reduces the element of surprise and prevents a novel, exciting interactive experience. How to solve this problem is therefore an urgent technical issue.
Disclosure of Invention
The invention provides a 3D interaction method and device based on holographic imaging, and aims to solve the problems.
The invention provides a 3D interaction method based on holographic imaging, which is applied to a 3D interaction system, wherein the 3D interaction system comprises a holographic imaging projector and a motion sensing device, and the method comprises the following steps: acquiring image information to be detected in a preset area; when the image information to be detected meets a preset standard, turning on the holographic imaging projector; projecting a pre-stored image file to a preset destination position; when it is detected that the destination position has the projection of the image file, turning on the motion sensing device; acquiring a behavior gesture of a user based on the motion sensing device; and controlling the projection to complete the action corresponding to the behavior gesture.
Preferably, the step of acquiring the image information to be detected in the preset region includes: and acquiring image information of the moving object appearing in the preset area, and taking the acquired image information of the moving object as the image information to be detected.
Preferably, the step of judging that the image information to be detected meets the preset standard includes: dividing an image in the image information to be detected into a plurality of regions; acquiring the salient regions among the plurality of regions, wherein a salient region represents a region with visual saliency in the image; calculating the confidence of each salient region, wherein the confidence represents the probability that the image in the corresponding salient region is a face image; and comparing each confidence with a preset threshold, and judging that the image information matches preset human body image information when any one of the confidences is equal to or greater than the preset threshold.

Preferably, the destination position is provided with a photoelectric sensor, and the step of turning on the motion sensing device when the projection of the image file is detected at the destination position includes: receiving detection information returned by the photoelectric sensor; and turning on the motion sensing device when the detection information meets a second preset rule.
Preferably, the motion sensing device is provided with an infrared sensor and a positioning device, and the step of turning on the motion sensing device includes: acquiring the coordinate values of the positioning device; matching the coordinate values with a preset coordinate interval corresponding to the preset area, and controlling the infrared sensor to perform a check when the coordinate values fall within the preset coordinate interval; receiving a detection value returned by the infrared sensor; and when the detection value meets a third preset rule, judging that the motion sensing device is carried by a user in the preset area, and turning on the motion sensing device.
The invention provides a 3D interaction device based on holographic imaging, which is applied to a 3D interaction system, wherein the 3D interaction system comprises a holographic imaging projector and a motion sensing device, and the device comprises: an image obtaining module, used for obtaining the image information to be detected in a preset area; an image judging module, used for turning on the holographic imaging projector when the image information to be detected meets a preset standard; a projection module, used for projecting a pre-stored image file to a preset destination position; a projection processing module, used for turning on the motion sensing device when the projection of the image file is detected at the destination position; a user gesture obtaining module, used for obtaining a behavior gesture of a user based on the motion sensing device; and an execution module, used for controlling the projection to complete the action corresponding to the behavior gesture.
Preferably, the image acquisition module is specifically configured to: and acquiring image information of the moving object appearing in the preset area, and taking the acquired image information of the moving object as the image information to be detected.
Preferably, the motion sensing device is provided with an infrared sensor, and the image judgment module is specifically configured to: dividing an image in the image information to be detected into a plurality of areas; acquiring a salient region in the plurality of regions, wherein the salient region is used for representing a region with visual saliency in the image; calculating the confidence of each salient region, wherein the confidence is used for representing the probability that the image in the corresponding salient region is a face image; and comparing each confidence coefficient with a preset threshold value respectively, and judging that the image information is matched with preset human body image information when any one confidence coefficient in the confidence coefficients is equal to or greater than the preset threshold value.
Preferably, the destination location is provided with a photoelectric sensor, and the projection processing module is further configured to: receiving detection information returned by the photoelectric sensor; and when the detection information meets a second preset rule, the motion sensing equipment is started.
Preferably, the motion sensing device is provided with an infrared sensor and a positioning device, and the projection processing module is further used for: acquiring the coordinate values of the positioning device; matching the coordinate values with a preset coordinate interval corresponding to the preset area, and controlling the infrared sensor to perform a check when the coordinate values fall within the preset coordinate interval; receiving a detection value returned by the infrared sensor; and when the detection value meets a third preset rule, judging that the motion sensing device is carried by a user in the preset area, and turning on the motion sensing device.
According to the 3D interaction method and device based on holographic imaging, image information to be detected is acquired in the preset area to determine whether a user is present, and the image information is matched against the preset standard to determine whether the image information collected in the preset area meets that standard. When the standard is met, the holographic imaging projector is turned on and the image file pre-stored in the holographic imaging projector is played, so that the user feels as though suddenly entering a 3D scene, which brings an unexpected visual impact. When the projection is detected at the destination position, the motion sensing device is turned on, so that the user can interact in 3D with the image projected by the holographic imaging projector. The user thus enters the 3D interaction without initiating it, which brings a brand-new interactive experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a functional structure diagram of a 3D interaction system according to an embodiment of the present invention;
fig. 2 is a schematic functional structure diagram of the connection between the holographic imaging projector and the motion sensing device in the 3D interactive system shown in fig. 1;
fig. 3 is a flowchart of a 3D interaction method based on holographic imaging according to a first embodiment of the present invention;
fig. 4 is a block diagram of a 3D interaction device based on holographic imaging according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention; it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention as claimed, but is merely representative of selected embodiments of the invention.
Referring to fig. 1 and 2, the 3D interactive system 100 includes a holographic imaging projector 110 and a motion sensing device 120. The holographic imaging projector 110 is in communication connection with the motion sensing device 120 through a network to perform data communication or interaction. The holographic imaging projector 110 enables a user to interact in 3D with the imagery it projects, based on the behavior gestures of the user captured by the motion sensing device 120.
In this embodiment, the holographic imaging projector 110 includes a projector 112, a controller 111, a holographic projection film 114, and an image capture device 113. The image capture device 113, the holographic projection film 114, and the projector 112 are coupled to the controller 111. The controller 111 is coupled to the motion sensing device 120.
In this embodiment, the controller 111 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. It is not particularly limited here.
The image acquisition device 113 may be a camera or a thermal infrared imager; the type is not particularly limited here. For example, the camera may be an OV7725 model, and the thermal infrared imager may be a FLIR T390 model.
In this embodiment, a photoelectric sensor 1141 is disposed on the holographic projection film 114. The photoelectric sensor 1141 is coupled to the controller 111. The holographic projection film 114 is used for receiving the image projected by the projector 112.
In this embodiment, the motion sensing device 120 is provided with an infrared sensor 121 and a positioning device 122. The infrared sensor 121 and the positioning device 122 are both coupled to the controller 111.
In this embodiment, the motion sensing device 120 may be a space mouse, a touch glove, or a motion sensing kit such as a Kinect; it is not particularly limited here. For example, the model of the space mouse may be MxAir.
Referring to fig. 3, a flowchart of a 3D interaction method based on holographic imaging according to a first embodiment of the present invention is shown. The specific flow shown in fig. 3 will be described in detail below.
Step S301, acquiring image information to be detected in a preset area.
In this embodiment, the holographic imaging projector is provided with an image acquisition device, which may be a camera or a thermal infrared imager; the type is not particularly limited here. For example, the camera may be an OV7725 model, and the thermal infrared imager may be a FLIR T390 model.
The preset area refers to an area where an image can be acquired by an image acquisition device arranged on the holographic imaging projector. The specific selection mode of the preset area can be selected according to the installation position of the holographic imaging projector and the collection view angle of the image collection device in practice.
The image information to be detected refers to an image in the preset area.
As an embodiment, image information of a moving object appearing in the preset area may be acquired by an image acquisition device, and the acquired image information of the moving object may be used as the image information to be detected. The moving object refers to an object entering the preset area.
As another embodiment, the image information of the moving object appearing in the preset area may be collected by a thermal infrared imager, and the collected image information of the moving object may be used as the image information to be detected. Namely, scanning and acquiring an image of an object appearing in a preset area in an infrared imaging mode of a thermal infrared imager.
Step S302, when the image information to be detected meets a preset standard, the holographic imaging projector is started.
The preset standard means that the collected image information to be detected is matched with the pre-stored human body image information.
As an embodiment, the image in the image information to be detected is divided into a plurality of regions, and the salient regions among them are acquired, a salient region being a region with visual saliency in the image. Dividing the image into a plurality of regions provides many sample regions when acquiring the salient regions, so that the area examined per detection is small. For example, if the image were not divided into regions, the entire image would be treated as one region, which greatly increases the complexity of detecting an object within it. The confidence of each salient region is then calculated, where the confidence represents the probability that the image in the corresponding salient region is a face image. Each confidence is compared with a preset threshold, and when any one of the confidences is equal to or greater than the preset threshold, the image information is judged to match the preset human body image information. Comparing confidences against a preset threshold is a simple and convenient way to find the confidences that satisfy the rule, which improves processing efficiency.
As a preferred embodiment, the confidences are first compared with each other to obtain the target confidence, i.e., the largest value among the plurality of confidences; only the target confidence is then compared with the preset threshold. Taking the maximum confidence and comparing just that one value against the threshold effectively reduces processing complexity and shortens the time needed to locate the salient region corresponding to the target confidence. When the target confidence is greater than the preset threshold, the image information is judged to match the preset human body image information, and the holographic imaging projector is turned on.
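The following is a minimal sketch, in Python, of this preferred matching step: take the largest confidence among the salient regions and compare only it against the preset threshold. The region splitting and the confidence model are assumed to exist elsewhere; all names are illustrative, and the 0.95 threshold is the example value given later in this embodiment.

```python
from typing import List

PRESET_THRESHOLD = 0.95  # illustrative; matches the 95% example below

def matches_human_image(confidences: List[float],
                        threshold: float = PRESET_THRESHOLD) -> bool:
    """Judge whether the image matches preset human body image information."""
    if not confidences:
        return False
    # Compare only the largest confidence, so one threshold test suffices.
    target_confidence = max(confidences)
    return target_confidence >= threshold

# Example: three salient regions with face-probability confidences.
print(matches_human_image([0.12, 0.97, 0.40]))  # True -> turn the projector on
```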
In this embodiment, each candidate salient region in the image, which is a rectangular region having visual saliency, may be acquired by an image saliency algorithm. Specifically, each candidate salient region is found based mainly on feature extraction and saliency map generation. Feature extraction means that the input image is represented by a Gaussian pyramid. For example, when the Gaussian pyramid has 9 layers, layer 0 is the input image, layers 1 to 8 are formed by filtering the input image with a 5 × 5 Gaussian filter and downsampling, and their sizes are 1/2 to 1/256 of the input image. Then various features are extracted from each layer of the pyramid: luminance I, red R, green G, blue B, yellow Y, and direction, forming a luminance pyramid, a chrominance pyramid, and a direction pyramid. The luminance and color features are obtained by the following equations:
I=(r+g+b)/3;
R=r-(g+b)/2;
G=g-(r+b)/2;
B=b-(r+g)/2;
Y=(r+g)/2-|r-g|/2-b.
where r, g, b are the red, green, blue components of the input image, respectively. In this embodiment, a negative value is set to 0.
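A short sketch of this feature extraction, assuming an H × W × 3 RGB image with float components; it follows the four formulas above and the reconstructed yellow channel, clamping negative values to 0 as stated. The function name is illustrative.

```python
import numpy as np

def color_features(img: np.ndarray):
    """Split an H x W x 3 RGB float image into the I, R, G, B, Y channels."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    I = (r + g + b) / 3.0                                  # luminance
    R = np.clip(r - (g + b) / 2.0, 0.0, None)              # red
    G = np.clip(g - (r + b) / 2.0, 0.0, None)              # green
    B = np.clip(b - (r + g) / 2.0, 0.0, None)              # blue
    Y = np.clip((r + g) / 2.0 - np.abs(r - g) / 2.0 - b,   # yellow
                0.0, None)
    return I, R, G, B, Y

# Example: each returned channel has shape (64, 64).
I, R, G, B, Y = color_features(np.random.rand(64, 64, 3))
```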
O(σ, θ) is a Gabor pyramid obtained by filtering the luminance feature I with Gabor functions over scale and orientation, where σ ∈ {0, 1, 2, …, 8} and θ ∈ {0°, 45°, 90°, 135°}. The features are thus expressed as 9 pyramids: 1 for luminance, 4 for chrominance (red, blue, green, yellow), and 4 for direction (0°, 45°, 90°, 135°). The four chrominance features above give zero response to black and white and the greatest response to their respective saturated monochromatic colors (red, blue, green, yellow). The various features are then differenced across scales of the feature pyramids through a center-surround receptive field. The center of the receptive field corresponds to a pixel of the feature map at scale c (c ∈ {2, 3, 4}), and the surround corresponds to a pixel of the feature map at scale s (s = c + δ, δ ∈ {3, 4}). Since feature maps at different scales have different resolutions, the two maps are first brought to the same size by interpolation and then differenced point by point; this operation is denoted Θ. It yields a comparison between features at the center (scale c) and the surround (scale s).
Wherein I(c, s) = |I(c) Θ I(s)|; RG(c, s) = |(R(c) − G(c)) Θ (G(s) − R(s))|; BY(c, s) = |(B(c) − Y(c)) Θ (Y(s) − B(s))|; O(c, s, θ) = |O(c, θ) Θ O(s, θ)|.
Here I(c, s) is a luminance feature map, specifically a luminance contrast. RG(c, s) and BY(c, s) are color feature maps, modeling the color double-opponency of the visual cortex. RG(c, s) is the red/green feature map: the difference between the central red-green contrast and the surrounding green-red contrast, expressing red/green and green/red double-opponent responses. BY(c, s) is the blue/yellow feature map: the difference between the central blue-yellow contrast and the surrounding yellow-blue contrast, expressing blue/yellow and yellow/blue double-opponent responses. O(c, s, θ) is a direction feature map, in which features of the same orientation θ are differenced across scales, comparing local orientation features of the center and the surround. Because there are 6 combinations of the central scale c and the surrounding scale s (2-5, 2-6, 3-6, 3-7, 4-7, 4-8), the four equations yield 42 feature maps at different scales: 6 luminance feature maps, 12 color feature maps, and 24 direction feature maps.
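A sketch of the center-surround operation Θ under the assumption that the interpolation is plain bilinear resizing: the coarser surround map (scale s) is upsampled to the size of the center map (scale c), and the point-by-point absolute difference is taken, as in I(c, s) = |I(c) Θ I(s)|. Pyramid construction is assumed to happen elsewhere.

```python
import numpy as np
from scipy.ndimage import zoom

def center_surround(center: np.ndarray, surround: np.ndarray) -> np.ndarray:
    """|center Θ surround|: upsample the surround map, then difference."""
    factors = (center.shape[0] / surround.shape[0],
               center.shape[1] / surround.shape[1])
    surround_up = zoom(surround, factors, order=1)  # bilinear interpolation
    return np.abs(center - surround_up)

# Example for one of the six (c, s) combinations, c = 2 and s = 5:
# a 64 x 64 center map against an 8 x 8 surround map.
fmap = center_surround(np.random.rand(64, 64), np.random.rand(8, 8))
```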
The saliency map is generated by normalizing the saliency value of each pixel in every feature map to an interval [0, M] with a normalization function N(·), combining the normalized maps into a conspicuity map for each feature, and then normalizing and fusing the conspicuity maps of the different features into the final visual saliency map; the salient regions are obtained from this saliency map. Normalizing the saliency values of every map to the same interval [0, M] effectively eliminates the influence of the different value ranges of the different features. The global maximum M of the map is then found, the average m̄ of all its other local maxima is computed, and finally every position in the feature map is multiplied by (M − m̄)². This amplifies the positions of potentially salient regions in each feature map, so that their saliency values stand out more against the background. Specifically, this can be expressed by the following formulas:

Ī = ⊕_{c,s} N(I(c, s));

C̄ = ⊕_{c,s} [N(RG(c, s)) + N(BY(c, s))];

Ō = Σ_θ N(⊕_{c,s} N(O(c, s, θ)));

S = (N(Ī) + N(C̄) + N(Ō)) / 3,

where Ī denotes the combined luminance feature map, C̄ the combined color feature map, Ō the combined direction feature map, ⊕ denotes cross-scale addition (resizing the maps to a common scale and summing them point by point), and S denotes the visual saliency map.
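A sketch of the normalization operator N(·), assuming M = 1 and a 3 × 3 maximum filter for finding local maxima (both illustrative choices): the map is rescaled to [0, M], the average m̄ of the local maxima other than the global maximum is computed, and the whole map is multiplied by (M − m̄)².

```python
import numpy as np
from scipy.ndimage import maximum_filter

def normalize_map(fmap: np.ndarray, M: float = 1.0) -> np.ndarray:
    """N(.): rescale to [0, M], then amplify by (M - m_bar) squared."""
    lo, hi = float(fmap.min()), float(fmap.max())
    if hi - lo < 1e-12:
        return np.zeros_like(fmap)
    fmap = (fmap - lo) / (hi - lo) * M
    # Local maxima: positions equal to the maximum of their 3 x 3 window.
    peaks = fmap[fmap == maximum_filter(fmap, size=3)]
    others = peaks[peaks < M]          # exclude the global maximum M
    m_bar = float(others.mean()) if others.size else 0.0
    return fmap * (M - m_bar) ** 2     # maps with few strong peaks win out

normalized = normalize_map(np.random.rand(64, 64))
```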
Finally, the candidate salient regions whose aspect ratios fall within a preset aspect ratio interval are marked as the salient regions. The preset aspect ratio interval may be [0.2, 5].
Salient regions may be detected either bottom-up or top-down. For example, a saliency detection model based on frequency-domain analysis or a detection model based on depth information is a bottom-up detection method. The method is not particularly limited here.
Wherein the confidence may be obtained by a neural network algorithm. Specifically, such a network is generally organized into three layers. The first layer is the input layer, which receives externally input information and passes it to the next layer. The second layer is a hidden layer, which receives the signal from the input layer and feeds it, through a preset transfer function, to the third layer, the output layer, which directly produces the final result. The transfer function is:
f(x) = 1/(1 + e^(−x)).

For the above network structure, the functional expressions are:

H_l = Σ_{i=1..nin} W_hli · x_i, Hid_l = f(H_l);

O_j = Σ_{l=1..nhid} W_ojl · Hid_l + W_obj;

Y_j = f(O_j),
where nin and nhid are the numbers of nodes in the input layer and the hidden layer, W_hli is the weight connecting the l-th hidden node and the i-th input node, W_ojl is the weight connecting the j-th output node and the l-th hidden node, W_obj is the threshold of the j-th output node, H_l and Hid_l are the input and output values of the l-th hidden node, and O_j and Y_j are the input and output values of the j-th output node, respectively. In this embodiment, the target confidence refers to the confidence with the largest value among all confidences.
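A sketch of the three-layer network's forward pass using the reconstructed sigmoid transfer function. The random weights are placeholders; in the described system they would be trained so that the output is the face-probability confidence of a salient region.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # the transfer function f

def forward(x, W_h, W_o, W_ob):
    H = W_h @ x                 # H_l: input values of the hidden nodes
    Hid = sigmoid(H)            # Hid_l: outputs of the hidden nodes
    O = W_o @ Hid + W_ob        # O_j: input values of the output nodes
    return sigmoid(O)           # Y_j: final outputs (here, a confidence)

nin, nhid, nout = 16, 8, 1
rng = np.random.default_rng(0)
confidence = forward(rng.random(nin),           # features of a salient region
                     rng.random((nhid, nin)),   # W_hli
                     rng.random((nout, nhid)),  # W_ojl
                     rng.random(nout))          # W_obj
```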
In this embodiment, the preset threshold refers to a preset comparison threshold. For example, the preset threshold may be 95%.
Step S303, projecting the pre-stored image file to a preset destination position.
Wherein the image file refers to a file to be projected by the holographic imaging projector.
In this embodiment, the image file may be stored locally, or the image file stored in the cloud or the server may be acquired in real time through a network.
Wherein the preset destination position refers to a fixed position set for the projection of the holographic imaging projector; the user can see the holographic image only when the holographic imaging projector projects the image to this preset destination position. For example, when a holographic projection film is arranged in advance at the destination position, the holographic imaging projector can form a 3D image only when the image is projected onto the holographic projection film.
Step S304, when it is detected that the destination position has the projection of the image file, the motion sensing device is turned on.
The motion sensing device is a device that receives the user's motions or voice information through sensors to complete the interaction.
As an embodiment, the motion sensing device is provided with an infrared sensor. The infrared sensor may be a Honeywell infrared sensor with model number SE2470, an SD2440 infrared sensor, or an SE1450 infrared sensor; the model is not particularly limited here.
A detection value returned by the infrared sensor is received, and when the detection value meets a first preset rule, it is judged that the motion sensing device is carried by a user in the preset area, and the motion sensing device is turned on. The detection value may be the user's body temperature detected by the infrared sensor. The first preset rule is whether the detection value is zero: when the detection value is non-zero, the first preset rule is satisfied, meaning the motion sensing device is being carried by a user in the preset area. For example, when a user holds the motion sensing device, the infrared sensor on the device picks up the body temperature through the user's hand.
As another embodiment, the motion sensing device is further provided with a positioning device and the infrared sensor. The positioning device may be a GPS positioning chip. For example, the GPS positioning chip may be SiRFstar III (GSW 3.0/3.1).
The coordinate values of the positioning device are obtained; the coordinate values are matched with a preset coordinate interval corresponding to the preset area, and the infrared sensor is controlled to perform a check when the coordinate values fall within the preset coordinate interval; a detection value returned by the infrared sensor is received; and when the detection value meets a third preset rule, it is judged that the motion sensing device is carried by a user in the preset area, and the motion sensing device is turned on. The third preset rule is whether the detection value is zero: when the detection value is non-zero, the third preset rule is satisfied, meaning the motion sensing device is being carried by a user in the preset area.
That is, the coordinate values of the positioning device give the position of the motion sensing device, and matching them against the preset coordinate interval (the coordinate range of the preset area) determines whether the motion sensing device is inside the preset area. By judging the coordinates of the motion sensing device first, the infrared sensor does not need to detect in real time: it performs a check only when the motion sensing device is inside the preset area, which effectively reduces the infrared sensor's operating time and saves processing resources.
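A minimal sketch of this gating logic, with hypothetical stand-ins for the positioning-device and infrared-sensor reads:

```python
from typing import Callable, Tuple

Interval = Tuple[float, float]

def should_turn_on(read_coords: Callable[[], Tuple[float, float]],
                   read_infrared: Callable[[], float],
                   x_range: Interval, y_range: Interval) -> bool:
    x, y = read_coords()
    inside = x_range[0] <= x <= x_range[1] and y_range[0] <= y <= y_range[1]
    if not inside:
        return False            # device outside the area: skip the IR check
    detection = read_infrared()
    # Third preset rule: a non-zero value means a user carries the device.
    return detection != 0.0

# Example with stubbed sensors: device inside the area, body heat detected.
print(should_turn_on(lambda: (3.0, 4.0), lambda: 36.5,
                     (0.0, 10.0), (0.0, 10.0)))  # True
```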
As another embodiment, the destination position is provided with a photoelectric sensor; the photoelectric sensor may be an Omron photoelectric sensor with model number E3JK-5M3. Detection information returned by the photoelectric sensor is received, and when the detection information meets a second preset rule, the motion sensing device is turned on. The detection information refers to the photoelectric signal value acquired by the photoelectric sensor, that is, the optical signal converted into an electrical signal: the photoelectric sensor collects the optical signal projected onto the destination position, so as to determine whether an image is projected there. The second preset rule is whether the photoelectric signal value in the detection information matches a preset electrical signal value; when it matches, the motion sensing device is turned on.
Step S305, acquiring the behavior gesture of the user based on the motion sensing device.
The behavior gesture refers to an action of the user, for example waving an arm or jumping; it is not particularly limited here. Specifically, the behavior gesture of the user is obtained through the sensors on the motion sensing device.
Step S306, controlling the projection to complete the action corresponding to the behavior gesture.
After the holographic imaging projector receives the behavior gesture returned by the motion sensing device, the behavior gesture interacts with the image projected at the destination position in real time; that is, the image follows the user's behavior gesture. For example, when the user makes a throwing motion, a preset action related to the throw appears in the image.
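To tie steps S301 to S306 together, here is a condensed sketch of the whole flow as one control function. Every device interaction is passed in as a callable, since the actual camera, projector, and motion sensing interfaces are not specified in this document.

```python
def run_interaction(capture_image, meets_standard, turn_on_projector,
                    project_file, projection_detected, turn_on_sensing,
                    read_gesture, apply_gesture, max_gestures=100):
    image = capture_image()                # S301: image in the preset area
    if not meets_standard(image):          # S302: preset standard check
        return
    turn_on_projector()
    project_file()                         # S303: project the image file
    if projection_detected():              # S304: photoelectric-sensor check
        turn_on_sensing()
        for _ in range(max_gestures):      # S305/S306: interaction loop
            apply_gesture(read_gesture())

# Example run with trivial stand-ins for every device call.
run_interaction(lambda: "frame", lambda img: True, lambda: None,
                lambda: None, lambda: True, lambda: None,
                lambda: "wave", print, max_gestures=2)
```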
Please refer to fig. 4, which is a schematic diagram of functional modules of a 3D interaction device based on holographic imaging according to a second embodiment of the present invention. The 3D interactive apparatus 400 based on holographic imaging includes an image obtaining module 410, an image determining module 420, a projection module 430, a projection processing module 440, a user gesture obtaining module 450, and an executing module 460.
The image obtaining module 410 is configured to obtain image information to be detected in a preset area.
The image obtaining module 410 is specifically configured to: and acquiring image information of the moving object appearing in the preset area, and taking the acquired image information of the moving object as the image information to be detected.
An image determining module 420, configured to turn on the holographic imaging projector when the image information to be detected meets a preset standard, the preset standard being that the image information matches human body image information.
The motion sensing device is provided with an infrared sensor, and the image judgment module 420 is specifically configured to: dividing an image in the image information to be detected into a plurality of areas; acquiring a salient region in the plurality of regions, wherein the salient region is used for representing a region with visual saliency in the image; calculating the confidence of each salient region, wherein the confidence is used for representing the probability that the image in the corresponding salient region is a face image; and comparing each confidence coefficient with a preset threshold value respectively, and judging that the image information is matched with preset human body image information when any one confidence coefficient in the confidence coefficients is equal to or greater than the preset threshold value.
And a projection module 430 for projecting the pre-stored image file to a preset destination location.
And the projection processing module 440 is configured to, when it is detected that the destination location has the projection of the image file, turn on the motion sensing device.
In this embodiment, the motion sensing device is provided with an infrared sensor, which may be a Honeywell infrared sensor with model number SE2470, an SD2440 infrared sensor, or an SE1450 infrared sensor; the model is not particularly limited here.
The projection processing module 440 is specifically configured to: when the projection of the image file at the target position is detected, receiving a detection value returned by the infrared sensor; and when the detection value meets a first preset rule, judging that the body sensing equipment is carried by a user in the preset area, and starting the body sensing equipment.
In this embodiment, the destination position is provided with a photoelectric sensor. For example, the photoelectric sensor may be an Omron photoelectric sensor with model number E3JK-5M3.
As an embodiment, the projection processing module 440 is further configured to: receiving detection information returned by the photoelectric sensor; and when the detection information meets a second preset rule, the motion sensing equipment is started.
In this embodiment, the motion sensing device is provided with an infrared sensor and a positioning device. The infrared sensor may be a Honeywell infrared sensor with model number SE2470, an SD2440 infrared sensor, or an SE1450 infrared sensor; the model is not particularly limited here. The positioning device may be a GPS positioning chip, for example a SiRFstar III (GSW 3.0/3.1).
As another embodiment, the projection processing module 440 is further configured to: acquire the coordinate values of the positioning device; match the coordinate values with a preset coordinate interval corresponding to the preset area, and control the infrared sensor to perform a check when the coordinate values fall within the preset coordinate interval; receive a detection value returned by the infrared sensor; and when the detection value meets a third preset rule, judge that the motion sensing device is carried by a user in the preset area, and turn on the motion sensing device.
A user gesture obtaining module 450, configured to obtain a behavior gesture of the user based on the motion sensing device.
And an executing module 460, configured to control the projection to complete an action corresponding to the behavior gesture.
In summary, the present invention provides a 3D interaction method and device based on holographic imaging. Whether a user is in the preset area is determined by acquiring image information to be detected in that area and matching it against a preset standard; when the image information collected in the preset area meets the standard, the holographic imaging projector is turned on and the image file pre-stored in it is played, so that the user feels as though suddenly entering a 3D scene, which brings an unexpected visual impact. When the projection is detected at the destination position, the motion sensing device is turned on, so that the user can interact in 3D with the image projected by the holographic imaging projector. The user thus enters the 3D interaction without initiating it, which brings a brand-new interactive experience.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.

Claims (8)

1. A 3D interaction method based on holographic imaging, applied to a 3D interaction system, the 3D interaction system comprising a holographic imaging projector and a motion sensing device, the method comprising the following steps:
acquiring to-be-detected image information in a preset area;
when the image information to be detected meets a preset standard, starting the holographic imaging projector;
projecting a pre-stored image file to a preset target position;
when it is detected that the target position has the projection of the image file, turning on the motion sensing device, wherein the motion sensing device is provided with an infrared sensor and a positioning device, and the step of turning on the motion sensing device includes:
acquiring coordinate values of the positioning device;
matching the coordinate values with a preset coordinate interval corresponding to the preset area, and controlling the infrared sensor to perform a check when the coordinate values fall within the preset coordinate interval;
receiving a detection value returned by the infrared sensor;
when the detection value meets a third preset rule, judging that the motion sensing device is carried by a user in the preset area, and turning on the motion sensing device;
acquiring a behavior gesture of a user based on the motion sensing equipment;
and controlling the projection to finish the action corresponding to the behavior gesture.
2. The method according to claim 1, wherein the step of obtaining the image information to be detected in the preset area comprises:
and acquiring image information of the moving object in the preset area, and taking the acquired image information of the moving object as the image information to be detected.
3. The method according to claim 1, wherein the step of judging that the image information to be detected meets the preset standard comprises:
dividing an image in the image information to be detected into a plurality of areas;
acquiring a salient region in the plurality of regions, wherein the salient region is used for representing a region with visual saliency in the image;
calculating the confidence of each salient region, wherein the confidence is used for representing the probability that the image in the corresponding salient region is a face image;
and comparing each confidence coefficient with a preset threshold value respectively, and judging that the image information to be detected is matched with preset human body image information when any one confidence coefficient in the confidence coefficients is equal to or greater than the preset threshold value.
4. The method of claim 1, wherein the target position is provided with a photoelectric sensor, and the step of turning on the motion sensing device when the projection of the image file is detected at the target position comprises:
receiving detection information returned by the photoelectric sensor;
and when the detection information meets a second preset rule, turning on the motion sensing device.
5. A 3D interaction device based on holographic imaging, applied to a 3D interaction system, the 3D interaction system comprising a holographic imaging projector and a motion sensing device, characterized in that the device comprises:
the image acquisition module is used for acquiring the information of the image to be detected in the preset area;
the image judging module is used for starting the holographic imaging projector when the image information to be detected meets a preset standard;
the projection module is used for projecting the image file to a preset target position;
the projection processing module is used for turning on the motion sensing device when it is detected that the target position has the projection of the image file, wherein the motion sensing device is provided with an infrared sensor and a positioning device, and the projection processing module is further used for:
acquiring coordinate values of the positioning device;
matching the coordinate values with a preset coordinate interval corresponding to the preset area, and controlling the infrared sensor to perform a check when the coordinate values fall within the preset coordinate interval;
receiving a detection value returned by the infrared sensor;
when the detection value meets a third preset rule, judging that the motion sensing device is carried by a user in the preset area, and turning on the motion sensing device;
the user gesture obtaining module is used for obtaining a behavior gesture of a user based on the motion sensing equipment;
and the execution module is used for controlling the projection to finish the action corresponding to the behavior gesture.
6. The apparatus of claim 5, wherein the image acquisition module is specifically configured to:
and acquiring image information of the moving object in the preset area, and taking the acquired image information of the moving object as the image information to be detected.
7. The apparatus according to claim 5, wherein the motion sensing device is provided with an infrared sensor, and the image determining module is specifically configured to:
dividing an image in the image information to be detected into a plurality of areas;
acquiring a salient region in the plurality of regions, wherein the salient region is used for representing a region with visual saliency in the image;
calculating the confidence of each salient region, wherein the confidence is used for representing the probability that the image in the corresponding salient region is a face image;
and comparing each confidence coefficient with a preset threshold value respectively, and judging that the image information is matched with preset human body image information when any one confidence coefficient in the confidence coefficients is equal to or greater than the preset threshold value.
8. The apparatus of claim 5, wherein the target position is provided with a photoelectric sensor, and the projection processing module is further configured to:
receiving detection information returned by the photoelectric sensor;
and when the detection information meets a second preset rule, the motion sensing equipment is started.
CN201710204357.5A 2017-03-30 2017-03-30 3D interaction method and device based on holographic imaging Active CN106970709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710204357.5A CN106970709B (en) 2017-03-30 2017-03-30 3D interaction method and device based on holographic imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710204357.5A CN106970709B (en) 2017-03-30 2017-03-30 3D interaction method and device based on holographic imaging

Publications (2)

Publication Number Publication Date
CN106970709A CN106970709A (en) 2017-07-21
CN106970709B true CN106970709B (en) 2020-01-17

Family

ID=59336368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710204357.5A Active CN106970709B (en) 2017-03-30 2017-03-30 3D interaction method and device based on holographic imaging

Country Status (1)

Country Link
CN (1) CN106970709B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107908384A (en) * 2017-11-18 2018-04-13 深圳市星野信息技术有限公司 A kind of method, apparatus, system and the storage medium of real-time display holographic portrait
CN108555422A (en) * 2018-03-02 2018-09-21 广州市盘古机器人科技有限公司 More infrared sensor three-dimensional coordinate posture acquiring technologies
CN110096144B (en) * 2019-04-08 2022-11-15 汕头大学 Interactive holographic projection method and system based on three-dimensional reconstruction
CN111722769B (en) * 2020-07-16 2024-03-05 腾讯科技(深圳)有限公司 Interaction method, interaction device, display equipment and storage medium
CN112782683A (en) * 2020-12-30 2021-05-11 广州市德晟光电科技股份有限公司 System and method for ground radar projection interaction

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102307288A (en) * 2011-07-27 2012-01-04 中国计量学院 Projection system moving along with sightline of first person based on human face recognition
JP2013038751A (en) * 2011-08-11 2013-02-21 Panasonic Corp Photographing apparatus
CN105763917B (en) * 2016-02-22 2019-09-20 青岛海信电器股份有限公司 A kind of control method and system of terminal booting
CN205644372U (en) * 2016-04-20 2016-10-12 李科 Can replace intelligent wearing equipment of module
CN205788091U (en) * 2016-04-22 2016-12-07 吴丰盛 A kind of human body senses computer equipment automatically
CN106295515B (en) * 2016-07-28 2019-10-15 北京小米移动软件有限公司 Determine the method and device of the human face region in image
CN106372484A (en) * 2016-09-14 2017-02-01 珠海市魅族科技有限公司 Equipment control method and equipment control device

Also Published As

Publication number Publication date
CN106970709A (en) 2017-07-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant