CN117499613A - Method for preventing 3D dizziness for tripod head device and tripod head device - Google Patents

Method for preventing 3D dizziness for tripod head device and tripod head device

Info

Publication number
CN117499613A
Authority
CN
China
Prior art keywords
positioning information
offset
user
head device
information
Prior art date
Legal status
Pending
Application number
CN202210878866.7A
Other languages
Chinese (zh)
Inventor
杨萌
戴付建
赵烈烽
Current Assignee
Zhejiang Sunny Optics Co Ltd
Original Assignee
Zhejiang Sunny Optics Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Sunny Optics Co Ltd
Priority to CN202210878866.7A
Publication of CN117499613A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/296Synchronisation thereof; Control thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention provides a method for preventing 3D dizziness for a pan-tilt device, and a pan-tilt device. The method comprises the following steps. Step S1: acquire first positioning information of the eyes of a user. Step S2: obtain the playing position of a first video image according to the first positioning information. Step S3: after the positions of the user's eyes move, acquire second positioning information of the eyes, and calculate the offset from the first positioning information and the second positioning information. Step S4: convert the offset into spatial azimuth information for capturing video images according to pre-stored calibration information, the spatial azimuth information comprising the spatial position and rotation angle of the user's eyes; then adjust the camera of the pan-tilt device to capture video images at the new spatial position and rotation angle, and update and display the playing position of the second video image according to the offset. The invention solves the problem that the pan-tilt device in the prior art easily induces 3D dizziness during use.

Description

Method for preventing 3D dizziness for tripod head device and tripod head device
Technical Field
The invention relates to the technical field of optical sensing, and in particular to a method for preventing 3D dizziness for a pan-tilt device, and to a pan-tilt device.
Background
In the field of unmanned aerial vehicles, a pan-tilt mode has begun to be provided in order to offer a better real-time flight experience and finer handling. In the pan-tilt mode, the user controls the unmanned aerial vehicle through a visual optical module and a control handle in a manner similar to a first-person-view game, and the flight attitude is adjusted according to real-time feedback from the control handle, bringing an experience close to real flight. The pan-tilt mode requires the camera mounted on the unmanned aerial vehicle to capture video images at a high frame rate and high resolution, and to transmit the captured video images back to the visual optical module through transmit-receive antennas for playback.
Due to factors such as large viewing-angle distortion, changing flight conditions, aberrations of the visual optical module, and pupil drift of the user, the pan-tilt mode easily induces the 3D dizziness phenomenon, so that some users suffer adverse reactions of varying degrees, such as nausea, dizziness, and vomiting, during flight; this is a main problem to be solved in the current field of pan-tilt flight modes. The camera mounted on the unmanned aerial vehicle is subject to hardware limitations in specification and performance, whereas the 3D visual optical module offers more room for improvement: image-processing methods are lower in cost and easier to integrate for reducing the side effects of 3D dizziness. Based on current research, however, the field of pan-tilt devices still lacks a method that can effectively reduce 3D dizziness within the visual optical module.
That is, the pan-tilt device in the prior art easily induces 3D dizziness during use.
Disclosure of Invention
The main object of the invention is to provide a method for preventing 3D dizziness for a pan-tilt device, and a pan-tilt device, so as to solve the problem that the pan-tilt device in the prior art easily induces 3D dizziness during use.
To achieve the above object, according to one aspect of the invention, there is provided a method for preventing 3D dizziness for a pan-tilt device, comprising the following steps. Step S1: acquire first positioning information of the eyes of a user. Step S2: obtain the playing position of a first video image according to the first positioning information. Step S3: after the positions of the user's eyes move, acquire second positioning information of the eyes, and calculate the offset from the first positioning information and the second positioning information. Step S4: convert the offset into spatial azimuth information for capturing video images according to pre-stored calibration information, the spatial azimuth information comprising the spatial position and rotation angle of the user's eyes; then adjust the camera of the pan-tilt device to capture video images at the new spatial position and rotation angle, and update and display the playing position of the second video image according to the offset.
Further, in steps S1 to S4, the first positioning information and the second positioning information are acquired by a visual optical module of the pan-tilt device; the first positioning information comprises first positioning information of the left eye and first positioning information of the right eye; the second positioning information comprises second positioning information of the left eye and second positioning information of the right eye; and the offset comprises an offset for the left eye and an offset for the right eye.
Further, calculating the offset from the first positioning information and the second positioning information in step S3 comprises: calculating the object distance change from the difference between the first positioning information and the second positioning information, and adjusting the offset according to the object distance change so that the offset matches the object distance change.
Further, in the process of calculating the object distance change from the difference between the first positioning information and the second positioning information, when the object distance corresponding to the object distance change exceeds a preset object distance threshold, at least one of the focal length and the aperture of the camera of the pan-tilt device is adjusted to match the object distance.
Further, the process of adjusting the spatial azimuth information for capturing video images according to the offset in step S4 comprises a shooting adjustment step: determining the new spatial position and rotation angle at which the camera of the pan-tilt device captures video images according to the spatial position and rotation angle of the user's eyes, and then adjusting the camera of the pan-tilt device to capture video images at the new spatial position and rotation angle; and a display adjustment step: updating and displaying the playing position of the second video image according to the offset.
Further, step S4 also comprises selectively performing zoom adjustment on images in different viewing-angle ranges during subsequent display, according to the maximum viewing angle of the camera and the spatial azimuth information.
Further, step S4 also comprises adjusting the frame rate of the second video image according to the rate of change of the spatial azimuth information over time.
Further, in step S3, when the offset of the left eye and/or the offset of the right eye exceeds a preset offset threshold, the offset of the left eye and/or the offset of the right eye is recorded, and the initial positioning information is updated.
According to another aspect of the invention, there is provided a pan-tilt device for performing the above method for preventing 3D dizziness for a pan-tilt device, the pan-tilt device comprising: an unmanned aerial vehicle provided with a camera for capturing images; and a visual optical module for receiving and displaying the images captured by the camera.
By applying the technical scheme of the invention, the method for preventing 3D dizziness for the pan-tilt device comprises the following steps. Step S1: acquire first positioning information of the eyes of a user. Step S2: obtain the playing position of a first video image according to the first positioning information. Step S3: after the positions of the user's eyes move, acquire second positioning information of the eyes, and calculate the offset from the first positioning information and the second positioning information. Step S4: convert the offset into spatial azimuth information for capturing video images according to pre-stored calibration information, the spatial azimuth information comprising the spatial position and rotation angle of the user's eyes; then adjust the camera of the pan-tilt device to capture video images at the new spatial position and rotation angle, and update and display the playing position of the second video image according to the offset.
During use of the pan-tilt device, the method first detects the first positioning information of the user's eyes and then determines the playing position of the first video image from it. When displacement of the user's eyes is detected, the second positioning information of the moved eyes is acquired, and the offset is calculated from the first and second positioning information. The offset is converted into spatial azimuth information for capturing video images according to the pre-stored calibration information, so that the spatial azimuth at which the camera of the pan-tilt device captures video images is adjusted and video images are captured at the new spatial position and rotation angle; the offset of the user's eyes and the spatial azimuth information of the captured video images are thus kept consistent. At the same time, the newly captured video images are updated and displayed, so that the spatial azimuth of the captured video images stays synchronized with the user's eyes and the state is updated in real time. This reduces the occurrence of 3D dizziness while the user operates the pan-tilt device, greatly lowers the likelihood of dizziness, nausea, and vomiting, and improves user satisfaction.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention. In the drawings:
fig. 1 shows a flow chart of a method for preventing 3D dizziness for a pan-tilt device according to an alternative embodiment of the invention;
fig. 2 shows a schematic diagram of the principle of preventing 3D dizziness according to the invention;
fig. 3 shows a schematic structural view of a visual optical module of a pan-tilt device according to an alternative embodiment of the invention;
fig. 4 shows a schematic view of fig. 3 from another angle;
fig. 5 shows a schematic diagram of the composition of a visual optical module of a pan-tilt device according to an alternative embodiment of the invention.
Wherein the above figures include the following reference numerals:
100. a visual optical module; 110. a housing; 120. an eyepiece lens; 130. a controller; 131. a sensing module; 132. a processing module; 133. an I/O interface; 140. a communication module.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The invention will be described in detail below with reference to the drawings in connection with embodiments.
It is noted that all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless otherwise indicated.
In the present invention, unless otherwise indicated, terms of orientation such as "upper, lower, top, bottom" are used generally with respect to the orientation shown in the drawings or with respect to the component itself in the vertical, upright or gravitational direction; also, for ease of understanding and description, "inner and outer" refers to inner and outer relative to the profile of each component itself, but the above-mentioned orientation terms are not intended to limit the present invention.
In order to solve the problem that the pan-tilt device in the prior art easily induces 3D dizziness during use, the invention provides a method for preventing 3D dizziness for a pan-tilt device, and a pan-tilt device.
As shown in figs. 1 to 5, the method for preventing 3D dizziness for a pan-tilt device comprises the following steps. Step S1: acquire first positioning information of the eyes of a user. Step S2: obtain the playing position of a first video image according to the first positioning information. Step S3: after the positions of the user's eyes move, acquire second positioning information of the eyes, and calculate the offset from the first positioning information and the second positioning information. Step S4: convert the offset into spatial azimuth information for capturing video images according to pre-stored calibration information, the spatial azimuth information comprising the spatial position and rotation angle of the user's eyes; then adjust the camera of the pan-tilt device to capture video images at the new spatial position and rotation angle, and update and display the playing position of the second video image according to the offset.
During use of the pan-tilt device, the method first detects the first positioning information of the user's eyes and then determines the playing position of the first video image from it. When displacement of the user's eyes is detected, the second positioning information of the moved eyes is acquired, and the offset is calculated from the first and second positioning information. The offset is converted into spatial azimuth information for capturing video images according to the pre-stored calibration information, so that the spatial azimuth at which the camera of the pan-tilt device captures video images is adjusted and video images are captured at the new spatial position and rotation angle; the offset of the user's eyes and the spatial azimuth information of the captured video images are thus kept consistent. At the same time, the newly captured video images are updated and displayed, so that the spatial azimuth of the captured video images stays synchronized with the user's eyes and the state is updated in real time. This reduces the occurrence of 3D dizziness while the user operates the pan-tilt device, greatly lowers the likelihood of dizziness, nausea, and vomiting, and improves user satisfaction.
It should be noted that the main purpose of the invention is to ensure that the spatial azimuth information of the images captured by the camera is consistent with the positions of the user's eyes, and specifically to keep the shooting angle of the camera synchronized with the rotation angle of the eyes, so as to avoid the 3D dizziness phenomenon.
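Purely as an illustration of how steps S1 to S4 chain together, the following Python sketch renders the method as a closed control loop. All interfaces here (`eye_tracker`, `camera`, `display`, the calibration gains) are hypothetical stand-ins, not part of the disclosed device:

```python
# Minimal sketch of the S1-S4 loop under assumed interfaces.
from dataclasses import dataclass


@dataclass
class Positioning:
    x: float  # horizontal pupil coordinate reported by the sensing module
    y: float  # vertical pupil coordinate


def compute_offset(first: Positioning, second: Positioning) -> tuple:
    """Step S3: offset between first and second positioning information."""
    return (second.x - first.x, second.y - first.y)


def to_spatial_azimuth(offset: tuple, calibration: dict) -> tuple:
    """Step S4: convert the offset into spatial azimuth information
    (spatial position and rotation angle) via pre-stored calibration."""
    dx, dy = offset
    return (calibration["pos_gain"] * dx,
            calibration["pos_gain"] * dy,
            calibration["rot_gain"] * dx)


def anti_dizziness_loop(eye_tracker, camera, display, calibration):
    first = eye_tracker.read()               # S1: first positioning information
    display.set_play_position(first)         # S2: play position of first image
    while True:
        second = eye_tracker.read()          # S3: positioning after eye movement
        offset = compute_offset(first, second)
        x, y, rot = to_spatial_azimuth(offset, calibration)
        camera.move_to(x, y, rot)            # S4: capture at new position/angle
        display.update_play_position(offset) # update second image play position
        first = second
```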
Specifically, in steps S1 to S4, the first positioning information and the second positioning information are acquired by the visual optical module 100 of the pan-tilt device. The first positioning information comprises first positioning information of the left eye and first positioning information of the right eye; the second positioning information comprises second positioning information of the left eye and second positioning information of the right eye; and the offset comprises an offset for the left eye and an offset for the right eye, and may also comprise the spatial position change and rotation angle change of both of the user's eyes. With this arrangement, the method for preventing 3D dizziness can independently relieve dizziness caused by independent drift of the user's left and right eyes, reducing the likelihood of 3D dizziness during use. Calculating the offset from the first and second positioning information in step S3 comprises: calculating the object distance change from the difference between the first positioning information and the second positioning information, and adjusting the offset according to the object distance change so that the offset matches the object distance change. In this way, object distance matching is added on top of viewing-angle matching, improving the anti-dizziness effect of the visual optical module 100.
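A minimal sketch of the per-eye bookkeeping described above; the tuple layout and field names are assumptions made for illustration:

```python
from dataclasses import dataclass


@dataclass
class BinocularPositioning:
    left: tuple   # (x, y) pupil coordinates of the left eye
    right: tuple  # (x, y) pupil coordinates of the right eye


def binocular_offsets(first: BinocularPositioning,
                      second: BinocularPositioning) -> dict:
    """Keep separate offsets for the left and right eye so that drift of
    one eye alone can be compensated independently of the other."""
    return {
        "left": (second.left[0] - first.left[0],
                 second.left[1] - first.left[1]),
        "right": (second.right[0] - first.right[0],
                  second.right[1] - first.right[1]),
    }
```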
Specifically, the process of adjusting the spatial azimuth information for capturing video images according to the offset in step S4 comprises a shooting adjustment step: determining the new spatial position and rotation angle at which the camera of the pan-tilt device captures video images according to the spatial position and rotation angle of the user's eyes, and then adjusting the camera of the pan-tilt device to capture video images at the new spatial position and rotation angle; and a display adjustment step: updating and displaying the playing position of the second video image according to the offset. That is, when the camera receives new spatial azimuth information, it shoots at the new spatial position or rotation angle, and the visual optical module 100 then receives the image shot from the new spatial position or rotation angle and presents it to the user. This process runs continuously, ensuring that the user receives feedback in as close to real time as possible during flight.
Specifically, in the process of calculating the object distance change from the difference between the first positioning information and the second positioning information, when the object distance corresponding to the object distance change exceeds a preset object distance threshold, at least one of the focal length and the aperture of the camera of the pan-tilt device is adjusted to match the object distance. In this way, when the visual optical module 100 measures an object distance change that does not match the depth-of-field range of the camera on the unmanned aerial vehicle, the focal length and aperture are additionally adjusted to improve the anti-dizziness effect.
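The threshold test can be sketched as below; the camera setters and the 0.8 shrink factor are assumptions, since the disclosure only states that focal length and/or aperture are reduced to enlarge the depth of field:

```python
def match_depth_of_field(new_distance: float, old_distance: float,
                         threshold: float, camera) -> None:
    """Adjust focal length and/or aperture when the object distance change
    exceeds the preset threshold (hypothetical camera interface)."""
    if abs(new_distance - old_distance) > threshold:
        # Shorten the focal length and stop down the aperture to
        # enlarge the depth of field and relieve dizziness.
        camera.set_focal_length(camera.focal_length * 0.8)
        camera.set_aperture(camera.aperture * 0.8)
    else:
        # Object distance back in a reasonable range: restore initial values.
        camera.set_focal_length(camera.initial_focal_length)
        camera.set_aperture(camera.initial_aperture)
```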
Specifically, step S4 also comprises selectively performing zoom adjustment on images in different viewing-angle ranges during subsequent display, according to the maximum viewing angle of the camera and the spatial azimuth information. On the basis of the above, this further corrects distortion in the outer field of view and improves the anti-dizziness effect.
Specifically, in step S4, the method further includes adjusting the frame rate of the video image displayed at the second video information playing position according to the rate of change of the spatial orientation information of the camera with time. When the camera is further used for acquiring image visual angle switching, buffering of video images is increased, and anti-dizziness effect is improved.
Specifically, in step S3, when the offset of the left eye and/or the offset of the right eye exceeds the preset offset threshold more than a preset number of times, the offset of the left eye and the offset of the right eye are recorded, and the initial positioning information is updated so as to update the pre-stored setting information. By recording biological information related to viewing angle, interpupillary distance, diopter, and the like, personalized initial positioning information is formed to increase the degree of matching between the visual optical module 100 and the user, yielding a customized, personalized application effect.
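One plausible sketch of this personalization step is a counter over threshold crossings that re-bases the reference position; the mean-shift rule and both thresholds are assumptions, since the disclosure only says the offsets are recorded and the initial positioning information updated:

```python
class ReferenceUpdater:
    """Records offsets that exceed the preset threshold and, after too many
    exceedances, shifts the initial positioning information so the
    pre-stored settings fit this particular user."""

    def __init__(self, offset_threshold: float, max_exceedances: int):
        self.offset_threshold = offset_threshold
        self.max_exceedances = max_exceedances
        self.recorded = []  # offsets that exceeded the threshold

    def observe(self, offset: tuple, reference: tuple) -> tuple:
        if (offset[0] ** 2 + offset[1] ** 2) ** 0.5 > self.offset_threshold:
            self.recorded.append(offset)
        if len(self.recorded) > self.max_exceedances:
            # Re-base the reference on the mean recorded offset, then reset.
            n = len(self.recorded)
            mean_x = sum(o[0] for o in self.recorded) / n
            mean_y = sum(o[1] for o in self.recorded) / n
            reference = (reference[0] + mean_x, reference[1] + mean_y)
            self.recorded.clear()
        return reference
```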
The invention also provides a cradle head device, which is used for executing the method for preventing 3D dizziness of the cradle head device, and comprises the following steps: the unmanned aerial vehicle is provided with a camera for collecting images; and the visual optical module 100, wherein the visual optical module 100 is used for receiving and displaying the image acquired by the camera.
As shown in figs. 3 to 5, a typical visual optical module 100 comprises a housing 110, a plurality of eyepiece lenses 120 arranged in sequence in the housing 110, a controller 130, and a communication module 140 (not shown in figs. 3 and 4) for communicating with the unmanned aerial vehicle, which flies in the air and carries the image-capturing camera and a communication antenna. The controller 130 comprises an I/O interface 133 communicatively coupled to the communication module 140, a processing module 132 that performs various computing functions such as image cropping and display, and a sensing module 131 for receiving and sensing information on the positions of the user's eyes. The sensing module 131 comprises an infrared light source, a reflective optical element or refractive optical lens that guides the light emitted from the infrared light source to the user's eyes, and an infrared sensor that receives the reflected light.
Fig. 2 illustrates the principle by which 3D dizziness is generated. In the pre-stored setting information, the first positioning information of the user's left eye is 310 and the first positioning information of the right eye is 320. When viewing a scene 350 at a theoretical object distance L, the theoretical positions or angles of the pupils of the two eyes should satisfy a prescribed geometric relation, and the visual optical module then displays image 330 and image 340 at the eyepieces corresponding to the left and right eyes. However, when the scene 350 changes or a different scene needs to be viewed, the pupils must rotate. Deviations then arise between the actual positions of the displayed images 331 and 341 and the theoretical positions required by the new spatial positions or angles 311 and 321 of the pupils; these deviations stem not only from physical factors in flight and handle operation, but in actual operation also from optical factors such as object-image field mismatch, field distortion, video refresh rate, and depth of field, thereby aggravating the user's 3D dizziness response.
To ensure that the display of images 331 and 341 can be adjusted more accurately in response to the changes of the pupils of the two eyes during the above process, the infrared light reflected by the pupils must be monitored continuously; the offset of the pupils relative to the pre-stored reference position is evaluated, for example, by comparing against the reflected infrared-light coordinates corresponding to the reference position (such as the position of the central field of view on the corresponding optical axis), and a spatial offset is output. The spatial offset is converted into spatial azimuth information for the camera to capture video images according to the pre-stored calibration information, and this information instructs the camera carried by the unmanned aerial vehicle to shoot at the new spatial position or rotation angle. The pre-stored calibration information comprises the correspondence, established at the factory, between the fields of view in each direction in the eyepiece and the spatial position or rotation angle at which the camera shoots. The visual optical module 100 thereafter receives the image shot from the new spatial position or rotation angle and presents it to the user. This process runs continuously, ensuring that the user receives feedback in as close to real time as possible during flight, so as to reduce the likelihood of 3D dizziness.
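Since the calibration information is a factory-built correspondence between eyepiece field directions and camera orientations, one plausible minimal representation is a lookup table with linear interpolation. The table values and the one-dimensional layout below are invented for illustration:

```python
import bisect

# Hypothetical factory calibration: eyepiece field direction (degrees)
# paired with the camera rotation angle (degrees) that reproduces it.
CALIBRATION = [(-30.0, -42.0), (-15.0, -20.5), (0.0, 0.0),
               (15.0, 20.5), (30.0, 42.0)]
FIELDS = [f for f, _ in CALIBRATION]


def field_to_camera_angle(field_deg: float) -> float:
    """Interpolate the pre-stored calibration to get the camera rotation
    angle for the field direction the pupil now points at."""
    i = bisect.bisect_left(FIELDS, field_deg)
    if i == 0:
        return CALIBRATION[0][1]
    if i == len(CALIBRATION):
        return CALIBRATION[-1][1]
    (f0, a0), (f1, a1) = CALIBRATION[i - 1], CALIBRATION[i]
    t = (field_deg - f0) / (f1 - f0)
    return a0 + t * (a1 - a0)
```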
Of course, the above scheme is preferably combined with various auxiliary schemes to achieve the best effect of preventing 3D dizziness.
First, the corresponding object distance change can be calculated from the difference between the first and second positioning information of the user's left eye and the first and second positioning information of the right eye, and the offsets of the two eyes can be further fine-tuned according to the object distance changes corresponding to the two eyes. The object distance change Δd1 of the left eye is calculated from the offset between the first and second positioning information of the left eye, and the object distance change Δd2 of the right eye is calculated from the offset between the first and second positioning information of the right eye; if Δd1 and Δd2 mismatch excessively, eye fatigue and dizziness are caused, so the calculated offsets are adjusted to ensure that Δd1 and Δd2 remain substantially consistent. Keeping Δd1 and Δd2 substantially consistent means keeping the left and right eyes consistent with their respective object distances, and the object distance then serves as an index for judging in time whether the focal length and aperture of the camera should be adjusted. If a sharp change occurs between the first object distance, calculated from the first positioning information of the two eyes, and the second object distance, calculated from the second positioning information of the two eyes, for example exceeding the preset object distance threshold, the focal length or aperture of the camera should be adjusted in time, for example reduced to enlarge the depth of field and thereby relieve the user's dizziness; conversely, once the object distance change returns to a reasonable range, the focal length and aperture can be enlarged back to their initial values. In addition, if the offset exceeds the threshold too many times, the reference position needs to be further optimized to reduce the workload of real-time adjustment, and the offsets can be recorded for updating the pre-stored setting information.
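A sketch of the binocular consistency rule; pulling both values to their mean is one assumed way of making Δd1 and Δd2 "substantially consistent", which the text does not pin down:

```python
def reconcile_object_distances(delta_d1: float, delta_d2: float,
                               tolerance: float) -> tuple:
    """If the left-eye and right-eye object distance changes mismatch by
    more than the tolerance, pull both toward their mean so each eye stays
    consistent with its corresponding object distance."""
    if abs(delta_d1 - delta_d2) <= tolerance:
        return delta_d1, delta_d2
    mean = (delta_d1 + delta_d2) / 2.0
    return mean, mean
```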
Other auxiliary schemes include countering the distortion caused by an ultra-wide-angle shooting lens with a half field of view of 60° or more; in the region of 45° or more, for example, distortion increases further with the field of view and produces a sense of dizziness. If the spatial azimuth corresponding to the pupils of the user's eyes points to a high-distortion region of, for example, 45-60° or more, the degree of clipping of the outer field of view is determined according to the distortion curve of the camera carried by the unmanned aerial vehicle. If the distortion at a given viewing angle exceeds 2%, software image scaling can be applied to that viewing-angle range, with the scaling factor chosen to cancel the distortion; when the spatial azimuth is not within the viewing-angle range of the high-distortion region, no image scaling is performed, which improves image-processing efficiency.
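The 45° boundary and the 2% criterion come from the text above; the sample distortion curve and the nearest-sample lookup below are invented for illustration:

```python
# Hypothetical distortion curve of an ultra-wide lens:
# field angle (degrees) -> relative distortion (illustrative values).
DISTORTION_CURVE = {0: 0.000, 15: 0.004, 30: 0.012, 45: 0.023, 60: 0.041}


def zoom_factor(field_angle_deg: float) -> float:
    """Scale only viewing-angle ranges whose distortion exceeds 2%,
    by just enough to cancel the distortion; elsewhere leave the image
    untouched to save image-processing effort."""
    nearest = min(DISTORTION_CURVE, key=lambda a: abs(a - field_angle_deg))
    distortion = DISTORTION_CURVE[nearest]
    if distortion <= 0.02:
        return 1.0
    return 1.0 / (1.0 + distortion)  # counteract barrel-type stretching
```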
In addition, dynamic adjustment of the image frame rate also helps relieve dizziness. The frame rate of the displayed video image can be adjusted continuously according to the rate of change over time of the spatial azimuth information calculated from the binocular reflections: if the orientations of the eyes are sensed to change frequently, the transmission and display frame rate of the image can be raised; otherwise, the frame rate can be lowered to save power. Through this adjustability of the image frame rate, the displayed images follow the changes in the positions and angles of the user's eyes more closely, so that display efficiency is maintained while power is saved and the user's satisfaction is ensured.
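The frame-rate policy can be sketched as a bounded linear function of how fast the spatial azimuth information changes; the bounds and the gain of 2 fps per degree/second are assumptions:

```python
def adapt_frame_rate(azimuth_rate_deg_per_s: float,
                     base_fps: float = 60.0,
                     min_fps: float = 30.0,
                     max_fps: float = 120.0) -> float:
    """Raise the transmission/display frame rate when the eyes' orientation
    changes quickly; lower it when the gaze is steady to save power."""
    fps = base_fps + 2.0 * azimuth_rate_deg_per_s  # assumed gain
    return max(min_fps, min(max_fps, fps))
```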
It will be apparent that the embodiments described above are merely some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments in accordance with the present application. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or described herein.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A method for preventing 3D dizziness for a pan-tilt device, comprising the following steps:
step S1: acquiring first positioning information of the eyes of a user;
step S2: obtaining a playing position of a first video image according to the first positioning information;
step S3: after the positions of the user's eyes move, acquiring second positioning information of the eyes, and calculating an offset according to the first positioning information and the second positioning information;
step S4: converting the offset into spatial azimuth information for capturing video images according to pre-stored calibration information, the spatial azimuth information comprising the spatial position and rotation angle of the user's eyes; then adjusting a camera of the pan-tilt device to capture video images at the new spatial position and rotation angle; and then updating and displaying the playing position of a second video image according to the offset.
2. The method for preventing 3D dizziness for a pan-tilt device according to claim 1, wherein in steps S1 to S4 the first positioning information and the second positioning information are acquired through a visual optical module (100) of the pan-tilt device,
the first positioning information comprises first positioning information of the left eye and first positioning information of the right eye;
the second positioning information comprises second positioning information of the left eye and second positioning information of the right eye; and
the offset comprises an offset for the left eye and an offset for the right eye.
3. The method for preventing 3D dizziness for a pan-tilt device according to claim 2, wherein calculating the offset according to the first positioning information and the second positioning information in step S3 comprises:
calculating an object distance change according to the difference between the first positioning information and the second positioning information, and adjusting the offset according to the object distance change so that the offset matches the object distance change.
4. The method for preventing 3D dizziness for a pan-tilt device according to claim 3, wherein, in calculating the object distance change from the difference between the first positioning information and the second positioning information, when the object distance corresponding to the object distance change exceeds a preset object distance threshold, at least one of a focal length and an aperture of a camera of the pan-tilt device is adjusted to match the object distance.
5. The method for preventing 3D dizziness for a pan-tilt device according to claim 1, wherein adjusting the spatial azimuth information for capturing video images according to the offset in step S4 comprises:
a shooting adjustment step: determining a new spatial position and a new rotation angle at which the camera of the pan-tilt device captures video images according to the spatial position and rotation angle of the user's eyes, and then adjusting the camera of the pan-tilt device to capture video images at the new spatial position and the new rotation angle; and
a display adjustment step: updating and displaying the playing position of the second video image according to the offset.
6. The method for preventing 3D dizziness for a pan-tilt device according to claim 4, further comprising, in step S4, selectively performing zoom adjustment on images in different viewing-angle ranges during subsequent display, according to the maximum viewing angle of the camera and the spatial azimuth information.
7. The method for preventing 3D dizziness for a pan-tilt device according to claim 4, further comprising, in step S4, adjusting the frame rate of the second video image according to the rate of change of the spatial azimuth information over time.
8. The method for preventing 3D dizziness for a pan-tilt device according to claim 4, wherein, in step S3, when the offset of the left eye and/or the offset of the right eye exceeds a preset offset threshold, the offset of the left eye and/or the offset of the right eye is recorded, and initial positioning information is updated.
9. A pan-tilt device, characterized in that it performs the method for preventing 3D dizziness for a pan-tilt device according to any one of claims 1 to 8, the pan-tilt device comprising:
an unmanned aerial vehicle provided with a camera for capturing images; and
a visual optical module (100) for receiving and displaying the images captured by the camera.
CN202210878866.7A 2022-07-25 2022-07-25 Method for preventing 3D dizziness for tripod head device and tripod head device Pending CN117499613A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210878866.7A CN117499613A (en) 2022-07-25 2022-07-25 Method for preventing 3D dizziness for tripod head device and tripod head device

Publications (1)

Publication Number Publication Date
CN117499613A true CN117499613A (en) 2024-02-02

Family

ID=89678759



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination