CN115908212A - Anti-dizziness method - Google Patents
- Publication number: CN115908212A (application CN202211206329.4A)
- Authority
- CN
- China
- Prior art keywords
- target
- image
- area
- region
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Abstract
The invention discloses an anti-dizziness method, which relates to the technical field of machine vision and comprises the following steps: acquiring a plurality of original images; performing ISP processing on the original images and then splicing and fusing them to obtain a spliced image; capturing the gaze region of the human eye through an eye tracker module and dividing the spliced image into a foreground region and a background region according to the gaze region; highlighting and enhancing the foreground region and blurring the background region with an adaptive hinting and enhancement algorithm; performing target identification and edge detection on the processed foreground region to acquire the target's category information and contour information, and reconstructing the target contour; and, in a night vision environment, acquiring thermal imaging information of the target, outlining the reconstructed target contour, and increasing the brightness of the pixels along the outlined contour to obtain a final image, which is then displayed.
Description
Technical Field
The invention relates to the technical field of machine vision, in particular to an anti-dizziness method.
Background
Research and development of head-mounted equipment is a hotspot for countries around the world. The near-eye display system is a core component of such equipment, and the vertigo it causes is a common problem worldwide.
At present, no public results have been reported on research into anti-dizziness for near-eye display devices, and the dizziness problem appears in all of them. Vertigo has many causes, both physiological and in the software and hardware of the visual system. On the hardware side, it mainly manifests as a viewing angle that is too small and that differs greatly from natural human vision. On the side of display content and display mode, corresponding anti-dizziness measures are lacking, which conflicts with human cognition and makes the user dizzy.
Therefore, reducing and eliminating the vertigo caused by near-eye display systems and improving the user experience is a problem that urgently needs to be solved in this application field.
Disclosure of Invention
The embodiment of the invention provides an anti-dizziness method that can solve the above problems in the prior art.
The invention provides an anti-dizziness method, which comprises the following steps:
acquiring a plurality of original images;
carrying out ISP processing on a plurality of original images, and carrying out splicing and fusion to obtain spliced images;
capturing a human eye gazing area through an eye tracker module, and dividing the spliced image into a foreground area, a transition area and a background area according to the gazing area;
adopting an adaptive hinting and enhancement algorithm to highlight and enhance the foreground area and blur the background area;
carrying out target identification and edge detection on the processed foreground area, acquiring target category information and contour information, and carrying out target contour reconstruction;
in a night vision environment, acquiring thermal imaging information of the target, outlining the reconstructed target contour, increasing the brightness of the pixels along the outlined contour to obtain a final image, and displaying the final image.
Preferably, ISP processing is performed on the original image through an FPGA circuit or an intelligent video processing SoC, which specifically includes the following steps:
black level correction, distortion correction, noise removal, dead pixel removal, bayer interpolation, color correction, white balance and automatic exposure control;
and splicing and fusing the original image processed by the ISP through the ARM.
Preferably, the step of capturing the human eye gaze area through the eye tracker module and dividing the spliced image into a foreground area, a transition area and a background area according to the gaze area comprises the following steps:
capturing the attention area of the user with the eye tracker module, performing area expansion on the extracted attention area, and dividing the spliced image into the foreground area, the transition area and the background area accordingly.
Preferably, the step of highlighting and enhancing the foreground area and blurring the background area with the adaptive hinting and enhancement algorithm comprises the following steps:
automatically adjusting the cut-off frequency and the filtering mode of filtering according to the brightness and the contrast of the spliced image by an image adaptive filter algorithm;
blurring the background area image to simulate the visual focusing mechanism of human eyes; and performing Gaussian smoothing processing on the transition region.
Preferably, the step of performing target identification and edge detection on the processed foreground region, acquiring the category information and contour information of the target, and reconstructing the target contour comprises the following steps:
performing target identification based on deep learning or machine learning according to the divided foreground region to obtain the category information of the target;
carrying out edge detection on the foreground area to obtain the outline of the target;
and reconstructing the contour of the target according to the category information and the edge information of the target.
Preferably, in a night vision environment, the step of acquiring thermal imaging information of the target, outlining the reconstructed target contour, and increasing the brightness of the pixels along the outlined contour according to the overall brightness of the spliced image specifically comprises the following steps:
acquiring an infrared image of the foreground area and thermal imaging information of the target by using an infrared sensor;
carrying out image fusion processing on the infrared image and the original image;
and according to the infrared thermal image information of the target, outlining the target contour again, adaptively adjusting the overall brightness of the target according to the brightness of the fused image, and increasing the contrast between the target and the background area, so that the target is further enhanced and displayed.
Preferably, before the final image is displayed, its frame rate is increased to a set value by using an optical flow prediction method and a centroid prediction method, which specifically comprises the following steps:
converting two continuous frames of input images into a gray scale image;
calculating an optical flow field of a pixel level through the gray-scale image, and predicting motion field information of the image;
and performing frame interpolation according to the motion field information, increasing the frame rate of the video to the set value, and sending the result to a display system for display.
Compared with the prior art, the invention has the beneficial effects that:
the technology such as high refresh rate display, visual content suggestion and background rendering effectively reduces the dizzy sense of the user, prolongs the use time, and solves the problems of dizziness, unbalance, fatigue and the like of the existing night vision equipment in the long-time wearing and using process. The technology can be applied to individual investigation products or police products to perform tactical functions such as day and night marching, reconnaissance, battle, information display, maintenance and the like, or to civil night vision equipment to enhance operation experience.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is an overall flow chart of the anti-dizziness method of the present invention;
FIG. 2 is a schematic diagram of the stitched image partitioning of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1-2, the present invention provides an anti-dizziness method, including the steps of:
the first step is as follows: a plurality of original images are acquired.
And secondly, performing ISP processing on a plurality of original images through an FPGA circuit or an intelligent video processing SoC, wherein the ISP processing specifically comprises black level correction, distortion correction, noise removal, dead pixel removal, bayer interpolation, color correction, white balance, automatic exposure control and the like. And splicing and fusing the original images processed by the ISP through the ARM to realize large-field-angle image display.
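Two of the ISP stages above can be sketched in a few lines of NumPy. The patent does not give formulas, so the black level of 64 and the gray-world white-balance rule below are assumptions for illustration only, not the circuit's actual processing:

```python
import numpy as np

def black_level_correction(raw, black_level=64):
    """Subtract the sensor's black-level offset and clip to the valid range."""
    return np.clip(raw.astype(np.int32) - black_level, 0, 255).astype(np.uint8)

def gray_world_white_balance(img):
    """Gray-world white balance: scale each channel so its mean
    matches the mean over all channels."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(img.astype(float) * gains, 0, 255).astype(np.uint8)

# Toy 4x4 RGB "raw" frame: a flat value of 128 with an assumed black level of 64.
raw = np.full((4, 4, 3), 128, dtype=np.uint8)
corrected = black_level_correction(raw, black_level=64)
balanced = gray_world_white_balance(corrected)
```

On real hardware these stages would run in the FPGA or SoC pipeline rather than in software; the sketch only shows the arithmetic each stage performs.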
And thirdly, capturing the attention area of the user through an eye tracker module, and dividing the spliced image into a foreground area, a transition area and a background area around the extracted attention area.
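The region division driven by the eye tracker can be illustrated with a minimal sketch. The gaze point and the two radii are assumed to come from the eye tracker module; the concrete radii here are invented for illustration, since the patent does not specify them:

```python
import numpy as np

def partition_by_gaze(h, w, gaze, r_fg, r_tr):
    """Label each pixel by its distance from the gaze point:
    0 = foreground, 1 = transition, 2 = background."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - gaze[0], xs - gaze[1])
    labels = np.full((h, w), 2, dtype=np.uint8)  # default: background
    labels[dist <= r_tr] = 1                     # transition ring
    labels[dist <= r_fg] = 0                     # foreground disc
    return labels

# Gaze at the image centre; radii of 15 and 30 pixels are illustrative.
labels = partition_by_gaze(100, 100, gaze=(50, 50), r_fg=15, r_tr=30)
```

The resulting label map is what the later steps consume: foreground pixels are enhanced, transition pixels smoothed, background pixels blurred.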
The fourth step: adopting a self-adaptive suggestion and enhancement algorithm to highlight and enhance the foreground area and blurring the background area, and specifically comprising the following steps of: automatically adjusting the cut-off frequency and the filtering mode of filtering according to the brightness and the contrast of the spliced image by an image adaptive filter algorithm; blurring the background area image to simulate the visual focusing mechanism of human eyes; and performing Gaussian smoothing processing on the transition region.
In order to make the display content match the working habits of the human eye, the model of how the human eye acquires an image is taken as the starting point. The content of the main attention area needs to be displayed clearly and with emphasis, while the scene outside this key area only provides the information needed for human balance and does not need to be displayed sharply. The displayed content is therefore rendered so that the main display content gives a hint and suggests the main content module to be displayed next, guiding the user's visual attention into a small area and reducing the discomfort brought by the surrounding scene.
The fifth step: carrying out target identification and edge detection on the processed foreground area, acquiring the category information and contour information of a target, and carrying out target contour reconstruction, wherein the method specifically comprises the following steps: performing target identification based on deep learning or machine learning according to the divided foreground region to obtain the category information of the target; carrying out edge detection on the foreground area to obtain the outline of the target; and reconstructing the contour of the target according to the category information and the edge information of the target.
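Edge detection on the foreground region could, for example, use a gradient-magnitude detector. The patent does not name a specific operator, so the 3x3 Sobel kernels and the threshold below are assumptions standing in for whichever detector the pipeline actually uses:

```python
import numpy as np

def sobel_edges(img, thresh=50):
    """Boolean edge map from the 3x3 Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    f = img.astype(float)
    h, w = f.shape
    gx = np.zeros((h, w)); gy = np.zeros((h, w))
    for y in range(1, h - 1):          # skip the 1-pixel border
        for x in range(1, w - 1):
            win = f[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = (win * kx).sum()
            gy[y, x] = (win * ky).sum()
    return np.hypot(gx, gy) > thresh

# Bright square on a dark background: edges fire on the square's border,
# not in its flat interior.
img = np.zeros((10, 10)); img[3:7, 3:7] = 255
edges = sobel_edges(img)
```

The resulting contour pixels, combined with the category label from the recognition network, are the inputs the patent's contour-reconstruction step would work from.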
And a sixth step: in a night vision environment, acquiring thermal imaging information of the target, outlining the reconstructed target contour, and increasing the brightness of the pixels along the outlined contour according to the overall brightness of the spliced image, which specifically comprises the following steps: acquiring an infrared image of the foreground area and thermal imaging information of the target with an infrared sensor; performing image fusion processing on the infrared image and the original image; and, according to the infrared thermal image information of the target, outlining the target contour again, adaptively adjusting the overall brightness of the target according to the brightness of the fused image, and increasing the contrast between the target and the background area, so that the target is further enhanced and displayed.
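The fusion of the infrared and visible frames and the adaptive contour brightening might look like the following sketch. The weighted-average fusion and the "darker scene gets a larger gain" rule are assumptions, not formulas stated in the patent:

```python
import numpy as np

def fuse(visible, infrared, alpha=0.6):
    """Weighted-average fusion of the visible and thermal frames."""
    return np.clip(alpha * visible + (1 - alpha) * infrared, 0, 255)

def boost_contour(img, contour_mask, max_gain=1.8):
    """Brighten the pixels on the target contour; darker fused scenes
    receive a larger gain so the outlined target stands out."""
    out = img.astype(float).copy()
    mean = img.mean()
    gain = 1.0 + (max_gain - 1.0) * (1.0 - mean / 255.0)
    out[contour_mask] = np.clip(out[contour_mask] * gain, 0, 255)
    return out

# Dim visible frame, hot thermal frame, and a vertical contour segment.
vis = np.full((8, 8), 40.0)
ir = np.full((8, 8), 200.0)
fused = fuse(vis, ir)
mask = np.zeros((8, 8), dtype=bool); mask[2:6, 2] = True
enhanced = boost_contour(fused, mask)
```

Only the contour pixels are amplified, which raises the target/background contrast exactly where the reconstructed outline sits while leaving the rest of the fused image untouched.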
The seventh step: increasing the frame rate of the final image to a set value by using an optical flow prediction method and a centroid prediction method, which specifically comprises the following steps: converting two continuous frames of the input into gray-scale images; calculating a pixel-level optical flow field from the gray-scale images and predicting the motion field information of the image; and performing frame interpolation according to the motion field information, increasing the frame rate of the video to the set value, and sending it to the display system for display. The processed final image is then displayed. Specifically, the optical flow prediction method and the centroid prediction method are combined, and two frames are inserted between every two adjacent frames: the 30 frames/second low-frame-rate video from the low-illumination sensor module is interpolated into a high-frame-rate video by calculating the motion distance of an object between two adjacent frames, and each inserted frame is an image at the estimated position of the object after it has moved. This makes the displayed picture smooth and fluent and increases the frame rate of the final image to 90 Hz.
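The centroid-prediction half of this step can be sketched as follows: the moving object's centroid is located in two consecutive frames, and two intermediate frames are synthesized along the line between the centroids (30 fps with two inserted frames per interval gives 90 fps). Shifting the whole frame with `np.roll` is a deliberate simplification; a real interpolator would warp per-pixel using the optical flow field:

```python
import numpy as np

def centroid(frame):
    """Intensity-weighted centroid (row, col) of a frame."""
    ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    m = frame.sum()
    return (ys * frame).sum() / m, (xs * frame).sum() / m

def interpolate_frames(f0, f1, n_insert=2):
    """Synthesize n_insert frames between f0 and f1 by shifting the
    moving blob along the line between the two centroids."""
    c0, c1 = np.array(centroid(f0)), np.array(centroid(f1))
    frames = []
    for i in range(1, n_insert + 1):
        t = i / (n_insert + 1)
        shift = np.round(t * (c1 - c0)).astype(int)
        frames.append(np.roll(f0, shift, axis=(0, 1)))
    return frames

f0 = np.zeros((20, 20)); f0[4:6, 4:6] = 255       # blob at top-left
f1 = np.zeros((20, 20)); f1[10:12, 10:12] = 255   # blob moved down-right
mid = interpolate_frames(f0, f1, n_insert=2)      # 30 fps -> 90 fps
```

Each synthesized frame places the object at its estimated intermediate position, which is what makes the displayed motion appear smooth at the higher frame rate.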
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (7)
1. An anti-dizziness method, comprising the steps of:
acquiring a plurality of original images;
carrying out ISP processing on a plurality of original images, and carrying out splicing and fusion to obtain spliced images;
capturing a human eye gaze region through an eye tracker module, and dividing the spliced image into a foreground region, a transition region and a background region according to the gaze region;
highlighting and enhancing the foreground area by adopting an adaptive hinting and enhancement algorithm, and blurring the background area;
carrying out target identification and edge detection on the processed foreground area, acquiring target category information and contour information, and carrying out target contour reconstruction;
in a night vision environment, acquiring thermal imaging information of a target, outlining the reconstructed target contour, increasing the brightness of the pixels along the outlined contour to obtain a final image, and displaying the final image.
2. The method for preventing dizziness according to claim 1, wherein ISP processing is performed on a plurality of the original images through an FPGA circuit or an intelligent video processing SoC, and the method specifically comprises:
black level correction, distortion correction, noise removal, dead pixel removal, bayer interpolation, color correction, white balance and automatic exposure control;
and splicing and fusing a plurality of original images processed by the ISP through the ARM.
3. The anti-dizziness method according to claim 1, wherein the eye tracker module captures a gaze region of a human eye, and the spliced image is divided into a foreground region, a transition region and a background region according to the gaze region, comprising the steps of:
capturing the attention area of the user according to the eye tracker module, performing area expansion on the captured attention area, and dividing the foreground area, the transition area and the background area of the spliced image according to the attention area after the area expansion.
4. The anti-dizziness method according to claim 3, wherein the foreground region is highlighted and enhanced using an adaptive hinting and enhancement algorithm and the background region is blurred, comprising the steps of:
automatically adjusting the cut-off frequency and the filtering mode of filtering according to the brightness and the contrast of the spliced image by an image adaptive filter algorithm;
blurring the background area image to simulate the visual focusing mechanism of human eyes; and performing Gaussian smoothing processing on the transition region.
5. The anti-dizziness method according to claim 1, wherein the step of performing object identification and edge detection on the processed foreground area, acquiring category information and contour information of an object, and performing object contour reconstruction comprises the steps of:
performing target identification based on deep learning or machine learning according to the divided foreground region to obtain the category information of the target;
carrying out edge detection on the foreground area to obtain the outline of the target;
and reconstructing the contour of the target according to the category information and the edge information of the target.
6. The anti-dizziness method according to claim 1, wherein the steps of obtaining thermal imaging information of the target in a night vision environment, outlining the reconstructed target contour, and increasing the brightness of the pixels along the outlined contour according to the overall brightness of the spliced image comprise:
acquiring an infrared image of a foreground area and thermal imaging information of a target by using an infrared sensor;
carrying out image fusion processing on the infrared image and the original image;
and according to the infrared thermal image information of the target, outlining the target contour again, adaptively adjusting the overall brightness of the target according to the brightness of the fused image, and increasing the contrast between the target and the background area, so that the target is further enhanced and displayed.
7. The anti-dizziness method according to claim 1, wherein before displaying the final image, the frame rate of the final image is increased to a set value by using an optical flow prediction method and a centroid prediction method, comprising the steps of:
converting two continuous frames of input images into a gray-scale image;
calculating an optical flow field of a pixel level through the gray-scale image, and predicting motion field information of the image;
and performing frame interpolation according to the motion field information, increasing the frame rate of the video to the set value, and sending the result to a display system for display.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211206329.4A CN115908212A (en) | 2022-09-30 | 2022-09-30 | Anti-dizziness method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115908212A true CN115908212A (en) | 2023-04-04 |
Family
ID=86469923
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211206329.4A Withdrawn CN115908212A (en) | 2022-09-30 | 2022-09-30 | Anti-dizziness method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115908212A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
2023-04-04 | WW01 | Invention patent application withdrawn after publication | Application publication date: 20230404 |