CN115348437A - Video processing method, device, equipment and storage medium

Info

Publication number
CN115348437A
CN115348437A
Authority
CN
China
Prior art keywords
video
resolution
display screen
frame image
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210908234.0A
Other languages
Chinese (zh)
Other versions
CN115348437B (en)
Inventor
曾光
岳小龙
张波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zejing Xi'an Automotive Electronics Co ltd
Original Assignee
Zejing Xi'an Automotive Electronics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zejing Xi'an Automotive Electronics Co ltd filed Critical Zejing Xi'an Automotive Electronics Co ltd
Priority to CN202210908234.0A
Publication of CN115348437A
Application granted
Publication of CN115348437B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/398Synchronisation thereof; Control thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

The application discloses a video processing method, apparatus, device, and storage medium, belonging to the field of video processing. The method comprises the following steps: first adjusting a first center distance of the left and right display pictures of a display screen of a near-eye display device according to an acquired user pupil distance to obtain a second center distance related to the user pupil distance; then processing a first video to be displayed according to the first center distance and the second center distance to obtain a second video; and then displaying the second video through the display screen. In this way, the adjusted second center distance of the left and right display pictures, which is related to the user pupil distance, is obtained first, and the video to be displayed is then processed according to this second center distance, so that the overlapping area of each frame image in the processed video is adapted both to the adjusted center distance of the left and right display pictures and to the user pupil distance. Therefore, when the processed video is displayed through the display screen, the video viewed by the user through an eyepiece adapted to the user pupil distance has a better effect.

Description

Video processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of video processing, and in particular, to a video processing method, apparatus, device, and storage medium.
Background
With the development of scientific technology, near-eye display devices based on technologies such as VR (Virtual Reality), AR (Augmented Reality), MR (Mixed Reality) and the like can give immersive three-dimensional stereoscopic experience to users, and are widely applied to various fields. The near-eye display device comprises an eyepiece and a display screen, wherein the display screen usually displays a left display picture and a right display picture, a left image included in each frame of a video is displayed through the left display picture, a right image included in each frame of the video is displayed through the right display picture, a user can view an overlapping area of the left image and the right image of each frame displayed by the display screen through the eyepiece, and the overlapping area is a three-dimensional image viewed by the user.
At present, when displaying video, near-eye display devices usually reduce the discomfort of wearing the device to watch three-dimensional stereoscopic video by adjusting the eyepiece center distance so that it stays consistent with the user's pupil distance; however, the influence of the center distance of the left and right display pictures on the viewing experience is rarely considered. For example, because the center distance of the left and right display pictures affects the size of the overlapping area between the left image and the right image of each frame displayed on the display screen, if this center distance is inconsistent with the user's pupil distance, the overlapping area will not be adapted to the user's pupil distance, and the effect of the three-dimensional video viewed by the user through an eyepiece adapted to the user's pupil distance may be poor.
Disclosure of Invention
The application provides a video processing method, a video processing device, video processing equipment and a storage medium, wherein the video to be displayed can be processed, the overlapping area of each frame of image in the processed video is matched with the center distance of a left display picture and a right display picture after adjustment and the pupil distance of a user, and therefore the effect of the video watched by the user through an eyepiece matched with the pupil distance of the user is better. The technical scheme is as follows:
in a first aspect, a video processing method is provided, where the method includes:
acquiring the pupil distance of a user;
adjusting a first central distance of left and right display pictures of a display screen of the near-eye display equipment according to the user pupil distance to obtain a second central distance, wherein the second central distance is related to the user pupil distance;
processing a first video to be displayed according to the first center distance and the second center distance to obtain a second video, wherein the aspect ratio of the first video is the same as that of the display screen, and the resolution of the first video is the same as that of the display screen;
and displaying the second video through the display screen.
As an example, the processing a first video to be displayed according to the first center distance and the second center distance to obtain a second video includes:
adjusting the display resolution of the display screen according to the first center distance and the second center distance;
processing the first video according to the adjusted display resolution of the display screen to obtain a second video;
the displaying the second video through the display screen includes:
and displaying the second video through the display screen according to the adjusted display resolution of the display screen.
As an example, the adjusting the display resolution of the display screen according to the first center distance and the second center distance includes:
determining a target resolution corresponding to the second center distance according to the first center distance, the second center distance and the display resolution of the display screen;
and adjusting the display resolution of the display screen according to the target resolution, wherein the adjusted display resolution of the display screen is the target resolution.
As an example, the determining, according to the first center distance, the second center distance, and the display resolution of the display screen, a target resolution corresponding to the second center distance includes:
according to the first center distance, the second center distance and the display resolution of the display screen, determining the target resolution by the following formula:
w2 = w1 × PD1 / PD2
h2 = h1 × PD1 / PD2
wherein w2 is the lateral resolution included in the target resolution, w1 is the display lateral resolution included in the display resolution of the display screen, PD1 is the second center distance, PD2 is the first center distance, h2 is the longitudinal resolution included in the target resolution, and h1 is the display longitudinal resolution included in the display resolution of the display screen.
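As an illustration, this scaling can be sketched in a few lines, assuming (as one reading of the formula) that the target resolution is the display resolution scaled by the ratio of the second center distance to the first; all names and numeric values below are hypothetical:

```python
def target_resolution(w1, h1, pd1, pd2):
    """Scale the display resolution (w1 x h1) by the ratio of the
    second center distance pd1 to the first center distance pd2.
    The proportional form is an assumption; results are rounded to
    whole pixels because resolutions are pixel counts."""
    w2 = round(w1 * pd1 / pd2)
    h2 = round(h1 * pd1 / pd2)
    return w2, h2

# Hypothetical values: a 1920x1080 screen whose center distance is
# adjusted from 63 mm (first) to 60 mm (second)
print(target_resolution(1920, 1080, 60.0, 63.0))
```

If the center distance is unchanged (pd1 equals pd2), the target resolution equals the current display resolution, which matches the intent that processing is only needed after an adjustment.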
As an example, before the processing the first video to be displayed to obtain the second video, the method further includes:
acquiring the aspect ratio of a source video;
if the aspect ratio of the source video is different from that of the display screen, performing aspect ratio processing on the source video to obtain a third video, and if the aspect ratio of the source video is the same as that of the display screen, taking the source video as the third video;
and if the resolution of the third video is different from the display resolution of the display screen, performing resolution processing on the third video to obtain the first video, and if the resolution of the third video is the same as the display resolution of the display screen, taking the third video as the first video.
As an example, if the resolution of the third video is different from the display resolution of the display screen, performing resolution processing on the third video to obtain the first video includes:
if the resolution of the third video is greater than the display resolution of the display screen, performing resolution reduction processing on the third video to obtain the first video;
and if the resolution of the third video is smaller than the display resolution of the display screen, performing super-resolution processing on the third video to obtain the first video.
As an example, the performing resolution reduction processing on the third video to obtain the first video includes:
determining a resolution ratio of a resolution of the third video to a display resolution of the display screen;
determining at least one pixel extraction position in each frame of image included in the third video according to the resolution ratio;
and performing pixel extraction on pixels at least one pixel extraction position in each frame of image included in the third video, wherein the third video after pixel extraction is the first video.
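As an illustration, the pixel-extraction step can be sketched on a single row of pixels; the even-spacing scheme for the extraction positions is an assumption, and a practical implementation would also low-pass filter to limit aliasing:

```python
def extract_pixels(row, src_res, dst_res):
    """Reduce a row from src_res to dst_res pixels by removing the
    pixels at evenly spaced extraction positions computed from the
    resolution ratio src_res / dst_res. The position scheme is an
    illustrative assumption."""
    n_remove = src_res - dst_res
    step = src_res / n_remove
    # Extraction positions: centers of n_remove equal segments
    remove = {int((i + 0.5) * step) for i in range(n_remove)}
    return [p for idx, p in enumerate(row) if idx not in remove]

# Halving an 8-pixel row removes every other pixel
print(extract_pixels(list(range(8)), 8, 4))
```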
As an example, the performing super-resolution processing on the third video to obtain the first video includes:
for a first target frame image in the third video, determining an adjacent frame image of the first target frame image, wherein the first target frame image is any frame in the third video;
determining at least one pixel interpolation position in a first target frame image according to the resolution of the third video and the display resolution of the display screen;
determining an interpolation frame image corresponding to the first target frame image according to the first target frame image and the adjacent frame image;
and performing pixel interpolation on each pixel interpolation position in the first target frame image according to the interpolation frame image corresponding to the first target frame image, wherein the first target frame image after the pixel interpolation is the frame image corresponding to the first target frame image in the first video.
As an example, the frame interpolation image corresponding to the first target frame image includes a first frame interpolation image and/or a second frame interpolation image;
the determining, according to the first target frame image and the adjacent frame image, an interpolated frame image corresponding to the first target frame image includes:
determining the motion state of the first target frame image relative to the adjacent frame image, and determining the first frame interpolation image corresponding to the first target frame image according to the motion state;
and/or,
and extracting and matching the features of the first target frame image and the adjacent frame image, and fusing the matched features to obtain the second frame interpolation image corresponding to the first target frame image.
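As a toy illustration of the motion-state branch, the following estimates a one-dimensional global shift between the target frame and its neighbour; real systems estimate per-block motion vectors on two-dimensional frames, so everything here is an illustrative simplification:

```python
def global_shift(curr, neighbor, max_shift=2):
    """Find the circular shift of curr that best matches neighbor by
    minimizing the sum of absolute differences (SAD) over candidate
    shifts -- a 1-D stand-in for estimating the motion state between
    a target frame and its adjacent frame."""
    n = len(curr)
    return min(range(-max_shift, max_shift + 1),
               key=lambda s: sum(abs(curr[(i + s) % n] - neighbor[i])
                                 for i in range(n)))

# neighbor is curr shifted left by one position, so the estimate is 1
print(global_shift([1, 2, 3, 4, 5], [2, 3, 4, 5, 1]))
```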
As an example, before performing aspect ratio processing on the source video to obtain the third video if the aspect ratio of the source video is different from that of the display screen, or taking the source video as the third video if the aspect ratios are the same, the method further includes:
acquiring the frame rate of the source video and the field angle of an eyepiece of the near-eye display device;
if the frame rate of the source video is different from that of the display screen, performing frame rate processing on the source video to obtain a fourth video, and if the frame rate of the source video is the same as that of the display screen, determining that the source video is the fourth video;
if the field angle of the source video is different from that of the eyepiece, performing field angle processing on the fourth video to obtain a fifth video, and if the field angle of the source video is the same as that of the eyepiece, taking the fourth video as the fifth video;
if the aspect ratio of the source video is different from that of the display screen, the aspect ratio of the source video is processed to obtain a third video, and if the aspect ratio of the source video is the same as that of the display screen, the source video is the third video, including:
and if the aspect ratio of the source video is different from that of the display screen, performing aspect ratio processing on the fifth video to obtain a third video, and if the aspect ratio of the source video is the same as that of the display screen, taking the fifth video as the third video.
As an example, if the frame rate of the source video is different from the frame rate of the display screen, performing frame rate processing on the source video includes:
if the frame rate of the source video is greater than that of the display screen, deleting at least one frame of image in the source video;
if the frame rate of the source video is less than the frame rate of the display screen, determining at least one frame interpolation position in the source video, determining a frame interpolation image corresponding to a first frame interpolation position in the at least one frame interpolation position according to a previous frame image and a next frame image adjacent to the first frame interpolation position, and interpolating the determined frame interpolation image to the first frame interpolation position, wherein the first frame interpolation position is any one of the at least one frame interpolation position.
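As an illustration, the frame-deletion and frame-insertion branches can be sketched together as one resampler over a list of frames; the nearest-index scheme and the blend callback are assumptions, not the patent's exact rule:

```python
def retime(frames, src_fps, dst_fps, make_interp):
    """Resample a frame list from src_fps to dst_fps. When the source
    rate is higher, frames are dropped by nearest-index selection;
    when it is lower, each new frame is built from its two adjacent
    frames via make_interp, mirroring the previous-frame/next-frame
    rule described above."""
    n_out = round(len(frames) * dst_fps / src_fps)
    out = []
    for j in range(n_out):
        pos = j * src_fps / dst_fps          # position on source timeline
        i = int(pos)
        if pos == i or i + 1 >= len(frames):
            out.append(frames[min(i, len(frames) - 1)])
        else:
            out.append(make_interp(frames[i], frames[i + 1]))
    return out

# 60 -> 30 fps keeps every other frame (frames shown as numbers)
print(retime([0, 1, 2, 3], 60, 30, lambda a, b: (a + b) / 2))
```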
As an example, if the field angle of the source video is different from the field angle of the eyepiece, performing field angle processing on the fourth video to obtain a fifth video includes:
for a second target frame image in the fourth video, determining an energy value of each pixel point in the second target frame image, wherein the second target frame image is any one frame in the fourth video;
determining a path with the minimum energy value in the second target frame image according to the energy value of each pixel point in the second target frame image, wherein the path with the minimum energy value comprises at least one pixel;
if the field angle of the source video is smaller than that of the eyepiece, performing pixel interpolation on the second target frame image according to the at least one pixel, wherein the second target frame image after the pixel interpolation is a frame image corresponding to the second target frame image in the fifth video, determining a first cycle value, if the first cycle value does not meet a first preset condition, taking a frame image corresponding to the second target frame image in the fifth video as the second target frame image, jumping to a step of determining an energy value of each pixel point in the second target frame image until the first cycle value meets the first preset condition, so as to obtain a frame image corresponding to the second target frame image in the fifth video, wherein the first cycle value is used for indicating the number of times of performing the pixel interpolation on the second target frame image according to the at least one pixel included in the path with the smallest energy value;
if the field angle of the source video is larger than that of the eyepiece, performing pixel removal on at least one pixel in the second target frame image, wherein the second target frame image after pixel removal is a frame image corresponding to the second target frame image in the fifth video, determining a second cycle value, if the second cycle value does not satisfy a second preset condition, taking the frame image corresponding to the second target frame image in the fifth video as the second target frame image, and jumping to a step of determining an energy value of each pixel point in the second target frame image until the second cycle value satisfies the second preset condition to obtain a frame image corresponding to the second target frame image in the fifth video, wherein the second cycle value is used for indicating the number of times of performing pixel removal on the second target frame image according to the at least one pixel included in the path with the smallest energy value.
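The minimum-energy path described above is the classic seam-carving step. A small dynamic-programming sketch over an energy map (a list of rows, one seam pixel per row, moving at most one column between rows); removing the seam narrows the frame, while interpolating pixels along it widens the frame:

```python
def min_energy_seam(energy):
    """Return the column index of the minimum-energy vertical path in
    each row of an energy map, using dynamic programming. The energy
    function itself (e.g. a gradient magnitude) is outside this sketch."""
    rows, cols = len(energy), len(energy[0])
    cost = [energy[0][:]]
    for r in range(1, rows):
        prev = cost[-1]
        cost.append([
            energy[r][c] + min(prev[max(c - 1, 0):min(c + 2, cols)])
            for c in range(cols)
        ])
    # Backtrack from the cheapest bottom cell, staying within one column
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo = max(c - 1, 0)
        seam.append(min(range(lo, min(c + 2, cols)),
                        key=lambda c2: cost[r][c2]))
    return seam[::-1]

# Diagonal of low energy: the seam follows columns 0, 1, 2
print(min_energy_seam([[1, 9, 9], [9, 1, 9], [9, 9, 1]]))
```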
As an example, before adjusting a first center distance of left and right display screens of a display screen of a near-eye display device according to the user pupil distance, the method further includes:
if the difference value between the ocular central distance of the near-to-eye display device and the user interpupillary distance is larger than a first threshold value, adjusting the ocular central distance, wherein the adjusted ocular central distance is related to the user interpupillary distance.
In a second aspect, a video processing apparatus is provided, the apparatus including a first obtaining module, a first adjusting module, a first processing module, and a display module;
the first acquisition module is used for acquiring the interpupillary distance of the user;
the first adjusting module is used for adjusting a first central distance of left and right display pictures of a display screen of the near-eye display device according to the user pupil distance to obtain a second central distance, and the second central distance is related to the user pupil distance;
the first processing module is configured to process a first video to be displayed according to the first center distance and the second center distance to obtain a second video, where an aspect ratio of the first video is the same as that of the display screen, and a resolution of the first video is the same as a display resolution of the display screen;
and the display module is used for displaying the second video through the display screen.
As an example, the first processing module is configured to adjust a display resolution of the display screen according to the first center distance and the second center distance, and process the first video according to the adjusted display resolution of the display screen to obtain the second video;
and the display module is used for displaying the second video through the display screen according to the adjusted display resolution of the display screen.
As an example, the first processing module is configured to determine, according to the first center distance, the second center distance, and a display resolution of the display screen, a target resolution corresponding to the second center distance;
and adjusting the display resolution of the display screen according to the target resolution, wherein the adjusted display resolution of the display screen is the target resolution.
As an example, the first processing module is configured to determine the target resolution according to the first center distance, the second center distance, and the display resolution of the display screen by using the following formula:
w2 = w1 × PD1 / PD2
h2 = h1 × PD1 / PD2
wherein w2 is the lateral resolution included in the target resolution, w1 is the display lateral resolution included in the display resolution of the display screen, PD1 is the second center distance, PD2 is the first center distance, h2 is the longitudinal resolution included in the target resolution, and h1 is the display longitudinal resolution included in the display resolution of the display screen.
As an example, the apparatus further includes a second obtaining module, a second processing module, and a third processing module;
the second acquisition module is used for acquiring the aspect ratio of the source video;
the second processing module is configured to perform aspect ratio processing on the source video to obtain a third video if the aspect ratio of the source video is different from the aspect ratio of the display screen, and the source video is the third video if the aspect ratio of the source video is the same as the aspect ratio of the display screen;
the third processing module is configured to, if the resolution of the third video is different from the display resolution of the display screen, perform resolution processing on the third video to obtain the first video, and if the resolution of the third video is the same as the display resolution of the display screen, determine that the third video is the first video.
As an example, the third processing module is configured to perform resolution reduction processing on the third video to obtain the first video if the resolution of the third video is greater than the display resolution of the display screen;
and if the resolution of the third video is smaller than the display resolution of the display screen, performing super-resolution processing on the third video to obtain the first video.
As an example, the third processing module is configured to determine a resolution ratio of a resolution of the third video to a display resolution of the display screen;
determining at least one pixel extraction position in each frame of image included in the third video according to the resolution ratio;
and performing pixel extraction on pixels at least one pixel extraction position in each frame of image included in the third video, wherein the third video after pixel extraction is the first video.
As an example, the third processing module is configured to, for a first target frame image in the third video, determine a neighboring frame image of the first target frame image, where the first target frame image is any frame in the third video;
determining at least one pixel interpolation position in a first target frame image according to the resolution of the third video and the display resolution of the display screen;
determining an interpolation frame image corresponding to the first target frame image according to the first target frame image and the adjacent frame image;
and performing pixel interpolation on each pixel interpolation position in the first target frame image according to the frame interpolation image corresponding to the first target frame image, wherein the first target frame image after the pixel interpolation is the frame image corresponding to the first target frame image in the first video.
As an example, the third processing module is configured to determine, according to the first target frame image and the adjacent frame image, an interpolated frame image corresponding to the first target frame image, and includes:
determining the motion state of the first target frame image relative to the adjacent frame image, and determining the first frame interpolation image corresponding to the first target frame image according to the motion state;
and/or,
and performing feature extraction and matching on the first target frame image and the adjacent frame image, and fusing matched features to obtain the second frame interpolation image corresponding to the first target frame image.
As an example, the apparatus further includes a third obtaining module, a fourth processing module, and a fifth processing module;
the third obtaining module is configured to obtain a frame rate of the source video and a field angle of an eyepiece of the near-eye display device;
the fourth processing module is configured to, if the frame rate of the source video is different from the frame rate of the display screen, perform frame rate processing on the source video to obtain a fourth video, and if the frame rate of the source video is the same as the frame rate of the display screen, determine that the source video is the fourth video;
the fifth processing module is configured to perform, if the field angle of the source video is different from the field angle of the eyepiece, field angle processing on the fourth video to obtain a fifth video, and if the field angle of the source video is the same as the field angle of the eyepiece, the fourth video is the fifth video;
and the second processing module is used for performing aspect ratio processing on the fifth video to obtain a third video if the aspect ratio of the source video is different from that of the display screen, and the fifth video is the third video if the aspect ratio of the source video is the same as that of the display screen.
As an example, the fourth processing module is configured to delete at least one image in the source video if the frame rate of the source video is greater than the frame rate of the display screen;
if the frame rate of the source video is less than that of the display screen, determining at least one frame insertion position in the source video, determining a frame insertion image corresponding to a first frame insertion position in the at least one frame insertion position according to a previous frame image and a next frame image adjacent to the first frame insertion position, and inserting the determined frame insertion image into the first frame insertion position, wherein the first frame insertion position is any one of the at least one frame insertion position.
As an example, the fifth processing module is configured to determine, for a second target frame image in the fourth video, an energy value of each pixel point in the second target frame image, where the second target frame image is any frame in the fourth video;
determining a path with the minimum energy value in the second target frame image according to the energy value of each pixel point in the second target frame image, wherein the path with the minimum energy value comprises at least one pixel;
if the field angle of the source video is smaller than that of the eyepiece, performing pixel interpolation on the second target frame image according to the at least one pixel, wherein the second target frame image after the pixel interpolation is a frame image corresponding to the second target frame image in the fifth video, determining a first cycle value, if the first cycle value does not meet a first preset condition, taking a frame image corresponding to the second target frame image in the fifth video as the second target frame image, jumping to a step of determining an energy value of each pixel point in the second target frame image until the first cycle value meets the first preset condition, so as to obtain a frame image corresponding to the second target frame image in the fifth video, wherein the first cycle value is used for indicating the number of times of performing the pixel interpolation on the second target frame image according to the at least one pixel included in the path with the smallest energy value;
if the field angle of the source video is larger than that of the eyepiece, performing pixel removal on at least one pixel in the second target frame image, wherein the second target frame image after pixel removal is a frame image corresponding to the second target frame image in the fifth video, determining a second cycle value, if the second cycle value does not satisfy a second preset condition, taking the frame image corresponding to the second target frame image in the fifth video as the second target frame image, and jumping to a step of determining an energy value of each pixel point in the second target frame image until the second cycle value satisfies the second preset condition to obtain a frame image corresponding to the second target frame image in the fifth video, wherein the second cycle value is used for indicating the number of times of performing pixel removal on the second target frame image according to the at least one pixel included in the path with the smallest energy value.
As one example, the apparatus further comprises a second adjustment module;
and the second adjusting module is used for adjusting the ocular central distance if the difference value between the ocular central distance of the near-to-eye display equipment and the interpupillary distance of the user is greater than a first threshold value, and the adjusted ocular central distance is related to the interpupillary distance of the user.
In a third aspect, a computer device is provided, the computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the computer program, when executed by the processor, implementing the video processing method described above.
In a fourth aspect, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the video processing method described above.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
in the embodiment of the application, the user interpupillary distance is firstly acquired, the first central distance of left and right display pictures of a display screen of the near-eye display device is adjusted according to the acquired user interpupillary distance, the second central distance related to the user interpupillary distance is acquired, the first video to be displayed is processed according to the first central distance and the second central distance to acquire the second video, and then the second video is displayed through the display screen. The aspect ratio of the first video is the same as that of the display screen, and the resolution of the first video is the same as that of the display screen. Therefore, the first central distance of the left and right display pictures can be adjusted according to the user pupil distance to obtain the second central distance of the left and right display pictures after adjustment related to the user pupil distance, and then the video to be displayed is processed according to the first central distance of the left and right display pictures before adjustment and the second central distance of the left and right display pictures after adjustment, so that the overlapping area of the left image and the right image of each frame image in the processed video is adaptive to the central distance of the left and right display pictures after adjustment and the user pupil distance, and when the processed video is displayed through the display screen, the effect of the three-dimensional video viewed by the user through the eyepiece adaptive to the user pupil distance is better.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present application;
fig. 2 is a flowchart of another video processing method provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that reference to "a plurality" in this application means two or more. In the description of this application, "/" indicates an "or" relationship between the associated objects; for example, A/B may indicate either A or B. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, for convenience in clearly describing the technical solutions of the present application, the terms "first", "second", and the like are used to distinguish identical or similar items having substantially the same functions and effects. Those skilled in the art will appreciate that the terms "first", "second", and the like do not denote any order or relative importance.
Before explaining the embodiments of the present application in detail, an application scenario of the embodiments of the present application will be described.
The video processing method provided by the embodiment of the application can be applied to scenes in which a near-eye display device gives a user an immersive three-dimensional stereoscopic visual experience. For example, the method can adjust the center distance of the left and right display frames according to the user interpupillary distance, and then process the video to be displayed according to the center distance before and after adjustment, so that the overlapping area of each frame image in the processed video is adapted to both the adjusted center distance of the left and right display frames and the user interpupillary distance. The video viewed by the user through an eyepiece adapted to the user interpupillary distance therefore has a better effect.
Referring to fig. 1, fig. 1 is a flowchart of a video processing method according to an embodiment of the present disclosure. The video processing method may be applied to a computer device. The computer device may be the near-eye display device itself, or a device other than the near-eye display device, such as a terminal, a server, or an embedded device, that processes the video to be displayed by the near-eye display device; the terminal may be a desktop or a tablet computer. As shown in fig. 1, the method comprises the following steps:
step 101, a computer device acquires a user pupil distance.
The user interpupillary distance is the distance between the pupils of the user's two eyes. Its size affects the field of view viewable by the user, for example the size of the viewable overlapping area, where the overlapping area is the region perceived by the user as a three-dimensional stereoscopic image.
For example, each frame image of the source video to be displayed includes a left image and a right image. The near-eye display device includes an eyepiece and a display screen capable of displaying a left display frame and a right display frame; the device displays the left image of each frame image through the left display frame and the right image through the right display frame, and the user views the overlapping area of the left image displayed by the left display frame and the right image displayed by the right display frame through the eyepiece. The center distance between the left and right display frames affects the size of the overlapping area they display. If the size of the overlapping area viewable by the user does not match the size of the overlapping area displayed by the left and right display frames, that is, if the user interpupillary distance does not match the center distance of the left and right display frames, the stereoscopic video viewed by the user through an eyepiece adapted to the user interpupillary distance may have a poor effect. Therefore, before the near-eye display device displays the source video to be displayed, the user interpupillary distance should be acquired so that the center distance of the left and right display frames can be adjusted accordingly.
Step 102, the computer device adjusts the first center distance of the left and right display frames of the display screen of the near-eye display device according to the user interpupillary distance to obtain a second center distance.
Wherein the second center distance is related to the user's interpupillary distance. The second center distance is related to the user's interpupillary distance, which means that the second center distance is adapted to the user's interpupillary distance, for example, the second center distance is the same as or close to the user's interpupillary distance. For example, the difference between the second center distance and the pupil distance of the user is smaller than or equal to a second threshold, where the second threshold is a preset smaller error distance.
As an example, before adjusting the first center distance, the computer device may first obtain the first center distance and determine whether the difference between the first center distance and the user interpupillary distance is greater than a second threshold, where the second threshold is a small preset error tolerance. If the difference is greater than the second threshold, the computer device adjusts the first center distance according to the user interpupillary distance to obtain the second center distance. If the difference is less than or equal to the second threshold, the first center distance is not adjusted and the following steps 103-104 are not performed; in this case the first center distance is already related to the user interpupillary distance.
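The threshold decision above can be sketched as follows. This is a minimal illustration; the function name, the millimetre units, and the concrete threshold value are assumptions for the example, not values taken from this application:

```python
def adjust_center_distance(first_center_mm, user_ipd_mm, threshold_mm=1.0):
    """Return the center distance of the left and right display frames to use.

    threshold_mm plays the role of the "second threshold": if the first
    center distance is already within it of the user's interpupillary
    distance, no adjustment is needed.
    """
    if abs(first_center_mm - user_ipd_mm) <= threshold_mm:
        # difference within the second threshold: keep the first center distance
        return first_center_mm
    # otherwise adjust toward the user's interpupillary distance, so the
    # resulting second center distance is related to it
    return user_ipd_mm
```

The same shape of check applies to the eyepiece center distance with the "first threshold" in place of `threshold_mm`.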
As one example, the case where the difference between the first center distance and the user interpupillary distance is greater than the second threshold includes both the first center distance being less than the user interpupillary distance and the first center distance being greater than it. If the initial first center distance is smaller than the user interpupillary distance, then although the first center distance does not match the user interpupillary distance, the overlapping area viewable by the user contains the overlapping area displayed by the left and right display frames at the first center distance. In this case the influence on the stereoscopic video viewed by the user through the eyepiece is small, and the first center distance may be left unadjusted.
As an example, the interpupillary distance of a typical adult male is 59-72 mm, and that of an adult female is 56-66 mm. Initially, the first center distance of the left and right display frames of the near-eye display device may therefore be set to any value greater than or equal to 72 mm. The acquired user interpupillary distance is then less than or equal to the first center distance, and the second center distance obtained after adjustment is likewise less than or equal to the first center distance, so the first center distance only ever needs to be reduced when it is adjusted.
As an example, the computer device may further obtain an aspect ratio, a display resolution and a frame rate of a display screen of the near-eye display device, a field angle of an eyepiece, and the like, which is not limited in this embodiment of the application.
As one example, the computer device may also adjust the eyepiece center distance of the eyepiece of the near-eye display device before adjusting the first center distance. For example, the computer device determines whether the difference between the user interpupillary distance and the eyepiece center distance is greater than a first threshold, where the first threshold is a small preset error tolerance; if so, it adjusts the eyepiece center distance so that the adjusted eyepiece center distance is related to the user interpupillary distance.
Step 103, the computer device processes the first video to be displayed according to the first center distance and the second center distance to obtain a second video.
The aspect ratio of the first video is the same as that of the display screen, and the resolution of the first video is the same as the display resolution of the display screen. This ensures that, before the first video is adjusted according to the second center distance of the adjusted left and right display frames, the first video is already adapted to the aspect ratio and display resolution of the display screen of the near-eye display device, that is, the first video displays well on the display screen.
For example, the computer device may first adjust the display resolution of the display screen according to the first center distance and the second center distance, and then process the first video to be displayed according to the adjusted display resolution of the display screen to obtain the second video.
As an example, the center distance of the left and right display screens of the display screen has a corresponding relationship with the display resolution of the display screen, and generally, the larger the center distance of the left and right display screens is, the larger the display resolution is, so that after the first center distance is adjusted, the display resolution of the display screen should be adjusted correspondingly. Typically, the resolution includes a lateral resolution and a longitudinal resolution. For example, the display resolution includes a display horizontal resolution and a display vertical resolution.
For example, the computer device may determine a target resolution corresponding to the second center distance according to the first center distance, the second center distance, and the display resolution of the display screen, and then adjust the display resolution of the display screen to the target resolution. The adjusted display resolution of the display screen is then adapted to the second center distance and the user interpupillary distance, so the second video obtained by processing the first video according to the adjusted display resolution is also adapted to the second center distance and the user interpupillary distance.
As an example, in the case where the initial first center distance is smaller than the user pupil distance, that is, the first center distance is smaller than the second center distance, even if the first center distance does not fit the user pupil distance, the first center distance has a small influence on the stereoscopic video viewed by the user through the eyepiece, and thus the first center distance may not be adjusted in this case.
As an example, if the first center distance is greater than the second center distance, the computer device may determine the target resolution by the following equations (1) and (2) according to the first center distance, the second center distance, and the display resolution of the display screen:
w2 = w1 × (PD1 / PD2)    (1)

h2 = h1 × (PD1 / PD2)    (2)

wherein w2 is the lateral resolution included in the target resolution, w1 is the display lateral resolution included in the display resolution of the display screen, PD1 is the second center distance, PD2 is the first center distance, h2 is the longitudinal resolution included in the target resolution, and h1 is the display longitudinal resolution included in the display resolution of the display screen.
As can be seen from the above equations (1) and (2), if the first center distance is greater than the second center distance, the target resolution is smaller than the display resolution of the display screen, that is, if the first center distance of the left and right display frames is greater than the interpupillary distance of the user initially, the display resolution of the display screen after adjustment is smaller than the display resolution of the display screen before adjustment.
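A sketch of computing the target resolution under equations (1) and (2); the function name, rounding to the nearest integer pixel, and the sample values are illustrative assumptions, not specified by the application:

```python
def target_resolution(w1, h1, pd2_first_mm, pd1_second_mm):
    """Scale the display resolution (w1, h1) by the ratio of the second
    center distance PD1 to the first center distance PD2, per equations
    (1) and (2). Rounding to whole pixels is an assumption."""
    ratio = pd1_second_mm / pd2_first_mm
    return round(w1 * ratio), round(h1 * ratio)
```

For example, with a 1920x1080 display, a first center distance of 72 mm, and a second center distance of 64 mm, the target resolution comes out smaller than the display resolution, consistent with the observation above.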
For example, the computer device may perform resolution processing on each frame image in the first video according to the adjusted display resolution of the display screen, that is, according to the target resolution, to obtain a second video, where the resolution of the second video is the same as the adjusted display resolution of the display screen, that is, the resolution of the second video is the target resolution.
As an example, when the first center distance is greater than the user interpupillary distance, the display resolution before adjustment is greater than the display resolution after adjustment. Since the resolution of the first video is the same as the display resolution before adjustment, the resolution of the first video is greater than the adjusted display resolution. Therefore, resolution reduction processing may be performed on each frame image in the first video according to the adjusted display resolution to obtain the second video.
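A minimal sketch of resolution reduction on a single image (left or right) represented as a row-major list of pixel rows. Nearest-neighbour sampling is an illustrative stand-in here; the application's own reduction method is detailed in step 216 of the fig. 2 embodiment:

```python
def reduce_resolution(pixels, w2, h2):
    """Reduce one image to w2 x h2 by nearest-neighbour sampling.

    pixels: list of h1 rows, each a list of w1 pixel values.
    Assumes w2 <= w1 and h2 <= h1 (resolution reduction only).
    """
    h1, w1 = len(pixels), len(pixels[0])
    return [[pixels[i * h1 // h2][j * w1 // w2] for j in range(w2)]
            for i in range(h2)]
```

In the embodiment, both the left and right images of every frame image would be passed through the same reduction.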
As an example, a specific implementation process of performing resolution reduction processing on the first video will be described in detail in step 216 of the embodiment in fig. 2 below, and details of the embodiment of the present application are not repeated here.
It should be noted that, in the embodiment of the present application, each frame of image of a video includes a left image and a right image, and processing each frame of image in the video refers to processing both the left image and the right image of each frame of image in the video.
As an example, before processing a first video to be displayed to obtain a second video, the computer device may process the source video according to the aspect ratio and the display resolution of the display screen to obtain the first video, where the aspect ratio of the obtained first video is the same as the aspect ratio of the display screen, and the resolution of the first video is the same as the display resolution of the display screen. For example, the computer device may process the source video according to the aspect ratio and the display resolution of the display screen, and obtain the first video through the following steps:
step 1, computer equipment acquires the aspect ratio of a source video.
As an example, the computer device may further obtain a source resolution, a frame rate, a field angle, or the like of the source video, which is not limited in this embodiment.
Step 2, the computer device determines whether the aspect ratio of the source video is the same as that of the display screen. If the aspect ratios are different, the computer device performs aspect ratio processing on the source video to obtain a third video; if they are the same, the source video is determined to be the third video.
If the aspect ratio of the source video is different from the aspect ratio of the display screen, the aspect ratio processing on the source video may include scaling processing and stretching processing.
As an example, before step 2, the computer device may further process the frame rate and field angle of the source video according to the frame rate of the display screen and the field angle of the eyepiece. This ensures that, before the first video is adjusted according to the second center distance of the adjusted left and right display frames, the frame rate, aspect ratio, resolution, and field angle of the first video are all adapted to the frame rate, aspect ratio, and display resolution of the display screen and the field angle of the eyepiece, so that the first video displays well on the display screen and is viewed well by the user through the eyepiece.
For example, the computer device may first obtain the frame rate of the source video and the field angle of the eyepiece of the near-eye display device, and determine whether the frame rate of the source video is the same as the frame rate of the display screen. If the frame rates are different, the computer device performs frame rate processing on the source video to obtain a fourth video; if they are the same, the source video is determined to be the fourth video. The computer device then determines whether the field angle of the source video is the same as the field angle of the eyepiece. If the field angles are different, the computer device performs field angle processing on the fourth video to obtain a fifth video; if they are the same, the fourth video is determined to be the fifth video.
For example, if the frame rate of the source video is different from the frame rate of the display screen, the frame rate processing on the source video may include frame deletion processing and frame insertion processing. If the field angle of the source video is different from the field angle of the eyepiece, the performing of the field angle processing on the fourth video may include ascending field angle processing and descending field angle processing.
As an example, a specific implementation process of performing frame deletion processing or frame insertion processing on a source video will be described in detail in step 207 of the embodiment of fig. 2 below, which is not described herein again in this embodiment of the present application.
As an example, a specific implementation process of performing the increasing field angle processing and the decreasing field angle processing on the fourth video will be described in detail in step 210 of the embodiment of fig. 2 below, and details of the embodiment of the present application are not repeated here.
As an example, frame deletion or frame interpolation is performed on the source video without changing its source resolution, so the resolution of the fourth video is the same as the source resolution of the source video. Field-angle-increasing and field-angle-decreasing processing change the resolution of the fourth video, so the resolution of the fifth video differs from that of the fourth video and from the source resolution of the source video. Scaling and stretching do not change the resolution of the fifth video, so the resolution of the third video is the same as that of the fifth video but differs from the source resolution of the source video.
As an example, after processing the frame rate and the field angle of the source video, the step 2 may include: and if the aspect ratio of the source video is determined to be different from that of the display screen, performing aspect ratio processing on the fifth video to obtain a third video, and if the aspect ratio of the source video is determined to be the same as that of the display screen, determining that the fifth video is the third video.
Step 3, the computer device determines whether the resolution of the third video is the same as the display resolution of the display screen. If the resolutions are different, the computer device performs resolution processing on the third video to obtain the first video; if they are the same, the third video is determined to be the first video.
As one example, after the frame rate and the field angle of the source video are processed, the resolution of the third video is different from the source resolution of the source video.
For example, if the resolution of the third video is different from the display resolution of the display screen, the resolution processing on the third video may include resolution reduction processing and super-resolution processing.
As an example, a specific implementation process of performing the resolution reduction processing and the super-resolution processing on the third video will be described in detail in step 216 of the embodiment in fig. 2 below, and no further details are given in this embodiment of the application.
Thus, through the steps 1 to 3, the first video can be obtained, the aspect ratio of the first video is the same as that of the display screen, and the resolution of the first video is the same as that of the display screen.
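Steps 1 to 3 above can be summarized as a small planning function that decides which processing a source video needs. This is a minimal sketch assuming integer pixel dimensions; the function and step names are illustrative:

```python
def plan_source_processing(src_w, src_h, disp_w, disp_h):
    """Return which of steps 2 and 3 apply to a source video of
    src_w x src_h for a display screen of disp_w x disp_h."""
    steps = []
    # step 2: aspect ratios differ (cross-multiply to avoid float division)
    if src_w * disp_h != disp_w * src_h:
        steps.append("aspect_ratio")   # scaling or stretching
    # step 3: resolutions differ
    if (src_w, src_h) != (disp_w, disp_h):
        steps.append("resolution")     # reduction or super-resolution
    return steps
```

For instance, a 1280x720 source for a 1920x1080 screen shares the 16:9 aspect ratio and only needs resolution processing.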
Step 104, the computer device displays the second video through the display screen.
For example, the computer device may display the second video through the display screen according to the adjusted display resolution of the display screen.
Because the adjusted display resolution of the display screen is adapted to the user interpupillary distance, the second video is also adapted to the user interpupillary distance. Therefore, when the second video is displayed through the display screen according to the adjusted display resolution, the display effect for the user is better; that is, the effect of the second video viewed by the user through an eyepiece adapted to the user interpupillary distance is better.
In the embodiment of the application, the user interpupillary distance is first acquired, and the first center distance of the left and right display frames of the display screen of the near-eye display device is adjusted according to it to obtain a second center distance related to the user interpupillary distance. The first video to be displayed is then processed according to the first center distance and the second center distance to obtain a second video, and the second video is displayed through the display screen. The aspect ratio of the first video is the same as that of the display screen, and the resolution of the first video is the same as the display resolution of the display screen. In this way, the video to be displayed is processed according to the center distance of the left and right display frames before and after adjustment, so that the overlapping area of the left image and the right image of each frame image in the processed video is adapted to both the adjusted center distance and the user interpupillary distance. When the processed video is displayed through the display screen, the stereoscopic video viewed by the user through an eyepiece adapted to the user interpupillary distance therefore has a better effect.
Referring to fig. 2, fig. 2 is a flowchart of another video processing method according to an embodiment of the present disclosure. The computer device may be a near-eye display device, or may be other devices except the near-eye display device, such as a terminal, a server, or an embedded device, that process a video to be displayed of the near-eye display device, where the terminal may be a desktop or a tablet computer. As shown in fig. 2, the method comprises the steps of:
step 201, a computer device obtains a pupil distance of a user, video parameters of a source video, display parameters of a display screen of a near-eye display device, and a field angle and an eyepiece center distance of an eyepiece.
The pupil distance of the user refers to the distance between pupils of two eyes of the user.
Each frame of image of the source video comprises a left image and a right image, the near-eye display device comprises an eyepiece and a display screen capable of displaying a left display picture and a right display picture, the near-eye display device displays the left image of each frame of image through the left display picture, the right image of each frame of image is displayed through the right display picture, and a user can watch the overlapping area of the left image displayed by the left display picture and the right image displayed by the right display picture through the eyepiece.
The video parameters are used to indicate basic features of the source video, and may include an aspect ratio, a source resolution, a frame rate, a field angle, or the like.
The display parameters are used for indicating the capability of the near-eye display device to display the video, and the display parameters may include the aspect ratio of the display screen, the display resolution, the frame rate, or the first center distance of the left and right display frames. The near-eye display device may display the video according to the display parameters.
As one example, the computer device can detect a user interpupillary distance input instruction and acquire the user interpupillary distance according to it. For example, the computer device may include a display screen through which it detects the user interpupillary distance input instruction. The instruction can be triggered by the user through an interpupillary distance input operation on the display screen. The operation type may be a click operation, a press operation, a voice operation, a gesture operation, or the like, which is not limited in this embodiment of the application.
Alternatively, the computer device may include a measurement module by which the computer device can automatically measure the user's interpupillary distance. For example, the measurement module is a measurement instrument capable of measuring the pupil distance of the user, and the like, which is not limited in this embodiment of the present application.
Step 202, the computer device determines whether the center distance of the eyepieces is the same as the interpupillary distance of the user.
It should be noted that, in the embodiment of the present application, it is described as an example that it is determined whether the center distance of the eyepiece is the same as the user's interpupillary distance, in other embodiments, the computer device may also determine whether a difference between the center distance of the eyepiece and the user's interpupillary distance is greater than a first threshold, where the first threshold is a preset smaller error distance.
Step 203, if the computer device determines that the eyepiece center distance is different from the user interpupillary distance, it adjusts the eyepiece center distance according to the user interpupillary distance.
And the adjusted center distance of the ocular is related to the interpupillary distance of the user. The fact that the adjusted ocular center distance is related to the user's interpupillary distance means that the adjusted ocular center distance is matched with the user's interpupillary distance, for example, the adjusted ocular center distance is the same as or close to the user's interpupillary distance. For example, the difference between the adjusted center distance of the ocular and the interpupillary distance of the user is less than or equal to the first threshold.
As an example, if the eyepiece center distance is the same as the user's interpupillary distance, then the eyepiece center distance is not adjusted.
As an example, if the computer device determines that the difference between the central distance of the eyepieces and the pupil distance of the user is greater than the first threshold, the central distance of the eyepieces is adjusted according to the pupil distance of the user. And if the difference value between the central distance of the ocular and the user interpupillary distance is smaller than or equal to a first threshold value, not adjusting the central distance of the ocular, wherein the unadjusted central distance of the ocular is related to the user interpupillary distance.
Step 204, if the computer device determines that the eyepiece center distance is the same as the user interpupillary distance, it determines whether the first center distance of the left and right display frames of the display screen is the same as the user interpupillary distance.
It should be noted that, in the embodiment of the present application, it is described by taking an example of determining whether a first central distance between left and right display frames of a display screen is the same as a pupil distance of a user, and in other embodiments, the computer device may also determine whether a difference value between the first central distance and the pupil distance of the user is greater than a second threshold, where the second threshold is a preset smaller error distance.
Step 205, if the computer device determines that the first central distance is different from the pupil distance of the user, the computer device adjusts the first central distance according to the pupil distance of the user to obtain a second central distance.
Wherein the second center distance is related to the user's interpupillary distance. The second center distance is related to the user's interpupillary distance, which means that the second center distance is adapted to the user's interpupillary distance, for example, the second center distance is the same as or close to the user's interpupillary distance. For example, the difference between the second center distance and the user's interpupillary distance is smaller than or equal to a second threshold, where the second threshold is a preset smaller error distance.
As an example, if it is determined that the first center distance is the same as the user's interpupillary distance, the first center distance is not adjusted.
As an example, if the computer device determines that the difference between the first center distance and the user interpupillary distance is greater than the second threshold, it adjusts the first center distance according to the user interpupillary distance. If the difference is less than or equal to the second threshold, the first center distance is not adjusted and the following steps 206-219 are not performed; in this case the first center distance is already related to the user interpupillary distance.
That the first center distance is different from the user's interpupillary distance covers both the case where the first center distance is smaller than the interpupillary distance and the case where it is larger.
As an example, in the case where the initial first center distance is smaller than the user's interpupillary distance, that is, the first center distance is smaller than the second center distance, the mismatch has only a small influence on the stereoscopic video the user views through the eyepiece, so the first center distance may be left unadjusted in this case.
For example, if the first center distance is the same as the user's interpupillary distance, the first center distance is not adjusted, in which case the first center distance is related to the user's interpupillary distance.
In step 206, the computer device determines whether the frame rate of the source video is the same as the frame rate of the display screen.
Step 207, if the computer device determines that the frame rate of the source video is different from the frame rate of the display screen, performing frame rate processing on the source video to obtain a fourth video.
It should be noted that, in the embodiments of the present application, each frame image of a video includes a left image and a right image, and processing a frame image in the video refers to processing both the left image and the right image of that frame.
If the frame rate of the source video is different from the frame rate of the display screen, the frame rate processing on the source video may include frame deletion processing and frame insertion processing. For example, if the frame rate of the source video is greater than the frame rate of the display screen, frame deletion processing is performed on the source video to obtain a fourth video, and if the frame rate of the source video is less than the frame rate of the display screen, frame interpolation processing is performed on the source video to obtain the fourth video.
As one example, frame deletion processing on the source video refers to deleting at least one frame image from the source video. For example, if the frame rate of the source video is greater than the frame rate of the display screen, the computer device determines the frame rate difference between the frame rate of the source video and the frame rate of the display screen, determines at least one frame image to delete according to that difference, and deletes it to obtain the fourth video. The frame rate difference indicates how many frame images are to be deleted from the source video per unit of time.
For example, each deleted frame image may be any frame in the source video, and the deletions may be spaced at a fixed frame interval determined by the frame rate difference and the frame rate of the source video. For example, if the frame rate of the source video is 30 fps and the frame rate of the display screen is 20 fps, the frame rate difference indicates that 10 frame images are to be deleted from the source video per second, so the computer device may delete 1 frame image out of every 3 frame images of the source video, resulting in a fourth video with a frame rate of 20 fps.
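The frame-deletion arithmetic above can be sketched in a few lines. This is an illustrative sketch under the 30 fps to 20 fps assumption from the example; `frames_to_keep` is a hypothetical helper, not part of the patented method.

```python
# Illustrative sketch of frame deletion: keep the source frames whose
# timestamps land in a new output slot at the display frame rate, and
# drop the rest. (Hypothetical helper, not the patent's implementation.)

def frames_to_keep(src_fps: int, dst_fps: int, num_frames: int) -> list:
    """Indices of source frames to keep so the output plays at dst_fps."""
    assert src_fps >= dst_fps > 0
    return [i for i in range(num_frames) if (i * dst_fps) % src_fps < dst_fps]

# 30 fps -> 20 fps over one second of video: 1 frame in every 3 is dropped.
kept = frames_to_keep(30, 20, 30)
```

For the 30 fps example, every frame whose index is 1 modulo 3 is dropped, leaving 20 frames per second, consistent with the "delete 1 out of every 3" rule stated above.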
As an example, the frame interpolation processing on the source video refers to inserting interpolated frame images between adjacent frame images in the source video. For example, at least one frame interpolation position in the source video is determined; for a first frame interpolation position among them (any one of the at least one frame interpolation position), an interpolated frame image is determined from the previous frame image and the next frame image adjacent to that position, and the determined interpolated frame image is inserted at that position.
For example, a frame interpolation position may correspond to one or more positions between adjacent frame images; that is, one or more interpolated frame images may be inserted between the adjacent frame images corresponding to that position.
For example, an interpolated frame image may be inserted between adjacent frame images. Suppose the frame rate of the source video is 20 fps and the frame rate of the display screen is 30 fps; the at least one frame interpolation position may then indicate that an interpolated frame image is inserted after every 2 frames. For example, if each second of the source video includes frame 0 through frame 19, the computer device may insert interpolated frame images between frames 1 and 2, between frames 3 and 4, between frames 5 and 6, and so on up to between frames 17 and 18, and after frame 19, resulting in a fourth video with a frame rate of 30 fps.
As one example, the first frame interpolation position may correspond to one or more interpolated frame images. For example, the interpolated frame image corresponding to the first frame interpolation position may include a third interpolated frame image and/or a fourth interpolated frame image. The computer device may determine a motion state between the previous frame image and the next frame image adjacent to the first frame interpolation position, and determine the third interpolated frame image according to that motion state. And/or, the computer device may extract and match features of the previous frame image and the next frame image adjacent to the first frame interpolation position, and fuse the matched features to obtain the fourth interpolated frame image. Of course, the interpolated frame image may also be obtained in other manners, which is not limited in this application.
For example, the optical flow between the previous frame image and the next frame image may be determined from the change of their pixels in the time domain and the correlation between the two images. The motion state, which may include a rotation matrix and an offset, is then determined from that optical flow, and the previous frame image or the next frame image is affine-transformed according to the rotation matrix and offset to obtain the third interpolated frame image corresponding to the first frame interpolation position.
For example, feature extraction and matching are performed on the previous frame image and the next frame image adjacent to the first frame interpolation position, the color, texture, shape, or spatial relationship of the matched pixels in the two images is obtained, and these are fused to obtain the fourth interpolated frame image corresponding to the first frame interpolation position.
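The interpolation step can be sketched with a simple mid-point blend standing in for the optical-flow and feature-fusion methods described above. `blend` and `interpolate_20_to_30` are hypothetical helpers; a production implementation would use motion-compensated interpolation as the text describes.

```python
import numpy as np

# Illustrative sketch of frame interpolation: raise 20 fps to 30 fps by
# inserting one interpolated frame after every 2 source frames. A simple
# mid-point blend stands in for the optical-flow / feature-fusion methods
# described above; the helpers are hypothetical.

def blend(prev_frame: np.ndarray, next_frame: np.ndarray) -> np.ndarray:
    """Mid-point blend of two adjacent frames."""
    return (prev_frame + next_frame) / 2.0

def interpolate_20_to_30(frames: list) -> list:
    out = []
    for i, f in enumerate(frames):
        out.append(f)
        if i % 2 == 1:  # after frames 1, 3, 5, ... insert a blended frame
            nxt = frames[i + 1] if i + 1 < len(frames) else f
            out.append(blend(f, nxt))
    return out

# One second of 20 fps video (frame i filled with value i) becomes 30 frames.
frames = [np.full((4, 4), float(i), dtype=np.float32) for i in range(20)]
out = interpolate_20_to_30(frames)
```

The insertion pattern matches the example above: one new frame after frames 1, 3, ..., 17, and one after frame 19.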
Performing the frame rate processing on the source video, that is, the frame deletion processing or the frame interpolation processing, does not change the source resolution of the source video, so the resolution of the fourth video is the same as the source resolution of the source video.
In step 208, if the computer device determines that the frame rate of the source video is the same as the frame rate of the display screen, the source video is a fourth video.
In step 209, the computer device determines whether the angle of view of the source video is the same as the angle of view of the eyepiece.
And step 210, if the computer equipment determines that the field angle of the source video is different from the field angle of the eyepiece, performing field angle processing on the fourth video to obtain a fifth video.
If the field angle of the source video is different from the field angle of the eyepiece, the field angle processing on the fourth video may include ascending field angle processing and descending field angle processing.
For example, if the field angle of the source video is different from the field angle of the eyepiece, then for a second target frame image in the fourth video, the computer device may determine the energy value of each pixel point in the second target frame image and, from those energy values, determine the path with the smallest energy value in the image. If the field angle of the source video is smaller than the field angle of the eyepiece, the computer device performs field-angle-raising processing on the fourth video to obtain the fifth video; if it is larger, the computer device performs field-angle-lowering processing on the fourth video to obtain the fifth video. The second target frame image is any frame in the fourth video, and the path with the minimum energy value includes at least one pixel.
As an example, the computer device may determine the gray value of each pixel point in the second target frame image, and then determine the energy value of each pixel point from its gray value. The energy value of a pixel point indicates the importance of that pixel in the image, and equals the sum of the gradient of its gray value in the transverse direction of the image and the gradient of its gray value in the longitudinal direction of the image.
As an example, the path with the smallest energy value in the second target frame image is a connected path of pixels running from top to bottom or from left to right. For example, a top-to-bottom path with the minimum energy value may include one pixel from each row of the second target frame image, so the number of pixels in the path equals the number of rows of pixels in the image. If the abscissa of the pixel in a certain row on the path is x, the abscissa of the pixel in the previous row of the path is x-1, x, or x+1.
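The energy map and minimum-energy path above are essentially the seam-carving construction. A sketch follows, with hypothetical helpers `energy_map` and `min_energy_seam`: the energy is the sum of absolute gray-gradient magnitudes, and the dynamic program enforces the x-1 / x / x+1 adjacency rule between rows.

```python
import numpy as np

# Illustrative sketch (hypothetical helpers, not the patent's implementation).
# energy_map: energy of a pixel = |horizontal gray gradient| + |vertical gray gradient|.
# min_energy_seam: one pixel per row, with adjacent rows within one column
# of each other (x-1, x, x+1), found by dynamic programming.

def energy_map(gray: np.ndarray) -> np.ndarray:
    gx = np.abs(np.diff(gray, axis=1, append=gray[:, -1:]))
    gy = np.abs(np.diff(gray, axis=0, append=gray[-1:, :]))
    return gx + gy

def min_energy_seam(energy: np.ndarray) -> np.ndarray:
    """Column index, per row, of the minimum-energy top-to-bottom path."""
    h, w = energy.shape
    cost = energy.astype(np.float64)  # cumulative minimum energy
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()
    seam = np.empty(h, dtype=int)
    seam[-1] = int(cost[-1].argmin())
    for y in range(h - 2, -1, -1):  # backtrack within the adjacency window
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(cost[y, lo:hi].argmin())
    return seam

# The low-energy diagonal is traced by the DP: one column index per row.
energy = np.array([[1., 9., 9.],
                   [9., 1., 9.],
                   [9., 9., 1.]])
seam = min_energy_seam(energy)  # [0, 1, 2]
```

Field-angle raising then duplicates the pixels on this path (widening the image by one pixel per row), and field-angle lowering removes them, as the following paragraphs describe.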
As one example, performing the field-angle-raising processing on the fourth video refers to performing pixel interpolation on each frame image in the fourth video. For example, for a second target frame image in the fourth video, if the field angle of the source video is smaller than the field angle of the eyepiece, the computer device performs pixel interpolation on the second target frame image according to the at least one pixel included in the path with the smallest energy value; the interpolated image is the frame image corresponding to the second target frame image in the fifth video. The computer device then determines a first cycle value, which indicates how many times pixel interpolation has been performed on the second target frame image according to the at least one pixel included in the minimum-energy path. If the first cycle value does not satisfy a first preset condition, the frame image obtained from the interpolation is taken as the new second target frame image and processing jumps back to the step of determining the energy value of each pixel point, until the first cycle value satisfies the first preset condition, at which point the frame image corresponding to the second target frame image in the fifth video is obtained.
For example, interpolating pixels of the second target frame image according to the at least one pixel means copying each pixel included in the minimum-energy path and inserting the copy at a preset position. For example, the resolution of the fourth video is the same as the source resolution of 200 x 200; after each pixel on the path is inserted into the preset position laterally adjacent to it, the resolution of the frame image obtained from one round of pixel interpolation is 200 x 201.
For example, the first cycle value is an integer, the first preset condition is an integer threshold, and the first cycle value is incremented by 1 after each round of pixel interpolation. Initially the first cycle value is 0, and it is updated to 1 after pixel interpolation is performed on the second target frame image. If the first cycle value does not satisfy the first preset condition, the frame image obtained from the interpolation is taken as the second target frame image and processing jumps back to the step of determining the energy value of each pixel point, until the first cycle value satisfies the first preset condition; the frame image for which the condition is satisfied is the final result.
For example, the computer device may determine the first preset condition according to a display resolution of the display screen, a field angle of the source video, and a field angle of the eyepiece. For example, the computer device determines the first preset condition according to the display resolution of the display screen, the field angle of the source video, and the field angle of the eyepiece by the following formula (3):
[Formula (3) appears as an image in the original publication and is not reproduced here.]
where k_1 is the first preset condition, X_res is the display lateral resolution included in the display resolution, X_fov is the field angle of the eyepiece, and X_video is the field angle of the source video.
As an example, X_res in the above formula (3) may also be the display vertical resolution, which is not limited in the embodiments of the present application.
As one example, performing the field-angle-lowering processing on the fourth video refers to performing pixel removal on each frame image in the fourth video. For example, for a second target frame image in the fourth video, if the field angle of the source video is greater than the field angle of the eyepiece, the computer device removes the at least one pixel included in the path with the smallest energy value from the second target frame image; the resulting image is the frame image corresponding to the second target frame image in the fifth video. The computer device then determines a second cycle value, which indicates how many times pixel removal has been performed on the second target frame image according to the at least one pixel included in the minimum-energy path. If the second cycle value does not satisfy a second preset condition, the frame image obtained from the removal is taken as the new second target frame image and processing jumps back to the step of determining the energy value of each pixel point, until the second cycle value satisfies the second preset condition, at which point the frame image corresponding to the second target frame image in the fifth video is obtained.
For example, removing pixels of the second target frame image according to the at least one pixel means deleting each pixel included in the minimum-energy path. For example, if the source resolution is 200 x 200, the resolution of the frame image obtained after one round of pixel removal is 200 x 199.
For example, the second cycle value is an integer, the second preset condition is an integer threshold, and the second cycle value is incremented by 1 after each round of pixel removal. The computer device may determine the second preset condition according to the display resolution of the display screen, the field angle of the source video, and the field angle of the eyepiece, for example by the following formula (4):
[Formula (4) appears as an image in the original publication and is not reproduced here.]
where k_2 is the second preset condition, X_res is the display lateral resolution included in the display resolution, X_fov is the field angle of the eyepiece, and X_video is the field angle of the source video.
Performing the field angle processing on the fourth video, that is, the field-angle-raising processing or the field-angle-lowering processing, changes its resolution, so the resolution of the fifth video is different from the resolution of the fourth video and different from the source resolution of the source video.
In step 211, if the computer device determines that the field angle of the source video is the same as the field angle of the eyepiece, the fourth video is a fifth video.
At step 212, the computer device determines whether the aspect ratio of the source video is the same as the aspect ratio of the display screen.
Step 213, if the computer device determines that the aspect ratio of the source video is different from the aspect ratio of the display screen, the computer device performs aspect ratio processing on the fifth video to obtain a third video.
If the aspect ratio of the source video is different from the aspect ratio of the display screen, the aspect ratio processing on the fifth video may include scaling processing and stretching processing.
For example, if the aspect ratio of the source video is greater than the aspect ratio of the display screen, scaling processing is performed on the fifth video to obtain the third video; if the aspect ratio of the source video is less than the aspect ratio of the display screen, stretching processing is performed on the fifth video to obtain the third video.
Scaling or stretching the fifth video refers to scaling or stretching the sizes of the left image and the right image of each frame in the fifth video; the resolution of the fifth video does not change. That is, performing the aspect ratio processing, i.e. the scaling processing or the stretching processing, on the fifth video does not change its resolution, so the resolution of the third video is the same as the resolution of the fifth video and different from the source resolution of the source video.
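The branch above reduces to a comparison of aspect ratios. The following is a minimal sketch; `aspect_ratio_op` is a hypothetical helper following the rule stated above.

```python
# Illustrative sketch of the aspect-ratio branch: scale when the source is
# wider than the display, stretch when it is narrower. Hypothetical helper.

def aspect_ratio_op(src_w: int, src_h: int, disp_w: int, disp_h: int) -> str:
    src_ar, disp_ar = src_w / src_h, disp_w / disp_h
    if src_ar > disp_ar:
        return "scale"    # source wider than the display: scaling processing
    if src_ar < disp_ar:
        return "stretch"  # source narrower than the display: stretching processing
    return "none"

op = aspect_ratio_op(1920, 800, 1920, 1080)  # a 2.4:1 source on a 16:9 screen
```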
In step 214, if the computer device determines that the aspect ratio of the source video is the same as the aspect ratio of the display screen, the fifth video is the third video.
In step 215, the computer device determines whether the resolution of the third video is the same as the display resolution of the display screen.
In step 216, if the computer device determines that the resolution of the third video is different from the display resolution of the display screen, the computer device performs resolution processing on the third video to obtain the first video.
The aspect ratio of the first video is the same as the aspect ratio of the display screen, the resolution of the first video is the same as the display resolution of the display screen, and the frame rate of the first video is the same as the frame rate of the display screen.
If the resolution of the third video is different from the display resolution of the display screen, the resolution processing on the third video may include resolution reduction processing and super-resolution processing. For example, if the resolution of the third video is greater than the display resolution of the display screen, the resolution reduction processing is performed on the third video to obtain the first video, and if the resolution of the third video is less than the display resolution of the display screen, the super-resolution processing is performed on the third video to obtain the first video.
As an example, a specific implementation of the resolution reduction processing on the third video may include: the computer device determines the resolution ratio of the resolution of the third video to the display resolution of the display screen, determines at least one pixel extraction position in each frame image of the third video according to that ratio, and performs pixel extraction at those positions; the third video after pixel extraction is the first video.
For example, the resolution ratio may be a ratio of a horizontal resolution of the third video to a display horizontal resolution, or a ratio of a vertical resolution of the third video to a display vertical resolution, and the resolution ratio is a common divisor of the horizontal resolution of the third video and the vertical resolution of the third video.
For example, if the resolution ratio is n, the at least one pixel extraction position determined from the resolution ratio indicates that one pixel is kept out of every n pixels. Each frame image included in the third video is then down-sampled: one pixel is extracted from every n pixels in the transverse direction and one pixel from every n pixels in the longitudinal direction. The down-sampled third video is the first video.
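The pixel-extraction down-sampling above reduces to strided indexing. A sketch follows with the hypothetical helper `downsample`, assuming the resolution ratio n divides both dimensions.

```python
import numpy as np

# Illustrative sketch of resolution reduction by pixel extraction: keep one
# pixel out of every n in each direction, where n is the resolution ratio
# between the third video and the display. Hypothetical helper.

def downsample(frame: np.ndarray, n: int) -> np.ndarray:
    """Keep every n-th pixel in the transverse and longitudinal directions."""
    return frame[::n, ::n]

frame = np.arange(200 * 200).reshape(200, 200)
small = downsample(frame, 2)  # 200 x 200 -> 100 x 100
```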
As an example, a specific implementation of the super-resolution processing on the third video may include: for a first target frame image in the third video, the computer device may determine an adjacent frame image of the first target frame image, determine at least one pixel interpolation position in the first target frame image according to the resolution of the third video and the display resolution of the display screen, determine an interpolated frame image corresponding to the first target frame image according to the first target frame image and the adjacent frame image, and perform pixel interpolation at each pixel interpolation position according to that interpolated frame image. The first target frame image after pixel interpolation is the frame image corresponding to the first target frame image in the first video, and the first target frame image is any frame in the third video.
For example, the at least one pixel interpolation position includes a horizontal pixel interpolation position and/or a vertical pixel interpolation position. The horizontal position may be determined from the difference between the horizontal resolution of the third video and the display horizontal resolution, and the vertical position from the difference between the vertical resolution of the third video and the display vertical resolution; pixel interpolation is then performed at the horizontal and vertical positions respectively, according to the interpolated frame image corresponding to the first target frame image.
For example, there may be one or more interpolated frame images corresponding to the first target frame image. For example, the interpolated frame image corresponding to the first target frame image includes a first interpolated frame image and/or a second interpolated frame image. The computer device may determine a motion state of the first target frame image relative to the adjacent frame image and determine the first interpolated frame image according to that motion state; and/or extract and match features of the first target frame image and the adjacent frame image, and fuse the matched features to obtain the second interpolated frame image. Of course, the interpolated frame image may also be obtained in other manners, which is not limited in this application.
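As a sketch of pixel interpolation at computed lateral positions, the hypothetical helper `upsample_width` below inserts new columns at evenly spaced positions and fills them with a horizontal-neighbour average, standing in for the interpolation-frame pixels described above.

```python
import numpy as np

# Illustrative sketch of super-resolution by pixel interpolation: new columns
# are inserted at evenly spaced lateral positions and filled with the average
# of their horizontal neighbours. The neighbour average stands in for the
# interpolation-frame pixels described above; hypothetical helper.

def upsample_width(frame: np.ndarray, target_w: int) -> np.ndarray:
    h, w = frame.shape
    extra = target_w - w
    # evenly spaced column indices after which a new column is inserted
    positions = [((i + 1) * w) // (extra + 1) for i in range(extra)]
    cols = []
    for x in range(w):
        cols.append(frame[:, x])
        if (x + 1) in positions:
            nxt = frame[:, x + 1] if x + 1 < w else frame[:, x]
            cols.append((frame[:, x] + nxt) / 2.0)
    return np.stack(cols, axis=1)

frame = np.tile(np.arange(4, dtype=float), (2, 1))  # each row is [0, 1, 2, 3]
wide = upsample_width(frame, 6)  # width 4 -> width 6
```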
Step 217, if the computer device determines that the resolution of the third video is the same as the display resolution of the display screen, the third video is the first video.
Thus, after the source video is processed through the above steps 206 to 217, the obtained first video is adapted to the frame rate, aspect ratio, and display resolution of the display screen and to the field angle of the eyepiece, so the first video displays better on the display screen and looks better to the user through the eyepiece.
In addition, on the basis of ensuring that the frame rate, aspect ratio, and display resolution of the first video are all adapted to the display screen and that the field angle is adapted to the eyepiece, the computer device also considers the influence of the second center distance, which is related to the user's interpupillary distance, on the size of the overlapping area of the first video displayed by the display screen. For example, the computer device may process the first video according to the second center distance, so that the video the user views through an eyepiece adapted to the user's interpupillary distance has a better effect.
Step 218, the computer device processes the first video according to the first center distance and the second center distance to obtain a second video.
For example, the display resolution of the display screen is adjusted according to the first center distance and the second center distance, and then the first video to be displayed is processed according to the adjusted display resolution of the display screen to obtain the second video, where the resolution of the second video is the adjusted display resolution of the display screen.
For example, the computer device may determine a target resolution corresponding to the second center distance according to the first center distance, the second center distance, and the display resolution of the display screen, and then adjust the display resolution of the display screen to that target resolution. The adjusted display resolution of the display screen is adapted to the second center distance and the user's interpupillary distance, and the resolution of the second video is the target resolution corresponding to the second center distance.
For example, the computer device may perform resolution processing on the left image and the right image of each frame in the first video to obtain the second video.
Because the display resolution of the adjusted display screen is adapted to the second center distance and the user pupil distance, the second video obtained by processing the first video according to the display resolution of the adjusted display screen is also adapted to the second center distance and the user pupil distance.
Step 219, the computer device displays the second video through the display screen.
For example, the computer device displays the second video through the display screen according to the adjusted display resolution of the display screen.
Because the adjusted display resolution of the display screen is adapted to the user's interpupillary distance and the second video is adapted to the user's interpupillary distance, displaying the second video on the display screen at the adjusted display resolution produces a better display effect, and the second video viewed by the user through the eyepiece adapted to the user's interpupillary distance has a better effect.
In this way, the eyepiece center distance is adjusted according to the user's interpupillary distance, and the source video is subjected to frame rate processing, field angle processing, aspect ratio processing, and resolution processing according to the display parameters of the display screen and the field angle of the eyepiece; that is, the source video is optimized to obtain the first video. The first video is adapted to both the display screen and the eyepiece, so the first video displays better on the display screen and looks better to the user through the eyepiece.
In addition, the first center distance of the left and right display frames can be adjusted according to the user's interpupillary distance to obtain the second center distance, which is related to the user's interpupillary distance. The video to be displayed is then processed according to the first center distance before the adjustment and the second center distance after the adjustment, so that the overlapping area of the left image and the right image of each frame image in the processed video matches both the adjusted center distance of the left and right display frames and the user's interpupillary distance. When the processed video is displayed on the display screen, the stereoscopic video viewed by the user through the eyepiece adapted to the user's interpupillary distance therefore has a better effect.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure. The video processing apparatus may be implemented by software, hardware or a combination of both as part or all of a computer device, which may be the computer device shown in fig. 4 below. Referring to fig. 3, the apparatus includes: a first obtaining module 301, a first adjusting module 302, a first processing module 303 and a display module 304.
The first obtaining module 301 is configured to obtain a pupil distance of the user.
The first adjusting module 302 is configured to adjust a first center distance of left and right display frames of a display screen of the near-eye display device according to the pupil distance of the user to obtain a second center distance, where the second center distance is related to the pupil distance of the user.
The first processing module 303 is configured to process a first video to be displayed according to the first center distance and the second center distance to obtain a second video, where an aspect ratio of the first video is the same as an aspect ratio of the display screen, and a resolution of the first video is the same as a display resolution of the display screen.
And the display module 304 is configured to display the second video through the display screen.
As an example, the first processing module 303 is configured to adjust a display resolution of the display screen according to the first center distance and the second center distance, and process the first video according to the adjusted display resolution of the display screen to obtain a second video;
and the display module 304 is configured to display the second video through the display screen according to the adjusted display resolution of the display screen.
As an example, the first processing module 303 is configured to determine, according to the first center distance, the second center distance, and the display resolution of the display screen, a target resolution corresponding to the second center distance;
and adjusting the display resolution of the display screen according to the target resolution, wherein the adjusted display resolution of the display screen is the target resolution.
As an example, the first processing module 303 is configured to determine the target resolution according to the first center distance, the second center distance, and the display resolution of the display screen by the following formula:
w2 = w1 × PD1 / PD2

h2 = h1 × PD1 / PD2

wherein w2 is the lateral resolution included in the target resolution, w1 is the display lateral resolution included in the display resolution of the display screen, PD1 is the second center distance, PD2 is the first center distance, h2 is the longitudinal resolution included in the target resolution, and h1 is the display longitudinal resolution included in the display resolution of the display screen.
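As an illustrative sketch (not part of the original disclosure), the target-resolution computation above can be expressed as follows. The formula images are not reproduced in this text, so the assumed relationship is that the target resolution scales the display resolution by the ratio of the second center distance to the first; the function name and the truncating rounding are choices of this sketch.

```python
def target_resolution(w1, h1, pd1, pd2):
    """Compute the target resolution (w2, h2) from the display resolution
    (w1, h1), the second center distance pd1, and the first center distance
    pd2. Assumed relationship: both axes scale by pd1 / pd2."""
    w2 = int(w1 * pd1 / pd2)  # lateral resolution of the target resolution
    h2 = int(h1 * pd1 / pd2)  # longitudinal resolution of the target resolution
    return w2, h2

# e.g. a 1920x1080 screen whose center distance is adjusted from 70 to 63 (mm)
# yields a target resolution of 1728x972
```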
As an example, the apparatus further includes a second obtaining module 305, a second processing module 306, a third processing module 307;
a second obtaining module 305, configured to obtain an aspect ratio of the source video;
the second processing module 306 is configured to perform aspect ratio processing on the source video to obtain a third video if the aspect ratio of the source video is different from that of the display screen, and the source video is the third video if the aspect ratio of the source video is the same as that of the display screen;
the third processing module 307 is configured to perform resolution processing on the third video to obtain the first video if the resolution of the third video is different from the display resolution of the display screen, and if the resolution of the third video is the same as the display resolution of the display screen, the third video is the first video.
As an example, the third processing module 307 is configured to, if the resolution of the third video is greater than the display resolution of the display screen, perform resolution reduction processing on the third video to obtain a first video;
and if the resolution of the third video is smaller than the display resolution of the display screen, performing super-resolution processing on the third video to obtain the first video.
As an example, the third processing module 307 is configured to determine a resolution ratio of a resolution of the third video to a display resolution of the display screen;
determining at least one pixel extraction position in each frame of image included in the third video according to the resolution ratio;
and performing pixel extraction on pixels of at least one pixel extraction position in each frame of image included in the third video, wherein the third video after the pixel extraction is the first video.
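A minimal sketch of this decimation step follows. The helper names are hypothetical, and the patent does not fix how extraction positions are chosen beyond their dependence on the resolution ratio; evenly spaced positions are assumed here.

```python
def extraction_positions(src, dst):
    """Indices of the src - dst evenly spaced pixel positions to remove so
    that the remaining pixels match the target size dst."""
    n_remove = src - dst
    if n_remove <= 0:
        return set()
    step = src / n_remove
    return {int((i + 0.5) * step) for i in range(n_remove)}

def reduce_frame(frame, dst_w, dst_h):
    """Remove pixels at the extraction positions in each row and column.
    `frame` is a list of rows of pixel values (height x width)."""
    drop_y = extraction_positions(len(frame), dst_h)
    drop_x = extraction_positions(len(frame[0]), dst_w)
    return [[p for x, p in enumerate(row) if x not in drop_x]
            for y, row in enumerate(frame) if y not in drop_y]
```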
As an example, the third processing module 307 is configured to determine, for a first target frame image in the third video, a neighboring frame image of the first target frame image, where the first target frame image is any frame in the third video;
determining at least one pixel interpolation position in the first target frame image according to the resolution of the third video and the display resolution of the display screen;
determining an interpolation frame image corresponding to the first target frame image according to the first target frame image and the adjacent frame image;
and performing pixel interpolation on each pixel interpolation position in the first target frame image according to the frame interpolation image corresponding to the first target frame image, wherein the first target frame image after the pixel interpolation is the frame image corresponding to the first target frame image in the first video.
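The per-row pixel interpolation can be sketched as below (illustrative only). How the interpolation-frame image is produced — motion estimation and/or feature matching with the adjacent frame — is abstracted into the `interp_row` argument, and the evenly spaced placement of interpolation positions is an assumption of this sketch.

```python
def interpolate_row(row, dst_w, interp_row):
    """Widen one image row from len(row) to dst_w pixels by inserting, at
    evenly spaced interpolation positions, values taken from the matching
    positions of an interpolation-frame image row (interp_row)."""
    n_insert = dst_w - len(row)
    if n_insert <= 0:
        return list(row)
    step = len(row) / n_insert
    insert_after = {int((i + 0.5) * step) for i in range(n_insert)}
    out = []
    for x, p in enumerate(row):
        out.append(p)
        if x in insert_after:
            out.append(interp_row[x])  # value supplied by the interpolation frame
    return out
```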
As an example, the third processing module 307 is configured to determine a motion state of the first target frame image relative to the adjacent frame image, and determine a first frame interpolation image corresponding to the first target frame image according to the motion state;
and/or,
and performing feature extraction and matching on the first target frame image and the adjacent frame image, and fusing matched features to obtain a second frame interpolation image corresponding to the first target frame image.
As an example, the apparatus further comprises a third obtaining module 308, a fourth processing module 309, a fifth processing module 310;
a third obtaining module 308, configured to obtain a frame rate of a source video and a field angle of an eyepiece of the near-eye display device;
a fourth processing module 309, configured to perform frame rate processing on the source video to obtain a fourth video if the frame rate of the source video is different from the frame rate of the display screen, and if the frame rate of the source video is the same as the frame rate of the display screen, the source video is the fourth video;
the fifth processing module 310 is configured to perform, if the field angle of the source video is different from the field angle of the eyepiece, field angle processing on the fourth video to obtain a fifth video, and if the field angle of the source video is the same as the field angle of the eyepiece, the fourth video is the fifth video;
the second processing module 306 is configured to perform aspect ratio processing on the fifth video to obtain a third video if the aspect ratio of the source video is different from the aspect ratio of the display screen, and if the aspect ratio of the source video is the same as the aspect ratio of the display screen, the fifth video is the third video.
As an example, the fourth processing module 309 is configured to delete at least one image in the source video if the frame rate of the source video is greater than the frame rate of the display screen;
if the frame rate of the source video is less than the frame rate of the display screen, determining at least one frame interpolation position in the source video, determining a frame interpolation image corresponding to a first frame interpolation position according to a previous frame image and a next frame image adjacent to the first frame interpolation position for the first frame interpolation position in the at least one frame interpolation position, and interpolating the determined frame interpolation image to the first frame interpolation position, wherein the first frame interpolation position is any one of the at least one frame interpolation position.
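The frame-rate step can be sketched compactly as follows. The `interpolate` callback is hypothetical and stands in for the previous/next-frame interpolation described above; the timeline arithmetic is a choice of this sketch.

```python
def convert_frame_rate(frames, src_fps, dst_fps, interpolate):
    """Resample a frame sequence from src_fps to dst_fps. When dropping
    (src_fps > dst_fps) the nearest earlier source frame is kept; when
    inserting (src_fps < dst_fps) a target timestamp falling between two
    source frames is filled by interpolate(prev, nxt, t) with 0 < t < 1."""
    duration = len(frames) / src_fps   # seconds covered by the input
    n_out = round(duration * dst_fps)  # frames in the output
    out = []
    for i in range(n_out):
        pos = i * src_fps / dst_fps    # position on the source timeline
        lo = int(pos)
        if pos == lo or lo + 1 >= len(frames):
            out.append(frames[min(lo, len(frames) - 1)])
        else:
            out.append(interpolate(frames[lo], frames[lo + 1], pos - lo))
    return out
```

For example, doubling 2 fps to 4 fps inserts one blended frame between each adjacent pair, while halving 4 fps to 2 fps keeps every other frame.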
As an example, the fifth processing module 310 is configured to determine, for a second target frame image in a fourth video, an energy value of each pixel point in the second target frame image, where the second target frame image is any frame in the fourth video;
determining a path with the minimum energy value in the second target frame image according to the energy value of each pixel point in the second target frame image, wherein the path with the minimum energy value comprises at least one pixel;
if the field angle of the source video is smaller than that of the eyepiece, pixel interpolation is performed on the second target frame image according to the at least one pixel, and the second target frame image after pixel interpolation is the frame image corresponding to the second target frame image in the fifth video; a first cycle value is determined, the first cycle value indicating the number of times pixel interpolation has been performed on the second target frame image according to the at least one pixel included in the path with the minimum energy value; if the first cycle value does not meet a first preset condition, the frame image corresponding to the second target frame image in the fifth video is taken as the second target frame image and the process jumps back to the step of determining the energy value of each pixel point in the second target frame image, until the first cycle value meets the first preset condition, so as to obtain the frame image corresponding to the second target frame image in the fifth video;
if the field angle of the source video is larger than that of the eyepiece, pixel removal is performed on the at least one pixel in the second target frame image, and the second target frame image after pixel removal is the frame image corresponding to the second target frame image in the fifth video; a second cycle value is determined, the second cycle value indicating the number of times pixel removal has been performed on the second target frame image according to the at least one pixel included in the path with the minimum energy value; if the second cycle value does not meet a second preset condition, the frame image corresponding to the second target frame image in the fifth video is taken as the second target frame image and the process jumps back to the step of determining the energy value of each pixel point in the second target frame image, until the second cycle value meets the second preset condition, so as to obtain the frame image corresponding to the second target frame image in the fifth video.
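The minimum-energy-path search underlying this field-angle step is the classic seam-carving dynamic program; a sketch is given below. The description does not fix the energy function (a gradient magnitude is typical), so `energy` is taken as a precomputed 2-D grid of per-pixel values, and the function name is hypothetical.

```python
def min_energy_seam(energy):
    """Find the vertical path of minimum total energy, one pixel per row,
    where each step moves to the same, left-adjacent, or right-adjacent
    column. Returns seam[y] = column chosen in row y."""
    rows, cols = len(energy), len(energy[0])
    cost = [list(energy[0])]  # cost[y][x]: cheapest path ending at (y, x)
    back = []                 # back[y][x]: predecessor column in row y
    for y in range(1, rows):
        prev = cost[-1]
        row_cost, row_back = [], []
        for x in range(cols):
            lo, hi = max(0, x - 1), min(cols - 1, x + 1)
            best = min(range(lo, hi + 1), key=lambda i: prev[i])
            row_cost.append(energy[y][x] + prev[best])
            row_back.append(best)
        cost.append(row_cost)
        back.append(row_back)
    # backtrack from the cheapest endpoint in the last row
    x = min(range(cols), key=lambda i: cost[-1][i])
    seam = [x]
    for y in range(rows - 2, -1, -1):
        x = back[y][x]
        seam.append(x)
    seam.reverse()
    return seam
```

Removing the seam's pixel from each row narrows the frame by one column per pass (field angle too large); duplicating or interpolating along it widens the frame (field angle too small), which is why the step loops until the cycle value meets its preset condition.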
As an example, the apparatus further comprises a second adjustment module 311;
and the second adjusting module 311 is configured to adjust the eyepiece center distance if the difference between the eyepiece center distance of the near-eye display device and the user's pupil distance is greater than a first threshold, where the adjusted eyepiece center distance is related to the user's pupil distance.
It should be noted that the video processing apparatus provided in the foregoing embodiment is illustrated only by the above division of functional modules. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above.
Each functional unit and module in the above embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used to limit the protection scope of the embodiments of the present application.
The video processing apparatus and the video processing method provided in the above embodiments belong to the same concept, and for specific working processes of units and modules and technical effects brought by the working processes in the above embodiments, reference may be made to the method embodiments, and details are not described here.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure. As shown in fig. 4, the computer device includes: a processor 401, a memory 402, and a computer program 403 stored in the memory 402 and executable on the processor 401. The steps in the video processing method in the above embodiments are implemented when the computer program 403 is executed by the processor 401.
The computer device may be the computer device in embodiment 1 or embodiment 2 described above. The computer device may be a near-eye display device, or a desktop computer, a portable computer, a network server, a palmtop computer, a mobile phone, a tablet computer, a wireless terminal device, a communication device, or an embedded device; the embodiment of the present application does not limit the type of the computer device. Those skilled in the art will appreciate that fig. 4 is merely an example of a computer device and does not limit the computer device, which may include more or fewer components than shown, a combination of some components, or different components, such as input/output devices, network access devices, etc.
The processor 401 may be a Central Processing Unit (CPU); the processor 401 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The memory 402 may be, in some embodiments, an on-chip or off-chip memory of the computer device, such as a cache, an SRAM (Static Random-Access Memory), a DRAM (Dynamic Random-Access Memory), or the like. In other embodiments, the memory 402 may be an external storage device of the computer device, such as a plug-in hard drive, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash memory card (Flash Card). Further, the memory 402 may include both the internal storage units (on-chip or off-chip memory) of the computer device and external storage devices. The memory 402 is used to store an operating system, application programs, a boot loader (Boot Loader), data, and other programs. The memory 402 may also be used to temporarily store data that has been output or is to be output.
An embodiment of the present application further provides a computer device, where the computer device includes: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, the processor implementing the steps of any of the various method embodiments described above when executing the computer program.
The embodiments of the present application also provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps in the above-mentioned method embodiments can be implemented.
The embodiments of the present application provide a computer program product, which when run on a computer causes the computer to execute the steps of the above-mentioned method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the above method embodiments may be implemented by a computer program, which may be stored in a computer readable storage medium and used by a processor to implement the steps of the above method embodiments. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or apparatus capable of carrying computer program code to a photographing apparatus/terminal device, a recording medium, computer Memory, ROM (Read-Only Memory), RAM (Random Access Memory), CD-ROM (Compact Disc Read-Only Memory), magnetic tape, floppy disk, optical data storage device, etc. The computer-readable storage medium referred to herein may be a non-volatile storage medium, in other words, a non-transitory storage medium.
It should be understood that all or part of the steps for implementing the above embodiments may be implemented by software, hardware, firmware or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The computer instructions may be stored in the computer-readable storage medium described above.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/computer device and method may be implemented in other ways. For example, the above-described apparatus/computer device embodiments are merely illustrative, and for example, a module or a unit may be divided into only one logical function, and may be implemented in other ways, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (16)

1. A method of video processing, the method comprising:
acquiring the pupil distance of a user;
adjusting a first central distance of left and right display pictures of a display screen of the near-eye display equipment according to the user pupil distance to obtain a second central distance, wherein the second central distance is related to the user pupil distance;
processing a first video to be displayed according to the first center distance and the second center distance to obtain a second video, wherein the aspect ratio of the first video is the same as that of the display screen, and the resolution of the first video is the same as that of the display screen;
and displaying the second video through the display screen.
2. The method of claim 1, wherein the processing a first video to be displayed according to the first center distance and the second center distance to obtain a second video comprises:
adjusting the display resolution of the display screen according to the first center distance and the second center distance;
processing the first video according to the adjusted display resolution of the display screen to obtain the second video;
the displaying the second video through the display screen includes:
and displaying the second video through the display screen according to the adjusted display resolution of the display screen.
3. The method of claim 2, wherein the adjusting the display resolution of the display screen based on the first center distance and the second center distance comprises:
determining a target resolution corresponding to the second center distance according to the first center distance, the second center distance and the display resolution of the display screen;
and adjusting the display resolution of the display screen according to the target resolution, wherein the adjusted display resolution of the display screen is the target resolution.
4. The method of claim 3, wherein determining the target resolution corresponding to the second center distance based on the first center distance, the second center distance, and the display resolution of the display screen comprises:
according to the first center distance, the second center distance and the display resolution of the display screen, determining the target resolution by the following formula:
w2 = w1 × PD1 / PD2

h2 = h1 × PD1 / PD2

wherein w2 is the lateral resolution included in the target resolution, w1 is the display lateral resolution included in the display resolution of the display screen, PD1 is the second center distance, PD2 is the first center distance, h2 is the longitudinal resolution included in the target resolution, and h1 is the display longitudinal resolution included in the display resolution of the display screen.
5. The method of claim 1, wherein prior to processing the first video to be displayed to obtain the second video, the method further comprises:
acquiring the aspect ratio of a source video;
if the aspect ratio of the source video is different from that of the display screen, performing aspect ratio processing on the source video to obtain a third video, and if the aspect ratio of the source video is the same as that of the display screen, taking the source video as the third video;
if the resolution of the third video is different from the display resolution of the display screen, performing resolution processing on the third video to obtain the first video, and if the resolution of the third video is the same as the display resolution of the display screen, the third video is the first video.
6. The method of claim 5, wherein if the resolution of the third video is different from the display resolution of the display screen, performing resolution processing on the third video to obtain the first video comprises:
if the resolution of the third video is larger than the display resolution of the display screen, performing resolution reduction processing on the third video to obtain the first video;
and if the resolution of the third video is smaller than the display resolution of the display screen, performing super-resolution processing on the third video to obtain the first video.
7. The method of claim 6, wherein the performing the resolution reduction on the third video to obtain the first video comprises:
determining a resolution ratio of a resolution of the third video to a display resolution of the display screen;
determining at least one pixel extraction position in each frame of image included in the third video according to the resolution ratio;
and performing pixel extraction on pixels at least one pixel extraction position in each frame of image included in the third video, wherein the third video after pixel extraction is the first video.
8. The method of claim 6, wherein the super-resolution processing the third video to obtain the first video comprises:
for a first target frame image in the third video, determining an adjacent frame image of the first target frame image, wherein the first target frame image is any frame in the third video;
determining at least one pixel interpolation position in the first target frame image according to the resolution of the third video and the display resolution of the display screen;
determining an interpolation frame image corresponding to the first target frame image according to the first target frame image and the adjacent frame image;
and performing pixel interpolation on each pixel interpolation position in the first target frame image according to the frame interpolation image corresponding to the first target frame image, wherein the first target frame image after the pixel interpolation is the frame image corresponding to the first target frame image in the first video.
9. The method according to claim 8, wherein the frame interpolation image corresponding to the first target frame image comprises a first frame interpolation image and/or a second frame interpolation image;
determining an interpolation frame image corresponding to the first target frame image according to the first target frame image and the adjacent frame image, including:
determining the motion state of the first target frame image relative to the adjacent frame image, and determining the first frame interpolation image corresponding to the first target frame image according to the motion state;
and/or,
and extracting and matching the features of the first target frame image and the adjacent frame image, and fusing the matched features to obtain the second frame interpolation image corresponding to the first target frame image.
10. The method of claim 5, wherein if the aspect ratio of the source video is different from the aspect ratio of the display screen, performing aspect ratio processing on the source video to obtain a third video, and if the aspect ratio of the source video is the same as the aspect ratio of the display screen, the source video is before the third video, the method further comprising:
acquiring the frame rate of the source video and the field angle of an eyepiece of the near-eye display device;
if the frame rate of the source video is different from that of the display screen, performing frame rate processing on the source video to obtain a fourth video, and if the frame rate of the source video is the same as that of the display screen, determining that the source video is the fourth video;
if the field angle of the source video is different from that of the eyepiece, performing field angle processing on the fourth video to obtain a fifth video, and if the field angle of the source video is the same as that of the eyepiece, taking the fourth video as the fifth video;
if the aspect ratio of the source video is different from that of the display screen, the aspect ratio of the source video is processed to obtain a third video, and if the aspect ratio of the source video is the same as that of the display screen, the source video is the third video, including:
and if the aspect ratio of the source video is different from that of the display screen, performing aspect ratio processing on the fifth video to obtain a third video, and if the aspect ratio of the source video is the same as that of the display screen, taking the fifth video as the third video.
11. The method of claim 10, wherein if the frame rate of the source video is different from the frame rate of the display screen, performing frame rate processing on the source video comprises:
if the frame rate of the source video is greater than that of the display screen, deleting at least one frame of image in the source video;
if the frame rate of the source video is less than the frame rate of the display screen, determining at least one frame interpolation position in the source video, determining a frame interpolation image corresponding to a first frame interpolation position in the at least one frame interpolation position according to a previous frame image and a next frame image adjacent to the first frame interpolation position, and interpolating the determined frame interpolation image to the first frame interpolation position, wherein the first frame interpolation position is any one of the at least one frame interpolation position.
12. The method according to claim 10, wherein if the field angle of the source video is different from the field angle of the eyepiece, performing field angle processing on the fourth video to obtain a fifth video, comprises:
for a second target frame image in the fourth video, determining an energy value of each pixel point in the second target frame image, wherein the second target frame image is any one frame in the fourth video;
determining a path with the minimum energy value in the second target frame image according to the energy value of each pixel point in the second target frame image, wherein the path with the minimum energy value comprises at least one pixel;
if the field angle of the source video is smaller than that of the eyepiece, performing pixel interpolation on the second target frame image according to the at least one pixel, wherein the second target frame image after the pixel interpolation is a frame image corresponding to the second target frame image in the fifth video, determining a first cycle value, if the first cycle value does not meet a first preset condition, taking a frame image corresponding to the second target frame image in the fifth video as the second target frame image, jumping to a step of determining an energy value of each pixel point in the second target frame image until the first cycle value meets the first preset condition, so as to obtain a frame image corresponding to the second target frame image in the fifth video, wherein the first cycle value is used for indicating the number of times of performing the pixel interpolation on the second target frame image according to the at least one pixel included in the path with the smallest energy value;
if the field angle of the source video is larger than that of the eyepiece, performing pixel removal on at least one pixel in the second target frame image, wherein the second target frame image after pixel removal is a frame image corresponding to the second target frame image in the fifth video, determining a second cycle value, if the second cycle value does not satisfy a second preset condition, taking the frame image corresponding to the second target frame image in the fifth video as the second target frame image, and jumping to a step of determining an energy value of each pixel point in the second target frame image until the second cycle value satisfies the second preset condition to obtain a frame image corresponding to the second target frame image in the fifth video, wherein the second cycle value is used for indicating the number of times of performing pixel removal on the second target frame image according to the at least one pixel included in the path with the smallest energy value.
13. The method of any of claims 1-12, wherein prior to adjusting the first center distance of left and right displays of the display screen of the near-eye display device based on the user interpupillary distance, the method further comprises:
and if the difference value between the center distance of the ocular pieces of the near-to-eye display equipment and the user interpupillary distance is larger than a first threshold value, adjusting the center distance of the ocular pieces, wherein the adjusted center distance of the ocular pieces is related to the user interpupillary distance.
14. A video processing device, characterized by comprising a first acquisition module, a first adjustment module, a first processing module and a display module, wherein:
the first acquisition module is configured to acquire a user interpupillary distance;
the first adjustment module is configured to adjust a first center distance of left and right display pictures of a display screen of a near-eye display device according to the user interpupillary distance to obtain a second center distance, the second center distance being related to the user interpupillary distance;
the first processing module is configured to process a first video to be displayed according to the first center distance and the second center distance to obtain a second video, wherein an aspect ratio of the first video is the same as an aspect ratio of the display screen, and a resolution of the first video is the same as a display resolution of the display screen; and
the display module is configured to display the second video through the display screen.
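The four modules of claim 14 can be sketched as one pipeline object. The pixel-per-millimeter conversion and the rule that the second center distance equals the measured interpupillary distance are illustrative assumptions; the display module is hardware-bound and omitted here.

```python
class VideoProcessor:
    """Sketch of claim 14's modules as a single pipeline (assumptions noted)."""

    def __init__(self, px_per_mm=10.0):
        # Display pixel density; purely an illustrative assumption.
        self.px_per_mm = px_per_mm
        self.ipd_mm = None
        self.per_eye_shift_px = 0

    def acquire_pupil_distance(self, measured_mm):
        # First acquisition module: obtain the user interpupillary distance.
        self.ipd_mm = measured_mm
        return self.ipd_mm

    def adjust_center_distance(self, first_center_mm):
        # First adjustment module: the second center distance tracks the
        # measured interpupillary distance (assumption); each half-frame
        # shifts by half of the total change, converted to pixels.
        second_center_mm = self.ipd_mm
        self.per_eye_shift_px = round(
            (second_center_mm - first_center_mm) / 2 * self.px_per_mm)
        return second_center_mm

    def process(self, frame_width_px):
        # First processing module: return the new horizontal centers of the
        # left and right display pictures after applying the per-eye shift.
        quarter = frame_width_px // 4
        left_center = quarter - self.per_eye_shift_px
        right_center = frame_width_px - quarter + self.per_eye_shift_px
        return left_center, right_center
```

For a 1920-pixel-wide frame at 10 px/mm, widening the center distance from 63 mm to a 66 mm interpupillary distance shifts each half-frame 15 pixels outward.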
15. A computer device, characterized in that the computer device comprises a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the method of any one of claims 1 to 13.
16. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1 to 13.
CN202210908234.0A 2022-07-29 2022-07-29 Video processing method, device, equipment and storage medium Active CN115348437B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210908234.0A CN115348437B (en) 2022-07-29 2022-07-29 Video processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115348437A 2022-11-15
CN115348437B 2023-10-31

Family

ID=83950989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210908234.0A Active CN115348437B (en) 2022-07-29 2022-07-29 Video processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115348437B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800302A (en) * 2011-05-25 2012-11-28 联想移动通信科技有限公司 Method for adjusting resolution of display screen by terminal equipment, and terminal equipment
CN104216841A (en) * 2014-09-15 2014-12-17 联想(北京)有限公司 Information processing method and electronic equipment
CN105700140A (en) * 2016-01-15 2016-06-22 北京星辰万有科技有限公司 Immersive video system with adjustable pupil distance
CN105847578A (en) * 2016-04-28 2016-08-10 努比亚技术有限公司 Information display type parameter adjusting method and head mounted device
CN108989671A (en) * 2018-07-25 2018-12-11 Oppo广东移动通信有限公司 Image processing method, device and electronic equipment
CN110006634A (en) * 2019-04-15 2019-07-12 北京京东方光电科技有限公司 Visual field angle measuring method, visual field angle measuring device, display methods and display equipment
CN112804561A (en) * 2020-12-29 2021-05-14 广州华多网络科技有限公司 Video frame insertion method and device, computer equipment and storage medium
CN113592720A (en) * 2021-09-26 2021-11-02 腾讯科技(深圳)有限公司 Image scaling processing method, device, equipment, storage medium and program product


Similar Documents

Publication Publication Date Title
US11308675B2 (en) 3D facial capture and modification using image and temporal tracking neural networks
CN109064390B (en) Image processing method, image processing device and mobile terminal
US20160301868A1 (en) Automated generation of panning shots
JP5409107B2 (en) Display control program, information processing apparatus, display control method, and information processing system
EP3149706B1 (en) Image refocusing for camera arrays
CN108076384B (en) image processing method, device, equipment and medium based on virtual reality
US10942567B2 (en) Gaze point compensation method and apparatus in display device, and display device
CN102572492B (en) Image processing device and method
CN109040596B (en) Method for adjusting camera, mobile terminal and storage medium
US10957027B2 (en) Virtual view interpolation between camera views for immersive visual experience
CN110866486B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
KR20130040771A (en) Three-dimensional video processing apparatus, method therefor, and program
EP2847998B1 (en) Systems, methods, and computer program products for compound image demosaicing and warping
US20160180514A1 (en) Image processing method and electronic device thereof
US10771758B2 (en) Immersive viewing using a planar array of cameras
EP3993383A1 (en) Method and device for adjusting image quality, and readable storage medium
CN110740309A (en) image display method, device, electronic equipment and storage medium
CN112070657A (en) Image processing method, device, system, equipment and computer storage medium
JP2001128195A (en) Stereoscopic image correcting device, stereoscopic image display device, and recording medium with stereoscopic image correcting program recorded thereon
CN115348437B (en) Video processing method, device, equipment and storage medium
JP2011082698A (en) Image generation device, image generation method, and program
CN112419134A (en) Image processing method and device
KR20080026877A (en) Image processing device and image processing method thereof
JP7244661B2 (en) Imaging device and image processing method
WO2023061173A1 (en) Image processing method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant