CN110992250B - Method and device for realizing high-resolution display - Google Patents

Method and device for realizing high-resolution display

Info

Publication number
CN110992250B
CN110992250B (Application CN201911200521.0A)
Authority
CN
China
Prior art keywords
image
viewer
stretching
area
input image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911200521.0A
Other languages
Chinese (zh)
Other versions
CN110992250A (en)
Inventor
耿立华
马希通
李咸珍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd
Priority to CN201911200521.0A
Publication of CN110992250A
Application granted
Publication of CN110992250B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/04: Context-preserving transformations, e.g. by using an importance map
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/32: Indexing scheme for image data processing or generation, in general, involving image mosaicing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

Disclosed herein is a method of implementing a high-resolution display, comprising: dividing an input image into a viewer attention area and a viewer non-attention area, and segmenting the input image according to the division result; stretching the viewer attention area portion with a first stretching algorithm to obtain a first stretched image, and stretching the viewer non-attention area portion with a second stretching algorithm to obtain a second stretched image, the image stretching quality of the first stretching algorithm being higher than that of the second; and stitching the first and second stretched images to obtain an output image. This scheme enables a low-resolution source image to be displayed on a high-resolution display while balancing image display quality against the consumption of computing resources.

Description

Method and device for realizing high-resolution display
Technical Field
The invention relates to the technical field of display, in particular to a method and a device for realizing high-resolution display.
Background
Viewers now demand high image quality, but data-transmission bandwidth is limited, so the source is often low-resolution image data that must be scaled up to a high-resolution image in the driving circuit of the display device before being output and displayed.
Ultra-high-resolution (4K, 8K) display technology is now mature and such displays are widespread, but many current sources are still low resolution (2K, 4K). When these sources are connected to an ultra-high-resolution display through a video interface, the image must be stretched inside the display before it is shown, for example from 2K to 4K or from 4K to 8K.
The stretching algorithm used affects the quality of the stretched image (jaggedness, definition, sharpness, and so on). A high-quality stretching algorithm produces a better result after stretching, while a simple algorithm produces only an ordinary one; however, high-quality algorithms are complex and consume more computing resources and power.
Disclosure of Invention
Embodiments of the invention provide a method and a device for implementing high-resolution display, which enable a low-resolution source image to be displayed on a high-resolution display while balancing image display quality against the consumption of computing resources.
According to a first aspect of the present application, an embodiment of the present application provides a method for implementing high resolution display, including:
dividing an input image into a viewer attention area and a viewer non-attention area, and segmenting the input image according to the division result;
stretching the viewer attention area portion of the input image with a first stretching algorithm to obtain a first stretched image, and stretching the viewer non-attention area portion with a second stretching algorithm to obtain a second stretched image, the image stretching quality of the first stretching algorithm being higher than that of the second; and
stitching the first stretched image and the second stretched image to obtain an output image.
According to a second aspect of the present application, an embodiment of the present application provides an apparatus for implementing high resolution display, including:
a region dividing and segmenting module, configured to divide an input image into a viewer attention area and a viewer non-attention area and to segment the input image according to the division result;
an image stretching module, configured to stretch the viewer attention area portion of the input image with a first stretching algorithm to obtain a first stretched image, and to stretch the viewer non-attention area portion with a second stretching algorithm to obtain a second stretched image, the image stretching quality of the first stretching algorithm being higher than that of the second; and
an image stitching module, configured to stitch the first stretched image and the second stretched image to obtain an output image.
Compared with the related art, the method and device for implementing high-resolution display divide an input image into a viewer attention area and a viewer non-attention area and segment the input image accordingly; the viewer attention area portion is stretched with a first stretching algorithm to obtain a first stretched image and the viewer non-attention area portion with a second stretching algorithm to obtain a second stretched image, the stretching quality of the first algorithm being higher than that of the second; and the two stretched images are stitched to obtain the output image. Because the area a viewer actually watches is limited, the scheme applies a stretching algorithm of high image quality (which occupies more computing resources) only to the watched area and one of lower quality (which occupies fewer computing resources) to the unwatched area, so that a low-resolution source image is displayed on a high-resolution display with both image display quality and computing-resource consumption taken into account.
Drawings
FIG. 1 is a flow chart of a method for realizing high resolution display according to embodiment 1 of the present invention;
FIG. 2 is a schematic diagram illustrating the division of a viewer's region of interest and a viewer's region of non-interest according to embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of edge blending of an image area according to embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of stretching, stitching and smoothing filtering of an image area in embodiment 1 of the present invention;
FIG. 5 is a schematic diagram of an apparatus for realizing high resolution display according to embodiment 2 of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in detail hereinafter with reference to the accompanying drawings. It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be arbitrarily combined with each other.
The technical scheme takes into account that the area a viewer actually watches is limited: a stretching algorithm of high image stretching quality (which occupies more computing resources) is used in the area the viewer watches, and one of lower quality (which occupies fewer computing resources) in the area the viewer does not, thereby minimizing the computing resources and power consumed by the algorithm while still achieving high-resolution display.
Embodiment 1
As shown in fig. 1, an embodiment of the present invention provides a method for implementing high resolution display, including:
Step S110: dividing an input image into a viewer attention area and a viewer non-attention area, and segmenting the input image according to the division result;
Step S120: stretching the viewer attention area portion of the input image with a first stretching algorithm to obtain a first stretched image, and stretching the viewer non-attention area portion with a second stretching algorithm to obtain a second stretched image, the image stretching quality of the first stretching algorithm being higher than that of the second;
Step S130: stitching the first stretched image and the second stretched image to obtain an output image.
In the above embodiment, the input image is divided into a viewer attention area and a viewer non-attention area and segmented accordingly; the viewer attention area portion is stretched with a first stretching algorithm of higher stretching quality to obtain a first stretched image, and the viewer non-attention area portion with a second stretching algorithm of lower stretching quality to obtain a second stretched image; the two stretched images are then stitched into the output image. In general, an algorithm of high stretching quality consumes more computing resources and power. By applying the first algorithm (which occupies more computing resources) only to the area the viewer attends to, and the second algorithm (which occupies fewer) to the area the viewer does not, this processing minimizes the computing resources and power consumed while still achieving high-resolution display.
In step S110, in an exemplary embodiment, dividing the input image into a viewer attention area and a viewer non-attention area and segmenting it according to the division result includes:
determining the focus center of the input image from the viewing focus position on the screen, expanding from the focus center to both sides by an expansion ratio corresponding to the viewing distance, and taking the expanded image area as the viewer attention area, where the expanded area does not exceed the boundary of the input image;
taking the remainder of the input image, after the viewer attention area is removed, as one or two viewer non-attention areas, each of which is a connected area.
For example, when remaining areas lie on both the left and right of the viewer attention area, two viewer non-attention areas may be divided, one on each side; when a remaining area lies on only one side of the viewer attention area, a single viewer non-attention area is divided.
The viewing focus position on the screen can be obtained by mounting a binocular camera on the panel of the display screen, or a binocular camera together with an infrared light source arranged coaxially with it; depth information is extracted from the portrait captured by the binocular camera, the viewer's gaze direction is determined with existing gaze-tracking techniques, and the position where the viewer's gaze falls on the screen is computed from the gaze direction and the viewer's viewing distance from the screen. The binocular camera and the infrared light source may be installed at the middle of the upper edge of the display screen.
When the viewer looks straight ahead, the viewing focus may be taken to fall at the center of the screen. When the viewer turns the eyes to the left (or right), the focus shifts to the left (or right) of the center; when the eyes turn upward (or downward), the focus shifts above (or below) the center. Once the direction of eye rotation has been determined by gaze tracking, the direction and distance by which the viewing focus deviates from the screen center can be computed together with the viewing distance obtained from the binocular camera.
The viewing distance of the viewer can be determined from the depth information extracted from the portrait captured by the binocular camera: it follows from the parallax (disparity) between the images acquired by the two cameras, with a smaller disparity indicating a farther viewer and a larger disparity a closer one.
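The disparity-to-distance relation described in the paragraph above can be sketched as follows. This is an editorial illustration, not part of the patent; the focal length and camera baseline are assumed values, and the function name is ours:

```python
# Hypothetical sketch: estimating viewing distance from binocular-camera
# disparity (smaller disparity -> farther viewer, larger -> closer).
# f_px (focal length in pixels) and b_m (baseline in metres) are assumed.

def viewing_distance(disparity_px: float, f_px: float = 800.0, b_m: float = 0.06) -> float:
    """Classic stereo relation: distance Z = f * b / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * b_m / disparity_px

# A larger disparity yields a smaller (closer) distance.
near = viewing_distance(96.0)  # -> 0.5 (metres)
far = viewing_distance(48.0)   # -> 1.0 (metres)
```

The monotonic relation (disparity down, distance up) is exactly the behaviour the text describes.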
Existing gaze-tracking technology mainly studies the acquisition, modeling, and simulation of eye-movement information. The acquisition device is either an image-capture device alone, or an image-capture device together with an infrared light source. In the infrared approach, the light source actively projects infrared beams onto the eyes (irises), the eyes reflect the infrared light, the camera captures the image, an image-analysis algorithm extracts the reflection spots, and eye-rotation information is derived from how those spots change. The infrared-projection method has an accuracy advantage, achieving roughly 1 cm on a 30-inch screen. Gaze tracking can also be implemented with an image-capture device such as a camera alone, with software support. A typical gaze-tracking pipeline comprises image acquisition, image preprocessing, gaze-parameter detection, pupil tracking, system calibration, and gaze-direction computation.
In step S110, in an exemplary embodiment, determining the focus center of the input image from the viewing focus position on the screen includes:
determining the position ratio of the viewing focus position relative to the full width of the screen, and placing the focus center at the same relative position along the width of the input image; or
determining the position ratio of the viewing focus position relative to the full height of the screen, and placing the focus center at the same relative position along the height of the input image.
For example, when the viewing focus lies at 1/3 of the screen width, the focus center lies at 1/3 of the input image width; likewise, when the viewing focus lies at 1/3 of the screen height, the focus center lies at 1/3 of the input image height.
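The position-ratio mapping above amounts to a one-line proportion. A minimal sketch (the function name is ours, not the patent's):

```python
# The viewing focus keeps the same relative position on the input image
# as it has on the screen (shown here for the width direction).
def focus_center_x(focus_x_screen: int, screen_width: int, image_width: int) -> int:
    ratio = focus_x_screen / screen_width  # position ratio on the screen
    return round(ratio * image_width)      # same relative position on the image

# Focus at 1/3 of a 3840-px-wide screen maps to 1/3 of a 1920-px source image.
cx = focus_center_x(1280, 3840, 1920)  # -> 640
```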
In step S110, in an exemplary embodiment, the expansion ratio corresponding to the viewing distance is the percentage R% of the width of the expansion area relative to the full width of the input image, or the percentage R% of the height of the expansion area relative to the full height of the input image.
R% can be determined using the following formula:
R% = a × (1/(2N)) × l × 100% (1);
wherein a is an expansion coefficient with 0 < a ≤ 1, N is the maximum effective viewing distance, and l is the viewer's effective viewing distance from the display screen, with 0 < l ≤ N.
The closer the viewer, the smaller the area actually attended to and hence the smaller the expansion needed; the farther the viewer, the larger the attended area and the larger the expansion needed.
For example, when a = 1 and the viewer's actual viewing distance equals N, the expansion ratio is 50%. As shown in fig. 2, if the focus center lies at about 1/4 of the input image width, the region is expanded by 50% of the full input image width to each side of the focus center; wherever the expansion would pass an edge of the input image, it stops at that edge. In fig. 2, region A is the viewer attention area; because A has reached the left boundary of the input image, the single remaining region B is divided as the viewer non-attention area.
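The expansion and clamping behaviour of formula (1) and fig. 2 can be sketched as follows. This is a hypothetical helper for illustration, not the patent's implementation:

```python
# Expand R% of the input width to each side of the focus center, clamp at
# the image boundary, and treat the remainder as one or two non-attention
# regions (each returned as a half-open (start, end) column interval).

def divide_regions(center_x: int, width: int, r_percent: float):
    half = round(width * r_percent / 100.0)
    left = max(0, center_x - half)       # clamp at the left edge
    right = min(width, center_x + half)  # clamp at the right edge
    attention = (left, right)
    non_attention = []
    if left > 0:
        non_attention.append((0, left))
    if right < width:
        non_attention.append((right, width))
    return attention, non_attention

# a = 1, l = N gives R% = 50%; with the focus center near 1/4 of the width,
# the expansion hits the left edge, leaving a single region B on the right.
a_region, b_regions = divide_regions(480, 1920, 50.0)
# a_region -> (0, 1440); b_regions -> [(1440, 1920)]
```

With the focus center nearer the middle, both left and right remainders survive and two non-attention regions are returned, matching the one-or-two-regions rule in the text.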
In step S110, in an exemplary embodiment, after the input image is segmented according to the region division result, the method further includes:
when the viewer non-attention areas lie to the left and right of the viewer attention area, adding, for each segmented image area, one or more columns of pixels from the adjacent image area across the segmentation edge, to generate an edge-compensated image area; or
when the viewer non-attention areas lie above and below the viewer attention area, adding, for each segmented image area, one or more rows of pixels from the adjacent image area across the segmentation edge, to generate an edge-compensated image area.
As shown in fig. 3, when the viewer non-attention areas (B1 and B2) lie to the left and right of the viewer attention area (A): one or more columns of A adjacent to B1 are added to B1 to generate the edge-compensated first viewer non-attention area (B1'); one or more columns of B1 adjacent to A and one or more columns of B2 adjacent to A are added to A to generate the edge-compensated viewer attention area (A'); and one or more columns of A adjacent to B2 are added to B2 to generate the edge-compensated second viewer non-attention area (B2').
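The edge compensation of fig. 3 can be sketched on a single image row. The overlap width k and the function name are ours, for illustration only:

```python
# For a left/centre/right split at columns cut1 and cut2, each segment
# borrows k columns from its neighbour across the cut, so the later
# stretch has pixel context at the seam.

def pad_segments(row, cut1, cut2, k=2):
    """row: one row of pixels; cut1/cut2: column indices of the two cuts."""
    b1 = row[:cut1] + row[cut1:cut1 + k]   # B1 plus k columns of A
    a = row[cut1 - k:cut2 + k]             # A plus k columns of B1 and of B2
    b2 = row[cut2 - k:cut2] + row[cut2:]   # B2 plus k columns of A
    return b1, a, b2

row = list(range(12))
b1, a, b2 = pad_segments(row, 4, 8, k=1)
# b1 -> [0, 1, 2, 3, 4]; a -> [3, 4, 5, 6, 7, 8]; b2 -> [7, 8, 9, 10, 11]
```

Note that the borrowed columns appear in two segments at once; that overlap is what later lets the stitched edges blend.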
In step S120, in an exemplary embodiment, stretching the viewer attention area portion of the input image with the first stretching algorithm to obtain the first stretched image, and the viewer non-attention area portion with the second stretching algorithm to obtain the second stretched image, includes:
stretching the viewer attention area portion of the input image in equal proportion with the first stretching algorithm, and the viewer non-attention area portion in equal proportion with the second stretching algorithm; or,
as shown in fig. 4, stretching the edge-compensated viewer attention area portion in equal proportion with the first stretching algorithm, and the edge-compensated viewer non-attention area portion in equal proportion with the second stretching algorithm. Equal-proportion stretching means that the stretch factor in the width direction equals the stretch factor in the height direction.
In step S120, the first stretching algorithm, for example gradient-based stretching or bicubic interpolation, is relatively complex: it generally uses more computing resources and consumes more power, but yields higher image quality after stretching, i.e. a clearer image with smoother lines and fewer jaggies. The second stretching algorithm, for example bilinear interpolation, is simpler: it generally uses fewer computing resources and less power, but yields lower image quality after stretching.
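For illustration only, two stretch algorithms of different cost and quality can be contrasted in 1-D. The patent's actual algorithms operate in 2-D and are of higher order; these helpers are ours, not the patent's:

```python
# Nearest-neighbour stretching is cheap but blocky; linear interpolation
# costs a little more and produces a smoother result. Higher-quality
# algorithms (bicubic, gradient-based) extend the same idea with more
# neighbours and hence more computation.

def stretch_nearest(src, factor):
    return [src[min(int(i / factor), len(src) - 1)] for i in range(len(src) * factor)]

def stretch_linear(src, factor):
    out = []
    for i in range(len(src) * factor):
        x = i / factor
        j = min(int(x), len(src) - 2)       # left neighbour, clamped
        t = min(x - j, 1.0)                 # interpolation weight, clamped
        out.append(src[j] * (1 - t) + src[j + 1] * t)
    return out

nearest = stretch_nearest([0, 10], 2)  # -> [0, 0, 10, 10]  (hard step)
linear = stretch_linear([0, 10], 2)    # -> [0, 5.0, 10.0, 10.0]  (ramp)
```

The linear version inserts intermediate values where the nearest version repeats samples, which is the quality difference the paragraph describes, paid for with extra arithmetic per output pixel.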
In step S130, in an exemplary embodiment, after the first stretched image and the second stretched image are stitched to obtain the output image, the method further includes:
performing smoothing filtering on the output image.
As shown in fig. 4, the main purposes of the smoothing filtering are to make the edges where images stretched by different algorithms are stitched together smoother, and to eliminate the seams produced when the input image was segmented.
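The seam smoothing of fig. 4 can be sketched as a small box filter applied only near the stitch boundary. The filter type and width are our assumption for illustration; the patent does not specify them:

```python
# Run a 3-tap box filter over the columns around the stitch boundary so
# the two differently-stretched parts blend; filtering only near the seam
# keeps the extra cost low.

def smooth_seam(row, seam, half_width=2):
    out = list(row)
    for i in range(max(1, seam - half_width), min(len(row) - 1, seam + half_width)):
        out[i] = (row[i - 1] + row[i] + row[i + 1]) / 3.0  # 3-tap average
    return out

row = [10, 10, 10, 10, 40, 40, 40, 40]  # hard edge at the seam (index 4)
smoothed = smooth_seam(row, 4)
# the step 10 -> 40 becomes a ramp: ..., 10.0, 20.0, 30.0, 40.0, ...
```

Pixels far from the seam are left untouched, so regions stretched by the high-quality algorithm keep their detail.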
Embodiment 2
As shown in fig. 5, an embodiment of the present invention provides an apparatus for implementing high resolution display, including:
a region dividing and segmenting module 10, configured to divide an input image into a viewer attention area and a viewer non-attention area and to segment the input image according to the division result;
an image stretching module 20, configured to stretch the viewer attention area portion of the input image with a first stretching algorithm to obtain a first stretched image, and to stretch the viewer non-attention area portion with a second stretching algorithm to obtain a second stretched image, the image stretching quality of the first stretching algorithm being higher than that of the second; and
an image stitching module 30, configured to stitch the first stretched image and the second stretched image to obtain an output image.
In an exemplary embodiment, the region dividing and segmenting module divides the input image into a viewer attention area and a viewer non-attention area and segments it according to the division result as follows: the focus center of the input image is determined from the viewing focus position on the screen, the region is expanded from the focus center to both sides by an expansion ratio corresponding to the viewing distance, and the expanded image area is taken as the viewer attention area, where the expanded area does not exceed the boundary of the input image; the remainder of the input image, after the viewer attention area is removed, is taken as one or two viewer non-attention areas, each of which is a connected area.
In an exemplary embodiment, the region dividing and segmenting module determines the focus center of the input image from the viewing focus position on the screen as follows: the position ratio of the viewing focus position relative to the full width of the screen is determined, and the focus center is placed at the same relative position along the width of the input image; or the position ratio of the viewing focus position relative to the full height of the screen is determined, and the focus center is placed at the same relative position along the height of the input image.
In an exemplary embodiment, the expansion ratio corresponding to the viewing distance is the percentage R% of the width of the expansion area relative to the full width of the input image, or the percentage R% of the height of the expansion area relative to the full height of the input image.
In an exemplary embodiment, the R% is determined using the following formula:
R% = a × (1/(2N)) × l × 100% (1);
wherein a is an expansion coefficient with 0 < a ≤ 1, N is the maximum effective viewing distance, and l is the viewer's effective viewing distance from the display screen, with 0 < l ≤ N.
In an exemplary embodiment, the apparatus further comprises: an image trimming module 40;
the image edge trimming module is used for adding one or more columns of pixels in an image area adjacent to the segmentation edge of any one segmented image area to generate an edge-trimmed image area when the non-attention area of the viewer is positioned at the left side and the right side of the attention area of the viewer; or when the non-concerned area of the viewer is positioned on the upper side and the lower side of the concerned area of the viewer, adding one or more rows of pixels in the image area adjacent to the divided edge to the image area to generate the image area after edge compensation for any divided image area.
In an exemplary embodiment, the image stretching module obtains the first and second stretched images as follows: the edge-compensated viewer attention area portion of the input image is stretched in equal proportion with the first stretching algorithm, and the edge-compensated viewer non-attention area portion in equal proportion with the second stretching algorithm; equal-proportion stretching means that the stretch factor in the width direction equals the stretch factor in the height direction.
In an exemplary embodiment, the apparatus further comprises: an image smoothing and filtering module 50;
and the image smoothing and filtering module is used for carrying out smoothing and filtering processing on the output image.
It will be apparent to those skilled in the art that various modifications and variations can be made without departing from the spirit and scope of the invention; such modifications and variations are intended to fall within the scope defined by the following claims.

Claims (7)

1. A method of implementing a high-resolution display, comprising:
dividing an input image into a viewer attention area and a viewer non-attention area, and segmenting the input image into image areas according to the division result;
when the viewer non-attention areas are located on the left and right sides of the viewer attention area, for any segmented image area, adding one or more columns of pixels from the image area adjacent to the dividing edge to the present image area, to generate an edge-compensated image area; or, when the viewer non-attention areas are located on the upper and lower sides of the viewer attention area, for any segmented image area, adding one or more rows of pixels from the image area adjacent to the dividing edge to the present image area, to generate an edge-compensated image area;
stretching the viewer attention area portion of the input image with a first stretching algorithm to obtain a first stretched image, and stretching the viewer non-attention area portion of the input image with a second stretching algorithm to obtain a second stretched image, wherein the image stretching quality of the first stretching algorithm is higher than that of the second stretching algorithm; and
stitching the first stretched image and the second stretched image to obtain an output image;
wherein dividing the input image into the viewer attention area and the viewer non-attention area, and segmenting the input image according to the division result, comprises:
determining a focus center of the input image according to a viewing focus position on a screen, expanding from the focus center to both sides by an expansion ratio corresponding to the viewing distance, and designating the expanded image area as the viewer attention area, wherein the expanded image area does not exceed the boundary of the input image; and
designating the remainder of the input image, after the viewer attention area is removed, as one or two viewer non-attention areas, wherein each viewer non-attention area is a connected area, and the two sides are the upper and lower sides, or the left and right sides.
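By way of non-limiting illustration, the region division step of claim 1 — expanding from the focus center to both sides by an expansion ratio, clamping to the image boundary, and treating the remainder as one or two connected non-attention areas — can be sketched along a single axis as follows. The function name and the representation of areas as half-open index intervals are illustrative choices, not part of the claims:

```python
def divide_regions(width, focus_center, expand_ratio):
    """Split [0, width) into a viewer attention interval centred on
    focus_center, expanded by expand_ratio of the full width to each
    side, plus one or two connected non-attention intervals."""
    half = int(width * expand_ratio)
    left = max(0, focus_center - half)        # clamp: the expanded area
    right = min(width, focus_center + half)   # must not exceed the boundary
    attention = (left, right)
    non_attention = []
    if left > 0:                              # area to the left of the focus
        non_attention.append((0, left))
    if right < width:                         # area to the right of the focus
        non_attention.append((right, width))
    return attention, non_attention
```

When the focus center sits near an image edge, the clamp leaves only a single connected non-attention area, matching the "one or two" language of the claim.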
2. The method of claim 1, wherein:
determining the focus center of the input image according to the viewing focus position on the screen comprises:
determining a position ratio of the viewing focus position relative to the entire width of the screen, and determining the position of the focus center in the width direction of the input image according to the position ratio, wherein the relative position of the viewing focus position in the width direction of the screen is the same as the relative position of the focus center in the width direction of the input image; or
determining a position ratio of the viewing focus position relative to the entire height of the screen, and determining the position of the focus center in the height direction of the input image according to the position ratio, wherein the relative position of the viewing focus position in the height direction of the screen is the same as the relative position of the focus center in the height direction of the input image.
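By way of non-limiting illustration, the proportional mapping of claim 2 — keeping the relative position of the viewing focus on the screen equal to the relative position of the focus center in the input image — reduces to a single ratio computation (the function name and rounding choice are illustrative):

```python
def focus_center_x(screen_focus_x, screen_width, image_width):
    """Map a viewing focus position on the screen to the focus center
    of the input image, preserving the relative position along the
    width direction; the same mapping applies along the height."""
    ratio = screen_focus_x / screen_width   # position proportion on screen
    return round(ratio * image_width)       # same proportion in the image
```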
3. The method of claim 2, wherein:
The expansion ratio corresponding to the viewing distance is the percentage R% of the width of the expanded area relative to the entire width of the input image, or the percentage R% of the height of the expanded area relative to the entire height of the input image.
4. The method of claim 3, wherein:
R% is determined using the following formula:
R% = a × (1/(2N)) × l × 100%    (1);
wherein a is an expansion coefficient, 0 < a ≤ 1; N is the maximum effective viewing distance; l is the effective viewing distance of the viewer from the display screen, 0 < l ≤ N; and s is the distance at which the viewer views the screen.
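By way of non-limiting illustration, formula (1) of claim 4 can be evaluated directly. The function name is an illustrative choice, and the function returns the fraction R%/100 (multiplying by 100 gives the percentage stated in the claim); the distance s defined in the claim does not appear in formula (1) as given, so only a, N, and l are taken as inputs here:

```python
def expansion_ratio(a, N, l):
    """Evaluate R% = a * (1/(2N)) * l * 100% from claim 4,
    returned as a fraction in [0, 0.5]."""
    assert 0 < a <= 1, "expansion coefficient constraint from the claim"
    assert 0 < l <= N, "effective viewing distance constraint from the claim"
    return a * (1.0 / (2 * N)) * l
```

For example, with a = 1 and l = N/2 the attention area expands by a quarter of the image width to each side.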
5. The method of claim 1, wherein:
stretching the viewer attention area portion of the input image with the first stretching algorithm to obtain the first stretched image, and stretching the viewer non-attention area portion of the input image with the second stretching algorithm to obtain the second stretched image, comprises:
stretching the edge-compensated viewer attention area portion of the input image in equal proportion using the first stretching algorithm; and stretching the edge-compensated viewer non-attention area portion of the input image in equal proportion using the second stretching algorithm, wherein equal-proportion stretching means that the stretching ratio in the width direction is the same as the stretching ratio in the height direction.
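By way of non-limiting illustration, claim 5's equal-proportion stretching with two algorithms of different quality can be sketched on grayscale images represented as lists of lists. Bilinear interpolation is used here as a hypothetical "first" (higher-quality) algorithm and nearest-neighbour as the "second"; the claims only require that the first algorithm have higher stretching quality, and do not name specific algorithms:

```python
def stretch_nearest(img, scale):
    """Equal-proportion stretch (same scale in width and height)
    by nearest-neighbour sampling: cheap, lower quality."""
    h, w = len(img), len(img[0])
    H, W = int(h * scale), int(w * scale)
    return [[img[min(h - 1, int(y / scale))][min(w - 1, int(x / scale))]
             for x in range(W)] for y in range(H)]

def stretch_bilinear(img, scale):
    """Equal-proportion stretch by bilinear interpolation:
    higher quality, used for the viewer attention area."""
    h, w = len(img), len(img[0])
    H, W = int(h * scale), int(w * scale)
    out = []
    for Y in range(H):
        y = min(Y / scale, h - 1)
        y0 = int(y); y1 = min(y0 + 1, h - 1); fy = y - y0
        row = []
        for X in range(W):
            x = min(X / scale, w - 1)
            x0 = int(x); x1 = min(x0 + 1, w - 1); fx = x - x0
            # interpolate along x on both rows, then along y
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

Applying the same scale factor to both dimensions of both regions keeps the stretched pieces geometrically compatible for the stitching step.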
6. The method of claim 1, wherein:
after stitching the first stretched image and the second stretched image to obtain the output image, the method further comprises:
performing smoothing filter processing on the output image.
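By way of non-limiting illustration, the smoothing filter of claim 6 — which can suppress visible seams along the stitching boundary — might be a simple 3×3 box (mean) filter; the claim does not specify a particular filter, so this choice is an assumption:

```python
def box_smooth(img):
    """3x3 box filter over a grayscale image (list of lists);
    border pixels average over the cells that exist."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out
```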
7. An apparatus for implementing a high-resolution display, comprising:
a region dividing and segmenting module, configured to divide an input image into a viewer attention area and a viewer non-attention area, and to segment the input image into image areas according to the division result;
an image edge compensation module, configured to: when the viewer non-attention areas are located on the left and right sides of the viewer attention area, for any segmented image area, add one or more columns of pixels from the image area adjacent to the dividing edge to the present image area, to generate an edge-compensated image area; or, when the viewer non-attention areas are located on the upper and lower sides of the viewer attention area, for any segmented image area, add one or more rows of pixels from the image area adjacent to the dividing edge to the present image area, to generate an edge-compensated image area;
an image stretching module, configured to stretch the viewer attention area portion of the input image with a first stretching algorithm to obtain a first stretched image, and to stretch the viewer non-attention area portion of the input image with a second stretching algorithm to obtain a second stretched image, wherein the image stretching quality of the first stretching algorithm is higher than that of the second stretching algorithm; and
an image stitching module, configured to stitch the first stretched image and the second stretched image to obtain an output image;
wherein the region dividing and segmenting module divides the input image into the viewer attention area and the viewer non-attention area, and segments the input image according to the division result, by: determining a focus center of the input image according to a viewing focus position on a screen, expanding from the focus center to both sides by an expansion ratio corresponding to the viewing distance, and designating the expanded image area as the viewer attention area, wherein the expanded image area does not exceed the boundary of the input image; and designating the remainder of the input image, after the viewer attention area is removed, as one or two viewer non-attention areas, wherein each viewer non-attention area is a connected area, and the two sides are the upper and lower sides or the left and right sides.
CN201911200521.0A 2019-11-29 2019-11-29 Method and device for realizing high-resolution display Active CN110992250B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911200521.0A CN110992250B (en) 2019-11-29 2019-11-29 Method and device for realizing high-resolution display


Publications (2)

Publication Number Publication Date
CN110992250A CN110992250A (en) 2020-04-10
CN110992250B (en) 2024-06-14

Family

ID=70088287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911200521.0A Active CN110992250B (en) 2019-11-29 2019-11-29 Method and device for realizing high-resolution display

Country Status (1)

Country Link
CN (1) CN110992250B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103974115A (en) * 2014-04-23 2014-08-06 京东方科技集团股份有限公司 High-resolution display method and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
MX2011008609A (en) * 2009-02-17 2011-09-09 Koninklijke Philips Electronics N.V. Combining 3d image and graphical data.
CN105027144A (en) * 2013-02-27 2015-11-04 汤姆逊许可公司 Method and device for calibration-free gaze estimation
US20150358594A1 (en) * 2014-06-06 2015-12-10 Carl S. Marshall Technologies for viewer attention area estimation
CN106531073B (en) * 2017-01-03 2018-11-20 京东方科技集团股份有限公司 Processing circuit, display methods and the display device of display screen




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant