CN101547323B - Image conversion method, device and display system - Google Patents

Image conversion method, device and display system

Info

Publication number
CN101547323B
CN101547323B CN2009101361557A CN200910136155A
Authority
CN
China
Prior art keywords
image
character area
conversion
weight
aspect ratio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2009101361557A
Other languages
Chinese (zh)
Other versions
CN101547323A (en)
Inventor
刘源
赵嵩
王静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Device Co Ltd
Huawei Device Shenzhen Co Ltd
Original Assignee
Huawei Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Device Co Ltd filed Critical Huawei Device Co Ltd
Priority to CN2009101361557A priority Critical patent/CN101547323B/en
Publication of CN101547323A publication Critical patent/CN101547323A/en
Priority to EP10769288.1A priority patent/EP2426638B1/en
Priority to PCT/CN2010/072178 priority patent/WO2010124599A1/en
Application granted granted Critical
Publication of CN101547323B publication Critical patent/CN101547323B/en
Priority to US13/284,227 priority patent/US8503823B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

Embodiments of the invention provide an image conversion method, a conversion device, and a display system. The image conversion method comprises: performing text region detection on an image to obtain the detected text regions; and converting the image according to the text regions to obtain a converted image whose aspect ratio differs from that of the image before conversion. The conversion device comprises: a detection unit for performing text region detection on an image to obtain the detected text regions; and a conversion unit for converting the image according to the text regions to obtain a converted image whose aspect ratio differs from that of the image before conversion. An embodiment of the invention also provides a display system. The technical solutions of the embodiments preserve the important content regions of an image and display them clearly.

Description

Image conversion method, conversion device and display system
Technical Field
The invention relates to the technical field of image processing, in particular to an image conversion method, an image conversion device and a display system.
Background
Image conversion is a common image processing operation and generally includes scaling. When an image is scaled with different ratios along its two axes, distortion easily results because the aspect ratios of the source and target images differ. A typical application scenario is adapting between images with aspect ratios of 4:3 and 16:9: a conventional Cathode Ray Tube (CRT) television basically uses a 4:3 display mode, while a high-definition Liquid Crystal Display (LCD) television uses a 16:9 display mode, so converting between 4:3 and 16:9 images requires a resolution conversion with unequal ratios.
In image conversion, human eyes are sensitive to the important content regions of an image, such as regions containing text, so when scaling with different ratios the text regions should be preserved as much as possible and their distortion minimized. Take scaling a 4:3 video-conference image to 16:9 as an example. The prior art can use the simplest linear scaling algorithm, but then important content regions such as text are heavily deformed. Alternatively, a cropping algorithm can trim the image edges so that the cropped area matches the target aspect ratio; this avoids distortion, but the text regions of the image are easily lost in whole or in part.
During research and practice on these methods, the inventors found that the image conversion methods in the prior art easily deform or lose important content regions such as the text regions of an image.
Disclosure of Invention
Embodiments of the invention provide an image conversion method, an image conversion device, and a display system that can preserve and clearly display the important content regions of an image.
An image conversion method comprises: performing text region detection on an image to obtain the detected text regions; identifying weights for the detected text regions; and, with reference to those weights, converting the image according to the text regions, where the conversion comprises scaling the image according to the text regions or cropping the image according to the text regions, to obtain a converted image whose aspect ratio differs from that of the image before conversion.
A conversion apparatus comprises: a detection unit for performing text region detection on an image to obtain the detected text regions; an identification unit for identifying weights for the detected text regions; and a conversion unit for converting the image according to the text regions with reference to their weights, where the conversion comprises scaling the image according to the text regions or cropping the image according to the text regions, to obtain a converted image whose aspect ratio differs from that of the image before conversion.
A display system comprises: a conversion device for performing text region detection on an image to obtain the detected text regions, identifying weights for the detected text regions, and converting the image according to the text regions with reference to their weights, where the conversion comprises scaling or cropping the image according to the text regions to obtain a converted image whose aspect ratio differs from that of the image before conversion; and a display device for displaying the converted image.
In the above technical solutions, text region detection is first performed on the image to obtain the detected text regions, and the image then undergoes an unequal-ratio resolution conversion according to those regions, so important content such as the text regions is not lost in the converted image but is preserved and displayed without distortion.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flowchart of an image transformation method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a second image conversion method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating detecting text regions according to a second embodiment of the present invention;
FIG. 4 is a flowchart of a third image conversion method according to an embodiment of the present invention;
FIG. 5 is a flowchart of processing in the vertical direction by using the intelligent cropping algorithm according to the third embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a conversion apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a display system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiments of the invention provide an image conversion method, a conversion device, and a display system that can preserve the important content regions of an image and display them without distortion. Details are given below.
Fig. 1 is a flowchart of an image conversion method according to an embodiment of the present invention, which mainly includes the steps of:
step 101, detecting a character area of an image to obtain a detected character area;
the step can adopt a positioning algorithm based on the combination of edge and texture energy to detect the character area of the image, so as to obtain the detected character area.
And 102, converting the image according to the character area to obtain a converted image with a different aspect ratio from the image before conversion.
The conversion applied to the image according to the text regions is an unequal-ratio resolution conversion, meaning that the aspect ratio of the image resolution before conversion differs from the aspect ratio of the image resolution after conversion.
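As a minimal illustration of this unequal-ratio condition, the following sketch (the function name and example resolutions are illustrative, not from the patent) checks whether a source and target resolution have different aspect ratios:

```python
from fractions import Fraction

def is_unequal_ratio(src, dst):
    """True when the source and target resolutions (width, height)
    have different aspect ratios, i.e. an unequal-ratio conversion."""
    return Fraction(src[0], src[1]) != Fraction(dst[0], dst[1])

# 1024x768 is 4:3 and 1280x720 is 16:9, so this conversion is unequal-ratio.
print(is_unequal_ratio((1024, 768), (1280, 720)))
```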
The conversion may be a scaling process or a cropping process. Scaling comprises: setting the weight of the text regions to a high value and, according to the weights of the text regions and non-text regions, scaling the image to the set aspect ratio with a nonlinear scaling algorithm. Cropping comprises: cropping the image to the set aspect ratio with a cropping algorithm, according to the text regions, such that the cropped image still contains the text regions.
As the first embodiment shows, text region detection is performed on the image to obtain the detected text regions, and the image is then converted with an unequal-ratio resolution according to those regions, so important content such as the text regions is not lost in the converted image but is preserved and displayed without distortion.
The technical solutions of the second embodiment and the third embodiment of the present invention are described in more detail below.
FIG. 2 is a flowchart of a second image conversion method according to an embodiment of the present invention, which mainly includes the following steps:
step 201, detecting a character area.
After an input image is acquired, a character region in the image is detected, and the detected character region is obtained.
Commonly used text region localization algorithms include: a connected domain based localization algorithm, an edge based localization algorithm, and a texture energy based localization algorithm.
The positioning algorithm based on the connected domain mainly utilizes the color characteristics of the character area; the edge-based positioning algorithm mainly utilizes the characteristics of dense edges and regular directivity of character areas, but because the edge densities and directions of different characters have different characteristics and the background content is unknown, the edge-based positioning algorithm is generally matched with other algorithms for use; the positioning algorithm based on the texture energy mainly takes the character area as a special texture, and the method based on the texture has stronger robustness.
The embodiment of the invention adopts a positioning algorithm based on the combination of edges and texture energy, the algorithm calculates the texture energy of different areas of the image through discrete cosine transform, completes the initial positioning of the character area according to the texture energy, and completes the accurate positioning of the character area by combining edge constraint conditions.
The process of step 201 is shown in fig. 3, a flowchart of text region detection in the second embodiment of the present invention, and comprises the following steps:
step 301, determining an area to be detected.
First, the regions to be detected are preliminarily determined. Text regions in an image are typically located near the corners of the image, so the corner positions can be taken as the regions to be detected. Taking detection of meeting information in a video conference system as an example, such information is usually displayed at the 4 corner positions (top, bottom, left, right) of the image, so those 4 corner positions can be selected as the regions to be detected.
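The corner-based choice of detection areas can be sketched as follows; the 25% corner fraction and the function name are assumptions for illustration only:

```python
import numpy as np

def corner_regions(img, frac=0.25):
    """Slice the four corner areas of an image as candidate regions for
    text detection; frac (fraction of each dimension) is an assumed value."""
    h, w = img.shape[:2]
    ch, cw = int(h * frac), int(w * frac)
    return {
        "top_left": img[:ch, :cw],
        "top_right": img[:ch, w - cw:],
        "bottom_left": img[h - ch:, :cw],
        "bottom_right": img[h - ch:, w - cw:],
    }
```

Each returned view can then be block-divided and scored for texture energy as described in the next step.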
Step 302, calculating texture energy of the region to be detected to obtain an energy distribution condition.
Assume the 4 corner positions (top, bottom, left, right) of the image are the 4 regions to be detected. Each region is divided into small blocks, for example 8×8 pixel blocks, and a discrete cosine transform is applied to each block. Coefficients at different positions in the frequency domain represent the high- and low-frequency parts of the image (the high-frequency part carries more energy for textured content, the low-frequency part less), so the total texture energy of each block can be computed by combining a selection of transform coefficients. In many cases the energy of a text region in a single color space is very low, so the texture energy can be computed separately in the Y, Cb, and Cr spaces and the three summed to give each block's total texture energy.
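A minimal sketch of the per-block DCT texture energy described above; the orthonormal DCT-II basis and the choice of summing all squared AC coefficients are assumptions, since the patent does not specify which coefficients are combined:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are frequency vectors)."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(2)
    return M * np.sqrt(2 / n)

def block_texture_energy(channel, block=8):
    """Texture energy of each block x block tile of one color channel:
    the sum of squared AC (non-DC) DCT coefficients. Summing this over
    the Y, Cb and Cr channels gives a block's total texture energy."""
    M = dct_matrix(block)
    h = channel.shape[0] - channel.shape[0] % block
    w = channel.shape[1] - channel.shape[1] % block
    energy = np.zeros((h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = M @ channel[i:i + block, j:j + block].astype(float) @ M.T
            energy[i // block, j // block] = (c ** 2).sum() - c[0, 0] ** 2
    return energy
```

A flat block yields near-zero energy, while a block with alternating rows yields high energy, matching the intuition that text areas are texture-rich.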
Step 303, determining important areas and unimportant areas.
Because the size and length of a text region are unknown, it cannot be reliably selected with a single threshold, so two thresholds are set: a first threshold and a second threshold. Blocks whose total texture energy exceeds the first threshold are selected; among the selected blocks, those whose energy also exceeds the second threshold are marked as important regions, and all remaining regions are non-important regions. The important regions are generally text regions with high confidence, while the non-important regions contain both low-energy text regions and non-text regions.
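The two-threshold selection can be sketched as below; the threshold values and the dictionary representation of block energies are illustrative assumptions:

```python
def classify_blocks(energies, t1, t2):
    """Two-threshold selection: blocks above t1 are candidates; candidates
    above t2 become 'important' (high-confidence text) and everything else
    is 'non-important'. The threshold values are application-dependent."""
    candidates = {pos for pos, e in energies.items() if e > t1}
    important = {pos for pos in candidates if energies[pos] > t2}
    non_important = set(energies) - important
    return important, non_important

energies = {(0, 0): 5.0, (0, 1): 40.0, (1, 0): 90.0, (1, 1): 120.0}
imp, rest = classify_blocks(energies, t1=30.0, t2=80.0)
```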
And step 304, merging the non-important areas based on the important areas to determine the detected character areas.
Starting from the important regions, the neighboring non-important regions are merged in to obtain a more accurately positioned text region.
Step 202, identify the text area.
The detected text regions can then be identified, for example as a set of rectangles, each assigned a weight: a rectangle indicates the extent of a text region and its weight indicates the region's importance.
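One possible (assumed, not from the patent) data structure for the identified regions is a rectangle plus weight:

```python
from dataclasses import dataclass

@dataclass
class TextRegion:
    """A detected text region: bounding rectangle plus importance weight.
    Field names are illustrative; the patent only requires a rectangle
    (the range of the region) and a weight (its importance)."""
    x: int        # left edge (column)
    y: int        # top edge (row)
    width: int
    height: int
    weight: float

# e.g. a caption strip near the bottom of a 480-line image
caption = TextRegion(x=20, y=440, width=280, height=32, weight=1.0)
```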
And step 203, performing conversion processing by adopting a nonlinear scaling algorithm.
This step employs a non-linear scaling algorithm based on Region of interest (ROI) mapping. The algorithm takes into account the weight of each pixel in the image, with a smaller scale and thus less distortion for high weight pixels and a larger scale and corresponding larger distortion for low weight pixels. Since the text region is a high-weight region, it is possible to ensure that the text region is not deformed or is less deformed at the time of scaling.
This step requires determining an ROI map of the image; the source image is then mapped according to the weight of each pixel in the ROI map. Pixels in the ROI map correspond one-to-one with pixels in the source image, except that the value of an ROI-map pixel is the weight of the corresponding source pixel. The weight may be an integer or a floating-point number. The weighting criterion is an energy equation: flat regions of the image lacking texture generally have low energy, while regions rich in texture and edges have high energy. The energy equation takes many forms; common choices are the L1-norm gradient energy $e_1$ and the L2-norm gradient energy $e_2$, shown in formula (1) and formula (2) below.
$e_1(I) = \left|\frac{\partial I}{\partial x}\right| + \left|\frac{\partial I}{\partial y}\right|$  Formula (1)
$e_2(I) = \sqrt{\left|\frac{\partial I}{\partial x}\right|^2 + \left|\frac{\partial I}{\partial y}\right|^2}$  Formula (2)
In the formulas, I denotes the gray value of a pixel, x its horizontal coordinate (column index), and y its vertical coordinate (row index).
A high energy value may be set for particular high-weight regions, such as text regions or face regions. For example, in the embodiment of the invention the text region is important and its distortion should be as small as possible, so its pixels can be set to a large energy value such as the maximum energy value in the entire image, that is, $E_{\text{Text}}(i, j) = \max(E)$, where i denotes the row index of a text-region pixel, j its column index, E the energy, and $E_{\text{Text}}$ the text-region energy.
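A sketch of building such an ROI energy map, using the L1-norm gradient energy of formula (1) and raising each detected text rectangle to the image's maximum energy; the (x, y, w, h) rectangle format is an assumption:

```python
import numpy as np

def roi_map(gray, text_rects):
    """ROI (weight) map from the L1-norm gradient energy of formula (1),
    with each detected text rectangle (x, y, w, h) raised to the maximum
    energy in the image, as in E_Text(i, j) = max(E)."""
    gy, gx = np.gradient(gray.astype(float))   # axis 0 = y, axis 1 = x
    E = np.abs(gx) + np.abs(gy)
    e_max = E.max()
    for x, y, w, h in text_rects:
        E[y:y + h, x:x + w] = e_max
    return E
```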
(1) Case of reducing image
For each pixel, a reduction value can be computed from the weights, giving a reduction map for the entire image. Consider a single row: to remove 1 pixel from a row of width w, the reduction value of pixel i is

$s(i) = \frac{1/E(i)}{\sum_{j=1}^{w} 1/E(j)}$

where i and j index the pixels within the row, so that the reduction values of a row sum to 1.
To remove k pixels per row, the reduction of pixel i is s(k, i) = k·s(i). To avoid reordering of the mapped pixel positions, the reduction of any single pixel must not exceed 1, that is, the following two conditions must be satisfied:

$S(i) = \min(k_0\, s(i), 1)$,  $\sum_{i=1}^{w} \min(k_0\, s(i), 1) = k$

where $k_0$ is a scale factor chosen so that the capped reductions still sum to k.
it should be noted that the processing for each column is similar to the processing for each row.
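The constrained reduction values can be computed, for example, by solving for the scale factor k0 with bisection; this realization is an assumption, since the patent does not state how k0 is found:

```python
import numpy as np

def reduction_map_row(E, k, iters=60):
    """Per-pixel reduction values for removing k pixels from one row with
    per-pixel energies E: s(i) is proportional to 1/E(i), each reduction is
    capped at 1, and the scale factor k0 is found by bisection so that the
    capped reductions sum to k."""
    s = (1.0 / E) / np.sum(1.0 / E)         # reductions for removing 1 pixel
    lo, hi = 0.0, float(len(E)) / s.min()   # bracket for k0
    for _ in range(iters):
        k0 = 0.5 * (lo + hi)
        if np.minimum(k0 * s, 1.0).sum() < k:
            lo = k0
        else:
            hi = k0
    return np.minimum(k0 * s, 1.0)
```

Low-energy (flat) pixels receive larger reductions than high-energy (textured or text) pixels, which is exactly the nonlinear behavior the algorithm relies on.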
To avoid the aliasing phenomenon, the correlation between image rows or columns is further considered. For line scaling, s (x, y) ≈ s (x, y-1), and the larger the energy E (x, y) of a certain pixel, the closer s (x, y-1) is to s (x, y), so the energy map can be processed to smooth the image in 2D space using the following constraints:
E(x,y)=K1·E(x,y-1)+K2·E(x,y)
the parameters can be set according to actual conditions, and generally can be set as follows: k1=1,K20 to 0.2. For video sequences, if scaling is performed on a frame-by-frame basis, significant jitter may occur due to the correlation of the resulting scaled-down image with the image content, and therefore 3D temporal smoothing may need to be considered for video scaling. Two approaches may be used in 3D temporal smoothing: for the case of small changes in image content, a reduced image may be determined only in the first frame, and subsequent frames are scaled using the reduced image, that is: s (x, y, t) is S (x, y, 1). Another method is to use a 2D-like spatial smoothing method to smooth two adjacent frames in the time domain. Two temporally adjacent frames have a high correlation, so the energy map can be processed using the following constraints: e (x, y, t) ═ K3·E(x,y,t-1)+K4·E(x,y,t)
The parameters can be set according to actual conditions, and generally can be set as follows: k3=1,K4=0~0.2。
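The temporal smoothing constraint is a one-line recurrence; the function below is a sketch, with K4 = 0.1 an assumed default from the 0 to 0.2 range given in the text:

```python
def smooth_energy_temporal(E_prev, E_curr, K3=1.0, K4=0.1):
    """Temporal smoothing of the energy map between adjacent frames:
    E(x, y, t) = K3*E(x, y, t-1) + K4*E(x, y, t), with K3 = 1 and
    K4 in [0, 0.2] as suggested in the text (K4 = 0.1 is an assumed default)."""
    return K3 * E_prev + K4 * E_curr
```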
After the reduction map is determined, the source image must be mapped to the target image. Using forward mapping, the target-image position of a given source pixel is computed from the accumulated reduction values; for row scaling:

$x'_i = x_i - \sum_{j=1}^{i-1} s(j)$
The resulting target row coordinate x' is a floating-point value; an integer coordinate is obtained through interpolation, for example bilinear interpolation or cubic convolution.
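A sketch of the forward mapping and resampling for one row, assuming linear interpolation as the resampling step:

```python
import numpy as np

def forward_map_row(row, s):
    """Forward-map one image row to its reduced width: each source pixel
    moves to x'_i = x_i - sum_{j<i} s(j), and the floating-point positions
    are resampled at integer targets by linear interpolation."""
    w = len(row)
    x_new = np.arange(w) - np.concatenate(([0.0], np.cumsum(s[:-1])))
    out_w = int(round(w - s.sum()))
    targets = np.arange(out_w)
    # x_new is increasing as long as every s(j) < 1, so np.interp applies.
    return np.interp(targets, x_new, row.astype(float))
```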
(2) Case of magnifying image
For each pixel, a reduction value can again be computed from the weights; for enlargement the reduction can be treated as negative, that is, for expanding by k pixels the reduction value can be expressed as S(k, i) = -k·s(i). For image enlargement there is no capping constraint: a pixel cannot be reduced by more than 1 unit, but it can be enlarged without limit.
After the conversion processing is carried out by adopting the nonlinear scaling algorithm, a converted image is obtained, and then the image can be output for display.
Note that the above uses a nonlinear scaling algorithm based on Region-of-Interest (ROI) mapping as an example, but the invention is not limited to it; other nonlinear scaling algorithms, such as a sub-region nonlinear scaling algorithm, may also be used. The main difference is that a sub-region algorithm assigns a weight to each configured region rather than to each pixel; the rest of the processing is similar.
As the second embodiment shows, the technical solution detects the text regions of the image, identifies their weights, and then converts the image with a nonlinear scaling algorithm according to the text regions, so important content such as the text regions is not lost in the converted image but is preserved and displayed without distortion.
Fig. 4 is a flowchart of an image conversion method according to a third embodiment of the present invention, and the third embodiment is different from the second embodiment mainly in that an algorithm used in the conversion process is different. As shown in fig. 4, the method mainly comprises the following steps:
step 401, detecting a text area.
Step 402, identify a text region.
The contents of steps 401 and 402 are substantially the same as those of steps 201 and 202 in the second embodiment and are not repeated here. Note that in this embodiment no weight needs to be identified for the text regions.
And 403, performing conversion processing by adopting an intelligent cutting algorithm.
When the intelligent cropping algorithm is used for cropping the image, the position of the character area determined in the previous step is considered, so that the cropping area does not comprise the character area, and the phenomenon of content loss of the character area is avoided.
The process of step 403 is shown in fig. 5, a flowchart of the vertical-direction processing of the intelligent cropping algorithm in the third embodiment of the present invention, and comprises the following steps:
step 501, judging whether the top end of the image contains characters, if so, entering step 502, and if not, entering step 503;
judging whether the character area is located above or below the image according to the character area detected in the previous step can be completed by comparing the distance from the height coordinate of the character area to the upper boundary and the lower boundary of the image. If the text area is located above the image, it can be determined that the top of the image contains text, then step 502 is entered, otherwise step 503 is entered.
502, determining the vertical direction coordinate of a character area close to the top edge;
when only one character area is assumed above the image, directly determining the vertical direction coordinate of the character area close to the top edge; suppose that there are two text regions T above the image1And T2Then it is determined which of the two text regions is closest to the edge of the image, e.g. T2Ratio T1Closer to the upper boundary of the image, T is determined2Vertical coordinates of the text area near the top edge, i.e. T2The distance of the height coordinate from the upper boundary is taken as the image height to be cropped.
Step 503, judging whether the bottom end of the image contains characters, if so, entering step 504, and if not, entering step 505;
judging whether the character area is located above or below the image according to the character area detected in the previous step can be completed by comparing the distance from the height coordinate of the character area to the upper boundary and the lower boundary of the image. If there is a text area located below the image, it can be determined that the bottom of the image contains text, then step 504 is entered, otherwise step 505 is entered.
Step 504, determining the vertical direction coordinate of the character area close to the bottom edge;
when only one character area is assumed below the image, the vertical direction coordinate of the character area close to the bottom edge is directly determined; suppose that there are two text regions T below the image1And T2Then it is determined which of the two text regions is closest to the edge of the image, e.g. T2Ratio T1Closer to the lower boundary of the image, T is determined2Vertical coordinates of the text area near the bottom edge, i.e. T2The distance from the height coordinate to the lower boundary is taken as the image height to be cropped.
And 505, performing vertical cutting.
If there is a text region only at the top, the top region to be preserved is determined from the vertical coordinate of the text region near the top edge, and the remaining crop height is then taken from the lower boundary region of the image. If there is a text region only at the bottom, the bottom region to be preserved is determined from the vertical coordinate of the text region near the bottom edge, and the remaining crop height is taken from the upper boundary region. If there are text regions at both the top and the bottom, both must be preserved according to their vertical coordinates near the top and bottom edges, and the cropping is applied to the other areas.
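A sketch of the vertical cropping decision; the (y, h) box format and the equal split when text appears at both edges (where the description instead falls back to scaling) are assumptions:

```python
def vertical_crop(img_h, target_h, text_boxes):
    """Choose a vertical crop window [top, bottom) of height target_h that
    keeps text near the top/bottom edges. Each box is (y, h); a box is
    'near the top' when its distance to the upper boundary is smaller than
    to the lower boundary."""
    cut = img_h - target_h
    near_top = [b for b in text_boxes if b[0] < img_h - (b[0] + b[1])]
    near_bottom = [b for b in text_boxes if b[0] >= img_h - (b[0] + b[1])]
    if near_top and not near_bottom:
        return 0, target_h              # crop only from the lower boundary
    if near_bottom and not near_top:
        return cut, img_h               # crop only from the upper boundary
    # Text at both edges (or none): split the cut evenly as a placeholder;
    # with text at both edges the document instead falls back to scaling.
    return cut // 2, cut // 2 + target_h
```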
After the intelligent cutting algorithm is adopted for conversion processing, a converted image is obtained, and the image can be output for display.
Note that, the above description is given by exemplifying the cutting in the vertical direction, and the cutting in the horizontal direction is similar.
It should be noted that, if the cropped area cannot meet the resolution requirement of the target image because there is a text area above and below, the cropped image may be scaled linearly or non-linearly to achieve the resolution of the target image. In the non-linear scaling, reference may be made to the method in the second embodiment, and the principle is the same, or other scaling methods may be adopted.
It should be noted that the above is illustrated by using an intelligent cropping algorithm for performing the conversion processing, but the invention is not limited thereto, and other algorithms for performing the cropping in consideration of the text area are also possible.
As the third embodiment shows, the technical solution detects the text regions of the image and then converts the image with an intelligent cropping algorithm according to those regions, so important content such as the text regions is not lost in the converted image but is preserved and displayed without distortion.
The above description is given by taking an example of performing conversion processing by using a non-linear scaling algorithm or an intelligent clipping algorithm after detecting a text region, but the present invention is not limited to this, and other scaling processing methods may be used.
The foregoing describes the image conversion method in detail, and accordingly, the embodiment of the present invention provides a conversion apparatus and a display system.
Fig. 6 is a schematic structural diagram of a conversion device according to an embodiment of the present invention.
As shown in fig. 6, the conversion apparatus includes: a detection unit 61 and a conversion unit 62.
A detection unit 61 configured to perform text region detection on the image to obtain a detected text region;
and a conversion unit 62, configured to perform conversion processing on the image according to the text region, so as to obtain a converted image having an aspect ratio different from that of the image before conversion.
The conversion apparatus may further include: the unit 63 is identified.
An identification unit 63 configured to identify a weight for the detected character region; the conversion unit 62 further refers to the weight of the text region when performing conversion processing on an image.
The conversion unit 62 may set the weight of the text region identified by the identification unit 63 to a high weight and, according to the weights of the text region and the non-text region, scale the image to the set aspect ratio with a non-linear scaling algorithm; the algorithm may be, for example, one that supports mapping based on important regions. Alternatively,
the conversion unit 62 may crop the image to the set aspect ratio with a cropping algorithm, such as an intelligent cropping algorithm, so that the cropped image contains the text region.
If the aspect ratio of the cropped image turns out not to equal the set aspect ratio, the conversion unit 62 further scales the cropped image.
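The crop-then-scale behaviour of the conversion unit 62 can be sketched as follows. This is an illustrative reconstruction under assumptions, not the intelligent cropping algorithm itself: the window of the set aspect ratio is simply centred on the detected text box and clamped to the image, and the function reports whether a follow-up scaling pass is still needed because the crop could not reach the set aspect ratio without cutting into the text. All names and the centring heuristic are invented for the example.

```python
def crop_for_aspect(img_w, img_h, text_box, target_ratio):
    """Pick a crop window that contains the detected text box.

    text_box     -- (x, y, w, h) of the detected text region
    target_ratio -- set aspect ratio (width / height)
    Returns ((left, top, crop_w, crop_h), needs_scaling); needs_scaling
    is True when the caller must still scale the crop to the set ratio.
    """
    x, y, w, h = text_box
    # Widest window at the target ratio that fits the full image height.
    crop_h = img_h
    crop_w = min(img_w, round(crop_h * target_ratio))
    if crop_w < w:
        # Text wider than the window: keep the whole text region and
        # leave the aspect-ratio fix to a later scaling pass.
        crop_w = w
    # Centre the window on the text region, clamped to the image borders.
    cx = x + w / 2
    left = int(min(max(0, cx - crop_w / 2), img_w - crop_w))
    needs_scaling = abs(crop_w / crop_h - target_ratio) > 1e-6
    return (left, 0, crop_w, crop_h), needs_scaling
```

The second return value corresponds to the case handled above: when the cropped image cannot reach the set aspect ratio (here, because the text region is wider than the ideal window), the caller scales the cropped image afterwards.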
Fig. 7 is a schematic structural diagram of a display system according to an embodiment of the present invention. As shown in fig. 7, the display system includes: a conversion device 71 and a display device 72.
A conversion device 71, configured to perform text region detection on the image to obtain a detected text region, and to convert the image according to the text region into a converted image whose aspect ratio differs from that of the image before conversion;
a display device 72 for displaying the converted image.
The conversion device 71 has the structure shown in fig. 6; see the foregoing description for details, which are not repeated here.
In summary, in the embodiments of the present invention, text region detection is first performed on the image to obtain a detected text region, and the image is then converted to a different aspect ratio according to that region, so that important content areas such as the text region are not lost in the converted image but are retained and displayed without distortion.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware instructed by a program stored in a computer-readable storage medium; the storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
The image conversion method, conversion device, and display system provided by the embodiments of the invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the above embodiments is only intended to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the invention, vary the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the invention.

Claims (9)

1. An image conversion method, comprising:
performing text region detection on an image to obtain a detected text region;
assigning a weight to the detected text region and, with reference to the weight of the text region, performing conversion processing on the image according to the text region, wherein the conversion processing comprises: scaling the image according to the text region, or cropping the image according to the text region, to obtain a converted image whose aspect ratio differs from that of the image before conversion.
2. The image conversion method according to claim 1, characterized in that:
the scaling comprises: setting the weight of the text region to a high weight and, according to the weights of the text region and the non-text region, scaling the image to a set aspect ratio with a non-linear scaling algorithm.
3. The image conversion method according to claim 1, characterized in that:
the cropping comprises: cropping the image to a set aspect ratio with a cropping algorithm according to the text region, wherein the cropped image contains the text region.
4. The image conversion method according to claim 3, characterized by further comprising:
if the aspect ratio of the cropped image does not reach the set aspect ratio, further scaling the cropped image.
5. A conversion apparatus, comprising:
a detection unit, configured to perform text region detection on an image to obtain a detected text region;
an identification unit, configured to assign a weight to the detected text region;
and a conversion unit, configured to, with reference to the weight of the text region, perform conversion processing on the image according to the text region, wherein the conversion processing comprises: scaling the image according to the text region, or cropping the image according to the text region, to obtain a converted image whose aspect ratio differs from that of the image before conversion.
6. The conversion apparatus of claim 5, wherein:
the conversion unit sets the weight of the text region identified by the identification unit to a high weight and, according to the weights of the text region and the non-text region, scales the image to a set aspect ratio with a non-linear scaling algorithm.
7. The conversion apparatus of claim 5, wherein:
the conversion unit crops the image to a set aspect ratio with a cropping algorithm, and the cropped image contains the text region.
8. The conversion apparatus of claim 7, wherein:
the conversion unit further scales the cropped image when it determines that the aspect ratio of the cropped image does not reach the set aspect ratio.
9. A display system, comprising:
a conversion device, configured to perform text region detection on an image to obtain a detected text region; assign a weight to the detected text region; and, with reference to the weight of the text region, perform conversion processing on the image according to the text region, wherein the conversion processing comprises: scaling the image according to the text region, or cropping the image according to the text region, to obtain a converted image whose aspect ratio differs from that of the image before conversion;
and a display device, configured to display the converted image.
CN2009101361557A 2009-04-30 2009-04-30 Image conversion method, device and display system Active CN101547323B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN2009101361557A CN101547323B (en) 2009-04-30 2009-04-30 Image conversion method, device and display system
EP10769288.1A EP2426638B1 (en) 2009-04-30 2010-04-25 Image conversion method, conversion device and display system
PCT/CN2010/072178 WO2010124599A1 (en) 2009-04-30 2010-04-25 Image conversion method, conversion device and display system
US13/284,227 US8503823B2 (en) 2009-04-30 2011-10-28 Method, device and display system for converting an image according to detected word areas

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009101361557A CN101547323B (en) 2009-04-30 2009-04-30 Image conversion method, device and display system

Publications (2)

Publication Number Publication Date
CN101547323A CN101547323A (en) 2009-09-30
CN101547323B true CN101547323B (en) 2011-04-13

Family

ID=41194157

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009101361557A Active CN101547323B (en) 2009-04-30 2009-04-30 Image conversion method, device and display system

Country Status (1)

Country Link
CN (1) CN101547323B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2426638B1 (en) 2009-04-30 2013-09-18 Huawei Device Co., Ltd. Image conversion method, conversion device and display system
CN103678300B (en) * 2012-08-30 2020-02-07 深圳市世纪光速信息技术有限公司 Picture conversion method and device
CN103686056B (en) * 2012-09-24 2017-07-28 鸿富锦精密工业(深圳)有限公司 The method for processing video frequency of conference terminal and the conference terminal
CN104700357A (en) * 2015-04-14 2015-06-10 华东理工大学 Chinese character image zooming method based on bilinear operator

Also Published As

Publication number Publication date
CN101547323A (en) 2009-09-30

Similar Documents

Publication Publication Date Title
US8649635B2 (en) Image scaling method and apparatus
JP6016061B2 (en) Image generation apparatus, image display apparatus, image generation method, and image generation program
JP6094863B2 (en) Image processing apparatus, image processing method, program, integrated circuit
CN102883175B (en) Methods for extracting depth map, judging video scene change and optimizing edge of depth map
EP2466901B1 (en) Depth data upsampling
CN101163224A (en) Super-resolution device and method
CN106204441B (en) Image local amplification method and device
US20140320534A1 (en) Image processing apparatus, and image processing method
CN103440664B (en) Method, system and computing device for generating high-resolution depth map
US9105106B2 (en) Two-dimensional super resolution scaling
US10412374B2 (en) Image processing apparatus and image processing method for imaging an image by utilization of a pseudo image
Harb et al. Improved image magnification algorithm based on Otsu thresholding
US9013549B2 (en) Depth map generation for conversion of two-dimensional image data into three-dimensional image data
WO2018113224A1 (en) Picture reduction method and device
CN101547323B (en) Image conversion method, device and display system
CN103369208A (en) Self-adaptive de-interlacing method and device
US20230401855A1 (en) Method, system and computer readable media for object detection coverage estimation
US8295647B2 (en) Compressibility-aware media retargeting with structure preserving
US10089954B2 (en) Method for combined transformation of the scale and aspect ratio of a picture
Li et al. Space–time super-resolution with patch group cuts prior
KR101262164B1 (en) Method for generating high resolution depth image from low resolution depth image, and medium recording the same
CN109325909B (en) Image amplification method and image amplification device
US8503823B2 (en) Method, device and display system for converting an image according to detected word areas
WO2011121563A1 (en) Detecting saliency in an image
CN111784733A (en) Image processing method, device, terminal and computer readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 518129 Building 2, B District, Bantian HUAWEI base, Longgang District, Shenzhen, Guangdong.

Patentee after: Huawei terminal (Shenzhen) Co.,Ltd.

Address before: 518129 Building 2, B District, Bantian HUAWEI base, Longgang District, Shenzhen, Guangdong.

Patentee before: HUAWEI DEVICE Co.,Ltd.

CP01 Change in the name or title of a patent holder
TR01 Transfer of patent right

Effective date of registration: 20181218

Address after: 523808 Southern Factory Building (Phase I) Project B2 Production Plant-5, New Town Avenue, Songshan Lake High-tech Industrial Development Zone, Dongguan City, Guangdong Province

Patentee after: HUAWEI DEVICE Co.,Ltd.

Address before: 518129 Building 2, B District, Bantian HUAWEI base, Longgang District, Shenzhen, Guangdong.

Patentee before: Huawei terminal (Shenzhen) Co.,Ltd.

TR01 Transfer of patent right