CN108763350B - Text data processing method and device, storage medium and terminal - Google Patents


Publication number
CN108763350B
Authority
CN
China
Prior art keywords
text
image
attribute information
data
compressed data
Prior art date
Legal status
Active
Application number
CN201810461484.8A
Other languages
Chinese (zh)
Other versions
CN108763350A (en)
Inventor
王宇鹭
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810461484.8A
Publication of CN108763350A
Application granted
Publication of CN108763350B

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)

Abstract

The embodiment of the application discloses a text data processing method and device, a storage medium, and a terminal. The method comprises the following steps: acquiring text attribute information of text data in a first image, wherein the text attribute information is used for describing the text data; performing a first compression operation on the text attribute information to obtain text compressed data; performing a second compression operation on the first image to obtain image compressed data, wherein the compression ratio of the first compression operation is greater than that of the second compression operation; and when the first image is output, synthesizing and outputting the first image according to the image compressed data and the text compressed data. The scheme can reduce the storage power consumption of text data.

Description

Text data processing method and device, storage medium and terminal
Technical Field
The embodiment of the application relates to the technical field of mobile terminal image processing, in particular to a text data processing method, a text data processing device, a storage medium and a terminal.
Background
With the continuous development of mobile terminals, users can take photos through a mobile terminal. When the mobile terminal stores a photo, each pixel point in the photo is compressed and stored.
However, when the photo contains text, the text portions are still stored in the form of image pixels, so the text portions occupy a large amount of storage space and increase storage power consumption.
Disclosure of Invention
An object of the embodiments of the present application is to provide a text data processing method, a device, a storage medium, and a terminal that can reduce the storage power consumption of text data.
In a first aspect, an embodiment of the present application provides a text data processing method, including:
acquiring text attribute information of text data in a first image, wherein the text attribute information is used for describing the text data;
performing first compression operation on the text attribute information to obtain text compressed data;
performing a second compression operation on the first image to obtain image compressed data, wherein the compression ratio of the first compression operation is greater than that of the second compression operation;
when a first image is output, the first image is synthesized and output according to the image compressed data and the text compressed data.
In a second aspect, an embodiment of the present application provides a text data processing apparatus, including:
the acquisition module is used for acquiring text attribute information of text data in the first image, wherein the text attribute information is used for describing the text data;
the first compression module is used for performing first compression operation on the text attribute information acquired by the acquisition module to obtain text compressed data;
the second compression module is used for performing a second compression operation on the first image to obtain image compressed data, wherein the compression ratio of the first compression operation is greater than that of the second compression operation;
and the output module is used for synthesizing and outputting the first image according to the image compressed data obtained by the second compression module and the text compressed data obtained by the first compression module when the first image is output.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the text data processing method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a terminal, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the text data processing method according to the first aspect.
According to the text data processing scheme provided by the embodiment of the application, text attribute information describing the text data in a first image is first obtained; a first compression operation is then performed on the text attribute information to obtain text compressed data; a second compression operation is performed on the first image to obtain image compressed data; finally, when the first image is output, it is synthesized and output from the image compressed data and the text compressed data. The storage power consumption of text data can thereby be reduced.
Drawings
Fig. 1 is a schematic flowchart of a text data processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another text data processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another text data processing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of another text data processing method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another text data processing method according to an embodiment of the present application;
fig. 6 is a schematic flowchart of another text data processing method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a text data processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Detailed Description
The technical scheme of the application is further explained below through specific embodiments in combination with the drawings. It is to be understood that the specific embodiments described herein merely illustrate the application and do not limit it. It should further be noted that, for convenience of description, the drawings show only some of the structures related to the present application, not all of them.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
With the continuous development of mobile terminals, users can take photos through a mobile terminal. When the mobile terminal stores a photo, each pixel point in the photo is compressed and stored. However, when the photo contains text, the text portions are still stored in the form of image pixels, so the text portions occupy a large amount of storage space and increase storage power consumption.
The embodiment of the application provides a text data processing method. During storage, the text data and the image data in an image are stored after undergoing different compression operations; during output, the original image is synthesized and output from the text compressed data and the image compressed data. Text data processing can thus be completed quickly while occupying little storage space and reducing storage power consumption. The specific scheme is as follows:
fig. 1 is a schematic flow diagram of a text data processing method according to an embodiment of the present application, where the method is used when a terminal processes an image with text data, and the method may be executed by a mobile terminal, where the mobile terminal may be a smartphone, a tablet computer, a wearable device, a notebook computer, and the like, and the method specifically includes the following steps:
step 110, obtaining text attribute information of the text data in the first image.
Wherein the text attribute information is used to describe the text data.
The first image is an image to be processed that contains text data; it may be, for example, a photo to be stored after a user adds a text label to it after shooting. The text data refers to the text content in the first image and includes at least one of a word, a letter, or a number in the first image. The text attribute information is attribute information describing the text data, including the content of the text data, its position in the image, rendering information, and the like.
Optionally, the text attribute information of the text data in the first image may be obtained in response to a data processing instruction, for example, in response to an instruction issued by a user to store an image containing text data, or in response to an instruction to output such an image.
Optionally, the text attribute information may be obtained by first locating the position of the text data in the first image and then obtaining the text attribute information describing the text data at that position. For example, in an image, the position of the text portion is first located, and then text attribute information such as the specific position, text content, text color, font, and thickness is determined.
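As a concrete illustration, the text attribute information described above can be pictured as a small record of content, position, and rendering fields. The field names below are assumptions for illustration only; the patent does not prescribe a data layout.

```python
from dataclasses import dataclass, asdict

@dataclass
class TextAttributes:
    """Hypothetical container for text attribute information; the field
    names are illustrative, not the patent's actual layout."""
    content: str     # the text content itself
    position: tuple  # (x, y) of the text within the image
    color: str       # rendering color
    font: str        # font family
    weight: str      # thickness, e.g. "normal" or "bold"

# Attributes of a hypothetical text label added to a photo.
attrs = TextAttributes(content="rich lunch", position=(120, 340),
                       color="#FFFFFF", font="sans-serif", weight="bold")
record = asdict(attrs)  # plain dict, ready for serialization and compression
```

A record like this is what the first compression operation would be applied to.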
And step 120, performing a first compression operation on the text attribute information to obtain text compressed data.
The first compression operation compresses the text attribute information, that is, represents it with fewer bits or bytes to reduce its storage space. Many specific compression operations are possible, and the present application does not limit them; for example, Huffman coding or arithmetic coding may be used. The text compressed data is the result obtained by performing the first compression operation on the text attribute information.
Optionally, when the text attribute information is compressed, the compression operation can be divided into four parts: modeling, redundancy removal, conversion, and encoding. The first three parts (modeling, redundancy removal, and conversion) find an optimal method for encoding the text attribute information, and the fourth part encodes the text attribute information into fewer characters and outputs the result as the text compressed data.
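The four-part pipeline above can be sketched with Python's standard `zlib` module, whose DEFLATE algorithm combines LZ77 modeling and redundancy removal with Huffman coding. The patent does not name a specific codec, so zlib here is only a stand-in for the first compression operation, and the attribute field names are assumptions.

```python
import json
import zlib

# Illustrative text attribute information (field names are assumptions).
text_attributes = {
    "content": "rich lunch",
    "position": [120, 340],
    "color": "#FFFFFF",
    "font": "sans-serif",
}

# Modeling / redundancy removal / conversion: serialize the attributes
# into a byte string the encoder can work on.
raw = json.dumps(text_attributes, sort_keys=True).encode("utf-8")

# Encoding: DEFLATE, which internally applies Huffman coding.
text_compressed_data = zlib.compress(raw, level=9)

# The first compression operation is lossless: decompression restores
# the text attribute information exactly.
restored = json.loads(zlib.decompress(text_compressed_data))
assert restored == text_attributes
```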
And step 130, performing a second compression operation on the first image to obtain image compressed data.
Wherein the compression ratio of the first compression operation is greater than the compression ratio of the second compression operation.
The second compression operation compresses the image information, i.e., removes redundant data in the image so that the two-dimensional pixel matrix of the original image is represented in a smaller storage space. Optionally, the first compression operation for the text attribute information and the second compression operation for the image are two different compression operations, and the compression ratio of the first compression operation is greater than that of the second compression operation; that is, the first compression operation reduces the text attribute information by a larger factor than the second compression operation reduces the image. The image compressed data is the result obtained by performing the second compression operation on the first image.
Optionally, when the second compression operation is performed on the first image, the two-dimensional pixel array in the first image may be converted into a statistically uncorrelated data set, and then redundant data in the data set is removed.
For example, each pixel point in an image is converted into a vector representing its position coordinates and gray value, and the vectors of all pixel points form a data set. Since a large amount of redundant data exists in the data set (e.g., spatial redundancy caused by the correlation between adjacent pixels), the redundant data is removed to reduce the space occupied for storage.
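Run-length encoding is one minimal way to remove the spatial redundancy just described: runs of identical adjacent gray values collapse into (value, count) pairs. This is only a toy stand-in for the second compression operation, not the codec the patent has in mind.

```python
def rle_encode(row):
    """Run-length encode one row of grayscale pixel values, exploiting
    the spatial redundancy between adjacent pixels."""
    runs = []
    for value in row:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1         # extend the current run
        else:
            runs.append([value, 1])  # start a new run
    return runs

def rle_decode(runs):
    """Expand the runs back into the original pixel row."""
    return [value for value, count in runs for _ in range(count)]

row = [255] * 10 + [0] * 3 + [255] * 7  # a row with long uniform runs
encoded = rle_encode(row)               # [[255, 10], [0, 3], [255, 7]]
assert rle_decode(encoded) == row       # lossless round trip
assert len(encoded) < len(row)          # 3 runs instead of 20 pixels
```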
It should be noted that after the first and second compression operations are completed, storing the text compressed data and the image compressed data completes the process of storing the image containing text data. Compared with treating the text data directly as part of the image and storing it in the form of pixel points, this reduces the image-quality degradation caused by the loss of image information during storage, greatly saves storage space, and reduces storage power consumption.
And step 140, synthesizing and outputting the first image according to the image compressed data and the text compressed data when the first image is output.
When the first image needs to be output, the image compressed data and the text compressed data corresponding to the first image are retrieved from storage, the corresponding decompression operations are performed on them to obtain the text attribute information and a decompressed image, and the text data is added at the relevant position of the decompressed image according to the text attribute information to obtain the first image for output.
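The synthesis step can be sketched end to end on a toy "image" of character rows: the text compressed data is decompressed back into attribute information, and the text is drawn at its recorded position in the decompressed image. The zlib/JSON encoding and the character-grid image are assumptions for illustration.

```python
import json
import zlib

def synthesize(image_rows, text_compressed_data):
    """Decompress the text attribute stream and draw the text back into
    the decompressed image at its recorded position (toy character grid)."""
    attrs = json.loads(zlib.decompress(text_compressed_data))
    x, y = attrs["position"]
    row = list(image_rows[y])
    row[x:x + len(attrs["content"])] = attrs["content"]
    image_rows[y] = "".join(row)
    return image_rows

# A 3-row "decompressed image" whose text region was left blank.
decompressed_image = ["..........", "..........", ".........."]
text_compressed_data = zlib.compress(
    json.dumps({"content": "hello", "position": [2, 1]}).encode("utf-8"))

first_image = synthesize(decompressed_image, text_compressed_data)
assert first_image[1] == "..hello..."
```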
In the text data processing method provided in the embodiment of the application, text attribute information describing the text data in a first image is first obtained; a first compression operation is performed on the text attribute information to obtain text compressed data; a second compression operation is performed on the first image to obtain image compressed data; and when the first image is output, it is synthesized from the image compressed data and the text compressed data. In the storage stage, different compression operations are applied to the text data and the image data, so that one image is stored as text compressed data and image compressed data; in the output stage, the stored text compressed data and image compressed data are decompressed and then merged for output. This reduces the storage space and thus the storage power consumption of the text data.
Fig. 2 is a schematic flow chart of another text data processing method provided in an embodiment of the present application, which is used to further explain the foregoing embodiment, and includes:
step 210, obtaining text attribute information of the text data in the first image.
And step 220, performing a first compression operation on the text attribute information to obtain text compressed data.
Step 230, removing the image area corresponding to the text data from the first image to obtain a second image.
The text data portion of the first image is already represented and stored as text compressed data. Performing the second compression operation on the text data in the first image as well would not only increase the power consumption of the second compression operation but also affect the accuracy of the merged result at output. For example, if the image decompressed from the image compressed data still contains the text data, adding the text data again according to the text compressed data would produce overlapping text in the output first image, blurring the text region. Therefore, the image area corresponding to the text data may be removed from the first image to obtain a second image, and the second compression operation may be performed on the second image.
Optionally, when removing the image area corresponding to the text data from the first image, the image area may be determined from the text position information in the text attribute information obtained in step 210, and the area may be removed from the first image by cropping to obtain the second image.
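Removing the text region before the second compression can be sketched by masking the region to a fill value (cropping is an alternative). The (x, y, width, height) box format and the fill value are assumptions for illustration.

```python
def remove_text_region(pixels, region, fill=255):
    """Blank out the image area corresponding to the text data so the
    second compression operation does not re-encode it. `region` is an
    assumed (x, y, width, height) box taken from the text position
    information in the text attribute information."""
    x, y, w, h = region
    masked = [row[:] for row in pixels]  # copy; keep the first image intact
    for r in range(y, y + h):
        for c in range(x, x + w):
            masked[r][c] = fill
    return masked

first_image = [[0] * 6 for _ in range(4)]  # toy 4x6 grayscale image
second_image = remove_text_region(first_image, region=(1, 1, 3, 2))
assert second_image[1][1] == 255           # text area blanked
assert first_image[1][1] == 0              # original untouched
```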
Step 240, performing a second compression operation on the second image.
A second compression operation is performed on the second image, from which the text area has been removed, to obtain the image compressed data.
Optionally, the text compressed data obtained by compressing in step 220 and the image compressed data obtained by compressing in step 240 are stored, that is, the process of storing the image with the text data can be completed.
And step 250, when the first image is output, synthesizing and outputting the first image according to the image compressed data and the text compressed data.
According to the text data processing method provided by the embodiment of the application, when the first image is compressed, the area corresponding to the text data is removed from the first image to obtain the second image, the second compression operation is performed on the second image to obtain the image compressed data, and the first image is stored and output through the image compressed data and the text compressed data. This ensures the quality of the output image, reduces the storage space, and reduces the storage power consumption of the text data.
Fig. 3 is a schematic flow chart of another text data processing method provided in an embodiment of the present application. As a further description of the foregoing embodiments, the method is applied to the case where a user adds text data to an image and the image is processed after the text is added. The method specifically includes:
step 310, begin.
Step 320, judging whether the user triggers the text adding operation.
The text adding operation is an operation of adding text data to the original image. For example, after photographing food with a mobile phone, the user selects the picture editing function to add the words "rich lunch" to the picture.
Optionally, there are many ways for the user to trigger the text adding operation, which the present application does not limit. For example, the operation may be triggered by tapping a relevant key on the image display interface (e.g., an edit key or a decoration key); it may also be triggered by voice or gesture, for example, the user says "add text" or enters a preset text-adding gesture on the display interface; or the operation may be triggered when the user is detected adding text to the image.
Optionally, the condition for determining whether the user triggers the text adding operation is not limited; the determination may start after the user finishes capturing an image, or after the user launches image-processing software (such as a photo-beautification app). The determination may be performed in real time or once per preset time interval (e.g., every 5 seconds).
Step 330, if the user triggers the text adding operation in the original image, acquiring the text data input by the user and the setting information of the text data.
The text data input by the user refers to text content information input by the user through a text adding operation. The setting information of the text data refers to information such as a text data addition position, a rendering form (e.g., font, color, thickness, etc.), and the like, which is set when the user adds the text content.
When it is detected that the user triggers a text adding operation in the original image, the text data and its setting information input through the operation are acquired. The text data and the setting information may be acquired in real time while the user inputs them, or after the user's input is detected to be complete.
And step 340, generating text attribute information according to the text data and the setting information.
The manner of generating the text attribute information from the text data and the setting information is not limited. The text attribute information may be generated by directly combining the text data and its setting information; alternatively, after a correspondence between the text data and the setting information is established, the text attribute information may be generated from the text data, the setting information, and the correspondence between them. For example, if the text data includes three sentences and there are two kinds of setting information, with the first and third sentences corresponding to the first setting information and the second sentence corresponding to the second, the generated text attribute information includes not only the text data and the setting information but also the correspondence between them.
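The sentence-to-setting correspondence in the example above can be recorded as an index list alongside the text data and the setting information. The dictionary keys below are illustrative assumptions:

```python
# Three sentences, two kinds of setting information (keys are assumptions).
sentences = ["first sentence", "second sentence", "third sentence"]
settings = [
    {"font": "serif", "color": "black"},  # first setting information
    {"font": "sans", "color": "red"},     # second setting information
]
# Sentences 1 and 3 use the first setting; sentence 2 uses the second.
correspondence = [0, 1, 0]

text_attribute_information = {
    "sentences": sentences,
    "settings": settings,
    "correspondence": correspondence,
}

# Resolving the setting for the third sentence follows the mapping.
third_setting = settings[correspondence[2]]
assert third_setting["font"] == "serif"
```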
And 350, performing a first compression operation on the text attribute information to obtain text compressed data.
And step 360, performing a second compression operation on the first image to obtain image compressed data.
When the first image is output, the first image is synthesized and output according to the image compressed data and the text compressed data, step 370.
According to the text data processing method provided by the embodiment of the application, when a user triggers a text adding operation in an original image, the text data input by the user and its setting information are obtained, text attribute information is generated, a first compression operation is performed to obtain text compressed data, a second compression operation is performed on the first image to obtain image compressed data, and when the image is output, the text compressed data and the image compressed data are merged for output. This reduces the storage space and thus the storage power consumption of the text data.
Fig. 4 is a schematic flowchart of another text data processing method provided in an embodiment of the present application, and as a further description of the foregoing embodiment, the method is applied to a case of processing an image that already contains text data, and includes:
step 410, begin.
Step 420, analyzing the first image to determine whether a text region exists.
The first image is analyzed to determine whether a text region exists. If so, step 430 is executed to perform character recognition on the text region; if not, the flow returns to step 410 to analyze the next image, and the current first image is processed by an existing image data processing method. For example, the second compression operation may be performed on the first image and the resulting image compressed data stored; when the image is output, it is restored from the image compression result and output.
Optionally, when the first image is analyzed, the text region may be determined according to attributes specific to text regions. Specifically, based on the relatively significant edge features of text regions and the morphological features unique to text, the text region in the first image may be located through image processing algorithms (such as edge detection, binarization, morphological processing, and denoising) to determine whether a text region exists in the first image.
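A drastically simplified version of this locating step: binarize a grayscale image against a threshold and take the bounding box of the dark (text-like) pixels. Real implementations would add the edge detection, morphology, and denoising mentioned above; the threshold value here is an assumption.

```python
def locate_text_region(gray, threshold=128):
    """Toy text-location step: binarize the grayscale image and return
    the bounding box (top, left, bottom, right) of dark pixels, or None
    when no text region exists."""
    coords = [(r, c) for r, row in enumerate(gray)
              for c, v in enumerate(row) if v < threshold]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))

image = [[255, 255, 255, 255],
         [255,  10,  20, 255],
         [255, 255, 255, 255]]
assert locate_text_region(image) == (1, 1, 1, 2)  # dark pixels only in row 1
assert locate_text_region([[255] * 4]) is None    # no text region found
```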
And 430, if the text area exists, performing character recognition on the text area.
If a text region exists in the first image, character recognition is performed in the text region. Many specific recognition methods are possible, and the present application does not limit them; for example, Optical Character Recognition (OCR) may be used, or a character recognition result may be output automatically for the text region through a pre-established neural-network-based character recognition model.
And step 440, determining text attribute information of the text data according to the character recognition result.
Wherein the character recognition result includes: the content of the text data, the position of the text data, the definition (clarity) of the text data, the semantics of the text data, the rendering information of the text data, and the like.
Optionally, the character recognition result may contain much information, but not all of it is used to generate the text attribute information; the text content, position, and rendering information in the recognition result may be combined to generate the attribute information of the text data.
Optionally, erroneous results may also exist in the character recognition result. For example, if a user photographs a bank card and the text data to be recognized is the card number, the recognition result may also include text such as the bank's name and logo; in this case, all results other than the card number are erroneous. For this case, determining the text attribute information of the text data according to the character recognition result may further include:
judging whether text attribute information corresponding to the character recognition result meets preset text main body attribute information or not; and if the text attribute information meets the preset text main body attribute information, determining the text attribute information of the text data according to the character recognition result.
The preset text main body attribute information may be set for a scene in advance according to at least one of the attribute information specific to the text main body in the actual text data processing scene, the semantic features of the text, and the degree of association between text items. For example, on a bank card image, the card number region is metallic: its color is either the same as the background color or, where the background coating on the digits is worn, silver metal. For a bank card image, the preset text main body information may therefore be "same as background color" or "silver metal", so that other character recognition results on the card can be accurately filtered out.
The text attribute features corresponding to each character recognition result are compared with the preset text main body attribute information to determine whether the result is the target recognition result for the scene. If it is, the text attribute information is determined from that recognition result; if not, the next recognition result is judged.
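The comparison against preset text main body attribute information amounts to a filter over recognition results. The subset-match rule and the attribute keys below are assumptions, since the real matching criteria are scene-specific.

```python
def filter_recognition_results(results, preset_body_attributes):
    """Keep only recognition results whose attributes match the preset
    text main body attribute information (a simple subset match; the
    actual matching criteria are scenario-specific and assumed here)."""
    kept = []
    for result in results:
        attrs = result["attributes"]
        if all(attrs.get(k) == v for k, v in preset_body_attributes.items()):
            kept.append(result)
    return kept

# Bank-card example from the text: only silver metallic text is the card number.
results = [
    {"text": "6222 0212 3456 7890", "attributes": {"material": "silver-metal"}},
    {"text": "Some Bank",           "attributes": {"material": "printed-ink"}},
]
card_number = filter_recognition_results(results, {"material": "silver-metal"})
assert [r["text"] for r in card_number] == ["6222 0212 3456 7890"]
```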
And step 450, performing a first compression operation on the text attribute information to obtain text compressed data.
And step 460, performing a second compression operation on the first image to obtain image compressed data.
And 470, when the first image is output, synthesizing and outputting the first image according to the image compressed data and the text compressed data.
According to the text data processing method provided by the embodiment of the application, when a text region exists in the first image, character recognition is performed on the text region; when the attribute information corresponding to a recognition result meets the preset text main body attribute information, the text attribute information of the text data is determined from the character recognition result; a first compression operation is performed to obtain text compressed data, and a second compression operation is performed on the first image to obtain image compressed data; when the image is output, the text compressed data and the image compressed data are merged for output. The text attribute information of the text data can thus be determined accurately for the actual scene, improving the accuracy of storing text data in the image, reducing the storage space, and reducing the storage power consumption of the text data.
Fig. 5 is a schematic flow chart of another text data processing method provided in an embodiment of the present application, which is used to further explain the foregoing embodiment, and includes:
step 510, obtaining text attribute information of the text data in the first image.
And 520, performing a first compression operation on the text attribute information to obtain text compressed data.
Step 530, performing a second compression operation on the first image to obtain image compressed data.
And 540, when the first image is output, decompressing the text compressed data to obtain text data.
When the stored first image is to be output, the stored data of the first image, namely the text compressed data and the image compressed data, is retrieved. A decompression operation opposite to the first compression operation is performed on the text compressed data to obtain the text attribute information, from which the text data of the original image can be restored. A decompression operation opposite to the second compression operation is performed on the image compressed data to obtain a decompressed image.
Step 550, determining redundant pixel blocks according to text regions in the image obtained by decompressing the text data and the image compressed data.
A redundant pixel block is a pixel in the image whose inaccurate gray value reduces the sharpness of the image. In this embodiment of the application, pixel points representing the text data still exist in the text region of the image obtained by decompressing the image compressed data, but part of their information is lost during the compression and decompression operations, so their gray values are no longer accurate. After the decompressed text data is merged back in, the superposition can produce pixel blocks that blur the image; these are the redundant pixel blocks.
Optionally, the method for determining the redundant pixel blocks is not limited. Pixel blocks around the position of the text data in the text region may be taken as redundant pixel blocks. Alternatively, a gray threshold may be preset according to the background gray value of the text region and the gray value of the text data, and the pixel points in the background region whose gray value is greater than the gray threshold are taken as redundant pixel blocks.
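The gray-threshold variant can be sketched as follows. Representing the background region as a list of rows of gray values, and choosing the threshold as the midpoint between the background and text gray values, are illustrative assumptions:

```python
def find_redundant_pixels(background_region, background_gray, text_gray):
    """Return (x, y) coordinates of background pixels whose gray value
    exceeds a threshold derived from the background and text gray values."""
    # Hypothetical threshold choice: midpoint between the two gray values.
    threshold = (background_gray + text_gray) / 2
    redundant = []
    for y, row in enumerate(background_region):
        for x, gray in enumerate(row):
            if gray > threshold:
                # This background pixel drifted toward the text gray value
                # during compression/decompression; treat it as redundant.
                redundant.append((x, y))
    return redundant
```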
And step 560, repairing the redundant pixel block.
The redundant pixel blocks are repaired so that the combined first image output is sharper and closer to the original first image. The specific repair process is not limited: the gray value of a redundant pixel block may simply be set to the background gray value, or the gray value corresponding to each redundant pixel block may be computed from the gray value of the text data and the gray value of the background according to a repair algorithm, thereby completing the repair of the redundant pixel blocks.
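The simplest repair strategy mentioned above, resetting each redundant pixel to the background gray value, can be sketched as follows (the grid-of-rows image representation is an assumption for illustration):

```python
def repair_redundant_pixels(region, redundant_pixels, background_gray):
    """Overwrite each redundant pixel with the background gray value;
    the simplest of the repair strategies described above."""
    for x, y in redundant_pixels:
        region[y][x] = background_gray
    return region
```

A more elaborate repair algorithm could instead blend the text and background gray values per pixel, as the text also mentions.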
According to the text data processing method provided by the embodiment of the application, when the stored image is output according to the stored text compressed data and the stored image compressed data, the redundant pixel block is determined according to the text area corresponding to the text data and the image compressed data, and the repaired first image is output after the redundant pixel block is repaired, so that the quality of the output image is ensured, and the output power consumption of the text data can be reduced.
Fig. 6 is a schematic flowchart of a text data processing method provided in an embodiment of the present application, and as a further description of the foregoing embodiment, the method includes:
step 610, obtaining text attribute information of the text data in the first image.
Step 620, judging whether the target text attribute information identical to the text attribute information is stored.
The target text attribute information may be text attribute information shared by a plurality of preset images. For example, if the user adds the common text information "West Lake Impression" to all photos taken at West Lake, the text attribute information corresponding to the text information "West Lake Impression" is the target text attribute information. It may also be text attribute information stored for an image preceding the current image. For example, if the text information "West Lake Impression" was added to an earlier first image, the text attribute information corresponding to that text information is stored as the target text attribute information. Optionally, the target text attribute information may include information having at least one of the same text content, the same text position, and the same rendering information.
It is determined whether the text attribute information acquired in step 610 is the same as stored target text attribute information. If so, step 630 is executed to establish an association relationship between the target text attribute information and the first image. Otherwise, step 660 is executed: the first compression operation is performed on the text attribute information to obtain text compressed data, and the text attribute information is also added to the target text attribute information.
Step 630, if the target text attribute information identical to the text attribute information is stored, establishing an association relationship between the target text attribute information and the first image.
If target text attribute information identical to the text attribute information is stored, an association relationship between the target text attribute information and the first image is established and stored, and the text attribute information of the first image is not stored again. This avoids repeated storage of the same information and greatly saves storage space.
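The association scheme can be sketched as follows. The key built from text content, position, and rendering information mirrors the match criteria mentioned above; the class and method names are illustrative assumptions:

```python
class TextAttributeStore:
    """Store each distinct text attribute record once; later images
    keep only an association (a key) to the shared record."""

    def __init__(self):
        self._records = {}       # key -> text attribute record
        self._associations = {}  # image id -> key

    def add(self, image_id, attrs):
        key = (attrs["content"], attrs["position"], attrs["rendering"])
        if key not in self._records:
            # First occurrence: store (and in practice compress) the record.
            self._records[key] = attrs
        # Every image stores only the lightweight association relationship.
        self._associations[image_id] = key

    def get(self, image_id):
        # At output time, follow the association back to the shared record.
        return self._records[self._associations[image_id]]

    def record_count(self):
        return len(self._records)
```

Adding a hundred photos that share one watermark would store the watermark's attribute record once plus a hundred small associations, rather than a hundred copies.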
And step 640, performing a second compression operation on the first image to obtain image compressed data.
And 650, synthesizing and outputting the first image according to the image compressed data and the target text attribute information when outputting the first image.
Since what is stored for the first image is the compressed first image data together with the association relationship between the target text attribute information and the first image, when the first image is output, the corresponding target text attribute information is found according to that association relationship, and the first image is synthesized from the target text attribute information and the compressed data of the first image and then output.
And 660, performing a first compression operation on the text attribute information to obtain text compressed data.
And step 670, performing a second compression operation on the first image to obtain image compressed data.
And step 680, synthesizing and outputting the first image according to the image compressed data and the text compressed data when the first image is output.
It should be noted that steps 630 to 650 describe the method for storing and outputting the first image when the acquired text attribute information of the text data in the first image is the same as stored target text attribute information, while steps 660 to 680 describe the method for storing and outputting the first image when it is not.
According to the text data processing method provided by this embodiment of the application, if the text attribute information of the first image is the same as stored target text attribute information, an association relationship between the target text attribute information and the first image is directly established, and the association relationship and the image compressed data are stored. When the first image is output, the target text attribute information is found according to the association relationship and combined with the image compressed data to synthesize and output the first image, which reduces the storage space and the storage power consumption of the text data.
Fig. 7 is a schematic structural diagram of a text data processing apparatus according to an embodiment of the present application. As shown in fig. 7, the apparatus includes: an obtaining module 710, a first compressing module 720, a second compressing module 730, and an outputting module 740.
An obtaining module 710, configured to obtain text attribute information of text data in a first image, where the text attribute information is used to describe the text data;
a first compressing module 720, configured to perform a first compressing operation on the text attribute information acquired by the acquiring module 710 to obtain text compressed data;
a second compression module 730, configured to perform a second compression operation on the first image to obtain image compressed data, where a compression ratio of the first compression operation is greater than a compression ratio of the second compression operation;
an output module 740, configured to, when outputting a first image, synthesize and output the first image according to the image compressed data obtained by the second compression module 730 and the text compressed data obtained by the first compression module 720.
Further, the second compressing module 730 is configured to:
removing an image area corresponding to the text data from the first image to obtain a second image;
and performing the second compression operation on the second image.
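The text-region removal performed before the second compression can be sketched as follows. Representing the image as a grid of gray values and filling the removed rectangle with a background gray value are assumptions for illustration:

```python
import copy

def remove_text_region(image, text_region, fill_gray):
    """Blank out the rectangle covered by the text data so the second
    compression operation does not encode the overlaid text; the text
    is later restored from the text attribute information."""
    x0, y0, x1, y1 = text_region
    second_image = copy.deepcopy(image)  # keep the first image intact
    for y in range(y0, y1):
        for x in range(x0, x1):
            second_image[y][x] = fill_gray
    return second_image
```

Compressing the second image avoids spending bits on text pixels that the decompression side would have to repair anyway.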
Further, the obtaining module 710 is configured to:
judging whether a user triggers a text adding operation or not;
if a user triggers a text adding operation in an original image, acquiring text data input by the user and setting information of the text data;
and generating text attribute information according to the text data and the setting information.
Further, the obtaining module 710 is further configured to:
analyzing the first image and judging whether a text area exists or not;
if the text area exists, performing character recognition on the text area;
and determining text attribute information of the text data according to the character recognition result.
Further, the output module 740 is configured to:
when the first image is output, decompressing the text compressed data to obtain text data;
determining redundant pixel blocks according to the text data and a text region in the image obtained by decompressing the image compressed data;
and repairing the redundant pixel block.
Further, the first compressing module 720 is configured to:
judging whether target text attribute information which is the same as the text attribute information is stored or not;
if the target text attribute information which is the same as the text attribute information is stored, establishing an association relationship between the target text attribute information and the first image;
correspondingly, the output module 740 is configured to synthesize and output the first image according to the image compression data and the target text attribute information.
In the text data processing apparatus provided in this embodiment of the application, the obtaining module 710 first obtains the text attribute information used for describing the text data in the first image; the first compression module 720 then performs a first compression operation on the text attribute information to obtain text compressed data; the second compression module 730 performs a second compression operation on the first image to obtain image compressed data; and finally, when the output module 740 outputs the first image, the first image is merged and output according to the image compressed data and the text compressed data. In the image storage stage, after different compression operations are performed on the text data and the image data respectively, an image is stored as text compressed data plus image compressed data; in the image output stage, the stored text compressed data and image compressed data are decompressed and then merged for output, which reduces the storage space and thus the storage power consumption of the text data.
The device can execute the methods provided by all the embodiments of the application, and has corresponding functional modules and beneficial effects for executing the methods. For details of the technology not described in detail in this embodiment, reference may be made to the methods provided in all the foregoing embodiments of the present application.
Fig. 8 is a schematic structural diagram of another terminal device provided in an embodiment of the present application. As shown in fig. 8, the terminal may include: a housing (not shown), a memory 801, a Central Processing Unit (CPU) 802 (also called a processor, hereinafter referred to as CPU), a computer program stored in the memory 801 and operable on the processor 802, a circuit board (not shown), and a power circuit (not shown). The circuit board is arranged in a space enclosed by the shell; the CPU802 and the memory 801 are provided on the circuit board; the power supply circuit is used for supplying power to each circuit or device of the terminal; the memory 801 is used for storing executable program codes; the CPU802 executes a program corresponding to the executable program code by reading the executable program code stored in the memory 801.
The terminal further comprises: peripheral interface 803, RF (Radio Frequency) circuitry 805, audio circuitry 806, speakers 811, power management chip 808, input/output (I/O) subsystem 809, touch screen 812, other input/control devices 810, and external port 804, which communicate over one or more communication buses or signal lines 807.
It should be understood that the illustrated terminal device 800 is merely one example of a terminal, and that the terminal device 800 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The following describes in detail the terminal device provided in this embodiment, taking a smartphone as an example.
The memory 801 is accessible by the CPU 802, the peripheral interface 803, and the like. The memory 801 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
A peripheral interface 803, said peripheral interface 803 allowing input and output peripherals of the device to be connected to the CPU802 and the memory 801.
I/O subsystem 809, which I/O subsystem 809 may connect input and output peripherals on the device, such as touch screen 812 and other input/control devices 810, to peripheral interface 803. The I/O subsystem 809 may include a display controller 8091 and one or more input controllers 8092 for controlling other input/control devices 810. Where one or more input controllers 8092 receive electrical signals from or transmit electrical signals to other input/control devices 810, other input/control devices 810 may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels. It is worth noting that the input controller 8092 may be connected to any of the following: a keyboard, an infrared port, a USB interface, and a pointing device such as a mouse.
According to its operating principle and the medium used to transmit information, the touch screen 812 may be a resistive, capacitive, infrared, or surface acoustic wave type. Classified by installation method, the touch screen 812 may be external, internal, or integrated. Classified by technical principle, the touch screen 812 may be a vector pressure sensing, resistive, capacitive, infrared, or surface acoustic wave touch screen.
A touch screen 812, which touch screen 812 is an input interface and an output interface between the user terminal and the user, displays visual output to the user, which may include graphics, text, icons, video, and the like. Optionally, the touch screen 812 sends an electrical signal (e.g., an electrical signal of the touch surface) triggered by the user on the touch screen to the processor 802.
The display controller 8091 in the I/O subsystem 809 receives electrical signals from the touch screen 812 or sends electrical signals to the touch screen 812. The touch screen 812 detects a contact on the touch screen, and the display controller 8091 converts the detected contact into an interaction with a user interface object displayed on the touch screen 812, that is, implements a human-computer interaction, and the user interface object displayed on the touch screen 812 may be an icon for running a game, an icon networked to a corresponding network, or the like. It is worth mentioning that the device may also comprise a light mouse, which is a touch sensitive surface that does not show visual output, or an extension of the touch sensitive surface formed by the touch screen.
The RF circuit 805 is mainly used to establish communication between the terminal and a wireless network (i.e., the network side) and to transmit and receive data between the terminal and the wireless network, for example sending and receiving short messages, e-mails, and the like.
The audio circuit 806 is mainly used to receive audio data from the peripheral interface 803, convert the audio data into an electric signal, and transmit the electric signal to the speaker 811.
The speaker 811 is used to convert the voice signals received by the terminal from the wireless network through the RF circuit 805 into sound and play the sound to the user.
And the power management chip 808 is used for supplying power and managing power to the hardware connected with the CPU802, the I/O subsystem and the peripheral interface.
In this embodiment, the CPU 802 is configured to:
acquiring text attribute information of text data in a first image, wherein the text attribute information is used for describing the text data;
performing first compression operation on the text attribute information to obtain text compressed data;
performing a second compression operation on the first image to obtain image compression data, wherein the compression ratio of the first compression operation is greater than that of the second compression operation;
when a first image is output, the first image is synthesized and output according to the image compressed data and the text compressed data.
Further, the performing a second compression operation on the first image includes:
removing an image area corresponding to the text data from the first image to obtain a second image;
a second compression operation is performed on the second image.
Further, the acquiring text attribute information of the text data in the first image includes:
judging whether a user triggers a text adding operation or not;
if a user triggers a text adding operation in an original image, acquiring text data input by the user and setting information of the text data;
and generating text attribute information according to the text data and the setting information.
Further, the acquiring text attribute information of the text data in the first image includes:
analyzing the first image and judging whether a text area exists or not;
if the text area exists, performing character recognition on the text area;
and determining text attribute information of the text data according to the character recognition result.
Further, the determining text attribute information of the text data according to the character recognition result includes:
judging whether text attribute information corresponding to the character recognition result meets preset text main body attribute information or not;
and if the text attribute information meets the preset text main body attribute information, determining the text attribute information of the text data according to the character recognition result.
Further, synthesizing and outputting the first image according to the image compressed data and the text compressed data includes:
decompressing the text compressed data to obtain text data;
determining redundant pixel blocks according to the text data and a text region in the image obtained by decompressing the image compressed data;
and repairing the redundant pixel block.
Further, performing a first compression operation on the text attribute information to obtain text compressed data, including:
judging whether target text attribute information which is the same as the text attribute information is stored or not;
if the target text attribute information which is the same as the text attribute information is stored, establishing an association relationship between the target text attribute information and the first image;
correspondingly, the synthesizing and outputting the first image according to the image compressed data and the text compressed data comprises:
and synthesizing and outputting the first image according to the image compressed data and the target text attribute information.
An embodiment of the present application further provides a storage medium containing terminal device executable instructions, where the terminal device executable instructions are executed by a terminal device processor to perform a text data processing method, and the method includes:
acquiring text attribute information of text data in a first image, wherein the text attribute information is used for describing the text data;
performing first compression operation on the text attribute information to obtain text compressed data;
performing a second compression operation on the first image to obtain image compression data, wherein the compression ratio of the first compression operation is greater than that of the second compression operation;
when a first image is output, the first image is synthesized and output according to the image compressed data and the text compressed data.
Further, the performing a second compression operation on the first image includes:
removing an image area corresponding to the text data from the first image to obtain a second image;
a second compression operation is performed on the second image.
Further, the acquiring text attribute information of the text data in the first image includes:
judging whether a user triggers a text adding operation or not;
if a user triggers a text adding operation in an original image, acquiring text data input by the user and setting information of the text data;
and generating text attribute information according to the text data and the setting information.
Further, the acquiring text attribute information of the text data in the first image includes:
analyzing the first image and judging whether a text area exists or not;
if the text area exists, performing character recognition on the text area;
and determining text attribute information of the text data according to the character recognition result.
Further, the determining text attribute information of the text data according to the character recognition result includes:
judging whether text attribute information corresponding to the character recognition result meets preset text main body attribute information or not;
and if the text attribute information meets the preset text main body attribute information, determining the text attribute information of the text data according to the character recognition result.
Further, synthesizing and outputting the first image according to the image compressed data and the text compressed data includes:
decompressing the text compressed data to obtain text data;
determining redundant pixel blocks according to the text data and a text region in the image obtained by decompressing the image compressed data;
and repairing the redundant pixel block.
Further, performing a first compression operation on the text attribute information to obtain text compressed data, including:
judging whether target text attribute information which is the same as the text attribute information is stored or not;
if the target text attribute information which is the same as the text attribute information is stored, establishing an association relationship between the target text attribute information and the first image;
correspondingly, the synthesizing and outputting the first image according to the image compressed data and the text compressed data comprises:
and synthesizing and outputting the first image according to the image compressed data and the target text attribute information.
The computer storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Of course, the storage medium provided in the embodiments of the present application contains computer-executable instructions, and the computer-executable instructions are not limited to the text data processing operations described above, and may also perform related operations in the text data processing method provided in any embodiment of the present application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (8)

1. A text data processing method, comprising:
acquiring text attribute information of text data in a first image, wherein the text attribute information is used for describing the text data;
performing first compression operation on the text attribute information to obtain text compressed data;
performing a second compression operation on the first image to obtain image compression data, wherein the compression ratio of the first compression operation is greater than that of the second compression operation;
when outputting a first image, synthesizing and outputting the first image according to the image compressed data and the text compressed data;
the synthesizing and outputting the first image according to the image compressed data and the text compressed data includes:
decompressing the text compressed data to obtain text data;
determining redundant pixel blocks according to the text data and a text region in the image obtained by decompressing the image compressed data;
and repairing the redundant pixel block.
2. The method according to claim 1, wherein the acquiring text attribute information of the text data in the first image comprises:
judging whether a user triggers a text adding operation or not;
if a user triggers a text adding operation in an original image, acquiring text data input by the user and setting information of the text data;
and generating text attribute information according to the text data and the setting information.
3. The method according to claim 1, wherein the acquiring text attribute information of the text data in the first image comprises:
analyzing the first image and judging whether a text area exists or not;
if the text area exists, performing character recognition on the text area;
and determining text attribute information of the text data according to the character recognition result.
4. The method of claim 3, wherein the determining text attribute information of the text data according to the character recognition result comprises:
judging whether text attribute information corresponding to the character recognition result meets preset text main body attribute information or not;
and if the text attribute information meets the preset text main body attribute information, determining the text attribute information of the text data according to the character recognition result.
5. The method of claim 1, wherein performing a first compression operation on the text attribute information to obtain compressed text data comprises:
judging whether target text attribute information which is the same as the text attribute information is stored or not;
if the target text attribute information which is the same as the text attribute information is stored, establishing an association relationship between the target text attribute information and the first image;
correspondingly, the synthesizing and outputting the first image according to the image compressed data and the text compressed data comprises:
and synthesizing and outputting the first image according to the image compressed data and the target text attribute information.
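Claim 5's deduplication can be sketched as a lookup table keyed on the attribute information, with the association relation recorded per image; all names are illustrative, and the claim does not prescribe a keying scheme:

```python
# Hypothetical sketch of claim 5: before compressing text attribute
# information, check whether identical target attribute information is
# already stored and, if so, merely associate it with the new image.
stored_attributes = {}   # attribute key -> attribute information
associations = {}        # image id -> attribute key (association relation)

def store_or_associate(image_id, attr_info):
    """Store attr_info once; later identical records only gain an
    association relation pointing at the stored target copy."""
    key = repr(sorted(attr_info.items()))  # stable identity for a flat dict
    if key not in stored_attributes:
        stored_attributes[key] = attr_info
    associations[image_id] = key
    return stored_attributes[key]
```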
6. A text data processing apparatus, characterized by comprising:
an acquisition module, configured to acquire text attribute information of text data in a first image, wherein the text attribute information is used for describing the text data;
a first compression module, configured to perform a first compression operation on the text attribute information acquired by the acquisition module to obtain text compressed data;
a second compression module, configured to perform a second compression operation on the first image to obtain image compressed data, wherein a compression ratio of the first compression operation is greater than that of the second compression operation;
and an output module, configured to synthesize and output the first image according to the image compressed data obtained by the second compression module and the text compressed data obtained by the first compression module when the first image is output; the output module is specifically configured to: when the first image is output, decompress the text compressed data to obtain the text data; determine redundant pixel blocks according to the text data and a text region in an image obtained by decompressing the image compressed data; and repair the redundant pixel blocks.
7. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a text data processing method according to any one of claims 1 to 5.
8. A terminal comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the text data processing method according to any one of claims 1 to 5 when executing the computer program.
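Taken together, the claimed first and second compression operations might be sketched as below, using lossless zlib for the text attribute information and a stand-in byte-level codec for the pixel data. This is an assumption for illustration only: the patent does not name concrete codecs, and real image data would use a dedicated image codec.

```python
# Hypothetical end-to-end sketch: text attribute information gets a
# lossless first compression operation (high ratio on repetitive text),
# while pixel data gets a separate, lighter second operation.
import json
import zlib

def compress_text_attributes(attr_info):
    """First compression operation (lossless, maximum zlib level)."""
    return zlib.compress(json.dumps(attr_info).encode("utf-8"), 9)

def decompress_text_attributes(blob):
    """Inverse of the first compression operation."""
    return json.loads(zlib.decompress(blob).decode("utf-8"))

def compress_image(pixels):
    """Stand-in second compression operation over raw pixel bytes;
    a real terminal would use an image codec such as JPEG here."""
    return zlib.compress(bytes(pixels), 1)
```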
CN201810461484.8A 2018-05-15 2018-05-15 Text data processing method and device, storage medium and terminal Active CN108763350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810461484.8A CN108763350B (en) 2018-05-15 2018-05-15 Text data processing method and device, storage medium and terminal


Publications (2)

Publication Number Publication Date
CN108763350A CN108763350A (en) 2018-11-06
CN108763350B true CN108763350B (en) 2021-02-02

Family

ID=64006966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810461484.8A Active CN108763350B (en) 2018-05-15 2018-05-15 Text data processing method and device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN108763350B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476853B (en) * 2020-03-17 2024-05-24 西安万像电子科技有限公司 Method, equipment and system for encoding and decoding text image
CN111882491A (en) * 2020-06-17 2020-11-03 西安万像电子科技有限公司 Character image coding and decoding method, equipment and system
CN112488964B (en) * 2020-12-18 2024-04-16 深圳市镜玩科技有限公司 Image processing method, related device, equipment and medium for sliding list
CN116132431B (en) * 2023-04-19 2023-06-30 泰诺尔(北京)科技有限公司 Data transmission method and system
CN117177079B (en) * 2023-11-02 2024-03-01 珠海鸿芯科技有限公司 Image synthesizing method, computer device and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999021121A3 (en) * 1997-10-21 1999-07-01 Kurzweil Educational Systems I Compression/decompression algorithm for image documents having text, graphical and color content
CN101996227A (en) * 2009-08-13 2011-03-30 鸿富锦精密工业(深圳)有限公司 Document compression system and method
CN102289468A (en) * 2011-07-22 2011-12-21 北京航空航天大学 Method for acquiring and recording photo information in camera
CN105808782A (en) * 2016-03-31 2016-07-27 广东小天才科技有限公司 Picture label adding method and device
CN106791827A * 2016-12-13 2017-05-31 北京黎阳之光科技有限公司 A novel image compression coding device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999021121A3 (en) * 1997-10-21 1999-07-01 Kurzweil Educational Systems I Compression/decompression algorithm for image documents having text, graphical and color content
US6014464A (en) * 1997-10-21 2000-01-11 Kurzweil Educational Systems, Inc. Compression/ decompression algorithm for image documents having text graphical and color content
CN101996227A (en) * 2009-08-13 2011-03-30 鸿富锦精密工业(深圳)有限公司 Document compression system and method
CN102289468A (en) * 2011-07-22 2011-12-21 北京航空航天大学 Method for acquiring and recording photo information in camera
CN105808782A (en) * 2016-03-31 2016-07-27 广东小天才科技有限公司 Picture label adding method and device
CN106791827A * 2016-12-13 2017-05-31 北京黎阳之光科技有限公司 A novel image compression coding device

Also Published As

Publication number Publication date
CN108763350A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
CN108763350B (en) Text data processing method and device, storage medium and terminal
US20150149925A1 (en) Emoticon generation using user images and gestures
JP7181375B2 (en) Target object motion recognition method, device and electronic device
US8965051B2 (en) Method and apparatus for providing hand detection
CN112200187A (en) Target detection method, device, machine readable medium and equipment
CN110008997B (en) Image texture similarity recognition method, device and computer readable storage medium
CN111209377B (en) Text processing method, device, equipment and medium based on deep learning
CN109934142B (en) Method and apparatus for generating feature vectors of video
CN111832449A (en) Engineering drawing display method and related device
CN111290684B (en) Image display method, image display device and terminal equipment
CN111325220B (en) Image generation method, device, equipment and storage medium
CN110188782B (en) Image similarity determining method and device, electronic equipment and readable storage medium
CN111127469A (en) Thumbnail display method, device, storage medium and terminal
WO2022237116A1 (en) Image processing method and apparatus
CN113486738A (en) Fingerprint identification method and device, electronic equipment and readable storage medium
CN113313066A (en) Image recognition method, image recognition device, storage medium and terminal
CN111626035A (en) Layout analysis method and electronic equipment
CN116342940A (en) Image approval method, device, medium and equipment
CN106650727B (en) Information display method and AR equipment
CN110619597A (en) Semitransparent watermark removing method and device, electronic equipment and storage medium
CN108027962B (en) Image processing method, electronic device and storage medium
CN108647097B (en) Text image processing method and device, storage medium and terminal
CN109492451B (en) Coded image identification method and mobile terminal
CN108540726B (en) Method and device for processing continuous shooting image, storage medium and terminal
CN113010128A (en) Multi-screen interaction method and system based on BIM (building information modeling)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant