CN114943202A - Information processing method, information processing apparatus, and electronic device - Google Patents

Information processing method, information processing apparatus, and electronic device Download PDF

Info

Publication number
CN114943202A
CN114943202A (application CN202210555521.8A)
Authority
CN
China
Prior art keywords
image
scanned
screen
text
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210555521.8A
Other languages
Chinese (zh)
Inventor
侯宇涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210555521.8A priority Critical patent/CN114943202A/en
Publication of CN114943202A publication Critical patent/CN114943202A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06F - ELECTRIC DIGITAL DATA PROCESSING
          • G06F 40/00 - Handling natural language data
            • G06F 40/10 - Text processing
              • G06F 40/103 - Formatting, i.e. changing of presentation of documents
                • G06F 40/106 - Display of layout of documents; Previewing
                • G06F 40/117 - Tagging; Marking up; Designating a block; Setting of attributes
              • G06F 40/12 - Use of codes for handling textual entities
                • G06F 40/151 - Transformation
              • G06F 40/166 - Editing, e.g. inserting or deleting
    • H - ELECTRICITY
      • H04 - ELECTRIC COMMUNICATION TECHNIQUE
        • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
            • H04N 1/04 - Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an information processing method, an information processing apparatus, and an electronic device, and belongs to the field of electronic technologies. The method includes: displaying scanned content corresponding to a target document in a first screen, the scanned content including at least one of a scanned text and a scanned image; and in response to a first input by a user on first information in the scanned content, displaying the first information in a second screen.

Description

Information processing method, information processing apparatus, and electronic device
Technical Field
The present application belongs to the field of electronic technologies, and in particular, relates to an information processing method, an information processing apparatus, and an electronic device.
Background
With the popularization of intelligent terminals and the rapid development of the internet, more and more users read on intelligent terminals. Paper books can be converted into an electronic file format through technologies such as photographing and scanning, and then displayed on the terminal.
When a paper book is converted into an electronic file format, it is usually scanned into images, which are then stored for the user to browse. Taking notes while reading is very important for understanding and memorizing a book. However, when a user views the images corresponding to these paper books, the characters and pictures in the images cannot be edited, which makes note-taking inconvenient.
Disclosure of Invention
An object of the embodiments of the present application is to provide an information processing method, an information processing apparatus, and an electronic device, which can solve the problem that it is not convenient to take notes when browsing images of documents such as books.
In a first aspect, an embodiment of the present application provides an information processing method, where the method includes:
displaying scanned content corresponding to a target document in the first screen, the scanned content including at least one of a scanned text and a scanned image; and
in response to a first input by a user on first information in the scanned content, displaying the first information in the second screen.
In a second aspect, an embodiment of the present application provides an information processing apparatus, including:
a first display module, configured to display scanned content corresponding to a target document in the first screen, the scanned content including at least one of a scanned text and a scanned image; and
a second display module, configured to display, in response to a first input by a user on first information in the scanned content, the first information in the second screen.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the information processing method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, and when the program or instructions are executed by a processor, the program or instructions implement the information processing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the information processing method according to the first aspect.
In a sixth aspect, the present application provides a computer program product, which is stored in a storage medium and executed by at least one processor to implement the information processing method according to the first aspect.
In the embodiments of the application, on one hand, the target document is converted into an electronic format through scanning, so that the corresponding scanned content is displayed on a screen and the user can edit it, which makes it quick and convenient to form notes. On the other hand, by taking advantage of the dual screens of the electronic device, the scanned content of the document is displayed on one screen while the information the user operates on is displayed on the other screen. Notes and the scanned content of the document are thus displayed independently, the scanned content is not occluded, the display effect of the document is improved, and the notes are easy to work with.
Drawings
FIG. 1 is a flowchart of an information processing method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of an effect of a display interface in an information processing method according to an embodiment of the present application;
fig. 3 is a second schematic view illustrating a display interface effect of the information processing method according to the embodiment of the present application;
fig. 4 is a third schematic view illustrating a display interface effect of the information processing method according to the embodiment of the present application;
fig. 5 is a fourth schematic view illustrating a display interface effect of the information processing method according to the embodiment of the present application;
fig. 6 is a fifth schematic view illustrating a display interface effect of the information processing method according to the embodiment of the present application;
fig. 7 is a sixth schematic view illustrating a display interface effect of the information processing method according to the embodiment of the present application;
fig. 8 is a schematic diagram of scanned content in an information processing method provided in an embodiment of the present application;
fig. 9 is a second flowchart of an information processing method according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an information processing apparatus provided in an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 12 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments that can be derived by a person of ordinary skill in the art from the embodiments given herein fall within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that terms used in this way are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. In addition, "first", "second", and the like are generally used in a generic sense and do not limit the number of objects; for example, a first object may be one object or more than one object. Furthermore, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The information processing method, the information processing apparatus, and the electronic device provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The embodiment of the application firstly provides an information processing method. For example, the information processing method may be applied to electronic devices such as a mobile phone, a tablet computer, a Personal Computer (PC), a wearable electronic device (e.g., a smart watch), an Augmented Reality (AR)/Virtual Reality (VR) device, and a vehicle-mounted device. The electronic device has multiple display screens, such as an electronic device with a foldable screen, an electronic device with a scroll screen, and the like, which is not limited in any way by the embodiments of the present application.
Traditional paper documents are inconvenient to carry and to annotate. Once converted into an electronic file format, they can be recorded and viewed on a mobile terminal, and notes, remarks, or edits can be added, which greatly facilitates users. When a user wants to view a paper document on an electronic device, the information processing method provided by the embodiments of the application can be used to convert the paper document into an electronic-format file and to display the images in the paper document and the characters in those images separately, making the document easy to operate on. For example, the images and the characters in the images can be edited separately to generate new characters and images, thereby re-editing the paper document.
Fig. 1 shows a flowchart of an information processing method provided in an embodiment of the present application. As shown in fig. 1, the information processing method includes the steps of:
step 100: displaying scanned contents corresponding to a target document in a first screen; the scanning content includes at least one of: scanning text, scanning images.
The target document may include a paper document, and may also include an electronic document, such as an image or a text page. The target document can be scanned by a scanner to obtain the scanned content corresponding to the target document. For example, the scanner may scan the target document into a Portable Document Format (PDF) file, and the PDF file obtained by the scanner may be used as the scanned content. Alternatively, the target document may be photographed by a camera, and the image captured by the camera is acquired as the scanned content corresponding to the target document.
The target document may include characters and images, and the scanned content may correspondingly include a scanned text corresponding to the characters in the target document and a scanned image corresponding to the images. The scanned content corresponding to the target document may be displayed on a display screen of the electronic device. The electronic device has two display screens; for example, a foldable screen acts as one display screen when folded and as two display screens, namely a first screen and a second screen, when unfolded. The scanned content may be displayed on the first screen.
Step 200: the first information is displayed in the second screen in response to a first input by the user to the first information in the scanned content.
The first input refers to an operation of the scanned content by the user, and the operation is used for determining first information in the scanned content. The user may perform an operation on the displayed scanned contents, such as scanned text, scanned image, for example, select a character in the scanned text, select one of the scanned images, and so on. When the electronic device receives a first input of a user, first information corresponding to the first input can be acquired, and in response to the first input, the electronic device can display the first information in a second screen. Taking the electronic device as an example of a mobile phone, as shown in fig. 2, the mobile phone may include a display screen 201 (i.e., a first screen) and a display screen 202 (i.e., a second screen). In which the scanned contents of the target document can be displayed in the display screen 201. The scanned content of the target document may include scanned text 203 and scanned image 204. When the electronic device receives a first input in the display screen 201, the first information 205 that the user wants to edit can be determined according to the first input, and the first information 205 is displayed in the display screen 202.
For example, the first input of the user may include a click operation, a slide operation, a drag operation, and the like, which is not particularly limited in this embodiment. Taking the first input as a sliding operation as an example, the user may select information in the scanned content through the sliding operation. The electronic device may acquire the coordinates of the positions touched by the user's finger through the events corresponding to the sliding operation, such as MotionEvent events. Specifically, the electronic device may obtain the position (x1, y1) pressed by the user's finger in the first input and determine whether (x1, y1) overlaps the display area of the scanned content. If there is no overlap, the operation is determined to be a false touch and is not responded to. If there is overlap, the electronic device may obtain the position (x2, y2) where the user's finger is lifted, and determine the rectangle spanned by the finger's movement from (x1, y1) to (x2, y2). The information within that rectangle in the first screen may be taken as the first information, as in the sketch below.
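For illustration only, the following Kotlin sketch shows one way such a selection rectangle could be derived from Android MotionEvent coordinates. It is not the patent's implementation; the SelectionTracker class, its field names, and the use of the view's visible bounds for the overlap check are assumptions made for the example.

```kotlin
import android.graphics.Rect
import android.graphics.RectF
import android.view.MotionEvent
import android.view.View

class SelectionTracker(private val scannedContentView: View) {

    private var downX = 0f
    private var downY = 0f
    private var tracking = false

    /** Returns the selection rectangle when the finger lifts, or null for a false touch. */
    fun onTouchEvent(event: MotionEvent): RectF? {
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN -> {
                // Ignore the gesture if the press point (x1, y1) does not overlap
                // the display area of the scanned content.
                val bounds = Rect()
                scannedContentView.getGlobalVisibleRect(bounds)
                tracking = bounds.contains(event.rawX.toInt(), event.rawY.toInt())
                if (tracking) {
                    downX = event.rawX
                    downY = event.rawY
                }
            }
            MotionEvent.ACTION_UP -> {
                if (tracking) {
                    tracking = false
                    // The rectangle spanned by the press point (x1, y1) and the
                    // lift point (x2, y2) delimits the first information.
                    return RectF(
                        minOf(downX, event.rawX), minOf(downY, event.rawY),
                        maxOf(downX, event.rawX), maxOf(downY, event.rawY)
                    )
                }
            }
        }
        return null
    }
}
```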
The electronic device may also display the first information distinctively to indicate the selected content to the user. For example, the electronic device can underline or highlight the selected first information. Continuing with fig. 2 as an example, when the mobile phone receives the sliding operation, it may underline the first information corresponding to the sliding operation, and when the sliding operation ends, it may automatically display the corresponding first information in the display screen 202 in response to the sliding operation.
Further, when the mobile phone receives an operation of the user dragging the first information into the display screen 202, it may display the first information accordingly. Specifically, based on the position where the finger lifts in the drag operation, when that position overlaps the display screen 202, the first information is displayed at that position in the display screen 202. For example, the lift position of the user's finger in the drag operation is acquired from the MotionEvent. When the position is (x3, y3) and overlaps the display screen 202, the mobile phone can display the first information at the (x3, y3) position of the display screen 202. On the display screen 202, the position of the first information can also be moved further by drag operations, for example from (x3, y3) to (x4, y4), to meet the user's editing needs.
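A minimal sketch of the drop handling described above, assuming the second screen is represented by a View and the selected first information by a draggable child view of that screen; all names here are illustrative.

```kotlin
import android.graphics.Rect
import android.view.MotionEvent
import android.view.View

// Place the dragged first-information view at the lift position if that
// position overlaps the second screen; otherwise do nothing.
fun onDragRelease(event: MotionEvent, secondScreen: View, infoView: View) {
    if (event.actionMasked != MotionEvent.ACTION_UP) return
    val bounds = Rect()
    secondScreen.getGlobalVisibleRect(bounds)
    val x = event.rawX.toInt()
    val y = event.rawY.toInt()
    if (bounds.contains(x, y)) {
        // Convert the global lift position (x3, y3) into coordinates local to
        // the second screen and move the view there.
        infoView.x = (x - bounds.left).toFloat()
        infoView.y = (y - bounds.top).toFloat()
    }
}
```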
On the second screen, the first information may be edited by moving a cursor, for example, deleting a certain character in the first information, adding a character to the first information, and the like. After editing is complete, the information in the second screen may be stored for repeated viewing. In this embodiment, when browsing the scanned content corresponding to the target document, the user can add a mark, a remark, and the like without changing the original content, thereby generating new information, facilitating the user to quickly understand the content of the document, and improving the reading effect. In addition, the first information displayed in the second screen can be stored, and when the user needs to check the content edited by the user later, the user can also check the content edited by the user, so that a reading note can be formed, and the requirements of the user are met.
With the continuous development of electronic devices, more and more foldable-screen phones, dual-screen phones, and the like are available. Such devices have two or more screens, and the original content of the document and the edited information can each be displayed on its own screen. The screens are thus fully utilized, the original scanned content is protected from damage or tampering while the first information is edited, other information is protected from accidental operation, and the operation is more efficient.
If the target document includes both text and an image, the corresponding scanned content includes both a scanned text and a scanned image. In this case, the electronic device may divide the display area of the first screen into an area for displaying text and an area for displaying images, so that the scanned text and the scanned image are displayed separately; this makes it easier for the user to operate on the text and the image independently and improves the accuracy of the operation. Illustratively, when the scanned content includes both a scanned text and a scanned image, the scanned text is displayed in a first screen area of the first screen and the scanned image is displayed in a second screen area of the first screen, as sketched below.
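For illustration, the area split could look like the following sketch; the equal top/bottom split and the helper name are assumptions, since the description only requires that the two areas be separate.

```kotlin
import android.graphics.Rect

// Split the first screen's display area into a first screen area for the
// scanned text and a second screen area for the scanned image.
fun splitFirstScreen(firstScreen: Rect): Pair<Rect, Rect> {
    val midY = firstScreen.top + firstScreen.height() / 2
    val textArea = Rect(firstScreen.left, firstScreen.top, firstScreen.right, midY)
    val imageArea = Rect(firstScreen.left, midY, firstScreen.right, firstScreen.bottom)
    return textArea to imageArea
}
```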
As shown in fig. 3, the first screen 201 may include a screen area 301 (i.e., a first screen area) and a screen area 302 (i.e., a second screen area). The electronic device may display the scanned text in the scanned content in screen area 301 and the scanned image in the scanned content in screen area 302.
For example, when the target document includes a document image and the document image includes image text, a scanned image corresponding to the document image may be displayed in a third screen area of the first screen, and a scanned text corresponding to the image text may be displayed in a fourth screen area of the first screen. The image text is the part of the document image where text appears, and the corresponding scanned text is the editable text recognized from that part. The scanned image corresponding to the document image is the image restored after the image-text part has been removed, and therefore does not include the image text. If the target document includes a document image, then when the electronic device displays the scanned content corresponding to the target document in the first screen, the scanned text and the scanned image corresponding to the document image can be displayed separately in different areas. The characters in the document image thus become editable scanned text, the user can edit the text and the image in the document image independently, and the usability of the information in the target document is improved.
As shown in fig. 4, when a document image is included in the target document and image text is included in the document image, the display screen 201 (first screen) may be divided into a first screen region 401, a second screen region 402, a third screen region 403, and a fourth screen region 404. The electronic device can recognize the scanned text corresponding to the image text from the document image of the target document, display it in the screen area 404, and then process the document image into a scanned image that does not include the image text, and display it in the screen area 403.
For example, the first screen may be divided by other methods to obtain the first screen area, the second screen area, the third screen area and the fourth screen area, for example, the third screen area and the fourth screen area may be arranged left and right, and the present embodiment is not limited to this.
The following describes how the corresponding scanned text and scanned image are obtained from a document image. First, the document image is scanned to obtain a corresponding initial scanned image; for example, a paper document image is scanned to obtain an initial scanned image in electronic format. The initial scanned image includes the image text. Then the image area where the image text is located in the initial scanned image is determined, the pixel values of the pixels in that image area are adjusted according to the pixel values of the pixels in areas adjacent to the image area in the initial scanned image, and the adjusted image is taken as the scanned image.
Specifically, Optical Character Recognition (OCR) can be used to identify whether the document image includes image text. If it does, the scanned text corresponding to the image text can be recognized, so that the image text is converted into editable scanned text that is convenient for the user to use. Meanwhile, the image areas where the image text is located in the initial scanned image are identified; there may be several such image areas. For each image area, the neighboring areas adjacent to it can be determined. The original pixel values of the pixels in the image area are then replaced with the pixel values of the pixels in the adjacent areas, covering the original pixel values of the image area, thereby removing the original image text and obtaining a scanned image without the image text, as in the sketch below.
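A minimal sketch of the pixel-replacement step, assuming the initial scanned image is an Android Bitmap and the OCR step has already produced the bounding rectangle of each image-text area. Copying the row of pixels just above the area is a crude stand-in for the neighbouring-area fill described here; a production system would use a more careful fill or inpainting.

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect

fun removeTextRegion(initialScan: Bitmap, textRegion: Rect): Bitmap {
    val result = initialScan.copy(Bitmap.Config.ARGB_8888, /* isMutable = */ true)
    // Sample the row of pixels just above the text region (an adjacent area).
    val sampleY = (textRegion.top - 1).coerceAtLeast(0)
    for (x in textRegion.left.coerceAtLeast(0) until textRegion.right.coerceAtMost(result.width)) {
        val fill = result.getPixel(x, sampleY)
        for (y in textRegion.top.coerceAtLeast(0) until textRegion.bottom.coerceAtMost(result.height)) {
            // Cover the original pixel value of the image area with the
            // neighbouring pixel value, removing the original image text.
            result.setPixel(x, y, fill)
        }
    }
    return result
}
```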
In an exemplary embodiment, when the target document includes a document image and the document image includes image text, the scanned text corresponding to the document image may alternatively not be displayed separately from the scanned image. That is, a scanned image corresponding to the document image, containing the scanned text corresponding to the image text, is displayed in the first screen: the scanned text is shown in its original image area and is in an editable state, so that editing operations such as copying and modification can be performed on it.
When the electronic device receives a first input from the user on the scanned text in the scanned image corresponding to the document image, it may display that scanned text in the second screen in response to the first input. The first input may include a click operation by the user. When the user clicks the scanned image corresponding to the document image, the scanned text in the scanned image can be subjected to editing operations such as copying and modification, and the edited text can be displayed in the second screen. As shown in fig. 5, the electronic device may display a scanned image 501 containing scanned text in the first screen 201. The scanned image 501 includes scanned text 502 in an editable state. When the electronic device receives a click operation on the scanned image, the scanned text 502 in the scanned image 501 may be displayed in the second screen 202.
In an exemplary embodiment, when the first information is displayed on the second screen, it may be displayed at a target node position in the second screen. The target node position is the display position of the target node in a mind map. A mind map may consist of a root node and a plurality of child nodes, with the root node connected to each of its child nodes. A mind map is a way of organizing information graphically and is commonly used for taking notes: it connects all elements related to a subject in a radial layout, gathering related information together. Mind maps help distill knowledge and support the user's thinking.
The scanned text and the scanned image in the scanned content can be selected as first information, and the first information can be displayed, as a node of the mind map, at the target node position corresponding to that node. When the first information is determined, if no root node exists in the mind map yet, the first information becomes the root node and is displayed at the target node position corresponding to the root node. If a root node already exists, the first information becomes a child node of the root node and is displayed at the target node position corresponding to that child node. The target node positions corresponding to the root node and the child nodes in the mind map can be preset; for example, the position of the root node and the positions of its child nodes may be arranged side by side. One root node may have a plurality of child nodes, as in the sketch below.
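A minimal sketch of this root/child bookkeeping; the MindMapNode type, the layout constants, and the add() entry point are illustrative assumptions rather than the patent's implementation.

```kotlin
data class MindMapNode(
    val content: String,
    val children: MutableList<MindMapNode> = mutableListOf()
)

class MindMap {
    var root: MindMapNode? = null
        private set

    /** The first selected information becomes the root node; every later
     *  selection is added as a child node connected to the root. */
    fun add(info: String): MindMapNode {
        val current = root
        return if (current == null) {
            MindMapNode(info).also { root = it }
        } else {
            MindMapNode(info).also { current.children.add(it) }
        }
    }

    /** Target node position: root on the left at (0, 0), child nodes in a
     *  column to its right, ordered top to bottom as in fig. 6. */
    fun targetNodePosition(
        node: MindMapNode,
        columnWidth: Int = 300,
        rowHeight: Int = 120
    ): Pair<Int, Int> {
        val current = root ?: return 0 to 0
        if (node === current) return 0 to 0
        val index = current.children.indexOf(node).coerceAtLeast(0)
        return columnWidth to index * rowHeight
    }
}
```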
For example, the user may select several pieces of information in the scanned content in the first screen, such as first information, second information, and third information, through several operations. As shown in fig. 6, when the electronic device receives the first input on a line of text in the scanned content in the first screen (in the example, a line of classical poetry), the corresponding information, i.e., the first information, may be displayed in the second screen at the target node position 601 corresponding to the root node of the mind map. If a second input on second information in the first screen 201, such as the image 602, is then received, the electronic device may display the second information at the target node position 603 corresponding to a child node of the root node in the second screen 202. The root node is connected to its child nodes by connecting lines. Similarly, when a third input on third information in the scanned content, such as the text "poetry appreciation", is received, the third information may be displayed at the corresponding target node position 604 as another child node of the root node. As can be seen, the root node and its child nodes may be arranged from left to right, with the child nodes stacked from top to bottom. Alternatively, the root node and the child nodes may be arranged from top to bottom, with the root node above and the child nodes laid out from left to right, and so on.
For example, when the user closes the second screen, the mind map on the second screen may be saved; for instance, the content displayed in the second screen 202 may be captured as a screenshot, so that the note edited by the user on the second screen is saved as a picture. The screenshots can also be shared with other users, improving information sharing. In this embodiment, when the user needs to edit information, the edited information can be automatically assembled into a mind map, so that the user takes notes more efficiently. Using a mind map as the note makes the material easier to understand and the document faster to read.
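A minimal sketch of saving the second screen's content as a picture by drawing its view into a bitmap; the file handling and naming are illustrative assumptions.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.view.View
import java.io.File
import java.io.FileOutputStream

// Capture the second screen's view (the mind map) and save it as a PNG note.
fun saveSecondScreenAsPicture(secondScreenView: View, outFile: File) {
    val bitmap = Bitmap.createBitmap(
        secondScreenView.width, secondScreenView.height, Bitmap.Config.ARGB_8888
    )
    secondScreenView.draw(Canvas(bitmap))
    FileOutputStream(outFile).use { stream ->
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream)
    }
}
```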
In the case where there are a plurality of scanned contents in the target document, a corresponding page code may be generated for each scanned content, and the scanned content may be identified by the page code. When one of the scanned contents of the target document needs to be displayed in the first screen, the user may input a desired page code in the first screen, and the operation of inputting the page code may be recorded as a second input. When the electronic device receives a second input, the page code input by the user can be determined, and in response to the second input, the scanned content corresponding to the page code can be displayed in the first screen. When an input for first information in the scanned content is received, the first information may be displayed in the second screen.
Illustratively, the scanned content and the first information may also be present in different display windows in the same display screen. When the first screen and the second screen of the electronic device are combined to form one display screen, the electronic device may display two windows in the display screen in a split-screen display manner, where the two windows are the first window and the second window, respectively, and then may display the scanned content in the first window, and the first information may be displayed in the second window. For example, as shown in fig. 7, the electronic device may display a first window 701 and a second window 702. Scanned contents corresponding to the page code may be displayed in the first window 701. The electronic device may display the image 703 in the second window 702 when the electronic device receives a first input for the image 703 by a user in the first window 701. In this case, the image 703 may be displayed in the second window 702 as a root node, and if the user needs to edit other information, the information to be edited next by the user may be displayed at a corresponding node position as a child node of the image 703.
Illustratively, the target document may include one or more pages, and information about the target document may be identified from the scanned content of one of the pages, so that the target document can be retrieved to obtain the scanned content of all its pages. Paper books, among the most common paper documents, have mostly already been scanned into images for storage. For example, when the target document is a paper book, the name of the book may be identified from the scanned content of one of its pages, and the scanned content of each page of the book is then retrieved based on that name.
Specifically, after the scanned content of one page is obtained, whether the corresponding target document is a paper book or not can be identified based on the scanned content of the page. Illustratively, according to the page rule of the book, whether the scanned content is a page in the book can be identified. The pages of the book comprise a book name, a page number and a text. There may be no text in some pages of the book, so if it is recognized that the scanned content includes the title and the page number, it may be determined that the document corresponding to the scanned content is a book.
Fig. 8 shows a schematic view of a page of a book. As shown in fig. 8, page 800 is one of the pages of the book. Page 800 has the book title at position 801, the page number at position 802, and the body text at position 803. Based on this rule, after the scanned content corresponding to page 800 is acquired, the content at positions 801, 802, and 803 of the scanned content can be identified. If position 801 has content, position 802 has content, and the content at position 802 conforms to the format of a page number, the target document corresponding to the scanned content is a book. Since a page number is typically composed of one or more digits, a regular expression can be used to determine whether the content at position 802 is a page number. For example, the content at position 802 is matched against the regular expression "/^(0|[1-9][0-9]*)$/"; if it matches, the content is a page number, and if not, it is not. This regular expression matches either 0 or a number that does not start with 0.
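A minimal sketch of the page-number and book-page checks, using the regular expression reconstructed above; the helper names and the trimming are illustrative.

```kotlin
// Regex.matches() requires the whole string to match, so explicit anchors are
// not needed here. The exact pattern is an assumption reconstructed from the
// description ("0, or a number that does not start with 0").
private val PAGE_NUMBER = Regex("0|[1-9][0-9]*")

fun isPageNumber(content: String): Boolean = PAGE_NUMBER.matches(content.trim())

/** A page is treated as a book page when the title position (801) is
 *  non-empty and the content at the page-number position (802) is a number. */
fun isBookPage(titleContent: String?, pageNumberContent: String?): Boolean =
    !titleContent.isNullOrBlank() &&
        pageNumberContent != null &&
        isPageNumber(pageNumberContent)

// Usage: isPageNumber("273") == true, isPageNumber("07") == false,
// isPageNumber("Chapter 3") == false.
```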
If the target document corresponding to the scanned content is identified as a book, the content at position 801 of page 800 can be taken as the book name, and the database is searched for the image pages corresponding to that book name, so as to obtain the scanned content of all pages of the book. If no image pages corresponding to the book name are found, or the found image pages do not belong to the book, each paper page of the paper document can be scanned to obtain the scanned content. The page number in the scanned content of the book can be used as the page code and stored in association with the scanned content, so that when the scanned content needs to be displayed again later, the corresponding page code can be looked up directly without rescanning, avoiding the waste of resources caused by scanning multiple times, as in the sketch below.
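A minimal sketch of keeping each page code associated with its scanned content so that a page can be redisplayed without rescanning; the ScannedPage type and the in-memory map are assumptions, since the description leaves the storage mechanism open.

```kotlin
data class ScannedPage(
    val pageCode: String,
    val scannedText: String,
    val scannedImagePath: String?   // location of the scanned image, if any
)

class ScannedContentStore {
    private val pages = mutableMapOf<String, ScannedPage>()

    /** Store the scanned content in association with its page code. */
    fun save(page: ScannedPage) {
        pages[page.pageCode] = page
    }

    /** Second input: the user types a page code; look it up instead of rescanning. */
    fun findByPageCode(pageCode: String): ScannedPage? = pages[pageCode]
}
```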
Alternatively, a file list may be generated from the stored scan content. Illustratively, the page code of each scanned content is arranged to generate a file list as a directory of the target document, so as to facilitate quick viewing of the stored scanned content.
The technical scheme of this embodiment can be added to the electronic device as a new function. For example, the function may be called "note addition", "book recognition", or the like; the name is not particularly limited in this embodiment. A trigger button for the function may be added to the notification bar, and its icon can be shown by pulling down the notification bar. When the trigger button is clicked, the electronic device is triggered to perform the method described above.
Continuing to take the electronic device as a mobile phone as an example, as shown in fig. 9, the information processing method of this embodiment may include the following steps:
step 901: and confirming to open the folding screen when the operation of clicking the trigger button is received. Clicking the trigger button can trigger the folding screen mobile phone to start executing the steps in the above embodiments. It is first determined whether the folding screen is open. And acquiring the state of the folding screen, and if the folding screen is not in an opening state, displaying prompt information to prompt a user to open the folding screen. Step 902: and opening the camera. A target document, such as a book, is scanned by a camera. Step 903: and identifying the picture scanned by the camera, and obtaining a scanned text and a scanned image contained in the picture. Step 904: and identifying the scanned text contained in the document image, and acquiring the scanned image without the scanned text. Step 905: the scanned text and the scanned image are displayed on a first screen. And, corresponding page codes are generated for the scanned text and the scanned image. Step 906: an operation (first input) of a user selecting information in the first screen is received. The folding screen mobile phone can determine the operation of the user through gesture recognition. Step 907: and responding to the operation, acquiring information selected by the user and displaying the information on the second screen. And, the selected information may be displayed in a mind map manner. Step 908: the information displayed on the second screen is saved. And when the mobile phone receives the operation of closing the second screen by the user, the information displayed on the second screen is saved.
According to this embodiment, when the foldable-screen phone has no stored information for the target document, the scanned content of the target document can be acquired in real time, and the scanned content is displayed separately from the information the user wants to edit. The displayed scanned content can thus be edited independently, achieving the effect of taking notes on the document and solving the problem that taking notes while reading paper documents is inconvenient.
When the information of some documents is already stored in the phone, after performing step 901 the phone skips steps 902 to 904 and instead determines the page code: it receives the user's operation of specifying a page code (the second input) and obtains the page code entered by the user. In response to that operation, it then performs step 905 and displays the scanned text and the scanned image corresponding to the page code in the first screen.
In an exemplary embodiment, in step 903, based on the scanned picture, it may be identified whether the document corresponding to the picture is a book. When the document corresponding to the picture is identified as a book, all pictures corresponding to the book name can be searched from the cloud server according to the identified book name. Therefore, the operation of scanning each page in the book is omitted, the searched pictures are directly identified, the scanned contents are obtained, and the display speed can be improved. Illustratively, if it is recognized that the document corresponding to the picture is not a book, the scanned contents identified in the picture, that is, the scanned text, the scanned image, and the like, may also be stored in the cloud server, and when the user needs to read again, the user may directly input the corresponding page code, and search for the corresponding scanned contents from the cloud server for display, thereby increasing the display speed.
It should be understood that, in the foregoing embodiment, the information processing method is executed in a mobile phone as an example, but the information processing method provided in this embodiment may also be applied to other electronic devices such as a dual-screen mobile phone, a scroll-screen mobile phone, a tablet computer, a personal computer, and the like, which is not limited in this application.
Further, in the information processing method provided by the embodiments of the present application, the execution subject may be an information processing apparatus. The information processing apparatus provided in an embodiment of the present application is described below, taking as an example the case where the information processing apparatus executes the information processing method.
As shown in fig. 10, the information processing apparatus 1000 according to an embodiment of the present application may include a first display module 1010 and a second display module 1020. Specifically, the first display module 1010 may be configured to display scanned content corresponding to the target document in the first screen, where the scanned content includes at least one of a scanned text and a scanned image. The second display module 1020 may be configured to display the first information in the second screen in response to a first input by the user on the first information in the scanned content.
The information processing device provided by the embodiment can display the scanned content in the target document, and a user can edit the scanned content, so that the usability of the information in the document can be improved. In addition, the information to be edited by the user and the scanned content can be displayed on different screens independently, the scanned content can be prevented from being influenced when the user marks the scanned content, the user operation is easier, and the user experience can be improved.
In an exemplary embodiment, the first display module 1010 may be specifically configured to: in the case where the scanned content includes both the scanned text and the scanned image, the scanned text is displayed in a first screen area of the first screen, and the scanned image is displayed in a second screen area of the first screen.
In an exemplary embodiment, the first display module 1010 is specifically configured to, when the target document includes a document image and the document image includes image text, display a scanned image corresponding to the document image in a third screen area of the first screen and display a scanned text corresponding to the image text in a fourth screen area of the first screen, where the scanned image corresponding to the document image does not include the scanned text corresponding to the image text.
In an exemplary embodiment, the apparatus may further include: the first scanning module is used for scanning the document image to obtain an initial scanning image corresponding to the document image; the initial scanning image comprises a scanning text corresponding to the image text; the first determining module is used for determining an image area where a scanning text corresponding to the image text is located in the initial scanning image; the first adjusting module is used for adjusting the pixel values of the pixel points in the image areas according to the pixel values of the pixel points in the adjacent areas of the image areas for each image area; and the first generation module is used for generating a scanning image corresponding to the document image according to the adjusted pixel values of the pixel points in the image area.
In an exemplary embodiment, the first display module 1010 may be specifically configured to: display a scanned image corresponding to the document image in the first screen in a case where the target document includes a document image and the document image includes image text, where the scanned image corresponding to the document image includes the scanned text corresponding to the image text, and that scanned text is in an editable state.
In an exemplary embodiment, the second display module 1020 may be specifically configured to: in response to a first input of a scanned text in a scanned image corresponding to the document image by a user, the scanned text in the scanned image corresponding to the document image is displayed in the second screen.
In an exemplary embodiment, the second display module 1020 may be specifically configured to: in response to a first input of the first information by the user, displaying the first information at a target node position in the second screen; under the condition that the root node is not included in the second screen, the target node corresponding to the position of the target node is the root node; and under the condition that the second screen comprises the root node, the target node corresponding to the position of the target node is a child node connected with the root node.
The information processing apparatus in the embodiment of the present application may be an electronic device, and may also be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic Device may be, for example, a Mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic Device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) Device, a robot, a wearable Device, an ultra-Mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and may also be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The information processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The information processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments in fig. 1 to fig. 9, and is not described here again to avoid repetition.
Optionally, as shown in fig. 11, an embodiment of the present application further provides an electronic device 1100, which includes a processor 1101 and a memory 1102. The memory 1102 stores a program or an instruction that can be executed on the processor 1101, and when the program or the instruction is executed by the processor 1101, the steps of the information processing method embodiment described above are implemented, and the same technical effect can be achieved, and in order to avoid repetition, details are not described here again.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 12 is a schematic hardware structure diagram of an electronic device implementing the embodiment of the present application.
The electronic device 1200 includes, but is not limited to: a radio frequency unit 1201, a network module 1202, an audio output unit 1203, an input unit 1204, a sensor 1205, a display unit 1206, a user input unit 1207, an interface unit 1208, a memory 1209, and a processor 1210, among other components.
Those skilled in the art will appreciate that the electronic device 1200 may further include a power supply (e.g., a battery) for supplying power to the various components, and the power supply may be logically connected to the processor 1210 via a power management system, so that functions such as managing charging, discharging, and power consumption are implemented via the power management system. The structure shown in fig. 12 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine some components, or use a different arrangement of components, which is not described in detail here.
The processor 1210 is configured to: obtain an image page obtained by scanning a paper page; identify a first image in the image page, and, in a case where the first image contains target characters, acquire the target characters and a second image that does not contain the target characters; and display the second image and the target characters.
It should be understood that, in the embodiments of the present application, the input unit 1204 may include a Graphics Processing Unit (GPU) 12041 and a microphone 12042, where the graphics processing unit 12041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 1206 may include a display panel 12061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1207 includes at least one of a touch panel 12071 and other input devices 12072. The touch panel 12071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 12072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 1209 may be used to store software programs as well as various data. The memory 1209 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function or an image playing function). Further, the memory 1209 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 1209 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1210 may include one or more processing units; optionally, processor 1210 integrates an application processor, which primarily handles operations related to the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It is to be appreciated that the modem processor described above may not be integrated into processor 1210.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned information processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the information processing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the above-mentioned embodiments of the information processing method, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An information processing method applied to an electronic device including a first screen and a second screen, the method comprising:
displaying scanned content corresponding to a target document in the first screen, wherein the scanned content includes at least one of: a scanned text and a scanned image; and
in response to a first input by a user on first information in the scanned content, displaying the first information in the second screen.
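For illustration, the following is a minimal Python sketch of the dual-screen flow recited in claim 1. The class and method names (DualScreenDevice, show_scanned_content, on_first_input) are assumptions made for this sketch and do not appear in the application.

```python
from dataclasses import dataclass, field
from typing import List, Union


@dataclass
class ScannedContent:
    """Scanned content of the target document: text blocks and/or image data."""
    texts: List[str] = field(default_factory=list)
    images: List[bytes] = field(default_factory=list)


class DualScreenDevice:
    def __init__(self) -> None:
        self.first_screen: List[Union[str, bytes]] = []   # shows the scanned content
        self.second_screen: List[Union[str, bytes]] = []  # shows user-selected information

    def show_scanned_content(self, content: ScannedContent) -> None:
        # Display the scanned text and/or scanned images of the target document
        # in the first screen.
        self.first_screen = list(content.texts) + list(content.images)

    def on_first_input(self, first_information: Union[str, bytes]) -> None:
        # In response to the user's first input on a piece of the scanned content,
        # display that piece (the "first information") in the second screen.
        if first_information in self.first_screen:
            self.second_screen.append(first_information)
```

A call such as device.on_first_input(selected_paragraph) would then mirror the selected piece of scanned content onto the second screen.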
2. The information processing method according to claim 1, wherein the displaying of the scanned content corresponding to the target document in the first screen includes:
in a case where the scanned content includes both a scanned text and a scanned image, displaying the scanned text in a first screen area of the first screen, and displaying the scanned image in a second screen area of the first screen.
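A minimal sketch of this layout rule, assuming the two screen areas can be modelled as entries of a dictionary; the area names are illustrative only.

```python
from typing import Dict, Optional, Union


def layout_first_screen(scanned_text: Optional[str],
                        scanned_image: Optional[bytes]) -> Dict[str, Union[str, bytes]]:
    """Assign the scanned content to areas of the first screen as in claim 2."""
    areas: Dict[str, Union[str, bytes]] = {}
    if scanned_text is not None and scanned_image is not None:
        # Both a scanned text and a scanned image are present:
        # the text goes to the first screen area, the image to the second screen area.
        areas["first_screen_area"] = scanned_text
        areas["second_screen_area"] = scanned_image
    elif scanned_text is not None:
        areas["first_screen_area"] = scanned_text
    elif scanned_image is not None:
        areas["first_screen_area"] = scanned_image
    return areas
```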
3. The information processing method according to claim 1, wherein the displaying of the scanned content corresponding to the target document in the first screen includes:
when the target document comprises a document image and the document image comprises an image text, displaying a scanned image corresponding to the document image in a third screen area of the first screen, and displaying a scanned text corresponding to the image text in a fourth screen area of the first screen;
wherein the scanned image corresponding to the document image does not include the scanned text corresponding to the image text.
4. The information processing method according to claim 3, characterized by further comprising:
scanning the document image to obtain an initial scanned image corresponding to the document image, wherein the initial scanned image includes the scanned text corresponding to the image text;
determining, in the initial scanned image, an image area where the scanned text corresponding to the image text is located;
for each image area, adjusting the pixel values of the pixel points in the image area according to the pixel values of the pixel points in an area adjacent to the image area; and
generating the scanned image corresponding to the document image according to the adjusted pixel values of the pixel points in the image area.
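A minimal sketch of the adjustment step of claim 4, assuming the text areas have already been detected as axis-aligned bounding boxes and using a simple average over a surrounding border of pixels as the adjustment rule; the box format, the margin parameter, and the averaging choice are assumptions for illustration, not details from the application.

```python
from typing import List, Tuple

import numpy as np


def erase_text_regions(initial_scan: np.ndarray,
                       text_boxes: List[Tuple[int, int, int, int]],
                       margin: int = 5) -> np.ndarray:
    """Return a scanned image in which each detected text area is filled from its surroundings.

    text_boxes: (top, left, bottom, right) pixel coordinates of each image area
    containing scanned text.
    """
    result = initial_scan.copy()
    height, width = result.shape[:2]
    for top, left, bottom, right in text_boxes:
        # Take a border of `margin` pixels around the box as the adjacent area.
        t, l = max(top - margin, 0), max(left - margin, 0)
        b, r = min(bottom + margin, height), min(right + margin, width)
        surrounding = result[t:b, l:r]
        # Mask out the text area itself so only the adjacent pixels contribute.
        mask = np.ones(surrounding.shape[:2], dtype=bool)
        mask[top - t:bottom - t, left - l:right - l] = False
        if not mask.any():
            continue  # the box covers the whole image; nothing adjacent to sample
        fill_value = surrounding[mask].mean(axis=0)
        # Adjust the pixel values inside the text area to the neighbourhood average.
        result[top:bottom, left:right] = fill_value
    return result
```

The generated image can then be shown in the third screen area of claim 3, while the corresponding scanned text is shown separately in the fourth screen area.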
5. The information processing method according to claim 1, wherein the displaying of the scanned content corresponding to the target document in the first screen includes:
in a case where the target document includes a document image and the document image includes an image text, displaying a scanned image corresponding to the document image in the first screen;
wherein the scanned image corresponding to the document image includes a scanned text corresponding to the image text, and the scanned text in the scanned image corresponding to the document image is in an editable state.
6. The information processing method according to claim 5, wherein displaying the first information in the second screen in response to a first input by a user on the first information in the scanned content comprises:
in response to a first input by a user on the scanned text in the scanned image corresponding to the document image, displaying, in the second screen, the scanned text in the scanned image corresponding to the document image.
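A minimal sketch of claims 5 and 6, assuming an off-the-shelf OCR step is used to recover the editable scanned text from the document image; pytesseract is used here only as an example and is not named in the application.

```python
from typing import Dict, List, Union

from PIL import Image
import pytesseract


def build_editable_scan(document_image_path: str) -> Dict[str, Union[Image.Image, str]]:
    """Return the scanned image together with its recognised, editable text (claim 5)."""
    scanned_image = Image.open(document_image_path)
    # Recognise the image text so it can be shown in an editable state
    # within the scanned image on the first screen.
    editable_text = pytesseract.image_to_string(scanned_image)
    return {"scanned_image": scanned_image, "editable_text": editable_text}


def on_first_input(scan: Dict[str, Union[Image.Image, str]],
                   second_screen: List[str]) -> None:
    # When the user's first input selects the scanned text inside the scanned image,
    # display that text in the second screen (claim 6).
    second_screen.append(str(scan["editable_text"]))
```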
7. The information processing method according to claim 1, wherein displaying the first information in the second screen in response to a first input by a user on the first information in the scanned content comprises:
in response to the first input by the user on the first information, displaying the first information at a target node position in the second screen;
wherein, in a case where no root node is included in the second screen, a target node corresponding to the target node position is a root node; and in a case where the second screen includes a root node, the target node corresponding to the target node position is a child node connected to the root node.
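A minimal sketch of the node-placement rule of claim 7, modelling the second screen as a simple tree; the Node and SecondScreenTree names are assumptions made for this sketch.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    text: str
    children: List["Node"] = field(default_factory=list)


class SecondScreenTree:
    def __init__(self) -> None:
        self.root: Optional[Node] = None

    def add_first_information(self, first_information: str) -> Node:
        """Place the first information at the target node position (claim 7)."""
        node = Node(first_information)
        if self.root is None:
            # The second screen does not yet include a root node:
            # the target node becomes the root node.
            self.root = node
        else:
            # A root node already exists: the target node is a child node
            # connected to the root node.
            self.root.children.append(node)
        return node
```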
8. An information processing apparatus characterized by comprising:
a first display module, configured to display scanned content corresponding to a target document in the first screen, wherein the scanned content includes at least one of: a scanned text and a scanned image; and
a second display module, configured to display, in the second screen, first information in the scanned content in response to a first input by a user on the first information.
9. An electronic device comprising a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions when executed by the processor implementing the information processing method of any one of claims 1-7.
10. A readable storage medium, characterized in that a program or instructions are stored thereon, and the program or instructions, when executed by a processor, implement the information processing method according to any one of claims 1 to 7.
CN202210555521.8A 2022-05-19 2022-05-19 Information processing method, information processing apparatus, and electronic device Pending CN114943202A (en)

Priority Applications (1)

Application Number: CN202210555521.8A
Publication: CN114943202A (en)
Priority Date: 2022-05-19
Filing Date: 2022-05-19
Title: Information processing method, information processing apparatus, and electronic device

Publications (1)

Publication Number: CN114943202A
Publication Date: 2022-08-26

Family

ID=82909725

Family Applications (1)

Application Number: CN202210555521.8A
Publication: CN114943202A (en), Pending
Priority Date: 2022-05-19
Filing Date: 2022-05-19
Title: Information processing method, information processing apparatus, and electronic device

Country Status (1)

Country Link
CN (1) CN114943202A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination