CN110908570A - Image processing method, device, terminal and storage medium

Info

Publication number: CN110908570A
Application number: CN201911186508.4A
Granted publication: CN110908570B
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 梁志杰
Applicant and current assignee: Tencent Technology (Shenzhen) Co., Ltd.
Legal status: Granted; Active
Prior art keywords: associated information, target image, operation instruction, marked, information

Classifications

    • G06F3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/04817: Interaction techniques based on GUIs, based on specific properties of the displayed interaction object or a metaphor-based environment, using icons
    • H04M1/7243: User interfaces specially adapted for cordless or mobile telephones, with means for local support of applications that increase the functionality, with interactive means for internal management of messages

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses an image processing method, an image processing apparatus, a terminal and a storage medium. The method includes the following steps: receiving an operation instruction for a target image displayed in a browsing interface; if the operation position included in the operation area corresponding to the operation instruction is located in an unmarked area, performing operation object detection on the detection area corresponding to the operation position to obtain a detection result, and acquiring the operation object corresponding to the operation instruction from the target image according to the detection result; displaying an associated information entry interface corresponding to the operation object, and detecting the associated information entered on that interface; and storing the entered associated information in association with the attribute information of the operation object, and displaying on the operation object a mark indicating that associated information has been added. The embodiment of the invention enriches the available image processing modes and makes image processing more targeted.

Description

Image processing method, device, terminal and storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to an image processing method, an image processing apparatus, a terminal, and a storage medium.
Background
With the development of science and technology, many image-browsing applications allow a user to apply operations to the images being browsed, such as marking, annotation and comment operations, so as to facilitate subsequent viewing. For example, a user browses a cartoon image through an image-browsing application: the cartoon image is composed of a plurality of cartoon frames, each of which is a sub-image of a preset shape, such as a square, rectangular or circular sub-image, that carries part of the story line of the cartoon image.
A comment triggering area can be displayed in the interface for browsing the cartoon image. When the user wants to comment on the currently browsed cartoon image, a triggering operation can be input in the comment triggering area to make the terminal display a comment interface, in which the user enters a comment on the cartoon image. In the prior art, therefore, only the image as a whole can be processed according to the user's operation; the sub-images it includes cannot be processed individually. How to effectively process an image according to a user operation has thus become a hot issue in the field of image processing.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device, a terminal and a storage medium, which can effectively process a target image according to the operation of a user.
In one aspect, an embodiment of the present invention provides an image processing method, including:
receiving an operation instruction aiming at a target image displayed in a browsing interface;
if the operation position included in the operation area corresponding to the operation instruction is located in the unmarked area, performing operation object detection on the detection area corresponding to the operation position to obtain a detection result, and acquiring the operation object corresponding to the operation instruction from the target image according to the detection result;
displaying a relevant information entry interface corresponding to the operation object, and detecting relevant information entered on the relevant information entry interface corresponding to the operation object;
and storing the input associated information and the attribute information of the operation object in a database in an associated manner, and displaying a mark added with the associated information on the operation object.
In another aspect, an embodiment of the present invention provides an image processing apparatus, including:
the receiving unit is used for receiving an operation instruction aiming at a target image displayed in the browsing interface;
the processing unit is used for detecting an operation object in a detection area corresponding to the operation position to obtain a detection result if the operation position included in the operation area corresponding to the operation instruction is located in an unmarked area;
the acquisition unit is used for acquiring an operation object corresponding to the operation instruction from the target image according to the detection result;
the display unit is used for displaying the associated information entry interface corresponding to the operation object;
the processing unit is further configured to detect associated information entered on an associated information entry interface corresponding to the operation object;
the storage unit is used for storing the input associated information and the attribute information of the operation object into a database in an associated manner;
the display unit is further configured to display a mark to which the associated information has been added on the operation object.
In another aspect, an embodiment of the present invention provides a terminal, where the terminal includes:
a processor adapted to implement one or more instructions; and
a computer storage medium storing one or more instructions adapted to be loaded by the processor to perform the steps of:
receiving an operation instruction aiming at a target image displayed in a browsing interface;
if the operation position included in the operation area corresponding to the operation instruction is located in the unmarked area, performing operation object detection on the detection area corresponding to the operation position to obtain a detection result, and acquiring the operation object corresponding to the operation instruction from the target image according to the detection result;
displaying a relevant information entry interface corresponding to the operation object, and detecting relevant information entered on the relevant information entry interface corresponding to the operation object;
and storing the input associated information and the attribute information of the operation object in a database in an associated manner, and displaying a mark added with the associated information on the operation object.
In still another aspect, an embodiment of the present invention provides a computer storage medium, wherein the computer storage medium stores computer program instructions, and the computer program instructions, when executed by a processor, are configured to perform the image processing method as described above.
In the embodiment of the invention, when an operation instruction for an unmarked area of a target image is received, the operation object corresponding to the operation instruction is determined by performing operation object detection on the detection area corresponding to the operation instruction; an associated information entry interface corresponding to the operation object can then be displayed, the associated information entered on that interface is detected, the entered associated information is stored in a database in association with the identifier of the operation object, and a mark indicating that associated information has been added is displayed on the operation object. It should be understood that the operation object is determined by detecting a detection area, which is a part of the target image, and thus the operation object is a part included in the target image. During image processing, the part of the target image corresponding to a user operation can be automatically identified and processed accordingly, which enriches the available image processing modes and makes image processing more targeted.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic structural diagram of an image processing system according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a target image with associated information added according to an embodiment of the present invention;
fig. 4a is a schematic diagram illustrating an operation object corresponding to a determined operation instruction according to an embodiment of the present invention;
fig. 4b is a schematic diagram of another operation object corresponding to the determined operation instruction according to the embodiment of the present invention;
FIG. 5a is a schematic diagram of an associated information entry interface provided by an embodiment of the present invention;
FIG. 5b is a schematic diagram of another associated information entry interface provided by embodiments of the present invention;
FIG. 5c is a schematic diagram of yet another associated information entry interface provided by an embodiment of the present invention;
FIG. 6 is a flow chart of another image processing method according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a user interface of a terminal for displaying notification information according to an embodiment of the present invention;
FIG. 8a is a schematic diagram of a line detection provided by an embodiment of the present invention;
FIG. 8b is a schematic illustration of straight line screening provided by an embodiment of the present invention;
FIG. 8c is a schematic diagram of a straight line intersection detection and an inclusion relation detection provided by an embodiment of the present invention;
FIG. 9a is a schematic diagram of adding association information to a single object according to an embodiment of the present invention;
FIG. 9b is a diagram illustrating adding association information to a combined object according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating a method for viewing associated information of a marked object according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
The embodiment of the invention provides an image processing scheme that can automatically determine, from a user's operation instruction, the operation object in a target image that the instruction refers to, and then process that operation object according to associated information entered by the user. In a specific implementation, suppose the user inputs an operation instruction in an unmarked area of the target image. The unmarked area may include the area of the target image occupied by objects to which no associated information has been added; such an object may be a sub-image included in the target image, such as a cartoon frame in a cartoon image, or an image element included in the target image, such as a particular person in an image containing multiple persons. The terminal acquires the operation position at which the user input the operation instruction and intercepts the image content within a specific range of that position as a detection area; it then determines the operation object corresponding to the user's operation instruction by performing operation object detection on the detection area; finally, it displays an associated information entry interface corresponding to the operation object, acquires the associated information the user enters on that interface, stores the associated information in association with the operation object, and displays on the operation object a mark indicating that associated information has been added. The associated information may include comment information, annotation information and other information related to the operation object. For example, if the operation object is an image of a person, the associated information may be personal information of that person, such as name, native place, location and occupation; if the operation object is a cartoon frame, the associated information may be a comment about a scene or character in the cartoon frame, such as "this scene is designed to be so funny, and this character is my favorite".
In other embodiments, the image processing scheme provided by the embodiment of the present invention may also be applied to scenarios in which associated information is added to text content according to a user's operation instruction. For example, while reading text content, the user can input an operation instruction at a certain position in the text; the scheme then performs text object detection according to the operation instruction to obtain the operation text object corresponding to the instruction, displays an associated information entry interface corresponding to that operation text object, detects the associated information entered on the interface, and finally stores the associated information in association with the identifier of the operation text object. The operation text object corresponding to the user's operation instruction may be the paragraph or sentence that contains the operation position corresponding to the instruction.
Based on the image processing scheme, an embodiment of the present invention provides an image processing system, and referring to fig. 1, a schematic structural diagram of the image processing system provided in the embodiment of the present invention is shown. The image processing system shown in fig. 1 may include a server 101 and at least one terminal 102. The server 101 stores a plurality of images that can be displayed in the terminal 102, and the terminal 102 can interact with the server 101 to acquire and display the images stored in the server 101. The terminal 102 is a terminal device used by a user, and the terminal 102 may be a mobile phone, a notebook, an intelligent wearable device, and the like.
In one embodiment, the terminal 102 may acquire and display the image from the server 101 by: the user inputs an operation of browsing a target image in a web page of the terminal 102, and the terminal 102 acquires the target image from the server 101 and displays the target image in a browsing interface of the web page, wherein the target image can be any one of images stored in the server 101. In other embodiments, an application program for browsing images may be installed in the terminal 102, the user starts the application program and inputs an operation for browsing a target image, and the terminal 102 acquires the target image from the server 101 and displays the target image in a browsing interface of the application program. For example, the terminal 102 has an animation reading application installed therein, and the terminal 102 may obtain a cartoon image selected by the user from the server and display the cartoon image in a browsing interface of the animation reading application.
During display of the target image, the terminal 102 may execute the image processing scheme to add associated information to an operation object in the target image according to the user's operation instruction. After acquiring the associated information added to the operation object, the terminal 102 may display on the operation object a mark indicating that associated information has been added. Optionally, the terminal 102 may record the operation object bearing such a mark as a marked object of the target image and the area it occupies as a marked area, and synchronously store the operation object's attribute information together with its associated information (which can also be understood as historical associated information) in the server 101 as historical mark data of the target image; the attribute information of the operation object may include its position information, size information and the like. Subsequently, whenever the terminal 102 acquires and displays the target image from the server 101, it simultaneously displays the marks of the marked objects according to the historical mark data. If the user inputs an operation instruction in a marked area, the terminal displays the associated information entry interface corresponding to that marked area, which shows the associated information of the marked object the area contains. The associated information entry interface may also receive new associated information for the marked object, and the terminal 102 may append it to the stored associated information of that marked object.
Based on the description of the image processing system, the embodiment of the invention provides a flow chart of an image processing method, as shown in fig. 2. The image processing method described in fig. 2 may be executed by a terminal, and in particular may be executed by a processor of the terminal. The image processing method shown in fig. 2 may include the steps of:
s201, receiving an operation instruction aiming at a target image displayed in a browsing interface.
The browsing interface may be a web interface in the terminal, such as a browser page, or the user interface of an image-browsing application in the terminal. The target image may be any image, such as a cartoon image composed of cartoon frames, a person image containing persons, a news image accompanying a news report, or a landscape image. The target image can include a plurality of objects, each of which can be called a sub-image of the target image: for a cartoon image, each cartoon frame is an object of the cartoon image; for a person image, each person may be referred to as an object of the person image. The operation instruction may be a first operation instruction for adding associated information to a single object, such as a click, long-press or double-click operation, where adding associated information to a single object means adding associated information to one object in the target image; or a second operation instruction, such as a frame selection operation, for adding associated information to a combined object, where adding associated information to a combined object means combining at least two objects of the target image together and adding associated information to them jointly.
Based on the foregoing, the target image may be displayed by a plurality of terminals, and the user of each terminal may process the target image, for example by adding associated information such as comments, annotations or notes to an object in it; an object to which associated information has been added is referred to as a marked object of the target image. After detecting that a user has processed the target image, each terminal may treat the resulting data as historical mark data of the target image and store it, in association with the target image, synchronously in the server. That is, the server stores the target image together with its historical mark data.
Based on this, before performing step S201, the terminal further performs: acquiring the target image and the historical mark data corresponding to the target image, where the historical mark data includes the attribute information of the marked objects in the target image and the historical associated information corresponding to each marked object; and displaying the target image in the browsing interface while adding, to each marked object, a mark indicating that associated information has been added, according to the marked object's attribute information and historical associated information. Optionally, the target image and its historical mark data may be obtained from the local database or from the server when the terminal detects the user's browsing instruction for the target image.
The attribute information of a marked object in the target image may include information such as the position, size (or dimensions) and object identifier of the marked object. Adding the mark indicating that associated information has been added, according to the marked object's attribute information and historical associated information, may be implemented as follows: count the items of historical associated information corresponding to the marked object; frame-select the marked object according to its attribute information; and display the count at a preset position on the marked object, such as the lower right corner. Referring to fig. 3, a schematic diagram of a target image according to an embodiment of the present invention: assume the target image is a cartoon image, the upper-left cartoon frame 301 and the lower-right cartoon frame 302 are marked objects, the upper-left frame 301 has 5 items of historical associated information, and the lower-right frame 302 has 2. The marks in fig. 3 may then be: the upper-left cartoon frame is outlined by a dashed box with the number 5 displayed in its lower right corner, and the lower-right cartoon frame is outlined by a dashed box with the number 2 displayed in its lower right corner.
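As an illustration of this marking step, the following sketch (not taken from the patent, which specifies no code) draws the marks with OpenCV. The field names x, y, w, h and history_info are assumptions for the example, and a solid rectangle stands in for the dashed outline of fig. 3.

    import cv2

    def draw_marks(image, marked_objects):
        # Outline each marked object and show the count of its historical
        # associated information at the lower-right corner, as in fig. 3.
        for obj in marked_objects:
            x, y, w, h = obj["x"], obj["y"], obj["w"], obj["h"]  # attribute information
            count = len(obj["history_info"])                     # e.g. 5 for frame 301
            cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 1)
            cv2.putText(image, str(count), (x + w - 20, y + h - 8),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
        return image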
The position of a marked object in the target image can be represented as position coordinates; alternatively, if the objects in the target image are arranged in a certain order, the position of the marked object may be its arrangement number in the target image. For example, referring to fig. 3, a coordinate system can be established on the target image with the length direction as the x-axis and the height direction as the y-axis, so that the position of each cartoon frame can be represented by a coordinate pair: the position of the marked object 301 may be (1, 1) and the position of the marked object 302 may be (2, 3). As another example, the cartoon frames may be arranged in the target image in the chronological order of their story lines, and an arrangement number can be assigned to each: the cartoon frame whose story line occurs first is numbered 1, the cartoon frame whose story line occurs second is numbered 2, and so on, in which case the positions of the marked objects in the target image are 1 and 5, respectively.
In one embodiment, the area occupied by each marked object in the target image is referred to as a marked area in the target image, for example, in fig. 3, the area where the cartoon grid at the upper left corner is located is a marked area, the area where the cartoon grid at the lower right corner is located is a marked area, and the areas of the target image other than the marked area are referred to as unmarked areas.
S202, if the operation position included in the operation area corresponding to the operation instruction is located in the unmarked area, performing operation object detection on the detection area corresponding to the operation position to obtain a detection result, and acquiring the operation object corresponding to the operation instruction from the target image according to the detection result.
The operation object may be a marked object in the target image, or may be an unmarked object in the target image. In one embodiment, if the operation instruction is a first operation instruction for adding associated information to a single object, the operation area and the operation position corresponding to the operation instruction are the same and may both be a single position point; for example, if the operation instruction is a click at a position in the target image, that position is both the operation position and the operation area. If the operation instruction is a second operation instruction for adding associated information to a combined object, the operation area corresponding to the operation instruction may be a frame selection area, and the operation position included in the operation area may be the center point of that frame selection area: if the operation instruction frame-selects a rectangular area on the target image, the rectangular area is the operation area and its center is the operation position; if the operation instruction selects a circular area on the target image, the circular area is the operation area and its center is the operation position.
In one embodiment, the detection area corresponding to the operation position may be obtained as follows: select as the detection area the region of the target image within a preset distance above and below the operation position, with the width of the detection area equal to the width of the target image by default. Optionally, the preset distance may be determined from the height of the target image. In a specific implementation, a weight coefficient may be set, and the preset distance equals the weight coefficient multiplied by the target image height. For example, with a weight coefficient of 0.3, the preset distance equals 30% of the image height: the detection area is formed by a region extending upward from the operation position by 0.3 times the image height together with a region extending downward by the same distance. In practice, the weight coefficient depends on the height of the target image and the size of the objects in it; the embodiment of the present invention merely lists one possible value and does not limit the specific weight coefficient.
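A minimal sketch of this computation, assuming the operation position's vertical pixel coordinate op_y is known; the function name and the default weight of 0.3 follow the example above and are illustrative only.

    def detection_area(op_y, image_width, image_height, weight=0.3):
        # Preset distance: weight coefficient multiplied by the image height.
        d = weight * image_height
        top = max(0, op_y - d)                 # clamp to the top of the image
        bottom = min(image_height, op_y + d)   # clamp to the bottom
        # The detection area spans the full image width by default.
        return (0, top, image_width, bottom)   # (left, top, right, bottom)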
In an embodiment, when performing operation object detection on the detection area corresponding to the operation position, different detection methods may be selected according to the type of object to be detected. For example, if the object to be detected is a person, the detection method may be a face recognition algorithm; if it is a cartoon frame, the detection method may be straight line detection; if it is text content, the detection method may be a regular expression, and so on. The detection result may reflect whether the detection area includes candidate objects that can serve as the operation object and, if so, how many.
In other embodiments, the detection area corresponding to the operation position may be a preset fixed area. The terminal may specify in advance a correspondence between receiving areas, which receive operation instructions, and detection areas; when the operation position of a received operation instruction falls within a preset receiving area, the detection area corresponding to that receiving area is taken as the detection area corresponding to the operation position.
In one embodiment, acquiring the operation object corresponding to the operation instruction from the target image according to the detection result may include: if the detection result indicates that the detection area includes at least one candidate object, selecting the operation object corresponding to the operation instruction from the at least one candidate object according to the operation position and the attribute information of each candidate object; if the detection result indicates that the detection area includes no candidate object, determining the target image itself as the operation object corresponding to the operation instruction. The operation position may be represented as coordinates, and the attribute information of each candidate object includes its position information (which may also be represented as coordinates) and its size information. Optionally, the way the operation object is selected from the at least one candidate object may differ depending on whether the operation instruction is the first or the second operation instruction.
In a specific implementation, if the operation instruction is a first operation instruction, acquiring the operation object corresponding to the operation instruction from the at least one candidate object according to the operation position and the attribute information of each candidate object includes: determining, from the operation position and the attribute information of each candidate object, whether any candidate object contains the operation position; if so, determining the candidate object containing the operation position as the operation object; if not, determining the candidate object whose distance to the operation position is smaller than a distance threshold as the operation object. In short, for a first operation instruction that adds associated information to a single object, the candidate object containing the operation position, or the one closest to it, is selected as the operation object. Optionally, after determining the operation object corresponding to the operation instruction, the terminal may highlight it or mark it in some other way, such as frame selection, so that the user can see directly in the browsing interface which operation object has been selected. If the current operation object is not the one the user expected (for example, because of an input error or because the terminal misrecognized the operation instruction), the user can input the operation instruction again to reselect the correct operation object.
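A sketch of this selection logic for the first operation instruction, assuming each candidate carries its bounding box in the attribute fields x, y, w, h (all names and the threshold value are illustrative assumptions):

    def pick_single(candidates, op_x, op_y, dist_threshold=80.0):
        def contains(c):  # does the candidate's bounding box include the click?
            return (c["x"] <= op_x <= c["x"] + c["w"]
                    and c["y"] <= op_y <= c["y"] + c["h"])

        def center_dist(c):  # distance from the click to the candidate's center
            cx, cy = c["x"] + c["w"] / 2, c["y"] + c["h"] / 2
            return ((cx - op_x) ** 2 + (cy - op_y) ** 2) ** 0.5

        for c in candidates:
            if contains(c):
                return c          # candidate containing the operation position
        nearest = min(candidates, key=center_dist, default=None)
        if nearest is not None and center_dist(nearest) < dist_threshold:
            return nearest        # nearest candidate within the distance threshold
        return None               # no suitable single object found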
Fig. 4a is a schematic diagram of determining an operation object corresponding to an operation instruction according to an embodiment of the present invention. In fig. 4a, it is assumed that the target image is a cartoon image, the object in the target image is a cartoon frame, an operation instruction input by the user in fig. 4a is a first operation instruction for adding related information to a single object, such as a click operation, and an operation position corresponding to the operation instruction is 401. Assume that, after the detection area corresponding to the operation position is detected, it is determined that two candidate objects 402 and 403 are included in the detection area. As can be seen from the figure, the operation position 401 is located on the candidate object 402, and therefore, the candidate object 402 is determined as the operation object corresponding to the operation instruction, and the terminal may highlight the candidate object 402 to prompt the user that the operation object corresponding to the operation instruction is 402.
If the operation instruction is a second operation instruction, acquiring the operation object corresponding to the operation instruction from the at least one candidate object according to the operation position and the attribute information of each candidate object includes: selecting, from the at least one candidate object, the candidate objects that overlap the operation area, and determining the selected candidate objects as the operation object corresponding to the operation instruction. In short, for a second operation instruction that adds associated information to a combined object, the candidate objects overlapping the operation area are combined together as the operation object.
Referring to fig. 4b, a schematic diagram of determining another operation object corresponding to an operation instruction according to an embodiment of the present invention. In fig. 4b, assume that the target image is a cartoon image whose objects are cartoon frames, and that the user inputs a second operation instruction whose operation area is the circular area 404, with the operation position at the center of the circle. Assume that detection on the detection area corresponding to the operation position determines that the detection area includes 4 candidate objects, labeled A, B, C and D in fig. 4b. As can be seen from fig. 4b, the operation area overlaps candidate objects C and D, so C and D are combined together as the operation object corresponding to the operation instruction.
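A sketch of the overlap test for the second operation instruction, using an axis-aligned rectangular operation area for simplicity (a circular area such as 404 could be approximated by its bounding box); the candidate fields are the same assumptions as in the previous sketch:

    def pick_combined(candidates, sel_left, sel_top, sel_right, sel_bottom):
        def overlaps(c):  # bounding boxes overlap unless separated on an axis
            return not (c["x"] + c["w"] < sel_left or c["x"] > sel_right or
                        c["y"] + c["h"] < sel_top or c["y"] > sel_bottom)
        # All overlapping candidates are combined into one operation object,
        # e.g. [C, D] for the circular area 404 in fig. 4b.
        return [c for c in candidates if overlaps(c)]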
S203, displaying the associated information entry interface corresponding to the operation object, and detecting the associated information entered on the associated information entry interface corresponding to the operation object.
In an embodiment, the associated information entry interface corresponding to the operation object is used to receive the associated information the user adds to the operation object. Optionally, the interface includes an associated information entry box and an exit button; display of the interface is terminated when the exit button is triggered, and associated information is detected in the entry box. The associated information entry interface may be a mask layer added over the browsing interface. Optionally, it may further include an associated information display area for showing the associated information already detected by the interface.
Referring to fig. 5a, a schematic diagram of an associated information entry interface provided by an embodiment of the present invention: the interface shown in fig. 5a may include an associated information entry box 501 and an exit button 502, and the user may input the associated information of the operation object, such as comment or annotation information, through the entry box 501. As can be seen in fig. 5a, once the operation object is determined it may be highlighted in the browsing interface, as shown at 500. The user may exit the current interface by clicking the exit button 502. The interface may further include an associated information display area 503, indicated by a dashed box; if the user is detected entering associated information in the entry box 501, that information is displayed in the display area 503.
In one embodiment, a keyboard pops up after the user clicks the entry box 501. As shown in fig. 5b, the keyboard may include a button 504 for confirming the input; after entering the associated information of the operation object through the keyboard, the user may click the button 504 to confirm adding the associated information to the operation object. Once the addition is confirmed, the associated information just entered may be displayed in the associated information display area 503.
In other embodiments, the associated information entry interface may also include a confirmation button 505 for confirming the addition of associated information, as shown in fig. 5c. After clicking the entry box 501 and filling in the associated information, the user may click the confirmation button 505; once it is clicked, the associated information has been successfully added to the operation object. At this point the newly entered associated information is displayed in the associated information display area 503 of the interface.
In one embodiment, if the operation object is a marked object included in the target image, the historical associated information corresponding to that marked object may also be displayed in the associated information entry interface. In other words, if the operation object determined from the detection result is a marked object of the target image, the associated information entry interface displayed for it contains, besides the associated information entry box and the exit button, the historical associated information of the operation object in its associated information display area.
S204, storing the entered associated information in association with the attribute information of the operation object, and displaying a mark indicating that associated information has been added on the operation object.
The attribute information of the operation object may include one or more of the position, size and object identifier of the operation object. In one embodiment, if associated information is detected on the associated information entry interface, the associated information is stored in association with the attribute information of the operation object; the associated information and attribute information may also be sent to a server and stored by the server, as sketched below.
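A sketch of this storage step using a local sqlite3 table; the schema is an assumption for illustration only, since the patent requires merely that associated information and attribute information be stored in association in a database, local or server-side.

    import sqlite3

    conn = sqlite3.connect("marks.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS marks (
        image_id TEXT, object_id TEXT,
        x INTEGER, y INTEGER, w INTEGER, h INTEGER,
        associated_info TEXT)""")

    def store_mark(image_id, obj, associated_info):
        # Store the entered associated information together with the operation
        # object's attribute information (identifier, position, size).
        conn.execute("INSERT INTO marks VALUES (?, ?, ?, ?, ?, ?, ?)",
                     (image_id, obj["id"], obj["x"], obj["y"],
                      obj["w"], obj["h"], associated_info))
        conn.commit()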
In one embodiment, the implementation of displaying the mark to which the associated information is added on the operation object may be: the operation object is framed or highlighted in the target image in a preset shape, and the number of pieces of association information corresponding to the operation object is displayed on the operation object. The specific implementation manner may be set according to actual needs of a user, and is not specifically limited in the embodiment of the present invention.
In the embodiment of the invention, an operation instruction for a target image displayed in a browsing interface is received; if the operation position included in the operation area corresponding to the operation instruction is located in an unmarked area, operation object detection is performed on the detection area corresponding to the operation position to obtain a detection result, and the operation object corresponding to the operation instruction is acquired from the target image according to the detection result. Further, an associated information entry interface corresponding to the operation object is displayed, the associated information entered on that interface is detected, the entered associated information is stored in association with the attribute information of the operation object, and a mark indicating that associated information has been added is displayed on the operation object. It should be understood that the operation object is determined by detecting a detection area, which is a part of the target image, and thus the operation object is a part included in the target image. During image processing, the part of the target image corresponding to a user operation can be automatically determined and processed accordingly, which enriches the available image processing modes and makes image processing more targeted.
Fig. 6 is a schematic flow chart of another image processing method according to an embodiment of the present invention. The image processing method shown in fig. 6 may be executed by the terminal, specifically by the terminal's processor, and may include the following steps:
step S601, a target image and historical marking data corresponding to the target image are obtained, wherein the historical marking data comprise attribute information and historical association information of a marked object.
Step S602, displaying the target image in the browsing interface and adding a mark added with the associated information for the marked object according to the attribute information and the history associated information of the marked object in the target image.
In an embodiment, the terminal may acquire the target image and its historical mark data as follows: the terminal receives the user's selection of the identifier of the target image; it searches the local database for the target image and the corresponding historical mark data; if found, it directly displays the target image in the browsing interface together with the marks indicating that the marked objects have associated information; if not found, it sends an acquisition request to the server, receives the target image and its historical mark data from the server, and displays the target image and the marks of the marked objects in the browsing interface. The identifier of the target image may be its name: for example, if the target image is a cartoon image, the user may select a chapter of a comic to read, the chapter may include multiple cartoon images, and the user can click to open any one of them by name; the cartoon image the user opens is the target image, and the click is the selection of the target image's identifier. Alternatively, the identifier of the target image may be the name of text content containing the target image; for example, if the target image illustrates a scene in a news report, the user may open the news report in order to browse the target image.
As can be seen from the foregoing, the target image stored in the server may be displayed by multiple terminals; when their users browse it, they may add associated information such as comments to objects in it, and each terminal synchronously saves that associated information in the server as historical mark data of the target image. If a terminal has fetched the target image and its historical mark data from the server into its local database but does not update them in time, the local copy of the historical mark data may become inconsistent with the server's. For this reason, when the server detects that the historical mark data of the target image has been updated, it may send an update notification to every terminal that has previously acquired the target image, notifying it to fetch the latest target image and historical mark data from the server again.
On this basis, after the terminal receives the user's trigger operation on the identifier of the target image, if it finds the target image stored locally, it can check the latest update time of the target image's historical mark data before displaying the target image in the browsing interface. If the difference between that update time and the current time exceeds a time threshold, the locally stored historical mark data may no longer be accurate, so the terminal acquires the target image and the latest historical mark data from the server and displays them in the browsing interface; if the difference is within the threshold, the terminal can display the target image and its historical mark data directly from the local database. A minimal sketch of this check follows.
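This sketch assumes the local cache records an updated_at timestamp; load_local and fetch_from_server are placeholders for the terminal's own data access, and the ten-minute threshold is an arbitrary example, as the patent leaves the time threshold open.

    import time

    TIME_THRESHOLD = 10 * 60  # seconds; illustrative value only

    def load_target_image(image_id, load_local, fetch_from_server):
        local = load_local(image_id)  # target image + historical mark data, or None
        if local is not None and time.time() - local["updated_at"] <= TIME_THRESHOLD:
            return local                        # local copy is recent enough to display
        return fetch_from_server(image_id)      # stale or missing: ask the server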
Step S603, receiving an operation instruction for the target image displayed in the browsing interface.
Step S604, if the operation position included in the operation area corresponding to the operation instruction is located in the unmarked area, performing operation object detection on the detection area corresponding to the operation position to obtain a detection result, and acquiring the operation object corresponding to the operation instruction from the target image according to the detection result.
As can be seen from the foregoing, the target image includes a marked region and an unmarked region. If the operation position corresponding to the operation instruction is located in the unmarked region, the user may want to add associated information to an object that has none yet, or the instruction may have been input inadvertently. If step S604 were executed as soon as an operation instruction is detected, the terminal could waste power determining an operation object for a user who input the instruction carelessly and does not actually want to add associated information. To save this unnecessary power consumption, before performing step S604 the terminal may further: output notification information asking whether to add associated information, and perform step S604 only if the user's confirmation of the notification is received. Referring to fig. 7, a schematic diagram of a terminal displaying notification information according to an embodiment of the present invention: when the user is detected inputting an operation instruction on the target image, a notification interface may be output, which may include notification information such as "Do you want to add associated information?", together with a confirm button and a cancel button. If the user clicks the confirm button, step S604 is performed; if the user clicks the cancel button, step S604 may be skipped.
In an embodiment, performing operation object detection on the detection area corresponding to the operation position to obtain the detection result includes: performing grayscale processing on the detection area, and performing straight line detection on the grayscale-processed detection area to obtain the straight lines it includes; fitting the detected straight lines with a preset linear equation to obtain the linear equations of the detection area; determining the target straight lines in the detection area according to the linear equations and a straight line screening condition; and extending the target straight lines and performing intersection detection and inclusion relation detection on them to obtain the detection result.
In one embodiment, a line detection algorithm may be used on the grayscale-processed detection area; common line detection algorithms include the Hough line detection algorithm, the Freeman line detection algorithm and the inchworm crawling algorithm. The Hough algorithm has strong anti-interference capability: it is insensitive to incomplete line segments, noise and other coexisting nonlinear structures, tolerates gaps in feature boundary descriptions, and is relatively unaffected by image noise, which makes it one of the most commonly used line detection algorithms. Applying Hough line detection to the detection area generally comprises the following steps: 1) denoise the grayscale-processed detection area to remove the noise it includes; 2) perform edge detection on the denoised detection area; 3) binarize (whether a pixel in the detection area is an edge point is decided by its gray value; if the gray value is 255, the pixel is an edge point); 4) map into Hough space and filter out interfering lines, finally obtaining the straight lines included in the detection area. A minimal sketch of these steps follows.
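The sketch below uses OpenCV, which the patent does not name; the library choice and all parameter values are illustrative assumptions. Canny edge detection produces the binary edge map of steps 2) and 3) in one call.

    import math
    import cv2

    def detect_lines(detection_area_bgr):
        gray = cv2.cvtColor(detection_area_bgr, cv2.COLOR_BGR2GRAY)  # grayscale processing
        gray = cv2.GaussianBlur(gray, (5, 5), 0)   # 1) denoise the detection area
        edges = cv2.Canny(gray, 50, 150)           # 2)+3) binary edge map; edge pixels are 255
        lines = cv2.HoughLinesP(edges, 1, math.pi / 180, threshold=80,
                                minLineLength=40, maxLineGap=10)  # 4) map to Hough space
        return [] if lines is None else [l[0] for l in lines]  # each line: [x1, y1, x2, y2]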
Referring to fig. 8a, which is a schematic diagram of performing line detection on a detection area according to an embodiment of the present invention: assume the target image is a cartoon image comprising a plurality of cartoon frames, and that 801 denotes a detection area determined from the target image according to the operation position. Line detection is performed on the detection area by using the Hough line detection method, obtaining all straight lines included in the detection area; 802 denotes the detection area after line detection, and the bold lines in 802 are the straight lines included in the detection area.
In one embodiment, if the number of straight lines obtained by performing line detection on the detection area is too large, the terminal consumes too much power when determining the operation object from the detection result. To save power consumption overhead and improve the efficiency with which the terminal detects the operation object, in the embodiment of the invention all straight lines obtained by line detection on the detection area are screened to obtain the target straight lines meeting a screening condition. In a specific implementation, each detected straight line is fitted with a preset linear equation; the linear equation used in the embodiment of the present invention may be y = kx + b, where y denotes the linear equation corresponding to a given straight line, k denotes its slope, and b denotes its intercept. The linear equation corresponding to each straight line in the detection area can be obtained in this way. Further, a screening condition is set, and the straight lines meeting the screening condition are merged or deleted to finally obtain the target straight lines. The screening condition may be: merge straight lines whose linear equations are similar within a preset range into a single straight line, and delete straight lines whose linear equation has fewer than a threshold number of similar equations. The preset range can be determined according to the size of each object in the target image; for example, if the objects included in the target image each have a size of length A and width B, the preset range can be set as a rectangular range of length A and width B. The number threshold may be set to any number, such as 2, 4, or 5, as desired. The above are only possible screening conditions listed in the embodiments of the present invention; in practical applications the screening condition may also be set according to actual requirements.
In one embodiment, similar equations may refer to linear equations that have the same slope and whose intercepts differ within a preset difference range. For example, the linear equations y1 = 5x + b1 and y2 = 5x + b2 have the same slope; if the difference between b2 and b1 is within the preset difference range, the two linear equations are determined to be similar linear equations. Deleting straight lines whose equation has fewer than a threshold number of similar equations means: for a given linear equation, if the number of linear equations similar to it is less than the number threshold, the corresponding straight line may be deleted.
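A possible implementation of this screening step is sketched below, assuming each detected straight line has already been reduced to a (k, b) pair for y = kx + b. The grouping strategy and the exact-slope comparison are simplifying assumptions of the sketch, not details fixed by the embodiment.

```python
def screen_lines(line_eqs, intercept_range, count_threshold):
    """line_eqs: list of (k, b) pairs, one per detected line y = kx + b.

    Lines with similar equations (same slope, intercepts differing within
    intercept_range) are merged into one line; groups with fewer than
    count_threshold similar equations are deleted.
    """
    groups = []  # each group collects lines with mutually similar equations
    for k, b in line_eqs:
        for group in groups:
            gk, gb = group[0]
            # Exact slope comparison is a simplification of this sketch; a
            # practical version would also tolerate a small slope difference.
            if k == gk and abs(b - gb) <= intercept_range:
                group.append((k, b))
                break
        else:
            groups.append([(k, b)])
    target_lines = []
    for group in groups:
        if len(group) >= count_threshold:
            # Merge the similar lines into a single line by averaging b.
            merged_b = sum(b for _, b in group) / len(group)
            target_lines.append((group[0][0], merged_b))
    return target_lines
```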
Referring to fig. 8b, which is a schematic diagram of straight line screening provided in an embodiment of the present invention: 802 denotes the detection area after line detection, which may include a plurality of straight lines. Assume the screening condition is: merge straight lines whose linear equations are similar within a preset range into a single straight line, and delete straight lines whose equation has fewer than the threshold number of similar equations. For example, in 802 the slopes of straight line A and straight line B are the same and the difference of their intercepts is within the preset difference range, so A and B are merged into straight line C; as another example, if straight line E is the only line whose equation is similar to that of straight line D, both E and D are deleted. The straight lines included in 802 are screened in this way; 803 denotes the screened detection area, and the straight lines included in 803 are the target straight lines.
Further, the target straight lines are extended, and intersection detection and inclusion-relation detection are performed among them to obtain the detection result. In a specific implementation, the extended target straight lines intersect, and every four adjacent intersection points form a rectangular frame; the inclusion relationships among the rectangular frames are then detected, the included (inner) frames are deleted from the pairs of frames having an inclusion relationship, and the detection result is finally obtained.
Referring to fig. 8c, which is a schematic diagram of intersection and inclusion-relation detection according to an embodiment of the present invention: in fig. 8c, 803 denotes the detection area screened by the screening condition and includes a plurality of target straight lines. All target straight lines in 803 are extended and intersection detection is performed, yielding a plurality of rectangular frames as indicated by 804 (only some of the rectangular frames are shown in 804). Then the included rectangular frames are deleted from the frames having inclusion relationships; for example, since rectangular frame ABCD in 804 is included in rectangular frame AEFD, frame ABCD is deleted. Proceeding in this way, the detection result is obtained.
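The intersection and inclusion-relation detection can be sketched as follows. The sketch assumes the target straight lines are given as (k, b) pairs (so vertical lines are out of scope) and that grouping four adjacent intersection points into frames has already produced axis-aligned rectangles; only the line intersection and the deletion of included frames are shown.

```python
def intersect(line1, line2):
    """Intersection point of two extended lines given as (k, b)."""
    (k1, b1), (k2, b2) = line1, line2
    if k1 == k2:  # parallel (or identical) lines never intersect
        return None
    x = (b2 - b1) / (k1 - k2)
    return (x, k1 * x + b1)

def contains(outer, inner):
    """True if rectangle `inner` lies within rectangle `outer`.

    Rectangles are axis-aligned tuples (x_min, y_min, x_max, y_max).
    """
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def drop_included(rects):
    """Delete the included frames from frames with an inclusion relation;
    the surviving frames constitute the detection result."""
    return [r for r in rects
            if not any(o != r and contains(o, r) for o in rects)]
```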
In an embodiment, after the detection result is obtained, the operation object corresponding to the operation instruction may be obtained according to the detection result, and the specific implementation manner may refer to the description of the relevant steps in the embodiment of fig. 2, which is not described herein again.
Step S605, displaying the associated information entry interface corresponding to the operation object, and detecting the associated information entered on the associated information entry interface corresponding to the operation object.
Step S606, storing the entered associated information and the attribute information of the operation object in an associated manner, and displaying the mark indicating that associated information has been added on the operation object.
As can be seen from the foregoing, the manner of acquiring the operation object corresponding to the operation instruction from the detection result varies with the operation instruction, and so does the acquired operation object. For different operation objects, the corresponding associated information entry interfaces may also differ, and so, finally, do the displayed marks indicating that associated information has been added. The entry interfaces corresponding to different operation objects and the marks indicating added associated information are described below with reference to fig. 9a and fig. 9b.
Referring to fig. 9a, which is a schematic diagram of adding associated information to a single object according to an embodiment of the present invention: the target image is a cartoon image including multiple cartoon frames, and the single object may be any one of those cartoon frames. If the operation instruction input by the user in the target image is a click operation and the click position is 901, the terminal determines through step S604 that the operation object corresponding to the operation instruction is 902, may highlight 902, and displays the associated information entry interface corresponding to operation object 902. The associated information entry interface may include an associated information entry box 903 and an exit button 904. The user inputs the associated information of operation object 902 (for example, a short comment) through entry box 903, then clicks exit button 904 to leave the entry interface and return to the browsing interface. In the browsing interface, a mark indicating that operation object 902 has added associated information is displayed, for example by framing operation object 902 and displaying the quantity of associated information in its lower right corner.
Referring to fig. 9b, which is a schematic diagram of adding associated information to a combined object according to an embodiment of the present invention: assume the target image is a cartoon image including multiple cartoon frames, and the combined object is a combination of two or more cartoon frames. If the operation instruction input by the user in the target image is a frame selection operation whose operation area is frame-selection area B, and the operation objects obtained through step S604 are C and D, then C and D are combined together as the operation object corresponding to the operation instruction, the operation object is highlighted, and its associated information entry interface is displayed. The user may enter associated information through the associated information entry box 903 in that interface, click exit button 904 to leave the interface, and return to the browsing interface; the mark indicating that associated information has been added is displayed on the operation object in the browsing interface, for example by framing C and D and displaying the quantity of associated information in the lower right corner.
Step S607, if the operation position corresponding to the operation instruction is located in the marked region, acquiring history association information corresponding to the marked object in the marked region.
The marked region may refer to any marked region in the target image; it includes a marked object, which may be a single object or a combined object obtained by combining a plurality of single objects. Optionally, if the operation position corresponding to the operation instruction is located in the marked region, this indicates that the user wants to view the associated information of the marked object in that region or to add associated information to it. In this case, the terminal may obtain the historical associated information corresponding to the marked object in the marked region and display it in the associated information entry interface corresponding to the marked object. The historical associated information corresponding to the marked object may have been added by other users when they browsed the target image.
As can be seen from the foregoing, the marked object may be a single object or a combined object obtained by combining a plurality of single objects. If the marked object is a combined object, the single objects it includes are treated as a whole: no matter which of those single objects the user's operation instruction falls on, the operation instruction is considered to be directed at the marked object.
Step S608, displaying the associated information entry interface corresponding to the marked object, and displaying the historical associated information on the associated information entry interface corresponding to the marked object.
Step S609, when new associated information is detected in the associated information entry interface corresponding to the marked object, adding the new associated information to the historical associated information corresponding to the marked object.
In one embodiment, the associated information entry interface corresponding to the marked object may include an associated information display area, an associated information entry box, and an exit button. After the terminal acquires the historical associated information of the marked object, it may display that information in the associated information display area. If the user wants to add associated information to the marked object, the information to be added can be input in the associated information entry box, and the newly added associated information is then added to the historical associated information corresponding to the marked object.
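A minimal sketch of the associated storage and of appending newly entered information to the history is shown below. The dictionary-based layout and the x/y/width/height field names are assumptions of this sketch, since the embodiment only specifies that associated information is stored in association with the object's attribute information (position and size).

```python
from collections import defaultdict

# Hypothetical store: attribute information -> historical associated info.
history_store = defaultdict(list)

def attribute_key(obj):
    """Attribute information of an object: position and size."""
    return (obj["x"], obj["y"], obj["width"], obj["height"])

def add_associated_info(obj, text):
    """Steps S606/S609: store the entered associated information in
    association with the object's attribute information."""
    history_store[attribute_key(obj)].append(text)

def historical_info_of(marked_obj):
    """Historical associated information displayed in the entry interface
    corresponding to a marked object (step S608)."""
    return history_store[attribute_key(marked_obj)]
```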
Referring to fig. 10, which is a schematic diagram of viewing the associated information of a marked object according to an embodiment of the present invention: in fig. 10, assume the target image is a cartoon image whose multiple cartoon frames are the objects included in the target image; 1001 and 1002 denote marked objects in the target image, where 1001 is a single object and 1002 is a combined object composed of two single objects. The two marked objects carry marks indicating that associated information has been added: as shown in fig. 10, each is enclosed by a dashed box, with the quantity of its historical associated information at the lower right corner of the box; the quantity corresponding to 1001 is 5 and the quantity corresponding to 1002 is 2. If the user inputs an operation instruction on either single object of marked object 1002, the operation instruction is deemed to be input on marked object 1002; the terminal displays the associated information entry interface corresponding to marked object 1002, and the historical associated information corresponding to the marked object is displayed in that interface, as shown at 1003. The user can input new associated information in the entry interface; if the user does not want to view the associated information, the user can return to the browsing interface through the exit button of the entry interface.
The terminal acquires the target image and the historical mark data corresponding to the target image, displays the target image in the browsing interface, and, according to the attribute information of the marked objects in the target image and the quantity of their historical associated information, adds to each marked object the mark indicating that associated information has been added. When an operation instruction for the target image displayed in the browsing interface is received, the terminal judges whether the operation instruction falls into an unmarked region or a marked region. If it falls into an unmarked region, operation object detection is performed on the detection area corresponding to the operation position to obtain a detection result, and the operation object corresponding to the operation instruction is acquired from the target image according to the detection result; the associated information entry interface corresponding to the operation object is then displayed, the associated information entered on it is detected, the entered associated information and the attribute information of the operation object are stored in an associated manner, and the mark indicating that associated information has been added is displayed on the operation object. If the operation instruction falls into a marked region, the historical associated information corresponding to the marked object in that region is acquired, the associated information entry interface corresponding to the marked object is displayed, and the historical associated information is displayed on it; when new associated information is detected in that interface, the new associated information is added to the historical associated information corresponding to the marked object. In this image processing process, the terminal can automatically choose, according to the position at which the user inputs an operation instruction, either to display the associated information of a marked object or to add associated information to an operation object, so that a part of the target image is processed according to the user's operation instruction and the pertinence of image processing is improved.
Based on the above embodiments of the image processing method, an embodiment of the invention further provides an image processing apparatus. Referring to fig. 11, which is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention, the image processing apparatus shown in fig. 11 may include the following units:
a receiving unit 1101 configured to receive an operation instruction for a target image displayed in a browsing interface;
the processing unit 1102 is configured to, if an operation position included in an operation area corresponding to the operation instruction is located in an unmarked area, perform operation object detection on a detection area corresponding to the operation position to obtain a detection result;
an obtaining unit 1103, configured to obtain, according to the detection result, an operation object corresponding to the operation instruction from the target image;
a display unit 1104, configured to display an associated information entry interface corresponding to the operation object;
the processing unit 1102 is further configured to detect associated information entered on an associated information entry interface corresponding to the operation object;
a storage unit 1105, configured to store the entered association information and the attribute information of the operation object in a database in an associated manner;
the display unit 1104 is further configured to display a mark to which the associated information is added on the operation object.
In one embodiment, when the obtaining unit 1103 obtains the operation object corresponding to the operation instruction from the target image according to the detection result, the following operations are performed: if the detection result indicates that the detection area comprises at least one candidate object, acquiring an operation object corresponding to the operation instruction from the at least one candidate object according to the operation position and attribute information of each candidate object, wherein the attribute information comprises position information and size information; and if the detection result indicates that the detection area does not comprise the candidate object, determining the target image as the operation object corresponding to the operation instruction.
In one embodiment, the operation instruction comprises a first operation instruction for adding association information for a single object; or the operation instruction comprises a second operation instruction used for adding the associated information for the combined object.
In one embodiment, the operation instruction includes a first operation instruction for adding association information to a single object, and the obtaining unit 1103 performs the following operations when obtaining an operation object corresponding to the operation instruction from the at least one candidate object according to the operation position and the attribute information of each candidate object: determining whether a candidate object comprising the operation position exists in the at least one candidate object according to the operation position and the attribute information of each candidate object; if the candidate object exists, determining the candidate object comprising the operation position as an operation object; and if not, determining the candidate object with the distance to the operation position smaller than the distance threshold value as the operation object.
In one embodiment, the operation instruction includes a second operation instruction for adding association information to the combined object, and the obtaining unit 1103 performs the following operations when obtaining the operation object corresponding to the operation instruction from the at least one candidate object according to the operation position and the attribute information of each candidate object: and selecting a candidate object overlapped with the detection area in the at least one candidate object according to the operation position and the attribute information of each candidate object, and determining the selected candidate object as the operation object corresponding to the operation instruction.
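The two selection rules implemented by the obtaining unit can be sketched as follows. Representing a candidate's attribute information as x/y/width/height fields and measuring distance from the operation position to the candidate's center are assumptions of this sketch, since the embodiment does not fix how the distance is measured.

```python
def bounds(cand):
    """Rectangle of a candidate from its attribute information."""
    return (cand["x"], cand["y"],
            cand["x"] + cand["width"], cand["y"] + cand["height"])

def pick_single(candidates, pos, distance_threshold):
    """First operation instruction: prefer a candidate containing the
    operation position; otherwise take one closer than the threshold."""
    px, py = pos
    for cand in candidates:
        x1, y1, x2, y2 = bounds(cand)
        if x1 <= px <= x2 and y1 <= py <= y2:
            return cand
    for cand in candidates:
        x1, y1, x2, y2 = bounds(cand)
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2  # distance to center (assumed)
        if ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5 < distance_threshold:
            return cand
    return None

def pick_combined(candidates, box):
    """Second operation instruction: every candidate overlapping the
    frame-selected detection area joins the combined operation object."""
    bx1, by1, bx2, by2 = box
    combined = []
    for cand in candidates:
        x1, y1, x2, y2 = bounds(cand)
        if x1 < bx2 and bx1 < x2 and y1 < by2 and by1 < y2:
            combined.append(cand)
    return combined
```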
In an embodiment, when detecting the operation object in the detection area corresponding to the operation position to obtain the detection result, the processing unit 1102 performs the following operations: performing grayscale processing on the detection area, and performing line detection on the grayscale-processed detection area to obtain the straight lines included in the detection area; fitting the straight lines included in the detection area with a preset linear equation to obtain the linear equations of the straight lines included in the detection area; determining the target straight lines included in the detection area according to the linear equations and a line screening condition; and extending the target straight lines and performing intersection detection and inclusion-relation detection on them to obtain the detection result.
In one embodiment, the associated information entry interface includes an associated information entry box and an exit button, and the exit button ends displaying the associated information entry interface when triggered, and the associated information is detected in the associated information entry box.
In one embodiment, the target image further includes a marked region, the marked region includes a marked object to which a related information mark is added, and the obtaining unit 1103 is further configured to obtain the target image and historical mark data corresponding to the target image, where the historical mark data includes attribute information of the marked object in the target image and historical related information corresponding to the marked object; the display unit 1104 is further configured to display the target image in a browsing interface and add a tag to which associated information is added to the tagged object according to attribute information of the tagged object in the target image and the historical associated information.
In an embodiment, the obtaining unit 1103 is further configured to obtain history association information corresponding to a marked object in the marked area if an operation position included in an operation area corresponding to the operation instruction is located in the marked area; the display unit 1104 is further configured to display an association information entry interface corresponding to the marked object, and display the history association information on the association information entry interface corresponding to the marked object; the processing unit 1102 is further configured to, when new associated information is detected in an associated information entry interface corresponding to the marked object, add the new associated information as historical associated information corresponding to the marked object.
In one embodiment, if the operation object is located in the marked area, and the operation object includes a marked object, the display unit 1104 is further configured to display history associated information corresponding to the operation object in an associated information entry interface corresponding to the operation object.
According to an embodiment of the present invention, the steps involved in the image processing methods shown in fig. 2 and fig. 6 may be performed by the units in the image processing apparatus shown in fig. 11. For example, step S201 described in fig. 2 may be performed by the receiving unit 1101 in the image processing apparatus shown in fig. 11, step S202 may be performed by the processing unit 1102 and the acquiring unit 1103, step S203 may be performed by the display unit 1104 and the processing unit 1102, and step S204 may be performed by the display unit 1104 and the storage unit 1105. As another example, step S601 shown in fig. 6 may be performed by the acquiring unit 1103, step S602 by the display unit 1104, step S603 by the receiving unit 1101, step S604 by the processing unit 1102 and the acquiring unit 1103, step S605 by the display unit 1104 and the processing unit 1102, step S606 by the storage unit 1105 and the processing unit 1102, step S607 by the acquiring unit 1103, step S608 by the display unit 1104, and step S609 by the processing unit 1102.
According to another embodiment of the present invention, the units in the image processing apparatus shown in fig. 11 may be individually or entirely combined into one or several other units, or one (or more) of them may be further split into multiple functionally smaller units, without affecting the achievement of the technical effects of the embodiments of the invention. The above units are divided based on logical functions; in practical applications, the function of one unit may be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the invention, the image processing apparatus may also include other units; in practical applications these functions may also be realized with the assistance of other units and through the cooperation of multiple units.
According to another embodiment of the present invention, the image processing apparatus shown in fig. 11 may be constructed, and the image processing method according to an embodiment of the present invention implemented, by running a computer program (including program code) capable of executing the steps of the methods shown in fig. 2 or fig. 6 on a general-purpose computing device, such as a computer, that includes processing elements such as a central processing unit (CPU), a random access memory (RAM), a read-only memory (ROM), and storage elements. The computer program may be recorded on, for example, a computer-readable storage medium, and loaded into and executed by the above-described computing device via the computer-readable storage medium.
In the embodiment of the invention, an operation instruction for a target image displayed in a browsing interface is received; if the operation position included in the operation area corresponding to the operation instruction is located in an unmarked area, operation object detection is performed on the detection area corresponding to the operation position to obtain a detection result, and the operation object corresponding to the operation instruction is acquired from the target image according to the detection result. Further, the associated information entry interface corresponding to the operation object is displayed, and the associated information entered on that interface is detected; the entered associated information and the attribute information of the operation object are stored in an associated manner, and the mark indicating that associated information has been added is displayed on the operation object. It should be understood that the operation object is determined by detecting the detection area, which is a part of the target image, and thus the operation object is a part included in the target image. In the image processing process, the part of the target image corresponding to a user operation can be automatically identified according to that operation and processed accordingly, which enriches the image processing modes and enhances the pertinence of image processing.
Based on the above method embodiment and apparatus embodiment, an embodiment of the present invention further provides a terminal, referring to fig. 12, which is a schematic structural diagram of a terminal provided in an embodiment of the present invention, and the terminal shown in fig. 12 may at least include a processor 1201, an input interface 1202, an output interface 1203, and a computer storage medium 1204. The processor 1201, the input interface 1202, the output interface 1203, and the computer storage medium 1204 may be connected by a bus or other means.
The computer storage medium 1204 may be stored in the memory of the terminal and is configured to store a computer program comprising program instructions; the processor 1201 is configured to execute the program instructions stored in the computer storage medium 1204. The processor 1201 (or CPU) is the computing core and control core of the terminal; it is adapted to implement one or more instructions, and specifically to load and execute one or more instructions so as to realize the corresponding method flow or function. In an embodiment, the processor 1201 according to an embodiment of the present invention may be configured to perform: receiving an operation instruction for a target image displayed in a browsing interface; if the operation position included in the operation area corresponding to the operation instruction is located in an unmarked area, performing operation object detection on the detection area corresponding to the operation position to obtain a detection result, and acquiring the operation object corresponding to the operation instruction from the target image according to the detection result; displaying the associated information entry interface corresponding to the operation object, and detecting the associated information entered on the associated information entry interface corresponding to the operation object; and storing the entered associated information and the attribute information of the operation object in a database in an associated manner, and displaying the mark indicating that associated information has been added on the operation object.
An embodiment of the present invention further provides a computer storage medium (memory), which is a storage device in the terminal and is used to store programs and data. It is understood that the computer storage medium here may include a built-in storage medium of the terminal and may also include an extended storage medium supported by the terminal. The computer storage medium provides storage space that stores the operating system of the terminal. One or more instructions suitable for loading and execution by the processor 1201, which may be one or more computer programs (including program code), are also stored in the storage space. The computer storage medium may be a high-speed RAM memory, or may be a non-volatile memory, such as at least one disk memory; optionally, it may also be at least one computer storage medium located remotely from the aforementioned processor.
In one embodiment, one or more instructions stored in a computer storage medium may be loaded and executed by the processor 1201 to implement the corresponding steps of the method in the above-described embodiments related to the image processing method, and in particular, the one or more instructions stored in the computer storage medium may be loaded and executed by the processor 1201 to implement the steps of: receiving an operation instruction aiming at a target image displayed in a browsing interface; if the operation position included in the operation area corresponding to the operation instruction is located in the unmarked area, performing operation object detection on the detection area corresponding to the operation position to obtain a detection result, and acquiring the operation object corresponding to the operation instruction from the target image according to the detection result; displaying a relevant information entry interface corresponding to the operation object, and detecting relevant information entered on the relevant information entry interface corresponding to the operation object; and storing the input associated information and the attribute information of the operation object in a database in an associated manner, and displaying a mark added with the associated information on the operation object.
In one embodiment, when the processor 1201 acquires an operation object corresponding to the operation instruction from the target image according to the detection result, the following operations are performed: if the detection result indicates that the detection area comprises at least one candidate object, acquiring an operation object corresponding to the operation instruction from the at least one candidate object according to the operation position and attribute information of each candidate object, wherein the attribute information comprises position information and size information; and if the detection result indicates that the detection area does not comprise the candidate object, determining the target image as the operation object corresponding to the operation instruction.
In one embodiment, the operation instruction comprises a first operation instruction for adding association information for a single object; or the operation instruction comprises a second operation instruction used for adding the associated information for the combined object.
In one embodiment, the operation instruction includes a first operation instruction for adding association information to a single object, and the processor 1201, when acquiring an operation object corresponding to the operation instruction from the at least one candidate object according to the operation position and the attribute information of each candidate object, performs the following operations: determining whether a candidate object comprising the operation position exists in the at least one candidate object according to the operation position and the attribute information of each candidate object; if the candidate object exists, determining the candidate object comprising the operation position as an operation object; and if not, determining the candidate object with the distance to the operation position smaller than the distance threshold value as the operation object.
In one embodiment, the operation instruction includes a second operation instruction for adding association information to the combined object, and the processor 1201, when acquiring the operation object corresponding to the operation instruction from the at least one candidate object according to the operation position and the attribute information of each candidate object, performs the following operations: and selecting a candidate object overlapped with the detection area in the at least one candidate object according to the operation position and the attribute information of each candidate object, and determining the selected candidate object as the operation object corresponding to the operation instruction.
In an embodiment, when detecting the operation object in the detection area corresponding to the operation position to obtain the detection result, the processor 1201 performs the following operations: performing grayscale processing on the detection area, and performing line detection on the grayscale-processed detection area to obtain the straight lines included in the detection area; fitting the straight lines included in the detection area with a preset linear equation to obtain the linear equations of the straight lines included in the detection area; determining the target straight lines included in the detection area according to the linear equations and a line screening condition; and extending the target straight lines and performing intersection detection and inclusion-relation detection on them to obtain the detection result.
In one embodiment, the associated information entry interface includes an associated information entry box and an exit button, and the exit button ends displaying the associated information entry interface when triggered, and the associated information is detected in the associated information entry box.
In one embodiment, the target image further includes a marked region, the marked region includes a marked object to which the association information mark has been added, and the processor 1201 is further configured to: acquiring the target image and historical marking data corresponding to the target image, wherein the historical marking data comprises attribute information of a marked object in the target image and historical associated information corresponding to the marked object; and displaying the target image in a browsing interface and adding a mark added with the associated information for the marked object according to the attribute information of the marked object in the target image and the historical associated information.
In one embodiment, the processor 1201 is further configured to: if the operation position included in the operation area corresponding to the operation instruction is located in the marked area, acquiring historical associated information corresponding to the marked object in the marked area; displaying a related information entry interface corresponding to the marked object, and displaying the historical related information on the related information entry interface corresponding to the marked object; and when new associated information is detected in an associated information entry interface corresponding to the marked object, adding the new associated information as historical associated information corresponding to the marked object.
In one embodiment, if the operation object is located in the marked region, the operation object includes a marked object, the processor 1201 is further configured to: and displaying historical associated information corresponding to the operation object in an associated information entry interface corresponding to the operation object.
In the embodiment of the invention, an operation instruction for a target image displayed in a browsing interface is received; if the operation position included in the operation area corresponding to the operation instruction is located in an unmarked area, operation object detection is performed on the detection area corresponding to the operation position to obtain a detection result, and the operation object corresponding to the operation instruction is acquired from the target image according to the detection result. Further, the associated information entry interface corresponding to the operation object is displayed, and the associated information entered on that interface is detected; the entered associated information and the attribute information of the operation object are stored in an associated manner, and the mark indicating that associated information has been added is displayed on the operation object. It should be understood that the operation object is determined by detecting the detection area, which is a part of the target image, and thus the operation object is a part included in the target image. In the image processing process, the part of the target image corresponding to a user operation can be automatically identified according to that operation and processed accordingly, which enriches the image processing modes and enhances the pertinence of image processing.
The above disclosure is intended to be illustrative of only some embodiments of the invention, and is not intended to limit the scope of the invention.

Claims (13)

1. An image processing method, comprising:
receiving an operation instruction aiming at a target image displayed in a browsing interface;
if the operation position included in the operation area corresponding to the operation instruction is located in the unmarked area, performing operation object detection on the detection area corresponding to the operation position to obtain a detection result, and acquiring the operation object corresponding to the operation instruction from the target image according to the detection result;
displaying a relevant information entry interface corresponding to the operation object, and detecting relevant information entered on the relevant information entry interface corresponding to the operation object;
and storing the input associated information and the attribute information of the operation object in an associated manner, and displaying a mark added with the associated information on the operation object.
2. The method of claim 1, wherein the obtaining the operation object corresponding to the operation instruction from the target image according to the detection result comprises:
if the detection result indicates that the detection area comprises at least one candidate object, acquiring an operation object corresponding to the operation instruction from the at least one candidate object according to the operation position and attribute information of each candidate object, wherein the attribute information comprises position information and size information;
and if the detection result indicates that the detection area does not comprise the candidate object, determining the target image as the operation object corresponding to the operation instruction.
3. The method of claim 2, wherein the operation instructions include a first operation instruction for adding association information for a single object; or the operation instruction comprises a second operation instruction used for adding the associated information for the combined object.
4. The method according to claim 3, wherein the operation instruction includes a first operation instruction for adding association information to a single object, and the obtaining an operation object corresponding to the operation instruction from the at least one candidate object according to the operation position and the attribute information of each candidate object includes:
determining whether a candidate object comprising the operation position exists in the at least one candidate object according to the operation position and the attribute information of each candidate object;
if the candidate object exists, determining the candidate object comprising the operation position as an operation object;
and if not, determining the candidate object with the distance to the operation position smaller than the distance threshold value as the operation object.
5. The method of claim 3, wherein the operation instruction comprises a second operation instruction for adding association information to the combined object, and the obtaining the operation object corresponding to the operation instruction from the at least one candidate object according to the operation position and the attribute information of each candidate object comprises:
and selecting a candidate object overlapped with the detection area in the at least one candidate object according to the operation position and the attribute information of each candidate object, and determining the selected candidate object as the operation object corresponding to the operation instruction.
6. The method of claim 1, wherein the detecting the operation object in the detection area corresponding to the operation position to obtain a detection result comprises:
performing grayscale processing on the detection area, and performing line detection on the grayscale-processed detection area to obtain the straight lines included in the detection area;
fitting the straight lines included in the detection area with a preset linear equation to obtain the linear equations of the straight lines included in the detection area;
determining the target straight lines included in the detection area according to the linear equations and a line screening condition;
and extending the target straight lines and performing intersection detection and inclusion-relation detection on them to obtain the detection result.
7. The method of claim 1, wherein the associated information entry interface comprises an associated information entry box and an exit button that ends display of the associated information entry interface when triggered, the associated information detected in the associated information entry box.
8. The method of claim 1, wherein the target image further comprises a marked area, the marked area comprises a marked object to which an associated information mark has been added, and before receiving the operation instruction for the target image displayed in the browsing interface, the method further comprises:
acquiring the target image and historical marking data corresponding to the target image, wherein the historical marking data comprises attribute information of a marked object in the target image and historical associated information corresponding to the marked object;
and displaying the target image in a browsing interface and adding a mark added with the associated information for the marked object according to the attribute information of the marked object in the target image and the historical associated information.
9. The method of claim 8, wherein the method further comprises:
if the operation position included in the operation area corresponding to the operation instruction is located in the marked area, acquiring historical associated information corresponding to the marked object in the marked area;
displaying a related information entry interface corresponding to the marked object, and displaying the historical related information on the related information entry interface corresponding to the marked object;
and when new associated information is detected in an associated information entry interface corresponding to the marked object, adding the new associated information as historical associated information corresponding to the marked object.
10. The method of claim 8, wherein if the operand is located within the marked region, the operand comprising a marked object, the method further comprises:
and displaying historical associated information corresponding to the operation object in an associated information entry interface corresponding to the operation object.
11. An image processing apparatus characterized by comprising:
the receiving unit is used for receiving an operation instruction aiming at a target image displayed in the browsing interface;
the processing unit is used for detecting an operation object in a detection area corresponding to the operation position to obtain a detection result if the operation position included in the operation area corresponding to the operation instruction is located in an unmarked area;
the acquisition unit is used for acquiring an operation object corresponding to the operation instruction from the target image according to the detection result;
the display unit is used for displaying the associated information entry interface corresponding to the operation object;
the processing unit is further configured to detect associated information entered on an associated information entry interface corresponding to the operation object;
the storage unit is used for storing the input associated information and the attribute information of the operation object into a database in an associated manner;
the display unit is further configured to display a mark to which the associated information has been added on the operation object.
12. A terminal, characterized in that it comprises:
a processor adapted to implement one or more instructions; and the number of the first and second groups,
a computer storage medium having stored thereon one or more instructions adapted to be loaded by the processor and to execute the image processing method according to any one of claims 1-10.
13. A computer storage medium having computer program instructions stored therein, which when executed by a processor, are adapted to perform the image processing method of any of claims 1-10.
CN201911186508.4A 2019-11-29 2019-11-29 Image processing method, device, terminal and storage medium Active CN110908570B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911186508.4A CN110908570B (en) 2019-11-29 2019-11-29 Image processing method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110908570A true CN110908570A (en) 2020-03-24
CN110908570B CN110908570B (en) 2023-01-31

Family

ID=69818849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911186508.4A Active CN110908570B (en) 2019-11-29 2019-11-29 Image processing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110908570B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111552429A (en) * 2020-04-29 2020-08-18 杭州海康威视数字技术股份有限公司 Graph selection method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050158043A1 (en) * 1997-07-12 2005-07-21 Kia Silverbrook Printing cartridge with ink and print media supplies
CN102682025A (en) * 2011-03-16 2012-09-19 中兴通讯股份有限公司 Browser and method for achieving adding and displaying of web image comments
CN107765960A (en) * 2017-10-24 2018-03-06 ***通信集团公司 A kind of information cuing method, device and storage medium
CN108241464A (en) * 2017-12-13 2018-07-03 深圳市金立通信设备有限公司 A kind of method, terminal and computer readable storage medium for showing chat message
CN109905775A (en) * 2019-01-16 2019-06-18 北京奇艺世纪科技有限公司 A kind of scribble barrage generates and display methods, device, terminal device and storage medium


Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40022529

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant