CN112446936A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN112446936A
CN112446936A (application CN201910809131.7A)
Authority
CN
China
Prior art keywords
image
processed
processing
coordinates
mouse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910809131.7A
Other languages
Chinese (zh)
Inventor
蒋小晴
白圣培
刘海锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201910809131.7A priority Critical patent/CN112446936A/en
Publication of CN112446936A publication Critical patent/CN112446936A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/80: Creating or modifying a manually drawn or painted image using a manual input device, e.g. mouse, light pen, direction keys on keyboard

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an image processing method and device, and relates to the technical field of computers. The method is applied to a browser, and the specific implementation mode comprises the following steps: displaying an image to be processed, and determining the current processing purpose and processing range selection mode for the image to be processed, wherein the processing purpose comprises removal or retention; after monitoring a first mouse event occurring on the image to be processed, recording the coordinates of the mouse pointer at each position over the image; stopping the recording when a second mouse event occurring on the image to be processed is monitored; and determining the processing area of the original image corresponding to the image to be processed by using the recorded coordinates and the processing range selection mode, and sending the determined processing purpose information and processing area information to a back-end server. By selecting the image processing range in step with the user's actions, the embodiment can achieve a fine and controllable image processing result.

Description

Image processing method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image processing method and apparatus.
Background
With the rapid development of Internet technology, matting software based on artificial intelligence (AI) has emerged. Matting, which separates or crops out part of an image, is a common operation in image processing. Existing matting software requires the user to mark foreground and background colors with a simple action such as a stroke, after which the back end retains or removes content whose color is the same as or similar to the marked color. Because the retained and removed content is determined by color similarity, the matting result often differs significantly from what the user expects; in practical applications a satisfactory result requires many rounds of interaction with the user, making the operation cumbersome.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image processing method and apparatus, which can select an image processing range in accordance with a user action to achieve a fine and controllable image processing result.
To achieve the above object, according to one aspect of the present invention, there is provided an image processing method.
The image processing method of the embodiment of the invention is applied to a browser; the method comprises the following steps: displaying an image to be processed, and determining the current processing purpose and processing range selection mode for the image to be processed, wherein the processing purpose comprises removal or retention; after monitoring a first mouse event occurring on the image to be processed, recording the coordinates of the mouse pointer at each position over the image; stopping the recording when a second mouse event occurring on the image to be processed is monitored; and determining the processing area of the original image corresponding to the image to be processed by using the recorded coordinates and the processing range selection mode, and sending the determined processing purpose information and processing area information to a back-end server.
Optionally, the method further comprises: before displaying the image to be processed, determining a scaling adopted for displaying the image to be processed according to the size of an original image of the image to be processed and the size of a current display area; before monitoring a first mouse event occurring at the image to be processed, performing at least one of the following image transformation operations on the image to be processed in response to an external action: translation, rotation, zooming.
Optionally, the processing range selection mode includes a mouse track selection mode; and determining the processing area of the original image corresponding to the image to be processed by using the recorded coordinates and the processing range selection mode comprises: performing an affine transformation operation on the recorded coordinates by using the inverse transformation parameters of the image transformation operations executed when displaying the image to be processed and in response to external actions, so as to obtain the transformed coordinates of the recorded coordinates in the original image of the image to be processed; and determining the area formed by the transformed coordinates as the processing area of the original image corresponding to the image to be processed.
Optionally, the method further comprises: after the first mouse event is monitored and before the second mouse event is monitored, drawing the movement track of the mouse pointer by using the recorded coordinates of the mouse pointer at each position of the image to be processed; and after the movement track is drawn, if an image transformation operation is performed on the image to be processed in response to an external action, performing an affine transformation operation on the movement track according to the transformation parameters of the image transformation operation, and updating the movement track with the operation result.
Optionally, the processing range selection mode includes a rectangular selection mode; and determining the processing area of the original image corresponding to the image to be processed by using the recorded coordinates and the processing range selection mode comprises: determining the base point coordinates and side lengths of the rectangular selection frame according to the recorded coordinates; performing an affine transformation operation on the base point coordinates and side lengths of the rectangular selection frame by using the inverse transformation parameters of the image transformation operations executed when displaying the image to be processed and in response to external actions, so as to obtain the base point coordinates and side lengths of the rectangular selection frame in the original image of the image to be processed; and taking the area determined by the obtained base point coordinates and side lengths as the processing area of the original image corresponding to the image to be processed.
Optionally, the method further comprises: after the first mouse event is monitored and before the second mouse event is monitored, drawing a rectangular selection frame by using the base point coordinates and side lengths determined from the recorded coordinates; and after the rectangular selection frame is drawn, if an image transformation operation is performed on the image to be processed in response to an external action, performing an affine transformation operation on the rectangular selection frame according to the transformation parameters of the image transformation operation, and updating the rectangular selection frame with the operation result.
Optionally, the first mouse event is a mouse down event, and the second mouse event is a mouse up event.
To achieve the above object, according to another aspect of the present invention, there is provided an image processing apparatus.
The image processing device provided by the embodiment of the invention is applied to a browser, and the device may comprise a preparation unit, a recording unit and an operation unit. The preparation unit may be used for displaying the image to be processed and determining the current processing purpose and processing range selection mode for the image to be processed, wherein the processing purpose comprises removal or retention. The recording unit may be used for: recording the coordinates of the mouse pointer at each position of the image to be processed after monitoring a first mouse event occurring on the image to be processed; and stopping the recording when a second mouse event occurring on the image to be processed is monitored. The operation unit may be used for determining the processing area of the original image corresponding to the image to be processed by using the recorded coordinates and the processing range selection mode, and sending the determined processing purpose information and processing area information to the back-end server.
Optionally, the apparatus may further comprise a scaling calculation unit and a transformation unit; wherein the scaling calculation unit is operable to: before displaying the image to be processed, determining a scaling adopted for displaying the image to be processed according to the size of an original image of the image to be processed and the size of a current display area; the transform unit may be operable to: before monitoring a first mouse event occurring at the image to be processed, performing at least one of the following image transformation operations on the image to be processed in response to an external action: translation, rotation, zooming.
Optionally, the processing range selection mode includes a mouse track selection mode, and the operation unit may be further configured to: perform an affine transformation operation on the recorded coordinates by using the inverse transformation parameters of the image transformation operations executed when displaying the image to be processed and in response to external actions, so as to obtain the transformed coordinates of the recorded coordinates in the original image of the image to be processed; and determine the area formed by the transformed coordinates as the processing area of the original image corresponding to the image to be processed.
Optionally, the apparatus may further comprise a first rendering unit operable to: after the first mouse event is monitored and before the second mouse event is monitored, draw the movement track of the mouse pointer by using the recorded coordinates of the mouse pointer at each position of the image to be processed; and after the movement track is drawn, if an image transformation operation is performed on the image to be processed in response to an external action, perform an affine transformation operation on the movement track according to the transformation parameters of the image transformation operation, and update the movement track with the operation result.
Optionally, the processing range selection mode includes a rectangular selection mode, and the operation unit may be further configured to: determine the base point coordinates and side lengths of the rectangular selection frame according to the recorded coordinates; perform an affine transformation operation on the base point coordinates and side lengths of the rectangular selection frame by using the inverse transformation parameters of the image transformation operations executed when displaying the image to be processed and in response to external actions, so as to obtain the base point coordinates and side lengths of the rectangular selection frame in the original image of the image to be processed; and take the area determined by the obtained base point coordinates and side lengths as the processing area of the original image corresponding to the image to be processed.
Optionally, the apparatus may further comprise a second rendering unit operable to: after the first mouse event is monitored and before the second mouse event is monitored, draw a rectangular selection frame by using the base point coordinates and side lengths determined from the recorded coordinates; and after the rectangular selection frame is drawn, if an image transformation operation is performed on the image to be processed in response to an external action, perform an affine transformation operation on the rectangular selection frame according to the transformation parameters of the image transformation operation, and update the rectangular selection frame with the operation result.
Optionally, the first mouse event is a mouse down event, and the second mouse event is a mouse up event.
To achieve the above object, according to still another aspect of the present invention, there is provided an electronic apparatus.
An electronic device of the present invention includes: one or more processors; a storage device, configured to store one or more programs, which when executed by the one or more processors, cause the one or more processors to implement the image processing method provided by the present invention.
To achieve the above object, according to still another aspect of the present invention, there is provided a computer-readable storage medium.
A computer-readable storage medium of the present invention has stored thereon a computer program which, when executed by a processor, implements the image processing method provided by the present invention.
According to the technical scheme of the invention, one embodiment of the invention has the following advantages or beneficial effects:
first, the present invention provides two image processing range selection modes, the mouse track selection mode and the rectangular selection mode, which are suitable for selecting smaller and larger areas respectively. In each mode, the coordinates of the mouse pointer are recorded in real time so that the selection follows the user's action; the processing area information of the original image corresponding to the track marked by the user is sent to a back-end server, and the back-end server obtains an image processing result according to the processing area information and returns it to the user. In this way, customized image processing can be performed in a fine and controllable manner, achieving a 'what you see is what you get' effect.
Secondly, to make it easier for the user to mark the image processing range, the invention can execute various image transformation operations such as translation, rotation and scaling at any time according to user instructions (taking scaling as an example, the user often needs to enlarge the image when marking a small area and to shrink it when marking a large area), and after an image transformation operation is executed, the track already marked by the user is updated using the affine transformation principle. In addition, after the user finishes marking the track, the track can be transformed into the space of the original image according to the previously executed image transformation operations, ensuring that the back end obtains an accurate marking range.
Further effects of the above optional implementations will be described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of the main steps of an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the components of an image processing apparatus according to an embodiment of the present invention;
FIG. 3 is an exemplary system architecture diagram to which embodiments of the present invention may be applied;
fig. 4 is a schematic structural diagram of an electronic device for implementing the image processing method in the embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the embodiments of the present invention and the technical features of the embodiments may be combined with each other without conflict.
Fig. 1 is a schematic diagram of main steps of an image processing method according to an embodiment of the present invention.
As shown in fig. 1, the image processing method according to the embodiment of the present invention may be specifically executed according to the following steps:
step S101: and displaying the image to be processed, and determining the processing purpose and the processing range selection mode aiming at the image to be processed currently.
In the embodiment of the invention, the image processing steps can be executed at the browser end by using the Canvas element of HTML5 (HyperText Markup Language 5) together with corresponding JavaScript (JS) code. This arrangement reduces the response time of user operations and improves the user experience.
In this step, after receiving the image to be processed from the user, the browser end may obtain the width and height of the original image through the onload event of an img tag, and obtain the width and height of the current display area through pre-written JS code. On the premise of keeping the aspect ratio of the image unchanged, the scaling ratio that gives the image to be processed the largest display area is computed, and the image is displayed at that ratio. Generally, the display of the image can be realized with the drawImage method of Canvas.
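As an illustration of the scaling step described above, the ratio can be computed as follows (the function name and signature are illustrative, not taken from the patent):

```javascript
// Given the original image size and the display area size, pick the
// largest uniform scale at which the whole image still fits, so the
// aspect ratio is preserved and the display area is maximized.
function computeScale(imgWidth, imgHeight, areaWidth, areaHeight) {
  // The binding (more constrained) dimension determines the scale.
  return Math.min(areaWidth / imgWidth, areaHeight / imgHeight);
}
```

The resulting ratio would then be passed to drawImage to render the image at `imgWidth * scale` by `imgHeight * scale`.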
After the image to be processed is displayed, the browser end can determine the processing purpose and the processing range selection mode for the image according to an external action (such as a user action). The processing purpose may be to remove content from the image or to retain content in the image, and the processing range selection mode may be a mouse track selection mode or a rectangular selection mode. In practical application, the user can choose the processing purpose by clicking a button corresponding to removal or retention, and after the processing purpose is determined, choose the processing range selection mode by clicking a button corresponding to the mouse track selection mode or the rectangular selection mode. It should be noted that the above description does not limit the user actions or processing range selection modes that may implement these functions.
In particular, the image processing method of the present invention is applicable to two scenarios. In the first scenario, the image to be processed has not been processed before, i.e., it is identical to its original image. In this case, the browser only loads and displays the image to be processed, and the content of the retained area marked by the user is not removed in the subsequent image processing. In the second scenario, the image to be processed has been processed one or more times, and part of its content has already been removed relative to the original image. Because the user may need to restore removed content by comparing against the original image, the image to be processed and the original image may be displayed simultaneously in this case; the original content in the retained area marked by the user is then not removed in the subsequent processing, and previously removed content in that area is restored.
Step S102: recording the coordinates of a mouse pointer at each position of the image to be processed after monitoring a first mouse event occurring in the image to be processed; and stopping the recording when a second mouse event occurring in the image to be processed is monitored.
After determining the processing purpose and the processing range selection mode, the user can mark the image processing range by performing preset mouse actions. The browser end treats the moment when a preset first mouse event occurring on the image to be processed is monitored as the start of marking, treats the moment when a preset second mouse event occurring on the image is monitored as the end of marking, and represents the marked range by the track of the mouse pointer between these two moments. In a specific application, the first and second mouse events can be set flexibly according to the application environment. For example, the first mouse event may be set as a mousedown event and the second mouse event as a mouseup event, or both may be set as mouse click events.
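The recording logic described above can be sketched as a small state machine that is fed mouse events. The class and method names here are illustrative, not from the patent; in a browser, the three methods would be wired to the Canvas element's mousedown, mousemove and mouseup listeners:

```javascript
// Records the mouse-pointer track between a first and a second mouse event.
class TrackRecorder {
  constructor() {
    this.recording = false;
    this.points = []; // [x, y] pairs in display coordinates
  }
  onFirstEvent() {        // e.g. mousedown: start of marking
    this.recording = true;
    this.points = [];
  }
  onMove(x, y) {          // mousemove: record only while marking
    if (this.recording) this.points.push([x, y]);
  }
  onSecondEvent() {       // e.g. mouseup: end of marking
    this.recording = false;
  }
}
```

The recorded `points` array is what the later steps transform into the original image's coordinate space.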
In this step, after monitoring the first mouse event on the image to be processed, the browser end records the real-time coordinates of the mouse pointer at each position over the image using pre-written JavaScript code. If the processing range selection mode chosen in step S101 is the mouse track selection mode, the movement track of the mouse pointer is drawn with the drawLine method of Canvas, and the area the track passes through, or the closed area it forms, is the image processing range marked by the user. Here the mouse pointer may act as a point or as a shape that sweeps out an area as it moves (for example, the brush tool in common drawing software forms an area as the pointer moves).
If the processing range selection mode chosen in step S101 is the rectangular selection mode, the base point coordinates (the preset rectangular reference point, such as the top-left vertex) and the width and height of the rectangular selection frame are first calculated from the recorded coordinates of the mouse pointer, and the rectangular selection frame is then drawn with the rect method of Canvas; the area enclosed by the rectangular selection frame is the image processing range marked by the user. As a preferred scheme, after the rectangular selection frame is created, it can be rotated by any angle in response to a user action, which can be realized by applying a rotation transformation to the coordinates of each vertex of the original selection frame. It is understood that drawing the rectangle can be implemented with Canvas directly, or with Konva (a JS framework library based on Canvas).
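One way the base point and side lengths might be derived from a drag gesture, sketched here with an illustrative helper (not named in the patent): the first and last recorded coordinates are the opposite corners of the drag, and the base point is the top-left corner regardless of drag direction.

```javascript
// Compute the rectangular selection frame from the start and end
// coordinates of a drag. Works whether the user drags left-to-right,
// right-to-left, upward or downward.
function rectFromDrag(x0, y0, x1, y1) {
  return {
    x: Math.min(x0, x1),      // base-point x (top-left)
    y: Math.min(y0, y1),      // base-point y (top-left)
    width: Math.abs(x1 - x0),
    height: Math.abs(y1 - y0),
  };
}
```

The returned object maps directly onto the arguments of the Canvas rect method.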
In practical applications, for convenience of marking, users often perform image transformation operations such as translation, scaling and rotation on the image to be processed, so these transformation functions need to be implemented. In the embodiment of the present invention, in response to a user action (for example, clicking a zoom-in or zoom-out button), the browser end first determines the zoom center coordinates (for example, the center point of the image to be processed), then performs the scaling with the scale method of Canvas, and finally redraws the image with the drawImage method.
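The geometry of zooming about a chosen center can be sketched as follows (an illustrative helper, not part of the Canvas API): each coordinate moves away from the center when the factor is greater than 1 and toward it when the factor is less than 1, which is the coordinate-level effect the scale call produces.

```javascript
// Map a display coordinate (x, y) under a zoom of the given factor
// about the center (cx, cy).
function zoomPoint(x, y, cx, cy, factor) {
  return [cx + (x - cx) * factor, cy + (y - cy) * factor];
}
```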
In one embodiment, in response to a user action (e.g., clicking a rotate button), the browser end determines the rotation center coordinates and rotates the image with the rotate method of Canvas. In response to an image dragging action performed by the user, the browser end obtains the coordinates of each position the mouse pointer moves through, calculates the differences between them, and translates the image with the translate method of Canvas.
As a preferred scheme, after an image transformation operation such as translation, scaling or rotation is performed on the image to be processed, the image processing range marked before the transformation can be updated as follows. If the processing range selection mode chosen in step S101 is the mouse track selection mode, an affine transformation operation is performed on the movement track of the mouse pointer according to the transformation parameters of the image transformation operation to obtain the transformed track coordinates, and the track is then redrawn with the drawLine method of Canvas while the original track is removed. Since affine transformations (including translation, scaling and rotation) are a known technique, they are not repeated here. If the processing range selection mode chosen in step S101 is the rectangular selection mode, an affine transformation operation is performed on the drawn rectangular selection frame according to the transformation parameters of the image transformation operation to obtain the base point coordinates and the width and height of the transformed selection frame, and the selection frame is then redrawn with the rect method of Canvas while the original one is removed.
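The affine update of an already-marked track can be sketched as below. The parameter names (dx, dy, scale, angle, cx, cy) and the choice of applying rotation and scaling about a center before translating are illustrative assumptions, not specified by the patent; the point is that the same parameters applied to the image are applied to every recorded coordinate before the track is redrawn.

```javascript
// Apply an affine transform (rotate and scale about (cx, cy), then
// translate by (dx, dy)) to each recorded track point.
function transformTrack(points, { dx = 0, dy = 0, scale = 1, angle = 0, cx = 0, cy = 0 } = {}) {
  const cos = Math.cos(angle), sin = Math.sin(angle);
  return points.map(([x, y]) => {
    const px = x - cx, py = y - cy;           // move into center-relative frame
    const rx = px * cos - py * sin;           // standard 2D rotation
    const ry = px * sin + py * cos;
    return [cx + rx * scale + dx, cy + ry * scale + dy];
  });
}
```

Reversing such a transform (negated angle and offsets, reciprocal scale, applied in reverse order) yields the inverse transform used in step S103.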
Step S103: determining the processing area of the original image corresponding to the image to be processed by using the recorded coordinates and the processing range selection mode, and sending the determined processing purpose information and processing area information to a back-end server.
In this step, the browser may send the user's mark obtained in step S102 to the back-end server, which retains or removes image content according to the mark. Specifically, if the processing range selection mode chosen in step S101 is the mouse track selection mode, an affine transformation operation may be performed on the recorded coordinates (i.e., the coordinates of each position in the user's mark) using the inverse transformation parameters of the image transformation operations executed when displaying the image to be processed and in response to external actions (before the mark was created), so as to obtain the transformed coordinates of the recorded coordinates in the original image. The area formed by the transformed coordinates is then determined as the processing area of the original image corresponding to the image to be processed, and finally this processing area information, together with the previously determined processing purpose information, is sent to the back-end server. The inverse transformation parameters of an image transformation operation are the transformation factors of its inverse; since the computation of affine transformations and the conversion between an affine transformation and its inverse are known, they are not repeated here. It will be appreciated that applying the one or more inverse transformations to the user's mark transforms it into the space of the original image of the image to be processed, which facilitates processing of the original image by the back-end server.
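A minimal sketch of this inverse mapping, under the simplifying assumption that the display coordinates were produced from original-image coordinates by a uniform scale followed by a translation (rotation omitted for brevity; the helper name and parameter object are illustrative):

```javascript
// Invert display = original * scale + (dx, dy): undo the translation
// first, then the scaling, i.e. apply the inverses in reverse order.
function displayToOriginal(x, y, { scale, dx, dy }) {
  return [(x - dx) / scale, (y - dy) / scale];
}
```

Every recorded mark coordinate would be passed through this mapping before the processing area is sent to the back-end server.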
If the processing range selection mode chosen in step S101 is the rectangular selection mode, an affine transformation operation may first be performed on the base point coordinates and side lengths of the rectangular selection frame using the inverse transformation parameters of the image transformation operations executed when displaying the image to be processed and in response to external actions, so as to obtain the base point coordinates and side lengths of the selection frame in the original image. The area determined by the obtained base point coordinates and side lengths is then used as the processing area of the original image corresponding to the image to be processed, and finally this processing area information, together with the previously determined processing purpose information, is sent to the back-end server.
After receiving the processing area information and the processing purpose information, the back-end server removes the image content the user wants removed, retains and/or restores the image content the user wants retained, and finally returns the processed image to the browser end for display to the user. The processing the user expects is thus carried out faithfully, achieving a 'what you see is what you get' effect.
In the technical scheme of the embodiment of the invention, two image processing range selection modes are provided, the mouse track selection mode and the rectangular selection mode, which are suitable for selecting smaller and larger areas respectively. In each mode, the coordinates of the mouse pointer are recorded in real time so that the selection follows the user's action; the processing area information of the original image corresponding to the track marked by the user is sent to a back-end server, and the back-end server obtains an image processing result according to the processing area information and returns it to the user. In this way, customized image processing can be performed in a fine and controllable manner, achieving a 'what you see is what you get' effect. Meanwhile, to make it easier for the user to mark the image processing range, the invention can execute various image transformation operations such as translation, rotation and scaling at any time according to user instructions, and after an image transformation operation is executed, the track already marked by the user is updated using the affine transformation principle. In addition, after the user finishes marking the track, the track can be transformed into the space of the original image according to the previously executed image transformation operations, ensuring that the back end obtains an accurate marking range.
It should be noted that, for convenience of description, the foregoing method embodiments are described as a series of acts; those skilled in the art will appreciate, however, that the present invention is not limited by the order of acts described, and some steps may in fact be performed in other orders or concurrently. Moreover, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and modules involved are not all necessarily required by the invention.
To facilitate a better implementation of the above-described aspects of embodiments of the present invention, the following also provides relevant means for implementing the above-described aspects.
Referring to fig. 2, an image processing apparatus 200 according to an embodiment of the present invention is applied to a browser, and the apparatus 200 may include a preparation unit 201, a recording unit 202, and an operation unit 203.
The preparation unit 201 may be configured to display an image to be processed, and to determine a processing purpose and a processing range selection mode for the current image to be processed, where the processing purpose comprises removal or retention. The recording unit 202 may be configured to: record the coordinates of the mouse pointer at each position of the image to be processed after a first mouse event occurring on the image to be processed is detected; and stop the recording when a second mouse event occurring on the image to be processed is detected. The operation unit 203 may be configured to determine a processing region of the original image corresponding to the image to be processed using the recorded coordinates and the processing range selection mode, and to send the determined processing purpose information and processing region information to the back-end server.
In the embodiment of the present invention, the apparatus 200 may further include a scaling calculation unit and a transformation unit. Wherein the scaling calculation unit is operable to: before the image to be processed is displayed, the scaling adopted for displaying the image to be processed is determined according to the size of the original image of the image to be processed and the size of the current display area. The transform unit may be operable to: before monitoring a first mouse event occurring at the image to be processed, performing at least one of the following image transformation operations on the image to be processed in response to an external action: translation, rotation, zooming.
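The scaling calculation described above (determining the display scale from the original image size and the current display area size) is not given a formula in the text. A minimal sketch, assuming an aspect-preserving "fit" policy that never enlarges past native size (both assumptions, with illustrative names), might look like:

```javascript
// Choose a display scale so the original image fits the display area while
// preserving aspect ratio; cap at 1 so the image is never enlarged.
function fitScale(imageWidth, imageHeight, areaWidth, areaHeight) {
  return Math.min(areaWidth / imageWidth, areaHeight / imageHeight, 1);
}
```

For example, a 2000×1000 original in a 1000×1000 display area yields a scale of 0.5, while a 100×100 original in the same area is shown at its native size (scale 1).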
In a specific application, the processing range selection mode includes a mouse track selection mode, and the operation unit 203 may be further configured to: perform an affine transformation on the recorded coordinates using the inverse transformation parameters of the image transformation operations performed in response to external actions while displaying the image to be processed, obtaining the transformed coordinates of the recorded coordinates in the original image of the image to be processed; and determine the region formed by the transformed coordinates as the processing region of the original image corresponding to the image to be processed.
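As an illustration of this inverse mapping, the sketch below inverts a canvas-style 2×3 affine matrix [[a, c, e], [b, d, f]] and applies it to every recorded track point. The matrix layout and function names are assumptions; the patent only states that the inverse transformation parameters of the earlier display transforms are used:

```javascript
// Invert an affine transform {a, b, c, d, e, f}, where the forward transform
// is x' = a*x + c*y + e, y' = b*x + d*y + f (canvas setTransform order).
function invertAffine({ a, b, c, d, e, f }) {
  const det = a * d - b * c;
  if (det === 0) throw new Error('transform is not invertible');
  return {
    a: d / det,
    b: -b / det,
    c: -c / det,
    d: a / det,
    e: (c * f - d * e) / det,
    f: (b * e - a * f) / det,
  };
}

// Map recorded display-space pointer coordinates back to original-image space.
function trackToOriginal(points, transform) {
  const inv = invertAffine(transform);
  return points.map(({ x, y }) => ({
    x: inv.a * x + inv.c * y + inv.e,
    y: inv.b * x + inv.d * y + inv.f,
  }));
}
```

With a display transform of scale 2 plus a (10, 20) translation, a recorded point (110, 60) maps back to (50, 20) in the original image.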
In one embodiment, the apparatus 200 may further comprise a first rendering unit operable to: after the first mouse event is detected and before the second mouse event is detected, draw the movement track of the mouse pointer using the recorded coordinates of the mouse pointer at each position of the image to be processed; and, after the movement track has been drawn, if an image transformation operation is performed on the image to be processed in response to an external action, apply an affine transformation to the movement track according to the transformation parameters of that image transformation operation and update the movement track with the result.
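One way to realize this redraw-after-transform step is to replay the recorded track under the current display transform via `ctx.setTransform`, as sketched below. This is an assumption about the drawing API (the patent does not prescribe one), and the names are illustrative:

```javascript
// Redraw the recorded pointer track on a 2D canvas context under the current
// display transform {a, b, c, d, e, f}. After a pan/zoom/rotate, calling this
// again with the new transform updates the drawn track to match.
function drawTrack(ctx, points, transform) {
  const { a, b, c, d, e, f } = transform;
  ctx.setTransform(a, b, c, d, e, f); // apply the display transform
  ctx.beginPath();
  points.forEach(({ x, y }, i) => (i === 0 ? ctx.moveTo(x, y) : ctx.lineTo(x, y)));
  ctx.stroke();
}
```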
In practical applications, the processing range selection mode includes a rectangular selection mode, and the operation unit 203 may be further configured to: determine the base point coordinates and side lengths of the rectangular selection frame from the recorded coordinates; perform an affine transformation on the base point coordinates and side lengths of the rectangular selection frame using the inverse transformation parameters of the image transformation operations performed in response to external actions while displaying the image to be processed, obtaining the base point coordinates and side lengths of the rectangular selection frame in the original image of the image to be processed; and use the region determined by the obtained base point coordinates and side lengths as the processing region of the original image corresponding to the image to be processed.
As a preferred aspect, the apparatus 200 may further comprise a second rendering unit operable to: after the first mouse event is detected and before the second mouse event is detected, draw the rectangular selection frame using the base point coordinates and side lengths determined from the recorded coordinates; and, after the rectangular selection frame has been drawn, if an image transformation operation is performed on the image to be processed in response to an external action, apply an affine transformation to the rectangular selection frame according to the transformation parameters of that image transformation operation and update the rectangular selection frame with the result.
In addition, in the embodiment of the present invention, the first mouse event is a mouse down event, and the second mouse event is a mouse up event.
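The event wiring implied here — start recording on mousedown, accumulate on mousemove, stop on mouseup — can be sketched as below. The `canvas` element, the use of `offsetX`/`offsetY`, and the callback name are all illustrative; the patent only names the down/up events as the recording boundaries:

```javascript
// Record pointer coordinates between a mousedown and the following mouseup,
// then hand the completed track to a callback.
function attachRecorder(canvas, onDone) {
  let points = null; // null while not recording
  canvas.addEventListener('mousedown', (ev) => {
    points = [{ x: ev.offsetX, y: ev.offsetY }];
  });
  canvas.addEventListener('mousemove', (ev) => {
    if (points) points.push({ x: ev.offsetX, y: ev.offsetY });
  });
  canvas.addEventListener('mouseup', () => {
    if (points) onDone(points);
    points = null; // stop recording
  });
}
```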
Fig. 3 shows an exemplary system architecture 300 to which the image processing method or the image processing apparatus of the embodiment of the present invention can be applied.
As shown in fig. 3, the system architecture 300 may include terminal devices 301, 302, 303, a network 304 and a server 305 (this architecture is merely an example, and the components included in a particular architecture may be adapted to application-specific circumstances). The network 304 serves as a medium for providing communication links between the terminal devices 301, 302, 303 and the server 305. Network 304 may include various connection types, such as wired or wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal device 301, 302, 303 to interact with the server 305 via the network 304 to receive or send messages or the like. Various client applications, such as an image processing application (for example only), may be installed on the terminal devices 301, 302, 303.
The terminal devices 301, 302, 303 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 305 may be a server providing various services, such as a back-end server (by way of example only) supporting image processing applications operated by users of the terminal devices 301, 302, 303. The back-end server may process a received image processing request and feed the processing result (e.g., a processed image, by way of example only) back to the terminal devices 301, 302, 303.
It should be noted that the image processing method provided by the embodiment of the present invention is generally executed by the terminal devices 301, 302, 303, and accordingly, the image processing apparatus is generally disposed in the terminal devices 301, 302, 303.
It should be understood that the number of terminal devices, networks, and servers in fig. 3 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The invention also provides an electronic device. The electronic device of the embodiment of the invention comprises: one or more processors; and a storage device configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image processing method provided by the present invention.
Referring now to FIG. 4, a block diagram of a computer system 400 suitable for implementing the electronic device of an embodiment of the invention is shown. The electronic device shown in fig. 4 is only an example, and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in fig. 4, the computer system 400 includes a Central Processing Unit (CPU) 401 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage section 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the computer system 400 are also stored. The CPU 401, ROM 402, and RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
The following components are connected to the I/O interface 405: an input section 406 including a keyboard, a mouse, and the like; an output section 407 including a display device such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 408 including a hard disk and the like; and a communication section 409 including a network interface card such as a LAN card or a modem. The communication section 409 performs communication processing via a network such as the internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 410 as necessary, so that a computer program read from it can be installed into the storage section 408 as needed.
In particular, the processes described in the main step diagrams above may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the invention include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the main step diagram. In the above-described embodiment, the computer program can be downloaded and installed from a network through the communication section 409, and/or installed from the removable medium 411. The computer program performs the above-described functions defined in the system of the present invention when executed by the central processing unit 401.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor comprising a preparation unit, a recording unit, and an operation unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the preparation unit may also be described as "a unit that provides processing purpose information and a processing range selection mode to the operation unit".
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to perform steps comprising: displaying an image to be processed, and determining a processing purpose and a processing range selection mode for the current image to be processed, wherein the processing purpose comprises removal or retention; recording the coordinates of the mouse pointer at each position of the image to be processed after a first mouse event occurring on the image to be processed is detected; stopping the recording when a second mouse event occurring on the image to be processed is detected; and determining a processing region of the original image corresponding to the image to be processed using the recorded coordinates and the processing range selection mode, and sending the determined processing purpose information and processing region information to a back-end server.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An image processing method applied to a browser, characterized in that the method comprises:
displaying an image to be processed, and determining a processing purpose and a processing range selection mode for the current image to be processed; wherein the processing purpose comprises removal or retention;
recording the coordinates of a mouse pointer at each position of the image to be processed after monitoring a first mouse event occurring in the image to be processed; stopping the recording when a second mouse event occurring in the image to be processed is monitored; and
determining a processing area of the original image corresponding to the image to be processed by using the recorded coordinates and the processing range selection mode, and sending the determined processing purpose information and processing area information to a back-end server.
2. The method of claim 1, further comprising:
before displaying the image to be processed, determining a scaling adopted for displaying the image to be processed according to the size of an original image of the image to be processed and the size of a current display area;
before monitoring a first mouse event occurring at the image to be processed, performing at least one of the following image transformation operations on the image to be processed in response to an external action: translation, rotation, zooming.
3. The method of claim 2, wherein the processing range selection manner comprises a mouse track selection manner; and the determining the processing area of the original image corresponding to the image to be processed by using the recorded coordinates and the processing range selection mode comprises:
performing an affine transformation operation on the recorded coordinates by using inverse transformation parameters of the image transformation operations performed in response to external actions while displaying the image to be processed, to obtain transformed coordinates of the recorded coordinates in the original image of the image to be processed;
and determining the area formed by the transformed coordinates as a processing area of an original image corresponding to the image to be processed.
4. The method of claim 3, further comprising:
after the first mouse event is monitored and before the second mouse event is monitored, drawing a moving track of a mouse pointer by using the recorded coordinates of the mouse pointer at each position of the image to be processed;
after the drawing of the moving track is finished, if an image transformation operation is performed on the image to be processed in response to the external action, performing an affine transformation operation on the moving track according to transformation parameters of the image transformation operation, and updating the moving track with the operation result.
5. The method of claim 2, wherein the processing range selection manner comprises a rectangular selection manner; and the determining the processing area of the original image corresponding to the image to be processed by using the recorded coordinates and the processing range selection mode comprises:
determining the base point coordinates and side lengths of the rectangular selection frame according to the recorded coordinates;
performing an affine transformation operation on the base point coordinates and side lengths of the rectangular selection frame by using inverse transformation parameters of the image transformation operations performed in response to external actions while displaying the image to be processed, to obtain the base point coordinates and side lengths of the rectangular selection frame in the original image of the image to be processed; and
using the region determined by the obtained base point coordinates and side lengths as a processing area of the original image corresponding to the image to be processed.
6. The method of claim 5, further comprising:
after the first mouse event is monitored and before the second mouse event is monitored, drawing a rectangular selection frame by using the coordinates of a base point and the side length determined by the recorded coordinates;
after the rectangular selection frame is drawn, if an image transformation operation is performed on the image to be processed in response to the external action, performing an affine transformation operation on the rectangular selection frame according to the transformation parameters of the image transformation operation, and updating the rectangular selection frame with the operation result.
7. The method of any of claims 1-6, wherein the first mouse event is a mouse down event and the second mouse event is a mouse up event.
8. An image processing apparatus applied to a browser, characterized in that the apparatus comprises:
a preparation unit, configured to display an image to be processed and to determine a processing purpose and a processing range selection mode for the current image to be processed; wherein the processing purpose comprises removal or retention;
a recording unit configured to: recording the coordinates of a mouse pointer at each position of the image to be processed after monitoring a first mouse event occurring in the image to be processed; stopping the recording when a second mouse event occurring in the image to be processed is monitored; and
an operation unit, configured to determine a processing area of the original image corresponding to the image to be processed by using the recorded coordinates and the processing range selection mode, and to send the determined processing purpose information and processing area information to the back-end server.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN201910809131.7A 2019-08-29 2019-08-29 Image processing method and device Pending CN112446936A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910809131.7A CN112446936A (en) 2019-08-29 2019-08-29 Image processing method and device


Publications (1)

Publication Number Publication Date
CN112446936A true CN112446936A (en) 2021-03-05

Family

ID=74742145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910809131.7A Pending CN112446936A (en) 2019-08-29 2019-08-29 Image processing method and device

Country Status (1)

Country Link
CN (1) CN112446936A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114779975A (en) * 2022-03-31 2022-07-22 北京至简墨奇科技有限公司 Processing method and device for finger and palm print image viewing interface and electronic system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101075172A (en) * 2006-08-23 2007-11-21 腾讯科技(深圳)有限公司 Method for capturing picture, capturer and instant-telecommunication customer terminal
US20140063073A1 (en) * 2012-08-28 2014-03-06 Hon Hai Precision Industry Co., Ltd. Electronic device and method for controlling movement of images on screen
WO2017206400A1 (en) * 2016-05-30 2017-12-07 乐视控股(北京)有限公司 Image processing method, apparatus, and electronic device
WO2018141232A1 (en) * 2017-02-06 2018-08-09 腾讯科技(深圳)有限公司 Image processing method, computer storage medium, and computer device
CN109634703A (en) * 2018-12-13 2019-04-16 北京旷视科技有限公司 Image processing method, device, system and storage medium based on canvas label

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Yaoqin: "HTML5 Canvas Drawing Technology and Its Application in Image Cropping", Journal of Luoyang Normal University, vol. 35, no. 11, pages 41 - 45 *
LI Youshi: "A Brief Discussion of 'Image Matting' Methods and Techniques in Photoshop", Modern Vocational Education, no. 17 *

Similar Documents

Publication Publication Date Title
CN109634598B (en) Page display method, device, equipment and storage medium
CN110784754A (en) Video display method and device and electronic equipment
JP2024505995A (en) Special effects exhibition methods, devices, equipment and media
CN109669617B (en) Method and device for switching pages
CN111629252A (en) Video processing method and device, electronic equipment and computer readable storage medium
CN109472852B (en) Point cloud image display method and device, equipment and storage medium
CN110310299B (en) Method and apparatus for training optical flow network, and method and apparatus for processing image
CN111784712B (en) Image processing method, device, equipment and computer readable medium
CN110825286B (en) Image processing method and device and electronic equipment
CN109118456B (en) Image processing method and device
KR20210058768A (en) Method and device for labeling objects
CN111951356B (en) Animation rendering method based on JSON data format
US9646362B2 (en) Algorithm for improved zooming in data visualization components
CN111382386A (en) Method and equipment for generating webpage screenshot
US9501812B2 (en) Map performance by dynamically reducing map detail
CN110673886B (en) Method and device for generating thermodynamic diagrams
CN110782504A (en) Curve determination method, device, computer readable storage medium and equipment
CN112446936A (en) Image processing method and device
CN112363790A (en) Table view display method and device and electronic equipment
CN110134905B (en) Page update display method, device, equipment and storage medium
CN113282852A (en) Method and device for editing webpage
CN114125485B (en) Image processing method, device, equipment and medium
CN112445394B (en) Screenshot method and screenshot device
CN109190097B (en) Method and apparatus for outputting information
CN113391737A (en) Interface display control method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination