CN112884787A - Image clipping method, image clipping device, readable medium and electronic equipment - Google Patents

Image clipping method, image clipping device, readable medium and electronic equipment

Info

Publication number
CN112884787A
CN112884787A
Authority
CN
China
Prior art keywords: image, target, determining, candidate, nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110118673.7A
Other languages
Chinese (zh)
Other versions
CN112884787B (en)
Inventor
李亚
刘畅
张帆
王长虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202110118673.7A priority Critical patent/CN112884787B/en
Publication of CN112884787A publication Critical patent/CN112884787A/en
Application granted granted Critical
Publication of CN112884787B publication Critical patent/CN112884787B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/181 Segmentation; Edge detection involving edge growing; involving edge linking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20132 Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image cropping method, an image cropping device, a readable medium, and an electronic device, the method including: obtaining, through a pre-trained object detection model, an external detection frame for each object contour in the image to be cropped; taking the external detection frames as nodes in a connected graph and the distances between the external detection frames as the weights of the edges between the nodes, constructing a connected graph corresponding to the objects in the image to be cropped as a first connected graph, and determining a target connected graph from the first connected graph; determining the connected subgraphs of the target connected graph, and determining each connected subgraph that includes a target node as a candidate subgraph; and determining a target cropping region from the candidate subgraphs, and cropping the image to be cropped according to the target cropping region. In this way, the image is cropped based on a connected graph built from the objects it contains, so that the interdependencies among those objects are represented explicitly and the cropping can be adapted intelligently to different cropping requirements.

Description

Image clipping method, image clipping device, readable medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image cropping method, an image cropping device, a readable medium, and an electronic device.
Background
In intelligent picture cropping, manual cropping is not only time-consuming but also inconsistent, since the result depends on the person doing it. To obtain well-cropped pictures reliably, cropping is therefore performed by machine: typically, an object detection model detects as many objects as possible in the picture, and the crop is then computed from the detected objects. Alternatively, the region to be cropped may be predicted directly from the image by a trained model.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides an image cropping method, the method comprising:
obtaining, through a pre-trained object detection model, an external detection frame for each object contour in the image to be cropped, and determining the object type to which each external detection frame belongs;
taking the external detection frames as nodes in a connected graph and the distances between the external detection frames as the weights of the edges between the nodes, constructing a connected graph corresponding to the objects in the image to be cropped as a first connected graph, and determining a target connected graph from the first connected graph;
determining the connected subgraphs of the target connected graph, and determining each connected subgraph that includes a target node as a candidate subgraph, wherein there are one or more candidate subgraphs, and the object class to which the external detection frame corresponding to a target node belongs is a target object class to be retained in the image to be cropped;
and determining a target cropping region from the candidate subgraphs, and cropping the image to be cropped according to the target cropping region.
In a second aspect, the present disclosure provides an image cropping device, the device comprising:
the object detection module, configured to obtain, through a pre-trained object detection model, an external detection frame for each object contour in the image to be cropped, and to determine the object type to which each external detection frame belongs;
the processing module, configured to take the external detection frames as nodes in a connected graph and the distances between the external detection frames as the weights of the edges between the nodes, construct a connected graph corresponding to the objects in the image to be cropped as a first connected graph, and determine a target connected graph from the first connected graph;
the determining module, configured to determine the connected subgraphs of the target connected graph and determine each connected subgraph that includes a target node as a candidate subgraph, wherein there are one or more candidate subgraphs, and the object class to which the external detection frame corresponding to a target node belongs is a target object class to be retained in the image to be cropped;
and the cropping module, configured to determine a target cropping region from the candidate subgraphs and crop the image to be cropped according to the target cropping region.
In a third aspect, the present disclosure provides a computer readable medium having stored thereon a computer program which, when executed by a processing apparatus, performs the steps of the method of the first aspect.
In a fourth aspect, the present disclosure provides an electronic device comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to carry out the steps of the method of the first aspect.
With this technical solution, the objects in the image serve as nodes, a corresponding connected graph is constructed from them, and the image is cropped according to that graph. The interdependencies among the objects in the image are thus represented explicitly, and the cropping can be adapted intelligently to different cropping requirements.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale. In the drawings:
fig. 1 is a flowchart illustrating an image cropping method according to an exemplary embodiment of the present disclosure.
Fig. 2 is a flowchart illustrating an image cropping method according to yet another exemplary embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating an image cropping method according to yet another exemplary embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating an image cropping method according to yet another exemplary embodiment of the present disclosure.
Fig. 5 is a block diagram illustrating a configuration of an image cropping device according to an exemplary embodiment of the present disclosure.
FIG. 6 shows a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Fig. 1 is a flowchart illustrating an image cropping method according to an exemplary embodiment of the present disclosure. As shown in fig. 1, the method includes steps 101 to 104.
In step 101, an external detection frame of each object contour in the image to be cropped is obtained through a pre-trained object detection model, and the object type to which the external detection frame belongs is determined.
The object detection model may be, for example, a deep learning model based on RetinaNet or on YOLO (You Only Look Once), trained in advance on a large amount of training data.
The image to be cropped may be a video frame sampled from a video, an image contained in an article, or any one of a group of pictures. For example, when the cover image of a short video needs to be determined, the frame to serve as the cover can be chosen from the video frames of the short video and used as the image to be cropped, so that the cover image of the short video is obtained by cropping that frame. The present disclosure does not limit how the image to be cropped is determined.
The external detection frame is a detection frame that completely encloses the object contour. It may be a rectangular, circular, elliptical, or other polygonal detection frame, and it may or may not be the minimum enclosing frame.
The object type to which the external detection frame belongs is the type of the object the frame encloses. For example, if the external detection frame is formed from the contour of a piece of text, the object type of that frame is the text category; if the external detection frame is formed from the contour of a salient object detected by the object detection model, its object type is the salient-object category.
When there are multiple object types, they may be detected separately by different object detection models, or all directly by the same object detection model.
In step 102, the circumscribed detection frames are used as nodes in a connected graph, and the distance between the circumscribed detection frames is used as the weight of edges between the nodes, so as to construct a connected graph corresponding to an object in the image to be cropped as a first connected graph, and a target connected graph is determined according to the first connected graph.
In step 103, the connected subgraphs of the target connected graph are determined, and each connected subgraph that includes a target node is determined as a candidate subgraph (one or more candidate subgraphs may result). The object class to which the external detection frame corresponding to a target node belongs is a target object class to be retained in the image to be cropped.
The distance between the external detection frames may be the distance between the central points of the external detection frames.
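As a minimal sketch of this construction, assuming rectangular detection frames given as (x1, y1, x2, y2) corner coordinates and the center-point distance mentioned above as the edge weight, the fully connected graph over the frames could be built as follows (the helper names are illustrative, not from the patent):

```python
import math

def box_center(box):
    # box is (x1, y1, x2, y2): upper-left and lower-right corners
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def build_connected_graph(boxes):
    """Return weighted edges (i, j, distance) over all pairs of detection
    frames, using the Euclidean distance between frame centers as weight."""
    edges = []
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            (ax, ay), (bx, by) = box_center(boxes[i]), box_center(boxes[j])
            edges.append((i, j, math.hypot(bx - ax, by - ay)))
    return edges
```

Each detection frame becomes a node index, so the result is a complete weighted graph ready for the spanning-tree step described later.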
When the target connected graph is determined according to the first connected graph, the first connected graph can be directly determined as the target connected graph, or after operations such as simplification to a certain extent or conversion according to different requirements are performed on the first connected graph, the changed graph is taken as the target connected graph. Specific possible simplifications or transformations are described in the following examples.
The object type to which the external detection frame corresponding to a target node belongs is, equivalently, the object type of the target node itself. There may be one or more target object categories to be retained in the image to be cropped. For example, in one possible application scenario only the face category in the image to be cropped is retained, while in another both the face category and the text category are retained.
Since a connected graph may contain several connected subgraphs, selecting the target nodes makes it possible to pick out, as candidate subgraphs, exactly those subgraphs the user needs, and then to derive the cropping regions of the image to be cropped from them.
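A sketch of this selection, assuming the target connected graph is given as plain edge pairs and each node carries the object class of its detection frame (all names here are illustrative, not from the patent):

```python
def connected_components(n, edges):
    """Split nodes 0..n-1 into connected components given undirected edges."""
    adj = {i: [] for i in range(n)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    seen, components = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], []
        seen.add(start)
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        components.append(comp)
    return components

def candidate_subgraphs(n, edges, node_class, target_classes):
    """Keep only the connected subgraphs containing at least one node whose
    object class is a target class to be retained."""
    return [c for c in connected_components(n, edges)
            if any(node_class[u] in target_classes for u in c)]
```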
In step 104, a target cropping area is determined according to the candidate subgraph, and the image to be cropped is cropped according to the target cropping area.
The number of cropping regions determined from each candidate subgraph may be one or several. After all cropping regions corresponding to all candidate subgraphs have been determined, the most suitable cropping region is selected as the target cropping region, so that the image to be cropped can be cropped according to the user's requirements.
With this technical solution, the objects in the image serve as nodes, a corresponding connected graph is constructed from them, and the image is cropped according to that graph. The interdependencies among the objects in the image are thus represented explicitly, and the cropping can be adapted intelligently to different cropping requirements.
Fig. 2 is a flowchart illustrating an image cropping method according to yet another exemplary embodiment of the present disclosure. As shown in fig. 2, the method further includes steps 201 to 205.
In step 201, the circumscribed detection frames are used as nodes in the connected graph, and the distance between the circumscribed detection frames is used as the weight of the edge between the nodes, so as to construct a connected graph corresponding to the object in the image to be cropped as a first connected graph.
In step 202, the smallest spanning tree in the first connectivity graph is searched as a second connectivity graph.
In step 203, the target connectivity map is determined from the second connectivity map.
Searching for the minimum spanning tree of the first connected graph simplifies the relationships between its nodes to the simplest state: the number of combinations between nodes is reduced while every node remains connected to the others. In other words, the number of combinations between the objects in the image to be cropped, and hence the number of possible cropping schemes, is reduced, which cuts down computation when the image contains many objects.
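The text does not specify how the minimum spanning tree is found; Kruskal's algorithm with a union-find structure is one standard way to implement it, sketched here over the same (i, j, weight) edge representation:

```python
def minimum_spanning_tree(n, weighted_edges):
    """Kruskal's algorithm: return the MST edges of an undirected graph
    with nodes 0..n-1 and weighted_edges given as (i, j, weight) triples."""
    parent = list(range(n))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    mst = []
    for i, j, w in sorted(weighted_edges, key=lambda e: e[2]):
        ri, rj = find(i), find(j)
        if ri != rj:          # joining two components cannot form a cycle
            parent[ri] = rj
            mst.append((i, j, w))
            if len(mst) == n - 1:
                break
    return mst
```

On the complete graph from the previous step, this keeps exactly n - 1 edges, the simplest structure in which every detection frame still reaches every other.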
In step 204, the nodes in the second connected graph that belong to the same object class are clustered according to the distances between the nodes, and the nodes clustered into the same cluster are merged into a single node. The merged node corresponding to a cluster covers all the nodes merged into that cluster, and the connected graph formed by all nodes after clustering is taken as a third connected graph.
In step 205, the third connectivity graph is determined to be the target connectivity graph.
On the basis of the minimum spanning tree of the first connected graph, nodes that are close to each other can additionally be clustered, which further reduces the number of combinations between nodes. During clustering, the object type of a node is the object type of the detection frame corresponding to that node, and only nodes belonging to the same object type are clustered together; in this way, after the nodes of a cluster are merged, the object type of the merged node remains unchanged.
Each node of the connected graph corresponds to an external detection frame, and each external detection frame corresponds to a fixed region of the image to be cropped. When several nodes of the same cluster are merged, the merged node must therefore cover all the image regions of the detection frames being merged, without changing the form of the detection frame. For example, with rectangular detection frames, suppose two nodes belong to the same cluster: one node's frame occupies the position (X1, Y1, X2, Y2) in the image to be cropped, where (X1, Y1) is the upper-left corner and (X2, Y2) the lower-right corner of its rectangle, and the other node's frame occupies (X3, Y3, X4, Y4), with (X3, Y3) the upper-left corner and (X4, Y4) the lower-right corner. The region corresponding to the merged node is then the smallest rectangle covering both frames, with (min(X1, X3), min(Y1, Y3)) as its upper-left corner and (max(X2, X4), max(Y2, Y4)) as its lower-right corner.
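Assuming rectangular frames stored as (x1, y1, x2, y2), the merge described above is just the coordinate-wise union of the rectangles; a sketch (the function name is illustrative):

```python
def merge_boxes(boxes):
    """Union of axis-aligned boxes (x1, y1, x2, y2): the smallest rectangle
    covering every box in the cluster, as required for a merged node."""
    x1 = min(b[0] for b in boxes)
    y1 = min(b[1] for b in boxes)
    x2 = max(b[2] for b in boxes)
    y2 = max(b[3] for b in boxes)
    return (x1, y1, x2, y2)
```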
Fig. 3 is a flowchart illustrating an image cropping method according to yet another exemplary embodiment of the present disclosure. As shown in fig. 3, the method further comprises step 301 and step 302.
In step 301, a candidate clipping region is determined according to the candidate subgraph, wherein the candidate clipping region and the candidate subgraph have a one-to-one or many-to-one relationship.
There are various ways to determine the candidate cropping region from the candidate subgraph. For example, the maximum image region that contains all nodes of the candidate subgraph and only those nodes, and that satisfies a preset cropping ratio, can be searched for in the image to be cropped, and that maximum image region is determined as the candidate cropping region. When the image to be cropped is rectangular, the preset cropping ratio may be, for example, 1:1, and the search for the maximum image region satisfying the requirement may proceed in the four directions from the center of the circumscribed rectangle formed by all nodes of the candidate subgraph.
Alternatively, when the target object categories to be retained in the image to be cropped include the face category, the search is for the maximum image region that contains all nodes of the candidate subgraph and only those nodes, satisfies the preset cropping ratio, and lies at a preset position in the image to be cropped; that maximum image region is determined as the candidate cropping region. The preset position may be chosen according to the target object category: for example, since a human face generally belongs in the upper part of the picture, the preset position may be expressed as the ratio of the space above the maximum image region to the height of the image to be cropped, for example 0, 0.1, 0.2, 0.33, or 0.5.
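One way such a search could look, assuming rectangular images and frames: the sketch below centers a crop of the preset ratio on the content box and clamps it into the image, which approximates the four-direction search rather than reproducing the patent's procedure exactly (the helper is hypothetical, not from the patent):

```python
def candidate_crop(image_w, image_h, content_box, ratio_w, ratio_h):
    """Largest crop with aspect ratio ratio_w:ratio_h that fits inside the
    image and contains content_box, roughly centered on the box.
    Returns None when the content cannot fit at this aspect ratio."""
    x1, y1, x2, y2 = content_box
    ratio = ratio_w / ratio_h
    # Largest crop of this ratio that fits in the image at all.
    crop_w = min(image_w, image_h * ratio)
    crop_h = crop_w / ratio
    if crop_w < (x2 - x1) or crop_h < (y2 - y1):
        return None
    # Center on the content box, then clamp into the image bounds.
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    left = min(max(cx - crop_w / 2.0, 0.0), image_w - crop_w)
    top = min(max(cy - crop_h / 2.0, 0.0), image_h - crop_h)
    return (left, top, left + crop_w, top + crop_h)
```

Applying it per candidate subgraph (with `content_box` as the merged box of its nodes) yields the candidate cropping regions to be ranked in the next step.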
In step 302, the target cropping area is selected from the candidate cropping areas through the ranking model, and the image to be cropped is cropped according to the target cropping area.
The ranking model may be a deep learning model obtained by pre-training, and the loss function used when training the ranking model may be a rank loss:

[rank-loss formula, reproduced in the original only as the image BDA0002921657620000091]

where Φ is a ResNet deep convolutional network model, X_t denotes the other pictures, and X_k denotes the labeled target picture.
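The formula itself did not survive extraction, so the exact form used in the patent is unknown; a margin-based pairwise rank loss is one common instance of "rank loss" over scores Φ(X_k) and Φ(X_t), sketched here purely as an assumption:

```python
def pairwise_rank_loss(score_target, score_other, margin=1.0):
    """Hinge-style pairwise rank loss: penalizes the model when the labeled
    target crop X_k does not outscore another crop X_t by at least `margin`.
    This is a common form of rank loss, not necessarily the patent's exact
    formula (which appears only as an unrendered image in the original)."""
    return max(0.0, margin - (score_target - score_other))
```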
Fig. 4 is a flowchart illustrating an image cropping method according to yet another exemplary embodiment of the present disclosure. As shown in fig. 4, the method further includes step 401 and step 402.
In step 401, an object category selection instruction input by a user is received.
In step 402, the target object class to be retained in the image to be cropped is determined according to the object class selection instruction. That is, before the image to be cropped is cropped, the user may select, according to his or her needs, any object type that may exist in the image, and have the cropping performed accordingly.
Fig. 5 is a block diagram illustrating a configuration of an image cropping device according to an exemplary embodiment of the present disclosure. As shown in fig. 5, the apparatus includes: the object detection module 10, configured to obtain, through a pre-trained object detection model, an external detection frame for each object contour in the image to be cropped, and to determine the object type to which each external detection frame belongs; the processing module 20, configured to take the external detection frames as nodes in a connected graph and the distances between the external detection frames as the weights of the edges between the nodes, construct a connected graph corresponding to the objects in the image to be cropped as a first connected graph, and determine a target connected graph from the first connected graph; the determining module 30, configured to determine the connected subgraphs of the target connected graph and determine each connected subgraph that includes a target node as a candidate subgraph, wherein there are one or more candidate subgraphs, and the object class to which the external detection frame corresponding to a target node belongs is a target object class to be retained in the image to be cropped; and the cropping module 40, configured to determine a target cropping region from the candidate subgraphs and crop the image to be cropped according to the target cropping region.
In a possible implementation, the processing module 20 is further configured to: searching the minimum spanning tree in the first connection diagram as a second connection diagram; and determining the target connected graph according to the second connected graph.
In a possible implementation, the processing module 20 is further configured to: according to the distance between each node in the second connected graph, clustering the nodes belonging to the same object class in the second connected graph respectively, and merging the nodes which are clustered into the same class of clusters into the same node, wherein the merged nodes corresponding to the various clusters comprise all the nodes merged in the class of clusters, and the connected graph formed by all the nodes after clustering is used as a third connected graph; determining the third connectivity graph as the target connectivity graph.
In a possible embodiment, the cropping module 40 is further configured to: determining a candidate clipping region according to the candidate subgraph, wherein the candidate clipping region and the candidate subgraph have one-to-one or many-to-one relationship; and selecting the target cropping area from the candidate cropping areas through the sorting model, and cropping the image to be cropped according to the target cropping area.
In a possible embodiment, the cropping module 40 is further configured to: searching a maximum image area which comprises all nodes in the candidate subgraph and only comprises all nodes in the candidate subgraph and meets a preset clipping proportion in the image to be clipped; determining the maximum image region corresponding to the candidate subgraph as the candidate clipping region.
In a possible embodiment, the cropping module 40 is further configured to: under the condition that the target object category needing to be reserved in the image to be cut comprises a face category, searching a maximum image area which comprises all nodes in the candidate subgraph, meets a preset cutting proportion and is in a preset position in the image to be cut; determining the maximum image region corresponding to the candidate subgraph as the candidate clipping region.
In a possible embodiment, the apparatus further comprises: the receiving module is used for receiving an object category selection instruction input by a user; and the selection module is used for determining the target object category which needs to be reserved in the image to be cropped according to the object category selection instruction.
Referring now to FIG. 6, a block diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire, through a pre-trained object detection model, an external detection frame for each object contour in the image to be cropped, and determine the object category to which each external detection frame belongs; taking the external detection frames as nodes and the distances between the external detection frames as the weights of the edges between the nodes, construct a connected graph corresponding to the objects in the image to be cropped as a first connected graph, and determine a target connected graph according to the first connected graph; determine the connected subgraphs of the target connected graph, and determine each connected subgraph that includes a target node as a candidate subgraph, wherein there are one or more candidate subgraphs, and the object category to which the external detection frame corresponding to a target node belongs is a target object category to be retained in the image to be cropped; and determine a target cropping region according to the candidate subgraphs, and crop the image to be cropped according to the target cropping region.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or by hardware. For example, the object detection module may also be described as "a module that acquires, through a pre-trained object detection model, an external detection frame for each object contour in the image to be cropped and determines the object category to which each external detection frame belongs".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Example 1 provides an image cropping method, according to one or more embodiments of the present disclosure, the method comprising:
acquiring, through a pre-trained object detection model, an external detection frame for each object contour in the image to be cropped, and determining the object category to which each external detection frame belongs;
taking the external detection frames as nodes and the distances between the external detection frames as the weights of the edges between the nodes, constructing a connected graph corresponding to the objects in the image to be cropped as a first connected graph, and determining a target connected graph according to the first connected graph;
determining connected subgraphs of the target connected graph, and determining each connected subgraph that includes a target node as a candidate subgraph, wherein there are one or more candidate subgraphs, and the object category to which the external detection frame corresponding to a target node belongs is a target object category to be retained in the image to be cropped;
and determining a target cropping region according to the candidate subgraphs, and cropping the image to be cropped according to the target cropping region.
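As a concrete illustration of the graph-construction step of example 1, the following Python sketch builds the first connected graph from detected boxes. The examples do not specify the distance metric between external detection frames; Euclidean distance between box centers is an assumption here, as are the function names.

```python
from itertools import combinations

def box_center(box):
    # box = (x1, y1, x2, y2) in pixel coordinates
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def build_connected_graph(boxes):
    """One node per detection box, fully connected; each edge weight
    is the Euclidean distance between the two box centers."""
    nodes = list(range(len(boxes)))
    edges = {}
    for i, j in combinations(nodes, 2):
        (xi, yi), (xj, yj) = box_center(boxes[i]), box_center(boxes[j])
        edges[(i, j)] = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
    return nodes, edges
```

For two boxes whose centers are four pixels apart horizontally, `build_connected_graph` returns a single edge of weight 4.0.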
Example 2 provides the method of example 1, wherein the determining of the target connected graph according to the first connected graph comprises:
searching for the minimum spanning tree of the first connected graph as a second connected graph;
and determining the target connected graph according to the second connected graph.
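The minimum spanning tree search of example 2 can be sketched with Kruskal's algorithm over the edge weights; the choice of algorithm is an assumption, since the examples do not name one.

```python
def kruskal_mst(nodes, edges):
    """Minimum spanning tree via Kruskal's algorithm with union-find.
    `edges` maps (u, v) node pairs to weights; returns (u, v, w) triples."""
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n

    mst = []
    for (u, v), w in sorted(edges.items(), key=lambda item: item[1]):
        ru, rv = find(u), find(v)
        if ru != rv:          # edge joins two components: keep it
            parent[ru] = rv
            mst.append((u, v, w))
    return mst
```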
Example 3 provides the method of example 2, wherein the determining of the target connected graph according to the second connected graph comprises:
clustering the nodes belonging to each object category in the second connected graph according to the distances between the nodes, and merging the nodes clustered into the same cluster into a single node, wherein the merged node corresponding to each cluster comprises all the nodes merged into that cluster, and the connected graph formed by all the nodes after clustering serves as a third connected graph;
and determining the third connected graph as the target connected graph.
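A minimal sketch of the node-merging step of example 3, under an assumed single-linkage rule: same-category nodes joined by an edge shorter than a threshold are unioned into one merged node. The threshold value and the exact clustering rule are assumptions; the examples only say that same-category nodes are clustered by distance.

```python
def merge_same_class_nodes(nodes, classes, edges, threshold):
    """Union same-category nodes whose edge weight is at most `threshold`;
    each resulting cluster is returned as the list of its member nodes,
    i.e. one merged node per cluster. `classes` maps node -> category."""
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    for (u, v), w in edges.items():
        if classes[u] == classes[v] and w <= threshold:
            parent[find(u)] = find(v)

    clusters = {}
    for n in nodes:
        clusters.setdefault(find(n), []).append(n)
    return list(clusters.values())
```

Two nearby face boxes collapse into one merged node, while a distant node of another category stays separate.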
Example 4 provides the method of example 1, wherein the determining of the target cropping region according to the candidate subgraphs and the cropping of the image to be cropped according to the target cropping region comprise:
determining candidate cropping regions according to the candidate subgraphs, wherein the candidate cropping regions have a one-to-one or many-to-one relationship with the candidate subgraphs;
and selecting the target cropping region from the candidate cropping regions through a ranking model, and cropping the image to be cropped according to the target cropping region.
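The ranking step of example 4 is not detailed, so the sketch below stands in a caller-supplied scoring function for the trained ranking model, and shows the final crop on a row-major pixel grid; both function names are assumptions.

```python
def select_target_region(candidates, score_fn):
    """Pick the highest-scoring candidate crop. `score_fn` is a stand-in
    for the trained ranking model, whose form is not specified."""
    return max(candidates, key=score_fn)

def crop(image_rows, region):
    """Crop a row-major pixel grid to region = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = region
    return [row[x1:x2] for row in image_rows[y1:y2]]
```

With an area-based placeholder score, the larger of two candidate regions wins, and `crop` then slices that window out of the image.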
Example 5 provides the method of example 4, wherein the determining of the candidate cropping regions according to the candidate subgraphs comprises:
searching the image to be cropped for the maximal image region that contains all the nodes of the candidate subgraph, contains only the nodes of the candidate subgraph, and satisfies a preset cropping ratio;
and determining the maximal image region corresponding to the candidate subgraph as the candidate cropping region.
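A simplified sketch of the region search of example 5: it computes the smallest rectangle covering every node's box and expands it to the preset width/height ratio, clamped to the image. It deliberately ignores the "contains only the nodes of the candidate subgraph" exclusion and the maximality requirement, both of which would need an actual search; the function name and the center-anchored expansion are assumptions.

```python
def candidate_region(boxes, img_w, img_h, ratio):
    """Smallest rectangle covering every box in `boxes`, expanded to the
    preset width/height `ratio` and clamped to the image bounds."""
    x1 = min(b[0] for b in boxes); y1 = min(b[1] for b in boxes)
    x2 = max(b[2] for b in boxes); y2 = max(b[3] for b in boxes)
    w, h = x2 - x1, y2 - y1
    if w / h < ratio:            # too narrow for the ratio: widen
        w = h * ratio
    else:                        # too flat for the ratio: heighten
        h = w / ratio
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    x1 = max(0, min(cx - w / 2, img_w - w))   # keep window inside image
    y1 = max(0, min(cy - h / 2, img_h - h))
    return (x1, y1, x1 + w, y1 + h)
```

A single 10x10 box in a 100x100 image, under a 2:1 ratio, yields a 20x10 window centered on the box.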
Example 6 provides the method of example 4, wherein the determining of the candidate cropping regions according to the candidate subgraphs comprises:
when the target object categories to be retained in the image to be cropped include a face category, searching the image to be cropped for the maximal image region that contains all the nodes of the candidate subgraph, satisfies a preset cropping ratio, and is at a preset position in the image to be cropped;
and determining the maximal image region corresponding to the candidate subgraph as the candidate cropping region.
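For the face-priority case of example 6, a sketch that places the face box at a preset relative position inside the largest window of the preset ratio that fits the image. The rule-of-thirds placement (`rel_y = 1/3`) and the function name are assumptions; the examples only say "a preset position".

```python
def face_priority_region(face_box, img_w, img_h, ratio, rel_y=1/3):
    """Crop window at the preset width/height `ratio` that puts the face
    center at relative height `rel_y` of the window, clamped to the image."""
    # Largest window of the preset ratio that fits inside the image.
    if img_w / img_h >= ratio:
        h = float(img_h); w = h * ratio
    else:
        w = float(img_w); h = w / ratio
    fx = (face_box[0] + face_box[2]) / 2
    fy = (face_box[1] + face_box[3]) / 2
    # Anchor the face at the preset position, then clamp to the image.
    x1 = max(0.0, min(fx - w / 2, img_w - w))
    y1 = max(0.0, min(fy - rel_y * h, img_h - h))
    return (x1, y1, x1 + w, y1 + h)
```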
Example 7 provides the method of example 1, further comprising, in accordance with one or more embodiments of the present disclosure:
receiving an object category selection instruction input by a user;
and determining, according to the object category selection instruction, the target object category to be retained in the image to be cropped.
Example 8 provides an image cropping device, according to one or more embodiments of the present disclosure, the device comprising:
the object detection module is configured to acquire, through a pre-trained object detection model, an external detection frame for each object contour in the image to be cropped, and to determine the object category to which each external detection frame belongs;
the processing module is configured to take the external detection frames as nodes and the distances between the external detection frames as the weights of the edges between the nodes, construct a connected graph corresponding to the objects in the image to be cropped as a first connected graph, and determine a target connected graph according to the first connected graph;
the determining module is configured to determine connected subgraphs of the target connected graph and to determine each connected subgraph that includes a target node as a candidate subgraph, wherein there are one or more candidate subgraphs, and the object category to which the external detection frame corresponding to a target node belongs is a target object category to be retained in the image to be cropped;
and the cropping module is configured to determine a target cropping region according to the candidate subgraphs and to crop the image to be cropped according to the target cropping region.
Example 9 provides a computer readable medium having stored thereon a computer program that, when executed by a processing apparatus, performs the steps of the method of any of examples 1-7, in accordance with one or more embodiments of the present disclosure.
Example 10 provides, in accordance with one or more embodiments of the present disclosure, an electronic device comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to carry out the steps of the method of any of examples 1-7.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.

Claims (10)

1. An image cropping method, characterized in that it comprises:
acquiring, through a pre-trained object detection model, an external detection frame for each object contour in the image to be cropped, and determining the object category to which each external detection frame belongs;
taking the external detection frames as nodes and the distances between the external detection frames as the weights of the edges between the nodes, constructing a connected graph corresponding to the objects in the image to be cropped as a first connected graph, and determining a target connected graph according to the first connected graph;
determining connected subgraphs of the target connected graph, and determining each connected subgraph that includes a target node as a candidate subgraph, wherein there are one or more candidate subgraphs, and the object category to which the external detection frame corresponding to a target node belongs is a target object category to be retained in the image to be cropped;
and determining a target cropping region according to the candidate subgraphs, and cropping the image to be cropped according to the target cropping region.
2. The method according to claim 1, wherein determining the target connected graph according to the first connected graph comprises:
searching for the minimum spanning tree of the first connected graph as a second connected graph;
and determining the target connected graph according to the second connected graph.
3. The method according to claim 2, wherein determining the target connected graph according to the second connected graph comprises:
clustering the nodes belonging to each object category in the second connected graph according to the distances between the nodes, and merging the nodes clustered into the same cluster into a single node, wherein the merged node corresponding to each cluster comprises all the nodes merged into that cluster, and the connected graph formed by all the nodes after clustering serves as a third connected graph;
and determining the third connected graph as the target connected graph.
4. The method according to claim 1, wherein determining the target cropping region according to the candidate subgraphs and cropping the image to be cropped according to the target cropping region comprises:
determining candidate cropping regions according to the candidate subgraphs, wherein the candidate cropping regions have a one-to-one or many-to-one relationship with the candidate subgraphs;
and selecting the target cropping region from the candidate cropping regions through a ranking model, and cropping the image to be cropped according to the target cropping region.
5. The method according to claim 4, wherein determining the candidate cropping regions according to the candidate subgraphs comprises:
searching the image to be cropped for the maximal image region that contains all the nodes of the candidate subgraph, contains only the nodes of the candidate subgraph, and satisfies a preset cropping ratio;
and determining the maximal image region corresponding to the candidate subgraph as the candidate cropping region.
6. The method according to claim 4, wherein determining the candidate cropping regions according to the candidate subgraphs comprises:
when the target object categories to be retained in the image to be cropped include a face category, searching the image to be cropped for the maximal image region that contains all the nodes of the candidate subgraph, satisfies a preset cropping ratio, and is at a preset position in the image to be cropped;
and determining the maximal image region corresponding to the candidate subgraph as the candidate cropping region.
7. The method of claim 1, further comprising:
receiving an object category selection instruction input by a user;
and determining, according to the object category selection instruction, the target object category to be retained in the image to be cropped.
8. An image cropping device, characterized in that it comprises:
the object detection module is configured to acquire, through a pre-trained object detection model, an external detection frame for each object contour in the image to be cropped, and to determine the object category to which each external detection frame belongs;
the processing module is configured to take the external detection frames as nodes and the distances between the external detection frames as the weights of the edges between the nodes, construct a connected graph corresponding to the objects in the image to be cropped as a first connected graph, and determine a target connected graph according to the first connected graph;
the determining module is configured to determine connected subgraphs of the target connected graph and to determine each connected subgraph that includes a target node as a candidate subgraph, wherein there are one or more candidate subgraphs, and the object category to which the external detection frame corresponding to a target node belongs is a target object category to be retained in the image to be cropped;
and the cropping module is configured to determine a target cropping region according to the candidate subgraphs and to crop the image to be cropped according to the target cropping region.
9. A computer-readable medium, on which a computer program is stored, characterized in that the program, when being executed by processing means, carries out the steps of the method of any one of claims 1 to 7.
10. An electronic device, comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to carry out the steps of the method according to any one of claims 1 to 7.
CN202110118673.7A 2021-01-28 2021-01-28 Image clipping method and device, readable medium and electronic equipment Active CN112884787B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110118673.7A CN112884787B (en) 2021-01-28 2021-01-28 Image clipping method and device, readable medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110118673.7A CN112884787B (en) 2021-01-28 2021-01-28 Image clipping method and device, readable medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112884787A true CN112884787A (en) 2021-06-01
CN112884787B CN112884787B (en) 2023-09-15

Family

ID=76053010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110118673.7A Active CN112884787B (en) 2021-01-28 2021-01-28 Image clipping method and device, readable medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112884787B (en)

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110249867A1 (en) * 2010-04-13 2011-10-13 International Business Machines Corporation Detection of objects in digital images
US20130336580A1 (en) * 2012-06-19 2013-12-19 Palo Alto Research Center Incorporated Weighted feature voting for classification using a graph lattice
CN103544712A (en) * 2013-11-12 2014-01-29 中国科学院自动化研究所 Method for automatically segmenting human lateral geniculate nucleus through prior knowledge
CN104809721A (en) * 2015-04-09 2015-07-29 香港中文大学深圳研究院 Segmentation method and device of cartoon
US9327406B1 (en) * 2014-08-19 2016-05-03 Google Inc. Object segmentation based on detected object-specific visual cues
CN105787872A (en) * 2011-11-09 2016-07-20 乐天株式会社 Image Processing Device, Method For Controlling Image Processing Device
CN106127782A (en) * 2016-06-30 2016-11-16 北京奇艺世纪科技有限公司 A kind of image partition method and system
CN106778864A (en) * 2016-12-13 2017-05-31 东软集团股份有限公司 Initial sample selection method and device
CN108597003A (en) * 2018-04-20 2018-09-28 腾讯科技(深圳)有限公司 A kind of article cover generation method, device, processing server and storage medium
CN109712164A (en) * 2019-01-17 2019-05-03 上海携程国际旅行社有限公司 Image intelligent cut-out method, system, equipment and storage medium
CN110009654A (en) * 2019-04-10 2019-07-12 大连理工大学 Three-dimensional data dividing method based on maximum Flow Policy
CN110136142A (en) * 2019-04-26 2019-08-16 微梦创科网络科技(中国)有限公司 A kind of image cropping method, apparatus, electronic equipment
CN110147833A (en) * 2019-05-09 2019-08-20 北京迈格威科技有限公司 Facial image processing method, apparatus, system and readable storage medium storing program for executing
CN110189340A (en) * 2019-06-03 2019-08-30 北京达佳互联信息技术有限公司 Image partition method, device, electronic equipment and storage medium
CN110796663A (en) * 2019-09-17 2020-02-14 北京迈格威科技有限公司 Picture clipping method, device, equipment and storage medium
CN110795925A (en) * 2019-10-12 2020-02-14 腾讯科技(深圳)有限公司 Image-text typesetting method based on artificial intelligence, image-text typesetting device and electronic equipment
US20200074665A1 (en) * 2018-09-03 2020-03-05 Baidu Online Network Technology (Beijing) Co., Ltd. Object detection method, device, apparatus and computer-readable storage medium
CN111274981A (en) * 2020-02-03 2020-06-12 中国人民解放军国防科技大学 Target detection network construction method and device and target detection method
CN112001406A (en) * 2019-05-27 2020-11-27 杭州海康威视数字技术股份有限公司 Text region detection method and device
CN112017189A (en) * 2020-10-26 2020-12-01 腾讯科技(深圳)有限公司 Image segmentation method and device, computer equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ELIMA HUSSAIN et al.: "A shape context fully convolutional neural network for segmentation and classification of cervical nuclei in Pap smear images", Artificial Intelligence in Medicine, pages 1-11 *
LIU PEI: "Research and Application of Deep-Learning-Based Image Object Detection and Segmentation Algorithms", China Master's Theses Full-text Database, Information Science and Technology, vol. 2019, no. 5, pages 138-1309 *
CHEN XIANGYU: "Research on Individual Tree Extraction from Aerial Images and Tree Species Classification from Airborne LiDAR Data", vol. 2020, no. 5 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant