Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that such references should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows a schematic diagram of one application scenario in which the image processing method of some embodiments of the present disclosure may be applied.
As shown in fig. 1, a computing device 101 may acquire an image 102 to be processed. Here, as an example, the image to be processed 102 includes a teaching aid map 103 and a teaching aid map 104. The computing device 101 identifies the contours of the teaching aid map 103 and the teaching aid map 104 in the image 102 to be processed through a contour extraction algorithm, obtaining contours 105 and 106 of the teaching aid map 103 and contours 107 and 108 of the teaching aid map 104. Thereafter, a contour circumscribed rectangle 109 of the contour 105, a contour circumscribed rectangle 110 of the contour 106, a contour circumscribed rectangle 112 of the contour 107, and a contour circumscribed rectangle 111 of the contour 108 are generated. Next, image classification is carried out on the regions to be processed enclosed by the contour circumscribed rectangles 109, 110, 111, and 112, respectively, obtaining the category information of each region and the classification probability corresponding to the category information. Finally, according to the classification probabilities of the regions to be processed enclosed by the contour circumscribed rectangle 109 and the contour circumscribed rectangle 110, the contour circumscribed rectangle corresponding to the maximum classification probability is selected and mapped onto the image to be processed 102 as a target contour circumscribed rectangle 113. Similarly, a target contour circumscribed rectangle 114 is obtained.
The computing device 101 may be hardware or software. When the computing device is hardware, the computing device may be implemented as a distributed cluster formed by a plurality of servers or terminal devices, or may be implemented as a single server or a single terminal device. When the computing device is embodied as software, it may be implemented as a plurality of software or software modules, for example, to provide distributed services, or as a single software or software module. The present application is not particularly limited herein.
It should be understood that the number of computing devices 101 in fig. 1 is merely illustrative. There may be any number of computing devices 101 as desired for an implementation.
With continued reference to fig. 2, a flow 200 of some embodiments of an image processing method according to the present disclosure is shown. The image processing method comprises the following steps:
step 201, identifying the outline of the target object in the image to be processed, and obtaining at least one outline of the object.
In some embodiments, the execution body of the image processing method (e.g., the computing device shown in fig. 1) may employ various contour detection algorithms (e.g., edge-based contour extraction algorithms) to identify the contours of target objects (e.g., teaching aids) in the image to be processed.
The image to be processed may be an image in which the target object is displayed. The target object includes, but is not limited to, at least one of: an article, such as a teaching aid; or a person. The contour of the target object in the image is composed of the edge pixels of the target object. A contour may be a single pixel wide or multiple pixels wide. For an image whose contours are a single pixel wide, at least one contour can be extracted by a contour detection algorithm. In practice, for an image whose contours are multiple pixels wide, an inner contour and an outer contour can both be extracted by a contour detection algorithm, so that at least two contours can be obtained.
In some embodiments, the extraction of the contour in the image by the execution body may include the following steps. First, the image is subjected to gray-scale processing, for example image binarization. Then, edge detection is performed on the gray-scale-processed image to extract edge information. For example, the Sobel edge detection algorithm, the Canny edge detection algorithm, or the Laplacian operator may be used to extract the edge information of the image. Finally, noise present in the edge information is removed and the edge information is repaired, obtaining the contour information.
The edge information can be repaired by using an edge tracking method to connect discrete edges. Edge tracking methods fall into eight-neighborhood and four-neighborhood variants. The method performs edge tracking in a preset tracking direction (for example, clockwise), and each tracking pass terminates when no contour point exists in the eight-neighborhood (or four-neighborhood) of the current point.
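The eight-neighborhood tracking described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name and the set-of-pixels representation are assumptions made for the example.

```python
# Hypothetical sketch of eight-neighborhood edge tracking: starting from a
# seed edge pixel, repeatedly step to an untracked neighboring edge pixel,
# scanning the neighborhood in a preset (clockwise) direction. Tracking
# terminates when no edge pixel remains in the eight-neighborhood.
def track_edge(edge_pixels, start):
    """edge_pixels: set of (row, col) tuples; returns the traced pixel sequence."""
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                 (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise scan order
    path = [start]
    visited = {start}
    current = start
    while True:
        r, c = current
        for dr, dc in neighbors:
            nxt = (r + dr, c + dc)
            if nxt in edge_pixels and nxt not in visited:
                path.append(nxt)
                visited.add(nxt)
                current = nxt
                break
        else:
            # Termination: no untracked edge pixel in the eight-neighborhood.
            return path
```

Running the tracker on a closed ring of edge pixels links every discrete edge pixel into one connected sequence.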
Step 202, generating a circumscribed rectangle of each contour to obtain at least one contour circumscribed rectangle.
In some embodiments, the bounding rectangle of the outline may be the smallest rectangle that contains the outline area and has sides parallel to the sides of the image. Wherein the contour region includes a contour and an inner region thereof.
In the present embodiment, the coordinate values (x, y) of the pixel points in each contour region may be counted. The maximum and minimum values of the x coordinates and of the y coordinates in the contour region are determined respectively. Then, the four coordinate pairs formed from the minimum and maximum values of x and y are used as vertices to generate the circumscribed rectangle of the contour region. As an example, the coordinate value of the pixel point in the upper left corner of the image may be noted as (0, 0).
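The min/max computation above can be sketched in a few lines; the function name and the (x_min, y_min, x_max, y_max) return convention are illustrative assumptions.

```python
# Sketch of step 202: compute the circumscribed rectangle of a contour region
# from the minimum and maximum pixel coordinates. The image origin (0, 0) is
# taken to be the top-left pixel, per the example in the text.
def circumscribed_rectangle(contour_pixels):
    """contour_pixels: iterable of (x, y) tuples.
    Returns (x_min, y_min, x_max, y_max) defining the four vertices."""
    xs = [p[0] for p in contour_pixels]
    ys = [p[1] for p in contour_pixels]
    return min(xs), min(ys), max(xs), max(ys)
```

The four vertices of the rectangle are then the combinations of these extremes: (x_min, y_min), (x_max, y_min), (x_min, y_max), and (x_max, y_max).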
Step 203, classifying the region to be processed enclosed by each contour circumscribed rectangle to obtain category information.
In some embodiments, the execution body of the image processing method may use various image classification algorithms (for example, a transfer learning algorithm or an image classification algorithm based on a support vector machine) to perform image classification on the region to be processed enclosed by each contour circumscribed rectangle, so as to obtain the category information of the image. Here, the region to be processed enclosed by each contour circumscribed rectangle is the region enclosed by the corresponding contour circumscribed rectangle obtained in step 202 after it is mapped onto the image to be processed.
As an example, the execution body may perform image classification on the regions to be processed through the following steps:
First, the circumscribed rectangles obtained in step 202 are mapped onto the image to be processed, obtaining the image to be processed with rectangular frames. Then, the image to be processed is cropped to obtain the region to be processed in each rectangular frame. Finally, an image classification algorithm based on a support vector machine can be adopted to classify the regions to be processed, obtaining the image category information in each rectangular frame and the corresponding classification probability.
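The cropping step can be sketched with plain array slicing; the nested-list image representation and function name are assumptions made for this example (a real pipeline would more likely slice a NumPy array or image tensor).

```python
# Hypothetical sketch of the cropping step: extract the region to be
# processed enclosed by a rectangular frame so it can be fed to the
# classifier. The image is modeled as a list of rows of pixel values.
def crop_region(image, rect):
    """image: list of rows; rect: (x_min, y_min, x_max, y_max), inclusive."""
    x_min, y_min, x_max, y_max = rect
    return [row[x_min:x_max + 1] for row in image[y_min:y_max + 1]]
```

Each cropped region would then be resized and passed to the classifier (e.g., the support vector machine mentioned above) to obtain its category and probability.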
Step 204, selecting a target outline bounding rectangle from the at least one outline bounding rectangle based on the category information.
In some embodiments, as an example, for the contour circumscribed rectangles of each target object in the image to be processed, the contour circumscribed rectangles whose classification probability is smaller than a preset value are removed, and the one with the largest area is selected from the remaining contour circumscribed rectangles as the target circumscribed rectangle. Contour circumscribed rectangles belonging to the same target object can be identified by determining that the distance between their centers of gravity is smaller than a preset value.
In some optional manners of some embodiments, the selecting, by the image processing method execution body, the target bounding rectangle from the at least one outline bounding rectangle may be performed as follows:
First, all contour circumscribed rectangles meeting a preset condition are selected.
Here, as an example, the preset condition includes: the distance between the centers of gravity of the circumscribed rectangles is less than or equal to a predetermined value.
Second, from the selected contour circumscribed rectangles, the one whose corresponding classification probability is the highest is selected as the target contour circumscribed rectangle.
One of the above embodiments of the present disclosure has the following beneficial effects: the contour of the target object in the image to be processed is identified, thereby obtaining at least one contour of the object. Then, by generating the circumscribed rectangle of each contour, at least one contour circumscribed rectangle is obtained, so that the position information of the target object in the image can be roughly determined. Finally, a target contour circumscribed rectangle is selected from the at least one contour circumscribed rectangle by using the category information obtained by classifying the regions to be processed enclosed by the contour circumscribed rectangles, thereby realizing the positioning of the target object. In this way, the contour circumscribed rectangle of the target object in the image is used to locate the target object contained within it.
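The two selection steps above can be sketched as follows. This is an illustrative reading under stated assumptions: the reference center, the distance threshold, and all names are invented for the example, and the grouping criterion (center-of-gravity distance) follows the preset condition described in the text.

```python
import math

# Sketch of the optional selection: among contour circumscribed rectangles
# whose centers of gravity lie within a preset distance of a reference center
# (i.e., rectangles assumed to belong to the same target object), keep the
# one with the highest classification probability.
def select_target(rects, probs, ref_center, max_dist):
    """rects: list of (x_min, y_min, x_max, y_max); probs: parallel list of
    classification probabilities. Returns the winning rectangle or None."""
    def center(r):
        return ((r[0] + r[2]) / 2.0, (r[1] + r[3]) / 2.0)

    # Step one: keep rectangles satisfying the center-of-gravity condition.
    candidates = [
        (p, r) for r, p in zip(rects, probs)
        if math.dist(center(r), ref_center) <= max_dist
    ]
    # Step two: pick the rectangle with the maximum classification probability.
    return max(candidates)[1] if candidates else None
```

For instance, with two overlapping rectangles near the reference center and one far away, the nearby rectangle with the higher probability is returned.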
With further reference to fig. 3, a flow 300 of further embodiments of an image processing method is shown. The flow 300 of the image processing method comprises the steps of:
step 301, identifying the outline of the target object in the image to be processed, and obtaining at least one outline of the object.
In some embodiments, the execution subject of the image processing method identifies the contour of the target object in the image to be processed, which may be performed as follows:
step 3011, inputting the image to be processed into a pre-trained contour extraction network model, and outputting the contour of the target object in the image to be processed. The contour extraction network may be HED (Whole nested edge detection, holisically-Nested Edge Detection), CEDN (full convolutional encoder-decoder network, convolutional Encoder-Dncoder Networks).
Step 3012, selecting a closed candidate contour from the candidate contours as the at least one contour.
In practice, the idea of boundary tracking may be employed to obtain closed candidate contours. The specific steps are as follows: the output image of step 3011 is scanned pixel by pixel starting from the top left corner; when a point on a contour is encountered, its coordinates are recorded and sequential tracking starts, continuing until the tracked successor point returns to the starting point or no new successor point exists. In response to the coordinates of the last tracked successor point being the same as those of the starting point, the candidate contour is determined to be a closed candidate contour.
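The closedness test described above reduces to a simple check on the traced sequence. A minimal sketch, assuming the tracker appends the starting point again when tracking returns to it:

```python
# Illustrative check from step 3012: a traced candidate contour is closed
# when the coordinates of the last tracked successor point coincide with
# those of the starting point.
def is_closed(traced_points):
    """traced_points: contour pixel coordinates in tracking order."""
    return len(traced_points) > 2 and traced_points[-1] == traced_points[0]
```

Open fragments (tracking that terminated because no new successor existed) fail this check and would be discarded as candidate contours.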
Step 302, generating a circumscribed rectangle of each contour to obtain at least one contour circumscribed rectangle.
In some embodiments, the specific implementation of step 302 and the technical effects thereof may refer to step 202 in the corresponding embodiment of fig. 2, which is not described herein.
And 303, respectively inputting the to-be-processed area surrounded by each outline circumscribed rectangle into a pre-trained classification network model to obtain the class information of the to-be-processed area and the classification probability corresponding to the class information.
In some embodiments, the execution body of the image processing method may implement image classification using a classification network such as ResNet (residual network).
Step 304, selecting, by using a non-maximum suppression algorithm, the circumscribed rectangle corresponding to the maximum classification probability among the contour circumscribed rectangles of each target object in the image to be processed as a target contour circumscribed rectangle.
The non-maximum suppression algorithm essentially finds local maxima and suppresses non-maximum elements. In some embodiments, according to the obtained classification probabilities and the coordinate information of the circumscribed rectangles, the contour circumscribed rectangle with the highest classification probability is found among the contour circumscribed rectangles of each target object. The specific implementation steps are as follows: first, the classification probabilities of the regions to be processed obtained in step 303 are sorted in descending order. Then, the contour circumscribed rectangle corresponding to the region to be processed with the highest classification probability is selected as a target contour circumscribed rectangle, and the IoU (Intersection over Union) between this contour circumscribed rectangle and each of the other contour circumscribed rectangles is calculated. Next, contour circumscribed rectangles with a high degree of overlap are removed according to the obtained IoU, for example, those whose IoU is greater than a preset threshold. The above steps are repeated on the remaining contour circumscribed rectangles, so as to obtain the target contour circumscribed rectangles of all target objects in the image to be processed.
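The steps above correspond to standard non-maximum suppression over axis-aligned boxes; a minimal sketch follows (function names and the box tuple format are assumptions for the example):

```python
# Sketch of step 304: non-maximum suppression over contour circumscribed
# rectangles using IoU. Boxes are (x_min, y_min, x_max, y_max).
def iou(a, b):
    """Intersection over Union of two axis-aligned rectangles."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(boxes, probs, iou_threshold):
    """Sort boxes by classification probability (descending); repeatedly keep
    the best remaining box and discard boxes overlapping it above threshold."""
    order = sorted(range(len(boxes)), key=lambda i: probs[i], reverse=True)
    kept = []
    while order:
        best = order.pop(0)
        kept.append(boxes[best])
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return kept
```

Applied to two heavily overlapping rectangles of one object plus a distant rectangle of another object, this keeps exactly one target contour circumscribed rectangle per object.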
Step 305, determining the position information of the center of gravity of the target contour circumscribed rectangle as the position information of the object corresponding to the target contour circumscribed rectangle.
Here, the positional information of the center of gravity of the target outline circumscribed rectangle is the positional information of the target object.
As can be seen from fig. 3, compared with the description of some embodiments corresponding to fig. 2, the flow 300 of the image processing method in some embodiments corresponding to fig. 3 highlights the contour detection algorithm and the algorithm for extracting the target contour circumscribed rectangle. A pre-trained contour extraction network model is adopted to extract the contour of the target object, omitting the complicated steps of extracting edge information and repairing edges with traditional algorithms. Then, by selecting only closed contours, information irrelevant to the contour of the target object is removed, making the classification result more accurate. Finally, the target contour circumscribed rectangle is selected by a non-maximum suppression algorithm, so that, compared with the embodiments of fig. 2, the positioning of the target object is more accurate and reliable.
With further reference to fig. 4, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of an image processing apparatus, which correspond to those method embodiments shown in fig. 2, and which are particularly applicable in various electronic devices.
As shown in fig. 4, the image processing apparatus 400 of some embodiments includes: an identification unit 401, a generation unit 402, a classification unit 403, and a suppression processing unit 404. Wherein the identifying unit 401 is configured to identify a contour of a target object in the image to be processed, so as to obtain at least one contour of the object; a generating unit 402, configured to generate an bounding rectangle of each contour, to obtain at least one contour bounding rectangle; the classifying unit 403 is configured to classify the to-be-processed area surrounded by each outline circumscribed rectangle to obtain category information; the suppression processing unit 404 is configured to select a target contour bounding rectangle from the at least one contour bounding rectangle based on the category information.
In an alternative implementation of some embodiments, the identification unit 401 of the image processing apparatus 400 is further configured to: inputting the image to be processed into a pre-trained contour extraction network model, and outputting candidate contours of a target object in the image; and selecting a closed candidate contour from the candidate contours as the at least one contour.
In an alternative implementation of some embodiments, the classification unit 403 of the image processing apparatus 400 is further configured to: inputting the outline circumscribed rectangle into a pre-trained classification network model to obtain the classification information of the circumscribed rectangle and the classification probability corresponding to the classification information.
In an alternative implementation of some embodiments, the suppression processing unit 404 of the image processing apparatus 400 is further configured to: and selecting the circumscribed rectangle corresponding to the maximum classification probability in the contour circumscribed rectangle of each displayed object in the image to be processed as a target contour circumscribed rectangle by using a non-maximum suppression algorithm.
In an alternative implementation of some embodiments, the suppression processing unit 404 of the image processing apparatus 400 is further configured to: select all contour circumscribed rectangles meeting a preset condition; and from the selected contour circumscribed rectangles, select the one whose corresponding classification probability is the highest as the target contour circumscribed rectangle.
In an alternative implementation of some embodiments, the image processing apparatus 400 further includes: and a determining unit. Wherein the determining unit is configured to determine the positional information of the center of gravity of the target contour circumscribed rectangle as the positional information of the object corresponding to the target contour circumscribed rectangle.
It will be appreciated that the elements described in the apparatus 400 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting benefits described above with respect to the method are equally applicable to the apparatus 400 and the units contained therein, and are not described in detail herein.
Referring now to FIG. 5, a schematic diagram of an electronic device (e.g., the computing device of FIG. 1) 500 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic devices in some embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), car terminals (e.g., car navigation terminals), and the like, as well as stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 5 is only one example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 5, the electronic device 500 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic device 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 shows an electronic device 500 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 5 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communications device 509, or from the storage device 508, or from the ROM 502. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
It should be noted that, the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: identifying the outline of an object displayed in an image to be processed to obtain at least one outline of the object; generating an external rectangle of each contour to obtain at least one contour external rectangle; classifying the to-be-processed area surrounded by each outline circumscribed rectangle to obtain category information; and selecting a target outline circumscribed rectangle from the at least one outline circumscribed rectangle based on the category information.
Computer program code for carrying out operations for some embodiments of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor, for example, described as: a processor includes an identification unit, a generation unit, a classification unit, and a suppression processing unit. The names of these units do not in any way constitute a limitation of the unit itself; for example, the identification unit may also be described as "a unit that identifies the contour of an object displayed in the image to be processed, obtaining at least one contour of the object".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
According to one or more embodiments of the present disclosure, there is provided an image processing method including: identifying the outline of an object displayed in an image to be processed to obtain at least one outline of the object; generating an external rectangle of each contour to obtain at least one contour external rectangle; classifying the to-be-processed area surrounded by each outline circumscribed rectangle to obtain category information; and selecting a target outline circumscribed rectangle from the at least one outline circumscribed rectangle based on the category information.
According to one or more embodiments of the present disclosure, the identifying the contour of the target object in the image to be processed, to obtain at least one contour of the object, includes: inputting the image to be processed into a pre-trained contour extraction network model, and outputting candidate contours of a target object in the image; and selecting a closed candidate contour from the candidate contours as the at least one contour.
According to one or more embodiments of the present disclosure, the classifying the to-be-processed area surrounded by each outline circumscribed rectangle to obtain category information includes: and inputting the to-be-processed area surrounded by each outline circumscribed rectangle into a pre-trained classification network model to obtain the class information of the to-be-processed area and the classification probability corresponding to the class information.
According to one or more embodiments of the present disclosure, the selecting, based on the category information, a target contour circumscribed rectangle from the at least one contour circumscribed rectangle includes: selecting all contour circumscribed rectangles meeting a preset condition; and from the selected contour circumscribed rectangles, selecting the one whose corresponding classification probability is the highest as the target contour circumscribed rectangle.
According to one or more embodiments of the present disclosure, the selecting, based on the category information, a target contour circumscribed rectangle from the at least one contour circumscribed rectangle includes: selecting, by using a non-maximum suppression algorithm, the circumscribed rectangle corresponding to the maximum classification probability among the contour circumscribed rectangles of each target object in the image to be processed as the target contour circumscribed rectangle. In some embodiments, the method further includes: determining the center of gravity of the target contour circumscribed rectangle as the position of the object corresponding to the target contour circumscribed rectangle.
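By way of illustration only, a standard non-maximum suppression procedure over axis-aligned rectangles may be sketched as follows; the overlap threshold and the (x, y, w, h) rectangle layout are illustrative assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two rectangles given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def non_maximum_suppression(detections, iou_threshold=0.5):
    """Keep, for each object, the rectangle with the maximum probability.

    detections: list of (rect, probability) pairs. Rectangles that overlap
    an already-kept, higher-probability rectangle beyond the threshold
    are suppressed.
    """
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    for rect, prob in detections:
        if all(iou(rect, k[0]) < iou_threshold for k in kept):
            kept.append((rect, prob))
    return kept
```

Under this sketch, overlapping rectangles around the same object collapse to the single highest-probability one, while rectangles around distinct objects are retained.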
According to one or more embodiments of the present disclosure, the above method further comprises: determining the position information of the center of gravity of the target contour circumscribed rectangle as the position information of the object corresponding to the target contour circumscribed rectangle.
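By way of illustration only, the center of gravity of an axis-aligned rectangle coincides with its geometric center, which may be computed as follows (the rectangle layout (x, y, w, h) is an illustrative assumption):

```python
def rectangle_center_of_gravity(rect):
    """Center of gravity of an axis-aligned rectangle (x, y, w, h).

    For a rectangle of uniform density the center of gravity is the
    geometric center, which here serves as the position information of
    the corresponding object.
    """
    x, y, w, h = rect
    return (x + w / 2.0, y + h / 2.0)
```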
According to one or more embodiments of the present disclosure, there is provided an image processing apparatus including: an identification unit configured to identify a contour of a target object in an image to be processed to obtain at least one contour of the object; a generating unit configured to generate a circumscribed rectangle of each contour to obtain at least one contour circumscribed rectangle; a classification unit configured to classify a to-be-processed area surrounded by each contour circumscribed rectangle to obtain category information; and a suppression processing unit configured to select a target contour circumscribed rectangle from the at least one contour circumscribed rectangle based on the category information.
According to one or more embodiments of the present disclosure, the identification unit is further configured to: inputting the image to be processed into a pre-trained contour extraction network model, and outputting candidate contours of a target object in the image; and selecting a closed candidate contour from the candidate contours as the at least one contour.
According to one or more embodiments of the present disclosure, the classification unit is further configured to: input the to-be-processed area surrounded by each contour circumscribed rectangle into a pre-trained classification network model to obtain the category information of the to-be-processed area and the classification probability corresponding to the category information.
According to one or more embodiments of the present disclosure, the suppression processing unit is further configured to: select, by using a non-maximum suppression algorithm, the circumscribed rectangle corresponding to the maximum classification probability among the contour circumscribed rectangles of each object displayed in the image to be processed as the target contour circumscribed rectangle.
According to one or more embodiments of the present disclosure, the suppression processing unit is further configured to: select all contour circumscribed rectangles meeting a preset condition; and select, from the selected contour circumscribed rectangles, the contour circumscribed rectangle corresponding to the highest classification probability as the target contour circumscribed rectangle.
According to one or more embodiments of the present disclosure, the above-described image processing apparatus further includes a determining unit, where the determining unit is configured to determine the position information of the center of gravity of the target contour circumscribed rectangle as the position information of the object corresponding to the target contour circumscribed rectangle.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; and a storage device having one or more programs stored thereon, which when executed by the one or more processors cause the one or more processors to implement the method as described in any of the above.
According to one or more embodiments of the present disclosure, there is provided a computer readable medium having stored thereon a computer program, wherein the program, when executed by a processor, implements the method of any of the above.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the application in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, but encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the application, for example, technical solutions formed by interchanging the above features with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.