CN110619626B - Image processing apparatus, system, method and device - Google Patents

Info

Publication number
CN110619626B
CN110619626B
Authority
CN
China
Prior art keywords
image
module
target object
position information
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910818620.9A
Other languages
Chinese (zh)
Other versions
CN110619626A (en)
Inventor
张焱
张华宾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dushi Technology Co ltd
Original Assignee
Beijing Dushi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dushi Technology Co ltd filed Critical Beijing Dushi Technology Co ltd
Priority to CN201910818620.9A priority Critical patent/CN110619626B/en
Publication of CN110619626A publication Critical patent/CN110619626A/en
Application granted granted Critical
Publication of CN110619626B publication Critical patent/CN110619626B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4023 Scaling of whole images or parts thereof, e.g. expanding or contracting based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing device, system, method and apparatus. The image processing device comprises a preprocessing module, an image detection module, a positioning module and a subsequent image processing module. The preprocessing module is configured to generate a second image suitable for detection by the image detection module according to the first image received by the image processing device, wherein the resolution of the second image is lower than that of the first image; the image detection module is configured to detect a target object in the second image; the positioning module is configured to determine second position information of the target object in the first image according to the first position information of the target object in the second image; and the subsequent image processing module is configured to perform corresponding image processing operation according to the second position information and the first image.

Description

Image processing apparatus, system, method and device
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to an image processing apparatus, system, method, and device.
Background
Currently, high-resolution high-definition images (e.g., 2K and 4K images) have become increasingly popular in the fields of detection, identification, and monitoring. A high-definition image provides a large amount of detail, so it can improve accuracy in image detection and recognition, provide a clear monitoring picture for monitoring personnel, and avoid the situation in which an image cannot be recognized because its resolution is too low. However, current image detection and recognition algorithms, limited by computational resources, typically support recognition of only low-resolution images (e.g., 512 × 512, 640 × 360, or lower). Therefore, after receiving a high-definition image from the image capturing device, an image detection and recognition device generally needs to convert it into a low-resolution image suitable for the detection and recognition algorithm by downsampling, and then perform detection and recognition. However, this causes a loss of image information, so the information provided by the high-definition image cannot be fully utilized.
Moreover, existing image detection and identification equipment is bulky and power-hungry, which makes it inconvenient to carry and, in particular, unsuitable for use as a wearable device, causing inconvenience to users.
No effective solution has yet been proposed for the technical problems that prior-art image recognition equipment cannot fully utilize the image information of a high-definition image, and that image detection and recognition equipment is large, power-hungry, and inconvenient to carry.
Disclosure of Invention
The present disclosure provides an image processing device, system, method and apparatus, so as to at least solve the technical problems that, in the prior art, an image recognition device cannot fully utilize the image information of a high-definition image, and an image detection and recognition device is large, power-hungry, and not easy to carry.
According to an aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including a preprocessing module, an image detection module, a positioning module, and a subsequent image processing module. The preprocessing module is configured to generate a second image suitable for detection by the image detection module according to the first image received by the image processing device, wherein the resolution of the second image is lower than that of the first image; the image detection module is configured to detect a target object in the second image; the positioning module is configured to determine second position information of the target object in the first image according to the first position information of the target object in the second image; and the subsequent image processing module is configured to perform corresponding image processing operation according to the second position information and the first image.
According to another aspect of an embodiment of the present disclosure, there is provided an image processing system including: an image acquisition device; and an image processing apparatus according to the above. The image processing equipment is connected with the image acquisition equipment and receives the image acquired by the image acquisition equipment.
According to another aspect of the embodiments of the present disclosure, there is provided an image processing method including: generating a second image suitable for image detection from the first image, wherein the second image has a lower resolution than the first image; detecting a target object in the second image; determining second position information of the target object in the first image according to the first position information of the target object in the second image; and performing corresponding image processing operation according to the second position information and the first image.
According to another aspect of an embodiment of the present disclosure, a storage medium is provided. The storage medium comprises a stored program, wherein the above described method is performed by a processor when the program is run.
According to another aspect of an embodiment of the present disclosure, there is provided an image processing apparatus including: the image generation module is used for generating a second image suitable for image detection according to the first image, wherein the resolution of the second image is lower than that of the first image; a detection module for detecting a target object in the second image; the positioning module is used for determining second position information of the target object in the first image according to the first position information of the target object in the second image; and the image processing module is used for carrying out corresponding image processing operation according to the second position information and the first image.
According to another aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including: a processor; and a memory coupled to the processor for providing instructions to the processor for processing the following processing steps: generating a second image suitable for image detection from the first image, wherein the second image has a lower resolution than the first image; detecting a target object in the second image; determining second position information of the target object in the first image according to the first position information of the target object in the second image; and performing corresponding image processing operation according to the second position information and the first image.
Therefore, the embodiments of the disclosure can still use the image information of the captured high-definition image for further subsequent processing on the basis of image detection, so that the high-definition image is converted to a resolution suitable for image detection while its information remains available for further processing. This solves the technical problem that prior-art image recognition equipment cannot fully utilize the image information of high-definition images. Moreover, because the image processing device can be designed as a dedicated integrated-circuit processor, the technical problems that existing image processing equipment is bulky, power-hungry, and inconvenient to carry are also solved.
The above and other objects, advantages and features of the present application will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of example and not by way of limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
fig. 1 is a schematic diagram of an image processing apparatus according to a first aspect of embodiment 1 of the present disclosure;
fig. 2 is a flowchart of an image processing method according to a third aspect of embodiment 1 of the present disclosure;
fig. 3 is a schematic diagram of an image processing apparatus according to embodiment 2 of the present disclosure; and
fig. 4 is a schematic diagram of an image processing apparatus according to embodiment 3 of the present disclosure.
Detailed Description
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
In order to make the technical solutions of the present disclosure better understood by those skilled in the art, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only some embodiments of the present disclosure, not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present disclosure without making creative efforts shall fall within the protection scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances for describing the embodiments of the disclosure herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Example 1
Fig. 1 is a schematic diagram of an image processing apparatus 200 according to a first aspect of embodiment 1 of the present application. Referring to fig. 1, the present application provides an image processing apparatus 200 including a pre-processing module 202, an image detection module 203, a positioning module 204, and subsequent image processing modules 205, 206, and 207. Wherein the pre-processing module 202 is configured to generate a second image suitable for detection by the image detection module 203 from the first image received by the image processing device 200, wherein the resolution of the second image is lower than that of the first image; the image detection module 203 is configured to detect a target object in the second image; the positioning module 204 is configured to determine second position information of the target object in the first image according to the first position information of the target object in the second image; and the subsequent image processing modules 205, 206 and 207 are configured to perform corresponding image processing operations according to the second position information and the first image.
Specifically, referring to FIG. 1, the image processing device 200 may include the following proprietary integrated circuit algorithm modules: a preprocessing module 202, an image detection module 203, a localization module 204, an image fusion module 205, an image extraction module 206, and an image recognition module 207.
The pre-processing module 202 may be used, for example, to generate an image (the second image) suitable for detection by the image detection module 203 from the high-definition image (the first image) received by the image processing device 200. The received high-definition image may be, for example, a high-definition image with a resolution of 1920 × 1080, while images suitable for detection by the image detection module 203 typically have a low resolution, e.g., 512 × 512, 640 × 360, or lower.
The image detection module 203 receives the second image, which has been made suitable for detection by the preprocessing performed in the preprocessing module 202, and then detects the target object in the second image. The target object may be one for which the detection model has been trained in advance, such as a human face or an automobile, so that the image detection module 203 can detect such a target object in the second image.
For example, when the high-definition image received by the image processing apparatus 200 is a 1920 × 1080 high-definition image (i.e., a first image), and the preprocessing module 202 generates a second image with a resolution of 640 × 360 from the high-definition image, the image detection module 203 performs detection of the target object on the second image, so that the position of the target object in the second image can be detected.
In this case, the positioning module 204 proportionally calculates the position (i.e., the second position information) of the target object in the first image (i.e., the 1920 × 1080 image) according to the position (i.e., the first position information) of the target object in the second image (i.e., the 640 × 360 image) received from the image detection module 203.
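The proportional mapping can be illustrated with a short Python sketch; the function name and the (x, y, w, h) box format are assumptions made here for illustration, not something prescribed by the patent.

    # Minimal sketch of the positioning step: map a detection box from the
    # low-resolution second image back into the high-resolution first image
    # by plain proportional scaling.
    def map_box_to_first_image(box, second_size, first_size):
        # box = (x, y, w, h) in second-image pixels; sizes are (width, height)
        sx = first_size[0] / second_size[0]   # horizontal scale factor
        sy = first_size[1] / second_size[1]   # vertical scale factor
        x, y, w, h = box
        return (int(x * sx), int(y * sy), int(w * sx), int(h * sy))

    # Example: a face found at (100, 50, 64, 64) in the 640 x 360 second image
    # maps to (300, 150, 192, 192) in the 1920 x 1080 first image.
    print(map_box_to_first_image((100, 50, 64, 64), (640, 360), (1920, 1080)))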
Further, the subsequent image processing modules 205, 206, and 207 are configured to perform corresponding image processing operations according to the second position information (i.e., the position of the target object in the first image) and the first image (e.g., the 1920 × 1080 high-definition image). The processing operations may include marking the target object for display, as well as extracting it and sending it to a remote server for analysis.
As described in the background, current image detection and recognition algorithms, limited by computational resources, typically support recognition of only low-resolution images (e.g., 512 × 512, 640 × 360, or lower). Therefore, after receiving the high-definition image from the image acquisition device, the image detection and identification device generally needs to convert the high-definition image into a low-resolution image suitable for the image detection and identification algorithm by means of downsampling, and then perform detection and identification. However, this causes a loss of image information, and thus the information provided by the high-definition image cannot be fully utilized.
In view of the above-mentioned problems in the prior art, the image processing apparatus 200 may divide the high definition image (i.e., the first image) acquired by the image acquisition apparatus 100 into two paths. After the one path of high-definition image is processed by the preprocessing module 202, the image detection module 203 and the positioning module 204, the position information (i.e., the second position information) of the target object in the high-definition image (the first image) can be obtained. Subsequent modules of the image processing device 200 (e.g., the image fusion module 205, the image extraction module 206, the image recognition module 207) may then perform subsequent processing based on the location information and another high definition image.
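As a rough illustration of this two-path data flow, the following Python sketch wires hypothetical stand-ins for the modules together; none of these names correspond to an actual API, and the cache, detector, locator, and other callables are assumed to exist.

    # Sketch of the two-path flow: the first image is cached, a low-resolution
    # copy goes through detection and positioning, and the subsequent modules
    # then operate on the cached high-definition frame.
    def process_frame(first_image, modules):
        modules.cache.put(first_image)                    # path 1: keep the HD frame
        second_image = modules.preprocess(first_image)    # path 2: low-res copy for detection
        detections = modules.detect(second_image)         # first position info (in second image)
        hd = modules.cache.get()
        results = []
        for first_pos in detections:
            second_pos = modules.locate(first_pos,        # second position info (in first image)
                                        second_image, hd)
            region = modules.extract(hd, second_pos)      # HD region of the target for recognition
            results.append(modules.recognize(region))
            hd = modules.fuse(hd, second_pos)             # add a marker for the display path
        return hd, results                                # marked HD frame + recognition results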
Therefore, the image processing device 200 can still perform further subsequent processing using the image information of the captured high-definition image on the basis of detection by the image detection module 203, so that the information of the high-definition image remains available for further processing while the image is converted to a resolution suitable for detection by the image detection module 203. This solves the technical problem that prior-art image recognition equipment cannot fully utilize the image information of high-definition images.
Optionally, the preprocessing module 202 is configured to generate the second image by any one of the following operations: down-sampling the first image, thereby generating a second image; performing a down-sampling operation and a deformation operation on the first image, thereby generating a second image; performing a cropping operation and a downsampling operation on the first image to generate a second image; and segmenting the first image to generate a plurality of second images.
Specifically, the image processing apparatus 200 includes a preprocessing module 202. The preprocessing module 202 may convert a received 1920 × 1080 high-definition image (the first image) into a second image (e.g., 512 × 512, 640 × 360, or 640 × 480) suitable for detection by the image detection module 203. The conversion may be performed in any one of the following ways (an illustrative sketch of these four variants is given after the list):
1. Down-sampling the high-definition image. For example, when the resolution of the received high-definition image is 1920 × 1080 (16:9) and the resolution suitable for the image detection module 203 is 640 × 360 (also 16:9), the high-definition image can be directly down-sampled to 640 × 360.
2. Down-sampling the high-definition image and performing a warping operation (which may be, for example, a stretching or compressing operation on the image) to reach an aspect ratio suitable for the image detected and identified by the image detection module 203. For example, when the resolution of the received high-definition image is 1920 × 1080 (16:9) and the resolution suitable for the image detection module 203 is 512 × 512 (1:1), the high-definition image can be down-sampled and then stretched or compressed to 512 × 512.
3. Cropping the high-definition image according to the aspect ratio of the image suitable for the image detection module 203, and then down-sampling the cropped image to obtain an image suitable for the image detection module 203. For example, when the received high-definition image is 1920 × 1080 and the resolution suitable for the image detection module 203 is 512 × 512 (1:1), the high-definition image may first be cropped to 1080 × 1080 (1:1) and then down-sampled to 512 × 512.
4. Partitioning the received high-definition image into a plurality of sub-images suitable for detection by the image detection module 203. For example, when the received high-definition image is 1920 × 1080 and the resolution of the image suitable for detection by the image detection module 203 is 512 × 512, the pre-processing module 202 may divide the 1920 × 1080 image into a plurality of 512 × 512 sub-images. Each of the sub-images obtained by the division can then be subjected to the image detection operation separately.
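The four variants can be sketched in Python as follows; OpenCV and NumPy are used purely for illustration, and the tile padding in the fourth variant is an assumption, since the patent does not specify how partial tiles are handled.

    import cv2
    import numpy as np

    def downsample(img, size=(640, 360)):
        # Variant 1: plain down-sampling; the target size keeps the 16:9 aspect ratio.
        return cv2.resize(img, size, interpolation=cv2.INTER_AREA)

    def downsample_and_warp(img, size=(512, 512)):
        # Variant 2: down-sample and stretch/compress to the detector's aspect ratio.
        return cv2.resize(img, size, interpolation=cv2.INTER_AREA)

    def crop_then_downsample(img, size=(512, 512)):
        # Variant 3: centre-crop to a square (e.g. 1080 x 1080), then down-sample.
        h, w = img.shape[:2]
        side = min(h, w)
        x0 = (w - side) // 2
        return cv2.resize(img[:, x0:x0 + side], size, interpolation=cv2.INTER_AREA)

    def split_into_tiles(img, tile=(512, 512)):
        # Variant 4: split the frame into detector-sized sub-images, keeping the
        # offset of each tile so detections can later be mapped back.
        tw, th = tile
        h, w = img.shape[:2]
        tiles = []
        for y in range(0, h, th):
            for x in range(0, w, tw):
                patch = img[y:y + th, x:x + tw]
                padded = np.zeros((th, tw) + img.shape[2:], dtype=img.dtype)
                padded[:patch.shape[0], :patch.shape[1]] = patch
                tiles.append(((x, y), padded))
        return tiles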
Thus, in the above manner, by converting the high-definition image into a low-resolution second image suitable for detection by the image detection module 203, the image processing device 200 enables the target object in the high-definition image to be detected using an existing image detection model.
Optionally, the subsequent image processing module includes an image extraction module 206 and an image recognition module 207, wherein the image extraction module 206 is configured to extract an image region containing the target object from the first image according to the second position information; and the image recognition module 207 is configured to recognize the target object according to the image area.
In particular, the subsequent image processing modules in the image processing device 200 include an image extraction module 206 and an image recognition module 207. The image extraction module 206 is configured to extract an image area containing the target object, i.e., a high-definition image area containing the target object, from the first image (i.e., the high-definition image) according to the second position information (i.e., the position of the target object in the first image). For example, referring to fig. 1, the image extraction module 206 may read the cached high-definition image from the image caching module 201 and receive the position information (i.e., the second position information) of the target object from the positioning module 204. The image extraction module 206 then extracts the high-definition image region related to the target object (e.g., a human face) at the corresponding position in the high-definition image according to the read high-definition image and the position information, and transmits the region to the image recognition module 207.
Further, the image recognition module 207 is configured to recognize the target object according to the extracted high-definition image region. Specifically, the image recognition module 207 acquires the high-definition target (e.g., a human face) from the image extraction module 206 and then performs related operations such as facial feature extraction and comparison. Finally, the high-definition target (such as a human face), the related feature information, and the target description information (name, age, etc.) are sent to a related server through the network device 301 for storage and further processing.
In the above manner, the image processing apparatus 200 first detects a target object to be recognized in an image of low resolution generated from a high-definition image through the preprocessing module 202, the image detection module 203, and the positioning module 204, and then determines the position of the target object in the high-definition image. Further, the image processing device extracts a high-definition image area containing the target object to be recognized from the high-definition image according to the position of the target object in the high-definition image, and then recognizes the target object in the high-definition image area. Therefore, by the mode, huge calculation power consumption caused by direct detection and identification operation in the high-definition image is avoided, and meanwhile, the image information of the high-definition image can be fully utilized, so that more accurate identification can be realized while calculation power is saved.
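The extraction and recognition steps can be sketched as below; the (x, y, w, h) box format, the margin, and the embedding/gallery comparison are illustrative assumptions, since the patent leaves the concrete recognition algorithm open.

    import numpy as np

    def extract_region(first_image, second_pos, margin=0.2):
        # Cut the high-definition region around the target, with a small margin.
        x, y, w, h = second_pos
        H, W = first_image.shape[:2]
        mx, my = int(w * margin), int(h * margin)
        x0, y0 = max(0, x - mx), max(0, y - my)
        x1, y1 = min(W, x + w + mx), min(H, y + h + my)
        return first_image[y0:y1, x0:x1]

    def recognize(region, embed, gallery):
        # Compare the target's feature vector against known identities.
        # embed is a placeholder for a face-embedding network; gallery maps
        # names to reference feature vectors.
        feat = embed(region)
        scores = {name: float(np.dot(feat, ref)) for name, ref in gallery.items()}
        best = max(scores, key=scores.get)
        return best, scores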
Optionally, the subsequent image processing module comprises an image fusion module 205, wherein the image fusion module 205 is configured to add a marker at the position of the target object in the first image according to the second position information and generate a third image.
In particular, the subsequent image processing modules in the image processing device 200 include an image fusion module 205, where the image fusion module 205 is configured to add a marker at the position of the target object in the first image according to the second position information (i.e., the position of the target object in the first image, that is, on the high-definition image) and to generate a third image. The image fusion module 205 receives the cached first image from the image caching module 201 and receives the position information of the target object in the high-definition image from the positioning module 204. The image fusion module 205 then adds a mark (e.g., a colored rectangular frame, a name, etc., around the target object) at the corresponding position in the high-definition image according to the received high-definition image and the position information of the target object, thereby generating a high-definition image (the third image) with the target object marked, which is output to the high-definition image display 302.
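A minimal sketch of this fusion step, using OpenCV drawing calls purely for illustration (the marker style and label are assumptions):

    import cv2

    def fuse_marker(first_image, second_pos, label="target", color=(0, 0, 255)):
        # Draw a rectangle and label on a copy of the cached high-definition frame
        # at the target's position, producing the third image for the display path.
        x, y, w, h = second_pos
        third_image = first_image.copy()
        cv2.rectangle(third_image, (x, y), (x + w, y + h), color, 2)
        cv2.putText(third_image, label, (x, max(0, y - 8)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, color, 2)
        return third_image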
In practice, monitoring staff usually monitor a target object by watching a monitoring video. Therefore, if a mark identifying the target object (such as a colored rectangular frame, a name, and the like) can be added to the video, it is easier for the monitoring staff to observe the video. However, as described above, if a target object is to be detected in a video image, it is necessary to convert the high-definition image into a low-resolution image suitable for the image detection module 203. In that case a mark can indeed be added to the target object in the video, which facilitates monitoring, but the resolution of the displayed video is also reduced, so the monitoring staff cannot make out the detail information of the target object. This is disadvantageous to the monitoring staff's work.
In view of this, in the technical solution of the present embodiment, the image processing apparatus 200 determines the position of the target object in the high definition image (i.e. the first image) through the positioning module 204, and then adds a mark at the position of the target object in the high definition image by using the image fusion module 205, so as to generate a third image with a marked high definition, and transmits the third image to the high definition image display 302.
Thus, in this way, high-definition monitoring video with markers can be provided for monitoring staff while the computational load is reduced, and the technical problem that prior-art image recognition equipment cannot transmit high-definition images to the image display is solved.
Optionally, the image processing apparatus 200 further comprises an image caching module 201 for caching the first image, and the subsequent image processing modules 205, 206 and 207 are configured to retrieve the first image from the image caching module 201.
Specifically, the image caching module 201 may be configured to cache the high-definition image (the first image) transmitted by the image capturing apparatus 100, and may provide the high-definition image (the first image) to the subsequent image processing modules 205, 206, and 207. This solves the technical problem that the image output by the image detection and identification device is no longer a high-definition image.
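A minimal sketch of such a cache, assuming a small fixed-depth buffer keyed by frame identifier (the depth and the keying scheme are assumptions; the patent only requires that the first image be retrievable by the subsequent modules):

    from collections import deque

    class ImageCache:
        # Hold the most recent high-definition frames so the extraction and
        # fusion modules can fetch the frame a detection result refers to.
        def __init__(self, depth=4):
            self._frames = deque(maxlen=depth)   # oldest frames are dropped automatically

        def put(self, frame_id, first_image):
            self._frames.append((frame_id, first_image))

        def get(self, frame_id):
            for fid, img in self._frames:
                if fid == frame_id:
                    return img
            return None                          # frame already evicted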
Alternatively, the image processing apparatus 200 is a processor, and the pre-processing module 202, the image detection module 203, the localization module 204, and the subsequent image processing modules 205, 206, and 207 are proprietary integrated circuit algorithm modules in the processor.
In particular, the present embodiment integrates the pre-processing module 202, the image detection module 203, the positioning module 204, and the subsequent image processing modules 205, 206, and 207 in a proprietary integrated circuit processor. Therefore, the technical problems that an image processor in the prior art is large in size, high in power consumption and not beneficial to carrying are solved.
Further, referring to fig. 1, according to a second aspect of the present embodiment 1, there is provided an image detection system including: an image capturing apparatus 100; and an image processing apparatus 200 according to any implementation of the first aspect of the present embodiment, wherein the image processing apparatus 200 is connected to the image capturing apparatus 100 and receives the image captured by the image capturing apparatus 100.
In this way, the image processing apparatus 200 can, for example, receive the captured high-definition image (i.e., the first image) from the image acquisition apparatus 100 for processing.
Further optionally, the image detection system further comprises an image display 302 connected to the image processing device 200 for receiving the image processed by the image processing device 200.
Further, according to a third aspect of the present embodiment 1, there is provided an image processing method. Fig. 2 is a flowchart of an image processing method according to a third aspect of the present embodiment. The method may be executed by, for example, the image processing apparatus shown in fig. 1, but may also be implemented by a computing apparatus such as a terminal apparatus or a server. Referring to fig. 2, the present application provides an image processing method, including:
s202: generating a second image suitable for image detection from the first image, wherein the second image has a lower resolution than the first image;
s204: detecting a target object in the second image;
s206: determining second position information of the target object in the first image according to the first position information of the target object in the second image; and
s208: and performing corresponding image processing operation according to the second position information and the first image.
A detailed description of the method according to the third aspect of the present embodiment may refer to the related description of the image processing apparatus according to the first aspect of the present embodiment. Therefore, on the basis of image detection, the method can still use the image information of the captured high-definition image for further subsequent processing, so that the high-definition image is converted to a resolution suitable for image detection while its information remains available for further processing. This solves the technical problem that prior-art image recognition equipment cannot fully utilize the image information of high-definition images.
Optionally, the operation of generating the second image from the first image includes generating the second image by any one of the following operations: down-sampling the first image, thereby generating a second image; performing a down-sampling operation and a deformation operation on the first image, thereby generating a second image; performing a cropping operation and a downsampling operation on the first image to generate a second image; and segmenting the first image to generate a plurality of second images.
Optionally, the subsequent image processing operation comprises: extracting an image area containing the target object from the first image according to the second position information; and identifying the target object according to the image area.
Optionally, the subsequent image processing operation comprises: according to the second position information, a marker is added at the position of the target object in the first image, and a third image is generated.
Further, according to a fourth aspect of the present embodiment, there is provided a storage medium. The storage medium comprises a stored program, wherein the method of any of the above is performed by a processor when the program is run.
Example 2
Fig. 3 shows an image processing apparatus 300 according to the present embodiment, the apparatus 300 corresponding to the method according to the third aspect of embodiment 1. Referring to fig. 3, the apparatus 300 includes: an image generating module 310, configured to generate a second image suitable for image detection according to the first image, where the resolution of the second image is lower than that of the first image; a detection module 320 for detecting a target object in the second image; the positioning module 330 is configured to determine second position information of the target object in the first image according to the first position information of the target object in the second image; and an image processing module 340, configured to perform a corresponding image processing operation according to the second position information and the first image.
Optionally, the image generating module 310 includes any one of the following sub-modules: a first image generation sub-module for down-sampling the first image to generate a second image; a second image generation sub-module for performing down-sampling operation and warping operation on the first image to generate a second image; a third image generation sub-module for performing a cropping operation and a down-sampling operation on the first image to generate a second image; and a fourth image generation sub-module for segmenting the first image to generate a plurality of second images.
Optionally, the image processing module 340 includes: the image extraction submodule is used for extracting an image area containing the target object from the first image according to the second position information; and the image recognition sub-module is used for recognizing the target object according to the image area.
Optionally, the image processing module 340 includes: and the fifth image generation submodule is used for adding a mark at the position of the target object in the first image according to the second position information and generating a third image.
Therefore, according to the present embodiment, further subsequent processing can still be performed using the image information of the captured high-definition image on the basis of image detection, so that the high-definition image is converted to a resolution suitable for image detection while its information remains available for further processing. This solves the technical problem that prior-art image recognition equipment cannot fully utilize the image information of high-definition images.
Example 3
Fig. 4 shows an image processing apparatus 400 according to the present embodiment, the apparatus 400 corresponding to the method according to the third aspect of embodiment 1. Referring to fig. 4, the apparatus 400 includes: a processor 410; and a memory 420 coupled to the processor 410 for providing instructions to the processor 410 to process the following process steps: generating a second image suitable for image detection from the first image, wherein the second image has a lower resolution than the first image; detecting a target object in the second image; determining second position information of the target object in the first image according to the first position information of the target object in the second image; and performing corresponding image processing operation according to the second position information and the first image.
Optionally, the operation of generating the second image from the first image includes generating the second image by any one of the following operations: down-sampling the first image, thereby generating a second image; performing down-sampling operation and deformation operation on the first image to generate a second image; performing a cropping operation and a downsampling operation on the first image, thereby generating a second image; and segmenting the first image to generate a plurality of second images.
Optionally, the subsequent image processing operation comprises: extracting an image area containing the target object from the first image according to the second position information; and identifying the target object according to the image area.
Optionally, the subsequent image processing operation comprises: according to the second position information, a marker is added at the position of the target object in the first image, and a third image is generated.
Therefore, according to the present embodiment, further subsequent processing can still be performed using the image information of the captured high-definition image on the basis of image detection, so that the high-definition image is converted to a resolution suitable for image detection while its information remains available for further processing. This solves the technical problem that prior-art image recognition equipment cannot fully utilize the image information of high-definition images.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise. Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate. In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be discussed further in subsequent figures.
For ease of description, spatially relative terms such as "on", "over", "above", "upper surface of", and the like may be used herein to describe the spatial relationship of one device or feature to another device or feature as shown in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is turned over, devices described as "above" or "on" other devices or configurations would then be oriented "below" or "under" the other devices or configurations. Thus, the exemplary term "above" may include both an orientation of "above" and "below". The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
In the description of the present disclosure, it is to be understood that the orientation or positional relationship indicated by the directional terms such as "front, rear, upper, lower, left, right", "lateral, vertical, horizontal" and "top, bottom", etc., are generally based on the orientation or positional relationship shown in the drawings, and are presented only for the convenience of describing and simplifying the disclosure, and in the absence of a contrary indication, these directional terms are not intended to indicate and imply that the device or element being referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore, should not be taken as limiting the scope of the disclosure; the terms "inner and outer" refer to the inner and outer relative to the profile of the respective component itself.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (6)

1. An image processing apparatus (200) comprising a pre-processing module (202), an image detection module (203), a localization module (204) and subsequent image processing modules (205, 206 and 207), characterized in that:
the pre-processing module (202) is configured to generate a second image suitable for detection by the image detection module (203) from a first image received by the image processing device (200), wherein the second image has a lower resolution than the first image;
the image detection module (203) is configured to detect a target object in the second image, wherein the target object comprises at least a human face;
the positioning module (204) is configured to determine second position information of the target object in the first image according to first position information of the target object in the second image; and
the subsequent image processing modules (205, 206 and 207) are configured to perform corresponding image processing operations according to the second position information and the first image, wherein
The pre-processing module (202) is configured to generate the second image by any one of: down-sampling the first image, thereby generating the second image; performing a down-sampling operation and a warping operation on the first image, thereby generating the second image; performing a cropping operation and a downsampling operation on the first image, thereby generating the second image; and segmenting said first image to generate a plurality of said second images, and wherein
The subsequent image processing modules (205, 206 and 207) comprise an image extraction module (206) and an image recognition module (207), wherein the image extraction module (206) is configured to extract an image region containing the target object from the first image according to the second position information; and the image recognition module (207) is configured to recognize the target object according to the image area, and
the subsequent image processing modules (205, 206 and 207) further comprise an image fusion module (205), wherein the image fusion module (205) is configured to add a marker at the position of the target object in the first image according to the second position information and generate a third image, and
the image processing apparatus (200) further comprises an image buffering module (201) for buffering the first image, and
the subsequent image processing module (205, 206 and 207) is configured to retrieve the first image from the image caching module (201).
2. The image processing device (200) of claim 1, wherein the image processing device (200) is a processor, and the pre-processing module (202), the image detection module (203), the localization module (204), and the subsequent image processing modules (205, 206, and 207) are proprietary integrated circuit algorithm modules in the processor.
3. An image processing system comprising: an image acquisition device (100); and an image processing device (200) according to any one of claims 1 to 2, the image processing device (200) being connected to the image acquisition device (100) for receiving images acquired by the image acquisition device (100).
4. An image processing method, characterized by comprising:
generating a second image suitable for image detection from the first image, wherein the second image has a lower resolution than the first image;
detecting a target object in the second image, wherein the target object at least comprises a human face;
determining second position information of the target object in the first image according to the first position information of the target object in the second image; and
according to the second position information and the first image, corresponding image processing operation is carried out, wherein
Generating the second image from the first image, including generating the second image by any of: down-sampling the first image, thereby generating the second image; performing a down-sampling operation and a warping operation on the first image, thereby generating the second image; performing a cropping operation and a downsampling operation on the first image, thereby generating the second image; and segmenting the first image to generate a plurality of second images, and wherein
Subsequent image processing operations comprising: extracting an image area containing a target object from the first image according to the second position information; and identifying the target object according to the image area, and
subsequent image processing operations, further comprising: adding a marker at a position of a target object in the first image according to the second position information, generating a third image, and
the image processing apparatus (200) further comprises an image buffering module (201) for buffering the first image, and
the subsequent image processing module (205, 206 and 207) is configured to retrieve the first image from the image caching module (201).
5. An image processing apparatus characterized by comprising:
the image generation module is used for generating a second image suitable for image detection according to the first image, wherein the resolution of the second image is lower than that of the first image;
a detection module for detecting a target object in the second image, wherein the target object at least comprises a human face;
the positioning module is used for determining second position information of the target object in the first image according to first position information of the target object in the second image; and
an image processing module for performing corresponding image processing operation according to the second position information and the first image, wherein
Generating the second image from the first image, including generating the second image by any of: down-sampling the first image, thereby generating the second image; performing a down-sampling operation and a warping operation on the first image, thereby generating the second image; performing a cropping operation and a downsampling operation on the first image, thereby generating the second image; and segmenting the first image to generate a plurality of second images, and wherein
Subsequent image processing operations comprising: extracting an image area containing a target object from the first image according to the second position information; and identifying the target object according to the image area, and
subsequent image processing operations, further comprising: adding a marker at a position of a target object in the first image according to the second position information, generating a third image, and
the image processing apparatus (200) further comprises an image buffering module (201) for buffering the first image, and
the subsequent image processing module (205, 206 and 207) is configured to retrieve the first image from the image caching module (201).
6. An image processing apparatus characterized by comprising:
a processor; and
a memory coupled to the processor for providing instructions to the processor for processing the following processing steps:
generating a second image suitable for image detection from the first image, wherein the second image has a lower resolution than the first image;
detecting a target object in the second image, wherein the target object at least comprises a human face;
determining second position information of the target object in the first image according to first position information of the target object in the second image; and
according to the second position information and the first image, corresponding image processing operation is carried out, wherein
Generating the second image from the first image, including generating the second image by any one of: down-sampling the first image, thereby generating the second image; performing a down-sampling operation and a warping operation on the first image, thereby generating the second image; performing a cropping operation and a downsampling operation on the first image, thereby generating the second image; and segmenting the first image to generate a plurality of second images, and wherein
Subsequent image processing operations comprising: extracting an image area containing a target object from the first image according to the second position information; and identifying the target object according to the image area, and
subsequent image processing operations, further comprising: adding a marker at a position of a target object in the first image according to the second position information, generating a third image, and
the image processing apparatus (200) further comprises an image buffering module (201) for buffering the first image, and
the subsequent image processing module (205, 206 and 207) is configured to retrieve the first image from the image caching module (201).
CN201910818620.9A 2019-08-30 2019-08-30 Image processing apparatus, system, method and device Active CN110619626B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910818620.9A CN110619626B (en) 2019-08-30 2019-08-30 Image processing apparatus, system, method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910818620.9A CN110619626B (en) 2019-08-30 2019-08-30 Image processing apparatus, system, method and device

Publications (2)

Publication Number Publication Date
CN110619626A CN110619626A (en) 2019-12-27
CN110619626B true CN110619626B (en) 2023-04-07

Family

ID=68922886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910818620.9A Active CN110619626B (en) 2019-08-30 2019-08-30 Image processing apparatus, system, method and device

Country Status (1)

Country Link
CN (1) CN110619626B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062894A (en) * 2020-01-06 2020-04-24 北京都是科技有限公司 Artificial intelligence processor and artificial intelligence analysis device
CN112818933A (en) * 2021-02-26 2021-05-18 北京市商汤科技开发有限公司 Target object identification processing method, device, equipment and medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101937508A (en) * 2010-09-30 2011-01-05 湖南大学 License plate localization and identification method based on high-definition image
CN103679134A (en) * 2013-09-09 2014-03-26 华中科技大学 A sea target infrared imaging identification apparatus
CN107016366A (en) * 2017-03-29 2017-08-04 浙江师范大学 A kind of guideboard detection method based on an adaptive window and convolutional neural networks
CN108875733A (en) * 2018-04-23 2018-11-23 西安电子科技大学 A kind of infrared small target quick extraction system
CN109948494A (en) * 2019-03-11 2019-06-28 深圳市商汤科技有限公司 Image processing method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王立国 (Wang Liguo). "Hyperspectral image super-resolution technology," in 高光谱图像处理技术 (Hyperspectral Image Processing Technology), 2013, pp. 166-168. *

Also Published As

Publication number Publication date
CN110619626A (en) 2019-12-27

Similar Documents

Publication Publication Date Title
US9373034B2 (en) Apparatus and method for tracking object
US8248474B2 (en) Surveillance system and surveilling method
CA3100569A1 (en) Ship identity recognition method base on fusion of ais data and video data
CN110987189B (en) Method, system and device for detecting temperature of target object
CN109029731A (en) A kind of power equipment exception monitoring system and method based on multi-vision visual
CN110619626B (en) Image processing apparatus, system, method and device
JP2009171296A (en) Video network system, and video data management method
CN111522073B (en) Method for detecting condition of wearing mask by target object and thermal infrared image processor
JP2016218760A5 (en)
CN110335271B (en) Infrared detection method and device for electrical component fault
CN106056594A (en) Double-spectrum-based visible light image extraction system and method
CN113066195A (en) Power equipment inspection method and device, AR glasses and storage medium
CN110536074B (en) Intelligent inspection system and inspection method
CN113228626B (en) Video monitoring system and method
CN104869316B (en) The image capture method and device of a kind of multiple target
CN113052876A (en) Video relay tracking method and system based on deep learning
KR20150021351A (en) Apparatus and method for alignment of images
JP2010268158A (en) Image processing system, method of processing image, and program
KR20110129158A (en) Method and system for detecting a candidate area of an object in an image processing system
CN113947754A (en) Vision-based ship machinery running state monitoring method and system and storage medium
CN107704851B (en) Character identification method, public media display device, server and system
CN114511592B (en) Personnel track tracking method and system based on RGBD camera and BIM system
CN116797977A (en) Method and device for identifying dynamic target of inspection robot and measuring temperature and storage medium
WO2016116206A1 (en) Object detecting method and object detecting apparatus
CN112260402B (en) Monitoring method for state of intelligent substation inspection robot based on video monitoring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant