CN111369570B - Multi-target detection tracking method for video image - Google Patents

Multi-target detection tracking method for video image

Info

Publication number
CN111369570B
CN111369570B (application CN202010111978.0A)
Authority
CN
China
Prior art keywords
image
video image
target
gray
area
Prior art date
Legal status
Active
Application number
CN202010111978.0A
Other languages
Chinese (zh)
Other versions
CN111369570A (en)
Inventor
吴浩
Current Assignee
Chengdu Skydefence Technology Co ltd
Original Assignee
Chengdu Skydefence Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Skydefence Technology Co ltd filed Critical Chengdu Skydefence Technology Co ltd
Priority to CN202010111978.0A priority Critical patent/CN111369570B/en
Publication of CN111369570A publication Critical patent/CN111369570A/en
Application granted granted Critical
Publication of CN111369570B publication Critical patent/CN111369570B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of video image processing and in particular discloses a multi-target detection and tracking method for video images. The method comprises: receiving a video image; preprocessing the video image to obtain an initial video image; performing morphological operations on the initial video image to obtain a gray-scale darker region image; generating a binary image and an edge image from the gray-scale darker region image; screening the connected regions in the binary image according to the edge image and preset target feature information to obtain target regions; and tracking by forming tracks from the association between the target regions of the current video image and those of the previous frame. By extracting locally darker regions, determined from the difference between the morphologically processed image and the original image together with a threshold, bright regions whose gray values do not match the target are eliminated, which speeds up target detection and improves its accuracy.

Description

Multi-target detection tracking method for video image
Technical Field
The application relates to the technical field of video image processing, in particular to a multi-target detection tracking method for video images.
Background
Connected region: a connected region (Connected Component) generally refers to an image region (Blob) formed by foreground pixels that have similar pixel values and are adjacent to one another in the image, i.e. a block of the image with similar pixel values.
The function of monitoring systems has evolved from simple functions such as manual monitoring of video signals, multi-picture display and hard-disk recording to intelligent motion detection and target tracking using computers.
Target detection refers to detecting suspected target regions in a sequence of images and extracting the targets from the background. In general, the distance between a target and the lens determines its imaging size, and targets of different sizes and different motion states (moving and static) can appear at the same moment, so traditional methods such as frame differencing and background subtraction cannot detect targets in all of these motion states.
At present there are many mainstream target detection algorithms. Basic image-based methods include frame differencing, background subtraction and the like, while machine-learning-based methods include Haar+AdaBoost, HOG+SVM, SSD, YOLO, RCNN and the like. Each has advantages and disadvantages. Frame differencing and background subtraction are simple to implement, computationally light and real-time, but they work only for moving targets, fail for static targets and require associating several consecutive frames. Feature-plus-classifier methods such as Haar+AdaBoost and HOG+SVM are particularly sensitive to interference such as illumination changes and extraneous events, and are not time-efficient. Deep-learning-based algorithms such as SSD, YOLO and RCNN detect well and with high accuracy, but their time efficiency is low and they cannot achieve real-time processing.
Disclosure of Invention
In view of the above, the present application provides a multi-target detecting and tracking method for video images, which can solve the above-mentioned problems of the prior art.
In order to solve the above technical problems, the technical solution provided by the application is a multi-target detection tracking method for video images, comprising the following steps:
receiving a video image;
preprocessing the video image to obtain an initial video image;
performing morphological operations on the initial video image to obtain a gray-scale darker region image;
generating a binary image and an edge image from the gray-scale darker region image;
screening the connected region in the binary image according to the edge image and preset target feature information to obtain a target region;
and tracking by forming a track from the association between the target region of the current video image and the target region of the previous frame of video image.
Preferably, the method for preprocessing the video image to obtain an initial video image includes:
converting the video image into a gray scale image;
and reducing the gray level image according to a preset reduction multiple to obtain an initial video image.
Preferably, the method for performing morphological operation on the initial video image to obtain a gray-scale darker region image includes:
performing erosion and dilation operations on the initial video image to obtain a morphological video image;
subtracting the initial video image from the morphological video image to obtain a gray-scale darker region image.
Preferably, the method for generating the binary image and the edge image from the gray-scale darker region image comprises the following steps:
performing gray-scale stretching and binarization on the gray-scale darker region image to obtain a binary image;
and performing Canny edge detection on the gray-scale darker region image to obtain an edge image.
Preferably, the method for screening the connected region in the binary image according to the edge image and the preset target feature information to obtain the target region includes:
deleting connected regions which do not accord with the preset target feature information from the binary image according to the preset target feature information;
and deleting connected regions without edges from the binary image according to the edge image.
Preferably, after the step of screening the connected region in the binary image according to the edge image and the preset target feature information to obtain the target region, and before the step of tracking by forming a track from the association between the target region of the current video image and the target region of the previous frame of video image, the method further comprises the step of:
restoring the target region to the original video image size according to the preset reduction multiple.
Compared with the prior art, the application has the following beneficial effects. The multi-target detection tracking method for video images comprises: receiving a video image; preprocessing the video image to obtain an initial video image; performing morphological operations on the initial video image to obtain a gray-scale darker region image; generating a binary image and an edge image from the gray-scale darker region image; screening the connected regions in the binary image according to the edge image and preset target feature information to obtain target regions; and tracking by forming tracks from the association between the target regions of the current video image and those of the previous frame. By extracting locally darker regions, determined from the difference between the morphologically processed image and the original image together with a threshold, bright regions whose gray values do not match the target are eliminated, which speeds up target detection and improves its accuracy.
Drawings
For a clearer description of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a diagram of a multi-target detection tracking method for video images according to embodiment 1 of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without making any inventive effort are within the scope of the present application.
In order to make the technical solution of the present application better understood by those skilled in the art, the present application will be further described in detail with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, embodiment 1 of the present application provides a multi-target detection tracking method for video images, which includes:
s11: receiving a video image;
s12: preprocessing a video image to obtain an initial video image;
s13: carrying out morphological operations on the initial video image to obtain a gray-scale darker region image;
s14: generating a binary image and an edge image from the gray-scale darker region image;
s15: screening the connected region in the binary image according to the edge image and preset target feature information to obtain a target region;
s16: tracking by forming a track from the association between the target region of the current video image and the target region of the previous frame of video image.
Specifically, step S12 is a method for preprocessing a video image to obtain an initial video image, including:
s121: converting the video image into a gray scale image;
here, converting a color image in RGB format into a gray-scale image, that is, converting a three-channel color image into a single-channel image, can reduce the amount of computation.
S122: and reducing the gray level image according to a preset reduction multiple to obtain an initial video image.
Here, downscaling the gray-scale image by the preset reduction multiple speeds up the subsequent processing: an overly large image slows processing down, and reducing the image to half its size is enough to noticeably accelerate it.
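A minimal sketch of this preprocessing step, assuming OpenCV; the half-size factor and the interpolation mode are illustrative choices rather than values prescribed by the text:

    import cv2

    def preprocess(frame, scale=0.5):
        """Convert a BGR video frame to gray scale and downscale it."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)            # 3 channels -> 1 channel
        small = cv2.resize(gray, (0, 0), fx=scale, fy=scale,
                           interpolation=cv2.INTER_AREA)          # shrink to speed up later stages
        return small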
Specifically, step S13 performs morphological operation on the initial video image to obtain a gray-scale darker region image, which includes:
s131: performing erosion and dilation operations on the initial video image to obtain a morphological video image;
here, a diamond template with the template size of 15 can be selected to perform corrosion operation on the image, so that the effect of expanding a brighter area is achieved. Dilation is used to connect (join) adjacent elements, which is also the most intuitive presentation of the dilated image. The basic morphological operations are erosion and swelling. In morphology, structural elements are the most important and fundamental concept. The role of the structural elements in the morphological transformation corresponds to a "filter window" in the signal processing. With B (x) representing the structural element, for each point x in the working space E, the definition of corrosion and expansion is:
and (3) corrosion:
expansion:
the result of etching E with B (x) is a set of all points where B is contained in E after translating the structural element B. The result of expanding E with B (x) is a set of points that shift structural element B such that the intersection of B and E is non-null.
S132: subtracting the initial video image from the morphological video image to obtain a gray-scale darker region image.
Here, the gray-scale darker region image is obtained by taking the difference between the morphological operation image and the gray-scale image; in other words, the regions that are darker than their neighbourhood are obtained.
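A minimal sketch of steps S131-S132, assuming OpenCV and NumPy. The diamond structuring element is built by hand (OpenCV has no built-in diamond shape), and the order dilation-then-erosion (a gray-level closing, whose difference from the original responds to locally dark regions) is one reading of the "erosion and dilation" wording, not a definitive implementation:

    import cv2
    import numpy as np

    def diamond_kernel(size=15):
        """Diamond-shaped structuring element of the given odd size."""
        r = size // 2
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        return (np.abs(x) + np.abs(y) <= r).astype(np.uint8)

    def darker_region_image(gray):
        """Morphological image minus the gray image: highlights locally dark regions."""
        kernel = diamond_kernel(15)
        morph = cv2.erode(cv2.dilate(gray, kernel), kernel)   # assumed order: dilation then erosion (closing)
        return cv2.subtract(morph, gray)                      # saturating subtraction, negatives clipped to 0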
Specifically, the method in step S14 for generating the binary image and the edge image from the gray-scale darker region image includes:
s141: performing gray-scale stretching and binarization on the gray-scale darker region image to obtain a binary image;
Here, the method of stretching the gray-scale darker region image is as follows: the specified input range of the image is stretched to the target range [lowOut, highOut], where lowOut = 0 and highOut = 255. The input range of the current image is [lowIn, highIn]: the average gray value medianGray of the image is computed first (if it is less than or equal to 0, it is set to 1); the minimum lowIn is this average gray value, and the maximum highIn = average gray value + (255 - medianGray)/3. With new denoting the mapped value and old the original gray value, the mapping is the standard linear stretch: new = (old - lowIn) × (highOut - lowOut)/(highIn - lowIn) + lowOut, clipped to [lowOut, highOut].
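A minimal sketch of this stretching rule, assuming NumPy and an 8-bit image; variable names mirror the description above:

    import numpy as np

    def gray_stretch(img, low_out=0, high_out=255):
        """Linearly stretch [lowIn, highIn] to [lowOut, highOut] as described above."""
        median_gray = max(float(img.mean()), 1.0)                 # average gray value, floored at 1
        low_in = median_gray
        high_in = low_in + max((255.0 - median_gray) / 3.0, 1.0)  # guard against a zero-width range
        new = (img.astype(np.float32) - low_in) * (high_out - low_out) / (high_in - low_in) + low_out
        return np.clip(new, low_out, high_out).astype(np.uint8)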
Here, the method for obtaining the binarization threshold and performing binarization is as follows: compute the image histogram and, accumulating from the low-gray end, find the gray value that covers 95% of the image area; multiply this gray value by 2 to obtain the current threshold, and if the current threshold is larger than the maximum gray value, set the current threshold T = maximum gray value - 10. The image is then binarized: pixels smaller than T are set to 0 as background, and the others are set to 255 as target.
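A minimal sketch of the threshold selection and binarization, assuming NumPy and an 8-bit stretched image; the 95% area fraction and the fallback of "maximum gray value minus 10" follow the description above:

    import numpy as np

    def binarize(stretched, area_fraction=0.95):
        """Pick the threshold from the histogram as described above, then binarize."""
        hist = np.bincount(stretched.ravel(), minlength=256)
        cum = np.cumsum(hist)
        g95 = int(np.searchsorted(cum, area_fraction * stretched.size))   # gray value covering 95% of the area
        gray_max = int(stretched.max())
        t = 2 * g95
        if t > gray_max:
            t = gray_max - 10
        return np.where(stretched < t, 0, 255).astype(np.uint8)           # background 0, target 255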
S142: performing Canny edge detection on the gray-scale darker region image to obtain an edge image.
Here, Canny edges are extracted. Using the maximum gray value grayMax of the gray map obtained in step S13, the dual thresholds are set to a minimum of grayMax/3 and a maximum of grayMax. The method mainly comprises the following 5 steps:
1) A Gaussian filter is used to smooth the image and filter out noise.
2) The gradient magnitude and gradient direction of every pixel in the image are computed. The horizontal and vertical gradients Gx and Gy of the image are computed with a pair of horizontal and vertical derivative templates (commonly Sobel kernels) and combined into the gradient G of the image; the gradient direction is the angle formed by the horizontal and vertical gradients:
θ = arctan(Gy / Gx)
3) Non-maximum suppression (Non-Maximum Suppression) is applied to eliminate spurious responses in the edge detection. Centred on each point, the gradient magnitude is compared with the values of its neighbours along the positive and negative gradient directions; if it is the largest, the point is kept as a local maximum, otherwise it is suppressed.
4) Double-threshold detection is applied to determine true and potential edges.
After applying the non-maximum suppression, the remaining pixels can more accurately represent the actual edges in the image. However, there are still some edge pixels due to noise and color variations. To address these spurious responses, it is necessary to filter edge pixels with weak gradient values and preserve edge pixels with high gradient values, which can be achieved by selecting a high and low threshold. If the gradient value of the edge pixel is higher than the high threshold value, marking it as a strong edge pixel; if the gradient value of the edge pixel is less than the high threshold and greater than the low threshold, it is marked as a weak edge pixel; if the gradient value of the edge pixel is less than the low threshold, it is suppressed. The choice of threshold depends on the content of a given input image.
In pseudocode, the double-threshold detection classifies each pixel by comparing its gradient magnitude with the high and low thresholds; a minimal sketch is given below.
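This sketch assumes a NumPy array of gradient magnitudes left after non-maximum suppression; the numeric labels are illustrative:

    import numpy as np

    def double_threshold(grad, low, high):
        """Classify pixels as strong edges (2), weak edges (1) or suppressed (0)."""
        labels = np.zeros_like(grad, dtype=np.uint8)
        labels[grad >= high] = 2                     # strong edge pixels
        labels[(grad >= low) & (grad < high)] = 1    # weak edge pixels, resolved in the next step
        return labels                                # pixels below the low threshold stay suppressed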
5) Edge detection is ultimately accomplished by suppressing isolated weak edges.
Pixels classified as strong edges have already been determined to be edges, because they are extracted from the true edges in the image. For weak edge pixels, however, there is some ambiguity: these pixels may be extracted from real edges, or they may be due to noise or colour variations. To obtain an accurate result, weak edges caused by the latter should be suppressed. Typically, weak edge pixels caused by real edges are connected to strong edge pixels, while noise responses are not. To track edge connections, each weak edge pixel and its 8-neighbourhood are examined: as long as one of the neighbouring pixels is a strong edge pixel, the weak edge point is retained as a true edge.
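In practice all five steps are available as a single library call; a minimal sketch using the thresholds described in this section (grayMax/3 and grayMax), assuming OpenCV:

    import cv2

    def edge_image(dark_region_img):
        """Canny edges of the gray-scale darker region image with grayMax-based thresholds."""
        gray_max = int(dark_region_img.max())
        low, high = max(gray_max // 3, 1), max(gray_max, 1)   # guard against an all-zero image
        return cv2.Canny(dark_region_img, low, high)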
Specifically, step S15 is a method for screening a connected region in a binary image according to an edge image and preset target feature information to obtain a target region, including:
deleting connected regions which do not accord with the preset target feature information from the binary image according to the preset target feature information; and deleting connected regions without edges from the binary image according to the edge image.
Here, the method for screening the connected regions by combining the edge image and the binary image is as follows: connected regions whose length and width do not meet the conditions given by the preset target feature information are deleted; connected regions that contain no edges are deleted according to the edge image; and the connected regions that meet the target features are retained.
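A minimal sketch of this screening, assuming OpenCV connected-component labelling; the width and height limits stand in for the preset target feature information and are illustrative assumptions:

    import cv2
    import numpy as np

    def screen_regions(binary, edges, min_wh=(2, 2), max_wh=(80, 80)):
        """Keep connected regions of plausible size that also contain edge pixels."""
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
        kept = []
        for i in range(1, n):                                   # label 0 is the background
            x = stats[i, cv2.CC_STAT_LEFT]
            y = stats[i, cv2.CC_STAT_TOP]
            w = stats[i, cv2.CC_STAT_WIDTH]
            h = stats[i, cv2.CC_STAT_HEIGHT]
            if not (min_wh[0] <= w <= max_wh[0] and min_wh[1] <= h <= max_wh[1]):
                continue                                        # size does not match the preset target features
            region_mask = labels[y:y + h, x:x + w] == i
            if not np.any(edges[y:y + h, x:x + w][region_mask]):
                continue                                        # no edge pixels inside this region
            kept.append((x, y, w, h))
        return kept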
Specifically, in step S15, after the step of screening the connected region in the binary image according to the edge image and the preset target feature information to obtain the target region, and before the step of tracking by forming a track from the association between the target region of the current video image and the target region of the previous frame, the method further comprises: restoring the target region to the original video image size according to the preset reduction multiple.
Here, the connected-region coordinates need to be restored to the original coordinate scale. Because the connected-region screening is performed on the downscaled image, the coordinates are multiplied by the reduction multiple to restore them to the original image size.
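A minimal sketch of the coordinate restoration, assuming the same reduction factor as in the preprocessing sketch:

    def restore_boxes(boxes, scale=0.5):
        """Map (x, y, w, h) boxes from the downscaled image back to the original image."""
        f = 1.0 / scale                              # e.g. a 0.5 scale means multiplying coordinates by 2
        return [(int(x * f), int(y * f), int(w * f), int(h * f)) for (x, y, w, h) in boxes]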
Specifically, in step S16, a track is formed from the association between the target region of the current video image and the target region of the previous frame of video image; the target regions can be screened again according to the track-matching information of adjacent moments, so that the best target regions are obtained and the targets are tracked over a period of time.
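The patent does not spell out the association rule; the following is a minimal sketch under the assumption of a simple nearest-centre match between consecutive frames, with an illustrative distance gate:

    def associate(prev_boxes, curr_boxes, max_dist=30.0):
        """Greedily match current target regions to the previous frame by centre distance."""
        def centre(box):
            x, y, w, h = box
            return x + w / 2.0, y + h / 2.0

        matches, used = [], set()
        for i, pb in enumerate(prev_boxes):
            px, py = centre(pb)
            best, best_d = None, max_dist
            for j, cb in enumerate(curr_boxes):
                if j in used:
                    continue
                cx, cy = centre(cb)
                d = ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
                if d < best_d:
                    best, best_d = j, d
            if best is not None:
                used.add(best)
                matches.append((i, best))            # extend the track of previous target i with region best
        return matches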
The application mainly uses morphological operations to detect targets. Traditional morphological operations are a series of image operations based on structuring elements of a certain shape, including dilation, erosion, binarization, opening, closing, the top-hat and black-hat transforms, the morphological gradient and so on. However, conventional morphological methods are not directly suited to detecting targets against a sky background, so in this application scenario morphological operations are combined with a correspondingly updated algorithm to detect targets against a moving background.
To guarantee real-time performance, morphology is adopted as the basic algorithm, but direct morphological operations cannot be applied to target detection against a complex background. The recognition algorithm therefore performs the initial localization of targets using the difference between the original image and the morphologically processed image: the image is first eroded, which enlarges dark regions and removes isolated points; the difference between the eroded image and the gray-scale image then yields region blocks with a large gray-level difference, which appear as darker blocks in the gray-scale image, and these are taken as candidate target blocks. The method is simple to implement, has low complexity, adapts to a variety of dynamic environments and target motion states, and is highly real-time. At the same time, a target-screening decision mechanism is added to the technical solution, which improves recognition accuracy while keeping real-time performance, making the method suitable for wide adoption.
The multi-target detection and tracking method for video images provided by the embodiments of the application has been described in detail above. The embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts can be referred to across embodiments. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief, and the relevant points can be found in the description of the method. It should be noted that various modifications and adaptations of the application may be made by those skilled in the art without departing from the principles of the application, and such modifications and adaptations are intended to fall within the scope of the claims.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.

Claims (1)

1. A method for multi-target detection tracking of video images, the method comprising:
receiving a video image;
converting the video image into a gray image, and shrinking the gray image according to a preset reduction multiple to obtain an initial video image;
carrying out morphological operations on the initial video image, specifically carrying out erosion and dilation operations on the initial video image, to obtain a morphological video image;
subtracting the initial video image from the morphological video image to obtain a gray-scale darker region image;
carrying out gray-scale stretching and binarization on the gray-scale darker region image to obtain a binary image, and carrying out Canny edge detection on the gray-scale darker region image to obtain an edge image;
deleting connected regions which do not accord with preset target feature information from the binary image according to the preset target feature information;
deleting connected regions without edges from the binary image according to the edge image, to obtain a target region;
restoring the target region to the size of the video image according to the preset reduction multiple;
and tracking by forming a track from the association between the target region of the video image and the target region of the previous frame of video image.
CN202010111978.0A 2020-02-24 2020-02-24 Multi-target detection tracking method for video image Active CN111369570B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010111978.0A CN111369570B (en) 2020-02-24 2020-02-24 Multi-target detection tracking method for video image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010111978.0A CN111369570B (en) 2020-02-24 2020-02-24 Multi-target detection tracking method for video image

Publications (2)

Publication Number Publication Date
CN111369570A CN111369570A (en) 2020-07-03
CN111369570B true CN111369570B (en) 2023-08-18

Family

ID=71210083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010111978.0A Active CN111369570B (en) 2020-02-24 2020-02-24 Multi-target detection tracking method for video image

Country Status (1)

Country Link
CN (1) CN111369570B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114092925A (en) * 2020-08-05 2022-02-25 武汉Tcl集团工业研究院有限公司 Video subtitle detection method and device, terminal equipment and storage medium
CN114255253A (en) * 2020-09-25 2022-03-29 北京小米移动软件有限公司 Edge detection method, edge detection device, and computer-readable storage medium
CN112270657A (en) * 2020-11-04 2021-01-26 成都寰蓉光电科技有限公司 Sky background-based target detection and tracking algorithm
CN114022803B (en) * 2021-09-30 2023-11-14 苏州浪潮智能科技有限公司 Multi-target tracking method and device, storage medium and electronic equipment

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246551A (en) * 2008-03-07 2008-08-20 北京航空航天大学 Fast license plate locating method
CN101393609A (en) * 2008-09-18 2009-03-25 北京中星微电子有限公司 Target detection tracking method and device
CN101739551A (en) * 2009-02-11 2010-06-16 北京智安邦科技有限公司 Method and system for identifying moving objects
CN103077392A (en) * 2012-12-30 2013-05-01 信帧电子技术(北京)有限公司 Automobile logo detection method and device
CN107085707A (en) * 2017-04-14 2017-08-22 河海大学 A kind of license plate locating method based on Traffic Surveillance Video
CN108734103A (en) * 2018-04-20 2018-11-02 复旦大学 The detection of moving target and tracking in satellite video
CN109859250A (en) * 2018-11-20 2019-06-07 北京悦图遥感科技发展有限公司 Aerial infrared video multi-target detection and tracking method and device
CN110084302A (en) * 2019-04-26 2019-08-02 西安因诺航空科技有限公司 A kind of crack detection method based on remote sensing images
CN110135314A (en) * 2019-05-07 2019-08-16 电子科技大学 A kind of multi-object tracking method based on depth Trajectory prediction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘学兵 (Liu Xuebing). Research on machine-vision defect detection methods for the appearance of oral liquid bottles. China Masters' Theses Full-text Database, Information Science and Technology, 2019, pp. 23-24, 41-47. *

Also Published As

Publication number Publication date
CN111369570A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
CN111369570B (en) Multi-target detection tracking method for video image
EP0712094B1 (en) A multi-windowing technique for threshholding an image using local image properties
CN111539980B (en) Multi-target tracking method based on visible light
CN107784669A (en) A kind of method that hot spot extraction and its barycenter determine
CN111062293B (en) Unmanned aerial vehicle forest flame identification method based on deep learning
CN110415208A (en) A kind of adaptive targets detection method and its device, equipment, storage medium
CN109241973B (en) Full-automatic soft segmentation method for characters under texture background
CN111415363A (en) Image edge identification method
US6701026B1 (en) Method and apparatus for cancelling lighting variations in object recognition
CN108765463B (en) Moving target detection method combining region extraction and improved textural features
CN113780110A (en) Method and device for detecting weak and small targets in image sequence in real time
CN112288780B (en) Multi-feature dynamically weighted target tracking algorithm
Zhu et al. Optimization of image processing in video-based traffic monitoring
CN110633705A (en) Low-illumination imaging license plate recognition method and device
CN110930358A (en) Solar panel image processing method based on self-adaptive algorithm
CN114581901A (en) Method for extracting edges of ancient building wall contaminated inscription character images
CN110472472B (en) Airport detection method and device based on SAR remote sensing image
Abdusalomov et al. Robust shadow removal technique for improving image enhancement based on segmentation method
CN112949389A (en) Haze image target detection method based on improved target detection network
Long et al. An Efficient Method For Dark License Plate Detection
CN112258548B (en) Moving target extraction method based on improved ViBe algorithm
CN115937263B (en) Vision-based target tracking method, system, electronic equipment and storage medium
CN111476821B (en) Target tracking method based on online learning
CN112802055B (en) Target ghost detection and edge propagation inhibition algorithm
CN110782405B (en) Point target and dark spot image background balancing method based on gradient identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant