CN112863194B - Image processing method, device, terminal and medium - Google Patents


Info

Publication number
CN112863194B
CN112863194B (application CN202110076506.0A)
Authority
CN
China
Prior art keywords
traffic light
pictures
picture
detection
module
Prior art date
Legal status
Active
Application number
CN202110076506.0A
Other languages
Chinese (zh)
Other versions
CN112863194A (en)
Inventor
任永腾
石柱国
王堃
Current Assignee
Anhui Issa Data Technology Co ltd
Beijing Isa Intelligent Technology Co ltd
Qingdao Yisa Data Technology Co Ltd
Original Assignee
Anhui Issa Data Technology Co ltd
Beijing Isa Intelligent Technology Co ltd
Qingdao Yisa Data Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Anhui Issa Data Technology Co ltd, Beijing Isa Intelligent Technology Co ltd, Qingdao Yisa Data Technology Co Ltd
2021-01-20: Priority to CN202110076506.0A
2021-05-28: Publication of CN112863194A
2022-08-23: Application granted; publication of CN112863194B
Legal status: Active
Anticipated expiration: not stated

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/01: Detecting movement of traffic to be counted or controlled
    • G08G 1/017: Detecting movement of traffic to be counted or controlled, identifying vehicles
    • G08G 1/0175: Detecting movement of traffic to be counted or controlled, identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing method, which comprises the following steps: acquiring vehicle-passing data and pictures; classifying the pictures captured at the same point location according to the vehicle-passing data to obtain a picture set; selecting a set number of pictures from the picture set of the same point location according to different conditions; performing traffic light and lane line detection on the selected pictures to obtain initial recognition results for the different pictures at the same point location; taking a weighted average of the initial recognition results for the same target object to obtain a final recognition result; and storing the final recognition result of each point location in a database by point location. The method automatically labels scene images, which improves labeling efficiency and reduces the manual labeling workload. Taking a weighted average of the initial recognition results for the same target object makes detection more robust to degradation caused by objective factors, improves the ability to respond to scene drift caused by those factors, and yields higher accuracy than manual labeling.

Description

Image processing method, device, terminal and medium
Technical Field
The invention relates to the technical field of software, in particular to an image processing method, an image processing device, an image processing terminal and an image processing medium.
Background
A traffic violation prequalification analysis system relies on technologies such as artificial intelligence and deep learning to perform a preliminary machine audit of the violation data collected at the front end: it removes waste images and mines the real, effective violation data, which is provided to a violation auditing platform for law-enforcement handling, thereby reducing the manual auditing workload and improving auditing efficiency. However, current traffic violation prequalification analysis systems rely heavily on manually labeled scenes, and only labeled scenes can be prequalified with artificial intelligence, so such a system needs a large amount of manual labeling when first deployed, which seriously affects efficiency. Moreover, manually labeled scenes offer no way to handle objective factors such as scene drift and actual road surface changes, so once a scene changes, a large number of erroneous results are produced, the prequalification analysis must be interrupted immediately for manual re-labeling, and the user experience suffers badly.
Disclosure of Invention
To address these defects in the prior art, the image processing method, device, terminal and medium provided by the embodiments of the present invention label images automatically, which reduces the manual workload and improves labeling efficiency, and take a weighted average over several images of the same target, which improves the accuracy of target labeling.
In a first aspect, an embodiment of the present invention provides an image processing method, including the following steps:
acquiring vehicle-passing data and pictures;
classifying the pictures captured at the same point location according to the vehicle-passing data to obtain a picture set;
selecting a set number of pictures from the picture set of the same point location according to different conditions;
performing traffic light and lane line detection on the selected pictures to obtain initial recognition results for the different pictures at the same point location;
taking a weighted average of the initial recognition results for the same target object to obtain a final recognition result;
and storing the final recognition result of each point location in a database by point location.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, comprising: a data acquisition module, a classification module, a picture selection module, a target detection module, a processing module, and a storage module, wherein
the data acquisition module is configured to acquire vehicle-passing data and pictures;
the classification module is configured to classify the pictures captured at the same point location according to the vehicle-passing data to obtain a picture set;
the picture selection module is configured to select a set number of pictures from the picture set of the same point location according to different conditions;
the target detection module is configured to perform traffic light and lane line detection on the selected pictures to obtain initial recognition results for the different pictures at the same point location;
the processing module is configured to take a weighted average of the initial recognition results for the same target object to obtain a final recognition result;
and the storage module is configured to store the final recognition result of each point location in a database by point location.
In a third aspect, an embodiment of the present invention provides an intelligent terminal comprising a processor, an input device, an output device, and a memory that are connected to each other. The memory stores a computer program comprising program instructions, and the processor is configured to call the program instructions to execute the method described in the foregoing embodiments.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method described in the foregoing embodiments.
The invention has the beneficial effects that:
the image processing method and the image processing device provided by the embodiment of the invention can automatically label the scene graph, improve the labeling efficiency, reduce the workload of manual labeling, enhance the resistance of poor detection effect caused by objective factors by performing weighted average processing on the initial identification result of the same target object, improve the response capability of scene deviation caused by the objective factors, and have higher accuracy than the manual labeling.
The embodiment of the invention provides an intelligent terminal and a medium, which have the same inventive concept and the same beneficial effects as the image processing method.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
Fig. 1 shows a flowchart of an image processing method according to a first embodiment of the present invention;
fig. 2 is a block diagram showing a configuration of an image processing apparatus according to another embodiment of the present invention;
fig. 3 shows a block diagram of an intelligent terminal according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the present invention belongs.
As shown in fig. 1, a first embodiment of the present invention provides an image processing method. The method is suitable for an image processing apparatus and includes the following steps:
s1: and acquiring vehicle passing data and pictures.
Specifically, vehicle passing data and pictures are obtained from the electric police/card system and stored in the database, and the vehicle passing data and the pictures are obtained from the database. The data used for storing the vehicle passing data and the pictures has no requirement and can be selected at will, but the vehicle passing data and the pictures at the same point are required to be put together during storage, so that the final recognition result can be conveniently selected and stored according to the requirement in the follow-up process.
S2: classify the pictures captured at the same point location according to the vehicle-passing data to obtain a picture set.
S3: select a set number of pictures from the picture set of the same point location according to different conditions.
Specifically, classifying pictures by point location makes it convenient to later work with the picture set of a given point location as needed. For the vehicle-passing data and pictures of the same point location, a certain number of pictures are randomly selected according to factors in the data such as passing time, illumination intensity, weather conditions, and road congestion. For example: the passing time is generally present in the vehicle-passing data; if it is absent, the system time at which the real-time data was acquired can simply be used. Illumination intensity, traffic flow and the like can be roughly inferred from the passing time, so pictures taken between 9:00-11:00 and 14:00-15:00 are preferred: in these periods the illumination is good, the road surface is clearly visible, and the morning and evening rush hours are avoided, so there are few interfering factors such as vehicles and pedestrians on the road, which improves traffic light and lane line detection and recognition. After illumination intensity has been roughly inferred from the passing time, the brightness of the pictures in the same time period is computed with a computer vision library such as OpenCV, using, e.g., the pixel-average method, the RMS (root mean square) method, or a formula-based method. The brightest and the darkest pictures (1 to 5 of each) in the same time period are then removed, reducing the impact of over-bright or over-dark pictures, caused by direct sunlight or cloud cover, on detection and recognition. Weather conditions can be obtained in real time along with the vehicle-passing data; weather affects picture quality across different dates in the same time period, so pictures taken in weather favorable to detection and recognition, such as clear days rather than rainy days, are selected as far as possible. Road congestion can be estimated by detecting the number of vehicles in the current picture; when there are many vehicles, target objects such as lane lines may be occluded, degrading detection and recognition, so pictures with few vehicles are selected as far as possible.
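As a concrete illustration of the brightness filtering step above, here is a minimal Python sketch using OpenCV's pixel-average method. The helper names, the file-path interface, and the choice to drop three pictures at each extreme are assumptions for illustration, not part of the patented method.

```python
import cv2
import numpy as np

def brightness(path: str) -> float:
    """Mean grayscale pixel value (the pixel-average method); an RMS variant
    would be np.sqrt(np.mean(gray.astype(np.float64) ** 2))."""
    img = cv2.imread(path)  # BGR image; None if the file is unreadable
    if img is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return float(np.mean(gray))

def filter_by_brightness(paths, drop=3):
    """Drop the `drop` brightest and `drop` darkest pictures of one time
    period (the text removes 1 to 5 at each extreme; 3 is an assumption)."""
    ranked = sorted(paths, key=brightness)
    return ranked if len(ranked) <= 2 * drop else ranked[drop:-drop]
```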
S4: perform traffic light and lane line detection on the selected pictures to obtain the initial recognition results for the different pictures at the same point location.
Specifically, traffic light and lane line detection is performed on each selected picture of the same point location.
The specific steps of traffic light detection include:
detecting the selected picture with a traffic light group detection model to obtain the coordinate position of the traffic light group and whether it is arranged horizontally or vertically; extracting the image features of the traffic light group according to its coordinate position; recognizing those image features with a traffic light bulb detection and recognition model to obtain the position and color of each bulb; and judging the traffic light state from the color and position of the bulbs. By recognizing the coordinate position and color of each traffic light bulb, the traffic light state can be obtained accurately.
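The crop-then-classify pipeline above can be sketched as follows. The light group box is assumed to come from the detection model; the HSV color thresholds merely stand in for the traffic light bulb detection and recognition model, which the patent does not specify, so treat them as illustrative assumptions.

```python
import cv2
import numpy as np

def crop_light_group(img, box):
    """Cut out the traffic light group region from the picture given the
    (x, y, w, h) box returned by the light group detection model."""
    x, y, w, h = box
    return img[y:y + h, x:x + w]

def bulb_color(bulb_roi):
    """Vote on a bulb's color from HSV pixel counts; this stands in for the
    traffic light bulb detection and recognition model (an assumption)."""
    hsv = cv2.cvtColor(bulb_roi, cv2.COLOR_BGR2HSV)
    masks = {
        "red": cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
             | cv2.inRange(hsv, (160, 100, 100), (180, 255, 255)),
        "yellow": cv2.inRange(hsv, (20, 100, 100), (35, 255, 255)),
        "green": cv2.inRange(hsv, (40, 100, 100), (90, 255, 255)),
    }
    return max(masks, key=lambda c: int(np.count_nonzero(masks[c])))
```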
The traffic light group detection model is trained with the YOLOv4 target detection algorithm. Before training, a certain number of pictures are prepared, and the position of the traffic light group is manually annotated on each picture. During training, for a given image, the coordinates of the lamp groups are first predicted using the parameters of a pre-trained model; the predictions are then compared with the manually annotated coordinates, the error is computed, the gradient of each parameter of the model is derived from the error, and the parameter values are adjusted along the gradient. These steps are repeated until the error falls below a certain threshold or the number of iterations exceeds a certain limit, yielding the traffic light group detection model.
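The training procedure described here is ordinary supervised gradient descent, which the following PyTorch-style sketch mirrors: predict lamp-group coordinates, compare them with the manual annotations, backpropagate the error, and stop on an error threshold or an epoch limit. The model, data loader, loss function, and learning rate are assumptions; this is not the actual YOLOv4 implementation.

```python
import torch
import torch.nn.functional as F

def train_light_group_detector(model, loader, max_epochs=100, threshold=0.01):
    """Predict lamp-group coordinates, compare them with the manually
    annotated boxes, backpropagate the error, and adjust the parameters
    along the gradient; stop when the mean error drops below `threshold`
    or the epoch limit is reached, as the text describes."""
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    for _ in range(max_epochs):
        total = 0.0
        for images, target_boxes in loader:  # pictures + manual annotations
            pred_boxes = model(images)       # predicted lamp-group coordinates
            loss = F.smooth_l1_loss(pred_boxes, target_boxes)
            optimizer.zero_grad()
            loss.backward()                  # gradient of each parameter
            optimizer.step()                 # adjust parameters along gradient
            total += loss.item()
        if total / len(loader) < threshold:  # error below threshold: done
            break
    return model
```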
The specific steps of lane line detection include:
performing an initial segmentation of the selected picture with a Mask-RCNN model to obtain the positions of all pixels representing lane lines in the picture; connecting adjacent pixels to obtain the edge curve of each lane line; and performing a least-squares fit on each lane line edge curve to obtain a linear description function of the lane line and segment out the lane areas. The Mask-RCNN model traverses every pixel in the picture, judges whether it is a lane line pixel, and outputs the positions of the lane line pixels in the picture. Applying a deep learning classification algorithm to the segmented lane areas yields the direction categories in which each lane permits traffic, e.g., whether the lane allows going straight, turning left, or turning right.
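The least-squares fit over each lane line's edge pixels reduces to a one-dimensional polynomial fit, sketched below with NumPy. The example pixel coordinates are invented, and the grouping of pixels into individual lane lines is assumed to have been done by the Mask-RCNN stage.

```python
import numpy as np

def fit_lane_line(pixels):
    """Least-squares straight-line fit over one lane line's edge pixels.
    `pixels` is an (N, 2) array of (x, y) positions; fitting x as a
    function of y keeps near-vertical lane lines numerically stable."""
    pts = np.asarray(pixels, dtype=np.float64)
    a, b = np.polyfit(pts[:, 1], pts[:, 0], deg=1)
    return a, b  # linear description function: x = a * y + b

# Invented example: two fitted lane lines bound one lane area between them.
left = fit_lane_line([(100, 720), (140, 600), (180, 480)])
right = fit_lane_line([(600, 720), (580, 600), (560, 480)])
```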
S5: take a weighted average of the initial recognition results for the same target object to obtain the final recognition result.
Step S4 yields the initial recognition results of different pictures at the same point location. To be more robust against detection degradation caused by objective factors, the initial recognition results for the same target object are averaged with confidence weights to obtain the final recognition result. For example: if the traffic light box in picture 1 is at position (x1, y1, w1, h1) with confidence s1, and the traffic light box in picture 2 is at (x2, y2, w2, h2) with confidence s2, then the weighted-average traffic light box is ((x1·s1 + x2·s2)/(s1 + s2), (y1·s1 + y2·s2)/(s1 + s2), (w1·s1 + w2·s2)/(s1 + s2), (h1·s1 + h2·s2)/(s1 + s2)). The box obtained by weighted averaging is more stable than one obtained by manual labeling or by automatic detection on a single picture.
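The confidence-weighted average generalizes directly to any number of pictures. A minimal sketch of the formula above, with invented example boxes:

```python
def weighted_average_box(boxes):
    """Confidence-weighted average of one target object's boxes across
    pictures; each box is (x, y, w, h, s) with s the detection confidence."""
    total = sum(b[4] for b in boxes)
    return tuple(sum(b[i] * b[4] for b in boxes) / total for i in range(4))

# Invented example: the same traffic light box detected in two pictures.
final = weighted_average_box([(100, 50, 40, 90, 0.9), (104, 52, 38, 92, 0.7)])
# final == ((100*0.9 + 104*0.7)/1.6, (50*0.9 + 52*0.7)/1.6, ...)
```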
S6: store the final recognition result of each point location in a database by point location.
Storing the final recognition result of each point location in a database by point location makes it convenient for the subsequent violation prequalification logic to query it. The final recognition result of each point location can be re-recognized at regular intervals (e.g., weekly), i.e., steps S1 to S6 are repeated periodically and the final recognition result of each point location is updated periodically. This improves the ability to respond to label invalidation caused by objective factors such as scene drift and actual road surface changes, requires no manual labeling time, and reduces the impact on the video violation pre-audit system.
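A minimal sketch of this periodic re-recognition loop follows; run_pipeline stands for the S1 to S6 pipeline and is an assumption, and a real deployment would more likely use a job scheduler than a sleep loop.

```python
import time

WEEK = 7 * 24 * 3600  # one week in seconds

def refresh_point_locations(run_pipeline, period=WEEK):
    """Re-run steps S1 to S6 at a fixed interval so every point location's
    final recognition result tracks scene drift and road surface changes."""
    while True:
        run_pipeline()      # executes S1-S6 and updates the database
        time.sleep(period)
```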
The image processing method provided by the first embodiment of the invention automatically labels scene images, which improves labeling efficiency and reduces the manual labeling workload. Taking a weighted average of the initial recognition results for the same target object makes detection more robust to degradation caused by objective factors, improves the ability to respond to scene drift caused by those factors, and yields higher accuracy than manual labeling.
In the first embodiment described above, an image processing method is provided, and correspondingly, the present application also provides an image processing apparatus. Please refer to fig. 2, which is a block diagram of an image processing apparatus according to another embodiment of the present invention. Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for relevant points. The device embodiments described below are merely illustrative.
As shown in fig. 2, another embodiment of the present invention provides an image processing apparatus comprising: a data acquisition module, a classification module, a picture selection module, a target detection module, a processing module, and a storage module. The data acquisition module acquires vehicle-passing data and pictures; the classification module classifies the pictures captured at the same point location according to the vehicle-passing data to obtain a picture set; the picture selection module selects a set number of pictures from the picture set of the same point location according to different conditions; the target detection module performs traffic light and lane line detection on the selected pictures to obtain initial recognition results for the different pictures at the same point location; the processing module takes a weighted average of the initial recognition results for the same target object to obtain a final recognition result; and the storage module stores the final recognition result of each point location in a database by point location.
The image processing apparatus provided by this embodiment of the invention automatically labels scene images, which improves labeling efficiency and reduces the manual labeling workload, and automatic labeling is more accurate than manual labeling. Taking a weighted average of the initial recognition results for the same target object makes detection more robust to degradation caused by objective factors and improves the ability to respond to scene drift caused by those factors.
In another embodiment of the image processing apparatus provided by the present invention, the target detection module includes a traffic light detection unit configured to perform traffic light detection on the selected picture. The detection specifically includes: detecting the selected picture with the traffic light group detection model to obtain the coordinate position of the traffic light group and whether it is arranged horizontally or vertically; extracting the image features of the traffic light group according to its coordinate position; recognizing those image features with the traffic light bulb detection and recognition model to obtain the position and color of each bulb; and judging the traffic light state from the color and position of the bulbs.
In another embodiment of the image processing apparatus provided by the present invention, the target detection module further includes a lane line detection unit configured to perform lane line detection on the selected picture. The detection specifically includes: performing an initial segmentation of the selected picture with a Mask-RCNN model to obtain the positions of all pixels representing lane lines in the picture; connecting adjacent pixels to obtain the edge curve of each lane line; and performing a least-squares fit on each lane line edge curve to obtain a linear description function of the lane line and segment out the lane areas.
In another embodiment of the image processing apparatus provided by the present invention, the apparatus further includes a periodic update module configured to periodically update the final recognition result of each point location. Periodically updating the final recognition result of each point location improves the ability to respond to label invalidation caused by objective factors such as scene drift and actual road surface changes, requires no manual labeling time, and reduces the impact on the video violation pre-audit system.
As shown in fig. 3, another embodiment of the present invention provides an intelligent terminal comprising a processor, an input device, an output device, and a memory that are connected to each other. The memory stores a computer program comprising program instructions, and the processor is configured to call the program instructions to execute the method steps described in the foregoing embodiments.
It should be understood that in the embodiments of the present invention, the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device may include a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, etc., and the output device may include a display (LCD, etc.), a speaker, etc.
The memory may include both read-only memory and random access memory, and provides instructions and data to the processor. A portion of the memory may also include non-volatile random access memory. For example, the memory may also store device type information.
In a specific implementation, the processor, the input device, and the output device described in the embodiments of the present invention may execute the implementation described in the method embodiments provided in the embodiments of the present invention, and may also execute the implementation described in the system embodiments in the embodiments of the present invention, which is not described herein again.
An embodiment of a computer-readable storage medium is also provided in the present invention, which computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method steps described in the above embodiment.
The computer readable storage medium may be an internal storage unit of the terminal described in the foregoing embodiment, for example, a hard disk or a memory of the terminal. The computer readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the terminal. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the terminal. The computer-readable storage medium is used for storing the computer program and other programs and data required by the terminal. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To illustrate this interchangeability of hardware and software, the components and steps of the various examples have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal and method can be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and should be covered by the claims of the present invention.

Claims (8)

1. An image processing method, characterized by comprising the steps of:
acquiring vehicle-passing data and pictures;
classifying the pictures captured at the same point location according to the vehicle-passing data to obtain a picture set;
selecting a set number of pictures from the picture set of the same point location according to different conditions;
performing traffic light and lane line detection on the selected pictures to obtain initial recognition results for the different pictures at the same point location;
taking a weighted average of the initial recognition results for the same target object to obtain a final recognition result;
and storing the final recognition result of each point location in a database by point location;
wherein the different conditions include, but are not limited to, vehicle passing time, illumination intensity, weather conditions, and road congestion conditions;
wherein the specific steps of traffic light detection on the selected picture comprise:
detecting the selected picture with a traffic light group detection model to obtain the coordinate position of the traffic light group and whether it is arranged horizontally or vertically;
obtaining the image features of the traffic light group according to the coordinate position of the traffic light group;
recognizing the image features of the traffic light group with a traffic light bulb detection and recognition model to obtain the position and color of each bulb;
and judging the traffic light state from the color and position of the bulbs;
the method further comprising: training the traffic light group detection model with a YOLOv4 target detection algorithm, wherein a certain number of pictures are prepared before training and the position of the traffic light group is manually annotated on each picture.
2. The method of claim 1, wherein the specific steps of lane line detection on the selected picture comprise:
performing an initial segmentation of the selected picture with a Mask-RCNN model to obtain the positions of all pixels representing lane lines in the picture;
connecting adjacent pixels to obtain the edge curve of each lane line;
and performing a least-squares fit on each lane line edge curve to obtain a linear description function of the lane line and segment out the lane areas.
3. The method of claim 1, further comprising: periodically updating the final recognition result of each point location.
4. An image processing apparatus, characterized by comprising: a data acquisition module, a classification module, a picture selection module, a target detection module, a processing module, and a storage module, wherein
the data acquisition module is configured to acquire vehicle-passing data and pictures;
the classification module is configured to classify the pictures captured at the same point location according to the vehicle-passing data to obtain a picture set;
the picture selection module is configured to select a set number of pictures from the picture set of the same point location according to different conditions;
the target detection module is configured to perform traffic light and lane line detection on the selected pictures to obtain initial recognition results for the different pictures at the same point location;
the processing module is configured to take a weighted average of the initial recognition results for the same target object to obtain a final recognition result;
and the storage module is configured to store the final recognition result of each point location in a database by point location;
the different conditions include, but are not limited to, vehicle passing time, illumination intensity, weather conditions, and road congestion conditions;
the target detection module comprises a traffic light detection unit configured to perform traffic light detection on the selected picture, the detection specifically comprising:
detecting the selected picture with a traffic light group detection model to obtain the coordinate position of the traffic light group and whether it is arranged horizontally or vertically;
obtaining the image features of the traffic light group according to the coordinate position of the traffic light group;
recognizing the image features of the traffic light group with a traffic light bulb detection and recognition model to obtain the position and color of each bulb;
and judging the traffic light state from the color and position of the bulbs;
and the traffic light group detection model is trained with a YOLOv4 target detection algorithm, a certain number of pictures being prepared before training and the position of the traffic light group being manually annotated on each picture.
5. The apparatus of claim 4, wherein the target detection module further comprises a lane line detection unit configured to perform lane line detection on the selected picture, the detection specifically comprising:
performing an initial segmentation of the selected picture with a Mask-RCNN model to obtain the positions of all pixels representing lane lines in the picture;
connecting adjacent pixels to obtain the edge curve of each lane line;
and performing a least-squares fit on each lane line edge curve to obtain a linear description function of the lane line and segment out the lane areas.
6. The apparatus of claim 4, further comprising a periodic update module configured to periodically update the final recognition result of each point location.
7. An intelligent terminal comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, the memory being adapted to store a computer program, the computer program comprising program instructions, characterized in that the processor is configured to invoke the program instructions to perform the method according to any of claims 1-3.
8. A computer-readable storage medium, characterized in that the storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-3.
CN202110076506.0A 2021-01-20 2021-01-20 Image processing method, device, terminal and medium Active CN112863194B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110076506.0A CN112863194B (en) 2021-01-20 2021-01-20 Image processing method, device, terminal and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110076506.0A CN112863194B (en) 2021-01-20 2021-01-20 Image processing method, device, terminal and medium

Publications (2)

Publication Number | Publication Date
CN112863194A (en) | 2021-05-28
CN112863194B (en) | 2022-08-23

Family

ID=76007766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110076506.0A Active CN112863194B (en) 2021-01-20 2021-01-20 Image processing method, device, terminal and medium

Country Status (1)

Country Link
CN (1) CN112863194B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688957A (en) * 2021-10-26 2021-11-23 苏州浪潮智能科技有限公司 Target detection method, device, equipment and medium based on multi-model fusion
CN114255598A (en) * 2021-12-13 2022-03-29 以萨技术股份有限公司 Vehicle illegal data processing method, system and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163109A (en) * 2019-04-23 2019-08-23 浙江大华技术股份有限公司 A kind of lane line mask method and device
CN110197589A (en) * 2019-05-29 2019-09-03 杭州诚道科技股份有限公司 A kind of illegal detection method of making a dash across the red light based on deep learning
CN111126323A (en) * 2019-12-26 2020-05-08 广东星舆科技有限公司 Bayonet element recognition and analysis method and system serving for traffic violation detection
CN111582189A (en) * 2020-05-11 2020-08-25 腾讯科技(深圳)有限公司 Traffic signal lamp identification method and device, vehicle-mounted control terminal and motor vehicle

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105453153B (en) * 2013-08-20 2018-07-20 哈曼国际工业有限公司 Traffic lights detects
CN105719284B (en) * 2016-01-18 2018-11-06 腾讯科技(深圳)有限公司 A kind of data processing method, device and terminal
JP6971177B2 (en) * 2018-03-09 2021-11-24 フォルシアクラリオン・エレクトロニクス株式会社 Compartment line recognition device
CN108470159B (en) * 2018-03-09 2019-12-20 腾讯科技(深圳)有限公司 Lane line data processing method and device, computer device and storage medium
CN111160282B (en) * 2019-12-31 2023-03-24 合肥湛达智能科技有限公司 Traffic light detection method based on binary Yolov3 network
CN111325988A (en) * 2020-03-10 2020-06-23 北京以萨技术股份有限公司 Real-time red light running detection method, device and system based on video and storage medium
CN111488808B (en) * 2020-03-31 2023-09-29 杭州诚道科技股份有限公司 Lane line detection method based on traffic violation image data
CN111680580A (en) * 2020-05-22 2020-09-18 北京格灵深瞳信息技术有限公司 Red light running identification method and device, electronic equipment and storage medium
CN111860319B (en) * 2020-07-20 2024-03-26 阿波罗智能技术(北京)有限公司 Lane line determining method, positioning accuracy evaluating method, device and equipment
CN112053407B (en) * 2020-08-03 2024-04-09 杭州电子科技大学 Automatic lane line detection method based on AI technology in traffic law enforcement image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163109A (en) * 2019-04-23 2019-08-23 浙江大华技术股份有限公司 A kind of lane line mask method and device
CN110197589A (en) * 2019-05-29 2019-09-03 杭州诚道科技股份有限公司 A kind of illegal detection method of making a dash across the red light based on deep learning
CN111126323A (en) * 2019-12-26 2020-05-08 广东星舆科技有限公司 Bayonet element recognition and analysis method and system serving for traffic violation detection
CN111582189A (en) * 2020-05-11 2020-08-25 腾讯科技(深圳)有限公司 Traffic signal lamp identification method and device, vehicle-mounted control terminal and motor vehicle

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Lane line detection and adaptive fitting algorithm based on instance segmentation; Tian Jin et al.; Journal of Computer Applications (计算机应用); 2020-07-10; pp. 1932-1937 *
Research on traffic light recognition algorithm based on deep learning and OpenCV; Yu Chuli, Zhu Qiang; Shanghai Auto (上海汽车); 2019-07-31; pp. 19-22 *

Also Published As

Publication number Publication date
CN112863194A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
EP3806064B1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN107506760B (en) Traffic signal detection method and system based on GPS positioning and visual image processing
Nienaber et al. Detecting potholes using simple image processing techniques and real-world footage
US11380104B2 (en) Method and device for detecting illegal parking, and electronic device
CN104036262B (en) A kind of method and system of LPR car plates screening identification
WO2017171659A1 (en) Signal light detection
CN112863194B (en) Image processing method, device, terminal and medium
CN106355180B (en) A kind of license plate locating method combined based on color with edge feature
CN108109133B (en) Silkworm egg automatic counting method based on digital image processing technology
CN111695373B (en) Zebra stripes positioning method, system, medium and equipment
JP2011216051A (en) Program and device for discriminating traffic light
CN110991221A (en) Dynamic traffic red light running identification method based on deep learning
CN113537037A (en) Pavement disease identification method, system, electronic device and storage medium
CN115908774B (en) Quality detection method and device for deformed materials based on machine vision
CN111046741A (en) Method and device for identifying lane line
CN106570440A (en) People counting method and people counting device based on image analysis
CN112818853A (en) Traffic element identification method, device, equipment and storage medium
KR100903816B1 (en) System and human face detection system and method in an image using fuzzy color information and multi-neural network
CN115965934A (en) Parking space detection method and device
CN111680580A (en) Red light running identification method and device, electronic equipment and storage medium
CN111524121A (en) Road and bridge fault automatic detection method based on machine vision technology
CN109766846B (en) Video-based self-adaptive multi-lane traffic flow detection method and system
TWI498830B (en) A method and system for license plate recognition under non-uniform illumination
CN111695374B (en) Segmentation method, system, medium and device for zebra stripes in monitoring view angles
CN109800693B (en) Night vehicle detection method based on color channel mixing characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 266400 Room 302, building 3, Office No. 77, Lingyan Road, Huangdao District, Qingdao, Shandong Province

Applicant after: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.

Applicant after: Anhui Issa Data Technology Co.,Ltd.

Applicant after: Beijing isa Intelligent Technology Co.,Ltd.

Address before: 266000 3rd floor, building 3, optical valley software park, 396 Emeishan Road, Huangdao District, Qingdao City, Shandong Province

Applicant before: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.

Applicant before: Anhui Issa Data Technology Co.,Ltd.

Applicant before: Beijing isa Intelligent Technology Co.,Ltd.

GR01 Patent grant