CN109960959B - Method and apparatus for processing image - Google Patents

Method and apparatus for processing image

Info

Publication number
CN109960959B
Authority
CN
China
Prior art keywords
lane line
line
position information
target image
pixel points
Prior art date
Legal status
Active
Application number
CN201711337034.XA
Other languages
Chinese (zh)
Other versions
CN109960959A (en)
Inventor
李旭斌
傅依
文石磊
刘霄
丁二锐
孙昊
郭鹏
蒋子谦
李亮
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201711337034.XA
Publication of CN109960959A
Application granted
Publication of CN109960959B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/56 Extraction of image or video features relating to colour
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a method and an apparatus for processing an image. One embodiment of the method comprises: determining, based on the color of each of a plurality of pixel points of a target image, the position information of the pixel points where at least one lane line is located from among the position information of the plurality of pixel points in the target image; performing linear fitting on the position information of the pixel points where the at least one lane line is located, to determine the position information of at least two positions on the line segment where each lane line of the at least one lane line is located in the target image; and determining the area where each lane line is located in the target image based on the position information of the at least two positions on the line segment where that lane line is located, and determining the line type of the lane line in each area. The embodiment of the application improves the accuracy of identifying lane lines.

Description

Method and apparatus for processing image
Technical Field
The embodiment of the application relates to the field of computer technology, in particular to the field of internet technology, and specifically to a method and an apparatus for processing images.
Background
Lane lines are the indication lines most frequently encountered while driving on roads, and almost all roads are marked with them. Acquiring lane line information makes it possible to guide the driver and better ensure the driving safety of the vehicle.
Disclosure of Invention
The embodiment of the application provides a method and a device for processing an image.
In a first aspect, an embodiment of the present application provides a method for processing an image, including: determining, based on the color of each of a plurality of pixel points of a target image, the position information of the pixel points where at least one lane line is located from among the position information of the plurality of pixel points in the target image; performing linear fitting on the position information of the pixel points where the at least one lane line is located, to determine the position information of at least two positions on the line segment where each lane line of the at least one lane line is located in the target image; and determining the area where each lane line is located in the target image based on the position information of the at least two positions on the line segment where that lane line is located, and determining the line type of the lane line in each area.
In some embodiments, performing the linear fitting on the position information of the pixel points where the at least one lane line is located to determine the position information of at least two positions on the line segment where each lane line is located in the target image includes: performing linear fitting on the position information of the pixel points where the at least one lane line is located to obtain a function of the line segment where each lane line of the at least one lane line is located; and, for each lane line, determining at least two pieces of position information in the target image that conform to the function of the line segment where that lane line is located.
In some embodiments, determining, among the position information of the plurality of pixel points in the target image, the position information of the pixel points where the at least one lane line is located based on the color of each pixel point includes: inputting the plurality of pixel points of the target image and their position information in the target image into a pre-trained color classification model, wherein the color classification model is used for classifying pixel points by color and outputting the pixel points, together with their position information, that belong to a lane line color category; and obtaining, for each of at least one lane line color category output by the color classification model, the pixel points of that color category and their position information in the target image.
In some embodiments, the position information of the at least two positions is the position information of the two end points of the line segment where the lane line is located and the position information of the midpoints of the two broad sides of the area, and the area where each lane line is located is a rectangle; and determining the area where each lane line is located in the target image based on the position information of the at least two positions includes: for each lane line, determining in the target image, based on the position information of the midpoints of the two broad sides of the area where the line segment of that lane line is located, a rectangular area whose width is a preset width and whose length is the length of the line connecting the two end points.
In some embodiments, determining the line type of the lane lines in each zone comprises: and inputting the area where each lane line is located into a pre-trained line type classification model to obtain the line type of the lane line included in each area output by the line type classification model, wherein the line type classification model is used for classifying the line type of the lane line.
In some embodiments, the method further comprises: and performing straight line detection and/or frequency domain detection on the previously acquired image, determining a sub-image where the lane line of the acquired image is positioned, and determining the sub-image as a target image.
In a second aspect, an embodiment of the present application provides an apparatus for processing an image, including: the determining unit is configured to determine the position information of a plurality of pixel points where at least one lane line is located in the position information of the plurality of pixel points in the target image based on the color of each pixel point in the plurality of pixel points in the target image; the fitting unit is configured to perform linear fitting on the position information of a plurality of pixel points where at least one lane line is located so as to determine the position information of at least two positions on a line segment where each lane line in the at least one lane line is located in the target image; and the line type determining unit is configured to determine the area of each lane line in the target image based on the position information of at least two positions on the line segment where each lane line is located, and determine the line type of each lane line in each area.
In some embodiments, the fitting unit comprises: the fitting module is configured to perform linear fitting on the position information of a plurality of pixel points where at least one lane line is located to obtain a function of a line segment where each lane line in the at least one lane line is located; and the determining module is configured to determine at least two pieces of position information of the target image, which are in accordance with the function of the line segment where the lane line is located, for each lane line.
In some embodiments, the determining unit comprises: an input module configured to input the plurality of pixel points of the target image and their position information in the target image into a pre-trained color classification model, wherein the color classification model is used for classifying pixel points by color and outputting the pixel points, together with their position information, that belong to a lane line color category; and an output module configured to obtain, for each of at least one lane line color category output by the color classification model, the pixel points of that color category and their position information in the target image.
In some embodiments, the position information of the at least two positions is the position information of the two end points of the line segment where the lane line is located and the position information of the midpoints of the two broad sides of the area, and the area where each lane line is located is a rectangle; and the line type determining unit is further configured to: for each lane line, determine in the target image, based on the position information of the midpoints of the two broad sides of the area where the line segment of that lane line is located, a rectangular area whose width is a preset width and whose length is the length of the line connecting the two end points.
In some embodiments, the line type determination unit is further configured to: and inputting the area where each lane line is located into a pre-trained line type classification model to obtain the line type of the lane line included in each area output by the line type classification model, wherein the line type classification model is used for classifying the line type of the lane line.
In some embodiments, the apparatus further comprises: the pre-processing unit is used for carrying out straight line detection and/or frequency domain detection on the previously acquired image, determining a sub-image where the lane line of the acquired image is located, and determining the sub-image as a target image.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device for storing one or more programs which, when executed by one or more processors, cause the one or more processors to implement a method as in any embodiment of a method for processing images.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium on which a computer program is stored, which when executed by a processor, implements a method as in any one of the embodiments of the method for processing an image.
According to the method and the apparatus for processing an image, first, based on the color of each of a plurality of pixel points of the target image, the position information of the pixel points where at least one lane line is located is determined from among the position information of the plurality of pixel points in the target image. Then, linear fitting is performed on that position information to determine the position information of at least two positions on the line segment where each lane line of the at least one lane line is located in the target image. Finally, the area where each lane line is located in the target image is determined based on the position information of the at least two positions on the line segment where that lane line is located, and the line type of the lane line in each area is determined. The embodiment of the application improves the accuracy of identifying lane lines.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for processing an image according to the present application;
FIG. 3 is a schematic illustration of an application scenario of a method for processing an image according to the present application;
FIG. 4 is a flow diagram of yet another embodiment of a method for processing an image according to the present application;
FIG. 5 is a schematic block diagram of one embodiment of an apparatus for processing images according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing an electronic device according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for processing images or the apparatus for processing images of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as a vehicle navigation application, a web browser application, a shopping-like application, a search-like application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices that have a display screen and support communication, including but not limited to vehicles, vehicle navigation systems, smartphones, tablet computers, laptop computers, desktop computers, and the like.
The server 105 may be a server that provides various services, for example a background server that supports the display of lane line types on the terminal devices 101, 102, 103. The background server may analyze and otherwise process received data such as the target image, and feed the processing result (e.g., the line types of the lane lines in the target image) back to the terminal device.
It should be noted that the method for processing an image provided in the embodiment of the present application is generally performed by the server 105, and accordingly, the apparatus for processing an image is generally disposed in the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for processing an image according to the present application is shown. The method for processing the image comprises the following steps:
step 201, based on the color of each pixel point in the multiple pixel points of the target image, determining the position information of the multiple pixel points where at least one lane line is located in the position information of the multiple pixel points in the target image.
In this embodiment, the electronic device on which the method for processing an image runs (for example, the server shown in FIG. 1) determines, based on the color of each of a plurality of pixel points of a target image, the position information of the pixel points where at least one lane line is located from among the position information of the plurality of pixel points. The target image may be an image obtained locally or from another electronic device; specifically, it may be a top view or a front view of the road surface. To obtain a better image analysis result, a high-resolution image may be used as the target image. The target image contains a plurality of pixel points, and each pixel point has position information describing its location within the target image.
Specifically, the position information may be represented in the form of coordinates. Lane lines are lines drawn on the road that indicate where vehicles should drive; an image may contain one lane line or more than one. In practice, the pixel points where a lane line is located can be picked out from the pixel points of the target image, and their position information then determined. Because the color of a lane line differs from other colors in the scene (such as the base color of the road surface), these pixel points can be found by identifying the color of each pixel point: given the preset color of the lane line, the pixel points whose color matches it can be determined. Those pixel points are the pixel points where the lane line is located, so their position information is the position information of the pixel points where the lane line is located.
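As one illustration of this color-matching step, the following is a minimal Python sketch, assuming the target image is a BGR array as loaded by OpenCV and approximating the preset lane-line colors with fixed white and yellow HSV thresholds; the threshold values are assumptions, since the patent does not specify the preset colors.
```python
import cv2
import numpy as np

def lane_pixel_positions(image_bgr):
    """Return the (row, col) positions of pixels whose color matches a
    preset lane-line color (white or yellow paint), as in step 201."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Assumed thresholds: bright low-saturation pixels approximate white
    # paint; an OpenCV hue band of roughly 20-35 approximates yellow paint.
    white = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
    yellow = cv2.inRange(hsv, (20, 80, 120), (35, 255, 255))
    mask = cv2.bitwise_or(white, yellow)
    return np.argwhere(mask > 0)  # one (y, x) pair per lane-colored pixel
```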
Step 202, performing linear fitting on the position information of the plurality of pixel points where the at least one lane line is located to determine the position information of at least two positions on the line segment where each lane line in the at least one lane line is located in the target image.
In this embodiment, the electronic device performs linear fitting on the position information of the pixel points where the at least one lane line is located, so as to determine the position information of at least two positions on the line segment where each lane line of the at least one lane line is located. The position information here is position information in the target image, that is, the positions indicated by the two pieces of position information fall within the target image. Through linear fitting, the line segment where each lane line is located can be obtained, as well as how many such line segments there are; the number of line segments may be taken as the number of lane lines.
In practice, a fitted line segment may be represented by the position information of at least two locations falling on it, for example the coordinates (x₁, y₁) and (x₂, y₂); it may also be expressed as a function of the line segment.
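A minimal fitting sketch, assuming the lane-line pixels have already been grouped so that `points` holds the (y, x) positions of a single lane line (the grouping strategy itself is not fixed by the text):
```python
import numpy as np

def fit_lane_segment(points):
    """Fit one lane line's pixels with a line (step 202) and return two
    endpoints (x1, y1), (x2, y2) of the fitted segment.
    `points` is an (N, 2) array of (y, x) pixel coordinates."""
    ys = points[:, 0].astype(float)
    xs = points[:, 1].astype(float)
    # Regress x on y: lane lines are closer to vertical in road images,
    # so fitting x = a*y + b avoids near-infinite slopes.
    a, b = np.polyfit(ys, xs, deg=1)
    y1, y2 = ys.min(), ys.max()  # the segment spans the observed pixels
    return (a * y1 + b, y1), (a * y2 + b, y2)
```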
Step 203, determining the area of each lane line in the target image based on the position information of at least two positions on the line segment of each lane line, and determining the line type of each lane line in each area.
In this embodiment, the electronic device determines the area where each lane line is located in the target image based on the position information of at least two positions on the line segment where that lane line is located, and then determines the line type of the lane line within each determined area. The position information of the at least two positions serves to locate the lane line in the target image, which facilitates the subsequent determination of its line type. The line type of a lane line is the style it presents, such as a solid line or a dashed line.
Specifically, the area where each lane line is located in the target image may be determined in various ways. For example, the line connecting the at least two positions may be used as the center line of the area, and a band of predetermined width taken on each side of it. Alternatively, two lines parallel to the line segment determined by the at least two positions may be drawn, each at a specified distance from that segment; the area between the two parallel lines is then taken as the area where the lane line is located.
After the area where each lane line is located has been determined, the line type of the lane line in each area can be determined. The lane line in each area may be compared against standard line-type templates, and the line type of the template matching the lane line taken as the line type of the lane line in that area; a match may be declared when the similarity is above a threshold. Alternatively, a classification model may be used to classify the line type of the lane line in the area.
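The patent does not fix the similarity measure; as one hypothetical illustration, a solid line can be told apart from a dashed one by how completely lane-colored pixels cover the fitted segment:
```python
import numpy as np

def classify_solid_or_dashed(mask, p1, p2, samples=200, threshold=0.9):
    """Toy line-type check: sample a binary lane-pixel mask along the
    segment p1 -> p2 and report 'solid' when nearly every sample hits
    lane paint, 'dashed' otherwise. The 0.9 coverage threshold is an
    illustrative choice, not a value taken from the patent."""
    (x1, y1), (x2, y2) = p1, p2
    ts = np.linspace(0.0, 1.0, samples)
    xs = np.clip((x1 + ts * (x2 - x1)).astype(int), 0, mask.shape[1] - 1)
    ys = np.clip((y1 + ts * (y2 - y1)).astype(int), 0, mask.shape[0] - 1)
    coverage = (mask[ys, xs] > 0).mean()
    return "solid" if coverage >= threshold else "dashed"
```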
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for processing an image according to the present embodiment. In the application scenario of fig. 3, the electronic device 301 determines, based on the color of each of the plurality of pixel points in the image 302, the position information 303 of the pixel points where at least one lane line is located from among the position information of the plurality of pixel points in the image 302. It then performs linear fitting on that position information to determine the position information 304 of 2 positions on the line segment where each of the 3 lane lines of the target image is located. Based on the position information 304, it determines the area 305 where each lane line is located in the target image and the line type 306 of the lane line in each area.
The method provided by the embodiment of the application improves the accuracy of identifying the lane lines and can determine the number and the line type of the lane lines.
With further reference to FIG. 4, a flow 400 of yet another embodiment of a method for processing an image is shown. The flow 400 of the method for processing an image comprises the steps of:
step 401, performing line detection and/or frequency domain detection on the previously acquired image, determining a sub-image where the lane line of the acquired image is located, and determining the sub-image as a target image.
In this embodiment, the electronic device performs line detection and/or frequency domain detection on a previously acquired image to determine the sub-image in which the lane lines of the acquired image are located, and takes that sub-image as the target image. The sub-image is a portion of the previously acquired image, which is typically a front view or a top view. Line detection can be performed on the front view or top view to find the lines in the image and their positions, by which the image is segmented; for example, the Hough transform may be used. Frequency domain detection may also be used: high-frequency and low-frequency signals in the image are detected, and a local region corresponding to low-frequency signals that contains no lane lines (for example, the sky in a front view) is excluded from the generated sub-image. The two kinds of detection may also be combined to obtain a better detection result and determine the sub-image more accurately.
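A sketch of the line-detection branch of this step using OpenCV's probabilistic Hough transform; the crop rule (bounding box of all detected lines) is an assumption, since the patent only says the sub-image containing the lane lines is determined:
```python
import cv2
import numpy as np

def lane_subimage(image_bgr):
    """Step 401 sketch: find line-like structure with edge + Hough line
    detection and crop the image around it."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)
    if lines is None:
        return image_bgr  # no line structure found; keep the full image
    pts = lines.reshape(-1, 4)  # each row: x1, y1, x2, y2
    x_min = pts[:, [0, 2]].min()
    x_max = pts[:, [0, 2]].max()
    y_min = pts[:, [1, 3]].min()
    y_max = pts[:, [1, 3]].max()
    return image_bgr[y_min:y_max + 1, x_min:x_max + 1]
```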
Step 402, inputting a plurality of pixel points of the target image and position information of the pixel points in the target image into a color classification model trained in advance.
In this embodiment, the electronic device inputs the plurality of pixel points of the target image and their position information in the target image into a pre-trained color classification model. The color classification model classifies pixel points according to their colors and outputs the pixel points and position information of each color category. Specifically, the color classification model may be a binary classification model, for example dividing pixel colors into road-surface background color and non-background color. It may also be a three-class model, and so on: for example, pixel colors may be divided into road surface color, white, and yellow, where white and yellow are the colors of lane lines.
Specifically, the color classification model may be trained with a large number of color samples that have been labeled with color classes. It may be obtained by training a classifier such as a Support Vector Machine (SVM) or a Naive Bayes model (NBM). The color classification model may also be pre-trained based on a classification function (e.g., the softmax function).
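A minimal sketch of the SVM option with scikit-learn; the training pixels, their labels, and the three-class split below are hypothetical placeholders standing in for the large labeled sample set the text describes:
```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training data: one (B, G, R) triple per pixel sample, with
# labels 0 = road background, 1 = white paint, 2 = yellow paint, i.e. the
# three-class split described above. A real model needs far more samples.
X_train = np.array([[60, 60, 60], [70, 65, 62],        # road background
                    [200, 200, 200], [210, 205, 198],  # white paint
                    [40, 150, 200], [35, 160, 210]])   # yellow paint
y_train = np.array([0, 0, 1, 1, 2, 2])

color_clf = SVC(kernel="rbf")  # an SVM classifier, per the text's examples
color_clf.fit(X_train, y_train)

def lane_pixels_by_model(img_bgr):
    """Classify every pixel and keep positions predicted as lane colors."""
    pred = color_clf.predict(img_bgr.reshape(-1, 3)).reshape(img_bgr.shape[:2])
    return np.argwhere(pred > 0)  # (y, x) positions of classes 1 and 2
```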
Step 403, obtaining a plurality of pixel points of each color category in at least one color category belonging to the lane line output by the color classification model, and position information of the plurality of pixel points of the color category in the target image.
In this embodiment, the electronic device obtains a plurality of pixel points of each color category in at least one color category belonging to the lane line output by the color classification model, and position information of the plurality of pixel points of each color category in the target image. The color classification model outputs pixel points of each color category and outputs position information of the pixel points of each color category. Among the output categories, there is a color category of the lane line. Different labels can be adopted to distinguish the output pixel points with different color categories and the position information of the pixel points, so that the pixel points and the position information of the color categories belonging to the lane line can be determined conveniently.
Step 404, performing linear fitting on the position information of the plurality of pixel points where the at least one lane line is located to obtain a function of a line segment where each lane line in the at least one lane line is located.
In this embodiment, the electronic device performs linear fitting on the position information of the pixel points where the at least one lane line is located, and obtains through fitting a function of the line segment where each lane line of the at least one lane line is located. The function obtained here is a linear function and represents the line segment where each lane line lies. The variables of the function may be restricted to a range of values, so that every coordinate point satisfying the function falls on the line segment and within the target image.
Step 405, for each lane line, at least two pieces of position information of the target image, which conform to the function of the line segment where the lane line is located, are determined.
In this embodiment, for each lane line, the electronic device determines at least two pieces of position information that conform to the function of the line segment where that lane line is located; each of these positions is likewise a point within the target image.
Step 406, for each lane line, determining a rectangular region with a width of a preset width and a length of a connection line between two end points in the target image based on the position information of the middle points of the two wide sides of the region where the line segment where the lane line is located.
In this embodiment, the position information of the at least two positions is the position information of the two end points of the line segment where the lane line is located and the position information of the midpoints of the two broad sides of the area, and the area where each lane line is located is a rectangle. For each lane line, the electronic device determines a rectangular area in the target image based on the position information of the midpoints of the two broad sides of the area, that is, of the area to be determined, which is also the area where the line segment of the lane line is located. The width of the rectangular area is a preset width, and its length is the length of the line connecting the two end points of the line segment where the lane line is located. Knowing the position information of the midpoints of the two broad sides locates the rectangle, and its length and width determine its size. Every rectangle has a length and a width; the two broad sides here are the two opposite sides of the rectangle whose length equals its width.
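A sketch of cutting out such a rectangle, assuming the two segment endpoints are known; the 20-pixel default width is illustrative (the patent only says the width is preset), and the rotate-then-crop approach is one possible realization:
```python
import cv2
import numpy as np

def crop_lane_region(image, p1, p2, width=20):
    """Step 406 sketch: extract the rectangle whose long axis is the
    segment p1 -> p2 and whose width is a preset value. The rectangle
    may be rotated in the source image, so the image is first rotated
    to make the segment horizontal."""
    (x1, y1), (x2, y2) = p1, p2
    length = max(1, int(np.hypot(x2 - x1, y2 - y1)))
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated = cv2.warpAffine(image, rot, (image.shape[1], image.shape[0]))
    # After rotation the segment lies horizontally through `center`.
    return cv2.getRectSubPix(rotated, (length, width), center)
```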
Step 407, inputting the area where each lane line is located into a pre-trained line type classification model, and obtaining the line type of the lane line included in each area as output by the line type classification model.
In this embodiment, the electronic device inputs the area where each lane line is located into a pre-trained line type classification model, and obtains the line type of the lane line included in each area output by the line type classification model. The line type classification model is used for classifying the line type of the lane line. The electronic device can distinguish different lane line types through the model, so that the lane line type to which the lane line in the area belongs is determined.
Specifically, the line type classification model may be trained using a large number of lane line samples that have been labeled with lane line type classes. It may be obtained by training a classifier such as a Support Vector Machine (SVM) or a Naive Bayes model (NBM). The line type classification model may also be trained in advance based on a classification function (e.g., the softmax function).
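A sketch of the SVM variant for line-type classification; the fixed 32x96 patch size, the solid/dashed label set, and the random stand-in training arrays are all assumptions made purely to keep the example self-contained:
```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training set: each sample is a cropped lane region resized
# to a fixed 32x96 grayscale patch and flattened; labels give the line
# type (0 = solid, 1 = dashed). Real training needs many labeled crops.
rng = np.random.default_rng(0)
X_train = rng.random((40, 32 * 96))    # stand-in feature vectors
y_train = rng.integers(0, 2, size=40)  # stand-in line-type labels

line_type_clf = SVC(kernel="linear")   # an SVM, per the text's examples
line_type_clf.fit(X_train, y_train)

def classify_region(region_gray):
    """Predict the line type of one cropped lane region (step 407)."""
    patch = region_gray.reshape(1, -1)  # must match the 32x96 layout above
    return line_type_clf.predict(patch)[0]
```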
In this embodiment, the color of the lane line can be determined using the color classification model, which improves the accuracy of lane line identification. At the same time, extracting the sub-image where the lane lines are located from the full image reduces the time consumed in recognizing the lane lines and further improves recognition accuracy.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for processing an image, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for processing an image of the present embodiment includes: a determination unit 501, a fitting unit 502 and a line type determination unit 503. The determining unit 501 is configured to determine, based on a color of each of a plurality of pixels of the target image, location information of a plurality of pixels where at least one lane line is located in the location information of the plurality of pixels in the target image. The fitting unit 502 is configured to perform linear fitting on the position information of the plurality of pixel points where the at least one lane line is located, so as to determine the position information of at least two positions on the line segment where each lane line in the at least one lane line is located in the target image. The line type determining unit 503 is configured to determine an area where each lane line is located in the target image based on the position information of at least two positions on the line segment where each lane line is located, and determine the line type of the lane line in each area.
In this embodiment, the determining unit 501 of the apparatus 500 for processing an image may determine, based on the color of each of a plurality of pixel points of the target image, the position information of the pixel points where at least one lane line is located from among the position information of the plurality of pixel points. The target image may be an image obtained locally or from another electronic device; specifically, it may be a top view or a front view of the road surface. To obtain a better image analysis result, a high-resolution image may be used as the target image. The target image contains a plurality of pixel points, and each pixel point has position information describing its location within the target image.
In this embodiment, the fitting unit 502 performs linear fitting on the position information of the pixel points where the at least one lane line is located, so as to determine the position information of at least two positions on the line segment where each lane line of the at least one lane line is located. The position information here is position information in the target image, that is, the positions indicated by the two pieces of position information fall within the target image. Through linear fitting, the line segment where each lane line is located can be obtained, as well as how many such line segments there are; the number of line segments may be taken as the number of lane lines.
In this embodiment, the line type determining unit 503 determines the area where each lane line is located in the target image based on the position information of at least two positions on the line segment where that lane line is located, and then determines the line type of the lane line within each determined area. The position information of the at least two positions serves to locate the lane line in the target image, which facilitates the subsequent determination of its line type. The line type of a lane line is the style it presents, such as a solid line or a dashed line.
In some optional implementations of this embodiment, the fitting unit includes: a fitting module configured to perform linear fitting on the position information of the pixel points where the at least one lane line is located to obtain a function of the line segment where each lane line of the at least one lane line is located; and a determining module configured to determine, for each lane line, at least two pieces of position information in the target image that conform to the function of the line segment where that lane line is located.
In some optional implementations of this embodiment, the determining unit includes: an input module configured to input the plurality of pixel points of the target image and their position information in the target image into a pre-trained color classification model, wherein the color classification model is used for classifying pixel points by color and outputting the pixel points, together with their position information, that belong to a lane line color category; and an output module configured to obtain, for each of at least one lane line color category output by the color classification model, the pixel points of that color category and their position information in the target image.
In some optional implementations of this embodiment, the position information of the at least two positions is the position information of the two end points of the line segment where the lane line is located and the position information of the midpoints of the two broad sides of the area, and the area where each lane line is located is a rectangle; and the line type determining unit is further configured to: for each lane line, determine in the target image, based on the position information of the midpoints of the two broad sides of the area where the line segment of that lane line is located, a rectangular area whose width is a preset width and whose length is the length of the line connecting the two end points.
In some optional implementations of this embodiment, the line type determining unit is further configured to: input the area where each lane line is located into a pre-trained line type classification model to obtain the line type of the lane line included in each area as output by the model, wherein the line type classification model is used for classifying lane line types.
In some optional implementations of this embodiment, the apparatus further includes: a pre-processing unit configured to perform line detection and/or frequency domain detection on a previously acquired image, determine the sub-image where the lane lines of the acquired image are located, and take that sub-image as the target image.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing the electronic device of an embodiment of the present application. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Liquid Crystal Display (LCD) and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 610 as necessary, so that a computer program read out therefrom is installed in the storage section 608 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium of the present application can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a determination unit, a fitting unit, and a line type determination unit. The names of the units do not form a limitation on the unit itself in some cases, and for example, the determination unit may be further described as a unit that determines position information of a plurality of pixels where at least one lane line is located.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: determining the position information of a plurality of pixel points of at least one lane line in the position information of the plurality of pixel points in the target image based on the color of each pixel point in the plurality of pixel points of the target image; performing linear fitting on the position information of a plurality of pixel points where at least one lane line is located to determine the position information of at least two positions on a line segment where each lane line in the at least one lane line is located in the target image; and determining the area of each lane line in the target image based on the position information of at least two positions on the line segment of each lane line, and determining the line type of each lane line in each area.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (12)

1. A method for processing an image, comprising:
determining, based on the color of each of a plurality of pixel points of a target image, the position information of the pixel points where at least one lane line is located from among the position information of the plurality of pixel points in the target image;
performing linear fitting on the position information of a plurality of pixel points where at least one lane line is located to determine the position information of at least two positions on a line segment where each lane line in the at least one lane line is located in the target image, wherein the position information of the at least two positions comprises the position information of two end points of the line segment where the lane line is located;
determining the area of each lane line in the target image based on the position information of at least two positions on the line segment of each lane line, and determining the line type of each lane line in each area;
the method further comprises the following steps:
and performing straight line detection and/or frequency domain detection on the previously acquired image, determining a sub-image where the lane line of the acquired image is positioned, and determining the sub-image as a target image.
2. The method for processing an image according to claim 1, wherein the linearly fitting the position information of the plurality of pixel points where the at least one lane line is located to determine the position information of at least two positions on the line segment of the target image where each lane line of the at least one lane line is located comprises:
performing linear fitting on the position information of a plurality of pixel points where the at least one lane line is located to obtain a function of a line segment where each lane line in the at least one lane line is located;
for each lane line, at least two pieces of position information of the target image, which conform to a function of the line segment where the lane line is located, are determined.
3. The method of claim 1, wherein the determining the position information of the plurality of pixels where at least one lane line is located in the position information of the plurality of pixels in the target image based on the color of each of the plurality of pixels in the target image comprises:
inputting a plurality of pixel points of a target image and the position information of the pixel points in the target image into a pre-trained color classification model, wherein the color classification model is used for classifying pixel points according to their colors and outputting the pixel points and the position information belonging to a lane line color category;
and obtaining a plurality of pixel points of each color category in at least one color category belonging to the lane line and output by the color classification model, and position information of the plurality of pixel points of the color category in the target image.
4. The method for processing an image according to claim 1, wherein the position information of the at least two positions is position information of two end points of a line segment where a lane line is located and position information of a midpoint of two broad sides of an area, and the area where each lane line is located is a rectangle; and
the determining the area of each lane line in the target image based on the position information of at least two positions on the line segment where each lane line is located includes:
and for each lane line, determining a rectangular area with the width being a preset width and the length being the length of the connecting line of the two end points in the target image based on the position information of the middle points of the two wide sides of the area where the line segment where the lane line is located.
5. The method for processing an image according to claim 1, wherein said determining a line type of a lane line in each region comprises:
inputting the area where each lane line is located into a pre-trained line type classification model to obtain the line type of the lane line included in each area output by the line type classification model, wherein the line type classification model is used for classifying the line type of the lane line.
6. An apparatus for processing an image, comprising:
the determining unit is configured to determine position information of a plurality of pixel points where at least one lane line is located in the position information of the plurality of pixel points in the target image based on the color of each pixel point in the plurality of pixel points in the target image;
the fitting unit is configured to perform linear fitting on the position information of a plurality of pixel points where at least one lane line is located to determine position information of at least two positions on a line segment where each lane line in the at least one lane line is located in the target image, wherein the position information of the at least two positions includes position information of two end points of the line segment where the lane line is located;
the line type determining unit is configured to determine the area of each lane line in the target image based on the position information of at least two positions on the line segment where each lane line is located, and determine the line type of each lane line in each area;
the device further comprises:
the pre-processing unit is used for carrying out straight line detection and/or frequency domain detection on the previously acquired image, determining a sub-image where a lane line of the acquired image is located, and determining the sub-image as a target image.
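For illustration only, a sketch of the straight line detection the pre-processing unit might perform, assuming OpenCV's probabilistic Hough transform; the Canny and Hough parameters are illustrative, and the claim equally allows frequency domain detection instead.

```python
import cv2
import numpy as np

def extract_lane_subimage(acquired_image):
    """Detect straight lines in an acquired image and return the sub-image
    bounding them, to serve as the target image."""
    gray = cv2.cvtColor(acquired_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=10)
    if lines is None:
        return acquired_image                # no lines found: keep the full frame
    pts = lines.reshape(-1, 4)               # rows of (x1, y1, x2, y2)
    x0, x1 = pts[:, [0, 2]].min(), pts[:, [0, 2]].max()
    y0, y1 = pts[:, [1, 3]].min(), pts[:, [1, 3]].max()
    return acquired_image[y0:y1 + 1, x0:x1 + 1]
```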
7. The apparatus for processing an image according to claim 6, wherein the fitting unit comprises:
a fitting module, configured to perform the linear fitting on the position information of the plurality of pixel points where the at least one lane line is located, to obtain a function of the line segment where each lane line of the at least one lane line is located; and
a determining module, configured to determine, for each lane line, at least two pieces of position information in the target image that conform to the function of the line segment where the lane line is located.
8. The apparatus for processing an image according to claim 6, wherein the determining unit comprises:
an input module, configured to input the plurality of pixel points of the target image and the position information of the pixel points in the target image into a pre-trained color classification model, wherein the color classification model is used for classifying the pixel points by color and outputting the pixel points, together with their position information, that belong to a color category of a lane line; and
an output module, configured to obtain, for each color category of at least one color category belonging to a lane line, the plurality of pixel points of that color category output by the color classification model and the position information of those pixel points in the target image.
9. The apparatus for processing an image according to claim 6, wherein the position information of the at least two positions is position information of two end points of the line segment where a lane line is located, the two end points also being the midpoints of the two broad sides of the area where the lane line is located, and the area where each lane line is located is a rectangle; and
the line type determining unit is further configured to:
determine, for each lane line, a rectangular area in the target image whose width is a preset width and whose length is the length of the line connecting the two end points, based on the position information of the two end points serving as the midpoints of the two broad sides of the area where the lane line is located.
10. The apparatus for processing an image according to claim 6, wherein the line type determining unit is further configured to:
input the area where each lane line is located into a pre-trained line type classification model, to obtain the line type of the lane line included in each area as output by the line type classification model, wherein the line type classification model is used for classifying the line types of lane lines.
11. An electronic device, comprising:
one or more processors;
a storage device, for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-5.
12. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
CN201711337034.XA 2017-12-14 2017-12-14 Method and apparatus for processing image Active CN109960959B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711337034.XA CN109960959B (en) 2017-12-14 2017-12-14 Method and apparatus for processing image

Publications (2)

Publication Number Publication Date
CN109960959A CN109960959A (en) 2019-07-02
CN109960959B (en) 2020-04-03

Family

ID=67017831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711337034.XA Active CN109960959B (en) 2017-12-14 2017-12-14 Method and apparatus for processing image

Country Status (1)

Country Link
CN (1) CN109960959B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110688971B (en) * 2019-09-30 2022-06-24 上海商汤临港智能科技有限公司 Method, device and equipment for detecting dotted lane line
CN112507852A (en) * 2020-12-02 2021-03-16 上海眼控科技股份有限公司 Lane line identification method, device, equipment and storage medium
CN113066153B (en) * 2021-04-28 2023-03-31 浙江中控技术股份有限公司 Method, device and equipment for generating pipeline flow chart and storage medium
CN113688721B (en) * 2021-08-20 2024-03-05 北京京东乾石科技有限公司 Method and device for fitting lane lines

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6212287B1 (en) * 1996-10-17 2001-04-03 Sgs-Thomson Microelectronics S.R.L. Method for identifying marking stripes of road lanes
CN102298693A (en) * 2011-05-18 2011-12-28 浙江大学 Expressway bend detection method based on computer vision
CN103295420A (en) * 2013-01-30 2013-09-11 吉林大学 Method for recognizing lane line
CN104318258A (en) * 2014-09-29 2015-01-28 南京邮电大学 Time domain fuzzy and kalman filter-based lane detection method
CN106203273A (en) * 2016-06-27 2016-12-07 开易(北京)科技有限公司 The lane detection system of multiple features fusion, method and senior drive assist system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Fast Lane Line Recognition Based on Machine Vision; Ju Qianao; China Master's Theses Full-text Database, Information Science and Technology; 2013-07-15 (No. 7); pp. 39-56 *

Also Published As

Publication number Publication date
CN109960959A (en) 2019-07-02

Similar Documents

Publication Publication Date Title
CN109410218B (en) Method and apparatus for generating vehicle damage information
CN109308681B (en) Image processing method and device
CN109960959B (en) Method and apparatus for processing image
CN108073910B (en) Method and device for generating human face features
CN109711508B (en) Image processing method and device
CN109344762B (en) Image processing method and device
CN109242801B (en) Image processing method and device
EP4177836A1 (en) Target detection method and apparatus, and computer-readable medium and electronic device
KR101602591B1 (en) Methods and apparatuses for facilitating detection of text within an image
Küçükmanisa et al. Real-time illumination and shadow invariant lane detection on mobile platform
CN114332809A (en) Image identification method and device, electronic equipment and storage medium
CN108647570B (en) Zebra crossing detection method and device and computer readable storage medium
CN112967191A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111461152B (en) Cargo detection method and device, electronic equipment and computer readable medium
CN110633598B (en) Method and device for determining a driving area in an environment image
CN109409247B (en) Traffic sign identification method and device
CN111383337B (en) Method and device for identifying objects
CN109859254B (en) Method and device for sending information in automatic driving
CN112927338A (en) Simulation method based on three-dimensional contour, storage medium and computer equipment
CN113688721A (en) Method and device for fitting lane line
CN113762234A (en) Method and device for determining text line region
CN112101139A (en) Human shape detection method, device, equipment and storage medium
CN113378850B (en) Model training method, pavement damage segmentation device and electronic equipment
CN111339341A (en) Model training method and device, positioning method and device, and equipment
CN109816035B (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant