CN118038423A - In-vehicle checking method and system based on intelligent vision - Google Patents


Info

Publication number: CN118038423A (granted as CN118038423B)
Application number: CN202410446423.XA
Authority: CN (China)
Prior art keywords: window, vehicle, depth camera, image, space
Legal status: Granted; Active (status is an assumption, not a legal conclusion)
Inventors: 吴胜斌, 许金金, 陈明, 陶伟森, 孙涛, 刘焕钊
Original and current assignee: Maxvision Technology Corp
Other languages: Chinese (zh)

Landscapes

  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The application provides an in-vehicle inspection method and system based on intelligent vision. The method converts first window pixel coordinates into first spatial physical coordinates based on a first depth camera, and converts these, according to a first spatial conversion relation, into the distance the moving base must travel to reach the position directly opposite the window; it then converts second window pixel coordinates into second spatial physical coordinates based on a second depth camera and, according to a second spatial conversion relation, into third spatial physical coordinates based on the moving base. Because the moving base simultaneously carries the mechanical arm and the second depth camera to the coarsely positioned window, and the second spatial physical coordinates are converted into base-relative third spatial physical coordinates according to the second spatial conversion relation, the method and system achieve high lane-inspection efficiency, accuracy, flexibility, and safety.

Description

In-vehicle checking method and system based on intelligent vision
Technical Field
The application belongs to the technical field of vehicle inspection, and particularly relates to an in-vehicle inspection method and system based on intelligent vision.
Background
In a vehicle transportation security inspection channel, to prevent people from using a car to evade security inspection or to transport goods illegally, people and vehicles must be screened synchronously at the checkpoint. Under the relevant regulations, only the driver is allowed in the vehicle when it enters the inspection lane, all windows must be opened in advance, and after the vehicle reaches the inspection parking area the driver must get out for identity verification while staff manually inspect the interior through the windows.
This whole person-and-vehicle inspection process therefore suffers from insufficient inspection efficiency, accuracy, flexibility, and safety.
Disclosure of Invention
The embodiment of the application aims to provide an in-vehicle inspection method and system based on intelligent vision, to solve the prior-art technical problem of insufficient efficiency, accuracy, flexibility, and safety in the vehicle inspection process.
In order to achieve the above purpose, the application adopts the following technical scheme: an in-vehicle inspection method based on intelligent vision, relying on a linear guide rail and a first depth camera arranged on the same side of the lane, a moving base arranged on the linear guide rail, a mechanical arm and a second depth camera arranged on the moving base, and a third camera arranged on the mechanical arm, comprising the steps of:
The method comprises the steps of pre-obtaining a first space conversion relation between a first depth camera and a linear guide rail and a second space conversion relation between a second depth camera and a movable base through calibration;
The first depth camera acquires a vehicle side image, and acquires a first window pixel coordinate through primary window detection;
converting the first window pixel coordinate into a first space physical coordinate based on a first depth camera, and converting the first space physical coordinate into a moving distance from a moving base to a position opposite to the window according to a first space conversion relation;
the second depth camera acquires a front image of the vehicle window, and acquires a second vehicle window pixel coordinate through detecting the vehicle window again;
converting the second window pixel coordinates into second spatial physical coordinates based on a second depth camera, and converting the second spatial physical coordinates into third spatial physical coordinates based on the mobile base according to a second spatial conversion relation;
the mechanical arm drives the third camera to the third space physical coordinate to carry out in-vehicle snapshot detection.
Preferably, the method for pre-acquiring the second spatial transformation relation between the second depth camera and the mobile base comprises the following steps:
Fixing the checkerboard at the tail end position of the mechanical arm, and acquiring a calibration image through a second depth camera;
controlling the mechanical arm to move so that the checkerboard is positioned at each position in the calibration image;
Storing calibration images of the checkerboard at each position in the calibration images, and acquiring mechanical arm space coordinates of the tail end position in the mechanical arm demonstrator at each calibration image;
Searching for the checkerboard corner positions in the stored calibration images using a checkerboard corner detection algorithm;
And calculating the transformation matrix between the mechanical arm spatial coordinates and the checkerboard corner positions.
Preferably, the method of converting the first window pixel coordinates to first spatial physical coordinates based on the first depth camera comprises the steps of:
Setting the first spatial physical coordinates based on the first depth camera as (X, Y, Z), the first window pixel coordinates as (x_pixel, y_pixel), and the optical center coordinates of the first depth camera as (ppx, ppy), the conversion formula is:
X = (x_pixel - ppx) * depth / fx
Y = (y_pixel - ppy) * depth / fy
Z = depth
where depth is the depth value at the first window pixel coordinate, and fx and fy are the focal lengths of the first depth camera in the horizontal and vertical directions.
Preferably, the method for converting the first spatial physical coordinate into the moving distance for moving the base to the position opposite to the vehicle window according to the first spatial conversion relation comprises the following steps:
the view angle direction of the first depth camera is perpendicular to the guide direction of the linear guide rail;
Marking the starting point coordinate of the linear guide rail as (X_0, Y_0, Z_0) and the first spatial physical coordinate as (X_i, Y_i, Z_i), the moving distance d_i is calculated as:
d_i = |X_i - X_0|.
Preferably, the method for detecting the primary vehicle window comprises the following steps:
inputting the vehicle side image into a deep learning target detection model to acquire a vehicle window target;
intercepting a window target area for preprocessing, and extracting the maximum window outline;
Calculating the area S of the contour area of the vehicle window;
Window targets whose contour area S is smaller than the area threshold T are filtered out.
Preferably, the method for detecting the vehicle window again comprises the following steps:
Inputting the front image of the vehicle window into a deep learning target detection model to obtain a vehicle window target;
Detecting the distance between the center point of each window target and the center point of the front image of the window;
And selecting the window target with the smallest distance.
Preferably, after selecting the window target with the smallest distance, the method further comprises the steps of:
Inputting the window target with the smallest distance into a binary classification model to detect the window state;
if the window target is detected to be in an open state, acquiring a second window pixel coordinate;
and if the window target is detected to be in the closed state, feeding back the window state to the system host.
Preferably, if the window object is detected to be in the closed state, the method further comprises the steps of:
the mechanical arm is further provided with a fourth infrared camera, and the mechanical arm drives the fourth infrared camera to the third space physical coordinate to perform in-vehicle snapshot detection to acquire a first in-vehicle infrared image;
turning on an infrared lamp at the other side of the vehicle;
the fourth infrared camera acquires a second infrared image in the vehicle at the same position;
performing differential processing on the first in-vehicle infrared image and the second in-vehicle infrared image to obtain an in-vehicle infrared differential image;
And detecting the violation of the infrared differential image in the vehicle.
Preferably, after the mechanical arm drives the third camera to the third space physical coordinate to perform in-vehicle snapshot detection, the method further comprises the steps of:
And resetting the mechanical arm, controlling the moving base to the right position of the next window target based on the primary window detection of the first depth camera, and executing in-vehicle snapshot detection.
The application also provides an in-vehicle inspection system based on intelligent vision, which comprises an inspection lane and a control host, wherein the inspection lane is used for acquiring the vehicle side image, the vehicle window front image and the in-vehicle snap shot image, and the control host is used for executing the in-vehicle inspection method based on intelligent vision.
Compared with the prior art, the in-vehicle inspection method and system based on intelligent vision provided by the application detect the window with the first depth camera and convert the first spatial physical coordinates, according to the first spatial conversion relation, into the distance the moving base travels to face the window, so that the moving base carries the mechanical arm and the second depth camera to the coarsely positioned window; the window is then detected again with the second depth camera, and the second spatial physical coordinates are converted, according to the second spatial conversion relation, into third spatial physical coordinates based on the moving base, so that the mechanical arm drives the third camera to the precisely positioned window. In-vehicle snapshot inspection of different vehicle types, window counts, and parking positions is thereby realized on the basis of intelligent vision, giving high lane-inspection efficiency, accuracy, flexibility, and safety.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an in-vehicle inspection method based on intelligent vision according to an embodiment of the present application;
FIG. 2 is a schematic view of a front view of a lane application scenario based on the method of FIG. 1;
FIG. 3 is a schematic diagram of a top view of a lane application scenario based on the method of FIG. 1;
FIG. 4 is a schematic illustration of window detection of a vehicle side image based on the method of FIG. 1;
FIG. 5 is a schematic view of a scenario when the calibration provided by the embodiment of the present application obtains a second spatial transformation relationship;
Fig. 6 is a schematic diagram of a scenario when infrared detection is performed on a closed car window according to an embodiment of the present application.
Detailed Description
In order to make the technical problems, technical schemes and beneficial effects to be solved more clear, the application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It will be understood that when an element is referred to as being "mounted" or "disposed" on another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element.
It is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are merely for convenience in describing and simplifying the description based on the orientation or positional relationship shown in the drawings, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus are not to be construed as limiting the application.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The in-vehicle inspection method based on intelligent vision provided by the embodiment of the application is explained.
Referring to fig. 1 to 3, the in-vehicle inspection method based on intelligent vision is based on a linear guide rail and a first depth camera arranged on the same side of a lane, a movable base arranged on the linear guide rail, a mechanical arm and a second depth camera arranged on the movable base, and a third camera arranged on the mechanical arm, and comprises the following steps:
step S1, a first space conversion relation between a first depth camera and a linear guide rail and a second space conversion relation between a second depth camera and a movable base are obtained through calibration;
Step S2, a first depth camera acquires a vehicle side image, and a first window pixel coordinate is acquired through primary window detection;
Step S3, converting the first window pixel coordinates into first space physical coordinates based on a first depth camera, and converting the first space physical coordinates into a moving distance from a moving base to a position opposite to the window according to a first space conversion relation;
S4, acquiring a front image of a vehicle window by a second depth camera, and acquiring a second vehicle window pixel coordinate by detecting the vehicle window again;
Step S5, converting the second window pixel coordinates into second space physical coordinates based on a second depth camera, and converting the second space physical coordinates into third space physical coordinates based on a mobile base according to a second space conversion relation;
And S6, the mechanical arm drives the third camera to the third space physical coordinates to perform in-vehicle snapshot detection.
It is understood that in step S1, the linear guide and the first depth camera may be disposed on the same platform of the lane, and the positions of the linear guide and the first depth camera are fixed, so that the first spatial transformation relationship between the first depth camera and the linear guide may be pre-acquired in a calibrated manner. In addition, the second depth camera is arranged on the movable base, and the second depth camera can move along with the movable base under the guidance of the linear guide rail, namely, the positions of the second depth camera and the movable base are relatively fixed, so that the second space conversion relation between the second depth camera and the movable base can be obtained in advance through a calibration mode.
In step S2, referring to fig. 4, after the vehicle enters the lane, since the parking position and the window position of the vehicle deviate, a vehicle side image with depth information may be acquired by the first depth camera, and the first window pixel coordinate may be acquired by the first window detection based on the vehicle side image.
In step S3, since the first window pixel coordinates are acquired by the first depth camera, they are essentially plane coordinates with depth information and can therefore be converted into first spatial physical coordinates in the first depth camera's frame. On this basis, using the first spatial conversion relation between the first depth camera and the linear guide rail obtained in step S1, the moving distance from the starting point of the linear guide rail to the position directly opposite the window can be calculated in three-dimensional space; after the moving base travels this distance, the mechanical arm and the second depth camera on it arrive at the window position.
However, because the window is positioned by the first depth camera from a relatively long distance and is subject to interference from the environment, viewing angle, and the like, the located window position may deviate; the process of steps S2-S3 can therefore be understood as coarse positioning of the window.
In step S4, after the moving base drives the mechanical arm and the second depth camera to move to a position opposite to the window, the front image of the window is acquired again by the second depth camera in a close-range manner, so that the second window pixel coordinates acquired by detecting the window again are also accurate.
In step S5, since the second window pixel coordinates are acquired based on the second depth camera, the second window pixel coordinates are essentially plane coordinates with depth information, and thus the second window pixel coordinates can be converted into second spatial physical coordinates based on the second depth camera. On the basis, the second spatial physical coordinates can be converted into third spatial physical coordinates based on the mobile base by utilizing the second spatial conversion relation between the second depth camera and the mobile base, which is obtained in the step S1, wherein the third spatial physical coordinates are the snapshot position of the third camera.
In step S6, after the third spatial physical coordinate is taken as the target position of the mechanical arm, the mechanical arm drives the third camera to the third spatial physical coordinate to perform in-vehicle snapshot detection. The process of steps S4-S6 can thus be understood as a fine positioning of the vehicle window.
The whole of steps S1-S6 is completed automatically based on intelligent vision without manual participation, so inspection efficiency is high and the inspection standard is uniform. In terms of adaptability, the method accommodates different vehicle types, window positions, window counts, and parking positions, and can inspect all windows in sequence.
Compared with the prior art, the in-vehicle inspection method based on intelligent vision detects the window with the first depth camera and converts the first spatial physical coordinates, according to the first spatial conversion relation, into the distance the moving base travels to face the window, so that the moving base carries the mechanical arm and the second depth camera to the coarsely positioned window; it then detects the window again with the second depth camera and converts the second spatial physical coordinates, according to the second spatial conversion relation, into third spatial physical coordinates based on the moving base, so that the mechanical arm drives the third camera to the precisely positioned window. In-vehicle snapshot inspection of different vehicle types, window counts, and parking positions is thereby realized on the basis of intelligent vision, giving high lane-inspection efficiency, accuracy, flexibility, and safety.
In another embodiment of the present application, referring to fig. 5, a method for pre-acquiring a second spatial transformation relationship between a second depth camera and a mobile base includes the steps of:
Fixing the checkerboard at the tail end position of the mechanical arm, and acquiring a calibration image through a second depth camera;
controlling the mechanical arm to move so that the checkerboard is positioned at each position in the calibration image;
Storing calibration images of the checkerboard at each position in the calibration images, and acquiring mechanical arm space coordinates of the tail end position in the mechanical arm demonstrator at each calibration image;
Searching for the checkerboard corner positions in the stored calibration images using a checkerboard corner detection algorithm;
And calculating the transformation matrix between the mechanical arm spatial coordinates and the checkerboard corner positions.
It can be appreciated that, because the mechanical arm and the second depth camera are both mounted on the moving base, calibration needs to be performed at only one arbitrarily chosen position of the moving base on the linear guide rail, and the second spatial conversion relation remains valid wherever the window is during use. Moreover, since the second depth camera is not carried by the extending and retracting mechanical arm, it can obtain a larger field of view.
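As a minimal sketch of the final calibration steps, assuming the checkerboard corner positions have already been extracted from the stored calibration images and paired with the arm coordinates read from the teach pendant, the transformation matrix can be fitted by least squares. The affine [R|t] form and the function names are illustrative assumptions; the patent only states that a transformation matrix between the two coordinate sets is computed.

```python
import numpy as np

def fit_affine_transform(corner_pts, arm_pts):
    """Fit a 3x4 affine transform M = [R|t] mapping camera-frame
    checkerboard-corner coordinates to mechanical-arm coordinates
    (hypothetical helper; least-squares over all calibration poses)."""
    n = corner_pts.shape[0]
    A = np.hstack([corner_pts, np.ones((n, 1))])      # homogeneous rows [x y z 1]
    M, *_ = np.linalg.lstsq(A, arm_pts, rcond=None)   # solves A @ M ~= arm_pts
    return M.T                                        # shape (3, 4)

def apply_transform(M, pts):
    # Map camera-frame points into the arm/base frame
    return pts @ M[:, :3].T + M[:, 3]
```

With enough corner observations spread over the calibration image, the fit recovers the rigid camera-to-base relation up to measurement noise.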
In another embodiment of the present application, a method of converting first window pixel coordinates to first spatial physical coordinates based on a first depth camera, includes the steps of:
Setting the first spatial physical coordinates based on the first depth camera as (X, Y, Z), the first window pixel coordinates as (x_pixel, y_pixel), and the optical center coordinates of the first depth camera as (ppx, ppy), the conversion formula is:
X = (x_pixel - ppx) * depth / fx
Y = (y_pixel - ppy) * depth / fy
Z = depth
where depth is the depth value at the first window pixel coordinate, and fx and fy are the focal lengths of the first depth camera in the horizontal and vertical directions.
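The conversion above is the standard pinhole-camera back-projection; a minimal sketch, with variable names following the patent's symbols and depth expressed in the camera's distance units:

```python
def deproject(x_pixel, y_pixel, depth, fx, fy, ppx, ppy):
    """Back-project a pixel with known depth into camera-frame
    physical coordinates (X, Y, Z), per the conversion formula."""
    X = (x_pixel - ppx) * depth / fx
    Y = (y_pixel - ppy) * depth / fy
    Z = depth
    return X, Y, Z
```

A pixel at the optical center maps onto the optical axis, i.e. to (0, 0, depth).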
Further, referring to fig. 2 to 3 together, the method for converting the first spatial physical coordinate into a moving distance for moving the base to a position opposite to the window according to the first spatial conversion relationship includes the steps of:
the view angle direction of the first depth camera is perpendicular to the guide direction of the linear guide rail;
Marking the starting point coordinate of the linear guide rail as (X_0, Y_0, Z_0) and the first spatial physical coordinate as (X_i, Y_i, Z_i), the moving distance d_i is calculated as:
d_i = |X_i - X_0|.
It can be understood that the actual physical coordinate value of each window relative to the first depth camera can be obtained through the above conversion formula. Since the starting position of the mechanical arm coincides with the starting point of the linear guide rail, and both the rail's starting point and the first depth camera's position are fixed, the first window pixel coordinates based on the first depth camera can be converted directly into the spatial coordinate system anchored at the rail's starting point.
The viewing-axis direction of the first depth camera is perpendicular to the guiding direction of the linear guide rail, which is equivalent to the rail running along the camera's horizontal direction, so only the horizontal-direction difference between the starting point coordinate and the first spatial physical coordinate needs to be calculated. The method can thus accommodate deviations in vehicle parking position and orientation.
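Since the rail direction coincides with the camera's horizontal (X) axis, the coarse-positioning travel reduces to a one-line computation; a sketch under that assumption:

```python
def rail_move_distance(first_phys, rail_origin):
    """d_i = |X_i - X_0|: travel along the linear guide rail from its
    starting point to the position directly opposite the window.
    Assumes the camera viewing axis is perpendicular to the rail, so
    the rail runs along the camera's horizontal X axis."""
    return abs(first_phys[0] - rail_origin[0])
```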
In another embodiment of the present application, referring to fig. 4, in step S2, a method for detecting a first window includes the steps of:
inputting the vehicle side image into a deep learning target detection model to acquire a vehicle window target;
intercepting a window target area for preprocessing, and extracting the maximum window outline;
Calculating the area S of the contour area of the vehicle window;
Window targets whose contour area S is smaller than the area threshold T are filtered out.
It will be appreciated that a vehicle has both openable and sealed windows: driver and passenger windows are typically openable, while smaller triangular quarter windows are typically sealed. To avoid false detections that could cause collision damage to a sealed window or to the third camera, this embodiment calculates the window contour area S and filters out sealed windows by area. The preprocessing includes pixel aggregation, image binarization, edge detection, and the like, improving the accuracy of window contour extraction.
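A sketch of the area filter, using the shoelace formula in place of an OpenCV contour-area call so the example stays self-contained; the contour representation (a list of (x, y) vertices) is an assumed detector output format:

```python
def contour_area(contour):
    """Shoelace formula for a simple polygon's area -- the same quantity
    an OpenCV contourArea call would return for the window outline."""
    n = len(contour)
    s = sum(contour[i][0] * contour[(i + 1) % n][1]
            - contour[(i + 1) % n][0] * contour[i][1] for i in range(n))
    return abs(s) / 2.0

def filter_window_targets(contours, T):
    """Keep only window targets whose maximum contour area S >= threshold T,
    discarding sealed windows such as small triangular quarter glass."""
    return [c for c in contours if contour_area(c) >= T]
```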
In another embodiment of the present application, in step S4, the method of detecting a vehicle window again includes the steps of:
Inputting the front image of the vehicle window into a deep learning target detection model to obtain a vehicle window target;
Detecting the distance between the center point of each window target and the center point of the front image of the window;
And selecting the window target with the smallest distance.
It can be understood that due to different parking positions and window sizes of different vehicles, a plurality of window targets may be detected in the window front image acquired through the second depth camera, and by detecting the distance between the center point of each window target and the center point of the window front image, the window target with the smallest distance is selected, so that the window target directly opposite to the mechanical arm can be determined, and the interference target is filtered.
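The nearest-to-center selection can be sketched as follows; the (x1, y1, x2, y2) bounding-box format is an assumed output of the deep-learning detector:

```python
def pick_facing_window(boxes, img_w, img_h):
    """Select the detected window target whose center point is closest to
    the center of the window front image, i.e. the window directly
    opposite the mechanical arm; interference targets are thereby dropped."""
    icx, icy = img_w / 2.0, img_h / 2.0
    def center_dist(box):
        x1, y1, x2, y2 = box
        bcx, bcy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        return ((bcx - icx) ** 2 + (bcy - icy) ** 2) ** 0.5
    return min(boxes, key=center_dist)
```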
Further, in step S4, after selecting the window target with the smallest distance, the method further includes the steps of:
Inputting the window target with the smallest distance into a binary classification model to detect the window state;
if the window target is detected to be in an open state, acquiring a second window pixel coordinate;
and if the window target is detected to be in the closed state, feeding back the window state to the system host.
It can be understood that, per regulation, all windows are to be opened after a vehicle enters the lane. If the windows are detected as not all opened as required, the window state is fed back to the system host and a worker confirms the reason: a few vehicle types have passenger windows that cannot be opened (for example, in some three-box vehicles the front-row and rear-row passenger windows are identical in shape and size, yet the front-row windows open while the rear-row windows do not), and windows that cannot open because of a malfunction need to be reported, recorded, and checked in advance. After the worker confirms that the window cannot be opened, infrared detection is performed as well.
Further, referring to fig. 6, in step S4, if it is detected that the window object is in the closed state, the method further includes the steps of:
the mechanical arm is further provided with a fourth infrared camera, and the mechanical arm drives the fourth infrared camera to the third space physical coordinate to perform in-vehicle snapshot detection to acquire a first in-vehicle infrared image;
turning on an infrared lamp at the other side of the vehicle;
the fourth infrared camera acquires a second infrared image in the vehicle at the same position;
performing differential processing on the first in-vehicle infrared image and the second in-vehicle infrared image to obtain an in-vehicle infrared differential image;
And detecting the violation of the infrared differential image in the vehicle.
It will be appreciated that a sealed window cannot be opened, so inspecting only through opened windows necessarily leaves large blind spots in the field of view. Moreover, most existing car windows carry window film, which reflects severely under sunlight. This embodiment filters out the reflections by image differencing in an infrared environment, realizing in-vehicle imaging through a closed window.
Specifically, when the infrared lamp is off, the first in-vehicle infrared image contains only dark-field information, whose light source is the infrared component of sunlight, and therefore contains reflection interference. After the infrared lamp is turned on, the second in-vehicle infrared image contains both the dark-field information and enhancement information, lit by the infrared component of sunlight plus the infrared lamp; since the infrared lamp and the fourth infrared camera are on opposite sides of the vehicle, the enhancement information carries no reflection interference. Differencing the two frames yields an in-vehicle infrared differential image containing only the enhancement information, which exactly cancels the sunlight reflection on the window and highlights the real objects inside the vehicle.
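The differencing step can be sketched with NumPy. A signed subtraction (rather than an absolute difference) keeps only the lamp-added illumination, under the assumption that both frames are 8-bit and pixel-aligned:

```python
import numpy as np

def ir_difference(frame_lamp_off, frame_lamp_on):
    """Subtract the dark-field frame (IR lamp off) from the enhanced frame
    (lamp on). The shared sunlight/reflection component cancels, leaving
    only the content lit by the far-side infrared lamp."""
    diff = frame_lamp_on.astype(np.int16) - frame_lamp_off.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```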
In another embodiment of the present application, in step S6, after the mechanical arm drives the third camera to the third spatial physical coordinate to perform in-vehicle snapshot detection, the method further includes the steps of:
resetting the mechanical arm, controlling the moving base to the position facing the next window target based on the initial window detection of the first depth camera, and performing in-vehicle snapshot detection again.
It can be understood that, based on the initial window detection of the first depth camera, the moving distance from the moving base to the position facing each window can be calculated, so that all windows are detected and the safety of vehicle inspection is improved.
The application also provides an in-vehicle inspection system based on intelligent vision, comprising an inspection lane and a control host. The inspection lane is used for acquiring the vehicle side image, the window front image, and the in-vehicle snapshot image; the control host is used for executing the intelligent vision-based in-vehicle inspection method described above.
It can be understood that the control host may be a PC, monitored by staff to judge whether a violation exists in the vehicle. Since the control host executes the intelligent vision-based in-vehicle inspection method, the system achieves the same beneficial effects.
The foregoing description of the preferred embodiments of the application is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the application.

Claims (10)

1. The in-vehicle checking method based on intelligent vision is characterized by comprising the following steps of:
pre-acquiring, through calibration, a first spatial conversion relation between a first depth camera and a linear guide rail, and a second spatial conversion relation between a second depth camera and a movable base;
The first depth camera acquires a vehicle side image, and acquires a first window pixel coordinate through initial window detection;
converting the first window pixel coordinate into a first space physical coordinate based on a first depth camera, and converting the first space physical coordinate into a moving distance from a moving base to a position opposite to the window according to a first space conversion relation;
the second depth camera acquires a window front image, and acquires a second window pixel coordinate through secondary window detection;
converting the second window pixel coordinates into second spatial physical coordinates based on a second depth camera, and converting the second spatial physical coordinates into third spatial physical coordinates based on the mobile base according to a second spatial conversion relation;
the mechanical arm drives the third camera to the third space physical coordinate to carry out in-vehicle snapshot detection.
2. The intelligent vision-based in-vehicle inspection method according to claim 1, wherein the method of pre-acquiring the second spatial conversion relation between the second depth camera and the movable base comprises the steps of:
Fixing the checkerboard at the tail end position of the mechanical arm, and acquiring a calibration image through a second depth camera;
controlling the mechanical arm to move so that the checkerboard is positioned at each position in the calibration image;
Storing calibration images of the checkerboard at each position in the calibration images, and acquiring mechanical arm space coordinates of the tail end position in the mechanical arm demonstrator at each calibration image;
Searching the positions of the checkered corner points from the stored calibration images by adopting a checkered corner detection algorithm;
And calculating a transformation matrix of the space coordinates of the mechanical arm and the positions of the checkerboard angular points.
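A minimal sketch of the final step of claim 2 (computing the transformation between arm coordinates and checkerboard corner positions): here the rigid transform is estimated with the Kabsch algorithm over matched 3-D point pairs. The function name and the choice of solver are illustrative assumptions; the claim does not prescribe a particular algorithm.

```python
import numpy as np

def hand_eye_transform(arm_pts, cam_pts):
    """Estimate rotation R and translation t such that
    arm_point = R @ cam_point + t, from matched point sets (Kabsch)."""
    A = np.asarray(cam_pts, dtype=float)  # N x 3, camera frame
    B = np.asarray(arm_pts, dtype=float)  # N x 3, arm frame
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

With corner positions expressed in metric camera coordinates (via the depth camera intrinsics), the recovered (R, t) plays the role of the claimed transformation matrix.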
3. The intelligent vision-based in-vehicle inspection method according to claim 1, wherein the method of converting the first window pixel coordinates into first spatial physical coordinates based on the first depth camera, comprises the steps of:
Setting the first spatial physical coordinates based on the first depth camera as (X, Y, Z), the first window pixel coordinates as (x_pixel, y_pixel), and the optical center coordinates of the first depth camera as (ppx, ppy), the conversion formula is:

X = (x_pixel - ppx) · depth / fx
Y = (y_pixel - ppy) · depth / fy
Z = depth

where depth is the depth value at the first window pixel coordinate, and fx and fy represent the focal lengths of the first depth camera in the horizontal and vertical directions.
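The conversion of claim 3 is the standard pinhole back-projection and translates directly into code; the function name is assumed for illustration:

```python
def pixel_to_camera(x_pixel, y_pixel, depth, fx, fy, ppx, ppy):
    """Back-project a pixel with known depth into the first depth
    camera's 3-D coordinate frame (pinhole camera model)."""
    X = (x_pixel - ppx) * depth / fx
    Y = (y_pixel - ppy) * depth / fy
    Z = depth
    return X, Y, Z
```

A pixel at the optical center maps to (0, 0, depth), i.e. a point straight along the optical axis.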
4. The intelligent vision-based in-vehicle inspection method according to claim 3, wherein the method of converting the first spatial physical coordinates into a moving distance for moving the base to a position facing the window according to the first spatial conversion relation comprises the steps of:
the view angle direction of the first depth camera is perpendicular to the guide direction of the linear guide rail;
Marking the starting point coordinate of the linear guide rail as (X0, Y0, Z0) and the first spatial physical coordinate as (Xi, Yi, Zi), the calculation formula of the moving distance di is:

di = Xi - X0

where the X axis of the first depth camera is parallel to the guide direction of the linear guide rail.
5. The intelligent vision-based in-vehicle inspection method according to claim 1, wherein the method of initial window detection comprises the steps of:
inputting the vehicle side image into a deep learning target detection model to acquire a vehicle window target;
cropping the window target area for preprocessing, and extracting the maximum window contour;
Calculating the area S of the contour area of the vehicle window;
filtering out window targets whose window contour area S is smaller than an area threshold T.
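The area filter of claim 5 can be sketched as follows, assuming contours are given as lists of (x, y) vertices; the shoelace formula stands in for whatever contour-area routine an implementation would use, and all names are illustrative:

```python
def polygon_area(contour):
    """Area S of a closed contour given as (x, y) vertices (shoelace formula)."""
    s = 0.0
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def filter_window_targets(contours, area_threshold):
    """Drop window candidates whose contour area S is below the threshold T."""
    return [c for c in contours if polygon_area(c) >= area_threshold]
```

The filter discards small spurious detections (e.g. mirrors or reflections mistaken for windows) before the base is moved.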
6. The intelligent vision-based in-vehicle inspection method according to claim 1, wherein the method of secondary window detection comprises the steps of:
Inputting the front image of the vehicle window into a deep learning target detection model to obtain a vehicle window target;
Detecting the distance between the center point of each window target and the center point of the front image of the window;
And selecting the window target with the smallest distance.
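The selection rule of claim 6 reduces to a nearest-to-center search over the detected bounding boxes; the (x, y, w, h) box format and the names are assumptions for illustration:

```python
import math

def select_center_window(window_boxes, image_width, image_height):
    """Pick the window whose bounding-box centre is nearest the image centre."""
    cx, cy = image_width / 2.0, image_height / 2.0
    def center_distance(box):
        x, y, w, h = box
        return math.hypot(x + w / 2.0 - cx, y + h / 2.0 - cy)
    return min(window_boxes, key=center_distance)
```

Because the movable base has already stopped facing the target window, the centre-most detection is the window to be inspected, and neighbouring windows caught at the image edges are ignored.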
7. The intelligent vision-based in-vehicle inspection method according to claim 6, further comprising the steps of, after selecting a window object having a smallest distance:
Inputting a window target with the minimum distance into the two classification models, and detecting the state of the window;
if the window target is detected to be in an open state, acquiring a second window pixel coordinate;
and if the window target is detected to be in the closed state, feeding back the window state to the system host.
8. The intelligent vision-based in-vehicle inspection method according to claim 7, further comprising the step of, after detecting that the window object is in a closed state:
the mechanical arm is further provided with a fourth infrared camera, and the mechanical arm drives the fourth infrared camera to the third space physical coordinate to perform in-vehicle snapshot detection to acquire a first in-vehicle infrared image;
turning on an infrared lamp at the other side of the vehicle;
the fourth infrared camera acquires a second in-vehicle infrared image at the same position;
performing differential processing on the first in-vehicle infrared image and the second in-vehicle infrared image to obtain an in-vehicle infrared differential image;
and performing violation detection on the in-vehicle infrared differential image.
9. The in-vehicle inspection method based on intelligent vision according to claim 1, wherein after the mechanical arm drives the third camera to the third space physical coordinate to perform in-vehicle snapshot detection, further comprising the steps of:
resetting the mechanical arm, controlling the moving base to the position facing the next window target based on the initial window detection of the first depth camera, and performing in-vehicle snapshot detection again.
10. An in-vehicle inspection system based on intelligent vision, comprising an inspection lane for acquiring the vehicle side image, the window front image, and the in-vehicle snapshot image, and a control host for executing the intelligent vision-based in-vehicle inspection method according to claim 1.
CN202410446423.XA 2024-04-15 In-vehicle checking method and system based on intelligent vision Active CN118038423B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410446423.XA CN118038423B (en) 2024-04-15 In-vehicle checking method and system based on intelligent vision


Publications (2)

Publication Number Publication Date
CN118038423A true CN118038423A (en) 2024-05-14
CN118038423B CN118038423B (en) 2024-07-30


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102136193A (en) * 2011-03-10 2011-07-27 Peking University Shenzhen Graduate School Image feature-based virtual coil snapshot system
CN205193907U (en) * 2015-11-04 2016-04-27 Hangzhou Langmi Technology Co., Ltd. Human-vehicle dual-verification gate system
WO2021120574A1 (en) * 2019-12-19 2021-06-24 Suzhou Zhijia Science & Technologies Co., Ltd. Obstacle positioning method and apparatus for autonomous driving system
CN114333126A (en) * 2020-03-09 2022-04-12 China Comservice Public Information Industry Co., Ltd. Intelligent human-vehicle checking system and method for inspection station
CN117409397A (en) * 2023-12-15 2024-01-16 Hebei Far East Communication System Engineering Co., Ltd. Real-time portrait comparison method, device and system based on position probability


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MIAO Zhonghua; CHEN Suyue; HE Chuangxin; JIN Chengxiong; MA Shiwei; XU Shuangxi: "Automatic Recognition and Positioning Method for the Trailer Bucket of a Forage Harvester Based on 3D Vision", Transactions of the Chinese Society for Agricultural Machinery, no. 05, 29 March 2019 (2019-03-29) *

Similar Documents

Publication Publication Date Title
US10827151B2 (en) Rear obstruction detection
CN108128245B (en) Vehicle environment imaging system and method
CN108759667B (en) Front truck distance measuring method under vehicle-mounted camera based on monocular vision and image segmentation
US9511723B2 (en) Method and device for an assistance system in a vehicle for performing an autonomous or semi-autonomous driving maneuver
US9573524B2 (en) Inspection device and method of head up display for vehicle
US9630477B2 (en) Device for preventing head lamp glare and method for preventing glare using the same
EP2924653A1 (en) Image processing apparatus and image processing method
CN108082083B (en) The display methods and display system and vehicle anti-collision system of a kind of occluded object
CN106651963B (en) A kind of installation parameter scaling method of the vehicle-mounted camera for driving assistance system
US11965836B2 (en) Assembly for detecting defects on a motor vehicle bodywork
CN107229906A (en) A kind of automobile overtaking's method for early warning based on units of variance model algorithm
US10001835B2 (en) Head up display automatic correction method and correction system
EP2717219A1 (en) Object detection device, object detection method, and object detection program
US11535242B2 (en) Method for detecting at least one object present on a motor vehicle, control device, and motor vehicle
US10657396B1 (en) Method and device for estimating passenger statuses in 2 dimension image shot by using 2 dimension camera with fisheye lens
CN109703465B (en) Control method and device for vehicle-mounted image sensor
US20140169624A1 (en) Image based pedestrian sensing apparatus and method
CN118038423B (en) In-vehicle checking method and system based on intelligent vision
CN111144415B (en) Detection method for tiny pedestrian target
CN118038423A (en) In-vehicle checking method and system based on intelligent vision
JP2006069549A (en) Automatic oiling device
EP4177694A1 (en) Obstacle detection device and obstacle detection method
CN111192290B (en) Blocking processing method for pedestrian image detection
CN118038424B (en) Quick clearance checking method for vehicle
CN114967665A (en) Parking robot and obstacle detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant