US10867401B2 - Method and device for the estimation of car ego-motion from surround view images - Google Patents
Method and device for the estimation of car ego-motion from surround view images
- Publication number
- US10867401B2 (application US15/678,774, US201715678774A)
- Authority
- US
- United States
- Prior art keywords
- sequence
- motion
- images
- ego
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Links
- 238000000034 method Methods 0.000 title claims abstract description 33
- 230000003287 optical effect Effects 0.000 claims abstract description 36
- 230000033001 locomotion Effects 0.000 claims description 42
- 239000013598 vector Substances 0.000 claims description 34
- 230000000873 masking effect Effects 0.000 claims 3
- 230000001133 acceleration Effects 0.000 description 10
- 238000012545 processing Methods 0.000 description 9
- 238000004422 calculation algorithm Methods 0.000 description 6
- 230000009466 transformation Effects 0.000 description 5
- 230000000007 visual effect Effects 0.000 description 5
- 238000001514 detection method Methods 0.000 description 4
- 238000005259 measurement Methods 0.000 description 4
- 239000000203 mixture Substances 0.000 description 4
- 230000008901 benefit Effects 0.000 description 2
- 230000006870 function Effects 0.000 description 2
- 238000009499 grossing Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 230000036962 time dependent Effects 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 238000009795 derivation Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000008030 elimination Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 238000009472 formulation Methods 0.000 description 1
- 230000005484 gravity Effects 0.000 description 1
- 238000003702 image correction Methods 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 230000004807 localization Effects 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 238000010915 one-step procedure Methods 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 230000002085 persistent effect Effects 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 230000006641 stabilisation Effects 0.000 description 1
- 238000011105 stabilization Methods 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R11/00—Arrangements for holding or mounting articles, not otherwise provided for
- B60R11/04—Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/10—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to vehicle motion
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/0097—Predicting future conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/292—Multi-camera tracking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/97—Determining parameters from multiple pictures
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/10—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
- B60R2300/105—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo, light or radio wave sensitive means, e.g. infrared sensors
- B60W2420/403—Image sensing, e.g. optical camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30256—Lane; Road marking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30261—Obstacle
Definitions
- the disclosure relates to driver assistance systems, in particular to advanced driver assistance systems (ADAS).
- Many driver assistance systems use information about a car's position, orientation and motion state to assist the driver in various ways. This information may even be used to drive the vehicle autonomously.
- visual odometry can be used to determine a car's position.
- cameras are used to record input images and image corrections are applied.
- Features are detected, the features are matched across image frames and an optical flow field is constructed, for example by using a correlation to establish a correspondence between two images, by feature extraction and correlation or by constructing an optical flow field using the Lucas-Kanade method.
- Odometry errors are detected, the corresponding outliers are removed and the camera motion is estimated from the optical flow, for example using a Kalman filter or by minimizing a cost function that is based on geometric properties of the features.
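- As an illustration of such a pipeline, the sketch below detects features, tracks them into the next frame, and returns the matched pairs that make up a sparse flow field. This is a minimal sketch only, assuming OpenCV and illustrative parameter values; the patent does not prescribe a specific library.

```python
# Minimal sketch of feature detection, frame-to-frame matching and sparse
# optical-flow construction (Shi-Tomasi corners + pyramidal Lucas-Kanade).
# Library choice and all parameter values are assumptions for illustration.
import cv2
import numpy as np

def sparse_optical_flow(prev_gray, curr_gray):
    """Return matched point pairs (previous, current) forming the flow field."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return np.empty((0, 2)), np.empty((0, 2))
    p1, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, p0, None, winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1          # keep only successfully tracked features
    return p0[ok].reshape(-1, 2), p1[ok].reshape(-1, 2)
```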
- a multi-camera top view vision system generates a stitched virtual top-view image.
- Stein et al. [4] propose a single-camera application in which the ego-motion of the vehicle is estimated consistently with a model of the road. Image features in the two images are combined in a global probability function that introduces a global constraint to cope with the aperture problem.
- Barreto et al. [5] describe a visual control of robot motion using central catadioptric systems and present the Jacobian matrix linking the robot's joint velocities to image observations.
- the solution presented is treated as a least-squares problem, but the authors also define a state vector that can be used in an extended Kalman filter.
- Pink et al. [7] present a method for vehicle pose estimation and motion tracking using visual features. They assume an initial vehicle pose and then track the pose in geographical coordinates over time, using image data as the only input. They track the vehicle position based on the Ackermann model.
- Lee et al. [9] present a visual ego-motion estimation algorithm for a self-driving car. They model a multi-camera system as a generalized camera and apply the nonholonomic motion constraint of a car.
- Zucchelli et al. [10] provide a formulation of a constrained minimization problem for structure and motion estimation from optical flow, and present a solution of the optimization problem by Levenberg-Marquardt iteration and direct projection.
- an ego-motion of a vehicle is defined as the 3D motion of a camera relative to a fixed coordinate system of the environment, which is also known as a world coordinate system. Furthermore, ego-motion also refers to a two-dimensional motion in a given plane of the three-dimensional world coordinate system; this motion is also referred to as "2D ego-motion".
- the ego-motion is calculated from an optical flow.
- the optical flow is the apparent motion of an image caused by the relative motion between a camera and the scene, where "scene" refers to the objects in the surroundings of the car.
- an Ackermann model of the steering geometry is used to describe the vehicle motion, and an incremental pose update serves as a framework to integrate multiple sources of vehicle pose.
- the optical flow is calculated using features that are detected in an image frame of a sequence of images and then matched in a consecutive frame. This information is used to generate the optical flow field for the detected features in those two image frames, or consecutive images.
- the consecutive images are a projection of the three dimensional scene into a two-dimensional plane, which is also referred to as “viewport plane”.
- a model of the road may be used to simplify the estimation of the optical flow.
- the road forms a simple planar structure and can be represented by only three dominant parameters: the forward translation, the pitch, and the yaw.
- the ego-motion can also be estimated with sufficient accuracy without the use of a road model.
- a Horn-Schunck method is used to estimate the optical flow.
- a global constraint is introduced to solve the aperture problem and a road model is fitted to the flow fields to remove outliers.
- a four-camera setup of a surround view system is used to generate a surround view: the images of the four cameras are merged into a single projection onto a ground plane, which represents the street level and which is also referred to as a "top-down view".
- a 2D ego-motion is computed from an affine projection of the top-down view.
- Flow field outliers such as measurement errors or vectors of moving objects are filtered out using a suitable procedure, such as RANSAC.
- the projected view, which is an affine projection of the surround view onto the ground plane, is interpreted using a prior calibration, which provides depth and scale information.
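- As an illustration of such a projection, the sketch below warps one camera image to a top-down ground-plane view. The four point correspondences stand in for the prior calibration mentioned above; in a deployed system the homography would follow from the camera's intrinsic and extrinsic parameters. All values are assumptions.

```python
# Sketch of projecting a camera image onto the ground plane ("top-down view").
# The correspondences and output size are illustrative placeholders for a
# real extrinsic/intrinsic calibration.
import cv2
import numpy as np

# Pixel positions of four road markers in the camera image ...
image_pts = np.float32([[420, 710], [860, 705], [980, 1020], [300, 1030]])
# ... and their known positions on the ground plane (assumed scale: cm).
ground_pts = np.float32([[0, 0], [300, 0], [300, 200], [0, 200]])

H = cv2.getPerspectiveTransform(image_pts, ground_pts)

def to_ground_plane(frame):
    """Warp one camera frame into the common ground-plane view."""
    return cv2.warpPerspective(frame, H, (300, 200))
```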
- the structure of the scene is reconstructed using structure-from-motion algorithms, which give an explicit reconstruction of the observed scene and thereby provide an estimate of object distances.
- the motion is filtered in order to obtain a consistent position over time.
- the tracking process estimates the real position of the vehicle with a consistent movement model.
- an Ackermann steering model is used as movement model to represent a vehicle with an Ackermann steering geometry.
- the Ackermann model is combined with multiple odometric measurements, such as GPS measurement, vehicle sensors, etc.
- One aspect of the disclosure provides a method for determining an ego-motion of a motor vehicle, such as a passenger car, a utility vehicle or a minibus.
- a front view camera records a first sequence of consecutive images, a left side view camera records a second sequence of consecutive images, a right side view camera records a third sequence of consecutive images, and a rear view camera records a fourth sequence of consecutive images.
- the first, second, third and fourth image sequences each include at least two consecutive images.
- the image sequences are transferred to a computational unit of the motor vehicle.
- the computational unit merges the first sequence of consecutive images, the second sequence of consecutive images, the third sequence of consecutive images, and the fourth sequence of consecutive images to obtain a sequence of merged images.
- the merged images correspond to a surround view, or 360° view, of the vehicle's surroundings at a given time.
- the respective images and the view fields of adjacent cameras overlap at least partially.
- the images can be merged by matching brightness values at the level of individual pixels, correlating the brightness of corresponding pixels.
- higher level features such as lines or edges or regions of high contrast or brightness gradient of images from adjacent cameras are matched to each other.
- the images may be merged according to a field of view, position and orientation of the cameras.
- the images of the sequence of merged images, or patches thereof, are projected to a ground plane using an affine projection or transformation, thereby providing a sequence of projected images. Furthermore, a two-dimensional optical flow is determined, based on the sequence of projected images.
- the optical flow includes motion vectors of target objects in the surroundings of the vehicle.
- an optical flow at a given time is provided by comparing two projected images, which are consecutive in time.
- An ego-motion of the vehicle is derived based on the optical flow. For example, it is derived by comparing projected images of a first and of a second time and by determining the amount by which a pixel or a group of pixels corresponding to an object in the surroundings has moved.
- the ego-motion may be derived from the individual camera images of the surround view system or from the merged image of all cameras of the surround view system.
- a kinematic state of the vehicle such as a position, a speed or a movement is determined based on the ego-motion of the vehicle.
- the kinematic state may be determined with respect to a previous position of the car, to a fixed coordinate system, to an object in the surroundings, or to the instantaneous center of curvature.
- the derivation of the ego-motion includes deriving an angular velocity of the vehicle around an instantaneous center of curvature from the optical flow and using the derived angular velocity to derive a velocity of the vehicle, and in particular to derive a velocity of a center of gravity of the vehicle in a plane that is parallel to a ground plane using an Ackermann steering model.
- the determination of the ego-motion includes deriving a current position vector of a target object on a ground plane and a current velocity relative to the target object using a previous position of the target object, a previous velocity relative to the target object, and an angular velocity with respect to a rotation around an instantaneous center of curvature with respect to a yaw motion of the vehicle.
- the Ackermann steering model is used to derive an angular velocity of a yaw motion of the vehicle around an instantaneous center of curvature from a wheel speed and a steering angle.
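- For illustration, a minimal sketch of this derivation under a bicycle-model approximation of the Ackermann geometry is given below; function and variable names are assumptions.

```python
import math

def ackermann_yaw_rate(wheel_speed, steering_angle, wheelbase):
    """Yaw rate about the instantaneous center of curvature.
    Bicycle-model approximation of the Ackermann geometry, assuming no
    tire slip: turn radius R = B / tan(delta), yaw rate omega = v / R."""
    if abs(steering_angle) < 1e-6:
        return 0.0                    # straight driving: the ICC is at infinity
    turn_radius = wheelbase / math.tan(steering_angle)
    return wheel_speed / turn_radius
```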
- the obtained angular speed can be merged with the derived ego-motion in an incremental pose update, and it can be used as a further input to a prediction filter, such as a Kalman filter.
- filters such as a recursive double least squares estimator or a double exponential smoothing filter or other smoothing filters, such as various types of low pass filters for digital signal processing, may be used as well.
- kinematic states of the vehicle which are obtained from different sources, such as the derived vehicle ego-motion, vehicle sensors and a GPS system, are used as an input to the same prediction filter, or they are used as inputs to different prediction filters and the resulting outputs of the different prediction filters are combined to form an estimate of the kinematic state of the vehicle.
- the different sources of vehicle motion can be merged or combined in a probabilistic framework. A likelihood of being correct is determined for each source given a previous measurement, and the pose is then updated with the most likely source.
- the different sources of vehicle motion are mixed in a Gaussian mixture model.
- deriving the ego-motion from the optical flow includes applying a random sample consensus (RANSAC) procedure to motion vectors, which may be motion vectors of the optical flow or ego-motion vectors.
- the RANSAC procedure may be applied before and/or after applying a prediction filter, such as a Kalman filter.
- a model is fitted by regression to a subset of the data and the quality of the model is evaluated by counting the data points that are inliers to the model. The process is repeated until the solution reaches a pre-determined statistical significance.
- a sample subset containing minimal data items is randomly selected from the input dataset in a first step.
- a fitting model and the corresponding model parameters are computed using only the elements of this sample subset.
- the size of the sample subset is the smallest sufficient to determine the model parameters.
- the algorithm checks which elements of the entire dataset are consistent with the model instantiated by the estimated model parameters obtained from the first step. A data element will be considered as an outlier if it does not fit the fitting model instantiated by the set of estimated model parameters within some error threshold that defines the maximum deviation attributable to the effect of noise.
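- A hedged sketch of this loop, applied to matched point pairs from the flow field, is shown below: a 2D rigid motion (rotation plus translation) is fitted to a minimal sample of two pairs, inliers are counted against an assumed noise threshold, and the model is refitted on the best consensus set.

```python
# Illustrative RANSAC over flow-field correspondences p[i] -> q[i].
# The iteration count and inlier threshold are assumptions.
import numpy as np

def fit_rigid_2d(p, q):
    """Least-squares rigid transform with q ~ R @ p + t (2D Kabsch)."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    U, _, Vt = np.linalg.svd((p - cp).T @ (q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # enforce a proper rotation (det = +1)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def ransac_rigid_2d(p, q, iters=200, thresh=2.0, seed=0):
    rng = np.random.default_rng(seed)
    best = np.zeros(len(p), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(p), size=2, replace=False)   # minimal sample
        R, t = fit_rigid_2d(p[idx], q[idx])
        err = np.linalg.norm(q - (p @ R.T + t), axis=1)
        inliers = err < thresh        # consistent with the instantiated model
        if inliers.sum() > best.sum():
            best = inliers
    R, t = fit_rigid_2d(p[best], q[best])                 # refit on consensus set
    return R, t, best
```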
- the determination of the ego-motion includes deriving motion vectors of individual target objects from the optical flow and deriving a vector of ego-motion, also referred to as an average motion vector, from those motion vectors.
- a prediction filter such as a Kalman filter is applied to the vector of ego-motion for predicting a future vector of ego-motion or a future position of the vehicle for tracking the vehicle's position.
- an input to the prediction filter is derived from one or more vectors of ego-motion and motion sensor values, such as wheel speed sensor, acceleration sensor and GPS system output.
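- A minimal constant-velocity Kalman filter for such position tracking is sketched below; the state layout, time step and noise covariances are assumptions that would be tuned against the actual sensor set.

```python
# Constant-velocity Kalman filter over the planar vehicle position.
# State x = [px, py, vx, vy]; only the position is measured.
import numpy as np

dt = 1.0 / 30.0                               # assumed frame period
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)    # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)     # position-only measurement
Q = 1e-3 * np.eye(4)                          # process noise (assumed)
R = 1e-2 * np.eye(2)                          # measurement noise (assumed)

x, P = np.zeros(4), np.eye(4)

def kalman_step(z):
    """One predict/update cycle for a measured position z = (px, py)."""
    global x, P
    x = F @ x                                 # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)                   # update with the measurement
    P = (np.eye(4) - K @ H) @ P
    return x.copy()
```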
- Image regions may be detected that correspond to objects that are not located at a ground level and the detected image regions are disregarded or masked out.
- Another aspect of the disclosure provides a computer program product, such as an executable file in a persistent memory, such as a memory stick, a hard-disk or a DVD, or in volatile memory, such as a computer RAM.
- the executable file or executable code causes a processing unit to execute one of the preceding methods when it is loaded into a program memory of a processor.
- the ego-motion detection system includes a computation unit that has a first input connection for receiving data from a front view camera, a second input connection for receiving data from a right side view camera, a third input connection for receiving data from a left side view camera, and a fourth input connection for receiving data from a rear view camera.
- the four input connections may also be realized by a single input connection, for example, if image data from the respective cameras is transmitted in alternating time slices or alternating data chunks.
- the camera data may be transmitted via cables of a data bus.
- the computation unit includes a processing unit, such as a microprocessor with a computer memory, which is operative to obtain a first sequence of consecutive images from the front view camera, a second sequence of consecutive images from the left side view camera, a third sequence of consecutive images from the right side view camera, and a fourth sequence of consecutive images from the rear view camera via the respective input connections.
- the camera or the cameras may comprise a camera processing unit for basic image processing.
- the camera processing unit is different from the main processing unit that does the ego-motion calculation.
- the processing unit is operative to merge the first sequence of consecutive images, the second sequence of consecutive images, the third sequence of consecutive images, and the fourth sequence of consecutive images to obtain a sequence of merged images, and to provide a virtual projection of the images of the sequence of merged images or patches thereof to a ground plane using an affine projection or transformation thereby obtaining a sequence of projected images.
- a virtual projection refers to the operation of mapping the content of a first memory area to the content of a second memory area according to a transformation algorithm of the projection.
- the processing unit is operative to determine an optical flow, based on the sequence of projected images, to determine an ego-motion of the vehicle based on the optical flow and to predict a kinematic state of the car based on the ego-motion.
- the optical flow includes motion vectors of target objects in the surroundings of the vehicle.
- the disclosure further provides the aforementioned ego-motion detection system with a front view camera connected to the first input, a right side view camera connected to the second input, a left side view camera connected to the third input, and a rear view camera connected to the fourth input.
- the disclosure provides a car or a motor vehicle with the aforementioned ego-motion detection system, wherein the front view camera is provided at a front side of the car, the right side view camera is provided at a right side of the car, the left side view camera is provided at a left side of the car, and the rear view camera is provided at a rear side of the car.
- FIG. 1 shows a car with a surround view system.
- FIG. 2 illustrates a car motion of the car of FIG. 1 around an instantaneous center of rotation.
- FIG. 3 illustrates a projection to a ground plane of an image point recorded with the surround view system of FIG. 1.
- FIG. 4 illustrates in further detail the ground plane projection of FIG. 3.
- FIG. 5 shows a procedure for deriving an ego-motion of the car.
- FIG. 1 shows a car 10 with a surround view system 11.
- the surround view system 11 includes a front view camera 12, a right side view camera 13, a left side view camera 14 and a rear view camera 15.
- the cameras 12-15 are connected to a CPU of a controller, which is not shown in FIG. 1.
- the controller is connected to further sensors and units, such as a velocity sensor, a steering angle sensor, a GPS unit, and acceleration and orientation sensors.
- FIG. 2 illustrates a car motion of the car 10.
- a wheel base B of the car and a wheel track L are indicated.
- the car 10 is designed according to an Ackermann steering geometry, in which the orientation of the steerable front wheels is adjusted such that all four wheels of the vehicle are oriented tangentially to circles around the instantaneous center of rotation.
- An instantaneous center of curvature "ICC" is in register with the rear axle of the car 10 at a distance R, where R is the radius of the car's instantaneous rotation with respect to the yaw movement.
- a two-dimensional vehicle coordinate system is indicated, which is fixed to a reference point of the car and aligned along a longitudinal and a lateral axis of the car.
- a location of the instantaneous center of curvature relative to the vehicle coordinate system is indicated by a vector $\vec{P}_{ICC}$.
- an angle ⁇ between an inner rear wheel, the instant center of curvature and an inner front wheel is equal to a steering angle ⁇ of the inner front wheel.
- inner wheel refers to the respective wheel that is closer to the center of curvature.
- a motion of the inner front wheel relative to a ground plane is indicated by a letter v.
- FIG. 3 shows a projection of an image point to a ground plane 16.
- An angle of inclination β relative to the vertical may be estimated from the location of the image point on the image sensor of the right side view camera 13. If the image point corresponds to a feature of the road, the location of the corresponding object point is the projection of the image point onto the ground plane.
- the camera 13 has an elevation H above the ground plane. Consequently, the corresponding object point is located at a horizontal distance H·tan(β) from the right side of the car 10.
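- As a worked example with assumed values, a camera elevation of $H = 0.9\,\mathrm{m}$ and an inclination of $\beta = 60^\circ$ from the vertical give a lateral offset of

$$d = H\tan\beta = 0.9\,\mathrm{m} \times \tan 60^\circ \approx 1.56\,\mathrm{m}$$

from the side of the car.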
- $X_k$, $X_{k-1}$ refer to positions of the car relative to the vehicle coordinate system of FIG. 2, which is fixed to the car 10, where the positions $X_k$, $X_{k-1}$ of the car 10 are evaluated at times $k\,\Delta t$ and $(k-1)\,\Delta t$, respectively, and where the position of the vehicle coordinate system is evaluated at time $(k-1)\,\Delta t$.
- Equation (3) is used in equations (2), (2a). In equations (1)-(2a) the vector arrows have been omitted for easier reading.
- a vehicle position $X_k'$ relative to a fixed reference frame is derived from the vector $X_k$ and a location R of the vehicle coordinate system relative to the fixed reference frame.
- the movement of the vehicle coordinate system may be derived using GPS and/or other sensors, such as a wheel speed sensor, a steering angle sensor, acceleration and orientation sensors.
- the first term on the right hand side of equation (4) is also referred to as “Euler acceleration” and the second term is also referred to as “Coriolis acceleration”. Under the assumption that the car stays on track, the centrifugal acceleration is compensated by the car's tires and does not contribute to the vehicle motion.
- the angular velocity ⁇ is time dependent.
- the angular velocity ⁇ at time (k ⁇ 1)* ⁇ t is used in a computation according to equations (2), (2a) or (3).
- a mean velocity v between times (k ⁇ 2)* ⁇ t and (k ⁇ 1)* ⁇ t can be derived from the comparison of two subsequent projections of camera images.
- the mean velocity is used as the instant velocity at time (k ⁇ 1)* ⁇ t.
- the angular velocity ⁇ is derived from the steering angles of the front wheels and a rotation speed of a front wheel using an Ackermann steering model.
- the Ackermann steering model gives a good approximation for a car steering with Ackermann geometry, especially for slow velocities when there is little or no slip between the tires and the road.
- the steering angle of the front wheels may in turn be derived from an angular position of the steering column and the known lateral distance L between the front wheels.
- the ego-motion which is derived from the image sequences of the vehicle cameras is used to derive the angular velocity ⁇ .
- the instantaneous position may be computed using input from further odometric sensors, such as a GPS system, speed and acceleration sensors of the vehicle or other kinds of odometric sensors.
- GPS position values can be used to correct a drift from the true position.
- the ego-motion is estimated from an affine projection or transformation to a ground plane, where the images of the cameras of the surround view system are merged into the projection to the ground plane.
- FIGS. 3 and 4 show a projection to a ground plane 16.
- the image point may be projected to a location of the corresponding object point on the ground plane.
- An angle ⁇ of incidence is derived from a location of the image point on the camera sensor.
- FIG. 4 shows an isometric view of the affine projection of FIG. 3.
- a distance between the viewport plane 17 and a projection center C is denoted by the letter "f".
- the camera image is evaluated and the observed scene is reconstructed.
- a sidewalk is detected and its height estimated.
- Stationary objects, such as a lamp post or a tree, may be detected and their orientation relative to the ground plane estimated.
- Objects which are not located at street level and/or which have a proper motion may distort the optical flow and lead to inaccuracies in the derived ego-motion.
- the optical flow vectors resulting from such objects are filtered out using a RANSAC (random sample consensus) procedure in which outliers are suppressed.
- a road border is recognized using edge recognition and a digital map, which is stored in a computer memory of the car 10.
- roll and pitch motions of the car are determined, for example by using acceleration and/or orientation sensors of the car, and the ego-motion vectors are corrected by subtracting or compensating the roll and pitch motions.
- the derived ego-motion is used for a lane-keeping application or for other electronic stabilization applications.
- FIG. 5 shows, by way of example, a procedure for obtaining an ego-motion.
- in a step 30, camera images are acquired from the cameras 12-15.
- the camera images are merged into a combined image in a step 31.
- an image area is selected for the determination of ego-motion. For example, image areas that correspond to objects outside a street zone, such as buildings and other installations, may be clipped.
- the image points are projected to a ground surface, for example, by applying an affine transformation or a perspective projection.
- corresponding image points are identified in consecutive images.
- optical flow vectors are derived by comparing the locations of the corresponding image points, for example by computing the difference vector between the position vectors of the corresponding locations.
- a filter procedure is applied, such as a RANSAC procedure or another scheme for eliminating outliers and interpolating, or a Kalman filter.
- the filtering may involve storing image values, such as image point brightness values, of a given time window in computer memory and computing an average of the image values.
- an ego-motion vector of the car is derived from the optical flow.
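- A condensed sketch of these last steps, deriving a per-frame ego-motion vector from the matched ground-plane points, is given below; the use of a median and the sign convention are illustrative assumptions.

```python
# Flow vectors from matched ground-plane points and a robust ego-motion
# estimate: the median suppresses residual outliers, and the sign flip
# reflects that the scene appears to move opposite to the ego-vehicle.
import numpy as np

def ego_motion_vector(pts_prev, pts_curr):
    flow = pts_curr - pts_prev                # one motion vector per point
    scene_motion = np.median(flow, axis=0)    # robust average of the flow field
    return -scene_motion                      # car motion is minus scene motion
```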
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mechanical Engineering (AREA)
- Multimedia (AREA)
- Automation & Control Theory (AREA)
- Transportation (AREA)
- Mathematical Physics (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
- Image Processing (AREA)
- Navigation (AREA)
Abstract
Description
$$X_k = X_{k-1} + \dot{X}_{k-1}\,\Delta t \quad (1)$$

$$\dot{X}_k = \omega \times X_{k-1} + \dot{X}_{k-1}\,\Delta t \quad (2)$$

or

$$X_k = X_{k-1} + \dot{X}_{k-1} \quad (1a)$$

$$\dot{X}_k = \omega \times X_{k-1} + \dot{X}_{k-1} \quad (2a)$$

for time units in which $\Delta t = 1$. Herein, $X_k$, $X_{k-1}$ refer to positions of the car relative to the vehicle coordinate system of FIG. 2.
$$\vec{V}_{car} = -\vec{\omega} \times \vec{P}_{ICC} \quad (3)$$

where $\vec{\omega}$ is the vector of instantaneous rotation and $\vec{P}_{ICC}$ is the position of the instantaneous center of curvature relative to the vehicle coordinate system. The relationship according to equation (3) is used in equations (2), (2a). In equations (1)-(2a) the vector arrows have been omitted for easier reading.
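A quick numerical illustration of equation (3), with assumed values:

```python
# Equation (3): vehicle velocity induced by a yaw rotation about the ICC.
import numpy as np

omega = np.array([0.0, 0.0, 0.2])   # yaw rate of 0.2 rad/s about the vertical axis
P_ICC = np.array([0.0, 5.0, 0.0])   # ICC assumed 5 m to the left of the reference point

V_car = -np.cross(omega, P_ICC)     # -> [1., 0., 0.]: 1 m/s forward, matching
                                    # circular motion with v = omega * R
```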
$$\ddot{X}_k = \zeta \times X_{k-1} + \omega \times (\omega \times X_{k-1}) + \ddot{X}_{k-1} \quad (4)$$

where $\zeta$ is related or proportional to the time derivative of the angular velocity $\omega$. The first term on the right-hand side of equation (4) is also referred to as the "Euler acceleration" and the second term as the "Coriolis acceleration". Under the assumption that the car stays on track, the centrifugal acceleration is compensated by the car's tires and does not contribute to the vehicle motion.
Claims (9)
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP15155191.8 | 2015-02-16 | ||
EP15155191.8A EP3057061B1 (en) | 2015-02-16 | 2015-02-16 | Method and device for the estimation of car egomotion from surround view images |
EP15155191 | 2015-02-16 | ||
PCT/EP2016/050937 WO2016131585A1 (en) | 2015-02-16 | 2016-01-19 | Method and device for the estimation of car egomotion from surround view images |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2016/050937 Continuation WO2016131585A1 (en) | 2015-02-16 | 2016-01-19 | Method and device for the estimation of car egomotion from surround view images |
Publications (2)
Publication Number | Publication Date |
---|---|
US20170345164A1 US20170345164A1 (en) | 2017-11-30 |
US10867401B2 true US10867401B2 (en) | 2020-12-15 |
Family
ID=52472232
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/678,774 Active 2036-09-12 US10867401B2 (en) | 2015-02-16 | 2017-08-16 | Method and device for the estimation of car ego-motion from surround view images |
Country Status (7)
Country | Link |
---|---|
US (1) | US10867401B2 (en) |
EP (1) | EP3057061B1 (en) |
JP (1) | JP6620153B2 (en) |
KR (1) | KR102508843B1 (en) |
CN (1) | CN107111879B (en) |
DE (1) | DE112016000187T5 (en) |
WO (1) | WO2016131585A1 (en) |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019531560A (en) | 2016-07-05 | 2019-10-31 | ナウト, インコーポレイテッドNauto, Inc. | Automatic driver identification system and method |
JP2019527832A (en) | 2016-08-09 | 2019-10-03 | ナウト, インコーポレイテッドNauto, Inc. | System and method for accurate localization and mapping |
US10733460B2 (en) | 2016-09-14 | 2020-08-04 | Nauto, Inc. | Systems and methods for safe route determination |
US9928432B1 (en) | 2016-09-14 | 2018-03-27 | Nauto Global Limited | Systems and methods for near-crash determination |
EP3535646A4 (en) | 2016-11-07 | 2020-08-12 | Nauto, Inc. | System and method for driver distraction determination |
DE102017200278A1 (en) * | 2017-01-10 | 2018-07-12 | Volkswagen Aktiengesellschaft | Method for determining a direction of movement of a camera |
WO2018229549A2 (en) | 2017-06-16 | 2018-12-20 | Nauto Global Limited | System and method for digital environment reconstruction |
US10453150B2 (en) | 2017-06-16 | 2019-10-22 | Nauto, Inc. | System and method for adverse vehicle event determination |
US10430695B2 (en) | 2017-06-16 | 2019-10-01 | Nauto, Inc. | System and method for contextualized vehicle operation determination |
DE102017223325A1 (en) * | 2017-12-20 | 2019-06-27 | Conti Temic Microelectronic Gmbh | Method and device for merging measurements from different sources of information |
US10545506B2 (en) | 2018-02-14 | 2020-01-28 | Ford Global Technologies, Llc | Methods and apparatus to perform visual odometry using a vehicle camera system |
US11392131B2 (en) | 2018-02-27 | 2022-07-19 | Nauto, Inc. | Method for determining driving policy |
CN109189060B (en) * | 2018-07-25 | 2021-01-12 | 博众精工科技股份有限公司 | Point stabilization control method and device for mobile robot |
CN109300143B (en) * | 2018-09-07 | 2021-07-27 | 百度在线网络技术(北京)有限公司 | Method, device and equipment for determining motion vector field, storage medium and vehicle |
CN112930557A (en) * | 2018-09-26 | 2021-06-08 | 相干逻辑公司 | Any world view generation |
KR102559203B1 (en) * | 2018-10-01 | 2023-07-25 | 삼성전자주식회사 | Method and apparatus of outputting pose information |
KR102270799B1 (en) * | 2018-12-07 | 2021-06-29 | 아진산업(주) | Around View Monitor System with camera blind spot restoration |
JP7374602B2 (en) * | 2019-03-29 | 2023-11-07 | 日立建機株式会社 | work vehicle |
US10949685B2 (en) * | 2019-07-22 | 2021-03-16 | Caterpillar Inc. | Excluding a component of a work machine from a video frame based on motion information |
CN112572462B (en) * | 2019-09-30 | 2022-09-20 | 阿波罗智能技术(北京)有限公司 | Automatic driving control method and device, electronic equipment and storage medium |
CN111862210B (en) * | 2020-06-29 | 2023-05-12 | 辽宁石油化工大学 | Object detection and positioning method and device based on looking-around camera |
DE102020120713A1 (en) | 2020-08-05 | 2022-02-10 | Ford Global Technologies Llc | Method for operating a high-beam assistance system of a motor vehicle and motor vehicle |
DE102020213855A1 (en) | 2020-11-04 | 2022-05-05 | Robert Bosch Gesellschaft mit beschränkter Haftung | Method, computer program, storage medium and control unit for object recognition |
KR102650927B1 (en) | 2021-10-18 | 2024-03-26 | 서울대학교 산학협력단 | Radar-Based Ego-Motion Estimation of Autonomous Robot for Simultaneous Localization and Mapping |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001266160A (en) | 2000-03-22 | 2001-09-28 | Toyota Motor Corp | Method and device for recognizing periphery |
JP2004056763A (en) | 2002-05-09 | 2004-02-19 | Matsushita Electric Ind Co Ltd | Monitoring apparatus, monitoring method, and program for monitor |
US20040247352A1 (en) | 2003-06-06 | 2004-12-09 | Oki Data Corporation | Fixing apparatus |
US20060140447A1 (en) * | 2004-12-28 | 2006-06-29 | Samsung Electronics Co., Ltd. | Vehicle-monitoring device and method using optical flow |
US20080319664A1 (en) * | 2007-06-25 | 2008-12-25 | Tidex Systems Ltd. | Navigation aid |
US20100027844A1 (en) * | 2007-01-30 | 2010-02-04 | Aisin Seiki Kabushiki Kaisha | Moving object recognizing apparatus |
US20100220190A1 (en) | 2009-02-27 | 2010-09-02 | Hyundai Motor Japan R&D Center, Inc. | Apparatus and method for displaying bird's eye view image of around vehicle |
JP2011013978A (en) | 2009-07-02 | 2011-01-20 | Kyushu Institute Of Technology | Method and apparatus for detecting object based on estimation of background image |
JP2011070580A (en) | 2009-09-28 | 2011-04-07 | Mitsubishi Motors Corp | Driving support device |
US20110169957A1 (en) * | 2010-01-14 | 2011-07-14 | Ford Global Technologies, Llc | Vehicle Image Processing Method |
US20110234800A1 (en) * | 2010-03-23 | 2011-09-29 | Kabushiki Kaisha Toshiba | Image processing device and image processing system |
CN102654917A (en) | 2011-04-27 | 2012-09-05 | 清华大学 | Method and system for sensing motion gestures of moving body |
EP2511137A1 (en) | 2011-04-14 | 2012-10-17 | Harman Becker Automotive Systems GmbH | Vehicle Surround View System |
CN102999759A (en) | 2012-11-07 | 2013-03-27 | 东南大学 | Light stream based vehicle motion state estimating method |
CN104104915A (en) | 2014-07-21 | 2014-10-15 | 四川沛阳科技有限公司 | Multifunctional driving monitoring early warning system based on mobile terminal |
CN104318561A (en) | 2014-10-22 | 2015-01-28 | 上海理工大学 | Method for detecting vehicle motion information based on integration of binocular stereoscopic vision and optical flow |
US20160001704A1 (en) * | 2013-03-28 | 2016-01-07 | Aisin Seiki Kabushiki Kaisha | Surroundings-monitoring device and computer program product |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10179543B2 (en) | 2013-02-27 | 2019-01-15 | Magna Electronics Inc. | Multi-camera dynamic top view vision system |
-
2015
- 2015-02-16 EP EP15155191.8A patent/EP3057061B1/en active Active
-
2016
- 2016-01-19 JP JP2017530610A patent/JP6620153B2/en active Active
- 2016-01-19 DE DE112016000187.8T patent/DE112016000187T5/en active Pending
- 2016-01-19 KR KR1020177018249A patent/KR102508843B1/en active IP Right Grant
- 2016-01-19 WO PCT/EP2016/050937 patent/WO2016131585A1/en active Application Filing
- 2016-01-19 CN CN201680005504.XA patent/CN107111879B/en active Active
-
2017
- 2017-08-16 US US15/678,774 patent/US10867401B2/en active Active
Patent Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001266160A (en) | 2000-03-22 | 2001-09-28 | Toyota Motor Corp | Method and device for recognizing periphery |
JP2004056763A (en) | 2002-05-09 | 2004-02-19 | Matsushita Electric Ind Co Ltd | Monitoring apparatus, monitoring method, and program for monitor |
US20040247352A1 (en) | 2003-06-06 | 2004-12-09 | Oki Data Corporation | Fixing apparatus |
US20060140447A1 (en) * | 2004-12-28 | 2006-06-29 | Samsung Electronics Co., Ltd. | Vehicle-monitoring device and method using optical flow |
US20100027844A1 (en) * | 2007-01-30 | 2010-02-04 | Aisin Seiki Kabushiki Kaisha | Moving object recognizing apparatus |
US20080319664A1 (en) * | 2007-06-25 | 2008-12-25 | Tidex Systems Ltd. | Navigation aid |
US20100220190A1 (en) | 2009-02-27 | 2010-09-02 | Hyundai Motor Japan R&D Center, Inc. | Apparatus and method for displaying bird's eye view image of around vehicle |
JP2011013978A (en) | 2009-07-02 | 2011-01-20 | Kyushu Institute Of Technology | Method and apparatus for detecting object based on estimation of background image |
JP2011070580A (en) | 2009-09-28 | 2011-04-07 | Mitsubishi Motors Corp | Driving support device |
US20110169957A1 (en) * | 2010-01-14 | 2011-07-14 | Ford Global Technologies, Llc | Vehicle Image Processing Method |
US20110234800A1 (en) * | 2010-03-23 | 2011-09-29 | Kabushiki Kaisha Toshiba | Image processing device and image processing system |
EP2511137A1 (en) | 2011-04-14 | 2012-10-17 | Harman Becker Automotive Systems GmbH | Vehicle Surround View System |
US20120262580A1 (en) | 2011-04-14 | 2012-10-18 | Klaus Huebner | Vehicle Surround View System |
JP2012227913A (en) | 2011-04-14 | 2012-11-15 | Harman Becker Automotive Systems Gmbh | Vehicle surround view system |
CN102654917A (en) | 2011-04-27 | 2012-09-05 | 清华大学 | Method and system for sensing motion gestures of moving body |
CN102999759A (en) | 2012-11-07 | 2013-03-27 | 东南大学 | Light stream based vehicle motion state estimating method |
US20160001704A1 (en) * | 2013-03-28 | 2016-01-07 | Aisin Seiki Kabushiki Kaisha | Surroundings-monitoring device and computer program product |
CN104104915A (en) | 2014-07-21 | 2014-10-15 | 四川沛阳科技有限公司 | Multifunctional driving monitoring early warning system based on mobile terminal |
CN104318561A (en) | 2014-10-22 | 2015-01-28 | 上海理工大学 | Method for detecting vehicle motion information based on integration of binocular stereoscopic vision and optical flow |
Non-Patent Citations (19)
Title |
---|
Andrea Giachetti et al: "The Use of Optical Flow for Road Navigation" IEEE Transactions on Robotics and Automation, vol. 14 No. 1, Feb. 1, 1998, p. 34-47. |
- Barreto, Joao P. et al, "Visual Servoing/Tracking Using Central Catadioptric Images", Int. Symposium on Experimental Robotics, Advanced Robotics Series, 2002.
Cheda, D et al, "Camera Egomotion Estimation in the ADAS Context", Annual Conference on Intelligent Transportation Systems, 2010. |
Chinese Office Action dated Jul. 31, 2020 for the counterpart Chinese Patent Application 201680005504.X. |
European Search Report dated Sep. 17, 2015 for corresponding German Patent Application No. 15155191.8. |
Gillespie, Thomas D., "Fundamentals of Vehicle Dynamics". Society of Automotive Engineers, 1992. |
International Search Report and Written Opinion dated Apr. 29, 2016 from corresponding International Patent Application No. PCT/EP2016/050937. |
Japanese Office Action dated Aug. 14, 2019 for corresponding Japanese Patent Application No. 2017-530610. |
Jazar Reza N., "Vehicle Dynamics: Theory and Applications", Springer, Mar. 19, 2008. |
Kelly, Alonzo, "Essential Kinematics for Autonomous Vehicles", Robotics Institute, Carnegie Mellon University, 1994. |
Lee, Gim Hee, "Motion Estimation for Self-Driving Cars with a Generalized Camera", CVPR, 2013. |
- Nourani-Vatani, et al: "Practical Visual Odometry for Car-like Vehicles," 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, May 12-17, 2009.
Pink, Oliver et al, "Visual Features for Vehicle Localization and Ego-Motion Estimation", proceeding of: Intelligent Vehicles Symposium, 2009 IEEE. |
- Power, P. Wayne et al, "Understanding Background Mixture Models for Foreground Segmentation", Proceedings Image and Vision Computing New Zealand 2002.
Simon, Dan et al, "Optimal State Estimation: Kalman, H Infinity and Nonlinear Approaches", John Wiley & Sons, 2006. |
Stein, Gideon P. et al. "A Robust Method for Computing Vehicle Ego-motion", IEEE Intelligent Vehicles Symposium, 2000. |
Tsao, A.T., Fuh, C.S., Hung, Y.P., and Chen, Y.S.: Ego-motion estimation using optical flow fields observed from multiple cameras. In Proc. Computer Vision Pattern Recognition, pp. 457-462 (1997) (Year: 1997). * |
- Weinstein, Alejandro J. et al, "Pose Estimation of Ackerman Steering Vehicles for Outdoors Autonomous Navigation", Proceedings of 2010 IEEE International Conference on Industrial Automation, Valparaiso, Chile, Mar. 2010.
Zucchelli Marco, et al, "Constrained Structure and Motion Estimation from Optical Flow", ICPR 2002. |
Also Published As
Publication number | Publication date |
---|---|
EP3057061A1 (en) | 2016-08-17 |
CN107111879A (en) | 2017-08-29 |
DE112016000187T5 (en) | 2017-08-31 |
CN107111879B (en) | 2021-01-22 |
KR20170118040A (en) | 2017-10-24 |
JP6620153B2 (en) | 2019-12-11 |
WO2016131585A1 (en) | 2016-08-25 |
KR102508843B1 (en) | 2023-03-09 |
EP3057061B1 (en) | 2017-08-30 |
JP2018506768A (en) | 2018-03-08 |
US20170345164A1 (en) | 2017-11-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10867401B2 (en) | Method and device for the estimation of car ego-motion from surround view images | |
JP2018506768A5 (en) | ||
Qin et al. | Avp-slam: Semantic visual mapping and localization for autonomous vehicles in the parking lot | |
US11138751B2 (en) | Systems and methods for semi-supervised training using reprojected distance loss | |
Poddar et al. | Evolution of visual odometry techniques | |
Asvadi et al. | 3D object tracking using RGB and LIDAR data | |
Royer et al. | Monocular vision for mobile robot localization and autonomous navigation | |
US8134479B2 (en) | Monocular motion stereo-based free parking space detection apparatus and method | |
Parra et al. | Robust visual odometry for vehicle localization in urban environments | |
US20120308114A1 (en) | Voting strategy for visual ego-motion from stereo | |
Fraundorfer et al. | A constricted bundle adjustment parameterization for relative scale estimation in visual odometry | |
Tripathi et al. | Trained trajectory based automated parking system using visual SLAM on surround view cameras | |
WO2016170330A1 (en) | Processing a series of images to identify at least a portion of an object | |
EP3710985A1 (en) | Detecting static parts of a scene | |
de la Escalera et al. | Stereo visual odometry in urban environments based on detecting ground features | |
Hong et al. | Real-time mobile robot navigation based on stereo vision and low-cost GPS | |
Serov et al. | Visual-multi-sensor odometry with application in autonomous driving | |
Schamm et al. | Vision and ToF-based driving assistance for a personal transporter | |
US11514588B1 (en) | Object localization for mapping applications using geometric computer vision techniques | |
Sabatini et al. | Vision-based pole-like obstacle detection and localization for urban mobile robots | |
Kaneko et al. | Monocular depth estimation by two-frame triangulation using flat surface constraints | |
Aladem et al. | Evaluation of a Stereo Visual Odometry Algorithm for Passenger Vehicle Navigation | |
US20240144487A1 (en) | Method for tracking position of object and system for tracking position of object | |
Weydert | Model-based ego-motion and vehicle parameter estimation using visual odometry | |
Miksch et al. | Motion compensation for obstacle detection based on homography and odometric data with virtual camera perspectives |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: APPLICATIONS SOLUTIONS (ELECTRONIC AND VISION) LTD, UNITED KINGDOM Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GUERREIRO, RUI;PANAKOS, ANDREAS;SILVA, CARLOS;AND OTHERS;SIGNING DATES FROM 20170518 TO 20170530;REEL/FRAME:043309/0824
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
AS | Assignment |
Owner name: APPLICATION SOLUTIONS (ELECTRONIC AND VISION) LTD., UNITED KINGDOM Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE NAME PREVIOUSLY RECORDED AT REEL: 043309 FRAME: 0824. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNORS:GUERREIRO, RUI;PANAKOS, ANDREAS;SILVA, CARLOS;AND OTHERS;SIGNING DATES FROM 20170518 TO 20170530;REEL/FRAME:054302/0610 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |