WO2000037875A1 - Procede et systeme d'aide a la visee pour arme legere - Google Patents
Procede et systeme d'aide a la visee pour arme legere (Method and system for aiming assistance for a light weapon)
- Publication number
- WO2000037875A1 (PCT/FR1999/003185, FR9903185W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- sensor
- weapon
- images
- sensors
- Prior art date
Classifications
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F41—WEAPONS
- F41G—WEAPON SIGHTS; AIMING
- F41G3/00—Aiming or laying means
- F41G3/14—Indirect aiming means
- F41G3/16—Sighting devices adapted for indirect laying of fire
- F41G3/165—Sighting devices adapted for indirect laying of fire using a TV-monitor
Definitions
- the present invention relates to a sighting aid method for a light weapon, more precisely a sighting aid method for a shooter carrying such a weapon in his hand.
- the invention also relates to a sighting aid system for implementing such a method.
- a known solution consists in equipping the weapon with a laser pointer, that is to say with a collimated beam generator whose beam propagates parallel to the firing axis of the weapon, or line of sight.
- the impact on the target results in a small diameter spot of light.
- the wavelength used can also be in the non-visible range, for example in the infrared.
- the shooter is therefore no longer obliged to shoulder his weapon. It is enough for him to locate the position of the spot, either directly (visible range) or using special glasses (infrared), to obtain a good aim.
- the target can detect the radiation, either directly (at sight, for wavelengths in the visible), or using a detector appropriate to the wavelength used.
- the accuracy is limited by the visual discrimination of the spot, the dispersion of the beam and the limit of visual perception of the shooter.
- the invention aims to remedy the shortcomings of the methods and devices of the known art, some of which have just been mentioned. It makes it possible to obtain fire that is rapid, since it does not require shouldering the weapon, and yet precise, which appears contradictory a priori. It also ensures an aiming process that is not detectable by the opponent: the system remains entirely passive, that is to say it generates no radiated energy.
- the distances of use of the system according to the invention lie typically in a range greater than 25 m and less than 100 m.
- the typical dimensions of the targets are 1.5 m in the vertical dimension and 0.5 m in the horizontal dimension.
- the method according to the invention comprises the determination of the position of the line of sight of the weapon by correlation of images, and the restitution of an aiming symbol (for example a reticle materializing the line of sight) on a display member, advantageously of the helmet-mounted display type.
- the symbol can be displayed in an infinite collimated form. It can be superimposed on the scene observed by the shooter, in direct vision or through a camera.
- to determine the position of the line of sight, the method correlates an image obtained from a sensor carried by the weapon with an image obtained from a sensor carried by the shooter's head.
- the spatial orientation data of the two sensors are identified with respect to each other by auxiliary means and are used to estimate the deviations in orientation between the two images in order to facilitate the correlation of the images.
- the correlation of the images makes it possible to determine in the second image the position of the first image.
- the reticle thus determined by calculation is displayed on a display secured to the shooter's head, and therefore to the sensor of the second image. It is displayed at the position corresponding to its place in the second image, superimposed on the scene observed by the shooter.
- the shooter can then point his weapon at a target by aligning the reticle on this target without having to shoulder the weapon.
- the first sensor preferably has a smaller field than the second sensor.
- the system for implementing the method essentially comprises a first image sensor fixed on the weapon, a second image sensor fixed on the shooter's head, electronic image processing circuits making it possible to calculate the position of the field of the first sensor in the image of the second sensor, and a display device collimated to infinity making it possible to embed in the field of vision of the shooter a reticle or a similar symbol materializing the line of sight of the weapon.
- the set is completed by orientation sensors, independent of the image sensors, fixed respectively on the weapon and on the shooter's head, allowing an estimate (preferably continuous over time) of the deviations in orientation of the fields of view of the first and second image sensors, to assist in determining the position of the field of the first sensor in the image of the second.
- the signals from these orientation sensors are also processed by the aforementioned electronic circuits.
- the subject of the invention is therefore a method of assisting aiming for a light weapon carried by a shooter, characterized in that it comprises the following steps:
- the orientation sensors can be of the magnetometric type, with for example a heading sensor and a two-axis inclinometer, these sensors being provided on the one hand on the shooter's head and on the other hand on the weapon.
- Correlation is facilitated in particular in an acquisition phase where an image portion of the second sensor must coincide with the image of the first sensor.
- the orientation sensors indeed make it possible to construct a rotation matrix for rotating the two images relative to one another by an amount corresponding to the (approximate) indication given by the orientation sensors.
- the images are then found to be substantially identically oriented and the correlation can be continued more easily.
- the correlation of images is the main element used to move the crosshair in the field of the display.
- One determines in particular, from the indications of the two orientation sensors, a rotation matrix representing the relative positions in space of two reference trihedra, one linked to the weapon, the other to the head.
- One can also, to help with the correlation, use a translation vector, limited a priori to approximately 50 cm, representing the difference in position between the head and the weapon.
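As an illustration only (the patent does not spell out this computation), the following minimal numpy sketch shows how such a rotation matrix MR could be estimated from the heading and two-axis inclinometer readings of the two orientation sensors, assuming a Z-Y-X (heading, pitch, roll) Euler convention; the angle values are arbitrary:

```python
import numpy as np

def attitude_matrix(heading, pitch, roll):
    """Orientation of a trihedron from a heading sensor plus a two-axis
    inclinometer (angles in radians), Z-Y-X (yaw-pitch-roll) convention."""
    ch, sh = np.cos(heading), np.sin(heading)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[ch, -sh, 0.0], [sh, ch, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

# Illustrative readings (radians) for the weapon and head sensors
R_weapon = attitude_matrix(0.52, 0.02, -0.01)   # trihedron linked to the weapon
R_head   = attitude_matrix(0.50, 0.05,  0.00)   # trihedron linked to the head
MR = R_head.T @ R_weapon   # relative rotation between the two trihedra
```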
- the invention also relates to a sighting aid system for the implementation of this method.
- FIGS. 1A and 1B schematically illustrate the architecture of an aiming aid system according to the invention;
- FIG. 2 is a geometric diagram explaining the general operation of the system of FIGS. 1A and 1B;
- FIG. 3 illustrates an example of a modular signal and image processing device used in the aiming assistance system of the invention
- FIG. 4 illustrates the nesting of images provided by imaging sensors used in the aiming aid system of the invention
- FIG. 5 is a more detailed embodiment of one of the modules of the device of FIG. 3;
- FIGS. 6A and 6B schematically illustrate the contour extraction process;
- FIG. 7 is a more detailed embodiment of a correlation module used in the device of FIG. 3;
- FIG. 8 is a diagram illustrating the last step of the method according to the invention, consisting of a so-called fine correlation.
- FIG. 1A schematically illustrates its overall architecture.
- FIG. 1B is a detail figure showing one of the components used, in this case a display device 4, advantageously of the so-called visual helmet type.
- the system 1 firstly comprises an imaging sensor C1, operating in the visible or infrared domain, fixed on the weapon 2.
- in the first case (visible light), one can use a standard matrix sensor or an "IL-CCD" type component (light intensifier with charge-coupled device).
- in the second case (infrared), one can use a component of the "MWIR" type (Medium Wave Infrared: wavelengths between 3 and 5 µm) or of the "LWIR" type (Long Wave Infrared: wavelengths between 8 and 12 µm).
- the direction of the optical axis Δc1 of the sensor C1 is harmonized, by construction, with the axis Δ20 of the barrel 20; if this is not the case, the deviation is taken into account in the calculations.
- a device for measuring the position of the center of the field of the imaging sensor C1 of the weapon 2 is provided.
- this device is composed of a second imaging sensor C2, fixed on the shooter's head, and of an electronic device 3 for processing the signals and images supplied by the sensor C2, this device 3 having in particular the function of calculating the position of the field of the sensor C1 in the image of the sensor C2.
- the field of the sensor C1 is advantageously chosen to be smaller than the field of the sensor C2.
- the system 1 also includes a display device 4, collimated to infinity, making it possible to embed in the shooter's field of vision a reticle 41, or any other suitable symbol, materializing the aim.
- This device is advantageously constituted by a display member of the so-called visual helmet type.
- each of the orientation sensors comprises two sensors: a heading sensor and a two-axis inclinometer (not shown separately).
- these devices, 5 and 6, which generate heading and attitude data, are fixed respectively on the weapon 2 and on the shooter's head Te, in order to give a (preferably continuous) estimate of the orientation deviation between the two aiming axes Δ1 and Δ2 of the sensors C1 and C2.
- These sensors are independent of the image sensors.
- this double device, 5 and 6, provides, in cooperation with specific circuits of the electronic signal and image processing device 3, an estimate of the rotation matrix MR of passage between the sensor C1 and the sensor C2.
- the translation vector T is not known. Knowing that the weapon is held by the shooter, one can estimate the norm of T at an average value of 0.7 m, with an upper limit of 1 m. The weapon can, for example, be held at the hip.
- the electronic signal and image processing device 3 receives the signals generated at the output of the imaging sensor C2 and of the sensors 6, fixed on the head Te of the shooter, via the link 60, and the signals generated at the output of the imaging sensor C1 and of the sensors 5, carried by the weapon 2, via the link 50.
- the device 3 transmits as output the processed signals for display on the helmet-mounted display member 4 (in the example described), as well as the power supply necessary for the proper functioning of this member.
- FIG. 1B illustrates the display member 4 in front view, that is to say on the side observed by the shooter. It makes it possible to embed in the front scene 40, actually observed by the shooter, a reticle 41 materializing the targeted point, or other indications, for example an arrow indicating which way he must turn the weapon 2 during an initial period of the aiming process, as explained below.
- the image provided by the wide field sensor C2 encompasses all or part of the scene observed by the shooter.
- FIG. 2 is a geometric diagram illustrating the relationships between a reference trihedron TRc2, associated with the devices carried by the shooter's head Te (in particular the sensor C2), and a reference trihedron TRc1, associated with the devices carried by the weapon 2 (in particular the sensor C1).
- on this diagram we have represented the translation vector T and superimposed on the reference trihedron TRc1 a reference trihedron TR′c2 (in dotted lines), representing the reference trihedron TRc2 after translation along the vector T. This superposition makes it possible to calculate the rotation matrix MR.
- the aiming process includes two initial steps, carried out using the two orientation sensor devices 5 and 6.
- the first step, which will be called "automatic rallying", consists in using the signals generated by the orientation sensors 5 and 6, on the weapon 2 and on the shooter's head Te, to help the shooter point his weapon in the same direction as his head.
- the processed signals include heading and inclination information of the reference frames TRc1 and TRc2, linked to the sensors C1 and C2, with respect to a terrestrial reference frame.
- the processing device 3 generates signals making it possible to estimate the deviation in orientation between the fields of view of the sensors C1 and C2, at least in direction and sense.
- These signals are transmitted to the display member 4, so as to display a specific symbol on the screen, for example an arrow 42, in place of the reticle 41 materializing the aim.
- the shooter then knows that he must move the barrel of his weapon 2 in the general direction symbolized by the arrow 42, so that the reticle 41 symbolizing the aim enters his field of vision and is displayed on the screen of the display member. It is therefore an automatic, very coarse aid for obtaining an approximate pointing of the weapon 2, so that the steps of the aiming aid process proper can begin.
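A minimal sketch of such a rallying aid, assuming the cue is driven by a simple comparison of the two heading readings (the function name and the field-of-view threshold are illustrative):

```python
def rally_arrow(weapon_heading_deg, head_heading_deg, fov_half_angle_deg=20.0):
    """Coarse pointing aid: returns None when the weapon's line of sight
    already falls inside the head sensor's field (the reticle can then be
    displayed), otherwise the side towards which to turn the weapon."""
    # wrap the heading difference into (-180, 180] degrees
    d = (weapon_heading_deg - head_heading_deg + 180.0) % 360.0 - 180.0
    if abs(d) <= fov_half_angle_deg:
        return None                      # reticle 41 can be displayed
    return "right" if d > 0 else "left"  # arrow 42

print(rally_arrow(75.0, 30.0))  # -> 'right'
```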
- an approximate initial pointing can also be performed manually, without the assistance of any symbology.
- the second initial step consists in calculating an estimate of the aforementioned rotation matrix MR, still using the data provided by the two orientation sensors, 5 and 6. This estimate of the rotation matrix MR is obtained by comparing the data (attitude and heading) provided by the two devices, 5 and 6, and possibly using the estimate of the translation vector T.
- the correlation of the images is performed using the rotation matrix to facilitate the acquisition of identical image areas in the images supplied by the two image sensors: by rotating the image of the first sensor by the angles defined by the rotation matrix, the images of the two sensors are given substantially the same orientation, which facilitates the correlation.
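A minimal sketch of this pre-alignment, under the simplifying assumption that only the in-plane (roll) component of MR needs to be undone; scipy's generic image rotation stands in for whatever resampling the processing device actually uses:

```python
import numpy as np
from scipy.ndimage import rotate

def align_orientation(image_p, MR):
    """Give the small-field image substantially the same orientation as
    the wide-field image by undoing the apparent roll contained in MR."""
    roll_deg = np.degrees(np.arctan2(MR[1, 0], MR[0, 0]))
    return rotate(image_p, -roll_deg, reshape=False, order=1)  # bilinear
```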
- this correlation then makes it possible to determine, in the field of the head sensor C2, the position of the line of sight, the latter being represented by a well-determined point (for example the central point) of the field of the weapon sensor C1, the field of the weapon sensor being located, by correlation, within the field of the head sensor.
- the electronic circuits of the device 3 can be divided into modules for the execution of the various necessary steps.
- FIG. 3 illustrates the modular configuration of an image processing device 3 for the implementation of the aiming method, according to a preferred embodiment of the invention.
- the circuits allowing the execution of the above-mentioned initial step or steps have not been shown.
- the device 3 comprises two analog/digital acquisition modules, 30 and 34, respectively for the channels acquiring the output signals of the sensors C1 and C2.
- the output signals of the analog/digital acquisition modules 30 and 34 are transmitted to modules for extracting the contours of homogeneous zones, or objects, contained in the digital images: modules 31 and 35, respectively.
- a module 32 then calculates the pyramid of contours of the image supplied by the small field sensor C1, after extraction of the contours.
- the signals at the output of the module 35 are transmitted to a first module 36 which establishes the image of the distances to the contours of the wide field image, followed by a module 37 which establishes the pyramid of so-called "chamfer" images, that is to say of distances to the nearest contour.
- a double correlation module 33, called "Gros-Fin" (coarse-fine), receives the output signals from modules 32 and 37, as well as gradient signals supplied by the contour extraction module 35 and, from the respective image-signal acquisition modules 30 and 34, the images IP and IG, whose definitions are given in the aforementioned notation table.
- the calculations performed by the so-called "coarse" ("Gros") part of the correlation module 33 are based on a valued-correlation type algorithm.
- the calculations carried out by the so-called "fine" ("Fin") part are based on an algorithm for estimating the positioning deviation between the center of the image of the small field sensor C1 and the center of the positioning zone FG, in the wide field image of the sensor C2, calculated by the "coarse" module.
- the analog/digital conversion of the image signals generated by the sensors C1 and C2, carried out in the modules 30 and 34, is performed synchronously by a conventional image scanning device, over a typical dynamic range of 2^8 = 256 grey levels.
- Spatial re-sampling is carried out by a conventional interpolation function, of bilinear or bicubic type.
- the desired field of the "raw" image IGbrute at the output of the sensor C2 is greater than the instantaneous field of the "raw" image IPbrute of the small field sensor C1.
- the relationships linking these two “raw” images are as follows:
- FIG. 4 is a diagram schematically illustrating the images IG, IP and I′P.
- the cutting of the useful area of the resampled images is such that the image I′P is included in the image IG of the wide field sensor C2.
- FIG. 4 shows schematically the positions of the images IG and I′P before application of the rotation matrix MR.
- the XYZ axes symbolize the TRc1 reference frame (FIG. 2).
- the digital signals representing the images IP and IG are transmitted to the module 33.
- TM denotes the maximum value of the apparent residual roll of I′P in the image IG.
- the modules 31 and 35 extract the object contours present in the two digitized images. The process is similar in both modules.
- FIG. 5 illustrates in more detail, in the form of block diagrams, one of the modules, for example the module 31.
- the extraction parameters are chosen so as to introduce an identical low-pass filtering level on the two scanned images.
- An input module 310 calculates images of gradients Gx and Gy along two orthonormal axes X and Y attached to the image. This operation can be carried out by any conventional method in the field of image processing. One can resort, for example, to recursive treatments such as those described in the article by Rachid Deriche, "Using Canny's criteria to derive a recursively implemented optimal edge detector", International Journal of Computer Vision, vol. 1, 1987, pages 167-187, to which the reader may profitably refer.
- the images of calculated gradients, along the X and Y axes, are transmitted to a first module 311 intended to calculate, at each point, the norm of the gradient NG, in accordance with the relation NG = (Gx² + Gy²)^(1/2).
- the images of calculated gradients are also transmitted to a second module 312, intended to calculate at each point the orientation of the gradient OG, in accordance with the relation OG = arctan(Gy / Gx).
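A sketch of modules 310 to 312, with scipy's Sobel operator standing in for the recursive Deriche filters cited above (a simplification, not the patent's filter):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def gradient_norm_orientation(image, sigma=1.0):
    """Low-pass filter the image, then compute the gradient images Gx and
    Gy along X and Y, their norm NG and their orientation OG per pixel."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gx = sobel(smoothed, axis=1)   # gradient along X (columns)
    gy = sobel(smoothed, axis=0)   # gradient along Y (rows)
    ng = np.hypot(gx, gy)          # NG = (Gx^2 + Gy^2)^(1/2)
    og = np.arctan2(gy, gx)        # OG = arctan(Gy / Gx), full quadrant
    return gx, gy, ng, og
```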
- This step includes the following sub-steps: binarization, removal of non-maxima and application of a threshold by hysteresis.
- the binarization sub-step requires knowing the high and low thresholds. These thresholds are estimated by an additional module 313, according to an iterative process.
- the module 313 indeed receives the data associated with the list of contour points in feedback. It also receives on a second input the image of the gradient norm calculated by the module 311.
- the purpose of the module 313 is to dynamically control the high and low thresholds, zone by zone of the image.
- the current image is divided into areas of equal size.
- the module 313 calculates a high threshold, denoted SHz, and a low threshold, denoted SBz, for the image being processed at an arbitrary instant t, as a function of the high and low thresholds obtained on the zone at the instant t−1 corresponding to the previous image.
- SHz denotes the high threshold and SBz the low threshold of zone z.
- λ is a weighting coefficient, 0.8 being a typical value; it depends on the number of contour points extracted in the previous image, at time t−1: if the number of points is too high to maintain a real-time rate, λ is chosen less than unity; if, on the contrary, the number of points is insufficient to obtain contour information, λ is chosen greater than unity. Typically the value of λ is between 0.8 and 1.25.
- s_norm is the threshold value to be applied to the gradient norm to retain n% of the points in the zone.
- the number n is chosen according to the characteristics of the sensors (spatial richness of the image) and is typically between 5% and 15%.
- the low threshold is deduced from the high threshold by relation (6): SBz(t) = γ × SHz(t), where γ is generally less than 0.5.
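The exact update law for SHz is not recoverable from the text; the sketch below therefore assumes the simplest reading, namely that the high threshold is the gradient-norm value keeping n% of the zone's points, modulated by the load coefficient λ, with the low threshold given by relation (6):

```python
import numpy as np

def update_zone_thresholds(norm_zone, n_percent=10.0, lam=1.0, gamma=0.4):
    """Per-zone threshold control (module 313), under an assumed relation
    SHz = lambda * s_norm, and relation (6): SBz = gamma * SHz."""
    s_norm = np.percentile(norm_zone, 100.0 - n_percent)  # keeps ~n% of points
    sh = lam * s_norm    # lambda, typically in [0.8, 1.25], regulates load
    sb = gamma * sh      # gamma generally < 0.5
    return sh, sb
```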
- Each zone z is divided into four contiguous sub-zones and the high and low thresholds are recalculated for the sub-zones by a conventional interpolation algorithm, using the zones adjacent to the zone z.
- the module 314 performs the binarization of the contours and the suppression of the local non-maxima, according to a hysteresis method.
- this module 314 determines the contour chains (that is to say the sets of connected contours, according to the principle of eight-connectivity) such that at least one point of each chain exceeds the local high threshold of its zone and all the other points of the chain exceed their local low threshold.
- each point of the chain is kept only if it constitutes a local maximum of the gradient norm.
- to decide this, the gradient norm at point P is compared with that of the two points P1 and P2 closest to P in the direction giving the best approximation, on a 3×3 neighborhood, of the normal to the contour at point P.
- the direction of this normal is calculated from the orientation of the gradient at point P, which gives the direction of the tangent at point P.
- the process which has just been explained is illustrated very schematically by FIGS. 6A and 6B.
- an image representing the scene seen by the wide field sensor C2 is shown, comprising, in the example illustrated, two remarkable objects: a tree Ar and a building Bt.
- at the end of the contour extraction process, a list of contour points CAr and CBt is obtained, represented, for example, by zeros.
- the two objects, symbolized by these contours can represent potential targets that stand out from the background.
- the points or pixels of the image are represented by numbers other than zero (1, 2, 3, ...), whose value is all the greater as the points lie further from the contours (four-connectivity distance from a pixel to the nearest contour, expressed in pixels).
- the zeros in the list are associated with digital coordinates, according to an orthonormal reference frame XY, the axes of which are advantageously parallel to the edges of the image, if the latter is rectangular. These coordinates are stored, at least temporarily, in conventional memory circuits (not shown), so as to be used by the next module, 32 or 36, depending on the channel considered (FIG. 3).
- module 32 allows the establishment of the pyramid of contours, from the list calculated by module 31.
- the constitution of the pyramid consists in using the image of the contours C^P_0, determined by the module 31 at the maximum level of resolution, and in constituting the sequence of images C^P_k of the pyramid in accordance with equation (7):
- C^P_k(i, j) = C^P_k−1(2i, 2j) ∨ C^P_k−1(2i, 2j+1) ∨ C^P_k−1(2i+1, 2j) ∨ C^P_k−1(2i+1, 2j+1)   (7)
- Tp is the maximum of the sizes of IP along the orthonormal axes X and Y.
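Equation (7) is a logical-OR decimation: a contour point at level k−1 survives in the 2×2 block that contains it at level k. A minimal numpy sketch:

```python
import numpy as np

def contour_pyramid(c0, levels):
    """Build the contour pyramid of equation (7) by OR-ing 2x2 blocks."""
    pyramid = [np.asarray(c0, dtype=bool)]
    for _ in range(levels):
        c = pyramid[-1]
        h, w = (c.shape[0] // 2) * 2, (c.shape[1] // 2) * 2  # even crop
        c = c[:h, :w]
        pyramid.append(c[0::2, 0::2] | c[0::2, 1::2]
                       | c[1::2, 0::2] | c[1::2, 1::2])
    return pyramid
```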
- the gradients calculated by the module 310 are transmitted to an input of the module 33.
- the module 36 allows the establishment of the image of the distances to the nearest contour. This image is built from the image of the contours of the wide field image given by the sensor C2. To do this, one can use a known algorithmic method, for example that disclosed in the article by Gunilla Borgefors, "Hierarchical Chamfer Matching: A Parametric Edge Matching Algorithm", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 10, No. 6, November 1988, pages 849-865, to which the reader may profitably refer.
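A sketch of such a distance image, using the classical two-pass chamfer transform with the 3-4 weights of the Borgefors article (the four-connectivity pixel count mentioned earlier is the unweighted variant of the same idea):

```python
import numpy as np

def chamfer_distance(contours):
    """Two-pass 3-4 chamfer transform: distance from every pixel to the
    nearest contour point, in chamfer units (3 per axial step)."""
    INF = 10**6
    d = np.where(contours, 0, INF).astype(np.int64)
    h, w = d.shape
    for i in range(h):                         # forward pass
        for j in range(w):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 3)
                if j > 0:     d[i, j] = min(d[i, j], d[i - 1, j - 1] + 4)
                if j < w - 1: d[i, j] = min(d[i, j], d[i - 1, j + 1] + 4)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 3)
    for i in range(h - 1, -1, -1):             # backward pass
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 3)
                if j > 0:     d[i, j] = min(d[i, j], d[i + 1, j - 1] + 4)
                if j < w - 1: d[i, j] = min(d[i, j], d[i + 1, j + 1] + 4)
            if j < w - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 3)
    return d
```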
- the module 37 makes it possible to establish the pyramid of so-called "chamfer" images calculated from the image provided by the previous module 36.
- the constitution of the pyramid consists in using the image of the "chamfer" distances D^G_0, constituted by the module 36 from the contours of the wide field image at the maximum level of resolution, and in constituting the sequence of images D^G_k of the pyramid in accordance with the following relation (8):
- D^G_k(i, j) = Max( D^G_k−1(2i, 2j), D^G_k−1(2i, 2j+1), D^G_k−1(2i+1, 2j), D^G_k−1(2i+1, 2j+1) )   (8)
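Relation (8) is the Max counterpart of equation (7); a minimal numpy sketch:

```python
import numpy as np

def chamfer_pyramid(d0, levels):
    """Build the 'chamfer' pyramid of relation (8): each level keeps the
    Max over 2x2 blocks of the level below."""
    pyramid = [np.asarray(d0)]
    for _ in range(levels):
        d = pyramid[-1]
        h, w = (d.shape[0] // 2) * 2, (d.shape[1] // 2) * 2  # even crop
        d = d[:h, :w]
        pyramid.append(np.maximum.reduce([d[0::2, 0::2], d[0::2, 1::2],
                                          d[1::2, 0::2], d[1::2, 1::2]]))
    return pyramid
```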
- the correlation module 33 is actually subdivided into two modules, as indicated.
- the so-called "coarse" ("Gros") module 33G is represented in the form of a block diagram in FIG. 7 and comprises three cascaded sub-modules, 330 to 332.
- the module 33G performs a valued correlation, at the same level of resolution, between the images of the contour pyramid of the small field image and the images of the pyramid of contour-distance images obtained from the wide field image.
- the input sub-module 330 allows the constitution of the correlation table N_k(u, v), in accordance with the following relation:
- N_k(u, v) = Σ(i, j) D^G_k(i, j) × C^P_k(i+u, j+v)   (9)
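Relation (9) accumulates, for each trial offset (u, v), the chamfer distances of the wide-field image over the contour points of the small-field image; a low value therefore means that the small-field contours fall close to wide-field contours. A direct (unoptimized) numpy sketch:

```python
import numpy as np

def valued_correlation(dist_g, contours_p):
    """Correlation table N(u, v) of relation (9) at one pyramid level."""
    hp, wp = contours_p.shape
    hg, wg = dist_g.shape
    n = np.empty((hg - hp + 1, wg - wp + 1))
    pts = np.argwhere(contours_p)          # contour points of C^P_k
    for u in range(n.shape[0]):
        for v in range(n.shape[1]):
            n[u, v] = dist_g[pts[:, 0] + u, pts[:, 1] + v].sum()
    return n
```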
- the module 331 is intended for the selection of local minima. This selection is made in two stages.
- a point (u, v) is retained if N_k(u, v) is a local minimum and if N_k(u, v) ≤ S, where S is a threshold which depends on the spectral density of the input image.
- the threshold S is determined by a conventional calibration procedure of the image processing device.
- in the module 332, an iterative process is carried out, moving through the levels of the pyramids and filtering the retained values.
- at the finest level, the correlation sheet is given by: N_0(u, v) = Σ(i, j) D^G_0(i, j) × C^P_0(i+u, j+v)   (10)
- the point P selected is the one for which the value of the correlation sheet is minimum, provided that it satisfies the relation:
- the correlation process can stop at this stage. Indeed, the point P constitutes the best selected point. It is then possible to display in the display device 4 a reticle 41 corresponding to the impact of the aiming axis Δ20 of the weapon 2 on the intended target, superimposed on the scene 40 observed by the shooter through the screen of the device 4.
- the pixels are delimited by vertical and horizontal dotted lines, the pixels being assumed square. More precisely, the point P is the center of the central pixel of the window FG, and the point O′P is the center of the central pixel of the window F′P.
- the vector E represents the offset between the two "grids" of pixels, along the axis joining P and O′P, that is to say both in amplitude and in direction.
- the size of the window F′P is chosen so as to guarantee the presence of a minimum of information in this window; in other words, it must not be a uniform area. An indication of the information content can be obtained by checking the value of the sum of the gradient norms over this window.
- the window FG guarantees a margin of 1 pixel around the window F′P.
- the working neighborhood V is defined with the same sizes along X and Y as the window F′P.
- the total number of points in the neighborhood V is equal to Lv.
- the point OG is equivalent to the point P, but constitutes the origin of a reference frame for FG.
- a magnification of the window FG is carried out by a factor Q.
- This operation is carried out by a conventional interpolation process, for example bilinear or bicubic.
- the resulting window is called GG.
- FP is a centered window of IP, of size in X (respectively in Y) equal to the size of F′P multiplied by Q.
- H is a matrix of Lv rows and 2 columns.
- E1 is a reduced (residual) deviation refining E. It is possible to display, on the screen of the display member 4, a reticle 41 positioned at the point (P − E − E1) with respect to the point corresponding to the center of the field of the wide field sensor C2.
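The patent gives the fine estimator only through its ingredients (the magnified windows, the gradients and the matrix H of Lv rows and 2 columns); a plausible reading is a linear least-squares estimate of the residual shift from the per-point gradients, sketched here under that assumption:

```python
import numpy as np

def fine_offset(window_g, window_p, gx, gy):
    """Least-squares estimate of the residual sub-pixel shift E1 between
    two windows of equal size: solve H @ E1 ~= r in the least-squares
    sense, H holding the per-point gradients (Lv rows, 2 columns) and r
    the per-point intensity residuals."""
    mask = (np.abs(gx) + np.abs(gy)) > 0           # the Lv working points
    H = np.column_stack([gx[mask], gy[mask]])      # Lv x 2
    r = (window_g - window_p)[mask]                # Lv residuals
    e1, *_ = np.linalg.lstsq(H, r, rcond=None)     # E1 = (H^T H)^-1 H^T r
    return e1                                      # (dx, dy) in pixels
```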
- the point P is obtained with a typical precision of the order of 2 to 3 pixels in the wide field image.
- the deviation E corresponds to 1 pixel of the resolution of the small field image. It is therefore possible to position the reticle 41 with a precision typically in the range of 2 to 3 mrad, which corresponds to the objective set for the method according to the invention.
- the aiming assistance method thus makes it possible to obtain both high aiming accuracy and great speed, since it does not require shouldering the weapon.
- the process, since it is not accompanied by any emission of radiated energy, remains completely discreet. In other words, even if the target carries a detector sensitive to the wavelengths used, it cannot detect the shooter, at least not through the implementation of the method specific to the invention.
- the components that can be used (imaging sensors, electronic circuits for processing digital signals, etc.) are a matter of simple technological choice, within the reach of those skilled in the art.
- in particular, various imaging sensors can be used, depending notably on the choice of wavelengths (visible or infrared).
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Aiming, Guidance, Guns With A Light Source, Armor, Camouflage, And Targets (AREA)
- Image Analysis (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA002354569A CA2354569C (fr) | 1998-12-18 | 1999-12-17 | Procede et systeme d'aide a la visee pour arme legere |
EP99961103A EP1141648A1 (fr) | 1998-12-18 | 1999-12-17 | Procede et systeme d'aide a la visee pour arme legere |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR98/16022 | 1998-12-18 | ||
FR9816022A FR2787566B1 (fr) | 1998-12-18 | 1998-12-18 | Procede et systeme d'aide a la visee pour arme legere |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2000037875A1 (fr) | 2000-06-29
Family
ID=9534147
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/FR1999/003185 WO2000037875A1 (fr) | 1998-12-18 | 1999-12-17 | Procede et systeme d'aide a la visee pour arme legere |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP1141648A1 (fr) |
CA (1) | CA2354569C (fr) |
FR (1) | FR2787566B1 (fr) |
WO (1) | WO2000037875A1 (fr) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4093736B2 (ja) | 2001-06-28 | 2008-06-04 | 株式会社日立メディコ | 核磁気共鳴診断装置および診断システム |
IL173007A0 (en) * | 2006-01-08 | 2007-03-08 | Giora Kutz | Target/site location acquisition device |
- 1998
- 1998-12-18 FR FR9816022A patent/FR2787566B1/fr not_active Expired - Fee Related
- 1999
- 1999-12-17 WO PCT/FR1999/003185 patent/WO2000037875A1/fr not_active Application Discontinuation
- 1999-12-17 CA CA002354569A patent/CA2354569C/fr not_active Expired - Fee Related
- 1999-12-17 EP EP99961103A patent/EP1141648A1/fr not_active Withdrawn
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4804843A (en) * | 1985-05-16 | 1989-02-14 | British Aerospace Public Limited Co. | Aiming systems |
US4786966A (en) * | 1986-07-10 | 1988-11-22 | Varo, Inc. | Head mounted video display and remote camera system |
EP0605290A1 (fr) * | 1992-12-30 | 1994-07-06 | Thomson-Csf | Dispositif optronique d'aide au tir par arme individuelle et application à la progression en milieu hostile |
US5675112A (en) * | 1994-04-12 | 1997-10-07 | Thomson-Csf | Aiming device for weapon and fitted-out weapon |
US5806229A (en) * | 1997-06-24 | 1998-09-15 | Raytheon Ti Systems, Inc. | Aiming aid for use with electronic weapon sights |
Non-Patent Citations (1)
Title |
---|
BORGEFORS G: "Hierarchical Chamfer Matching: A Parametric Edge Matching Algorithm", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 10, no. 6, November 1988 (1988-11-01), pages 849 - 865, XP002114858 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2862748A1 (fr) * | 2003-11-25 | 2005-05-27 | Thales Sa | Procede de conduite de tir pour aeronefs |
WO2005052495A1 (fr) * | 2003-11-25 | 2005-06-09 | Thales | Procede de conduite de tir pour aeronefs |
US11965714B2 (en) | 2007-02-28 | 2024-04-23 | Science Applications International Corporation | System and method for video image registration and/or providing supplemental data in a heads up display |
WO2011136897A1 (fr) * | 2010-04-27 | 2011-11-03 | Itt Manufacturing Enterprises, Inc. | Activation à distance de l'imagerie dans des lunettes de vision de nuit |
CN102971658A (zh) * | 2010-04-27 | 2013-03-13 | 安立世 | 夜视镜中图像的远程激活 |
Also Published As
Publication number | Publication date |
---|---|
CA2354569C (fr) | 2007-09-25 |
EP1141648A1 (fr) | 2001-10-10 |
FR2787566B1 (fr) | 2001-03-16 |
FR2787566A1 (fr) | 2000-06-23 |
CA2354569A1 (fr) | 2000-06-29 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CA US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
ENP | Entry into the national phase |
Ref document number: 2354569 Country of ref document: CA Ref country code: CA Ref document number: 2354569 Kind code of ref document: A Format of ref document f/p: F |
|
WWE | Wipo information: entry into national phase |
Ref document number: 09868034 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1999961103 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1999961103 Country of ref document: EP |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 1999961103 Country of ref document: EP |