GB2346754A - Time compression and aspect ratio conversion of panoramic images and display thereof - Google Patents


Info

Publication number
GB2346754A
Authority
GB
United Kingdom
Prior art keywords
video
imagery
display
information
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9901679A
Other versions
GB9901679D0 (en)
Inventor
Roger Colston Downs
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB9901679A priority Critical patent/GB2346754A/en
Priority to GBGB9902699.9A priority patent/GB9902699D0/en
Publication of GB9901679D0 publication Critical patent/GB9901679D0/en
Publication of GB2346754A publication Critical patent/GB2346754A/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/58Means for changing the camera field of view without moving the camera body, e.g. nutating or panning of optics or image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

A time switch permits the display or recording, via a conventional display or recording format, of video frames deriving from a source that generates signals having line information of a length differing from that associated with the conventional format. In a particular embodiment, each element of an array of panoramic imagers (fig. 2) provides luminance information of a conventional line length. The luminance information is then combined to form time-continuous lines having non-standard length. The information is then punctuated with appropriate frame and line sync information to provide a composite video signal (VBS or CVBS). In order to display the panoramic image on a conventional display, the aspect ratio of the image is changed, I2, I3, I4, and the line length of the image data is reduced by time compression, L4. Time compression is achieved by writing the data into a buffer store prior to display, then reading it out at a higher rate when the image is to be displayed. Alternatively the data could be sampled at a rate consistent with achieving the same end. The image I4 may then be displayed on a conventional display means such as a CRT, LCD, plasma display or a projection screen. Images from individual imagers may be displayed, simultaneously with the panoramic image, at different scales or aspect ratios (88, 89 of fig. 9). Transfer from video to IMAX film for subsequent projection is discussed.

Description

TIME SWITCH
This invention relates to a time switch.
It is well known (ref UK 2320392) that, for an array of suitably positioned and synchronized image sensors, their luminance information may be time multiplexed to create the perspective of an imaginary or virtual image sensor (VIS). Further, such a perspective, when punctuated with suitable frame and line sync information, may be displayed on conventional display equipment or recorded on conventional video recorders.
The arrays of such cameras may be open or closed, and may have a total field of regard in an axis of closure of n*360 degrees.
It is clear that luminance information across the field of regard of such an array may be considered to be continuous in time. In the case of closed arrays luminance continuity in time may also be contrived by appropriate design consideration of the optics, the physical arrangement of the image sensors, and their relative synchronization. For closed arrays of image sensors a conceptual cylinder of luminance continuous in time may be considered to exist around the array.
It also follows that any width of perspective can be tapped from such continuous luminance information, up to and including the total field of regard for open arrays, and up to and beyond 360 degrees for arrays closed in any particular axis. As the number of image sensors included in a perspective increases in any particular array axis, so the aspect ratio of the imagery changes. When the video imagery of such perspectives is punctuated with appropriate frame and line sync information, then display and recording of such imagery is also possible using modified conventional displays and recorders.
Virtual image sensor (VIS) technology supports cost effective electronic pan, tilt, and zoom of video imagery, encompassing also panoramic video, without the need to employ multiple displays or expensive computers and frame store technology. It further preserves the integrity of the analogue image and, through resolution and screen format benefits, advances live and recorded image presentation. Moreover, since the high quality image information may be in analogue form, straight off the image sensors, it may further constitute acceptable evidence in the courts, as opposed to digital imagery.
Clearly the characteristics of video information for both image source and display or recording sink must be compatible, and these are generally defined by broadcast standards. In the UK, video information is usually organized in a format of horizontal lines of 64 microseconds duration (15.625 kHz), and these are organized within frames of 20 ms duration (50 Hz). Commonly, but not exclusively, two frames of vertically interleaved line information are employed to define an image. The historical need for image colour information introduced a quadrature modulated, subcarrier suppressed, chrominance component to the amplitude modulated monochrome luminance signal.
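As a simple check, the line rate, frame rate and line counts follow directly from the two periods quoted; the short Python sketch below reproduces the arithmetic (terminology follows the usage above, in which two 20 ms frames of interleaved lines define one image).

    # A quick arithmetic check of the UK timing figures quoted above.
    LINE_PERIOD_S = 64e-6      # one horizontal line: 64 microseconds
    FRAME_PERIOD_S = 20e-3     # one 20 ms frame (vertical period), as the term is used above

    line_rate_hz = 1.0 / LINE_PERIOD_S                   # 15625.0 Hz, i.e. 15.625 kHz
    frame_rate_hz = 1.0 / FRAME_PERIOD_S                 # 50.0 Hz
    lines_per_frame = FRAME_PERIOD_S / LINE_PERIOD_S     # 312.5
    lines_per_image = 2 * lines_per_frame                # 625.0: two vertically interleaved frames

    print(line_rate_hz, frame_rate_hz, lines_per_frame, lines_per_image)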
Broadcast and other standards differ in respect of their detailed implementation strategies.
A change in the aspect ratio of imagery from an image source necessitates corresponding changes to be made to conventional display and recording equipments to accommodate such changes.
In the case of displays it is necessary to maintain the normal proportions of topographical detail imaged. This may be effected by modifications to the display's horizontal drive circuitry to effect appropriate, lower or higher, line scan frequencies to accommodate the increased/decreased respectively image width. Corresponding changes are also required to the display's vertical drive circuitry to adjust, reduce or increase, the picture height.
In the case of video recorders it is necessary during recording to maintain frame sync registration, which otherwise may be lost through a changed line frequency. This is important in play back, but particularly so, in pause and slow motion modes of operation when interference and loss of imagery results from consecutive and only partial frames of video having being written across the recording tape, as opposed to one complete and discrete frame.
Professional displays are expensive and accommodate interest in non standard image sources in which the format characteristics of the image information depart from broadcast standards. They do not, however, accommodate the very significant departure from existing standards (of around 15-16 kHz) necessary to support the very low line frequencies, typically 3.2 kHz or lower, generated by realistic VIS arrays designed to capture 360 degree or other panoramas.
To display a live 360 degree panorama from a seven image sensor VIS closed array requires luminance to be tapped from right around the conceptual cylinder of time continuous luminance information: 320 microseconds (5*64) in this particular case.
Luminance must be lost, since a line sync pulse must be inserted to effect a display's flyback; some luminance may also be masked at the display's extremities. Further, for each new panoramic line generated by a virtual image sensor, a need arises to return to the video of the same array image sensor used at the beginning of the panorama, and moreover at the corresponding point in time within a line of its output equivalent to that at which the previous panoramic line was started. To meet these requirements, luminance covering more than 360 degrees is taken from the conceptual cylinder, and therefore in this example the actual length of such a display line is 384 microseconds (6*64).
The skilled person will appreciate the display functionality requiring modification, for example in a domestic television, to allow display of differing aspect ratio imagery; he or she will also appreciate the health hazards of the voltages present within a TV set, and the consequent further health hazard of X ray emissions.
The modifications to a domestic television receiver necessary to achieve such aspect ratio accommodation, through reduction of the vertical scanning amplitude and lowering of the horizontal scanning rate down to 2.6 kHz or lower, are relatively inexpensive, but they should not be confused with the trivial and cosmetic modifications necessary to achieve the current vogue of wide screen formats.
Modifications to a CRT display necessary to achieve a change in aspect ratio of the displayed image are reasonably straightforward and inexpensive; however, two main problems can be identified: those of tube resonance and of anticipating line length.
Firstly, the aspect ratio change to accommodate display of a 360 degree panorama off a seven image sensor planar circular array necessitates a line length of 384 microseconds, and this places the horizontal scanning rate at 2.604 kHz. The consequence of this is that the picture tube mechanically resonates at this drive frequency and its harmonics, which severely interferes with listening to the audio channel. Even the display of 180 degree panoramas introduces no real improvement to this problem of resonance.
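The 2.604 kHz figure follows directly from the 384 microsecond panoramic line; an illustrative Python check:

    # Horizontal scanning rate implied by the 384 microsecond panoramic line cited above.
    panoramic_line_s = 6 * 64e-6            # 6 conventional lines = 384 microseconds
    h_scan_rate_hz = 1.0 / panoramic_line_s
    print(round(h_scan_rate_hz, 1))         # 2604.2 Hz, i.e. the 2.604 kHz drive frequency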
Moreover, the amplitude of the tube resonance is such that eventual collapse of the tube through fatigue is conceivable, raising a health and safety issue in respect of such an implosion.
Secondly, a problem exists in that perspectives taken from a VIS array may be changed dynamically from greater than 360 degrees down to that equivalent to the field of view of a single image sensor in the array. For the display to accommodate the necessary changes to its horizontal and vertical drives requires anticipation of the line length and the number of lines in the current frame of video. These features require either that such information be included explicitly at the beginning of each frame, or that the first picture line of each new frame be assessed in order to engage the appropriate display drive.
Modifications may be effected to achieve necessary horizontal and vertical scanning or drive rates, together with those to maintain safe voltage levels in respect of X ray emission. Screen persistence within design frame periods will support good quality imagery.
Many systems (ref US 5130794) have been proposed which permit panoramic views to be created from a number of digitized images manipulated by software in memory. Such systems necessitate the use of frame stores to capture such imagery. For such systems to operate in real time fashion, using an array of cameras, would require two frame stores per camera and a fast processor to manipulate realistically upwards of 8x10^6 words of information every 20 ms. Output of such collated video information to a conventional display would necessitate a further two frame stores to achieve continuous output. The latency of the final imagery output to a display from such a process can be seen to be at least 40 ms. For a seven camera system using such technology, the cost, inclusive of 16 frame stores and a computer, together with the further considerations of size, weight and power requirements, is clearly unattractive when compared to the performance and mobility, in respect of both size and power requirements, of a VIS system.
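A rough Python sketch of the arithmetic behind these figures, using only the quantities quoted above (seven cameras, two frame stores per camera plus two for continuous output, and upwards of 8x10^6 words per 20 ms), is given below.

    # Rough resource estimate for the hypothetical frame-store based system described above.
    cameras = 7
    frame_stores = cameras * 2 + 2             # two per camera plus two for continuous output = 16
    words_per_frame_period = 8e6               # "upwards of 8x10^6 words" every 20 ms
    throughput_words_per_s = words_per_frame_period / 20e-3   # 4.0e8 words per second
    latency_ms = 2 * 20                        # at least two frame periods before display = 40 ms
    print(frame_stores, throughput_words_per_s, latency_ms)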
Virtual image sensor technology permits the electronic sewing together of imagery direct off image sensor outputs, be they analogue or digital, and therefore does not require frame stores or software processing, nor does it introduce latency into the display of imagery when using modified conventional equipment; this is particularly important for the military, who are prime users of live information.
Whilst the display and recording of panoramic imagery using modified conventional equipment is necessary and important to maintain the high picture definition of such imagery, distribution of such imagery for display on unmodified screens is clearly commercially interesting. This interest necessitates moving the image information from one time domain to another to effect time compression of the video image information to allow the imagery to exist normally within existing broadcast or other standards. This compression of video imagery to lie within conventional line lengths may be achieved through the use of a time switch by which the information is compressed to a format whereby only perhaps a relatively small number of conventional lines of image information exist within a frame of otherwise blank lines.
We are aware of systems which time reformat imagery through the use of frame stores to support display of imagery from a number of video sources, e.g. SONY EP 0230787 A2.
However the purpose of such functionality is quite different to the purpose of the time switch. The problems addressed by such referenced systems are to obviate the need for a number of displays and also to effect image presentation in a variety of formats not inherently expressed either by the combined video input to such systems or by a single image source. The format of each image source input to such systems supports image display on unmodified displays, unlike the variable length line and frame formats of connected imagery generated live from a VIS array. Moreover the time switch, whilst accommodating non standard formats of connected imagery and permitting display on conventional displays, does so to circumvent the problem of picture tube resonance inherent in the normal display of such imagery on modified displays.
In respect of the above and other functionality used to reformat imagery between different broadcast or other standards, the further purpose of the time switch is to achieve the singular display of connected continuous imagery simultaneously sourced from a number of image sensors, wherein the format of such imagery must necessarily change dynamically to reflect the aspect ratio of the captured imagery, and this as a function of the image source as opposed to that of the display or broadcast standard.
One approach to effect this time compression would be to use a high resolution camera to record the imagery displayed on a modified display. Such an approach using cameras with different colour encoding output stages could also support international distribution of such imagery.
In the UK, domestic colour displays are designed to receive PAL encoded colour information. The main feature of this standard is the phasor reversal of chroma information and a 180 degree phase reversal of the sub carrier burst in alternate lines of each frame. This asymmetry was originally introduced to address errors in colour, arising from slight variations in the level of the sub carrier burst, by averaging colour information over two lines. A fundamental aspect of a PAL decoder, to perform this averaging, is the ability to delay the chroma information of the preceding line, normally by means of a delay line, which allows its summation with the chroma information present in the current line, followed by rescaling. Clearly the delay line is tailored in the UK to the broadcast standard line frequency of 15.625 kHz. This feature requires extension when line lengths differing from the PAL standard are being processed.
If we consider alternative methods of time compression of the information, other than camera capture, to achieve a reformatting of the information to existing standards, then, excepting the monochrome amplitude Y, the colour difference signals V and U, or the primary colour amplitude R, G, B signals, it can be seen that any attempt to compress the information will lead to corruption of the carrier suppressed chroma signal. This arises since, for the 360 degree panorama captured from the seven camera array, a six fold time compression would necessitate a six fold increase in subcarrier frequency over a conventional decoder, and this is clearly incompatible with conventional display technology; moreover, the bandwidth permitted for the subcarrier chrominance information would be exceeded.
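The scale of the incompatibility can be illustrated with the nominal PAL subcarrier frequency; the 4.43 MHz value in the sketch below is the standard figure, assumed here rather than taken from this description.

    # Effect of a six fold time compression on a quadrature modulated chroma signal.
    PAL_SUBCARRIER_HZ = 4.433_618_75e6    # nominal PAL colour subcarrier (assumed standard value)
    compression = 6                       # 360 degree panorama from the seven camera array
    shifted_hz = PAL_SUBCARRIER_HZ * compression
    print(shifted_hz / 1e6)               # ~26.6 MHz, far beyond any conventional video channel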
Frame stores normally acquire colour decoded digitized imagery. The frame and line sync information present in a CVBS signal is used to map imagery into memory but is itself generally not stored. Subsequently this stored information may be transferred to a computer for further processing.
Interestingly, for panoramic perspectives having line lengths greater than 64 microseconds, tapped off a VIS array, the amount of information necessary to capture any perspective from an array within a frame period increases relative to normal image technology.
This increase arises since frames of longer luminance lines comprise a lower proportion of line sync interruption. Therefore the amount of information buffered will generally exceed that associated with broadcast standard frame stores. The parameter of interest, when panoramic information from a VIS array is buffered, is the line length used in any particular frame of imagery. The number of lines within a frame is also of interest but may, for fixed frame lengths, be deduced from line length. For frames of different line length the mapping of information within a buffer changes and to support retrieval a stored indication of line end may be additionally useful.
To time compress imagery so as to effect normal drive to a conventional display requires the generation of a broadcast frame of video in which lines are blank, at the dark level, except in that portion of the frame where the time compressed image information is to exist. The time compression of imagery may be effected by means of buffering the content of lines of video information at a rate consistent with the resolution of the imagery, and subsequently retrieving this information from the buffer at a higher rate, or by sampling, to effect a reduced and standard length of video line. This time compression will operate well for signals not comprising a quadrature modulated chroma signal, for example the R, G, B, Y and V, U signals.
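A minimal Python sketch of this buffer based compression, for one line of one component signal, follows; the 13.5 MHz buffering rate and the nearest neighbour read out are illustrative assumptions only.

    import numpy as np

    def time_compress_line(buffered: np.ndarray, out_samples: int) -> np.ndarray:
        # 'buffered' holds one long (non standard) line of a component signal (R, G, B, Y, U or V)
        # written into the buffer at a rate preserving its resolution.  Reading it back faster is
        # equivalent to taking every k-th buffered sample, which the index map below approximates.
        factor = len(buffered) / out_samples
        idx = (np.arange(out_samples) * factor).astype(int)
        return buffered[idx]

    # Example: a 384 microsecond panoramic line buffered at an assumed 13.5 MHz sample rate,
    # compressed six fold to fit a conventional 64 microsecond line.
    long_line = np.random.rand(int(384e-6 * 13.5e6))                     # 5184 buffered samples
    standard_line = time_compress_line(long_line, int(64e-6 * 13.5e6))   # 864 output samples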
The time switch allows for this dynamic reformatting of video information to drive a conventional display within its designed operating envelope according to broadcast standards and thereby benefits from the implied reliability and safety related aspects of such displays accrued over the years.
Similarly the time switch may be used to dynamically reformat variable line length video imagery to support recording on conventional VCR recorders. This permits replay from the recorder direct into a conventional display. This is an attractive option in the mass distribution of such imagery; however, of necessity, this is not the preferred method of holding copies of such video, since this approach suffers from the disadvantage that quality will be lost. For the example of a seven camera array, such time compressed image information would occupy only around one seventh of a frame track across the recording tape, as opposed to the entire width of high quality recordings using frames of long line video. The display of high quality long line recordings is of course also possible on conventional displays using the time switch for time compression and reformatting of such information output from a video recorder.
We are familiar with the story board concept, which is a vehicle to describe a sequence of scenes, action and dialogue with respect to time. The picture board is a concept to separately elaborate a panoramic scenario, at an instant in time, with the focus or foci of interest of the panorama captured from the same perspective.
During the early use of computer generated special effects for the cinema the results were found to be disappointing because the resolution of such imagery fell far short of the resolution possible using cinematographic cameras. The sheer size of a cinema screen necessitates probably a 20-30 fold increase in one axis over and above the normal acceptable viewing size of the original image.
Similarly even the projection of imagery sourced off broadcast quality video cameras onto a cinema screen fails to match or even approach the resolution of cinematographic cameras.
Panoramas generated from VIS camera arrays however benefit in respect of an effective increase in resolution since the picture definition for analogue display devices of such imagery increases, through the lower display horizontal/vertical scanning rates, in proportion to the number of cameras used in any particular axis of the array.
Interestingly, for VIS array panoramic imagery it will be appreciated that, using analogue monochrome displays, including primary colour display combinations supporting colour display, e.g. CRTs, an increased resolution is effectively introduced by the time compression of information in the horizontal screen axis, arising from the lower horizontal deflection rates necessary to accommodate display of such long line imagery.
For masked colour and LCD displays this increase in resolution is lost through the effective sampling of the picture content by the colour mask or pixel arrangement.
VIS panoramic imagery scenarios may be high quality and afford projection possibilities.
The increased resolution possible from VIS two axis arrays of image sensors extends not only to the horizontal axis but also, by modification of the frame length, to the vertical axis as well.
Since video transfer to cinema film standards of 35/70mm and IMAX is possible, the sewing together of imagery to form composite imagery from VIS arrays, or more generally camera arrays using frame stores and software, be they open or closed, can thereby be employed not only to approach but in theory also to exceed the quality of cinematographic camera resolution. Video imagery further provides a more flexible medium for handling, storage and manipulation than that of film.
In the context of cinema applications for VIS imagery, and perhaps more generally camera array video once software processing rates of frame store captured imagery increase, elements of composite and panoramic perspectives provide possibilities to develop new presentation formats for the cinema. We are accustomed to the cinema presentation of a focused view of a film set in which the action and dialogue are taking place. This presentation is contrived to fill the entire screen, which is sensible since at any point in time this was generally all that was filmed.
Taking the previous example of a seven camera closed array supporting panoramic perspectives, and considering a screen size suitable for the projection of a good image from a single camera in the array, it can be seen that the change in aspect ratio associated with a 360 degree panorama would require a screen of the same height, but seven times as wide, to accommodate such an image projected to the same scale. If however the original screen size were maintained and the panorama projected so as to fill the width of the screen, then the height of the projected panorama would be only a seventh of the screen height, but the lateral definition of the image would be seven times higher.
Similarly, if a 180 degree panorama from the same array were to be projected across the entire screen width, then the height of the panorama would be slightly greater than a quarter of the screen height.
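Both screen fractions follow from simple geometry, assuming a screen sized for a single camera's image at full scale; an illustrative Python check:

    # Fraction of screen height occupied when a panorama is scaled to fill the screen width,
    # for the seven camera array discussed above (the single camera sized screen is an assumption).
    CAMERAS_IN_ARRAY = 7

    def panorama_height_fraction(panorama_degrees: float) -> float:
        cameras_spanned = CAMERAS_IN_ARRAY * panorama_degrees / 360.0
        return 1.0 / cameras_spanned

    print(panorama_height_fraction(360))   # ~0.143: one seventh of the screen height
    print(panorama_height_fraction(180))   # ~0.286: slightly greater than a quarter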
It can be seen that in both the above cases vast tracts of screen would be left empty and this is a problem since it would detract from our expectation of the cinema experience.
This problem however permits the simultaneous projection of other imagery onto this vacant space. Significantly, different and directed live perspectives around a VIS array are simultaneously available for display and recording and these may comprise the same panorama from a different perspective, panoramas other than 360 degrees for example 180 or 90 degrees, or the narrow angle focus or foci of interest or action from within the panorama. All these are capable of simultaneous and separate recording and subsequent projection to a suitable scale in the remaining free screen space.
In the case of technology employing camera arrays, frame stores and software to process perspectives similar effects may be achieved off line.
The mixing of such projected imagery may be made using computer technology and frame stores to manipulate imagery before transfer to film, at the time video images are transferred to film simultaneously using multiple displays to support combined image capture, or at the time of projection in the cinema by using a number of film projectors.
The present invention is as described in the claims.
According to the current invention in its first aspect there is provided a time switch whereby display of frames of VBS or CVBS video, from an image source generating perspectives comprising frames of necessarily differing line length information, may be made on domestic displays, monitors or projection devices, or recording of the same made using conventional video recording equipment.
The present invention provides in a yet further aspect a picture board whereby a continuous panorama captured by a camera array, comprising a plurality of cameras, may be displayed on, or projected onto, a screen wherein the aspect ratio of the panorama in respect of that of the screen permits the further separate display, or projection, on the same screen of the focus or foci of interest simultaneously and separately captured by elements of the same camera array, to similar or different aspect ratios and scale as that of the panorama.
The present invention provides in a still yet further aspect a cinematographic video camera array whereby simultaneously sourced imagery of a continuous scenario, captured by separate elements of the array, comprising a plurality of video cameras, supports increased picture definition of the combined image, when transferred to film and projected onto a cinema screen, in proportion to the number of video cameras employed.
A specific embodiment of the invention will now be described by way of example with reference to the accompanying drawings, in which:
FIGURE 1 shows schematically functionality associated with a single channel virtual image sensor.
FIGURE 2 shows schematically fields of view of a closed array of image sensors.
FIGURE 3 shows schematically relative image sensor synchronization supporting conceptually time continuous luminance.
FIGURE 4 shows schematically the relationship of virtual image sensor perspective displays to the composition of image frames of lines of video imagery.
FIGURE 5 shows schematically high level generation of display configuration control signals.
FIGURE 6 shows schematically basic virtual image sensor functionality with extensions to achieve different colour encoding outputs and high level relationship of time switch.
FIGURE 7 shows schematically detail of time switch functionality.
FIGURE 8 shows schematically detail of free standing time switch.
FIGURE 9 shows schematically a variety of cinema film picture board projection possibilities from camera array sourced imagery.
Figure 1 shows a schematic of functional areas of a single channel manually controlled VIS system comprising an array, not shown, of image sensors IS 1 through ISN.
For a VIS array to function, the mechanical alignment of the array must be such that consecutive elements of a continuous scenario may be focused by the optics on the array's CCDs. For a VIS perspective to be tapped from anywhere around the array, the relative synchronization of the CCDs must be such that luminance information may be considered to exist continuously in time across the array linewise, and also discontinuously in time between lines included in a perspective, by an amount equal to the line length of the selected perspective.
Image sensor luminance information is at source an analogue signal. Increasingly today this signal is digitized to preserve the integrity of instantaneous values of such signals. A time sequence of such digitized information may clearly also be time multiplexed by VIS technology to form a digitized perspective. This of course requires an increase to the number of electronic switches employed in proportion to the number of bits defining instantaneous values of such information.
The following discussion describes mostly analogue signal processing though the principles involved are equally applicable to the processing of digital signals.
Composite video 1 is output from the image sensors IS1 10 through ISN 12 to the Timing and Sync Management TSM 14 functionality and also to the Luminance and Sync Switching LSS 15 functionality. The purpose of the Timing and Sync Management TSM 14 circuitry is to establish the current relative synchronization of the image sensors and develop error signals 2 in this respect which are passed to the Clock Control functionality CC 13. The Clock Control CC 13 functionality provides clock information 3 to the image sensors IS1 10 through ISN 12 and in response to error signals 2 from the Timing and Sync Management TSM 14 modifies the image sensor clocking thereby reducing the synchronization errors to, and maintaining them at, zero.
Under the control of an operator the Hand Controller HC 21 generates a sightline rate (or positional) demand and through further controls the required angular width of perspective. This information 4 is passed to the Sightline Control SC 17 circuitry whose purpose is to determine the current boresight position of the VIS channel perspective within the field of regard of the array.
The Sightline Control SC 17 circuitry also receives information 5 from the Timing and Sync Management TSM 14 functionality in particular a master clock signal and the sync positions of each of the image sensors in the array. Based on the current required change to the sightline vector, and width of perspective required 4, the Sightline Control SC 17 circuitry outputs 6 the instantaneous actual sightline and field of view to the Sync Generation functionality SG 16. Also output by the Sightline Control SC 17 circuitry are the necessary luminance, sync, blanking and burst switching information 7 to the Luminance and Sync Switching LSS 15 circuitry.
The Sync Generation SG 16 circuitry generates frame and line sync, blanking and burst information 8, based on the current sightline and field of view, for the Luminance and Sync Switching LSS 15 circuitry.
The Luminance and Sync Switching LSS 15 accepts the combined luminance and chrominance information 1 from the image sensors and the sync, blanking and burst information 8 from the Sync Generation SG 16 circuitry, and under control 7 of the Sightline Control SC 17 circuitry time multiplexes this information to form a chroma video blanking sync CVBS signal 9 as output.
This signal 9 may be viewed on the display D1 18, recorded on the recorder R1 19, or passed to some other process or processes indicated by P1 20. The process P1 20 may include, for example, digital recording, image recognition or tracking algorithms, the latter of course necessitating process input links from the Hand Controller HC 21 and outputs to the Sightline Control SC 17 circuitry in order to establish closed loop performance, not shown.
Figure 2 is a schematic showing adjacent fields of view 1F through 7F of seven image sensors, not shown, organized as a closed and planar circular array. For a closed 360 degree VIS camera array, its parameters in respect of both the opti
Transform processors (ref UK 2319688) support live data rate "many to one" and "one to many" remapping of imagery to otherwise remove optical distortion, not shown.
The fields of view 1F through 7F are the image sensor abutting fields of view achievable from switching luminance from a conceptual cylinder of luminance which may be considered to exist around the array.
Figure 3 shows schematically, for time progressing linewise, not shown, from left to right, a section of the fields of line wise continuous luminance taken from around the array, including the fields of luminance 3F through 5F with partial representation of the further fields of luminance 2F and 6F.
For consecutive lines of video 2V through 6V taken from the image sensors IS2 through IS6, not shown, in the array, the relative synchronization of the image sensors can be seen to be such that switching from luminance 2V at 2C to luminance 3V effects a continuous luminance signal. Further switching at 3C between 3V and 4V maintains the time continuity, as may also be seen at 4C and 5C for switching between the luminance information 4V and 5V, and 5V and 6V, respectively.
The UK broadcast standard defines the line length as 64 microseconds, and within 52 microseconds of this period image information of a perspective equivalent to that of the horizontal field of view of any image sensor in such an array may be selected. For perspectives greater than the field of view of any image sensor in the array, the time between vertically corresponding points in any two consecutive lines supporting such a perspective must be equal to an integer multiple of 64 microseconds. This arises since, for a perspective to exist, the start point of any such perspective must lie at a point within a line output by the first image sensor supporting the perspective, wherein consecutive lines of the panorama must also start at the corresponding point within further lines, output by the first image sensor, chosen for inclusion in the perspective. Generally some overlap of image information between horizontally or vertically aligned CCDs in the array would exist, though made transparent, through video switching, to the array's operation. The necessary condition would be for focused images to abut at the extremes of CCD elements. Closed arrays necessitate further design consideration in respect of the optics, the mechanical arrangement of CCDs and their relative synchronization in order to ensure that such conditions exist.
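The integer multiple condition may be expressed as a simple numerical check; the helper below is purely illustrative.

    # Check that the spacing between vertically corresponding points on consecutive lines of a
    # perspective is an integer multiple of the 64 microsecond standard line period.
    def valid_perspective_spacing(spacing_us: float, tol_us: float = 0.001) -> bool:
        multiples = spacing_us / 64.0
        return abs(multiples - round(multiples)) * 64.0 < tol_us

    print(valid_perspective_spacing(384.0))   # True  (6 x 64 microseconds)
    print(valid_perspective_spacing(320.0))   # True  (5 x 64 microseconds)
    print(valid_perspective_spacing(300.0))   # False (not a whole number of standard lines)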
For colour systems, very precise synchronization of a VIS array's CCDs is necessary to maintain the correct phase, with respect to a line's subcarrier burst, not shown, of the chroma signal between components of a composite image, which otherwise can result in false colour of displayed imagery. Fine positioning of VIS CCD synchronization supports this precision.
In the UK the PAL system is employed for domestic colour encoding and decoding; it requires alternate lines of video within each frame to use phasor reversal of chroma information. This asymmetry does not exist within the US NTSC system.
Figure 4 shows schematically a frame of video F from a single image sensor. Luminance information exists line wise, not shown, within the box FL1, whilst frame and line sync, blanking, and subcarrier burst information may be considered to exist in the border region FL2.
It will be appreciated that as more image information from image sensors within an array is included within a perspective, such that this exceeds the field of view of a single image sensor, then so the aspect ratio of the image must change.
In the UK, information generally displayed on conventional displays exists in frames of 20 ms duration, starting top left of a screen and progressing through lines of 64 microseconds duration, separated by horizontal blanking pulses and vertical drive, ending bottom right of the screen prior to vertical blanking. Within each line, 52 microseconds of luminance information is available for display, but some masking at the screen extremities generally exists.
Luminance information representing any perspective taken from a VIS array may, when punctuated with appropriate frame and line sync information so as to form a composite video signal VBS or CVBS, be displayed on a conventional display device (television, monitor, LCD or plasma display). The display characteristics however must match the aspect ratio of the intended image, and if this departs from that normally displayed, for example the broadcast standard, then modification of the display becomes necessary.
I1 represents an image displayed from a frame of video F1 taken from a VIS channel supporting a perspective equivalent to that of the field of view of a single image sensor in the array, and comprising components from the images from two adjacent image sensors in the array achieved through switching between luminance from these two image sensors at the boundary indicated by the vertical dotted line S1. Conventional lines of video within the frame F1 are indicated by the examples L1 and LN; such lines exist sequentially throughout the image frame, as implied by the dashed vertical line C1.
A perspective panorama I2 is indicated schematically encompassing component images from five image sensors; boundaries between component images are indicated by the dotted vertical lines, example S2. The image I2 has a modified aspect ratio with respect to I1 in order to support its display, and the image components are shown to around a quarter scale in respect of I1.
The equivalent frame of luminance information F2 for image I2 comprises lines, example L2, luminance wise four times longer than L1. As with F1, the lines exist sequentially throughout the image frame, as implied by the dashed vertical line C2.
The image I3 comprises components from seven cameras. The component images are indicated to around a seventh of the scale of I1 with boundaries between component images implied by the vertical dotted lines example S3.
The equivalent frame of luminance information F3 for image I3 is shown schematically comprising lines of luminance, example L3, luminance wise seven times as long as L1.
Such lines are considered to exist throughout the image frame as implied by the vertical dashed line C3.
The image I4 is a time compressed version of the previous image I3. It occupies the same implied screen area as image I3 but is supported by broadcast standard lines holding the time compressed image information, example L4, existing in this particular case centrally for around a seventh of the image frame F4, with broadcast standard blank video lines indicated by horizontal dashed lines, example L5, existing throughout the remainder of the image frame implied by the vertical dashed lines C4A and C4B.
The image I4 is capable of display on the same conventional display on which I1 may be displayed.
For a planar array of image sensors sourcing VIS imagery then as the length of luminance increases to incorporate a larger perspective, so the total number of lines within a frame diminishes. Modification to the horizontal and vertical drive of the display must therefore be incorporated to reflect the characteristics of the image source.
The knowledgeable person will appreciate the functionality, not shown, requiring modification to accommodate a change in aspect ratio for a display (CRT, LCD, plasma, etc.) capable of displaying images I2, I3, or variations of these.
For CRT devices, modifications affect circuitry generating high and potentially dangerous voltages and some explanation of these modifications, not shown, follow. The AFC of the horizontal drive circuitry of a CRT display is best disconnected and replaced with a digitally controlled monostable driven off the line sync detection circuitry. The line driver duty cycle, derived from the monostable, requires modification for the existing FBT in order to achieve linearity of horizontal scanning, and this in turn requires increased power into the line driver itself to effect the necessary power surges in the FBT required to generate tube, horizontal deflection and ultimately EHT voltages. Importantly these voltages must be limited by introducing a ballast resistor into the FBT primary thereby returning them to lie within the normal operating range of an unmodified set.
This is important to prevent the emission of harmful X rays.
Drive into the line driver itself as stated needs to be increased and this may be effected by changing the HDT operating envelope using alternative higher rated components for final drive, the HDT itself and its primary line resistor.
Modifications of the picture height may generally be simply achieved by the use of shunt resistors across the vertical yoke.
The skilled person will also recognize the need to fine tune these circuits, and whilst the circuitry from a quarter of a century ago has improved in reliability, it has changed little in organization though individual circuit designs will need appraisal before modification.
The skilled person will further appreciate the need for modification of chrominance processing particularly in the UK in respect of the delay line in PAL decoding systems and the further subtleties in respect of more recently introduced intelligent power supplies.
Such modifications, when introduced so as to be selectable based on a line length criterion, will support display of imagery including not only I1 and I4 but also I2 and I3 and variations on these. However the problem of mechanical resonance of the picture tube precludes their use in the domestic context.
Modifications to other forms of display eg LCDs or through software to effect computer screen display are similarly required to effect changes in aspect ratio. LCD devices using pixel screen technology require a horizontal sampling, or many to one merge of image information to accommodate longer and fewer lines within frames. Software processing of images captured through ADC's and associated functionality permit screen display of imagery in a variety of formats.
Referring to figure 5, whilst modifications to achieve display of non standard long line video information are possible, anticipating line length at the beginning of a new frame remains a problem.
At the image source, generation of perspectives comprising lines of non standard length necessitates knowledge of the actual length of line to be used in the current frame. Such information may be defined within the frame blanking period, not shown, as for example teletext information from a broadcast station is also included at this point of the frame.
Such information represented as a video bit string between white and black levels may contain coded information of the necessary image format.
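Purely as an illustration (the coding itself is left open here), the line length might be carried as an 8 bit black/white sequence as in the following Python sketch, in which the signal levels are assumed values.

    # Hypothetical coding of the current line length, in multiples of the 64 microsecond standard
    # line, as an 8 bit black/white sequence placed in the frame blanking period.
    BLACK_V, WHITE_V = 0.0, 0.7    # assumed black and peak white signal levels (volts)

    def encode_line_length(line_length_us: int, n_bits: int = 8) -> list:
        code = line_length_us // 64
        return [WHITE_V if (code >> (n_bits - 1 - b)) & 1 else BLACK_V for b in range(n_bits)]

    def decode_line_length(levels: list) -> int:
        code = 0
        for level in levels:
            code = (code << 1) | (1 if level > (BLACK_V + WHITE_V) / 2 else 0)
        return code * 64

    assert decode_line_length(encode_line_length(384)) == 384   # six standard lines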
For CRT devices, for example domestic television receivers, such information may reasonably be extracted from within the frame blanking period by functionality considered to exist within the Timing and Control functionality TC 102, capable of extracting frame and line sync, blanking, burst and other information from the incoming CVBS 101 information.
Recognition and extraction by the Timing and Control TC 102 of the line length information from a frame's blanking region, included on lines 103, is thus possible, allowing its storage in the Buffer 104.
This, perhaps coded, information, output from the parallel output Buffer 104, together with information 105 from the Timing and Control TC 102, may be decoded by Transform processors TP1 114 through TPN 117, each generating a continuous output of information, examples 113, 116, 118, used to control the configuration of display functionality, not shown.
In respect of CRT horizontal drive requirements, a line sync event 108 results in the decoded count 113 being loaded from the Transform processor TP1 114 functionality, which defines the duty cycle requirements of the horizontal drive; this event 108 simultaneously initiates output 112 from the monostable 111 and effects drive into the modified horizontal drive circuitry, not shown. Clock 107, also available from the Timing and Control TC 102 circuitry, counts down the loaded counter 109, defining the high state characteristic of the duty cycle. At the end of the count a clear signal 110 is generated to terminate the monostable 111 output 112 and allow horizontal drive switching through the FBT, not shown.
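In effect, the loaded count and the clock rate fix how long the monostable output, and hence the high portion of the horizontal drive duty cycle, persists; a trivial Python illustration with assumed values:

    # Duration of the high portion of the horizontal drive duty cycle set by the counter/monostable.
    def drive_high_time_s(counter_load: int, clock_hz: float) -> float:
        return counter_load / clock_hz

    # e.g. an assumed 1 MHz clock and a load of 192 give 192 microseconds high within a
    # 384 microsecond panoramic line, i.e. a 50% duty cycle.
    print(drive_high_time_s(192, 1e6))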
The Transform processors TP2 115 and TPN 117 accept continuous input from the buffered line length code 106 and also timing, PAL switch if required and other information 105 from the Timing and Control TC 102 functionality and generate multiple digital outputs capable of controlling the configuration of other circuitry to effect image display modifications. These may be achieved for example through functionality switching resistors across a CRT's vertical yoke to effect changes to the height of the displayed image, or through functionality to introduce additional delay lines to support PAL decoding of longer lines, or through functionality defining the horizontal sampling rates and number of lines of information for LCD and plasma type displays drives, none of which are shown.
Extraction of the coded line length may be effected digitally in LCD and computer devices to support suitably controlled modification of pixel drive mechanisms, or, through software, the processing of image data to effect aspect ratio changes to accommodate longer or shorter lines and, respectively, a decreased or increased number of lines per frame.
Recording of panoramic imagery that remains within a frame period (UK 20 ms) is possible using conventional video recording equipment. Important here however is registration of frame sync, to allow complete and discrete frames of video to be written across the recording tape. Frame detection circuitry in displays generally accommodates increases to line length; however, modification of the frame detection circuitry in recorders is generally necessary to accommodate longer line durations, and this may normally be achieved by modification of this circuit's time constant, not shown.
Referring to figure 6, the schematic functionality of fig. 1 is repeated with additional information in support of separate colour encoding, through the Colour Encoder CE1 25 functionality, from primary colour information 34 to achieve output to different broadcast standards. Also introduced is a high level indication of the Time Switch TS 26, necessary to achieve frames of time compressed and reformatted image information capable of display and recording on conventional domestic displays and recorders.
In addition to the IS1 10 through ISN 12 image sensor output of CVBS 1 signals sent to the Luminance and Sync Switching LSS 15 as described in fig 1, each image sensor outputs the three separate amplitude modulated primary colour luminance signals Red, Green and Blue 34 to the LSS 15 equivalent switching functionality Red Switch RS 22, Green Switch GS 23 and Blue Switch BS 24. Whilst the primary colour signals are cited here, these signals could equally be the image sensor monochrome luminance signal "Y" and the colour difference signals R-Y and B-Y, that is the signals V and U, or any other combination of signals capable of supporting generation of video inclusive of chrominance information, not shown.
The indicated Switches RS 22, GS 23, BS 24 performing electronic switching functionality, under the control 7 of the Sightline Control SC 17 circuitry, mirror the LSS 15 switching whereby instead of monochrome amplitude information being switched linewise from sequential image sensors included in a multiple perspective, the equivalent information 34 contained in the three separate signals Red, Green, and Blue are switched in the RS 22, GS 23, BS 24 electronic switches from whence their outputs 36 are passed to the Colour Encoder functionality CE1 25. The Colour Encoder CE1 25 also receives master timing information (including the PAL switch if necessary) 35 from the Timing and Sync Management functionality TSM 14 and panoramic line length information 37 from Sightline Control SC 17 circuitry.
Notwithstanding that the line frequency and possibly also the frame rate may be non standard, colour encoding within the Colour Encoder CE1 25 functional area is effected, signal wise, normally for any established broadcast standard. Frame and line sync, blanking and burst information 8 are added to the output 25A from the Colour Encoder CE1 25 by the Sync Merge SM 27 functionality under the control 7 of the Sightline Control SC 17 circuitry. The resulting CVBS signal 38 is capable of driving modified displays D2 28, recorders R2 29, or feeding some other process or processes within the processing functionality P2 30.
Picture information from any colour camera comprises at source the luminance amplitude information for the three primary colours Red R, Green G and Blue B. These three signals, when summed in the relative proportions a, b, c, yield the monochrome luminance amplitude signal Y, that is a*R + b*G + c*B = Y, where "a, b, c" are defined by the relevant broadcast standard and differ between, for example, the PAL and NTSC standards. This monochrome luminance signal "Y" is then traditionally subtracted from the R and B signals to yield the colour difference signals V and U. These signals are then quadrature modulated onto a sub carrier which is subsequently suppressed to form the chroma signal, and this signal in turn is mixed with the luminance signal Y to form, when sync, blanking and burst information are added, the CVBS signal. In the States the NTSC signal is independent of whether a line within a frame of video is even or odd, and is therefore more easily addressed than the PAL colour encoding, which necessitates chroma phasor reversal on alternate lines together with 180 degree phase reversal of the subcarrier burst.
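A minimal Python sketch of this step follows; the weighting factors and the U, V scaling are the conventional values, assumed here since only the form Y = a*R + b*G + c*B is stated above.

    # Luminance and colour difference formation from primary colour amplitudes.  The weights
    # 0.299/0.587/0.114 and the U, V scaling factors are conventional, assumed values.
    def rgb_to_yuv(r: float, g: float, b: float):
        y = 0.299 * r + 0.587 * g + 0.114 * b   # monochrome luminance Y = a*R + b*G + c*B
        u = 0.493 * (b - y)                     # scaled B - Y colour difference
        v = 0.877 * (r - y)                     # scaled R - Y colour difference
        return y, u, v

    # U and V are subsequently quadrature modulated onto the (suppressed) subcarrier to form the
    # chroma signal, which is mixed with Y and the sync/blanking/burst information to give CVBS.
    print(rgb_to_yuv(1.0, 0.0, 0.0))   # pure red: Y = 0.299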
The functionality of the Time Switch TS 26 accepts equivalent drive to that of the Colour Encoder CE1 25 functionality but performs the function of image information time compression and reformatting to a standard compatible with conventional domestic display, recording or processing formats.
To return to the need to compress image information to allow its display on an unmodified conventional display (CRT, LCD, plasma, etc.) in line with relevant broadcast standards: the recording by camera of images displayed on modified conventional displays is, as previously stated, an option, not shown. Attention must be paid to the synchronization of such a camera to avoid the appearance of apparent blanking information running through the captured image from the display. Interestingly, for the field of view of the camera to take in only the display screen necessitates a proximity of the two devices at which the lens system is effectively looking down the tube gun, resulting in a number of effects which can best be addressed by offsetting the camera axis away from that of the tube and through the use of filters.
Figure 7 is an expanded schematic of the TS 26 Time Switch functionality together with the repeated functionality 14, 17, 21 introduced in fig 6. It receives the primary amplitude modulated colour or equivalent information defining a perspective from the RS 22, GS 23, and BS 24 switches of fig 6 via the lines 56, 59, 62. Such information can be in digital rather than analogue form, and this clearly affects the number of lines (wires) necessary to communicate this information, the number of electronic switches 22, 23, 24 (fig 6) employed and the need for ADCs, but not the principle of operation.
In this example these signals 56, 59, 62, considered to be analogue, are converted to digital information in the ADCs 55, 58, 61 and passed 57, 60, 63 to the buffers B1 64 and B2 65.
These buffers each comprise either three physically separate buffers or logically partitioned fields within the design word length of a buffer suitable for simultaneously capturing three discrete digitized signals, not shown. The buffers are sized to be capable of holding a frame of lines of length appropriate to the system design requirements, importantly the largest number of image sensors in a particular axis of the array, and the total number of image sensors if frame length is also to be a variable. B1 64 and B2 65 are written frame wise, and alternately between them, with digitized information under the control of the Read Write Control RWC 52 circuitry, which also controls, during the subsequent frame period, the reading of the data previously written. The partitioned data written to a buffer during one frame period is separately output in parallel 70, 68, 66 during the subsequent frame period to the separate DACs 71, 69, 67.
During writing, digitized information 57, 60, 63 is written to one of the buffers B1 64 or B2 65 at a rate consistent with maintaining the resolution of the inherent image information.
For a particular buffer B1 64 or B2 65, during the frame subsequent to that in which writing occurred, reading of the digital data from the buffer is performed at a rate that is either proportionately higher in respect of the necessary reduction to be achieved in line length, or sampled at a rate consistent with achieving the same end. The information defining the line length, and hence the number of lines in a frame, is available 37 from the Sightline Control SC 17 circuitry to the Read Write Control RWC 52 and Frame Line Generation FLG 78 functionality. Initiation of reading the image information from a buffer is determined by the required position of the final image within a normal broadcast frame. Such information can be predetermined or subsequently contrived through a display function, not shown.
The Frame and Line Generator FLG 78 functionality generates a broadcast frame of CVBS signal 79 wherein the current line length is introduced appropriately into the frame blanking. Video output by the Frame and Line Generator FLG 78 functionality sits at the black level until the required compressed image position within the frame is reached, whereupon lines of imagery are read from this frame's selected output buffer B1 64 or B2 65 and, after digital to analogue conversion by the DACs 71, 69, 67 and combination to form a chroma video CV signal 76 in the Colour Encoding functionality CE2 75 to the required standard, are inserted within the frame 79 by means of the Frame Merge FM 77 functionality, which at this stage may constitute only a simple mixer. Once all the buffered lines have been read from the selected buffer B1 64 or B2 65, the previous black level video lines complete the rest of the frame. Final output of a CVBS signal is line 80.
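A Python sketch of this frame assembly step is given below; the 312 line frame and the vertical centring rule are illustrative assumptions, not taken from the description above.

    import numpy as np

    def assemble_output_frame(compressed_lines: np.ndarray, total_lines: int = 312,
                              start_line=None) -> np.ndarray:
        # 'compressed_lines' is the block of time compressed, standard length lines read from the
        # selected buffer (B1 or B2); every other line of the output frame is held at black level.
        n, samples_per_line = compressed_lines.shape
        frame = np.zeros((total_lines, samples_per_line))        # black level frame
        if start_line is None:
            start_line = (total_lines - n) // 2                  # centre the image vertically
        frame[start_line:start_line + n] = compressed_lines
        return frame

    # e.g. ~45 compressed lines from the seven camera panorama occupy about a seventh of an
    # assumed 312 line frame, the remainder staying blank.
    frame = assemble_output_frame(np.random.rand(45, 864))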
Returning to figure 6, the final output of a CVBS 80 signal from the Time Switch TS 26 functionality may be used to drive conventional displays D3 31, recorders R3 32, or form the basis of input to some further process or processes in the P3 33 functionality. This time compression of video information is significantly different to software manipulation of imagery from a frame store since, despite the analogue to digital and digital to analogue conversions, no software processing is possible on the image information in the TS circuitry; only the time domain of the information is changed and, if necessary, the CVBS output is verifiable against a recorded uncompressed master signal by display or other means, not shown.
Fig. 8 shows schematically the detail of a free standing Time Switch (TS 26, Fig. 6), incorporating some functionality previously described in the context of Fig. 7, and processing a VIS panoramic long line CVBS signal 90 available from the Sync Merge SM 27 functionality output 38 (Fig. 6) or from a video recorder not shown.
The CVBS signal 90 is input to the Timing and Sync Management functionality TSM 14, which processes the signal to establish master timing, sync and related information. In this example the functionality also extracts from the frame blanking the information defining the length and number of lines held within the current frame. This information is output on lines 35A to the Read Write Control RWC functionality 52.
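The patent does not specify how the length and number of lines are encoded within the frame blanking. Purely as an assumed illustration, the sketch below writes the two values as 16 bit binary patterns onto one reserved blanking line using two signalling levels, and recovers them by thresholding; the levels, bit widths and function names are all assumptions.

```python
import numpy as np

# Assumed encoding only: the specification says the blanking carries the
# line length and line count, but not how; here they are carried as two
# 16-bit words signalled by two analogue levels on a reserved line.

LOW, HIGH = 0.0, 0.7          # assumed signalling levels
BITS = 16

def encode_blanking_line(line_length: int, line_count: int) -> np.ndarray:
    bits = [(line_length >> i) & 1 for i in reversed(range(BITS))] + \
           [(line_count  >> i) & 1 for i in reversed(range(BITS))]
    return np.array([HIGH if b else LOW for b in bits])

def decode_blanking_line(samples: np.ndarray) -> tuple[int, int]:
    bits = [1 if s > (LOW + HIGH) / 2 else 0 for s in samples]
    word = lambda bs: int("".join(map(str, bs)), 2)
    return word(bits[:BITS]), word(bits[BITS:])

line = encode_blanking_line(7 * 720, 82)
assert decode_blanking_line(line) == (7 * 720, 82)
```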
The CVBS signal 90 is also input to the Colour Decoder CD 91 functionality, which receives the necessary timing information 95 from the Timing and Sync Management TSM 14 functionality to support decoding of this CVBS 90 signal to primary RGB information, or equivalent, sufficient to enable subsequent reconstitution of a CVBS colour signal. This decoded colour information 96 is passed to the ADCs 92, which comprise three separate analogue to digital converters, each processing a separate component of the colour decoded CVBS 90 signal.
The outputs 97 from the ADCs 92 are passed to the Double Buffers DB 93, which contain functionality equivalent to the buffers B1 64 and B2 65 described previously for Fig. 7. The Double Buffers DB 93, under the control 54 of the Read Write Control RWC 52, are alternately written and subsequently read in line with the discussion of B1 64 and B2 65 in Fig. 7. The Read Write Control RWC 52 operates as described for Fig. 7, except that the source of the length and number of lines within the frame is taken, in this example, from the Timing and Sync Management TSM 14 functionality and contained in the information 35A.
Time compressed image information 98 is read, in the same fashion as described under Fig. 7, from the Double Buffers DB 93 and passed to the DACs 94, comprising three separate digital to analogue converters which separately convert the three image components to three analogue colour component signals 99 input to the Colour Encoder CE2 75, capable of encoding colour information to the required standard.
Generation of a broadcast or other standard frame of sync, blanking, burst and blank video lines in the Frame Line Generation FLG circuitry 78 also follows the Fig. 7 description, except that the source of the number of lines of compressed image information 100 is the Timing and Sync Management functionality TSM 14. Insertion of the colour encoded, time compressed imagery output 76 from CE2 75 occurs as in the Fig. 7 description, in the Frame Merge functionality FM 77, with final CVBS output 80.
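Putting the Fig. 8 stages together, and abstracting away the analogue colour decode, ADC, DAC and colour encode stages, the data flow can be sketched as a per-component time compression followed by a merge into a display standard frame. All dimensions and names below are assumptions made for the sketch.

```python
import numpy as np

# High-level sketch of the Fig. 8 data flow with the analogue stages omitted:
# each long-line frame of RGB samples is time-compressed component by
# component and placed into a display-standard frame of black video.

ACTIVE_LINES, LINE_SAMPLES = 576, 720     # assumed display-standard geometry

def compress_component(component: np.ndarray) -> np.ndarray:
    """Resample each non-standard-length line onto LINE_SAMPLES samples."""
    out = np.empty((component.shape[0], LINE_SAMPLES))
    xs = np.arange(component.shape[1])
    target = np.linspace(0, component.shape[1] - 1, LINE_SAMPLES)
    for i, row in enumerate(component):
        out[i] = np.interp(target, xs, row)
    return out

def time_switch_frame(rgb_long: np.ndarray, start_line: int) -> np.ndarray:
    """rgb_long: (3, n_lines, long_line_samples) -> (3, ACTIVE_LINES, LINE_SAMPLES)."""
    frame = np.zeros((3, ACTIVE_LINES, LINE_SAMPLES))
    for c in range(3):
        comp = compress_component(rgb_long[c])
        frame[c, start_line:start_line + comp.shape[0], :] = comp
    return frame

out = time_switch_frame(np.random.rand(3, 82, 7 * 720), start_line=247)
```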
Figure 9 relates to VIS applications in the cinema or other large scale projection facilities.
In respect of high resolution cinematography, the projection screen 81 is sized to accommodate a high resolution image from conventional cinematographic film projection.
The inset box 82 represents the relative size of a good quality projected image originally sourced from a high resolution video camera.
For an array of, for example, seven similar high resolution cameras, projecting a composite panoramic image originating from the array at the same image definition would require a screen seven times as wide 83.
To accommodate the width of the panorama on the screen 81 would necessitate a reduction in projection scale, here of around 50% as shown at 84, leaving vast tracts of screen vacant; nevertheless the panorama would appear at an effectively higher definition than in 83.
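A worked version of this scaling argument, with assumed proportions chosen only to match the approximately 50% figure above (a single video sourced image taken as roughly two sevenths of the screen width), is:

```python
# Assumed numbers for illustration: if one high-definition video image 82
# occupies about two sevenths of the screen width, a seven-image panorama is
# 7x that wide, so fitting it to the screen needs a scale of about 0.5.

screen_width = 1.0                 # normalised width of screen 81 (assumption)
single_image_width = 2.0 / 7.0     # assumed relative width of inset 82
panorama_width = 7 * single_image_width
scale_to_fit = screen_width / panorama_width
print(f"panorama width: {panorama_width:.2f} screens, scale to fit: {scale_to_fit:.2f}")
# panorama width: 2.00 screens, scale to fit: 0.50
```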
Similarly, if separate panoramas were to be projected, for example each comprising the perspective of four cameras in the array, such panoramas 85, 86 could be organized to effectively fill the screen.
87 indicates a representation of the seven camera projected panorama together with two different foci of interest 88, 89, originating from separate smaller field of view perspectives separately tapped from the array and separately recorded, permitting their subsequent projection at maximum scale to partially fill the remaining screen.
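The screen layout just described can be read as a simple packing computation: the scaled panorama spans the full width, and the foci of interest are placed at the largest scale that fits in the remaining band. The aspect ratios and the layout helper below are assumptions made for illustration, not values from the patent.

```python
from dataclasses import dataclass

# Sketch only: normalised screen of height 1 and width equal to its aspect
# ratio; the panorama spans the full width along the top, and two foci of
# interest share the band left underneath at the largest scale that fits.

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

def layout(panorama_aspect: float, focus_aspect: float,
           screen_aspect: float = 1.43) -> list[Rect]:
    W = screen_aspect                       # screen width in units of its height
    pan_h = W / panorama_aspect             # panorama height when full-width
    pan = Rect(0.0, 1.0 - pan_h, W, pan_h)  # panorama along the top edge
    band_h = 1.0 - pan_h                    # height left for the foci
    foc_h = min(band_h, (W / 2) / focus_aspect)
    foc_w = foc_h * focus_aspect
    foci = [Rect(i * W / 2, 0.0, foc_w, foc_h) for i in range(2)]
    return [pan] + foci

# e.g. seven 4:3 imagers side by side -> panorama aspect of 7 * 4/3
rects = layout(panorama_aspect=7 * 4 / 3, focus_aspect=4 / 3)
```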
VIS arrays support the simultaneous capture of separate and different perspectives. Suc

Claims (11)

  1. A time switch whereby display of frames of VBS or CVBS video, from an image source generating perspectives comprising frames of necessarily differing line length information, may be made on domestic displays, monitors or projection devices, and comprising a means to extract image information from frames of differing and non standard line length video, a means to time compress extracted image information into lines of display standard line length, and a means to incorporate time compressed imagery, held in display standard lines, within display standard frames of video.
  2. A time switch which permits the time compression and reformatting of imagery from a virtual image sensor's panoramic perspective whereby it may be displayed on domestic displays, monitors or projection devices, comprising a means to extract image information from frames of differing and non standard line length video, a means to time compress extracted image information into lines of display standard line length, and a means to incorporate time compressed imagery, held in display standard lines, within display standard frames of video.
  3. A time switch which remaps time domain changing video image information whereby it may be displayed on domestic displays, monitors or projection devices, comprising a means to extract image information from frames of differing and non standard line length video, a means to time compress extracted image information into lines of display standard line length, and a means to incorporate time compressed imagery, held in display standard lines, within display standard frames of video.
  4. A time switch whereby composite imagery, from a single source of frames of VBS or CVBS video whose line length exceeds that of domestic broadcast standards, may be displayed by domestic displays, monitors or projection devices, comprising a means to extract image information from frames of differing and non standard line length video, a means to time compress extracted image information into lines of display standard line length, and a means to incorporate time compressed imagery, held in display standard lines, within display standard frames of video.
  5. A time switch as claimed in claim 1, or claim 2, or claim 3, or claim 4, whereby the time compressed imagery may be recorded using domestic video recording equipment.
  6. A time switch as claimed in claim 1 or claim 2 or claim 3 or claim 4 whereby the time compressed imagery may be input to a further process or processes.
  7. A time switch as claimed in any preceding claim whereby the length of video image lines comprising a frame may be automatically determined.
  8. A time switch as claimed in any preceding claim whereby colour decoding and encoding may be made to any existing standard.
  9. A time switch as claimed in any preceding claim whereby imagery output from the time switch may be verified against the uncompressed original imagery.
  10. A time switch substantially as described herein with reference to the figures 1-9 of the accompanying drawing.
  11. A picture board whereby a continuous panorama captured by a camera array, comprising a plurality of cameras, may be displayed on, or projected onto, a screen wherein the aspect ratio of the panorama in respect to that of the screen permits the further separate display, or projection, on the same screen of the focus or foci of interest simultaneously and separately captured by elements of the same camera array, to similar or different aspect ratios and scale as that of the panorama, and comprising a means to capture panoramic imagery to different scales and aspect ratios, a means to transfer such images to cinema film medium, and a means to project combinations of the captured imagery to differing scales and aspect ratios onto a cinema screen.
  12. A cinematographic video camera array whereby simultaneously sourced imagery of a continuous scenario, by separate elements of the array, comprising a plurality of video cameras, supports increased picture definition of the combined image, when transferred to film and projected onto a cinema screen, in proportion to the number of video cameras employed, and comprising a means to capture panoramic imagery using an array of a plurality of video cameras, a means to transfer such imagery to cinema film medium, and a means to project the film imagery onto a cinema screen.
GB9901679A 1999-01-26 1999-01-26 Time compression and aspect ratio conversion of panoramic images and display thereof Withdrawn GB2346754A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB9901679A GB2346754A (en) 1999-01-26 1999-01-26 Time compression and aspect ratio conversion of panoramic images and display thereof
GBGB9902699.9A GB9902699D0 (en) 1999-01-26 1999-02-09 Time switch

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9901679A GB2346754A (en) 1999-01-26 1999-01-26 Time compression and aspect ratio conversion of panoramic images and display thereof

Publications (2)

Publication Number Publication Date
GB9901679D0 GB9901679D0 (en) 1999-03-17
GB2346754A true GB2346754A (en) 2000-08-16

Family

ID=10846517

Family Applications (2)

Application Number Title Priority Date Filing Date
GB9901679A Withdrawn GB2346754A (en) 1999-01-26 1999-01-26 Time compression and aspect ratio conversion of panoramic images and display thereof
GBGB9902699.9A Ceased GB9902699D0 (en) 1999-01-26 1999-02-09 Time switch

Family Applications After (1)

Application Number Title Priority Date Filing Date
GBGB9902699.9A Ceased GB9902699D0 (en) 1999-01-26 1999-02-09 Time switch

Country Status (1)

Country Link
GB (2) GB2346754A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4125862A (en) * 1977-03-31 1978-11-14 The United States Of America As Represented By The Secretary Of The Navy Aspect ratio and scan converter system
JPH02248178A (en) * 1989-03-22 1990-10-03 Matsushita Electric Ind Co Ltd High definition television display device
US5430489A (en) * 1991-07-24 1995-07-04 Sony United Kingdom, Ltd. Video to film conversion
US5394520A (en) * 1991-09-26 1995-02-28 Hughes Aircraft Company Imaging apparatus for providing a composite digital representation of a scene within a field of regard
JPH05115032A (en) * 1991-10-23 1993-05-07 Mitsubishi Electric Corp Image pickup device with wide pattern
US5659369A (en) * 1993-12-28 1997-08-19 Mitsubishi Denki Kabushiki Kaisha Video transmission apparatus for video teleconference terminal
GB2330978A (en) * 1996-08-06 1999-05-05 Live Picture Inc Method and system for encoding movies,panoramas and large images for online interactive viewing and gazing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Patent Abstracts of Japan & JP2248178 (Matsushita Electric Ind. Co. Ltd), 03.10.90. See abstract. *
Patent Abstracts of Japan & JP5115032 (Mitsubishi Electric Corp.), 07.05.93. See abstract. *

Also Published As

Publication number Publication date
GB9901679D0 (en) 1999-03-17
GB9902699D0 (en) 1999-03-31

Similar Documents

Publication Publication Date Title
KR100190247B1 (en) Automatic letterbox detection
EP0622000B1 (en) Method and apparatus for video camera image film simulation
JP2509128B2 (en) Wide aspect ratio television receiver
US4449143A (en) Transcodeable vertically scanned high-definition television system
US5138449A (en) Enhanced definition NTSC compatible television system
US4605952A (en) Compatible HDTV system employing nonlinear edge compression/expansion for aspect ratio control
US7733378B2 (en) Matching frame rates of a variable frame rate image signal with another image signal
US6895172B2 (en) Video signal reproducing apparatus
GB2107151A (en) Television systems and subsystems therefor
KR100191408B1 (en) Vertical reset generation system
US4611225A (en) Progressive scan HDTV system
Jack et al. Dictionary of video and television technology
US5786863A (en) High resolution recording and transmission technique for computer video and other non-television formatted video signals
GB2346754A (en) Time compression and aspect ratio conversion of panoramic images and display thereof
JPH11275486A (en) Liquid crystal display device
JP2615750B2 (en) Television receiver
KR100229292B1 (en) Automatic letterbox detection
Bancroft Pixels and Halide—A Natural Partnership?
Schäfer et al. SUBJECTIVE EVALUATION OF NEW COLOUR PRIMARIES FOR HIGH DEFINITION TELEVISION
JPH02302188A (en) Standard/high definition television receiver
JPH0810914B2 (en) Video signal processing device
Krivitskaya et al. Digital methods of recording color television images on film tape
JPH0436510B2 (en)
JPH0810913B2 (en) Video signal processing device
JPH0491572A (en) Image pickup device

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)