CN105264436A - System and method for controlling equipment related to image capture - Google Patents


Info

Publication number
CN105264436A
CN105264436A (application CN201480032344.9A)
Authority
CN
China
Prior art keywords
node
data
video camera
positional information
equipment
Prior art date
Legal status
Granted
Application number
CN201480032344.9A
Other languages
Chinese (zh)
Other versions
CN105264436B (en)
Inventor
A. Fisher
M. MacDonald
J. Taylor
J. Levy
Current Assignee
CINEMA CONTROL LAB Inc
Original Assignee
CINEMA CONTROL LAB Inc
Priority date
Filing date
Publication date
Application filed by CINEMA CONTROL LAB Inc
Publication of CN105264436A
Application granted
Publication of CN105264436B
Legal status: Active
Anticipated expiration

Classifications

    • G03B17/561 Support related camera accessories
    • G03B17/563 Camera grips, handles
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/632 Graphical user interfaces [GUI] for displaying or modifying preview images prior to image capturing
    • H04N23/64 Computer-aided capture of images
    • H04N23/661 Transmitting camera control signals through networks, e.g. control via the Internet
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N23/634 Warning indications


Abstract

A method and system for controlling a setting of equipment related to image capture comprises: capturing position data and orientation data of a sensing device; determining position information of a region of interest (i.e. a node) to be treated by the equipment, relative to the position and orientation data of the sensing device; and outputting a control signal directed to the equipment, in order to control, in real time, the setting of the equipment based on said position information of the region of interest.

Description

System and method for controlling equipment related to image capture
Technical field
The present invention relates to the field of motion tracking in camera environments. More specifically, the present invention relates to systems and methods for controlling the settings of a camera or related equipment.
Background art
In camera environments (such as film, television, live entertainment and sporting events), many pieces of equipment serve the operation of the camera, the lighting and the sound. The control of these functions, and their interrelation, determines the quality of the final image and sound perceived by the audience. One such function is camera focus. "Focusing" or "rack focusing" refers to the action of changing the distance setting of the lens in correspondence with the physical distance of a moving subject from the focal plane. For example, if an actor moves from 3 meters to 8 meters from the focal plane within a shot, the focus puller changes the distance setting on the lens in precise correspondence with the changing position of the actor during the take. The focus puller may also shift focus from one subject to another within the frame, as dictated by the specific aesthetic requirements of the composition.
This process of adjusting the focus is carried out manually by the "First Assistant Camera" (1st AC), or "Focus Puller".
Depending on the parameters of a given shot, there is usually very little margin for error. As such, the role of the focus puller is extremely important within the field of film-making; in most cases a "soft" image is considered unusable, since the error cannot be repaired in post-production. One must also consider that an actor may not be able to replicate his or her performance in a subsequent take, so the focus puller is expected to perform perfectly on every take. Because of these factors, some producers consider the focus puller to have the most difficult job on set.
However skilled the focus puller may be, the complexity and difficulty of the task still slow down production.
Current film production proceeds from a blocking rehearsal of the scene, in which the positions of the various actors are established. During the blocking rehearsal, the camera assistant places tape marks on the floor at all points where an actor pauses in his or her movement. The principal actors then leave the set for hair and make-up, and stand-ins take their places at these various positions for the purposes of lighting, framing and the setting of focus marks.
Once the camera position has been established by the director of photography (DP) and the camera operator, the 1st AC measures the various distances between the actors' marks and the focal plane of the camera. These distances are recorded with a series of grease-pencil or ink marks on the focus barrel of the lens and/or on the marking disc of a follow-focus device. Using the stand-ins, the marks are checked through the viewfinder and/or an on-board monitor to ensure accuracy. If a mark is repositioned in order to provide a specific desired composition, the 1st AC must re-measure and re-set the corresponding marks. In addition, the 1st AC may lay down specific distance marks on the floor, to be referenced as the actors move between their marks during the take, in order to help adjust the focus accurately to the correct intermediate distances.
When the actors return to the set, there is usually a camera rehearsal in which the focus puller and the camera operator practise the shot and make sure everything has been properly established. During the take, the focus puller modifies the focus based on the dialogue, the movement of the principal actors or subject, and the movement of the camera, compensating when actors miss their marks or make any other unforeseen movements. When an obstruction prevents the focus puller from seeing all of his marks, he may ask the 2nd AC to call the marks for him over a two-way radio during the shot. In some cases, such as any combination of long lens, wide aperture and very close distance, even a few millimeters of subject movement may require immediate and very precise focus correction.
After a take, if the focus puller feels that he has made a mistake (a timing error, a missed mark, or any other problem that may render some part of the take "soft"), he or she will usually report the problem to the camera operator (who is most likely to have noticed the error in the viewfinder) or to the DP, and may request another take if one is not already planned.
In addition to sharp eyesight, reflexes and intuition, the main tools of the focus puller are a cloth or fiberglass tape measure, a steel tape measure, a laser rangefinder and, in some cases, an on-camera ultrasonic rangefinder that provides a real-time distance readout, mounted on the side of the matte box or camera body. In configurations where the focus puller cannot touch the camera, such as Steadicam or crane shots, he or she uses a remote follow-focus system, although some focus pullers prefer to use a remote system whenever available. In any of the above situations, the focus puller must still adjust the focus manually during the course of the shot.
Current methods are time-consuming, difficult and highly prone to error. Between unusable takes, slow set-up times and the demand for highly experienced and highly paid focus pullers, this has long been a technological bottleneck in film production, imposing significant creative restrictions on directors and adding to production costs.
Semi-automatic focusing systems relying on laser, sonar and face/object-recognition tracking are known to the Applicant.
These methods are essentially variants of the same approach, in that each senses a "two-dimensional plane" of the image together with depth or distance information for any given region or pixel captured on that plane. In the most advanced systems, the operator can select a point on the two-dimensional image, at which time the distance data for that point is fed in real time to the motor controlling the focus adjustment.
These known methods exhibit several limitations. Most notably, they are all "line of sight" systems: they cannot focus on a subject that is not currently visible in the "two-dimensional image plane". Laser systems require an additional operator to keep the laser aimed at the desired subject. If a subject turns away, leaves the frame, or disappears behind another object or person, a facial-recognition system will lose track of that subject.
Perhaps more importantly, none of these systems can truly achieve the superb accuracy demanded by the most challenging focus tasks, namely a long focal length at wide aperture with a fast-moving subject, where the point of focus on the subject is very specific, such as the eyes. For LIDAR (light detection and ranging) and laser systems, a human operator must track the eyes in real time by moving a cursor on a screen or by aiming an actual laser; it should also be noted that projecting a laser into a person's eyes may be undesirable. While facial-recognition systems can in theory track the eyes, there remains a need to provide a higher level of accuracy and precision.
Known to the Applicant are U.S. Patent Nos. 5,930,740 (MATHISEN), 8,448,056 (PULSIPHER) and 8,562,433 (LARSEN); U.S. Patent Application Publication Nos. 2008/0312866 (SHIMOMURA), 2010/0194879 (PASVEER), 2013/0188067 (KOIVUKANGAS), 2013/0222565 (GUERIN), 2013/0229528 (TAYLOR) and 2013/0324254 (HUANG); and Japanese Patent Application Publication No. JP2008/011212 (KONDO).
In view of the foregoing, there is therefore a need for an improved system which, by virtue of its design and components, would overcome some of the above-discussed prior-art problems.
Summary of the invention
An object of the present invention is to provide a system which, by virtue of its design and components, satisfies some of the above-mentioned needs and is thus an improvement over other systems and/or methods known in the prior art.
An object of the invention is to provide a system and method for controlling a setting of equipment related to image capture. Such equipment may comprise a camera, and the setting may be, for example, a focus setting, a zoom setting, an aperture setting, an inter-ocular lens angle setting, and/or a pan setting, a tilt setting, a camera rotation setting and/or a camera position setting, and/or a lighting device setting and/or a sound device setting and/or the like.
According to an aspect of the application, there is provided a method for controlling a setting of equipment related to image capture, comprising:
A) capturing position data and orientation data at a sensing device;
B) determining, by a processor, from the captured position and orientation data, position information of a region of interest to be treated by the equipment; and
C) outputting, via an output port of the processor, a control signal directed to the equipment, in order to control the setting of the equipment in real time based on the position information of the region of interest.
" equipment " can comprise image-capturing apparatus, such as catches the video camera of the image (photo or video image) of object and/or it can comprise the equipment cooperated with image-capturing apparatus---such as light fixture, sound capture device and/or fellow.
According to another aspect of the application, there is provided a system for controlling a setting of equipment related to image capture, comprising:
- a sensing device configured to capture position data and orientation data;
- a processor in communication with the sensing device, the processor being configured to determine, from the position and orientation data, position information of a region of interest to be treated by the equipment; and
- an output port, integrated with the processor, configured to output a control signal directed to the equipment, in order to control the setting of the equipment in real time based on the position information of the region of interest.
According to another aspect of the application, there is provided a non-transitory computer-readable storage medium having stored thereon data and instructions for execution by a computer, the data and instructions comprising:
- a code module for receiving position data and orientation data of a sensing device;
- a code module for determining, from the position and orientation data, position information of a region of interest to be treated by the equipment; and
- a code module for outputting a control signal directed to the equipment, in order to control the setting of the equipment in real time based on the position information of the region of interest.
According to another aspect of the application, there is provided a method for controlling a setting of equipment related to image capture, comprising:
A) storing in memory one or more identifiers, each associated with a predetermined region of interest to be treated by the equipment, together with corresponding position information;
B) receiving, at a processor, a selection of said one or more identifiers; and
C) outputting, via an output port of the processor, a control signal directed to the equipment, in order to control the setting of the equipment in real time based on the position information of the selected one or more predetermined regions of interest.
According to another aspect of the application, there is provided a system for controlling a setting of equipment related to image capture, comprising:
- a memory configured to store one or more identifiers of predetermined regions of interest to be treated by the equipment, together with corresponding position information;
- a processor in communication with the memory, configured to receive a selection of said one or more identifiers; and
- an output port, integrated with the processor, configured to output a control signal directed to the equipment, in order to control the setting of the equipment in real time based on the position information of the selected one or more predetermined regions of interest.
According to an embodiment, the components of the above system are provided in a central device (for example a computer), and the system further comprises one or more user devices (for example computers, which may be tablet computers with touch screens) for receiving user input, the user devices being in communication with the central device. More specifically, a user device may be configured to present the one or more predetermined regions of interest to the user via a graphical user interface, to receive from the user a selection of said one or more regions of interest, and to send a reference to the selected region(s) of interest to the central device.
According to another aspect of the application, there is provided a non-transitory computer-readable storage medium having stored thereon one or more identifiers of predetermined regions of interest to be treated by the equipment, together with corresponding position information, the computer-readable storage medium further comprising data and instructions for execution by a processor, the data and instructions comprising:
- a code module for receiving a selection of said one or more identifiers; and
- a code module for outputting a control signal directed to the equipment, in order to control the setting of the equipment in real time based on the position information of the selected one or more predetermined regions of interest.
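As a rough sketch of the identifier-based aspect described above, predetermined nodes can be stored under identifiers and a selection turned into a real-time control signal. The class name, storage layout and signal format below are illustrative assumptions, not details taken from the application.

```python
import math

class NodeRegistry:
    """Stores predetermined regions of interest ('nodes') under identifiers
    and produces a control signal for whichever node is selected."""

    def __init__(self):
        self._nodes = {}  # identifier -> (x, y, z) position information

    def store(self, identifier, position):
        # Step A): store an identifier with its corresponding position.
        self._nodes[identifier] = position

    def select(self, identifier, camera_position):
        # Steps B) and C): resolve the selected node and emit a control
        # signal (raises KeyError for an unknown identifier).
        node = self._nodes[identifier]
        return {"node": identifier,
                "focus_distance_m": math.dist(node, camera_position)}
```

In the system described above, a tablet GUI would call `store()` when a node is created and `select()` when the operator taps a node button.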
According to another aspect of the application, there is provided a method for controlling a setting of equipment related to image capture, comprising:
A) capturing position data at a visibility-independent sensing device;
B) determining, by a processor, from the position data, position information of a region of interest to be treated by the equipment; and
C) outputting, via an output port of the processor, a control signal directed to the equipment, in order to control the setting of the equipment in real time based on the position information of the region of interest.
According to another aspect of the application, there is provided a system for controlling a setting of equipment related to image capture, comprising:
- a visibility-independent sensing device configured to capture position data;
- a processor in communication with the sensing device, the processor being configured to determine, based on the position and orientation data, position information of a region of interest to be treated by the equipment; and
- an output port, integrated with the processor, configured to output a control signal directed to the equipment, in order to control the setting of the equipment in real time based on the position information of the region of interest.
According to an embodiment, the system further comprises a controller in communication with the output port and configured to control the setting of the equipment in accordance with the control signal.
According to an embodiment, the setting may comprise: a focus setting of a camera, a zoom setting of the camera, an aperture setting of the camera, an inter-ocular lens angle setting of the camera, a pan setting of the camera, a tilt setting of the camera, a rotation setting of the camera, a position setting of the camera, a lighting device control setting and/or a sound device setting.
According to another aspect of the application, there is provided a non-transitory computer-readable storage medium having thereon data and instructions for execution by a computer, with an input port for receiving position data from a visibility-independent sensing device, the data and instructions comprising:
- a code module for determining, based on the position data and orientation data, position information of a region of interest to be treated by the equipment; and
- a code module for outputting a control signal directed to the equipment, in order to control the setting of the equipment in real time based on the position information of the region of interest.
According to another aspect of the application, there is provided a system for controlling a setting of equipment related to image capture, comprising:
A) a sensor, mounted on a subject to be captured by a camera, adapted to capture three-dimensional position data;
B) a processor, in communication with the sensor, adapted to receive the position data and to generate a control signal based on the position data; and
C) a controller, adapted to communicate with the processor, in order to control the setting of the equipment in response to the control signal.
In particular embodiments, the setting may comprise: a focus setting, a zoom setting, an aperture setting, an inter-ocular lens angle setting and/or a pan setting, a tilt setting, a camera rotation setting, a camera position setting, a lighting device setting, a sound device setting and/or any combination thereof.
In particular embodiments, the sensor device captures orientation data in all three degrees of freedom, for example as azimuth, elevation and roll (A, E, R) Euler angles. In such embodiments, the processor is adapted to calculate the position of the point of focus, or "node", from the position and orientation data representing the pose of the sensor device. The processor is thus adapted to generate the control signal based on the position of the node.
" focus " or " node " be meant to specific point on object or interested region, and carry out the setting (such as, focusing, zoom, aperture, illumination, sound etc.) of opertaing device based on this.Should " node " in motion tracking system sometimes referred to as " tip offset ", described motion tracking system such as provides position and towards both under node does not have identical coordinate with sensor but is in the certain situation at the fixed range place leaving sensor.Such as, node can correspond to the eyes of people, and position and correspond to the rear portion of the number of people at sensor towards data.Therefore, can by arranging the focusing of video camera, zoom, aperture, angle between two from the position of sensor with towards the location calculating the eyes depending on specific people, controlling pan, pitching, rotation, the position of video camera, light fixture and/or sound device.
In particular embodiments, the system further comprises a sensor mounted on the camera itself, for cases in which the camera moves relative to the subject being captured.
According to another aspect of the application, there is provided a method for controlling a setting of equipment related to image capture, comprising:
- capturing three-dimensional position data related to a subject to be captured by a camera;
- generating a control signal based on the position data; and
- controlling the setting of the equipment in response to the control signal.
According to another aspect of the application, there is provided a non-transitory processor-readable storage medium for controlling a setting of equipment related to image capture, the storage medium comprising data and instructions for execution by a processor to:
- receive three-dimensional position data related to a subject to be captured by a camera;
- generate a control signal based on the position data; and
- send the control signal to a controller for controlling the setting of the equipment.
According to another aspect of the application, there is provided a system for controlling a setting of equipment related to image capture, comprising:
- a sensor and a transmitter, mounted on a subject to be captured by a camera, adapted to capture position and/or orientation data;
- a processor adapted to communicate with the transmitter of the sensor, to receive the position data, and to transmit a control signal based on the position and/or orientation data; and
- a controller, adapted to communicate with the processor, to receive the control signal and to control the setting of the equipment in response thereto.
According to other aspects, there are provided methods associated with the above-described systems.
According to other aspects, there are provided non-transitory processor-readable storage media comprising data and instructions for performing the methods associated with the above-described systems.
An advantage of embodiments of the invention is that the use of motion-tracking data of a very specific nature, to create multiple pre-positioned and directional "nodes" in three-dimensional space, enables an improved level of equipment control and automation in a variety of moving and static photographic environments.
An advantage of embodiments of the invention is that, with or without user interaction, it allows real-time tracking and/or selection among multiple predetermined static or moving points (nodes) in three-dimensional space, without any need for additional manual intervention; any of these nodes can be selected at any time using a software interface, a mechanical dial or another mechanical input device. In the example of focus control, once the user selects the desired node, the system automatically adjusts the focus to that node, and holds focus on it even as the node and the camera move. It also enables focusing on nodes that are not in the current field of view, allowing a subject to be brought into focus the moment it enters the composition or emerges from behind another object (a doorway, a wall, a vehicle, etc.).
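To hold focus on a selected node, the computed node-to-camera distance must ultimately be mapped to a lens position. The application later describes calibrating the lens of the controlled camera; one plausible mapping, sketched here under that assumption, is linear interpolation between calibration marks (the mark values below are invented for illustration).

```python
from bisect import bisect_left

def lens_motor_position(distance_m, calibration):
    # `calibration` is a list of (distance_m, motor_step) pairs, sorted by
    # distance, obtained by calibrating the lens of the controlled camera.
    distances = [d for d, _ in calibration]
    steps = [s for _, s in calibration]
    if distance_m <= distances[0]:
        return steps[0]          # clamp below the closest calibrated mark
    if distance_m >= distances[-1]:
        return steps[-1]         # clamp beyond the farthest calibrated mark
    i = bisect_left(distances, distance_m)
    d0, d1 = distances[i - 1], distances[i]
    s0, s1 = steps[i - 1], steps[i]
    t = (distance_m - d0) / (d1 - d0)
    return s0 + t * (s1 - s0)    # linear interpolation between marks
```

In a running system this function would be called once per tracking update, so that the lens motor follows the node continuously.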
The objects, advantages and other features of the present invention will become more apparent upon reading the following non-restrictive description of preferred embodiments thereof, given for the purpose of exemplification only, with reference to the accompanying drawings.
Brief description of the drawings
Figure 1A is a block diagram of a system for controlling camera settings, according to an embodiment of the application.
Figure 1B is a flow chart of the steps of a method performed by the system shown in Figure 1A, according to an embodiment.
Figure 1C is a sequence diagram of a method performed by the system shown in Figure 1A, according to an embodiment.
Figures 2A and 2B are block diagrams of a system for simultaneously controlling the settings of multiple cameras and camera controls, according to another embodiment of the invention.
Figure 3 is a schematic view of a single or double boom pole source mount (boom pole source mount) to be used with the system shown in Figure 1A, according to an embodiment.
Figure 4 is a schematic view of a camera arm source mount (camera arm source mount) to be used with the system shown in Figure 1A, according to an embodiment.
Figure 5 is a schematic view of a camera sensor mount to be used with the system shown in Figure 1A, according to an embodiment, the camera sensor mount comprising a rod and source housings mounted at each end of the rod.
Figure 5A is a perspective view of a source housing of the camera sensor mount shown in Figure 5.
Figure 5B is a side plan view of a portion of the rod shown in Figure 5, showing an end of the rod with a mounting shaft extending therefrom.
Figure 5C is a cross-sectional view of a mounting hole of the source housing shown in Figure 5A, the mounting hole being configured to receive the end of the rod shown in Figure 5B.
Figure 6 is a schematic view of a modular source mounting system to be used with the system of Figure 1A, according to an embodiment.
Fig. 7 shows the home screen of the upper display of graphic user interface (GUI) of the user's set in the system shown in Figure 1A.
The node that Fig. 8 shows the GUI shown in Fig. 7 creates/revises window.
Fig. 9 shows a part for home screen shown in Figure 7, namely defines the node array of various node.
Figure 10 shows the specific node button of the node array shown in Fig. 9.
Figure 11 show the node array shown in Fig. 9 by the node button selected.
Figure 12 shows a part for home screen shown in Figure 7, namely shows sequencer (sequencer) assembly.
Figure 13 shows another part of home screen shown in Figure 7, namely shows corner dial plate control inerface.
Figure 14 shows the another part of home screen shown in Figure 7, namely shows another corner dial plate control inerface.
Figure 15 shows the display screen according to embodiment, and it will be displayed on the user's set of the system shown in Figure 1A, for defining controlled video camera.
Figure 16 shows another display screen according to embodiment, and it will be displayed on the user's set of the system shown in Figure 1A, for calibrating the camera lens of controlled video camera.
Figure 17 shows another display screen according to embodiment, and it will be displayed on the user's set of the system shown in Figure 1A, for selecting the configuration of sensor device.
Figure 18 shows another display screen according to embodiment, and it will be displayed on the user's set of the system shown in Figure 1A, for the configuration of the configuration and sequencer of recording node array in memory.
Figure 19 shows a part for the display screen according to embodiment, and it will be displayed on the user's set of the system shown in Figure 1A, comprises the corner controller of the amount for regulating the delay/lag compensation by being applied to node data.
Figure 20 shows the control display screen of the replacement according to embodiment, and it will be displayed on the user's set of the system shown in Figure 1A, comprises the interactive graphics (IG) relevant with linear sequencer function and represents.
Figure 21 shows the control display screen of the replacement according to embodiment, and it will be displayed on the user's set of the system shown in Figure 1A, comprises the interactive graphics (IG) relevant with the sequencer function of customization and represents.
Figure 22 shows the control display screen of the replacement according to embodiment, and it will be displayed on the user's set of the system shown in Figure 1A, comprises the interactive graphics (IG) relevant with free sequencing function and represents.
Figure 23 shows the other control display screen according to embodiment, and it will be displayed on the user's set of the system shown in Figure 1A, comprises the interactive graphics (IG) relevant with free sequencing function and represents.
Figure 24 shows a part for the home screen according to embodiment, and it is by the graphic user interface (GUI) of user's set that is displayed in the system shown in Figure 1A, i.e. 4-node geometry controller feature.
Figure 25 shows a part for the home screen according to embodiment, and it is by the graphic user interface (GUI) of user's set that is displayed in the system shown in Figure 1A, i.e. 3-node geometry controller feature.
Detailed Description of the Embodiments
In the following description, the same reference numbers refer to similar elements. The configurations and/or geometries and dimensions of the embodiments illustrated in the accompanying drawings or described herein are provided for purposes of example only.
Broadly described, a system and method for controlling camera settings uses, according to particular embodiments, motion capture or a global (or local) positioning system to produce three-dimensional position and orientation data. These data are fed to a software process that computes, in real time, position and orientation in three-dimensional space, together with other derived quantities including the relative distance between a desired subject and the camera. The resulting data are then used to control, in real time, control devices such as servo motors that operate camera-related equipment, the camera-related equipment being, for example, lens focus, lens aperture and a remote camera head.
More specifically, according to particular embodiments, the disclosure relates to controlling focus and composition, and involves generating predetermined points in three-dimensional space, hereinafter referred to as "nodes". A node can be fixed in space, e.g. a flower in a vase, or it can be moving, e.g. a person or an animal. A fixed node needs no sensor if the camera does not move or if the camera carries a sensor. A moving node, like a moving camera, requires a sensor. Because a motion tracking system in essence creates the possibility of defining numerous points within a given three-dimensional space, interfacing with these data allows far greater complexity and frees up creative and practical possibilities. An important feature of a "node" as defined and used in this system is that it has both position and orientation data: this allows intelligent operations, such as automatically shifting focus between the left and right eye — see "Automatic profiling (AutoProfiling)" later in this document.
Referring now to Figure 1A, there is provided a system 10 for controlling a setting of equipment 112 related to image capture, such as a camera 12. The system 10 includes one or more sensing devices 114, such as sensors 14, configured to capture position data and orientation data at the sensing device. The system 10 also includes a processor 16, here embedded in a data processing device 28 (also referred to as a "data processing unit"). The processor 16 communicates with the sensing device 114, and is configured to determine, based on the position and orientation data, positional information of a region of interest to be processed by the equipment 112. The processor 16 also includes a control signal output port 43 configured to output a control signal directed to the equipment 112, so as to control the setting of the equipment 112 in real time based on the positional information of the region of interest.
The system 10 also includes a controller 118 communicating with the output port 43 and configured to control the setting of the equipment 112 with the control signal. The system 10 also includes a memory 132, such as RAM 32, for storing the position data and orientation data. The system 10 also includes the equipment 112. According to this embodiment, the sensing device 114 is visibility independent (i.e. a non-line-of-sight sensor), and includes a transmitter 22. The system 10 also includes a receiver 26 providing communication between the transmitter 22 and the processor 16. The system 10 also includes a user device 40 comprising a user interface 42, which communicates with the data processing device 28 over a wireless communication network 39.
More specifically, Figure 1A shows the system 10 for controlling settings of the camera 12. The system 10 includes sensors 14, each mounted on an object to be captured by the camera 12 and each adapted to capture three-dimensional position data based on its own location. The system 10 also includes the processor 16, adapted to communicate with the sensors 14 so as to receive the position data and to transmit control signals based on the position data. The system 10 also includes a controller 18 adapted to communicate with the processor 16 so as to control the settings of the camera 12 in response to the control signals.
As also shown in Figure 1A, the sensors 14 are each hard-wired 20 to a hub/transmitter 22. The hub/transmitter 22 communicates over a radio frequency (RF link) communication mode 24 with a universal serial bus (USB) receiver 26, which in turn is connected via a USB connection 27 to the data processing device 28 in which the processor 16 is embedded.
The data processing device 28 also includes a power supply 30, DDR3 random access memory (RAM) 32, and embedded Flash non-volatile computer storage 34. The data processing device 28 also includes a WiFi communication module 36 and a Zigbee™ wireless communication module 38 for communicating over a wireless data network 39 with the user device 40, which in this example is an iPad™ comprising a user interface 42. It should be understood that the iPad™ can be substituted with, or combined with, any other suitable computing device, such as, for example, an Android™ tablet computer.
The controller 18 is connected to the data processing device 28 by a hardwire connection 44. The controller 18 is attached in the region of the camera 12, and includes a Cypress PSoC™ 5LP microcontroller unit (MCU) 46 and a power supply 48. H-bridges 50, 52, 54 connect the controller 18 to servo motors 56, 58, 60, which automatically operate particular settings of the camera 12, namely focus, aperture and zoom, respectively.
It should be understood that, according to alternative embodiments, the above components may be interconnected in any suitable manner by any suitable communication mode.
Indeed, and for example, in the embodiment of Figures 2A and 2B, multiple cameras 12 are controlled by a system 10'. Each camera 12 is connected to a "slave" data processing device 28b, operable via a corresponding user interface of a user device 40. The "slave" data processing devices 28b communicate with a "master" data processing device 28a.
The remaining components of Figures 2A and 2B correspond to the similar components shown in Figure 1A.
In the embodiments of Figures 1A and 2A-2B, the sensing system is provided by a magnetic motion tracker. More specifically, the sensors 14 are provided by induction coils, and the systems 10, 10' also include an alternating current (AC) magnetic source generator (see Figure 3). The hub 22 powers the sensors 14, translates their data, and transmits the position data over the radio frequency link 24. Preferably, the magnetic source is mounted, together with an onboard power supply, on a customized telescopic rod mount.
Optionally, an RF repeater can be provided to extend the range of data transmission from the motion capture system. The USB RF receiver needs to obtain data from the sensors and send it toward the camera. If the distance between the camera and the sensors is very large (for example when using a 2000mm lens, or a 200mm lens for a car commercial, etc.), an increased range may be needed. Also optionally, a USB repeater can be provided to extend the range of data transmission from the motion capture system.
The user interface 42 of each user device 40, i.e. each iPad™, includes a touch screen, and the user device 40 is adapted to execute interface software that communicates with one or more of the central controllers 28, 28a, 28b.
Optionally, mechanical input devices (such as a focus control dial or slider) can be provided to serve as analog/digital interfaces adding extra control features to the software. For example, as shown in Figures 2A and 2B, one of the user devices 40 has a user interface 42 comprising a focus knob 62.
The central data processing unit 28 runs a Linux™ operating system and carries out most of the processing to control the one or more servo motors 56, 58, 60.
As described above, the servo motors 56, 58, 60 mechanically adjust camera settings, such as, for example, focus, zoom, aperture, and/or control of pan, tilt, roll and/or the like.
It will be understood that, depending on the particular embodiment, the settings may include any one or a combination of the following: a focus setting of the camera, a zoom setting of the camera, an aperture setting of the camera, an angle setting between two lenses of the camera, a pan setting of the camera, a tilt setting of the camera, a roll setting of the camera, a position setting of the camera, a lighting equipment control setting, an audio equipment setting, and the like.
In the context of this description, the term "processor" refers to an electronic circuit configured to execute computer instructions, such as a central processing unit (CPU), a microprocessor, a controller and/or the like. According to embodiments of the invention, as will be understood by those skilled in the art, multiple such processors may be provided. A processor may be provided, for example, in one or more general purpose computers and/or in any other suitable computing device.
Still in the context of this description, the term "storage" refers to any computer data storage device or assembly of such devices, including, for example: a temporary storage unit, such as random access memory (RAM) or dynamic RAM; a permanent storage unit, such as a hard disk; an optical storage device, such as a CD or DVD (rewritable or write-once/read-only); a flash memory; and/or the like. As will be understood by those skilled in the art, multiple such storage devices may be provided.
Furthermore, "computer-readable storage" refers to any suitable non-transitory processor- or computer-readable storage medium or product.
Other components that can be used with the above-described systems 10, 10' include:
- a customized modular system of non-metallic rod mounts for source placement, namely a carbon fiber scaffolding apparatus with pre-sized framing, allowing quick and easy assembly when more than two sources are used;
- various clips and brackets, for mounting sensors and magnetic field sources to cameras, objects and subjects; and
- various tools, for facilitating the setting of node offsets and the convenient measurement of on-site points.
That is, Figure 3 shows a single or double boom pole source mount to be used with the system, according to an embodiment. Figure 4 shows a camera arm source mount to be used with the system, according to an embodiment. Figure 5 shows a camera sensor mount to be used with the system, according to an embodiment, portions of which are shown in Figures 5A-5C. Figure 6 shows a modular source mounting system to be used with the system, according to an embodiment.
Operation of the System
As described above, embodiments of the application allow control of focus and composition, and involve creating predetermined points in three-dimensional space, referred to herein as "nodes", each having position and orientation data. A node can be fixed in a room, e.g. a flower in a vase, or it can be moving, e.g. a person or an animal. A fixed node needs no sensor if the camera does not move or if the camera carries a sensor. A moving node, like a moving camera, requires a sensor.
In operation, with reference to Figure 1A, a sensor 14 produces coordinates representing its physical location, such as X, Y, Z coordinates in a Cartesian coordinate system, and/or values representing the orientation of the sensor, namely azimuth, elevation and roll (A, E, R). For example, when the sensor 14 is placed on the back of the head of a person being captured by the camera 12, the information produced by the sensor indicates the location of the sensor and whether the person's head is facing forward, backward, etc.
The processor 16 receives the position and orientation information, and calculates the position of a "node". For example, when the sensor 14 is placed on the back of a person's head, the "node" can correspond to the person's eyes. The processor 16 therefore looks up the predetermined position of the person's eyes relative to the sensor 14, and calculates the position of the eyes, i.e. the focus target, based on the received location and orientation information. The processor then calculates the distance between the camera 12 and the focus target. Based on the calculated distance, the processor 16 outputs a control signal so as to control a setting of the camera 12.
Therefore, referring further to Figure 1A, and as better illustrated in Figure 1B, a method 200 for controlling a setting of the equipment 112 is provided. The method 200 includes capturing 210, by means of the sensing device 114, three-dimensional position data and orientation data of the sensing device 114; and storing 212 the position data and orientation data in the memory 132. The position data and orientation data are captured by producing coordinates representing the physical location of the sensing device 114 and characteristics representing its orientation. Based on the three-dimensional position data and orientation data, the method 200 further includes determining 214, by means of the processor 16, positional information of a region of interest to be processed by the equipment 112, i.e. a "node". The node and the sensing device 114 are usually located at different places. The processor 16 then determines 216 the positional information of the node, and further calculates 218 the distance between the equipment 112 and the node.
The method also includes outputting 220, via the output port 43, a control signal directed to the equipment 112 based on the calculated distance.
More specifically, a "distance formula" derived from the Pythagorean theorem is used to calculate the distance between two points (x₁, y₁, z₁) and (x₂, y₂, z₂) in three-dimensional Euclidean space. Once the precise locations of two nodes are determined, the distance between them can be calculated using the distance formula. For the example of camera focus, if the node is the subject to be brought onto the camera's plane of focus, then the outer focus ring of the lens, or an internal electronic focusing mechanism, can be set to this distance so as to bring the subject into focus.
More specifically, the positional information of each node calculated in step 216 comprises the Euclidean space coordinates (x₁, y₁, z₁) of the node, and the calculating step 218 comprises:
- receiving 222 positional information of the equipment in Euclidean space coordinates (x₂, y₂, z₂); and
- calculating 224 the distance between the positional information of the equipment and the positional information of the node from the Pythagorean theorem:

d = √((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²)
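By way of a non-normative illustration, the distance calculation of steps 222-224 can be sketched as follows (Python; the function name and example coordinates are purely illustrative):

```python
import math

def euclidean_distance(p1, p2):
    """3-D distance between a node (x1, y1, z1) and the equipment (x2, y2, z2)."""
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(p1, p2)))

# e.g. a node 3 m in front of and 4 m to the side of the camera
focus_distance = euclidean_distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0))
print(focus_distance)  # 5.0
```

In the focus example, the value returned would be the distance to which the lens focus ring (or internal electronic focusing mechanism) is driven.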
For a motion tracking sensor that measures both position and orientation, vector mathematics can be used to apply a "tip offset" to the location of the sensor. If, for example, a performer places the sensor on the back of his or her skull, a tip offset can project the position of the sensor onto the surface of the performer's left eye, in effect creating a virtual sensor on the performer's eye. For rigid objects, applying a tip offset allows a node to be defined anywhere inside the object or on its surface. Similarly, tip offsets (nodes) can be created anywhere in 3D space, i.e. they may exist outside of the object whose location coordinates and orientation the sensor represents. Therefore, the determining step 216 comprises applying 226 a tip offset to the position data and orientation data of the sensing device 114 from the capturing step 210, so as to calculate the positional information of the node.
The method for performing this tip offset (node) projection uses X, Y and Z offsets, measured from the origin of the sensor to the eye, in the axis system defined by the sensor. For the eye example, the offsets could be 10cm in the X direction, 0cm in the Y direction and 8cm in the Z direction in the sensor's local coordinate system. With these offsets, rotation matrices and/or quaternions can be used to calculate the absolute position (X, Y, Z) and orientation (yaw, roll, pitch) of the performer's eye in the coordinate system of the motion tracking system. The following equations solve this tip offset problem using the standard rotation matrix method (see http://www.flipcode.com/documents/matrfaq.html#Q36).
Therefore, in this embodiment, the step 226 of applying a tip offset (see Figure 1B) comprises: obtaining, in the axis system defined by the sensing device 114, the coordinates of the node relative to the three-dimensional position data and orientation data of the sensing device 114. In this case, the determining step 216 comprises estimating the absolute position of the node with respect to the equipment 112.
The absolute position of the node is estimated as follows, using the rotation matrix M = X·Y·Z, where M is the final rotation matrix and X, Y, Z are the individual axis rotation matrices:
        | CE           -CF           D    |
M =     | BDE + AF     -BDF + AE     -BC  |
        | -ADE + BF    ADF + BE      AC   |
where:
A, B are respectively the cosine and sine of the X-axis rotation angle, i.e. roll;
C, D are respectively the cosine and sine of the Y-axis rotation angle, i.e. pitch;
E, F are respectively the cosine and sine of the Z-axis rotation angle, i.e. pan;
X_f = X_s + X_t·M(1,1) + Y_t·M(2,1) + Z_t·M(3,1)
Y_f = Y_s + X_t·M(1,2) + Y_t·M(2,2) + Z_t·M(3,2)
Z_f = Z_s + X_t·M(1,3) + Y_t·M(2,3) + Z_t·M(3,3)
where:
X_f, Y_f, Z_f are the absolute (or "final") coordinates of the node;
X_s, Y_s, Z_s are the coordinates of the center of the sensing device;
X_t, Y_t, Z_t are the coordinates of the tip offset relative to the center of the sensing device;
M(row, column) is the element of the rotation matrix at the given row and column, where "row" denotes the row number in the matrix and "column" denotes the column number in the matrix.
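The rotation-matrix projection above can be sketched as follows (a non-normative Python illustration assuming the flipcode angle convention referenced above; all names are illustrative):

```python
import math

def rotation_matrix(rx, ry, rz):
    """Combined rotation matrix M = X.Y.Z in the flipcode convention
    (rx = roll, ry = pitch, rz = pan, all in radians)."""
    A, B = math.cos(rx), math.sin(rx)   # X-axis: cosine, sine
    C, D = math.cos(ry), math.sin(ry)   # Y-axis: cosine, sine
    E, F = math.cos(rz), math.sin(rz)   # Z-axis: cosine, sine
    return [[C*E,           -C*F,           D],
            [B*D*E + A*F,   -B*D*F + A*E,   -B*C],
            [-A*D*E + B*F,   A*D*F + B*E,    A*C]]

def node_position(sensor, tip_offset, rx, ry, rz):
    """Apply the equations above:
    X_f = X_s + X_t*M(1,1) + Y_t*M(2,1) + Z_t*M(3,1), etc."""
    M = rotation_matrix(rx, ry, rz)
    xs, ys, zs = sensor
    xt, yt, zt = tip_offset
    return (xs + xt*M[0][0] + yt*M[1][0] + zt*M[2][0],
            ys + xt*M[0][1] + yt*M[1][1] + zt*M[2][1],
            zs + xt*M[0][2] + yt*M[1][2] + zt*M[2][2])

# with no rotation, the node is simply sensor + offset
# (e.g. an eye 10cm forward and 8cm up from the sensor)
print(node_position((1.0, 2.0, 3.0), (0.10, 0.0, 0.08), 0.0, 0.0, 0.0))
```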
Measurement of the "tip offset" can be facilitated by another method. For example, the sensor on the back of the performer's skull has an initial orientation that can be represented by Euler angles or by a quaternion. The user wishes to define a node on the left eye of the performer. Another motion tracking sensor can be placed on the performer's eye to calculate the X, Y and Z offsets (rather than, for example, attempting to use a tape measure). The solution is to measure, at this initial time, the "tip offset" and the orientation. Given the base sensor at position P1 and a sensor at the desired node position P2, the "tip offset" V1 is P2 − P1. The initial orientation is defined as a quaternion Q1 with X, Y, Z and W attributes. At any other time, a new orientation Q2 is produced.
Therefore, in this embodiment, the step 226 of applying a tip offset comprises obtaining the position of the node by means of a sensing device placed at the position of the node, together with the position and orientation of a base sensing device and the pre-calculated tip offset. As mentioned above, the initial orientation is defined as a quaternion Q1 with X, Y, Z and W attributes, and the orientation data of the capturing step is defined as Q2. The positional information of the node is then determined according to:

P_node = P_n + (q_n·q_i⁻¹)·P_i·(q_n·q_i⁻¹)⁻¹
where:
P_i is the offset from the sensor at orientation q_i, treated as a pure quaternion in the product above;
P_n is the current position of the sensor;
q_i is the orientation of the sensor at the time P_i was calculated;
q_n is the current orientation of the sensor; and
q_i and q_n are unit quaternions (so that their conjugates equal their inverses).
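The quaternion-based tip offset above can be sketched as follows (a non-normative Python illustration; quaternions are written (w, x, y, z), and all names are illustrative):

```python
import math

def q_mul(q, r):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def q_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)  # for unit quaternions, conjugate == inverse

def node_position(p_n, p_i, q_i, q_n):
    """Current node position: sensor position plus the initial offset p_i,
    re-rotated by the change in orientation since calibration."""
    q = q_mul(q_n, q_conj(q_i))                       # q_n * q_i^-1
    _, rx, ry, rz = q_mul(q_mul(q, (0.0,) + tuple(p_i)), q_conj(q))
    return (p_n[0] + rx, p_n[1] + ry, p_n[2] + rz)

# sensor has turned 90 degrees about the vertical (Z) axis since calibration:
print(node_position((0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                    (1.0, 0.0, 0.0, 0.0),
                    (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))))
# → approximately (0.0, 1.0, 0.0)
```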
Various other means and/or methods can be employed to process position and/or orientation data so as to provide a number of enhanced system features. One example is using quaternions to calculate the position and orientation of a motion capture "magnetic source" relative to the origin of the motion capture coordinate system. If a member of the film crew places a source at a random position and orientation, then by using a motion sensor within range of this random source, together with data from a sensor or source of known position and orientation, and data from a measurement device such as a laser range finder, the precise location and orientation of the random source can be determined. Simple assembly tools and software can make this example process very fast and simple to perform.
Referring back to the embodiment shown in Figures 1A and 1B, the method 200 also includes controlling 228 the setting of the equipment 112 with the control signal, by means of the controller 118, which is embedded in the equipment 112.
Given that a node is offset from a sensor, the orientation data advantageously allows the node to remain correctly located even as the sensor rotates, because the position of the offset rotates with the sensor. For example, a sensor can be mounted on the handle of a knife, and the focus can be fixed to the tip of the knife, tracking the tip with high precision no matter how the knife moves and rotates.
A further advantage of using orientation data involves a "calibrated offset" function. Using orientation data, a second sensor can be used to instantly calculate the desired offset position of a node of interest. For example, placing a sensor on the back of a performer's neck and then placing a second "calibration sensor" on the performer's eye is a fast and powerful way to create a node. This feature is explained further below.
A further advantage of using orientation data involves a "quick-set" function, which is a special case of the calibrated offset feature. The quick-set function is useful when both the camera and the subject have sensors mounted on them and the camera is pointed at a subject whose sensor is out of view (e.g. on the subject's back). The camera focus is then adjusted until the desired part of the subject, such as the eyes, is in focus. Using the orientation data, together with the distance indicated by the lens between the subject and the camera, a fast and reasonably accurate setting for the node of interest can likewise be obtained.
Various functional features and aspects according to particular embodiments of the invention will now be described.
According to the embodiment shown in Figure 1C, and referring further to Figure 1A, a method 300 for controlling a setting of equipment related to image capture is shown. The method 300 includes storing 314, in the memory 132, identifiers of one or more predetermined regions of interest (i.e. "nodes") to be processed by the equipment 112, together with corresponding positional information (i.e. three-dimensional coordinates relative to the equipment). The positional information is obtained by: capturing 310 position data and orientation data at the sensing device 114; and determining 312, from the position and orientation of the sensing device 114, the positional information of the region of interest to be processed by the equipment 112. The method 300 also includes receiving 316, at the processor 16, a selection of one or more of the identifiers. The method 300 also includes outputting 318, via the output port 43, a control signal directed to the equipment 112, so as to control 320 the setting of the equipment 112 in real time based on the positional information of the selected region of interest.
Node array:
An array of desired nodes can be created in the interface by pre-defining the nodes (static or moving). Simply by selecting a node, the lens is immediately focused, and/or the camera is pointed so that the node is composed in the field of view. This allows on-set improvisation, extremely fast switching among a large number of subjects, and accurate adjustment between two moving subjects, without any manual measurement, any manual adjustment of the focus dial, or — in the case of camera operation — any manual adjustment of the camera itself. Therefore, in this case, the receiving step 316 of the method 300 described in Figure 1C comprises receiving a selection of nodes in a predefined order; and the method repeats the outputting step 318 for each selected node, so that the setting of the equipment 112 is automatically controlled 320 for each of the multiple nodes in turn, according to the order of selection.
Node sequencer:
It is also possible to create a predefined order of nodes, which suits the typical case of film production in which the director knows the order of subjects in advance. In this way, having pre-loaded the desired nodes, the user can shift from one subject to the next simply by tapping a "next" button or by turning a dial (physical or virtual) back and forth. The user can not only switch between two subjects at any desired moment, but can also dictate the speed of the focus pull between subjects. Therefore, the aforementioned steps 318, 320 shown in Figure 1C (with reference to Figure 1A) are repeated as prompted when a user input command is received via the input port 41. Alternatively, steps 318, 320 are repeated based on a schedule stored in the memory 132.
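The node array and node sequencer described above can be sketched as follows (a minimal, non-normative Python illustration; the class and function names are purely illustrative):

```python
import math

class NodeSequencer:
    """Steps through a pre-loaded, ordered list of nodes;
    each 'next' retargets the focus to the following node."""
    def __init__(self, nodes):
        self.nodes = nodes          # list of (name, (x, y, z)) tuples
        self.index = -1

    def next_node(self):
        self.index = (self.index + 1) % len(self.nodes)
        return self.nodes[self.index]

def focus_distance(camera_pos, node_pos):
    """Distance the lens is driven to when a node is selected."""
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(camera_pos, node_pos)))

seq = NodeSequencer([("actor_left_eye", (2.0, 1.0, 1.6)),
                     ("vase_flower", (4.0, -1.0, 1.0))])
camera = (0.0, 0.0, 1.5)
name, pos = seq.next_node()
print(name, round(focus_distance(camera, pos), 3))  # prints node name and distance
```

In a real system, the distance would be recomputed continuously from live sensor data so that focus holds even while both the node and the camera move.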
Geometry slider:
On a touch screen device, a graphical representation of the nodes (or node array) can also be arranged in geometric patterns (triangles, squares) or arbitrary patterns (zigzag lines, curves, etc.), and by sliding a finger between the nodes the user can pull focus between subjects, again with control over the speed of the focus pull, and again regardless of subject or camera motion and without needing to measure or adjust the actual focus distance.
Therefore, the method 300 (with reference to Figure 1A) shown in Figure 1C also includes receiving, via the input port 41, a user input command corresponding to a sliding motion on the touch screen representing a displacement between two adjacent nodes, wherein the selection of the receiving step 316 comprises the identifiers of the adjacent nodes. The method 300 also includes associating intermediate positions between the adjacent nodes according to the displacement, and repeating the outputting step 318 for each of the intermediate positions.
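The association of intermediate positions with a slider displacement can be sketched as a simple linear interpolation (a non-normative Python illustration; names and coordinates are illustrative):

```python
def slider_focus(node_a, node_b, t):
    """Intermediate focus target between two adjacent nodes;
    t is the normalized slider displacement (0.0 -> node_a, 1.0 -> node_b)."""
    return tuple(a + t * (b - a) for a, b in zip(node_a, node_b))

# halfway between two nodes (coordinates in metres, purely illustrative)
mid = slider_focus((2.0, 1.0, 1.6), (4.0, -1.0, 1.0), 0.5)
print(mid)  # close to (3.0, 0.0, 1.3), up to float rounding
```

Driving the lens through these intermediate targets as the finger moves is what lets the operator control the speed of the focus pull in real time.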
Interface modes:
Using the node array, the sequencer, the geometry slider, and a hardware dial or other input device, the user can choose between two basic focusing modes.
One mode is "touch to focus", in which the user simply touches a button (virtual, or on a physical input device) to select a node or to advance to the next predetermined node in the node sequence. In this mode, it should also be noted that the speed of the focus pull toward the next node can be predetermined, either by pre-defined preferences or by adjusting a virtual "speed dial" or analog input device.
The second mode is "slide to focus", in which the user not only selects the next node but, by using the geometry slider, a virtual dial or an analog input device, selects the next node and effects the speed of the focus pull in real time. This mimics the current focus-pulling paradigm, in which the focus puller controls the speed of the pull, but without introducing any risk of the desired subject being out of focus.
Tip offsets and multiple nodes from a single sensor:
By using a sensor that provides real-time position and orientation data, the same sensor can be used to create multiple nodes. This is accomplished by entering "offset values" using the X, Y, Z position coordinates and the relative azimuth, elevation and roll coordinates. Thus, a sensor attached to the back of a subject's head — the head being a rigid object — can have several nodes associated with it. Eyes, nose, ears, etc. can each be defined as nodes from a single sensor using this technique.
Fine adjustment of tip offsets:
When it is difficult to accurately measure offsets in three dimensions, two automated techniques are provided:
- Assuming the sensor is placed on the back of the performer's neck and the desired node is actually the eye, a second sensor can be temporarily placed on the eye. Using the orientation data, the "tip offset" data from the second sensor can be automatically calculated and applied to the node.
- A tip offset can be adjusted manually by placing the subject in the camera's line of sight; the focus puller then pulls focus until the desired node (usually the eyes) is in focus. The system can roughly calibrate its own tip offset, because it knows the orientation of the sensor and it knows how much the focus has been adjusted relative to the sensor data.
Automatic profiling:
If the user defines a node as the eyes using a sensor hidden somewhere on the actor's body, the system can be told that this node is in fact "two nodes": a left eye and a right eye. Because the system always knows where the camera is, where the subject is, and how the subject is oriented relative to the camera, it can, for example, focus on the left eye when the left side of the face is toward the camera, and on the right eye when the right side is toward the camera. Accordingly, the method 300 illustrated in Fig. 1C (see Fig. 1A) also includes, at step 316, determining which of the selected nodes (or one or more regions of interest) meets a specified criterion; the signal of step 318 is then generated according to the node meeting that criterion.
Similarly, any rotating object or subject can have several such "profiled" nodes associated with it, and those nodes can be triggered as the object or subject rotates.
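A minimal sketch of the profiling decision, reduced here to a nearest-sub-node test: the patent describes an orientation test, so choosing the sub-node closest to the camera is a simplifying assumption, and the function names are illustrative:

```python
import math

def select_profile_node(camera_pos, sub_nodes):
    """Given a compound node's sub-nodes as {name: (x, y, z)}, return
    the name of the one nearest the camera, as a stand-in for the
    orientation-based left-eye/right-eye selection in the text."""
    return min(sub_nodes, key=lambda n: math.dist(camera_pos, sub_nodes[n]))
```

When the head turns, the eye on the camera-facing side becomes the nearer sub-node and is selected.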
Zoom control:
Similar to focusing, the position and orientation data can also be used to adjust zoom. For example, if it is desired that a subject remain the same size in frame regardless of its distance, the system can, given the lens parameters, automatically zoom in and out as the subject or object moves. Note: this effect is sometimes called a "dolly zoom" or "triple reverse zoom", and currently requires extremely stable camera motion and repeated rehearsal to achieve. This system makes it possible to create the effect in hand-held shots, and in shots where performers and camera move freely.
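Under a thin-lens approximation, the constant-framing relationship behind a dolly zoom is that focal length scales linearly with subject distance. A hedged sketch (names are illustrative; real lens models are more involved):

```python
def dolly_zoom_focal_length(f_ref, d_ref, d_now):
    """Focal length (mm) that keeps the subject's framed size constant
    when the subject distance changes from d_ref to d_now, under a
    thin-lens approximation where image size ~ f / distance."""
    return f_ref * d_now / d_ref
```

For example, a subject framed at 50 mm from 2 m needs roughly 100 mm when the distance doubles to 4 m.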
Mirror mode:
The functionality can also be extended to calculate the virtual distances and angles required to shoot a reflection, for example in a mirror. The focus distance to a subject reflected in a mirror equals the distance from the camera to the mirror plus the distance from the mirror to the subject; by placing sensors on the mirror and the subject (and on the camera, if the camera moves), the system can rapidly calculate the correct virtual distance in order to focus on the reflection when desired.
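The mirror-mode arithmetic stated above (camera-to-mirror distance plus mirror-to-subject distance) can be sketched directly; the function name is illustrative:

```python
import math

def mirror_focus_distance(camera, mirror, subject):
    """Virtual focus distance for a reflection: the optical path length
    camera -> mirror -> subject, per the mirror-mode description."""
    return math.dist(camera, mirror) + math.dist(mirror, subject)
```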
Focusing on the optimal focal plane between two nodes or two offset nodes:
It may be desirable, for example, to focus between two subjects each wearing a sensor. The user can then select the midpoint, so that with a suitable lens both subjects are held in focus simultaneously: the focal plane lies at the intermediate point between them, and therefore roughly at the midpoint of the depth of field, giving the best focus for both subjects. The photographer can also select any point between the two subjects, in particular if they wish to give one of the two subjects priority and ensure it stays sharp when the other leaves the depth of field.
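A sketch of the biased selection point between two sensor-equipped subjects; the linear-interpolation parameterization and the names are illustrative assumptions:

```python
def focus_point_between(node_a, node_b, bias=0.5):
    """Focus target between two nodes. bias=0.5 gives the midpoint;
    values toward 0.0 favour node_a, toward 1.0 favour node_b."""
    return tuple(a + bias * (b - a) for a, b in zip(node_a, node_b))
```

The system would then drive focus to the distance of this point rather than to either node.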
Interaxial angle adjustment for 3D:
Some 3D photography rigs require real-time adjustment of the interaxial angle between two cameras. The system can automate this adjustment by tethering the angle to the chosen subject/object.
Iris control:
In some cases it is desirable to "pull the iris" to adjust the amount of light entering the lens, for example when a single shot moves from a brighter outdoor location to a darker interior. By tethering the iris adjustment to the camera position, automatic iris adjustment can be performed over a predetermined range of areas. Furthermore, because orientation data is available for the camera, the iris can be adjusted based on the camera's direction alone, enabling scenes, currently impossible, in which a set or location is lit to more than one "key light" value and the iris always adjusts smoothly between those exposure values.
Saved setups:
The system can be used to pre-plan very complex shots or scenes, with all the required data about the "nodes" and any sequences entered into a file in the interface software. Saving such "scene" setups greatly improves efficiency, and gives creators the ability to plan and prepare shots of a complexity that cannot currently be attempted.
Distance display:
The system calculates the relative distance between subjects and the camera at all times, and this can be displayed as distance data on any desired readout at any time. For example, the distance data for the selected "node" can always be displayed on the main control screen of the software interface. "Satellite devices" can also be fed this distance data, and a user can select any node at any time to read its data.
For example, the focus puller may be focused on actor A during a rehearsal, but the cinematographer may want to know how far away actor B is in order to assess the light level required to build the depth of field the director has asked for. Using a hand-held device such as an iPod Touch™ or smartphone, the cinematographer can obtain actor B's distance data in real time even while actor A is being held in focus.
Multi-camera support:
The system allows the user to set up one or more cameras, with no practical upper limit, and to point multiple cameras at the same subject or object, or each camera at a separate one.
Other real-time data:
Having real-time data also allows other quantities to be calculated and indicated in real time:
- The depth of field for any given node at any given time.
- Minimum-focus warnings, for example: the distance can be displayed in orange when a predetermined near limit is reached, and flagged in red when a subject reaches the lens's actual minimum focus distance.
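The minimum-focus warning above could be sketched as a simple threshold test; the margin value and colour labels are illustrative assumptions:

```python
def focus_warning(distance_m, min_focus_m, near_margin_m=0.5):
    """Classify a node's distance against the lens's minimum focus
    distance: 'red' inside minimum focus, 'orange' within the warning
    margin above it, 'ok' otherwise."""
    if distance_m < min_focus_m:
        return "red"
    if distance_m < min_focus_m + near_margin_m:
        return "orange"
    return "ok"
```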
Manual override and automatic switching:
Since the focus puller or camera operator may wish to control focus manually at any time, to preserve the system's efficiency it enables full-time switching between automatic and manual at any moment. These methods are available in the current system:
- A digital fine-adjustment "dial" is permanently available to the focus puller. Simply by adjusting it, the focus puller can override the automatic focus setting by any amount.
- "Slate mode": at the press of a button, the system switches instantly from automatic to fully manual.
- "Automatic switchover": this mode allows the user to predefine nodes, subjects or objects at which the system switches from automatic to manual, or vice versa. This can be useful with very long lenses where a subject travels a long distance, and/or as a way of avoiding undesirable jumps in the data.
Boom-mounted source:
Because the film industry is accustomed to mounting microphones on a long telescopic pole known as a "boom", one unique implementation of this system mounts a magnetic source on a boom, which can then be placed conveniently at the closest safe position over the performance area, in the same way as a microphone. If both the subjects and the camera are fitted with sensors, the desired focus-distance data can still be collected for multiple nodes. This method, however, does not allow camera operation, or the use of fixed nodes not associated with a sensor.
Dual- (and multi-) source booms:
Extending the basic idea of a single boom-mounted source, two sources can be mounted, one at each end of the boom, to extend range. Similarly, other hand-held configurations, triangular or square for example, can extend range; because the relative positions of the sources can be pre-configured in the software, this allows quick setups that require no calibration.
Camera-mounted source:
Mounting the source directly on the camera, and using the software to calibrate the camera's position relative to the source, allows the system to operate without a sensor on the camera. This allows a rapid "single-source system" setup, which provides the greatest accuracy at the close range where sharp focus is needed most.
Modular system:
Multiple sources (with no theoretical upper limit) can be deployed in predetermined configurations or placed at random. Predetermined configurations (for example an equilateral triangle with 10 ft sides) enable quick setup and cover a larger area. Random configurations require some manual setup in the software, but allow great flexibility in the shape and area covered by the system.
Static magnetic source (or optical sensor) calibration:
Because the system uses multiple magnetic sources (or, in the infrared case, multiple cameras), and the X, Y, Z and A, E, R of each source need to be entered into the system, a simple interface for entering these data is included in the system.
Predictive (or Kalman) filtering:
Because any automated system inspects data in real time, it is always looking at the past. Although this system will be extremely fast, even a delay of microseconds may have a visible effect in extremely challenging situations (for instance a very long lens on a rapidly moving subject in low light). Current filmmakers and cinematographers avoid these challenging situations, and in fact spend large amounts of money to overcome them, notably by renting very expensive lighting packages to maintain an average f-stop of 5.6. Any slight lag in the data can easily be overcome by adding a predictive algorithm to the system that compensates for any delay in the focus position, adjusting the focus position by a fixed proportion of the subject's speed of motion toward or away from the camera. With this feature added, acquiring focus is comparatively simple even in the most challenging situations.
As with all features of the system, the user can add as much or as little automation and calibration as desired. For example, a very aggressive setting will produce tight focus even on very fast-moving subjects, while a less aggressive setting will create a more natural-feeling delay, which may be better suited to some creative goals.
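A minimal sketch of the predictive compensation described above, extrapolating the node's radial speed over the system latency, with an "aggressiveness" factor standing in for the user-adjustable setting. The linear model and all names are assumptions; a full Kalman filter would be more robust:

```python
def predicted_distance(d_now, d_prev, dt, latency, aggressiveness=1.0):
    """Linear prediction of the subject's focus distance.
    d_now, d_prev: last two measured distances (m), dt seconds apart.
    latency: total system delay to compensate (s).
    aggressiveness in [0, 1]: 0 = no compensation (natural lag),
    1 = full extrapolation."""
    radial_speed = (d_now - d_prev) / dt  # m/s, toward/away from camera
    return d_now + aggressiveness * radial_speed * latency
```

A subject closing at 10 m/s with 10 ms of latency would have its focus target led by about 10 cm.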
Data logging:
As described above, the position and orientation data in the system can be recorded in real time (i.e., stored in memory 132, see Fig. 1A) for later use in post-production workflows.
Enhanced camera control:
Using position and orientation data, the operation of the camera and the movement of dollies and/or camera crane jib arms or camera lift trucks can be fully automated. Camera operators and cinematographers, however, wish to retain full control over the nuances of the final composition. One feature of this system is to fully automate the complex work of camera control while allowing the operator to adjust composition simply by moving a finger on a video playback screen with touch-screen capability. For example, the automated system may be keeping a performer centred in frame when the operator wants the performer placed on the left of frame. By simply dragging a finger leftward from any point on the video image, the system will compensate and shift the performer's position in frame to the desired composition. In this way, composing a fast-moving subject becomes as simple as composing a static one. The same adjustment could be made with a joystick, as currently used for standard remote camera operation, which would also be a great improvement on current technique; but the touch-screen drag feature is more intuitive and requires no training.
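A screen-space sketch of the drag-to-reframe idea, in which the drag delta shifts the automation's framing target; the mapping from the new target to pan/tilt commands is hardware-specific and omitted, and the names are illustrative:

```python
def reframe_target(current_target_px, drag_start_px, drag_end_px):
    """Shift the automation's framing target (in screen pixels) by the
    same delta as the operator's drag gesture on the video image."""
    dx = drag_end_px[0] - drag_start_px[0]
    dy = drag_end_px[1] - drag_start_px[1]
    return (current_target_px[0] + dx, current_target_px[1] + dy)
```

Dragging left by 200 px, for instance, moves a centred target 200 px toward the left of frame, and the automation then holds the performer there.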
Infrared LEDs:
The system described above uses an AC magnetic motion-capture system. An equally viable alternative, applicable to larger studio configurations, is to use an infrared LED motion-tracking system to capture the same data. Although a line of sight is required from the infrared LEDs to the sensor cameras, no line of sight is needed between the film camera and the subject. Small infrared LEDs can be hidden in clothing, hair and other objects, invisible to the camera. "Smart fabrics" with infrared patterns sewn into them can also be created, providing the same data.
Differential global (and local) positioning systems:
Differential GPS provides nearly all of the relative position data needed to run this system. Augmenting GPS with a faster processing time, "tethering", and additional sensing capability to provide orientation data would make the system fully functional at almost any outdoor location in the world. Indoor studio applications could be served by developing and using a "local positioning system" operating on the same principles as differential GPS at a much smaller scale; and because the "satellites" can be static, much greater accuracy could also be achieved.
Lighting and other equipment control:
Once nodes are defined, the data can be made available to any number of auxiliary control systems that need to precisely aim at, follow or lock onto targets, and to make other qualitative adjustments such as the width of a light beam.
Sports training:
Adapting this system to sports training is relatively straightforward. For example, with a tennis-ball machine tethered to the software interface, which knows the athlete's exact position, the machine can be programmed to always play to the athlete's weakness (say, the backhand), and/or to create a more challenging virtual opponent with the ability to serve at any speed or angle.
Applications for low-light environments and the visually impaired:
Another application of the system is for low-light conditions, or for people with impaired vision. For example, an environment can be mapped as nodes, and a visually impaired person can receive various kinds of feedback about their own position and orientation and about the positions and orientations of objects and people in a room. Another example is extremely low-light conditions, such as total darkness, where no one can see his or her surroundings.
Referring now to Figs. 7 to 25, the components of the graphical user interface (GUI) 64 will be described. The GUI 64 is displayed by the user interface device 42 of the device 40, to allow the user to operate the system 10 (see Figs. 1, 2A and 2B).
Fig. 7 shows the home screen 66 of the GUI 64.
Fig. 8 shows the node creation/modification window 68.
Fig. 9 shows a portion of the home screen 66 of Fig. 7, namely the node array 70, in which the user has created various nodes 72.
Fig. 10 shows a portion of the node array 70 of Fig. 9, and more specifically an example of a node 72.
Fig. 11 shows another portion of the node array 70 of Fig. 9, and more specifically a highlighted node 72, indicating that it has been selected by the user touching it. A node can indicate various information to the user (for example whether it is associated with a sensor, whether the sensor is online, etc.).
Fig. 12 shows a portion of the home screen 66 of Fig. 7, namely the sequencer 74, into which the user has recorded various nodes in a specified order.
Fig. 13 shows another portion of the home screen 66 of Fig. 7, namely a corner dial control interface 76. In this embodiment, the dial is used for fine adjustment of the lens's focus distance.
Fig. 14 shows a further portion of the home screen 66 of Fig. 7, namely another corner dial control interface 78. In this embodiment, the dial is used to control the speed at which the lens racks focus from one node to another.
Fig. 15 shows the window 80 of the GUI 64 for defining cameras.
Fig. 16 shows the window 82 of the GUI 64 for calibrating lenses and selecting which lens is on the camera.
Fig. 17 shows the window 84 of the GUI 64 for selecting the settings of the motion-tracking system.
Fig. 18 shows the window 86 of the GUI 64 for saving the current state of the application, including the node array 70 and the sequencer 74, in memory.
Fig. 19 shows a portion of the GUI window 64 including the corner controller 88, which allows the user to adjust the amount of delay/lag compensation the system applies to the node data.
Fig. 20 shows an alternative control window 90 of the GUI 64 ("full-function geometric linear"), which provides an interactive graphical representation of the sequencer function. The user can rack focus (or make other automatic adjustments) simply by sliding a finger from one point to the next, each point representing a node. The speed at which the user moves the finger from one point to another controls the speed of the focus (or other) adjustment.
Fig. 21 shows an alternative control window 92 of the GUI 64 ("full-function geometric linear"), which provides an interactive graphical representation of the sequencer function. The user can determine the exact number and position of the points on screen (each point representing a node), and can then rack focus (or make other automatic adjustments) simply by sliding a finger from one node to the next. The speed at which the user moves the finger from one point to another controls the speed of the focus adjustment.
Fig. 22 shows an alternative control window 94 of the GUI 64 ("full-function geometric 6-node"), which allows interactive adjustment between any of six points, each point representing a node. The advantage of this configuration is that no predetermined order is needed. The speed at which the user moves the finger from one point to another controls the speed of the focus adjustment.
Fig. 23 shows an alternative control window 96 of the GUI 64 ("full-function geometric 5-node"), which allows interactive adjustment between any of five points, each point representing a node. The advantage of this configuration is that no predetermined order is needed. The speed at which the user moves the finger from one point to another controls the speed of the focus adjustment.
Fig. 24 shows a detail 98 ("corner geometric 4-node") of the corner controller 88 of Fig. 19 in the multi-function main control window of the GUI 64. This function shows how, when four nodes are in use, it can serve as an easily managed graphical representation. It allows interactive adjustment between the four points. The advantages of this configuration are that no predetermined order is needed, and that it can easily be operated with the right (or left) thumb from the main GUI 64 window. The speed at which the user moves the finger from one point to another controls the speed of the focus adjustment.
Fig. 25 shows a detail 100 ("corner geometric 3-node") of the corner controller 88 in the multi-function main control window of the GUI 64. This function shows how, when three nodes are in use, it can serve as an easily managed graphical representation. It allows interactive adjustment between the three points. The advantages of this configuration are that no predetermined order is needed, and that it can easily be operated with the right (or left) thumb from the main GUI 64 window. The speed at which the user moves the finger from one point to another controls the speed of the focus adjustment.
The list below provides additional features, components, uses, etc. according to embodiments of the invention:
- The system's data streams and features facilitate use in post-production. All data and video feeds can be stored and replayed immediately (e.g. for each 'take' on a film set) and/or stored for post-production (e.g. for use in CGI). This includes camera motion/orientation, node motion/orientation, and equipment control.
- The system's data streams and features facilitate use in virtual- and augmented-reality environments. All data and video feeds can be transmitted, stored and replayed immediately.
- The system's data streams and features facilitate interoperability between various pieces of hardware. For example, iris and lighting adjustments can be linked to each other and programmed in advance so that as the iris is adjusted to change the depth of field, the lights automatically and simultaneously dim or brighten; the audience therefore perceives the change in depth of field without perceiving the change in light. This interoperability extends to all equipment, without restriction.
- The system according to embodiments facilitates interoperability of multiple operator interface devices (e.g. iPads, iPhones, iPod Touches) running the application and controlling all equipment types. With this interoperability, each interface device can send data to and receive data from the others. For example, if an operator touches a node to focus his or her camera on a subject, that focus decision can be indicated immediately on the device of another focus puller controlling another camera, and also on the devices of various other crew members, including the director and producer.
- The system according to embodiments accommodates extremely flexible multi-camera operation. In the focusing example, one iPad can control several cameras, and several iPads can control several cameras simultaneously. One iPad can control multiple cameras simultaneously with a touch of a node, or cameras can be selected for individual control. A second copy of the node array can also temporarily replace the sequencer graphics alongside the permanent node array, for controlling one or more secondary cameras simultaneously. The video-feed portion of the application can be made switchable to a split screen (e.g. a two-way split for 2 cameras, or a four-way split for 4 cameras) so that all focus activity can be monitored.
- Advanced hardware and software design (e.g. interrupts, multi-core and multi-threaded software, etc.) is dedicated to minimizing system latency to the millisecond level.
- Because of the system's low latency and responsiveness, a function can allow the operator to deliberately slow the responsiveness of the autofocus so that it does not look too "robotic".
- Mechanical input devices (such as a digital follow-focus dial attached to an iPad) can be linked to any element of the software's graphical user interface (such as the sequencer).
- The system can benefit from a 'malleable' touch screen in which electric charge creates the sensation of textures, grooves, etc. on the screen surface. For example, the lines and nodes of the graphic in the 'geometric slider' function could become grooves for improved operability, reducing the operator's dependence on looking at the touch screen.
- Built-in recording and playback of the video feed display is extremely useful for the focus puller, the director of photography (DP), the director, etc. For example, the focus puller can assess the quality of the focus work in the last 'take', the last 'shot', or the whole day.
- Touching a region of the video feed can select a node for focusing and/or control other equipment functions, such as remote-head aiming, lighting, etc.
- Sensors and transmitters can be placed inside free-moving objects. For example, a sensor and transmitter can be embedded in a custom basketball, in a way that does not affect the ball's weight or centre of gravity, so that the ball can be kept in focus during a basketball game.
- Alongside the 'scene save' function that saves the state of the application, a node manager can allow the operator to save groups of similar nodes (for example, all the parts of a car can be defined as nodes and reloaded at any future time, to reuse the same car or to facilitate node creation for a new car).
- Equipment control events can be triggered (by hardware and/or software) based on a node's coordinate position.
- Many 'smart' uses of node data are possible. For example, an indicator can warn the operator when a node approaches or enters the camera's field of view (the frame). In this example, the node can be pre-programmed to come into focus automatically as it enters frame.
- The motion-tracking data stream can be filtered using many mathematical methods. For example, the noise in a data stream can be quantified to determine when the data become suspect or unusable; these data can feed back into the 'manual override and automatic switching' software function. Many filters can also be applied to the data stream to control damping levels, etc.
- When the node sequencer is 'neutral', the 2-node (line), 3-node (triangle) or 4-node (square) geometric nodes are all shown in green. Then, when the sequencer is set to 'forward' or 'reverse', the next node will be outside the group of 2, 3 or 4, and the next logical node in the sequence becomes the only green node.
- A software function can allow the operator to quickly correct small errors in a node's offset by observing the node from the camera and manipulating the focus fine-adjustment function until the node is precisely in focus. At that point, the operator can trigger the system to automatically recalculate the node's offset (calculated via quaternions).
- Pre-recorded motion-tracking data (e.g. earthquake motion) can be fed back into the system to move the camera and equipment so as to simulate the pre-recorded motion. This technique can improve the audience's 'natural feel' (e.g. earthquake motion, a vehicle on rough terrain, etc.).
- Specific (and difficult) predefined equipment moves can be automated and/or facilitated (e.g. a Hitchcock zoom with a hand-held camera, a camera rotation synchronized with a trapeze artist, etc.).
- Effects tied to musical content are possible, including feedback loops (e.g. focusing and defocusing in time with the beat of a song, or camera positioning/aiming tied to the beat, including in live performance).
- The entire system can be 'scriptable', so that any user interaction with the software can be recorded and automated.
- Various accessories are provided for mounting sensors on subjects. For example, a sensor can be placed in a belt worn by an actor, or attached by various clip-on mounts for easy placement.
- The source setup function can include a 3D modular source-building function, for setups using modular rods to connect source-system accessories. With this function, the operator can quickly construct a 3D representation of the modular setup they have built by hand. Because the rod lengths and source angles are predefined by the physical design of the modular source-system accessories, the software can then immediately calculate the positions and orientations of the sources.
- With the modular source system, the connecting rods can be removed after setup without moving the sources. This allows a fast, untethered source setup with no need to measure source positions or orientations, as these are calculated in the iPad application's 3D modular source-building function.
- Along with servo-motor control of lens rings, the internal electronics of some lenses can be accessed to control focus, iris and zoom directly, removing the need for servo motors.
- The system software allows comprehensive control over the configuration of the motion-tracking system.
- One accessory is a sensor-calibrated 'body cap' tool that mounts on the camera's lens mount for precise measurement. This allows a very accurate measurement of the centre of the focal plane, which makes the camera data a "node"; this centre is important for visual-effects work.
Embodiments of the invention are advantageous in that the use of real-time streams of three-dimensional position and orientation data greatly facilitates and extends the adjustable lens, composition, camera-positioning, lighting and sound functions available to film-makers and to creators of animation and/or still-image content.
According to embodiments of the invention, the use of nodes in the context of film control presents many advantages, including:
1) The node system allows multiple moving nodes to be predefined (nearly all other camera/focus systems cannot; the Pictorvision Eclipse uses GPS for coarse applications: http://www.pictorvision.com/aerial-products/eclipse/).
2) The node system allows true automatic tracking of multiple moving nodes (possibly no other camera/focus system can; some attempts, such as the Pictorvision Eclipse, achieve tracking by having a person do it, and then with perhaps one moving node; one example of a "true auto-tracker" for lighting may be: http:/www.tfwm.com/news-0310precision).
3) The node system provides three-dimensional position data (as opposed to distance alone, unlike nearly all other systems).
4) The node features used, position and orientation, allow a node to be defined on a subject/object rather than in a general 'area' (unlike possibly all other camera/focus systems; without this, other systems cannot apply an offset anywhere on an object to define a node, such as focusing on an eye).
5) Position and orientation allow fine control over the tracked subject/object, for example switching from an actor's right eye to their left eye when their head is at a certain angle to the camera (no other system achieves this).
6) The node system provides high accuracy (less than 1 cm in many cases), unlike possibly all other automatic tracking systems (the orientation data and offsets provide an elevated level of control/focus).
7) The node system also provides a high update rate (120 Hz), unlike possibly all other automatic tracking systems (GPS systems and face detection, for example, tend not to offer this).
8) The node system also provides low latency (10 ms). This level of delay will not impede 'filmic' control in most situations (again, many systems lack this).
9) The node system provides predictive/corrective functions, reducing the delay considerably.
10) The node system has no 'line of sight' requirement: nodes use sensors placed on the performer/object, so no laser or sound needs to bounce off the performer. Face recognition obviously also requires line of sight. An additional benefit of the sensors here is continuous node data. For example, if an actor leaps out from behind a bush, he or she is in focus 'instantly', as opposed to a vision system that needs to react to the actor's sudden appearance.
11) The node system continues to operate in a moving environment. For example, if a source is mounted on a hand-held camera system (or used with the source-boom accessory), the system continues to operate in the vicinity of the camera operator wherever he or she goes. Likewise, the system works in a moving vehicle, for example on a moving train.
12) In addition, the node system is a portable system.
The embodiments described above are to be considered in all respects as illustrative rather than restrictive, and the application is intended to cover any modifications or variations that would be apparent to a person skilled in the art. Of course, as is apparent to the skilled person, numerous other modifications can be made to the embodiments described above without departing from the scope of the invention.

Claims (39)

1. A method for controlling a setting of equipment related to image capture, comprising:
a) capturing three-dimensional position data and orientation data at a sensing device;
b) determining, by a processor, from the captured position data and orientation data, positional information of a region of interest processed by said processor; and
c) outputting, via an output port of said processor, a control signal directed to said equipment, so as to control the setting of said equipment in real time based on the positional information of said region of interest.
2. The method according to claim 1, further comprising:
d) controlling, by a controller, the setting of said equipment with said control signal.
3. The method according to claim 1, further comprising:
- storing the position data and orientation data in a memory.
4. The method according to any one of claims 1 to 3, wherein said setting comprises at least one of: a focus setting of a camera, a zoom setting of the camera, an aperture setting of the camera, an interocular lens angle setting of the camera, a pan setting of the camera, a tilt setting of the camera, a roll setting of the camera, a position setting of the camera, a lighting device control setting and a sound device setting.
5. The method according to any one of claims 1 to 4, wherein said capturing comprises generating coordinates representing a physical location and properties representing an orientation of said sensing device.
6. The method according to any one of claims 1 to 5, wherein the region of interest of said determining step (b) comprises one or more nodes, and said determining step (b) comprises, for each node:
i) determining positional information of said node; and
ii) calculating a distance between said equipment and said node,
and wherein the control signal of said outputting step (c) is generated based on the distance calculated at step (b)(ii).
7. The method according to claim 6, wherein the positional information of each node in determining step (b)(i) comprises Euclidean space coordinates (x_1, y_1, z_1) of the node, and wherein said calculating step (b)(ii) comprises:
- receiving positional information of said equipment in Euclidean space coordinates (x_2, y_2, z_2); and
- calculating the distance between the positional information of said equipment and the positional information of said node from the Pythagorean theorem below:
d = √((x_2 - x_1)² + (y_2 - y_1)² + (z_2 - z_1)²)
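As an illustrative sketch (not part of the claim language), the three-dimensional distance of claim 7 can be computed in a few lines of Python; the function and argument names are hypothetical:

```python
import math

def euclidean_distance(node, equipment):
    """Distance between a node (x1, y1, z1) and the equipment (x2, y2, z2),
    per the Pythagorean theorem in three dimensions."""
    (x1, y1, z1), (x2, y2, z2) = node, equipment
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2)

# A node 3 m away along x and 4 m along y is 5 m from the camera.
print(euclidean_distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))  # → 5.0
```

In the claimed system this distance would drive, for example, the focus setting of the camera.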
8. The method according to claim 6 or 7, wherein said calculating step (b)(i) comprises applying a tip offset to the position data and orientation data from the sensing device of said capturing step (a), so as to calculate the positional information of said node.
9. The method according to claim 8, wherein said applying a tip offset comprises:
- obtaining, in an axis system defined by said sensing device, relative coordinates of said node with respect to the position data and orientation data of said sensing device;
and wherein said determining step (b)(i) further comprises evaluating an absolute position of said node with respect to said equipment.
10. The method according to claim 9, wherein the absolute position of said node is evaluated as follows:

M =
[ CE          -CF          -D  ]
[ AF - BDE    AE + BDF    -BC  ]
[ BF + ADE    BE - ADF     AC  ]

wherein:
- the rotation matrix M = X.Y.Z, where M is the final rotation matrix and X, Y, Z are the individual rotation matrices;
- A, B are respectively the cosine and sine of the X-axis rotation, i.e. roll;
- C, D are respectively the cosine and sine of the Y-axis rotation, i.e. tilt;
- E, F are respectively the cosine and sine of the Z-axis rotation, i.e. pan;

X_f = X_s + X_t*M(1,1) + Y_t*M(2,1) + Z_t*M(3,1)
Y_f = Y_s + X_t*M(1,2) + Y_t*M(2,2) + Z_t*M(3,2)
Z_f = Z_s + X_t*M(1,3) + Y_t*M(2,3) + Z_t*M(3,3)

wherein:
- X_f, Y_f, Z_f are the absolute (or "final") coordinates of said node;
- X_s, Y_s, Z_s are the coordinates of the center of said sensing device;
- X_t, Y_t, Z_t are the coordinates corresponding to the tip offset relative to the center of said sensing device;
- M(row, column) denotes the element of said rotation matrix at the given row and column.
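A minimal numeric sketch of the tip-offset evaluation of claim 10, assuming the roll/tilt/pan factorization and the sign conventions given above (the axis conventions are an assumption, and all names are illustrative only):

```python
import math

def rotation_matrix(roll, tilt, pan):
    """M = X.Y.Z built from the sines and cosines A..F defined in claim 10."""
    A, B = math.cos(roll), math.sin(roll)   # X-axis rotation (roll)
    C, D = math.cos(tilt), math.sin(tilt)   # Y-axis rotation (tilt)
    E, F = math.cos(pan), math.sin(pan)     # Z-axis rotation (pan)
    return [
        [C * E,             -C * F,             -D],
        [A * F - B * D * E,  A * E + B * D * F, -B * C],
        [B * F + A * D * E,  B * E - A * D * F,  A * C],
    ]

def node_position(sensor_center, tip_offset, M):
    """X_f = X_s + X_t*M(1,1) + Y_t*M(2,1) + Z_t*M(3,1), and likewise for Y_f, Z_f."""
    Xs, Ys, Zs = sensor_center
    Xt, Yt, Zt = tip_offset
    return (
        Xs + Xt * M[0][0] + Yt * M[1][0] + Zt * M[2][0],
        Ys + Xt * M[0][1] + Yt * M[1][1] + Zt * M[2][1],
        Zs + Xt * M[0][2] + Yt * M[1][2] + Zt * M[2][2],
    )

# With no rotation, the node is simply the sensor center plus the tip offset.
M = rotation_matrix(0.0, 0.0, 0.0)
print(node_position((1.0, 2.0, 3.0), (0.5, 0.0, 0.0), M))  # → (1.5, 2.0, 3.0)
```

Because M is a product of pure rotations, its rows stay orthonormal for any roll/tilt/pan, which is a convenient sanity check on the reconstructed matrix.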
11. The method according to claim 8, wherein said applying a tip offset comprises obtaining a tip offset precalculated by measuring the position of a node sensing device located at the position of the node with respect to the position and orientation of a base sensing device located at the position of said sensing device.
12. The method according to claim 11, wherein the initial orientation is defined as a quaternion Q_1 having X, Y, Z and W components, the orientation data of said capturing step is defined as Q_2, and wherein the positional information of said node is determined according to:

P_n + (q_i⁻¹ q_n) P_i (q_i⁻¹ q_n)⁻¹

wherein:
- P_i is the offset from the sensor when at orientation q_i;
- P_n is the current position of said sensor;
- q_i is the orientation of the sensor at the time P_i was calculated;
- q_n is the current orientation of said sensor; and
- q_i and q_n are unit quaternions.
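The quaternion update of claim 12 can be sketched as follows. This is illustrative only: the (w, x, y, z) component ordering and the Hamilton-product rotation convention are assumptions, and for unit quaternions the inverse used in the formula equals the conjugate:

```python
def quat_mul(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    )

def conj(q):
    """Conjugate, which equals the inverse for a unit quaternion."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def node_from_offset(P_n, P_i, q_i, q_n):
    """P_n + (q_i^-1 q_n) P_i (q_i^-1 q_n)^-1 for unit quaternions q_i, q_n."""
    q_rel = quat_mul(conj(q_i), q_n)   # change in orientation since P_i was measured
    p = (0.0,) + tuple(P_i)            # the offset as a pure quaternion
    rotated = quat_mul(quat_mul(q_rel, p), conj(q_rel))
    return tuple(pn + r for pn, r in zip(P_n, rotated[1:]))
```

With q_i equal to q_n (no orientation change), the node position reduces to P_n plus the stored offset P_i.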
13. The method according to any one of claims 1 to 12, wherein said region of interest and said sensing device are located at different places.
14. A system for controlling a setting of an equipment related to image capture, comprising:
- a sensing device configured to capture position data and orientation data;
- a processor in communication with said sensing device, the processor being configured to determine, from said position data and orientation data, positional information of a region of interest to be processed by said equipment; and
- an output port, integrated with said processor, configured to output a control signal directed to said equipment, so as to control the setting of said equipment in real time based on the positional information of said region of interest.
15. The system according to claim 14, further comprising:
- a controller in communication with said output port and configured to control the setting of said equipment with said control signal.
16. The system according to claim 14 or 15, further comprising:
- a memory for storing the position data and orientation data.
17. The system according to any one of claims 14 to 16, further comprising said equipment, wherein said setting comprises at least one of: a focus setting of a camera, a zoom setting of the camera, an aperture setting of the camera, an interocular lens angle setting of the camera, a pan setting of the camera, a tilt setting of the camera, a roll setting of the camera, a position setting of the camera, a lighting device control setting and a sound device setting.
18. The system according to any one of claims 14 to 17, wherein said sensing device is a visibility-independent sensing device.
19. The system according to any one of claims 14 to 18, wherein said sensing device comprises a transmitter, and said system further comprises a receiver in communication between said transmitter and said processor.
20. The system according to any one of claims 14 to 19, further comprising a data processing unit embedding said processor, and a user device in communication with said data processing unit, said user device comprising a user interface.
21. The system according to claim 20, wherein said user device communicates with said data processing unit via a wireless communication network.
22. A computer-readable storage having stored thereon data and instructions for execution by a computer, said data and instructions comprising:
- a code module for receiving position data and orientation data of a sensing device;
- a code module for determining, from said position data and orientation data, positional information of a region of interest to be processed by an equipment; and
- a code module for outputting a control signal directed to said equipment, so as to control a setting of said equipment in real time based on the positional information of said region of interest.
23. A method for controlling a setting of an equipment related to image capture, comprising:
a) storing in a memory one or more identifiers, each identifier being associated with a predetermined region of interest to be processed by said equipment, and storing corresponding positional information;
b) receiving, at a processor, a selection of said one or more identifiers; and
c) outputting, via an output port of said processor, a control signal directed to said equipment, so as to control the setting of said equipment in real time based on the positional information of the selected one or more predetermined regions of interest.
24. The method according to claim 23, further comprising:
d) controlling, by a controller, the setting of said equipment with said control signal.
25. The method according to claim 23 or 24, wherein the positional information stored in step (a) is obtained by:
- capturing position data and orientation data at a sensing device; and
- determining, by a processor, from the position and orientation data of said sensing device, the positional information of the region of interest to be processed by said equipment.
26. The method according to any one of claims 23 to 25, wherein each of said predetermined regions of interest corresponds to a node and the corresponding positional information comprises three-dimensional coordinates relative to the equipment.
27. The method according to claim 26, wherein receiving step (b) comprises:
- receiving a selection of a predefined sequence of nodes;
said method further comprising repeating said outputting step (c) for each node selected according to the sequence, so as to sequentially and automatically control the setting of said equipment for a plurality of nodes.
28. The method according to claim 27, wherein the repeating of step (c) is performed based on a predetermined plan stored in a memory.
29. The method according to claim 27, wherein the repeating of step (c) is triggered upon receiving a user input command via an input port.
30. The method according to claim 26, further comprising receiving, via an input port, a user input command corresponding to a displacement between two adjacent nodes, wherein the selection of said receiving step (b) comprises the identifiers of said adjacent nodes, said method further comprising:
- associating intermediate positions between said adjacent nodes according to said displacement; and
wherein said outputting step (c) is repeated for each of said intermediate positions.
31. The method according to claim 30, wherein said user input is entered via a sliding motion on a touch screen.
32. The method according to any one of claims 23 to 31, further comprising:
- determining, from the selection of said receiving step (b), one or more regions of interest meeting a specific criterion;
and wherein the control signal of step (c) is generated according to the one or more regions of interest meeting said specific criterion.
33. A system for controlling a setting of an equipment related to image capture, comprising:
- a memory configured to store one or more identifiers of predetermined regions of interest to be processed by said equipment, and corresponding positional information;
- a processor in communication with said memory and configured to receive a selection of said one or more identifiers; and
- an output port, integrated with said processor, configured to output a control signal directed to said equipment, so as to control the setting of said equipment in real time based on the positional information of the selected one or more predetermined regions of interest.
34. A computer-readable storage having stored thereon one or more identifiers of predetermined regions of interest to be processed by an equipment and corresponding positional information, said computer-readable storage further comprising data and instructions for execution by a processor, said data and instructions comprising:
- a code module for receiving a selection of said one or more identifiers; and
- a code module for outputting a control signal directed to said equipment, so as to control a setting of said equipment in real time based on the positional information of the selected one or more predetermined regions of interest.
35. A method for controlling a setting of an equipment related to image capture, comprising:
a) capturing position data at a visibility-independent sensing device;
b) determining, by a processor, from said position data, positional information of a region of interest to be processed by said equipment; and
c) outputting, via an output port of said processor, a control signal directed to said equipment, so as to control the setting of said equipment in real time based on the positional information of said region of interest.
36. A system for controlling a setting of an equipment related to image capture, comprising:
- a visibility-independent sensing device configured to capture position data;
- a processor in communication with said sensing device, the processor being configured to determine, based on said position data, positional information of a region of interest to be processed by said equipment; and
- an output port, integrated with said processor, configured to output a control signal directed to said equipment, so as to control the setting of said equipment in real time based on the positional information of said region of interest.
37. The system according to claim 36, further comprising:
- a controller in communication with said output port and configured to control the setting of said equipment with said control signal.
38. The system according to claim 36 or 37, further comprising said equipment, wherein said setting comprises at least one of: a focus setting of a camera, a zoom setting of the camera, an aperture setting of the camera, an interocular lens angle setting of the camera, a pan setting of the camera, a tilt setting of the camera, a roll setting of the camera, a position setting of the camera, a lighting device control setting and a sound device setting.
39. A computer-readable storage having stored thereon data and instructions for execution by a computer having an input port for receiving position data from a visibility-independent sensing device, said data and instructions comprising:
- a code module for determining, based on said position data, positional information of a region of interest to be processed by an equipment; and
- a code module for outputting a control signal directed to said equipment, so as to control a setting of said equipment in real time based on the positional information of said region of interest.
CN201480032344.9A 2013-04-05 2014-04-04 System and method for controlling an equipment related to image capture Active CN105264436B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201361808987P 2013-04-05 2013-04-05
US61/808,987 2013-04-05
PCT/CA2014/050346 WO2014161092A1 (en) 2013-04-05 2014-04-04 System and method for controlling an equipment related to image capture

Publications (2)

Publication Number Publication Date
CN105264436A true CN105264436A (en) 2016-01-20
CN105264436B CN105264436B (en) 2019-03-08

Family

ID=51657363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201480032344.9A Active CN105264436B (en) System and method for controlling an equipment related to image capture

Country Status (7)

Country Link
US (2) US9912857B2 (en)
EP (1) EP2987026B1 (en)
JP (1) JP6551392B2 (en)
CN (1) CN105264436B (en)
CA (1) CA2908719C (en)
HK (1) HK1220512A1 (en)
WO (1) WO2014161092A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105763797A (en) * 2016-02-29 2016-07-13 广东欧珀移动通信有限公司 Control method, control device and electronic device
CN108696697A (en) * 2018-07-17 2018-10-23 腾讯科技(深圳)有限公司 A kind of imaging apparatus control method and device
CN109639971A (en) * 2018-12-17 2019-04-16 维沃移动通信有限公司 A kind of shooting focal length method of adjustment and terminal device
CN113422905A (en) * 2021-06-22 2021-09-21 浙江博采传媒有限公司 Automatic control method and system for movement locus of focus follower
WO2022147703A1 (en) * 2021-01-07 2022-07-14 深圳市大疆创新科技有限公司 Focus following method and apparatus, and photographic device and computer-readable storage medium
CN115345901A (en) * 2022-10-18 2022-11-15 成都唐米科技有限公司 Animal motion behavior prediction method and system and camera system

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150027934A (en) * 2013-09-04 2015-03-13 삼성전자주식회사 Apparatas and method for generating a file of receiving a shoot image of multi angle in an electronic device
US9769378B2 (en) * 2014-09-08 2017-09-19 Panasonic Intellectual Property Management Co., Ltd. Imaging apparatus that changes from highlighting a first focus frame to highlighting a second focus frame when a focus lens moves from a first focus position to a second focus position
JP2016086007A (en) * 2014-10-23 2016-05-19 パナソニックIpマネジメント株式会社 Production management method of board in component packaging system
CN104730677B (en) * 2014-12-17 2017-11-28 湖北久之洋红外***股份有限公司 Uncooled infrared camera continuous vari-focus and fast automatic focusing circuit and method
WO2016134031A1 (en) * 2015-02-17 2016-08-25 Alpinereplay, Inc Systems and methods to control camera operations
TWI564647B (en) * 2015-03-27 2017-01-01 國立臺北科技大學 Method of image conversion operation for panorama dynamic ip camera
WO2017099882A1 (en) * 2015-12-10 2017-06-15 Intel Corporation Accelerated touch processing at computing devices
CN105912749B (en) * 2016-03-31 2019-06-04 北京润科通用技术有限公司 Emulation mode and device
US10928245B2 (en) * 2016-09-15 2021-02-23 Siteco Gmbh Light measurement using an autonomous vehicle
US11303689B2 (en) * 2017-06-06 2022-04-12 Nokia Technologies Oy Method and apparatus for updating streamed content
US10666929B2 (en) * 2017-07-06 2020-05-26 Matterport, Inc. Hardware system for inverse graphics capture
JP6977931B2 (en) * 2017-12-28 2021-12-08 任天堂株式会社 Game programs, game devices, game systems, and game processing methods
CN108184067B (en) * 2018-01-18 2019-05-24 桂林智神信息技术有限公司 A kind of working method with burnt system
CN109872271A (en) * 2019-01-28 2019-06-11 努比亚技术有限公司 A kind of image processing method, terminal and computer readable storage medium
WO2021168809A1 (en) * 2020-02-28 2021-09-02 深圳市大疆创新科技有限公司 Tracking method, movable platform, apparatus, and storage medium
US11431895B2 (en) 2020-06-24 2022-08-30 International Business Machines Corporation Photography guidance based on crowdsourced photographs
CN116962885B (en) * 2023-09-20 2023-11-28 成都实时技术股份有限公司 Multipath video acquisition, synthesis and processing system based on embedded computer

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020057217A1 (en) * 2000-06-23 2002-05-16 Milnes Kenneth A. GPS based tracking system
US20020080257A1 (en) * 2000-09-27 2002-06-27 Benjamin Blank Focus control system and process
US20050007553A1 (en) * 2001-03-23 2005-01-13 Panavision Inc. Automatic pan and tilt compensation system for a camera support structure
CN102598658A (en) * 2009-08-31 2012-07-18 扫痕光学股份有限公司 A method and apparatus for relative control of multiple cameras
CN102609954A (en) * 2010-12-17 2012-07-25 微软公司 Validation analysis of human target
EP2479993A2 (en) * 2006-12-04 2012-07-25 Lynx System Developers, Inc. Autonomous systems and methods for still and moving picture production

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5930740A (en) 1997-04-04 1999-07-27 Evans & Sutherland Computer Corporation Camera/lens calibration apparatus and method
JP2001245280A (en) * 2000-02-29 2001-09-07 Canon Inc Camera control system, device, method, and computer- readable storage medium
JP2004112615A (en) * 2002-09-20 2004-04-08 Victor Co Of Japan Ltd Automatic tracking video camera system
US20080312866A1 (en) 2003-09-11 2008-12-18 Katsunori Shimomura Three-dimensional measuring equipment
JP2005266520A (en) * 2004-03-19 2005-09-29 Olympus Corp Imaging apparatus and imaging method
JP4479386B2 (en) * 2004-07-08 2010-06-09 パナソニック株式会社 Imaging device
EP1946203A2 (en) 2005-10-26 2008-07-23 Sony Computer Entertainment America, Inc. System and method for interfacing with a computer program
US8888593B2 (en) 2005-10-26 2014-11-18 Sony Computer Entertainment Inc. Directional input for a video game
JP5111795B2 (en) 2006-06-29 2013-01-09 三菱電機株式会社 Monitoring device
JP4596330B2 (en) * 2006-08-07 2010-12-08 日本ビクター株式会社 Imaging system
JP2008227877A (en) * 2007-03-13 2008-09-25 Hitachi Ltd Video information processor
US8049658B1 (en) * 2007-05-25 2011-11-01 Lockheed Martin Corporation Determination of the three-dimensional location of a target viewed by a camera
JP2010534316A (en) 2007-07-10 2010-11-04 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ System and method for capturing movement of an object
US8803958B2 (en) * 2008-01-04 2014-08-12 3M Innovative Properties Company Global camera path optimization
JP5359754B2 (en) * 2009-10-05 2013-12-04 株式会社Jvcケンウッド Imaging control device and program
US8843857B2 (en) * 2009-11-19 2014-09-23 Microsoft Corporation Distance scalable no touch computing
US8550903B2 (en) * 2010-11-15 2013-10-08 Bally Gaming, Inc. System and method for bonus gaming using a mobile device
EP2618566A1 (en) 2012-01-23 2013-07-24 FilmMe Group Oy Controlling controllable device during performance
US20130222565A1 (en) 2012-02-28 2013-08-29 The Johns Hopkins University System and Method for Sensor Fusion of Single Range Camera Data and Inertial Measurement for Motion Capture
WO2013131036A1 (en) 2012-03-01 2013-09-06 H4 Engineering, Inc. Apparatus and method for automatic video recording
US9874964B2 (en) 2012-06-04 2018-01-23 Sony Interactive Entertainment Inc. Flat joystick controller



Also Published As

Publication number Publication date
EP2987026B1 (en) 2020-03-25
EP2987026A4 (en) 2016-12-14
CA2908719A1 (en) 2014-10-09
HK1220512A1 (en) 2017-05-05
US10306134B2 (en) 2019-05-28
WO2014161092A1 (en) 2014-10-09
EP2987026A1 (en) 2016-02-24
JP6551392B2 (en) 2019-07-31
CN105264436B (en) 2019-03-08
JP2016522598A (en) 2016-07-28
CA2908719C (en) 2021-11-16
US20160050360A1 (en) 2016-02-18
US9912857B2 (en) 2018-03-06
US20180176456A1 (en) 2018-06-21

Similar Documents

Publication Publication Date Title
CN105264436A (en) System and method for controlling equipment related to image capture
US10317775B2 (en) System and techniques for image capture
US11045956B2 (en) Programming of a robotic arm using a motion capture system
US9324179B2 (en) Controlling a virtual camera
KR102110123B1 (en) Automated frame of reference calibration for augmented reality
US9179182B2 (en) Interactive multi-display control systems
US20130215229A1 (en) Real-time compositing of live recording-based and computer graphics-based media streams
US9242379B1 (en) Methods, systems, and computer readable media for producing realistic camera motion for stop motion animation
US20210392462A1 (en) Systems and methods for processing data based on acquired properties of a target
US20160344946A1 (en) Screen System
JP6725736B1 (en) Image specifying system and image specifying method
US20230396874A1 (en) Virtual indicator for capturing a sequence of images
JP7508271B2 (en) IMAGE IDENTIFICATION SYSTEM AND IMAGE IDENTIFICATION METHOD
WO2023195301A1 (en) Display control device, display control method, and display control program
CN117716419A (en) Image display system and image display method
JP2021164135A (en) Video management support system and video management support method
JP2021027544A (en) Control device, control method, and program
KR20200028829A (en) Real-time computer graphics video production system using rig combined virtual camera
WO2017090027A1 (en) A system and method to create three-dimensional models in real-time from stereoscopic video photographs

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1220512

Country of ref document: HK

GR01 Patent grant