GB2516691A - Visualisation method - Google Patents

Visualisation method

Info

Publication number
GB2516691A
GB2516691A (application GB1313604.9A / GB201313604A)
Authority
GB
United Kingdom
Prior art keywords
image
model
size
data
modules
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1313604.9A
Other versions
GB2516691B (en)
GB201313604D0 (en)
Inventor
Michael James Dowman
Nicholas Paul Whitehead
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BiFold Fluidpower Ltd
Original Assignee
BiFold Fluidpower Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BiFold Fluidpower Ltd filed Critical BiFold Fluidpower Ltd
Priority to GB1313604.9A
Publication of GB201313604D0
Publication of GB2516691A
Application granted
Publication of GB2516691B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality

Abstract

A method for visualising an object such as a fluid control component, the method comprising acquiring first image data representing a space in which visualisation is to be provided, then identifying at least one predetermined tracking object represented within the first image data. The predetermined tracking object has associated with it a first size parameter in image space and a second size parameter in real-world space. Data defining a model of the object is read, including a third size parameter for the object in real-world space, and a second image is generated comprising a representation of the object. A composite image based upon the first image data and the second image data is then output to a display device, with the representation of the object within said second image being displayed at a size based upon the second and third size parameters.

Description

VISUALISATION METHOD
The present invention relates to a method for visualising an object.
The use of computers in the design of industrially produced articles is now commonplace. It is well known to use computer software to define the appearance of an article, and software is readily available which allows the visualisation of an object in various ways, typically on a display screen. For example, computer aided design packages enable a designer to generate drawings which define an article and subsequently provide functionality whereby a three-dimensional representation of the defined article can be viewed so as to provide an accurate indication of the article's final appearance.
While the use of computer aided design packages of the type described above provides a convenient way to design industrially produced articles, there is a desire for ever-more effective design tools to be provided.
It is an object of some embodiments of the present invention to provide a visualisation method.
According to a first aspect of the invention, there is provided a method for visualising an object, the method comprising: acquiring first image data representing a space in which visualisation is to be provided; identifying at least one predetermined tracking object represented within the first image data, said predetermined tracking object having an associated first size parameter in image space and an associated second size parameter in real-world space; reading data defining a model of the object, said data comprising data including a third size parameter for the object in real-world space; generating second image data comprising a representation of the object; and outputting to a display device a composite image based upon the first image data and the second image data; wherein said representation of the object within said second image is displayed at a size based upon the second and third size parameters.
In this way the representation of the object within the second image is based upon data indicating the size of the object in real-world space and the size of the tracking object in real-world space. By relating the size of the object in real-world space to that of the tracking object in real-world space, the object can be displayed in the second image, and therefore in the composite image, at a known scale. This can be useful in allowing the size of the object to be appreciated relative to the size of other features of the space in which the visualisation is to be provided.
Generating the second image may comprise determining a size of said representation of said object in image space based upon said first, second and third size parameters.
In some embodiments, determining a size of said representation of said object in image space may comprise determining a fourth size parameter associated with said representation of said object in said second image, said fourth size parameter being such that a first ratio between said fourth size parameter and said first size parameter has a predetermined relationship with a second ratio between said second size parameter and said third size parameter.
The predetermined relationship may be equality, thereby allowing the object to be displayed within the space in which visualisation is to be provided at a 1:1 scale ratio.
Alternatively, the predetermined relationship may be such that said first ratio F is related to said second ratio S by F = xS, where x is a real number such that 0 < x. In some embodiments x ≠ 1. In this way, the value x determines a scale factor at which the object is displayed relative to the space in which visualisation is to be provided.
In some embodiments, the scale factor x may be automatically selected. For example the scale factor x may be selected based upon one or more features of the first image data. The scale factor may be based upon the tracking object. For example a bounding volume may be defined, optionally with reference to the tracking object. The scale factor x may be selected such that the representation of the object in the second image has a predetermined relationship with said bounding volume, e.g. the representation of the object in the second image may be enclosed within the bounding volume.
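By way of illustration only, the following sketch shows one way the image-space size of the object's representation might be derived from the three size parameters and a scale factor x. The function names, the treatment of each "size" as a single largest dimension, and the bounding-extent fitting rule are assumptions made for the example, not features taken from the described method.

```python
# Illustrative only: sizes treated as single dimensions (e.g. mm and image-space units);
# names and the fitting rule are assumptions, not the patent's implementation.

def representation_size(target_size_image, target_size_real,
                        object_size_real, scale_factor=1.0):
    """Fourth size parameter: image-space size of the object's representation.

    With scale_factor == 1.0 the object appears at a 1:1 scale relative to the
    tracking object, and hence to the rest of the imaged space.
    """
    image_units_per_real_unit = target_size_image / target_size_real
    return scale_factor * object_size_real * image_units_per_real_unit


def auto_scale_factor(object_size_real, bounding_extent_real):
    """Pick x so the representation is enclosed within a bounding extent
    defined with reference to the tracking object (never enlarging past 1:1)."""
    return min(1.0, bounding_extent_real / object_size_real)


# Example: a 200 mm tracking image spanning 50 image units, and a 600 mm object.
size_1_to_1 = representation_size(50.0, 200.0, 600.0)        # -> 150.0 image units
x = auto_scale_factor(600.0, bounding_extent_real=450.0)     # -> 0.75
size_fitted = representation_size(50.0, 200.0, 600.0, x)     # -> 112.5 image units
print(size_1_to_1, x, size_fitted)
```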
The object may comprise one or more fluid flow control components such as valves, filters and the like.
The predetermined tracking object may be a predetermined tracking image, for example a two-dimensional tracking image. The tracking image or tracking object is sometimes referred to as a target image or target object.
Each size parameter may represent one or more dimensions of a respective object in a respective space.
The method may further comprise generating said data defining a model of the object.
The method may comprise processing a schematic depiction of the object and generating said data defining the model of the object based upon said schematic depiction.
The object may be a composite product comprising a plurality of modules (e.g. a plurality of fluid flow control components), and generating said data defining the model of the object may comprise obtaining model data for each of the modules and processing model data for each of the modules to generate the model of the object.
The model data for each of the modules may include a size parameter indicating a size of the module in real-world space, and the third size parameter may be generated based upon size parameters of each of the modules for which model data is obtained.
The schematic depiction of the object may include a schematic depiction of at least some of said modules. Model data may be obtained for each of the modules depicted in the schematic depiction of the object, and may optionally be obtained for at least one further module, the further module being selected based upon at least one of the modules depicted in the schematic depiction of the object.
The method may further comprise receiving updated first image data representing a further space in which visualisation is to be provided, combining said updated first image data with said second image data to generate a further composite image and displaying said further composite image. The updated first image data may be generated by a user moving (or panning) a camera across the space in which visualisation is to be provided.
The first image data may be acquired by a camera integral with a computing device having a display device on which the composite image is displayed. The computing device may receive, over a network, the model of the object.
Aspects of the invention can be implemented in any convenient form including by way of suitable methods, systems and apparatus. Aspects of the invention can be implemented by way of a computer. As such computer-implemented methods in which each step is carried out by a suitably programmed processor are provided. Similarly, aspects of the invention provide computer readable media comprising computer executable instructions configured to cause a computer to carry out a method as described above. Such computer readable media include tangible media (e.g. discs) and intangible media (e.g. communications signals).
Where aspects of the invention are implemented by way of a computer, there may be provided a computer apparatus comprising a memory (e.g. a random access memory) storing processor readable instructions and a processor configured to read and execute instructions stored in said memory. The processor readable instructions may comprise instructions configured to cause the processor to carry out a method as set out above.
Embodiments of the invention are now described, by way of example, with reference to the accompanying drawings, in which:
Figure 1 is a flowchart showing processing according to an embodiment of the invention at a high level;
Figure 2 is a flowchart showing the processing of Figure 1 in further detail;
Figure 3 is a flowchart showing part of the flowchart of Figure 2 in further detail;
Figure 4A is a schematic illustration of components used to implement the processing of Figures 1 and 2;
Figure 4B is a schematic illustration of a network of devices used in an embodiment of the invention;
Figure 5 is a schematic representation of components used in the generation of a model of a composite product in accordance with an embodiment described herein;
Figure 6 is a representation of a user interface for generating a schematic of a composite product;
Figure 7 is an image of a 3D model generated using an embodiment described herein;
Figure 8 is a schematic representation of classes used by a program operable to generate a model of a composite product from a schematic in an embodiment described herein;
Figure 9 is a flowchart showing example processing carried out by a processing device of Figure 4B to generate a 3D model;
Figure 10 is a flowchart showing example processing carried out by the processing device of Figure 4B to select a module during the processing of Figure 9;
Figure 11 is a schematic representation of example data representing physical characteristics of a module;
Figure 12 is a flowchart showing example processing to calculate a spacing requirement in the processing of Figure 10;
Figure 13 is a schematic representation of an example module that may form part of a composite product; and
Figure 14 is a schematic illustration showing a computer of Figures 4A and 4B in further detail.
Referring to Figure 1, an overall process for visualisation is provided. In very general terms, at step S1 an image of an object of interest is generated. Typically the image takes the form of image data generated based upon a model of the object of interest.
The model of the object of interest can be generated in any convenient way.
At step S2 scene image data representing a scene in the context of which the generated image is to be displayed is obtained. The scene image data is typically obtained using a camera, which may be a camera integrated within a computing device such as a tablet computer or mobile telephone.
At step S3 the image of the object of interest generated at step S1 is applied to the scene image data to generate a composite image which is then displayed to a user at step S4.
The technique described with reference to Figure 1, whereby image data representing an object of interest generated at step S1 is combined with image data representing a scene obtained at step S2, is an example of a family of techniques known as augmented reality techniques. Such techniques are useful as a method of visualisation, as an object of interest can be viewed in the context of a complete, real-world scene.
Referring now to Figure 2, the process for visualisation is described in further detail.
The process comprises two general parts. A training part A is concerned with the definition of a target image which is subsequently used to determine relative sizing of objects in an image as described below. An image generation part B is concerned with generation of a composite image of the type described above, based upon an identified occurrence of the target image.
In some embodiments, the described processing is carried out using the Vuforia™ platform provided by Qualcomm Technologies, Inc., which provides an augmented reality toolkit comprising an application programming interface (API). Details of the Vuforia platform can be found at www.vuforia.com. While some embodiments are implemented using the Vuforia platform, those skilled in the art will appreciate that other suitable augmented reality products can instead be used.
Considering first the part A of the processing of Figure 2, at step S4 a training image is obtained, the training image being an occurrence of the target image. The target image is a two-dimensional image. The target image can take any suitable form. At step S5, a target object is created based upon the identified target image. Where the Vuforia platform is used, the target object is an instance of an ImageTarget class, a subclass of the Trackable class. The instance of the ImageTarget class is created by the ImageTargetBuilder class, again provided by the Vuforia platform. At the end of step S5, an instance of the ImageTarget class exists which represents a target image of interest.
The ImageTarget class has a setSize method associated therewith. This method allows the size of the target image to be specified in units of image space. The size of the target image is set using the setSize method at step S6. The setting of the size of the target image in this way provides a point of reference against which other scene objects can be sized in subsequent processing, as described below.
The processing of steps S4 to S6 together results in the generation of an object which represents the target image and provides size data for the target image in image space.
Part B of the processing shown in Figure 2, as noted above, is concerned with the identification of an occurrence of the target image within a scene image of interest. This allows the relative size of objects in the scene (in image space) to be determined with reference to the size of the target image in image space as defined at step S6 using the setSize method described above.
Referring to the processing of part B, at step S7 an image of a scene of interest is obtained. The scene of interest is that in which a user wishes to see an object visualised. At step S8 the Vuforia platform processes the obtained scene image to identify an occurrence of the target image represented by an instance of the ImageTarget class as described above. When the target image is identified, an instance of the TargetResult class is created at step S9 to represent the identified occurrence of the target image.
At step S10 an image of the object of interest is generated, and at step S11 the image of the object of interest is combined with the image of the scene (obtained at step S7) to generate a composite image. The composite image is displayed at step S12.
When combining the image of the object of interest with the image of the scene of interest to form the composite image at step S11, it is desired to ensure that the object of interest appears relative to the image of the scene at the correct scale. This is achieved using knowledge that the target image has a size in image space as specified using the setSize method described above, and storing data indicating the size of the target image in real-world space (i.e. the real-world dimensions of the target image).
This provides a scale factor which allows any image of an object of interest which has been created to scale to be scaled based upon the scale factor between the size of the target image in image space and the size of the target image in real-world space. This process is illustrated in Figure 3.
At step S13, scale data indicating a relationship between the size of the target image in real world space and the size of the target image in image space is obtained. At step S14 data indicating a real world size of the object of interest is obtained. The data obtained at step S14 may take the form of size data in the real world, or alternatively may take the form of size data in a virtual space together with data relating size in the virtual space to size in the real world. At step S15 the data obtained at steps S13 and S14 is used to generate a representation of the object of interest based upon its real world size and a scale factor based upon the data obtained at steps S13 and S14.
It will be appreciated that the processing of Figure 3 can be carried out so as to display the object of interest at a 1:1 scale relative to the real world or alternatively to display the object of interest at any other desired scale relative to the real world. In some embodiments the desired scale may be based upon user input.
Figure 4A shows an example implementation of an embodiment of the invention. A computer 1 (which may take any suitable form, being, for example, a desktop computer or a laptop computer) is connected to a network 2. The network 2 is, in some embodiments, a wireless local area network operating in accordance with IEEE standard 802.11 although it will be appreciated that the network 2 can take any suitable form. A tablet computer 3 is also connected to the network 2. The tablet computer 3 includes an integral camera 4.
In use, the camera 4 of the tablet computer 3 acquires an image of a scene 5 which includes an instance of the target image 6. The tablet computer is configured to process the image of the scene to identify the instance of the target image.
Alternatively, the tablet computer 3 may transmit the acquired image to the computer 1 over the network 2 (or to any other suitable computer) for processing to identify the instance of the target image.
The computer 1 provides an object image to the tablet computer 3 over the network 2 as denoted by a directional arrow 7. The object image is provided to the tablet computer 3 together with size data indicating a size of the object of interest in real-world space.
Having identified the instance of the target image 6 within the scene 5, the tablet computer 3 (or other computer processing the acquired image) can determine the size of other scene components in image space given the known size of the instance of the target image in image space, as described above. The tablet computer 3 can also take the object image which is received, together with the received data indicating the size of the object in real-world space, and use the received data and a scale ratio indicating a relationship between size in real-world space and size in image space (for the target image) to scale the received image relative to other objects within the scene 5.
The tablet computer may be configured to display the image of the object of interest centrally (or in any other position) on the display device. As the camera 4 is moved so as to obtain data from other parts of the scene 5, the displayed composite image may be updated so as to continue to display the object of interest, albeit relative to different parts of the scene 5. As such, as a user 'pans' the camera across the scene 5, the object of interest can be visualised relative to different parts of the scene 5.
Displaying the object of interest relative to the scene 5 and at the same scale (or a predetermined scale) relative to objects within the scene 5 allows a user to more effectively perceive how the object of interest will appear in the real world. For example, it may be desired to know whether the object of interest will fit within a particular location in the scene 5 and, using the techniques described above, the user can use the tablet computer 3 to obtain an image of the particular location and display a composite image in which the object is displayed relative to the particular location.
In some embodiments the computer 1 is arranged to generate an image of the object of interest by allowing a user to specify a combination of schematic symbols, and processing the schematic symbols to generate the image of the object of interest.
Referring now to Figure 4B, in one embodiment the computer 1 is connected to a processing device 7 via the network 2. The processing device 7 is connected to a database 8 storing data used by the processing device 7. The processing device 7 may be, for example, a computer arranged to act as a server. For example, the processing device 7 may provide functionality such as a web-server and an application server.
The computer 1 is operable to execute a computer program, the computer program providing functionality that allows a user of the computer 1 to create a schematic representation of the object of interest, which is referred to in the following description as "a product". The product may take the form of a manifold for regulating fluid flows, made up of a plurality of modules; the schematic representation is created through the selection of a plurality of schematic symbols, each schematic symbol representing a respective module. For example, the computer program may be a web-browser configured to access a web-based application that provides the functionality. For example, the web-based application may be provided by the processing device 7 over the network 2.
Alternatively, the computer program may be a computer program stored locally to the computer 1. It will be appreciated that the term web-based application is intended to refer to a computer application that is accessed over a network such as the Internet or an intranet. Where the computer program is stored locally, the computer program may nonetheless be accessed via a web-browser (i.e. the computer program may comprise computer program code written in a browser-supported programming language such as JavaScript).
With reference to Figure 5, there is schematically illustrated a plurality of schematic symbols 9 which can be selected by a user for inclusion within a schematic of a composite product such as a manifold. In particular, a user selects from among the plurality of schematic symbols 9 to create a schematic 11 representing a composite product to be modelled. As will become apparent from the description herein, the schematic 11 need not comprise a representation of every part within the composite product. Upon selection of the schematic symbols 9, the schematic 11 is sent, via the network 2, to the processing device 7. The processing device 7 is configured to process the received schematic 11 to generate a model 12 of the composite product, the model 12 providing an accurate representation of the composite product to facilitate modelling and creation of the composite product. The model 12 may be, for example, a CAD model.
In generating the model 12, the processing device 7 selects from a plurality of modules represented by the selected schematic symbols, details of which are stored in the database 8. The processing device 7 further utilises data 13 stored in the database 8 associated with each module to determine requirements for using the modules 10 within the model 12. The database 8 therefore provides data that allows appropriate modules to be identified from the information within the schematic 11, and any information required to allow that module to be used within the model 12. For example, the data 13 may comprise indications of the physical characteristics of each module.
The data indicating physical characteristics may comprise a graphical representation of the module (or a location at which a graphical representation is stored). The data 13 may further comprise an orientation of the module, offsets with respect to other modules, and any other data required to accurately model the module within the composite product. The exact nature of the data provided by the data 13 may vary with respect to the composite products that are to be modelled.
Figure 6 illustrates an example of a user interface 15 that may be provided by software operating at the computer 1 to allow a user of the computer 1 to select schematic symbols 9 for inclusion in the schematic 11. In particular, the user interface 15 allows a user of the computer 1 to create a schematic representation of a manifold for regulating fluid flow, by selecting from a plurality of schematic symbols representing manifold modules such as valves, take offs, etc. In Figure 6, a plurality of schematic symbols 16 each representing a respective manifold module is arranged in a left-hand portion of the user interface 15. The schematic symbols 16 are selectable by a user for inclusion in a schematic representation 17 of the manifold displayed in a right-hand portion of the user interface 15.
Respective markers 18, 19 indicate positions within the schematic representation 17 at which modules may be validly added to the manifold. A top portion 29 of the user interface 15 provides a plurality of controls for setting parameters that apply to the whole of the manifold. In particular, a "base" option 29a allows a base size to be selected, a "port size" option 29b allows a port size to be selected, while an "auto spacers" option 29c allows a user to select whether spacers (for placement between modules) are to be selected and placed manually, or to be placed automatically.
As will be known to the skilled person, some modules included within a manifold may be "inverted". The user interface 15 therefore allows a user to indicate inversion for those schematic symbols that represent modules that may be inverted. For example, an orientation of a schematic symbol may be altered to indicate that the schematic symbol represents an inverted module.
The schematic representation 17 indicates a structure that manifolds may have. In particular, schematic symbols 20, 21, 22, 23, 24 and 25 may be considered to represent "inline" modules, while schematic symbols 26 to 28 may be considered to represent "stacker" modules. Inline modules may be considered to be defined in a left to right direction, while stacker modules form stacks above one or more inline modules.
The schematic symbols 20-25 may therefore be referred to as inline symbols, while the schematic symbols 26-28 may be referred to as stacker symbols.
When the schematic representation 17 represents a manifold that a user wishes to model, the schematic representation 17 may be saved to provide the schematic 11.
The schematic 11 may then be transmitted to the processing device 7 for processing.
The schematic 11 may store the schematic representation in any appropriate format, as would be readily apparent to the skilled person. In some embodiments, the schematic 11 is stored in a format in which the position of each schematic symbol is defined with reference to its immediately neighbouring schematic symbols.
For example, with reference to Figure 6, the schematic symbol 20 is defined as a root (or first) module. The schematic symbol 21 is defined as being to the right of the schematic symbol 20, the schematic symbol 22 is defined as being to the right of the schematic symbol 21, the schematic symbol 23 is defined as being to the right of the schematic symbol 22, the schematic symbol 24 is defined as being to the right of the schematic symbol 23, and the schematic symbol 25 is defined as being to the right of the schematic symbol 24. The schematic symbol 26 is defined as being above the schematic symbol 24, the schematic symbol 27 is defined as being above the schematic symbol 26, and the schematic symbol 28 is defined as being above the schematic symbol 27.
In this way, operations to process respective symbols within the schematic 11 are easily defined. For example, directional operations may be provided, so that a user can step "right", "left", "up" or "down" from each module to determine the modules neighbouring a currently selected module.
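Purely as an illustration of such a neighbour-referenced format, the sketch below shows one way the schematic symbols might be represented, with the directional stepping operations described above. The class and field names are hypothetical and are not taken from the patent.

```python
# Illustrative sketch of a neighbour-referenced schematic; names are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SchematicSymbol:
    name: str
    inverted: bool = False
    right: Optional["SchematicSymbol"] = None   # next inline symbol
    above: Optional["SchematicSymbol"] = None   # first stacker symbol above


def build_example_schematic() -> SchematicSymbol:
    # Root inline symbol, one further inline symbol, and a two-module stack.
    root = SchematicSymbol("valve_a")
    root.right = SchematicSymbol("take_off")
    root.right.above = SchematicSymbol("stacker_1")
    root.right.above.above = SchematicSymbol("stacker_2")
    return root


def walk_inline(root: SchematicSymbol):
    """Step 'right' through the inline symbols, yielding each with its stack."""
    symbol = root
    while symbol is not None:
        stack = []
        stacker = symbol.above
        while stacker is not None:          # step 'up' through the stack
            stack.append(stacker)
            stacker = stacker.above
        yield symbol, stack
        symbol = symbol.right


if __name__ == "__main__":
    for inline, stack in walk_inline(build_example_schematic()):
        print(inline.name, [s.name for s in stack])
```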
Figure 7 illustrates an example of a model 12 generated by the processing device 7 by processing a schematic 11. Modules corresponding to each respective schematic symbol 20 to 28 are provided within the model in an appropriate orientation together with any requisite components not specified in the schematic. An example of processing that may be performed to generate the model 12 is now described with reference to Figures 8 to 11.
In Figure 8 logical components used in the processing of the schematic 11 to generate the model 12 are depicted. In particular, example classes used within a computer program that generates a model from a schematic are shown. A Job class 30 comprises data structures necessary for storing a schematic 11 and operations (e.g. sub-routines, methods, functions, etc) used for accessing the schematic symbols within the schematic. Each instance of the Job class 30 comprises a respective instance of a SchematicSymbol class 31 for each one of the schematic symbols present within the schematic 11. The instance of the Job class 30 comprises data items for defining a root symbol from amongst its instances of the SchematicSymbol class 31 and for storing information relating to the schematic 11 as a whole, such as a base type, a port size, etc. Each instance of the Job class 30 therefore provides a complete representation of a schematic for processing by the processing device 7.
Each instance of the Job class 30 is associated with an instance of a Model class 32.
The Model class 32 comprises data structures suitable for storing the complete model 12, including a representation of the various modules and components of the composite product, and any other information that may be used to define the composite product such as a weight, length, height, etc. A ManifoldAssembler class 33 comprises operations (e.g. sub-routines, methods, functions, etc) and data items used to generate the model 12 from a schematic 11. In particular, the ManifoldAssembler class 33 comprises a primary sub-routine which is arranged to receive an instance of the Model class 32. The ManifoldAssembler class 33 is configured to process the instance of the Job class 30 associated with the received instance of the Model class 32 by stepping through the instances of the SchematicSymbol class 31 contained within the instance of the Job class 30. For each instance of the SchematicSymbol class 31 contained within the Job class 30, the ManifoldAssembler class 33 is configured to add appropriate modules and/or other components to the instance of the Model class 32 to generate the model 12.
Processing carried out by the primary sub-routine of the ManifoldAssembler class 33 is now described with reference to Figure 9. It will be appreciated that in other embodiments of the invention, the processing of Figure 9 may be performed at the computer 1. That is, in some embodiments, the processing device 7 and the computer 1 may be the same device.
At step S21 of Figure 9 a schematic 11 is received and a new manifold model is created (e.g. an instance of the Model class 32 is instantiated together with an instance of the Job class 30). Processing passes from step S21 to step S22 at which an initial bracket is added to the model. In general, brackets may be required for attaching a manifold to a base. Where brackets are required, the number of brackets may be influenced by a number of factors, including the type of base to which the manifold will be attached (for example, a "Namur" base, a "1 inch" base, etc.) and the number of modules that make up the manifold. For example, it may be specified that a bracket is required for every four modules of a manifold and that a manifold must have a minimum of two brackets in total. Respective counts may therefore be maintained of the number of modules that have been added to the manifold model since the last bracket was added, and the total number of brackets added to the manifold model.
Processing passes from step S22 to step S23 at which the root schematic symbol of the schematic 11 is made the current inline symbol. Processing then passes to step S24, at which a module represented by the current inline symbol is determined and that module added to the manifold model. Processing performed to select and add a module to the manifold model at step S24 is described in more detail below with reference to Figure 10.
Processing passes from step S24 to step S25 at which it is determined whether an exhaust is associated with the module represented by the current inline symbol. The determination as to whether an exhaust module is required may be made in any appropriate way. For example, identification of those modules that require an exhaust may be made based upon an identifier provided within a name of the schematic symbol. In this way, the name of the current schematic symbol may be examined at step S25 to determine whether an exhaust module is required. For example, a specific character may be reserved for the names of symbols which represent modules that require exhausts.
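Purely as an illustration of such a naming convention, a check of this kind might look like the sketch below; the reserved character shown here is a hypothetical choice and is not one specified by the patent.

```python
# Hypothetical convention: symbol names of modules needing an exhaust end in "E".
EXHAUST_MARKER = "E"


def requires_exhaust(symbol_name: str) -> bool:
    """Return True if the schematic symbol's name marks an exhaust requirement."""
    return symbol_name.endswith(EXHAUST_MARKER)


assert requires_exhaust("VALVE_3_2_E")
assert not requires_exhaust("TAKE_OFF_12")
```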
If it is determined at step S25 that an exhaust module is required for the module added to the manifold model at step S24, processing passes to step S26 at which an exhaust module is added to the manifold model. Processing passes from step S26 to step S27.
If, on the other hand, it is determined at step S25 that an exhaust module is not required in association with the module added to the manifold model at step S24, processing passes directly from step S25 to step S27.
At step S27 it is determined whether there is a stacker symbol "above" the current inline symbol in the schematic 11 (i.e. whether there are any stacker modules above the module that was added at step S24). If it is determined that there is a stacker symbol above the current inline symbol, processing passes from step S27 to step S28, at which the first stacker symbol in the stack is made the current stacker symbol.
Processing passes from step S28 to step S29 at which a stacker module represented by the current stacker symbol is added to the manifold model.
Processing passes from step S29 to step S30 at which it is determined whether there is a stacker symbol above the current stacker symbol. If it is determined that there are stacker symbols above the current stacker symbol, processing passes to step S31 at which the next stacker symbol in the stack is set to be the current stacker symbol.
Processing passes from step S31 to step S29. If, on the other hand, it is determined at step S30 that there are no stacker symbols above the current stacker symbol, processing passes to step S32.
At step S32 tie-rods are added to the manifold model for the current stack. A plurality of tie-rods run through each stacker module of the stack in order to fix the modules of the stack together. For example, a respective tie-rod may run through each of four "corners" of a stack. The length of the tie-rods to be used to connect the stack depends on the number and type (and therefore height) of stacker modules in the stack. By waiting until each stacker module has been processed, therefore, the correct tie-rod length can be automatically selected, and tie-rods of the correct length added.
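A minimal sketch of that selection, assuming hypothetical module heights and a hypothetical set of stocked tie-rod lengths, might be:

```python
# Hypothetical figures; the available lengths and end allowance are illustrative only.
AVAILABLE_TIE_ROD_LENGTHS_MM = [60, 90, 120, 150, 180]
END_ALLOWANCE_MM = 10  # extra length for fixings at each end of the stack


def select_tie_rod_length(stacker_heights_mm: list[float]) -> int:
    """Pick the shortest stocked tie-rod that spans the whole stack."""
    required = sum(stacker_heights_mm) + 2 * END_ALLOWANCE_MM
    for length in AVAILABLE_TIE_ROD_LENGTHS_MM:
        if length >= required:
            return length
    raise ValueError(f"No stocked tie-rod long enough for {required} mm")


print(select_tie_rod_length([35.0, 35.0, 20.0]))  # -> 120
```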
Processing passes from step S32 to step S33. On the other hand, if it is determined at step S27 that there are no stacker symbols (and therefore no stack) above the current inline symbol, processing passes directly from step S27 to step S33. At step S33 it is determined whether further brackets are required to be added to the manifold model.
As indicated above, a count may be maintained of the number of modules that have been added to the manifold model 12 since the last bracket was added. The determination at step S33 may therefore be a determination as to whether the count has reached a predetermined number (e.g. four). If it is determined at step S33 that a further bracket is required, processing passes to step S34 at which a bracket with an integrated seal is added to the manifold model. Processing passes from step S34 to step S36. On the other hand, if it is determined at step S33 that a further bracket is not required, processing passes from step S33 to step S35 at which a seal plate is added to the model after the previously added module. Processing passes from step S35 to step S36.
At step S36 it is determined whether there are any schematic symbols to the right of the current inline symbol. If it is determined that there are schematic symbols to the right of the current inline symbol, processing passes to step S37 at which the next inline symbol is made the current inline symbol. Processing passes from step S37 to step S24. If, on the other hand, it is determined at step S36 that there are no further schematic symbols to the right of the current inline symbol, processing passes from step S36 to step S38 at which any required inlet and outlet ports are added to the manifold model. Horizontal tie-rods tying the inline modules together are added based on the number and type (and therefore length) of modules added to the manifold model 12. The processing of Figure 9 passes from step S38 to end at step S39.
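The following sketch condenses the flow of Figure 9 into code. It is an illustrative outline only, written over assumed, simplified data structures (plain dictionaries with hypothetical keys and part names); it is not the patent's implementation, and the minimum-total-brackets rule is omitted for brevity.

```python
# Condensed, illustrative outline of the Figure 9 flow; all names are hypothetical.
MODULES_PER_BRACKET = 4  # example rule: one bracket per four modules


def assemble_manifold(schematic):
    """schematic: list of inline symbol dicts, each optionally carrying a 'stack' list."""
    model = {"parts": []}
    modules_since_bracket = 0

    model["parts"].append("bracket")                      # step S22: initial bracket
    for symbol in schematic:                              # steps S23/S24 and S36/S37
        model["parts"].append(f"module:{symbol['name']}")
        modules_since_bracket += 1

        if symbol.get("needs_exhaust"):                   # steps S25/S26
            model["parts"].append("exhaust")

        for stacker in symbol.get("stack", []):           # steps S27-S31
            model["parts"].append(f"stacker:{stacker}")
        if symbol.get("stack"):                           # step S32
            model["parts"].append("vertical tie-rods (length from stack height)")

        if modules_since_bracket >= MODULES_PER_BRACKET:  # steps S33/S34
            model["parts"].append("bracket with integrated seal")
            modules_since_bracket = 0
        else:                                             # step S35
            model["parts"].append("seal plate")

    model["parts"].append("inlet/outlet ports")           # step S38
    model["parts"].append("horizontal tie-rods")
    return model


example = [
    {"name": "valve", "needs_exhaust": True},
    {"name": "take_off", "stack": ["regulator", "gauge"]},
]
print(assemble_manifold(example)["parts"])
```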
It will be appreciated that the processing described with reference to Figure 9 is an example of processing for generating a manifold model, in order to show how schematic symbols may be processed to generate a model of a composite product.
Changes to the processing of Figure 9 may be made, both for generating models of manifolds and for generating models of other types of composite product. For example, the processing of Figure 9 assumes that brackets are added to the model. It may be that, because of factors such as a particular base being used, brackets are not required, and would therefore not be added to the model 12 during the processing of the schematic 11.
It is described above that at steps S24 and S29 of Figure 9 inline modules and stacker modules respectively are added to the manifold model based on the current inline symbol or current stacker symbol being processed. Selection of the correct module is based on data stored in the database 8 in connection with each module. Referring to Figure 10, processing carried out to add a new module to the manifold model at steps S24 and S29 is now described.
At a step S40 details of the module corresponding to the current inline or stacker symbol are obtained from a Modules table of the database 8 storing details of all modules. For example, each schematic symbol in the schematic 11 may comprise a respective identifier that can be used to determine a module identifier. The module identifier may then be used as an index into the Modules table. Where a schematic symbol represents a module that can be inverted, the schematic identifier of that schematic symbol may be associated with two module identifiers. Retrieving the particular module identifier at step S40 may therefore include determining an orientation of the schematic symbol, so as to establish whether the schematic symbol has been inverted.
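As an illustration of that lookup, the sketch below maps a symbol identifier to one or two module identifiers, with the orientation selecting between them. The table layout, identifiers and field names are hypothetical; the real data would reside in the Modules table of the database 8.

```python
# Hypothetical lookup table; identifiers are invented for illustration only.
SYMBOL_TO_MODULE = {
    # symbol id: (module id when upright, module id when inverted, or None)
    "SOV_3_2": ("MOD-1041", "MOD-1041-INV"),
    "TAKE_OFF_HALF_INCH": ("MOD-2203", None),
}


def module_id_for_symbol(symbol_id: str, inverted: bool) -> str:
    upright, inverted_variant = SYMBOL_TO_MODULE[symbol_id]
    if inverted:
        if inverted_variant is None:
            raise ValueError(f"{symbol_id} cannot be inverted")
        return inverted_variant
    return upright


print(module_id_for_symbol("SOV_3_2", inverted=True))               # -> MOD-1041-INV
print(module_id_for_symbol("TAKE_OFF_HALF_INCH", inverted=False))   # -> MOD-2203
```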
The details obtained from the database 8 may include any details stored in the database 8 used to add the module to the manifold model (e.g. at a correct orientation and in a way that avoids conflicts between components). In the presently described example, the database 8 includes a 3D graphical representation of each module, orientation data for each module to allow the module (and the 3D graphical representation) to be correctly positioned in 3D space with respect to other modules, and to allow inlets of one module to align with outlets of a neighbouring module. The database 8 further comprises measurement information indicating amounts by which the current module (i.e. the module currently being added to the manifold model) protrudes into a space that is occupied by a previously added module and a space that may be occupied by a further module.
Figure 11 schematically illustrates an example of protrusion data that may be stored for each module. A module 40 is schematically illustrated, the module 40 comprising a front face 41, a top face 42, a bottom face 43 (not visible), a left face 44, a right face 45 (not visible) and a back face 46 (not visible). For a manifold such as the manifold shown in Figure 7, it may be desired to process a top, bottom and front face of each module as these are the faces from which protrusions may occur (the back, left and right faces being generally utilised for connection with a base and with other modules).
An arrow 47 protruding from the front face 41 indicates a protrusion perpendicular to the front face 41. A size of the front protrusion is stored in a FrontProtrusion field of the Modules table of the database 8. An arrow 48 indicates an amount by which the front protrusion 47 extends leftward into a region that may be occupied by a previous module (e.g. beyond a main body of the module 40). A size of the leftward protrusion 48 is stored in an InvadeByFrontLeft field of the Modules table of the database 8. An arrow 49 indicates an amount by which the front protrusion 47 extends rightward into a region that may be occupied by a following module (e.g. beyond a main body of the module 40). A size of the rightward protrusion 49 is stored in an InvadeByFrontRight field of the Modules table of the database 8.
An arrow 50 indicates a top protrusion perpendicular to the top face 42. An arrow 51 indicates an amount by which the top protrusion 50 extends leftward into a region that may be occupied by a previous module. A size of the leftward protrusion 51 is stored in an InvadeByTopLeft field of the Modules table of the database 8. An arrow 52 indicates an amount by which the top protrusion 50 extends rightward into a region that may be occupied by a following module. A size of the rightward protrusion 52 is stored in an InvadeByTopRight field of the Modules table of the database 8.
Similar information as described above for the front face 41 and top face 42 may be stored for the bottom face 43 in a BottomProtrusion field, an InvadeByBottomLeft field and an InvadeByBottomRight field of the Modules table of the database 8. It is to be understood, however, that the protrusion information illustrated in Figure 11 is exemplary, and that additional or different protrusion information may be stored in dependence upon the type of modules to be modelled. That is, any protrusion information may be stored, for any area of a module.
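To make the shape of this per-module record concrete, a sketch of the fields described above is given below; the field names follow the description, while the units (millimetres), defaults and example values are assumptions.

```python
# Sketch of a per-module protrusion record; units (mm) and values are assumptions.
from dataclasses import dataclass


@dataclass
class ProtrusionData:
    # Protrusion perpendicular to each face, and how far that protrusion invades
    # the space of the previous (left) / following (right) module.
    front_protrusion: float = 0.0
    invade_by_front_left: float = 0.0
    invade_by_front_right: float = 0.0
    top_protrusion: float = 0.0
    invade_by_top_left: float = 0.0
    invade_by_top_right: float = 0.0
    bottom_protrusion: float = 0.0
    invade_by_bottom_left: float = 0.0
    invade_by_bottom_right: float = 0.0


handle_module = ProtrusionData(front_protrusion=25.0,
                               invade_by_front_left=6.0,
                               invade_by_front_right=6.0)
```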
With reference again to Figure 7, the effect of protrusions on the placement of modules in a manifold may be more clearly appreciated. A module 60 comprises a port portion 61 that protrudes to the front of the module 60. A module 62 to the right of the module 60 comprises a handle portion 63 that also protrudes to the front. The handle portion 63 also protrudes to front-left and front-right of the module 62. It can be seen from Figure 7 that the protrusions 61 and 63 are such that placement of the module 62 next to the module 60 could result in impingement of the handle portion 63 on the port portion 61, such that the ability to correctly operate the handle portion 63 would be compromised. It will be appreciated that protrusion information may be set for reasons other than a physical protrusion. That is, it may be determined that a clearance is required between components for any reason, and appropriate indications stored in the database 8 accordingly.
Referring again to Figure 10, at step S41, the protrusion details of the current module are processed, in combination with the protrusion details of the previously added module to determine whether any additional space is required between the previously added module and the current module. The processing of step S41 is described in more detail below with reference to the flowchart shown in Figure 12. At step S42 it is determined whether the amount of space required (if any) calculated at step S41 is such that one or more spacer components are required to be added to the model between the previous module and the current module.
If it is determined that one or more spacers are required, processing passes to step S43 at which one or more spacers are added to the model in accordance with the space requirements determined at step S41. For example, one or more spacer components may be defined as being available for use between modules, each spacer component having a predetermined amount by which that spacer component separates modules between which it is placed. Depending upon the size information calculated at step S41, one or more spacers may be selected as required.
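One simple way such a selection might work is sketched below; the spacer widths and the greedy choice are assumptions made for illustration, not values or logic taken from the patent.

```python
# Hypothetical spacer widths (mm); the greedy selection is illustrative only.
SPACER_WIDTHS_MM = [20, 10, 5]  # largest first


def select_spacers(required_gap_mm: float) -> list[int]:
    """Choose spacers whose combined width at least covers the required gap."""
    chosen, total = [], 0.0
    while total < required_gap_mm:
        remaining = required_gap_mm - total
        # largest spacer that still fits the remaining gap, else the smallest one
        width = next((w for w in SPACER_WIDTHS_MM if w <= remaining),
                     SPACER_WIDTHS_MM[-1])
        chosen.append(width)
        total += width
    return chosen


print(select_spacers(12.5))  # -> [10, 5]
```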
Referring to Figure 7, it can be seen that a spacer module 64 is placed between the module 60 and the module 62 such that the protrusions 61 and 63 do not impinge upon each other. Referring again to Figure 10, processing passes from step S43 to step S44. If, on the other hand, it is determined at step S42 that no spacers are required between the current module and the previous module, processing passes from step S42 directly to step S44.
At step S44 the module corresponding to the schematic symbol currently being processed is added to the model 12. As described above, the Modules table of the database 8 stores orientation information for each module so that each module can be appropriately oriented within the model 12 with respect to other components and for inlets and outlets of adjacent modules to correctly align. Different orientation, offset, and protrusion data may be stored for inverted and non-inverted modules to allow both inverted and non-inverted modules to be correctly processed.
Processing passes from step S44 to step S45 at which height and/or length parameters are updated to allow the calculation of appropriate tie-rod lengths at steps S32 and/or S38.
The processing described above in connection with Figure 10 is described in terms of modules added to the model 12 in a "single piece". It will be appreciated, however, that modules may comprise a plurality of separate pieces. For example, Figure 13 illustrates a Double 1/2 inch Take-off Module 60 comprising a first 1/2 inch Port Block 61, a first Seal Plate 62, a Double Take-off Body 63, a second Seal Plate 64 and a second 1/2 inch Port Block 65. An entry may be stored in the database 8 for each respective piece, with a module comprising a "base piece" and with other pieces of the module having a defined offset from the location of the base piece within the model. Locations of module "faces" and protrusions therefrom (as described in more detail above) may be stored per module, rather than per piece.
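A sketch of how such a multi-piece module might be recorded is given below; the piece names follow Figure 13, while the offset representation and values are assumptions.

```python
# Illustrative record of a multi-piece module; offsets (mm, along the manifold axis)
# are hypothetical values relative to the module's base piece.
double_take_off_module = {
    "base_piece": "Double Take-off Body",
    "pieces": [
        {"name": "1/2 inch Port Block", "offset_mm": -30.0},
        {"name": "Seal Plate", "offset_mm": -15.0},
        {"name": "Double Take-off Body", "offset_mm": 0.0},
        {"name": "Seal Plate", "offset_mm": 15.0},
        {"name": "1/2 inch Port Block", "offset_mm": 30.0},
    ],
    # faces and protrusions are recorded once for the module, not per piece
    "protrusions": {"front": 0.0, "top": 0.0, "bottom": 0.0},
}
```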
Referring to Figure 12, processing carried out to calculate a required number of spacers is now described. At step S50 a current module face to be processed is selected. The processing at step S50 may select, for example, the top face. Processing passes from step S50 to step S51 at which it is determined whether there are protrusions from the selected face on each of the previous and the current modules.
For example, if the top face was selected at step S50, it is determined at step S51 whether there is a top protrusion on the previously added module and a top protrusion on the module currently being processed.
If it is determined at step S51 that there is a protrusion on each of the previous and the current module for the currently selected face, processing passes to step S52 at which a spacing value required to accommodate the protrusions from the face being processed is set to be equal to the amount by which the previous module protrudes into the space occupied by the current module plus the amount that the current module protrudes into the space occupied by the previous module. That is, with reference to the protrusion values stored in the Modules table of the database 8, the spacing required to accommodate the protrusions from the top face would be set to be equal to the value stored in the InvadeByTopRight field for the previous module plus the value stored in the InvadeByTopLeft field for the current module.
Processing passes from step S52 to step S53 at which it is determined whether there are further faces to be processed. If it is determined that there are further faces to be processed, processing passes to step S54 at which the next face (e.g. the bottom face, or front face) is set to be the next face for processing. Processing passes from step S54 to step S51.
If it is determined at step S51 that either of the previous or the current modules does not comprise a protrusion from the face currently being processed, processing passes directly from step S51 to step S53.
If it is determined at step S53 that there are no further faces to be processed, processing passes from step S53 to step S55 at which the spacing required between the previous and current modules is set to be the largest of the spacing values calculated for each of the faces. That is, if the spacing calculated to be needed to accommodate protrusions from the top face is larger than the spacing calculated to be required to accommodate protrusions from the bottom face and larger than the spacing calculated to be required to accommodate protrusions from the front face, the overall required spacing would be determined at step S55 to be the spacing value calculated for the top face. Similarly, if the spacing value calculated for the bottom face is the largest spacing value, the overall spacing value required would be determined at step S55 to be the spacing value calculated for the bottom face. Finally, if the spacing value calculated for the front face is the largest spacing value, the overall spacing value required would be determined at step S55 to be the spacing value calculated for the front face.
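A compact sketch of that per-face calculation is given below; the dictionary keys mirror the invade fields described above, but the data layout and example values are assumptions.

```python
# Per-face spacing requirement, following the Figure 12 description; values illustrative.
FACES = ("front", "top", "bottom")


def required_spacing(previous: dict, current: dict) -> float:
    """Largest per-face gap needed so neighbouring protrusions do not clash."""
    spacing = 0.0
    for face in FACES:
        # step S51: only considered when both modules protrude from this face
        if (previous.get(f"{face}_protrusion", 0.0) > 0
                and current.get(f"{face}_protrusion", 0.0) > 0):
            gap = (previous.get(f"invade_by_{face}_right", 0.0)
                   + current.get(f"invade_by_{face}_left", 0.0))   # step S52
            spacing = max(spacing, gap)                            # step S55
    return spacing


previous_module = {"front_protrusion": 20.0, "invade_by_front_right": 6.0}
current_module = {"front_protrusion": 25.0, "invade_by_front_left": 4.0}
print(required_spacing(previous_module, current_module))  # -> 10.0
```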
The processing described above with reference to Figures 9, 10 and 12 has described example processing that may be carried out to add particular modules to a model of a manifold. It will be appreciated that other processing may be required depending upon the characteristics of the modules that are to be added to the manifold, and the characteristics of the manifold itself.
It will be appreciated from the above that the processing of Figures 9, 10 and 12 allows for the creation of a detailed model 12 of a manifold based on a simple, easily generated schematic 11. The model 12 may be saved in any appropriate format and used to accurately determine whether the manifold modelled therein meets predetermined requirements, such as space requirements. For example, the model 12 may be used to generate a 3D visualisation of the manifold.
If such modelling results in a determination that a manifold design does not meet one or more requirements, it will be appreciated that changes may be easily made to the schematic 11 using, for example, the user interface 15 to add, remove or replace schematic symbols. A new model may then be created. Embodiments described herein therefore also allow for a rapid reconfiguration of information used to generate models of composite products such as manifolds, and the rapid generation of models in accordance with that information.
The description presented above with reference to Figures 4B to 13 provides a method whereby the computer 1 can conveniently obtain a model of an object of interest, in this case a model of a composite product such as a manifold. The model provides a three-dimensional visual representation of the object of interest which can then be provided from the computer 1 to the tablet computer 3 (Figure 4A) as described above so as to allow visualisation of the object of interest within the scene of interest.
The methods described above together provide a convenient overall method for the design and visualisation of a composite product, in that a user can use the techniques of Figures 4B to 13 to generate an image of an object of interest from a simple schematic and subsequently use techniques as exemplified with reference to Figure 4A to allow the image of the object of interest to be effectively visualised.
Figure 14 shows the computer 1 of Figures 4A and 4B in further detail. The computer 1 comprises a CPU 102 which is configured to read and execute instructions stored in a volatile memory 103 which takes the form of a random access memory (RAM). The volatile memory 103 stores instructions for execution by the CPU 102 and data used by those instructions. For example, image data may be generated and manipulated by the instructions and such image data may be stored in the volatile memory 103.
The computer 1 further comprises non-volatile storage in the form of a hard disc drive 104. The hard disc drive 104 may provide persistent storage for image data which is to be processed by the computer 1. The computer 1 further comprises an I/O interface 105 to which are connected peripheral devices used in connection with the computer 1.
More particularly, a display 106 is configured so as to display output from the computer 1. Input devices are also connected to the I/O interface 105. Such input devices may include a keyboard 107 and a mouse 108 which allow user interaction with the computer 1. It will be appreciated that the computer 1 may have other input interfaces, for example where the computer 1 is a portable computer it may include any suitable portable computer interface such as a touch screen.
A network interface 109 allows the computer 1 to be connected to an appropriate communications network, such as the network 2, so as to receive and transmit data from and to other computers. The CPU 102, volatile memory 103, hard disc drive 104, I/O interface 105, and network interface 109 are connected together by a communications bus 110 to allow communication therebetween.
It will be appreciated that the tablet computer 3 (Figure 4A) and the processing device 7 (Figure 4B) can both take a form similar to that of the computer 1 illustrated in Figure 14.
Although various embodiments have been described above, it will be appreciated that these embodiments are presented for the purposes of illustration only and are not intended to limit the scope of the claimed invention. It will be readily apparent to the skilled person that various modifications can be made to the described embodiments without departing from the spirit and scope of the present invention.

Claims (22)

  1. A method for visualising an object, the method comprising: acquiring first image data representing a space in which visualisation is to be provided; identifying at least one predetermined tracking object represented within the first image data, said predetermined tracking object having an associated first size parameter in image space and an associated second size parameter in real-world space; reading data defining a model of the object, said data comprising data including a third size parameter for the object in real-world space; generating second image data comprising a representation of the object; and outputting to a display device a composite image based upon the first image data and the second image data; wherein said representation of the object within said second image is displayed at a size based upon the second and third size parameters.
  2. A method according to claim 1, wherein generating said second image comprises determining a size of said representation of said object in image space based upon said first, second and third size parameters.
  3. A method according to claim 2, wherein determining a size of said representation of said object in image space comprises determining a fourth size parameter associated with said representation of said object in said second image, said fourth size parameter being such that a first ratio between said fourth size parameter and said first size parameter has a predetermined relationship with a second ratio between said second size parameter and said third size parameter.
  4. A method according to claim 3, wherein said predetermined relationship is equality.
  5. A method according to claim 3, wherein said predetermined relationship is such that said first ratio F is related to said second ratio S by F = xS, where x is a real number such that 0 < x.
  6. A method according to claim 5, further comprising determining a value for x.
  7. A method according to claim 6, wherein determining the value for x comprises determining the value for x based upon one or more features of the first image data.
  8. A method according to claim 7, wherein determining the value for x comprises defining a bounding volume and selecting the value x such that the representation of the object in the second image has a predetermined relationship with said bounding volume.
  9. A method according to any preceding claim, wherein said object comprises one or more fluid flow control components.
  10. A method according to any preceding claim, wherein said predetermined tracking object is a predetermined tracking image.
  11. A method according to any preceding claim, wherein each size parameter represents one or more dimensions of a respective object in a respective space.
  12. A method according to any preceding claim, further comprising generating said data defining a model of the object.
  13. A method according to claim 12, further comprising processing a schematic depiction of the object and generating said data defining the model of the object based upon said schematic depiction.
  14. A method according to claim 13, wherein the object is a composite product comprising a plurality of modules, and generating said data defining the model of the object comprises obtaining model data for each of the modules and processing model data for each of the modules to generate the model of the object.
  15. A method according to claim 14, wherein the model data for each of the modules includes a size parameter indicating a size of the module in real-world space, and the third size parameter is generated based upon size parameters of each of the modules for which model data is obtained.
  16. A method according to claim 14 or 15, wherein said schematic depiction of the object includes a schematic depiction of at least some of said modules.
  17. A method according to claim 16, wherein model data is obtained for each of the modules depicted in the schematic depiction of the object and is optionally obtained for at least one further module, the further module being selected based upon at least one of the modules depicted in the schematic depiction of the object.
  18. A method according to any preceding claim, further comprising receiving updated first image data representing a further space in which visualisation is to be provided, combining said updated first image data with said second image data to generate a further composite image and displaying said further composite image.
  19. A method according to any preceding claim, wherein the first image data is acquired by a camera integral with a computing device having a display device on which the composite image is displayed.
  20. A method according to claim 19, wherein the computing device receives, over a network, the model of the object.
  21. A computer readable medium comprising computer executable instructions configured to cause a computer to carry out a method according to any preceding claim.
  22. A computer apparatus comprising: a memory storing processor readable instructions; and a processor configured to read and execute instructions stored in said memory; wherein said processor readable instructions comprise instructions configured to cause the processor to carry out a method according to any one of claims 1 to 20.
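As a further illustration of the model-composition approach recited in claims 14 and 15, the following sketch shows one possible way of deriving the third size parameter of a composite product from the size parameters of its modules. The assumption that the modules stack end to end along a single axis is made for illustration only; the claims do not prescribe a particular composition rule.

```python
# Illustrative sketch only; not part of the patent disclosure.
from typing import Iterable


def overall_length_mm(module_lengths_mm: Iterable[float]) -> float:
    """Derive the composite product's overall real-world length (the third size
    parameter) from the real-world lengths of its modules, assuming the modules
    stack end to end along a single axis."""
    return sum(module_lengths_mm)


# Example: three modules of 120 mm, 80 mm and 100 mm give a 300 mm assembly.
assert overall_length_mm([120.0, 80.0, 100.0]) == 300.0
```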
GB1313604.9A 2013-07-30 2013-07-30 Visualisation method Active GB2516691B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1313604.9A GB2516691B (en) 2013-07-30 2013-07-30 Visualisation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1313604.9A GB2516691B (en) 2013-07-30 2013-07-30 Visualisation method

Publications (3)

Publication Number Publication Date
GB201313604D0 GB201313604D0 (en) 2013-09-11
GB2516691A true GB2516691A (en) 2015-02-04
GB2516691B GB2516691B (en) 2020-10-07

Family

ID=49167183

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1313604.9A Active GB2516691B (en) 2013-07-30 2013-07-30 Visualisation method

Country Status (1)

Country Link
GB (1) GB2516691B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6222637B1 (en) * 1996-01-31 2001-04-24 Fuji Photo Film Co., Ltd. Apparatus and method for synthesizing a subject image and template image using a mask to define the synthesis position and size
US5867282A (en) * 1996-07-29 1999-02-02 Eastman Kodak Company Method of combining two digitally generated images wherein one is customized in view of the other
US20020152462A1 (en) * 2000-08-29 2002-10-17 Michael Hoch Method and apparatus for a frame work for structured overlay of real time graphics
EP1507235A1 (en) * 2003-08-15 2005-02-16 Werner G. Lonsing Method and apparatus for producing composite images which contain virtual objects
US8406519B1 (en) * 2010-03-10 2013-03-26 Hewlett-Packard Development Company, L.P. Compositing head regions into target images
US20120102398A1 (en) * 2010-10-25 2012-04-26 Cok Ronald S Automated image template layout method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108292141A (en) * 2016-03-01 2018-07-17 深圳市大疆创新科技有限公司 Method and system for target following
US10802491B2 (en) 2016-03-01 2020-10-13 SZ DJI Technology Co., Ltd. Methods and systems for target tracking

Also Published As

Publication number Publication date
GB2516691B (en) 2020-10-07
GB201313604D0 (en) 2013-09-11

Similar Documents

Publication Publication Date Title
US6879322B2 (en) Three-dimensional object display system, three-dimensional object display method and recording medium recording a three-dimensional object display program
JP5991423B2 (en) Display device, display method, display program, and position setting system
US8264488B2 (en) Information processing apparatus, information processing method, and program
KR20180131471A (en) Apparatus for integrated management of construction errors using 3d scanning with bim and method thereof
EP2798616B1 (en) Target aquisition in a three dimensional building display
EP1835466A2 (en) Method and apparatus for geometric data processing and a parts catalog system
US20060017725A1 (en) Information processing method and information processing apparatus
CN109857825B (en) Three-dimensional model display method and system
CN101866379B (en) Method, program and product edition system for visualizing objects displayed on a computer screen
CN102779202B (en) Method and apparatus for the executor of selecting object
CN105745586A (en) Program for creating work assistance data
Vincke et al. Immersive visualisation of construction site point cloud data, meshes and BIM models in a VR environment using a gaming engine
JP2009134708A (en) Part identification image generation device, part identification image generation method, part identification image display device, part identification image display method, and recording medium
JP6049923B1 (en) Parts information retrieval apparatus, parts information retrieval method, and program
JP6544407B2 (en) INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND PROGRAM
US20140002448A1 (en) Measurement support device, method and computer program product
US20220121784A1 (en) Method and apparatus for visually comparing geo-spatially aligned digital content according to time
JP5372590B2 (en) Information processing apparatus, information processing method, and program
US8311320B2 (en) Computer readable recording medium storing difference emphasizing program, difference emphasizing method, and difference emphasizing apparatus
GB2516691A (en) Visualisation method
JP2004094663A (en) Conversion check device, conversion check method, program and storage medium
JP5449851B2 (en) Placement planning support device
US20120330619A1 (en) Dynamic connection visualization in computer aided design package
Andreev et al. Expansion of the Functions of the Multi-View Stereomaker Software for Automatic Construction of Complex Stereo Images
JP6264208B2 (en) Display program, display method, and display device