CN110262729B - Object processing method and device - Google Patents

Object processing method and device

Info

Publication number
CN110262729B
Authority
CN
China
Prior art keywords: collision, region, area, determining, detected
Prior art date
Legal status: Active
Application number
CN201910420908.0A
Other languages
Chinese (zh)
Other versions
CN110262729A (en)
Inventor
郑新宇
吕君校
张骕珺
罗颖灵
Current Assignee
Lenovo Shanghai Electronics Technology Co Ltd
Original Assignee
Lenovo Shanghai Electronics Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Shanghai Electronics Technology Co Ltd
Priority to CN201910420908.0A
Publication of CN110262729A
Application granted
Publication of CN110262729B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/131 Protocols for games, networked simulations or virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application discloses an object processing method and device. The method comprises the following steps: detecting a collision object that collides in display data, and determining the collision surface on which the collision object collides and the collision region to which it belongs; determining a region to be detected, the region to be detected being a region having no association with the collision region; and determining the likelihood that an object in the region to be detected collides with the collision surface, and performing collision detection on objects whose likelihood satisfies a collision determination condition.

Description

Object processing method and device
Technical Field
The present application relates to information processing technologies, and in particular, to an object processing method and device.
Background
Collisions between objects occur frequently in games, and detecting them is a key technical problem. Especially in flight shooting games, if this problem is not handled well, the players' interest is greatly affected. Collision detection detects the physical edges of each object in the game and determines whether objects collide, preventing two objects from passing through each other when they collide. For example, when a character in a game hits a wall, the collision detection technique determines the position and interaction of the character and the wall based on their characteristics, ensuring that the character neither passes through the wall nor becomes embedded in it.
In the related art, collision detection is performed by a physics engine that simulates real physics, which may consume a large amount of CPU (Central Processing Unit) resources and thus degrade CPU performance.
Disclosure of Invention
In view of this, embodiments of the present application provide an object processing method and device.
The technical scheme of the embodiment of the application is realized as follows:
the object processing method provided by the embodiment of the application comprises the following steps:
detecting a collision object which collides in display data, and determining a collision surface on which the collision object collides and a collision area to which the collision object belongs;
determining a region to be detected; the region to be detected is a region which has no association relation with the collision region;
and determining the possibility of collision between the object in the area to be detected and the collision surface, and performing collision detection on the object with the possibility meeting the collision determination condition.
The object processing device provided by the embodiment of the application comprises: a processor and a memory for storing a computer program capable of running on the processor; wherein the processor is configured to execute the object processing method when the computer program is executed.
In the embodiment of the application, when the display data contains a collision object that has collided, the regions to be detected are determined according to the region in which the collision object is located; only the likelihood that objects in the regions to be detected collide with the collision surface is determined, and only objects whose likelihood satisfies the collision determination condition undergo collision detection. This narrows the detection range of collision detection and reduces the CPU resources consumed by collision detection.
Drawings
Fig. 1 is a first schematic flowchart of an object processing method according to an embodiment of the present application;
FIG. 2 is a schematic view of display data according to an embodiment of the present application;
FIG. 3 is a schematic view of a collision surface according to an embodiment of the present application;
fig. 4 is a second flowchart illustrating an object processing method according to an embodiment of the present application;
FIG. 5 is a first diagram illustrating region division according to an embodiment of the present application;
fig. 6 is a third schematic flowchart of an object processing method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of associated regions according to an embodiment of the present application;
FIG. 8A is a schematic diagram illustrating region division according to an embodiment of the present application;
FIG. 8B is a schematic diagram illustrating quadtree region division according to an embodiment of the present application;
fig. 9A is a fourth schematic flowchart of an object processing method according to an embodiment of the present application;
FIG. 9B is a diagram illustrating a moving object according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an object processing apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an object processing apparatus according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the examples provided herein are merely illustrative of the present application and are not intended to limit it. In addition, the following embodiments are some, rather than all, of the embodiments for implementing the present application, and the technical solutions described in the embodiments may be combined arbitrarily provided there is no conflict.
In various embodiments of the present application: detecting a collision object which collides in display data, and determining a collision surface on which the collision object collides and a collision area to which the collision object belongs; determining a region to be detected; the region to be detected is a region which has no association relation with the collision region; and determining the possibility of collision between the object in the area to be detected and the collision surface, and performing collision detection on the object with the possibility meeting the collision determination condition.
The embodiment of the present application provides an object processing method applied to an object processing device. Each functional module in the object processing device may be implemented cooperatively by the hardware resources of an electronic device (such as a terminal device, a server, or a server cluster), including computing resources such as processors and communication resources (for example, supporting communication over optical cable, cellular, and other links).
Of course, the embodiments of the present application are not limited to being provided as methods and hardware; in various implementations they may also be provided as a storage medium storing instructions for executing the object processing method provided by the embodiments of the present application.
An embodiment of the present application provides an object processing method, as shown in fig. 1, the object processing method includes:
step S101, detecting a collision object which collides in display data, and determining a collision surface on which the collision object collides and a collision area to which the collision object belongs.
The object processing device may be any electronic device with information processing capability. In one embodiment, the object processing device may be a smart terminal, for example, a mobile terminal with wireless communication capability such as a mobile phone, a tablet (e.g., iPad), a notebook, AR glasses, or an AR projector. In another embodiment, the object processing device may also be a non-mobile terminal device with computing capability, such as a desktop computer.
The object processing apparatus has an image processing application installed in it, such as an Augmented Reality (AR) application or a gaming application. The image processing application converts the display data into an image and outputs it on the display screen. The objects in the display data may include different display elements, such as tables, chairs, trees, and people, which together form the picture in the display data.
When the image processing application is an AR application, the object processing apparatus may, based on the AR application, combine information of virtual objects in a virtual environment with information of actual objects in the actual environment on the screen, and enable interaction between the virtual objects and the actual objects. The display data then comprises display data corresponding to the virtual environment, which may be referred to as virtual data, and display data corresponding to the actual environment, which may be referred to as scene data. Correspondingly, the objects in the display data include virtual objects (objects in the virtual data) and scene objects (objects in the scene data). For example, as shown in fig. 2, the display data of the electronic device includes a scene object 202 in a real environment 201 and a virtual object 203.
The object handling device may be provided with a collision detection application which may be integrated in the image processing application or may be independent of the image processing application as a third party plug-in to the image processing application.
The object processing apparatus detects, through the collision detection application, whether any object in the display data has collided. When a collision of any object in the display data is detected, that object is identified as a collision object. When the image processing application is an AR application, a collision in the display data may be between a scene object and a virtual object, or between two virtual objects.
Upon detecting the collision object through the collision detection application, the object processing apparatus determines the collision surface from the position at which the collision object collides, and determines the region to which the collision object belongs; this region is taken as the collision region.
Here, the interface at which colliding objects meet is referred to as the collision surface, which may also be called the collision interface. As shown in fig. 3, when object 301 and object 302 collide, the interface 303 between them is the collision surface. The collision surface may be represented by three-dimensional coordinates.
The display interface on which the object processing device presents the display data, i.e., the area visible to the user, is divided into a plurality of regions. Each region corresponds to part of the display data, and the region to which each object belongs is determined by the object's display position on the display interface.
In the embodiment of the present application, no limitation is imposed on the region division algorithm. In one example, the object processing device divides the display interface into a set number of equally sized regions, e.g., uniformly into 9 regions or into 16 regions; a sketch of such a division follows. In yet another example, the object processing device divides the display interface into a plurality of regions according to a quadtree.
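By way of a non-limiting illustration, the uniform division described above can be sketched as follows. This is a minimal Python sketch; the function name `region_of`, the 4×4 default grid, and the row-major 1-based numbering are assumptions made for the example, not part of the disclosed method.

```python
# Minimal sketch (assumptions: row-major, 1-based region numbering): divide a
# display interface into rows x cols equally sized regions and map an object's
# display position to the region it belongs to.

def region_of(pos, interface_size, rows=4, cols=4):
    """Return the 1-based index of the region containing point `pos`."""
    x, y = pos
    width, height = interface_size
    col = min(int(x / (width / cols)), cols - 1)   # clamp points on the edge
    row = min(int(y / (height / rows)), rows - 1)
    return row * cols + col + 1

# Example: a 1600x1200 interface divided uniformly into 16 regions (4x4).
print(region_of((900, 700), (1600, 1200)))  # -> 11
```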
In the embodiment of the application, different objects can be stored in different mesh files.
Step S102, determining a region to be detected; the region to be detected is a region having no association with the collision region.
After the collision region is determined in step S101, the regions into which the display interface is divided that have no association with the collision region are determined as regions to be detected; that is, of the divided regions, the regions other than those associated with the collision region are the regions to be detected.
For example: the divided regions of the display interface comprise regions 1 through 16; the collision region to which the collision object belongs is region 2; the regions associated with collision region 2 are region 1, region 3, region 6, and region 10; the regions to be detected, having no association with collision region 2, then include: region 4, region 5, region 7, region 8, region 9, region 11, region 12, region 13, region 14, region 15, and region 16.
Step S103, determining the possibility of collision between the object in the area to be detected and the collision surface, and performing collision detection on the object with the possibility meeting the collision judgment condition.
After the region to be detected is determined in step S102, the objects in the region to be detected are taken as candidate objects, the spatial features of each candidate object are obtained, and the likelihood of each candidate object colliding with the collision surface is determined from those spatial features. The likelihood may be represented by a probability, for example: the probability of candidate object A colliding with the collision surface is 0.4. The likelihood may also be represented by a likelihood level, for example: the likelihood levels include level 1, level 2, and level 3, where level 1 indicates that a collision is impossible, level 2 indicates that a collision is possible, and level 3 indicates that a collision is certain to occur.
Whether the likelihood is represented by a probability or by a level can be set by the user according to actual requirements.
The likelihood of each candidate object colliding with the collision surface is matched against the collision determination condition; when the likelihood matches the condition, it is determined to satisfy the collision determination condition, and the corresponding candidate object is taken as a collision detection object on which collision detection is performed. Collision detection is not performed on candidate objects whose likelihood does not satisfy the collision determination condition.
Here, the collision determination condition corresponds to the manner in which the likelihood is represented.
In one example, when the likelihood of the candidate object colliding with the collision surface is represented by a probability, the collision determination condition may be that the probability is greater than a set collision probability threshold.
For example: when the collision probability threshold is 0.6, the probability of candidate object A colliding with the collision surface is 0.4, that of candidate object B is 0.7, and that of candidate object C is 0.2, then only object B undergoes collision detection.
In another example, when the likelihood is represented by a likelihood level, the collision determination condition may be that the level is not lower than a set reference level.
For example: the likelihood levels include level 1, level 2, and level 3, where level 1 indicates that a collision is impossible, level 2 indicates that a collision is possible, and level 3 indicates that a collision is certain to occur; the reference level is level 2; the likelihood level of candidate object A is level 1, that of candidate object B is level 2, and that of candidate object C is level 3; then collision detection is performed on object B and object C.
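The two determination conditions above can be sketched together as follows; this is a minimal Python sketch in which the function names and the thresholds are assumptions chosen to match the examples:

```python
# Minimal sketch: screen candidate objects whose collision likelihood
# satisfies the collision determination condition, for both representations
# described above (probability and likelihood level).

def screen_by_probability(candidates, threshold=0.6):
    """candidates: dict of object name -> collision probability in [0, 1]."""
    return [name for name, p in candidates.items() if p > threshold]

def screen_by_level(candidates, reference_level=2):
    """candidates: dict of object name -> level (1 impossible .. 3 certain)."""
    return [name for name, level in candidates.items()
            if level >= reference_level]

print(screen_by_probability({"A": 0.4, "B": 0.7, "C": 0.2}))  # ['B']
print(screen_by_level({"A": 1, "B": 2, "C": 3}))              # ['B', 'C']
```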
In an embodiment, the determining the possibility of the object in the area to be detected colliding with the collision surface in step S103 includes:
and determining the possibility of collision between the object in the area to be detected and the collision surface according to the spatial relationship between the spatial characteristics of the object in the area to be detected and the spatial characteristics of the collision surface.
Here, the spatial feature of the object may be: spatial information such as position information, moving direction, object size, and the like. The spatial characteristics of the collision surface may be spatial information such as position information, surface size, and the like.
The spatial features of the objects in the region to be detected, i.e., the candidate objects, and the spatial features of the collision surface are obtained, and the likelihood of each candidate object colliding with the collision surface is determined from the two sets of spatial features.
Here, the distance and the relative displacement direction between the candidate object and the collision surface are determined from their spatial features; the likelihood may then be obtained as a probability by multiplying quantized values of the distance and the relative displacement direction, or a likelihood level may be determined from the distance and the relative displacement direction.
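As an illustration only, the distance × relative-direction scheme can be sketched as follows; the quantization functions, the `max_dist` parameter, and the assumption that the movement direction is normalized are all choices made for the example:

```python
# Minimal sketch: derive a probability-like collision likelihood from the
# spatial features of a candidate object and of the collision surface, by
# multiplying a quantized distance term with a quantized direction term.
import math

def likelihood(obj_pos, obj_dir, surface_pos, max_dist=10.0):
    """obj_dir: the object's movement direction, assumed normalized."""
    dx = [s - o for s, o in zip(surface_pos, obj_pos)]
    dist = math.sqrt(sum(d * d for d in dx))
    q_dist = max(0.0, 1.0 - dist / max_dist)           # closer -> higher
    if dist == 0:
        return q_dist                                  # already touching
    to_surface = [d / dist for d in dx]
    cos_angle = sum(a * b for a, b in zip(obj_dir, to_surface))
    q_dir = max(0.0, cos_angle)                        # moving toward -> higher
    return q_dist * q_dir                              # score in [0, 1]

# Object 4 units from the surface, moving straight toward it:
print(likelihood((0, 0, 0), (1, 0, 0), (4, 0, 0)))  # 0.6 * 1.0 = 0.6
```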
The embodiment of the present application does not specifically limit the manner in which the likelihood is determined.
In the embodiment of the application, when a collision object that has collided exists in the display data, the regions to be detected are determined according to the region in which the collision object is located; only the likelihood that objects in the regions to be detected collide with the collision surface is determined, and only objects whose likelihood satisfies the collision determination condition undergo collision detection. This narrows the detection range of collision detection and reduces the CPU resources it consumes.
In one embodiment, as shown in fig. 4, step S103 includes:
and step S1031a, determining the area space relationship between each area corresponding to the display data and the collision area.
Each region corresponding to the display data, i.e., each region into which the display interface is divided, is traversed, and its spatial relationship with the collision region is determined. The spatial relationship is either adjacent or non-adjacent.
Step S1032a, if the spatial relationship of the regions is adjacent, determining that the region corresponding to the spatial relationship of the regions has an association relationship with the collision region.
Step S1033a, determining that, of the regions corresponding to the display data, a region other than the region having an association relationship with the collision region is the region to be detected.
When the region corresponding to the display data is divided into regions 501 to 516 as shown in fig. 5, and the collision region is region 511, the regions adjacent to region 511 are: region 507, region 510, region 512, and region 515. These are the regions associated with the collision region, and the regions among regions 501 to 516 other than region 507, region 510, region 511, region 512, and region 515 are the regions to be detected.
In practical applications, in the regions corresponding to the display data shown in fig. 5, region 506, region 507, region 508, region 510, region 512, region 514, region 515, and region 516 (i.e., all eight neighbours of region 511) may instead be taken as the regions associated with the collision region, in which case region 501, region 502, region 503, region 504, region 505, region 509, and region 513 are the regions to be detected; a sketch covering both neighbourhoods follows.
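This screening can be sketched as follows for a uniform 4×4 grid; the `diagonal` flag switches between the four-neighbourhood of fig. 5 and the eight-neighbourhood variant, and the numbering (1 to 16, row-major, mirroring regions 501 to 516) is an assumption made for the example:

```python
# Minimal sketch: the regions to be detected are all regions that are not the
# collision region and not adjacent to it (4- or 8-neighbourhood).

def regions_to_detect(collision_region, rows=4, cols=4, diagonal=False):
    r, c = divmod(collision_region - 1, cols)
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if diagonal:  # also treat the diagonal neighbours as associated
        steps += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    associated = {collision_region}
    for dr, dc in steps:
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            associated.add(nr * cols + nc + 1)
    return [i for i in range(1, rows * cols + 1) if i not in associated]

# Collision in region 11 (fig. 5's region 511): 7, 10, 12, 15 are excluded.
print(regions_to_detect(11))  # [1, 2, 3, 4, 5, 6, 8, 9, 13, 14, 16]
```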
Here, when the region corresponding to the display data is divided as shown in fig. 8A, it may include regions 801 to 819. When the collision region is region 811, the regions adjacent to region 811 include: region 807, region 810, region 812, and region 813. The regions among regions 801 to 819 other than region 807, region 810, region 811, region 812, and region 813 are the regions to be detected.
In one embodiment, as shown in fig. 6, step S103 includes:
and step S1031b, determining the position relation between the object to be evaluated in each area corresponding to the display data and the reference object in the collision area.
Each region corresponding to the display data, i.e., each region into which the display interface is divided, is traversed, and the positional relationship between the object to be evaluated in that region and a reference object in the collision region is determined, where the reference object may be any object in the collision region. The positional relationship reflects whether the position of the object to be evaluated is related to, and affected by, the position of the reference object.
Step S1032b, if the position relationship is that the position of the object to be evaluated changes with the position of the reference object, determining that the region to which the object to be evaluated belongs has an association relationship with the collision region.
If the position of the object to be evaluated in a region is affected by the position of the reference object in the collision region, that region is regarded as associated with the collision region; otherwise, it is regarded as not associated with the collision region.
Here, whether the position of the object to be evaluated in a region changes with changes in the position of the reference object in the collision region can be judged by statistically tracking the positions of the objects in the respective regions; a sketch follows.
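One possible statistic, given here purely as a hedged sketch, is to correlate the per-frame displacements of the two objects over recorded position samples; the Pearson-style correlation and the threshold are assumptions, since the embodiment does not prescribe a particular statistic:

```python
# Minimal sketch: judge whether an evaluated object's position changes with
# the reference object's position by correlating their per-frame displacement
# magnitudes over recorded tracks.
import math

def displacements(track):
    return [math.dist(track[i], track[i + 1]) for i in range(len(track) - 1)]

def positions_correlated(ref_track, eval_track, threshold=0.8):
    a, b = displacements(ref_track), displacements(eval_track)
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    norm_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    norm_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    if norm_a == 0 or norm_b == 0:
        return False  # one of the objects never moved: no observable coupling
    return cov / (norm_a * norm_b) > threshold

# The evaluated object mirrors the reference object's motion -> associated.
ref = [(0, 0), (1, 0), (3, 0), (3, 0), (6, 0)]
ev = [(5, 5), (5, 6), (5, 8), (5, 8), (5, 11)]
print(positions_correlated(ref, ev))  # True
```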
Step S1033b, determining that, of the regions corresponding to the display data, a region other than the region having an association relationship with the collision region is the region to be detected.
The associated regions are described taking the regions corresponding to the display data shown in fig. 7 as an example. In fig. 7, the regions comprise regions 701 to 712, where the positions of objects in regions 701, 702, and 703 influence each other, the positions of objects in regions 704, 705, and 706 influence each other, and the positions of objects in regions 707, 708, and 709 influence each other. When the collision region is region 707, the regions associated with the collision region include region 708 and region 709, and the regions among regions 701 to 712 other than regions 707, 708, and 709 are the regions to be detected.
In the embodiment of the invention, the regions to be detected are determined according to whether other regions are spatially adjacent to the region to which the collision object belongs, or whether the positions of their objects influence each other, so that regions with a low probability of colliding with the collision surface are accurately identified and the hit rate of collision detection is improved.
In an embodiment, before determining the collision surface on which the collision object collides and the collision zone to which the collision object belongs, the method further comprises:
dividing the display interface into different areas based on the quadtree; and mapping the object included in the display data to the display interface, and determining the area to which the mapping position of the object in the display data in the display interface belongs as the area to which the object in the display data belongs.
Before the information flow corresponding to the objects is output to the display interface, the display interface can be divided into different regions by the quadtree; when the information flow is output, each object is displayed on the display interface, and the region of each object is determined according to its position in the display interface.
A quadtree is a tree-like data structure in which each parent node has four child nodes. When the display interface is divided by the quadtree, it is first divided into the four regions shown in fig. 8B: region 8a, region 8b, region 8c, and region 8d, with the four child nodes of the quadtree representing the four regions. If a region contains many objects, it is further divided into four sub-regions, and so on; the partition of the display interface is obtained through multiple recursions.
The partition result of the quadtree can be obtained, for example, by three recursive splits of the display area, yielding regions 801 to 819 as shown in fig. 8A. The number of recursions may be determined according to the accuracy required for collision detection, or according to the power consumption of the object processing apparatus, or as a balance between accuracy and power consumption depending on the requirements of collision detection.
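A minimal sketch of such a recursive division follows; the capacity threshold, the maximum depth, and the child ordering are assumptions chosen for the example:

```python
# Minimal sketch: quadtree division of the display interface. A node splits
# into four children when it holds more objects than `capacity`, up to
# `max_depth` recursions (the accuracy/power trade-off noted above).

class QuadNode:
    def __init__(self, x, y, w, h, depth=0, capacity=4, max_depth=3):
        self.bounds = (x, y, w, h)
        self.depth, self.capacity, self.max_depth = depth, capacity, max_depth
        self.objects, self.children = [], []

    def insert(self, pos):
        x, y, w, h = self.bounds
        if self.children:                        # already split: route downward
            cx, cy = x + w / 2, y + h / 2
            idx = (pos[0] >= cx) + 2 * (pos[1] >= cy)
            self.children[idx].insert(pos)
            return
        self.objects.append(pos)
        if len(self.objects) > self.capacity and self.depth < self.max_depth:
            hw, hh = w / 2, h / 2                # split into four sub-regions
            self.children = [
                QuadNode(x + dx * hw, y + dy * hh, hw, hh, self.depth + 1,
                         self.capacity, self.max_depth)
                for dy in (0, 1) for dx in (0, 1)
            ]
            pending, self.objects = self.objects, []
            for p in pending:                    # redistribute into children
                self.insert(p)

root = QuadNode(0, 0, 1600, 1200)
for p in [(100, 100), (120, 90), (140, 130), (110, 105), (130, 95)]:
    root.insert(p)  # the fifth insertion triggers the first four-way split
```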
In an embodiment, only part of the objects may be mapped to the divided regions of the display interface. Objects whose collision priority satisfies the region determination condition are screened according to the collision priorities of the different objects in the display data; correspondingly, only the objects whose collision priority satisfies the region determination condition are mapped into the display interface, and the region to which each such object's mapping position belongs is determined as the region to which that object belongs.
The collision priority, which indicates how likely an individual object is to collide, can be determined for the different objects in the display data based on one or a combination of the following priority determination conditions:
condition 1, whether it is the subject of the model;
condition 2, whether it belongs to a protruding or recessed portion;
condition 3, whether the object's size relative to the environment allows penetration.
Under condition 1, an object that is the subject of the model has a higher collision priority than one that is not. Under condition 2, a protruding portion has a higher collision priority than a recessed portion. Under condition 3, an object whose size and environment do not allow penetration has a higher collision priority than one whose size and environment allow penetration.
Here, when multiple conditions are used to determine the collision priority, weights may be set for the conditions, and the quantized priorities determined by the individual conditions are weighted and combined to obtain the object's final collision priority, as sketched below.
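A minimal sketch of the weighted combination follows; the weights and the 0/1 quantized values are assumptions made for the example:

```python
# Minimal sketch: combine the three priority determination conditions, each
# quantized to 0 or 1, with per-condition weights into a final collision
# priority score.

def collision_priority(is_model_subject, is_protruding, penetration_forbidden,
                       weights=(0.5, 0.3, 0.2)):
    scores = (
        1.0 if is_model_subject else 0.0,       # condition 1
        1.0 if is_protruding else 0.0,          # condition 2
        1.0 if penetration_forbidden else 0.0,  # condition 3
    )
    return sum(w * s for w, s in zip(weights, scores))

# A protruding model subject that must not be penetrated scores highest.
print(collision_priority(True, True, True))    # 1.0
print(collision_priority(False, True, False))  # 0.3
```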
The division of collision priorities can be set according to actual requirements. For example, the collision priorities include priority 1, priority 2, and priority 3, in decreasing order. If the region determination condition is that the collision priority is higher than or equal to priority 2, collision detection is performed only on objects whose collision priority is higher than or equal to priority 2, and the regions to which those objects belong in the display interface are determined.
In that case, in step S101, the detected collision object is an object whose collision priority is higher than or equal to priority 2.
In practical applications, when objects satisfying the region determination condition are screened from the display data according to collision priority, the objects processed by the object processing method shown in fig. 1 are those screened objects.
In the embodiment of the invention, by setting collision priorities, only objects whose collision priority satisfies the region determination condition are retained, so the set of potential collision objects is screened and the range of collision detection is further narrowed.
In an embodiment, as shown in fig. 9A, an object processing method provided in an embodiment of the present invention includes:
step S901, determining a moving object moving to a scene plane object in the display data.
Step S902, determining the probability that the moving object collides with the planar object within a set time period, based on the movement trajectory of the moving object.
Step S903, when the probability is greater than a set probability threshold, monitoring the moving object and the planar object to detect whether they become the collision objects.
Step S904, detecting a collision object that collides in the display data, and determining a collision surface on which the collision object collides and a collision region to which the collision object belongs.
Step S905, determining a region to be detected;
step S906, determining a possibility of collision between the object in the area to be detected and the collision surface, and performing collision detection on the object whose possibility satisfies a collision determination condition.
Here, in the AR scene, the display data includes virtual data and scene data. A planar object in the scene data, i.e., a planar object in the display data corresponding to the actual environment, may also be referred to as a scene plane object; examples are the flat surfaces of tables and chairs.
A moving object in the virtual data that is moving toward a scene plane object is determined; its movement trajectory is determined, along with parameters such as moving speed and moving direction; and the probability that the moving object collides with the planar object within the set time period is determined from these trajectory parameters. As shown in fig. 9B, the moving object 9b moves toward the wall surface 9a in the direction 9c; the probability of collision between the moving object 9b and the wall surface 9a is then determined based on the distance between them and the direction 9c.
In an embodiment, step S902 of determining, based on the movement trajectory of the moving object, the probability that the moving object collides with the planar object within the set time includes: extracting a ray along the moving object based on its movement trajectory; and determining the probability of collision within the set time based on the extracted ray and the moving speed of the moving object.
Here, based on the movement trajectory of the moving object, it may be determined whether the extracted ray intersects the planar object and, in the case of intersection, whether the moving object, given its moving speed, would reach the planar object within the set time period.
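A minimal sketch of the ray test follows; the vector maths is written out without libraries, and the assumption that the movement direction is normalized is made for the example:

```python
# Minimal sketch: cast a ray along the moving object's trajectory and test
# whether it reaches the scene plane within the set time period.

def will_collide(obj_pos, move_dir, speed, plane_point, plane_normal, t_max):
    """True if the ray obj_pos + t * move_dir hits the plane within t_max
    seconds at the given speed (move_dir assumed normalized)."""
    denom = sum(d * n for d, n in zip(move_dir, plane_normal))
    if abs(denom) < 1e-9:
        return False  # moving parallel to the plane: no intersection
    t = sum((p - o) * n
            for p, o, n in zip(plane_point, obj_pos, plane_normal)) / denom
    if t < 0:
        return False  # the plane lies behind the moving object
    return t / speed <= t_max  # distance along the ray / speed = time to hit

# An object 5 m from a wall (normal facing it), moving straight at 2 m/s,
# reaches the wall in 2.5 s, within a 3 s window:
print(will_collide((0, 0, 0), (1, 0, 0), 2.0, (5, 0, 0), (-1, 0, 0), 3.0))  # True
```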
Here, the determined probability of collision is only an estimate and does not definitively indicate whether the moving object and the planar object will actually collide: the movement trajectory may change while the moving object moves toward the planar object.
In an embodiment, before step S901 of determining a moving object moving toward a scene plane object in the display data, the method further comprises: determining a normal vector of the scene plane object; and smoothing the scene plane object according to the normal vector.
Here, smoothing such as vertex reduction and face reduction is performed on the scene plane object so that the number of protruding points on its surface is reduced. Whether the scene plane object has a protruding face or point can be determined from the normal vectors of the planes in the scene plane object and the angles between those normal vectors, and the jaggies formed by protruding faces or points, i.e., stepped lines, are smoothed. The embodiment of the invention does not limit the specific smoothing algorithm; one possible check is sketched below.
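Purely as a hedged illustration of the normal-vector test, adjacent faces whose normals diverge beyond a threshold can be flagged as a jagged step; the angle threshold is an assumption, and the actual decimation step is left open as stated above:

```python
# Minimal sketch: compute face normals and flag adjacent faces whose normals
# diverge beyond a threshold angle as a jagged (stepped) portion to smooth.
import math

def face_normal(a, b, c):
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(x * x for x in n)) or 1.0
    return [x / length for x in n]

def is_jagged(normal_a, normal_b, max_angle_deg=15.0):
    cos_angle = max(-1.0, min(1.0, sum(a * b for a, b in zip(normal_a, normal_b))))
    return math.degrees(math.acos(cos_angle)) > max_angle_deg

n1 = face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))  # flat patch (normal +z)
n2 = face_normal((0, 0, 0), (1, 0, 0), (0, 0, 1))  # protruding patch
print(is_jagged(n1, n2))  # True: a candidate for vertex/face reduction
```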
In the embodiment of the invention, the planar objects in the scene data are smoothed, so that the probability of collision between a moving object and a planar object in the scene data can be judged effectively, the number of times the planar object is rendered (i.e., draw calls) can be reduced, and the CPU overhead of outputting the display data is reduced.
In the following, taking an image processing application that is an AR application as an example, the object processing method provided by the embodiment of the present invention is further described with reference to an AR scene.
The object processing method provided by the embodiment of the present invention can reduce the number of collision detections through the following aspects:
In a first aspect, the number of collision detections is reduced by a quadtree screening algorithm.
This algorithm is a targeted improvement on the quadtree algorithm: collision detection in an AR scene does not need to perform collision calculations between all meshes of a model and the environment. Specifically:
The FOV (field of view) area of the glasses is divided in a quadtree pattern (i.e., recursively quartered, the number of recursions depending on the balance between the accuracy required for collision detection and the power consumption).
When the virtual model is loaded, each model mesh is marked with the divided region to which it belongs.
When a mesh in a certain region is detected to have collided, the collided surface and the collision interface are acquired, the collision likelihood between meshes in other regions and the collision surface is recalculated, screening is performed according to that likelihood, and meshes that cannot collide are rejected without collision detection. The regions adjacent to or highly correlated with the collided region are no longer detected; only the meshes with a possibility of collision are detected once more.
In a second aspect, duplicate or meaningless collision detection is reduced by preprocessing the actual environment.
The environment is scanned through the glasses, all planes in the current actual environment (such as walls, desktops, and floors) are found, a virtual scene is generated, and the plane meshes of the scene undergo face reduction and vertex reduction, reducing the amount of collision calculation.
In a third aspect, the points about to collide are calculated and screened out, and detection of points that will not collide is masked.
Thus, when the model moves, the planes toward which it moves can be pre-calculated, and whether a collision will occur can be estimated by casting rays between those planes and the model.
For the above manners of reducing collision detection, the object processing device may automatically search the AR scene map of the current scene for the model meshes that need collision detection and add them to the calculation list.
In a fourth aspect, the model's mesh files are prioritized to control the collision priority.
Here, before running, the user may also manually set the model meshes that need to be calculated, performing pre-calculation and prioritizing the models.
The object processing method provided by the embodiment of the application has the following technical effects:
invalid collision detection reduced by more than 50%; the CPU overhead is reduced by more than 50%; and controlling the mesh to be displayed by dividing the virtual AR scene map, thereby reducing GPU rendering consumption.
In order to implement the method of the embodiment of the present application, an embodiment of the present application provides an object processing apparatus 1000, which is applied to an object processing device. Each unit included in the object processing apparatus, and each module included in each unit, can be implemented by a processor in the object processing device, or of course by specific logic circuits. In implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
As shown in fig. 10, the apparatus 1000 includes: a first determination unit 1001, a second determination unit 1002, and a third determination unit 1003, wherein,
a first determination unit 1001 configured to detect a collision object that collides in display data, and determine a collision surface on which the collision object collides and a collision region to which the collision object belongs;
a second determining unit 1002, configured to determine a region to be detected; the region to be detected is a region which has no association relation with the collision region;
a third determining unit 1003, configured to determine a possibility that the object in the area to be detected collides with the collision surface, and perform collision detection on the object whose possibility satisfies a collision determination condition.
In an embodiment, the second determining unit 1002 is further configured to:
determining the area space relationship between each area corresponding to the display data and the collision area;
if the region spatial relationship is adjacent, determining that the region corresponding to the region spatial relationship has an association relationship with the collision region;
and determining the region except the region having the association relation with the collision region in the region corresponding to the display data as the region to be detected.
In an embodiment, the second determining unit 1002 is further configured to:
determining the position relation between the object to be evaluated in each area corresponding to the display data and the reference object in the collision area;
if the position relation is that the position of the object to be evaluated changes along with the position change of the reference object, determining that the region to which the object to be evaluated belongs has an association relation with the collision region;
and determining the region except the region having the association relation with the collision region in the region corresponding to the display data as the region to be detected.
In an embodiment, the third determining unit 1003 is further configured to:
and determining the possibility of collision between the object in the area to be detected and the collision surface according to the spatial relationship between the spatial characteristics of the object in the area to be detected and the spatial characteristics of the collision surface.
In an embodiment, the object processing apparatus 1000 further comprises: an area determination unit for:
dividing the display interface into different areas based on the quadtree;
and mapping the object included in the display data to the display interface, and determining the area to which the mapping position of the object in the display data in the display interface belongs as the area to which the object in the display data belongs.
In an embodiment, the object processing apparatus 1000 further comprises: the region determining unit is further configured to:
screening objects with collision priorities meeting the region determination conditions according to the collision priorities of different objects in the display data;
and mapping the object with the collision priority meeting the area determination condition in the display data to the display interface, and determining the area to which the mapping position of the object with the collision priority meeting the area determination condition in the display interface belongs as the area to which the object with the collision priority meeting the area determination condition belongs.
In an embodiment, the object processing apparatus 1000 further comprises: a fourth determination unit configured to:
determining a moving object moving to a scene plane object in the display data, wherein the scene plane object is a plane object in the display data corresponding to an actual environment;
determining the probability of collision of the mobile object with the plane object within a set time period based on the movement track of the mobile object;
and when the probability is greater than a set probability threshold value, monitoring the moving object and the plane object, and monitoring whether the moving object and the plane object are the collision objects.
In an embodiment, the fourth determining unit is further configured to:
based on the moving track of the moving object, extracting rays on the moving object;
and determining the probability of collision between the moving object and the plane object within a set time based on the extracted ray and the moving speed of the moving object.
In an embodiment, the fourth determining unit is further configured to:
determining a normal vector of the scene plane object;
and carrying out smoothing treatment on the scene plane object according to the normal vector.
It is noted that the description of the apparatus embodiment, similar to the description of the method embodiment above, has similar advantageous effects as the method embodiment. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
An embodiment of the present application provides an object processing device; an electronic device may serve as the object processing device. Fig. 11 is a schematic structural diagram of the object processing device in the embodiment of the present application. As shown in fig. 11, the object processing device 1100 includes: a processor 1101, at least one communication bus 1102, a user interface 1103, at least one external communication interface 1104, and a memory 1105. The communication bus 1102 is configured to enable connection and communication between these components. The user interface 1103 may include a touch interface, key switches, and the like for interacting with a user, and the external communication interface 1104 may include standard wired and wireless interfaces. The object processing device 1100 may further include an image collector such as a camera.
Wherein the processor 1101 is configured to execute a computer program stored in a memory to implement the steps of:
detecting a collision object which collides in display data, and determining a collision surface on which the collision object collides and a collision area to which the collision object belongs;
determining a region to be detected; the region to be detected is a region which has no association relation with the collision region;
and determining the possibility of collision between the object in the area to be detected and the collision surface, and performing collision detection on the object with the possibility meeting the collision determination condition.
Accordingly, an embodiment of the present application further provides a storage medium, i.e., a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the object processing method described above.
The above description of the embodiments of the object processing apparatus and the computer-readable storage medium is similar to the description of the above embodiments of the method, and has similar advantageous effects to the embodiments of the method. For technical details not disclosed in the embodiments of the object processing device and the computer-readable storage medium of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
In the embodiment of the present application, if the object processing method is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may, in essence or in the part contributing to the prior art, be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, the embodiments of the present application are not limited to any specific combination of hardware and software.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An object processing method, comprising:
detecting a collision object that collides in display data, and determining a collision surface at which the collision object collides and a collision region to which the collision object belongs;
determining a region to be detected, the region to be detected being a region that has no association relationship with the collision region, wherein a region has no association relationship with the collision region when: the regional spatial relationship between the region to be detected and the collision region is a non-adjacent relationship, or the positional relationship between an object to be evaluated in the region to be detected and a reference object in the collision region is such that the position of the object to be evaluated does not change as the position of the reference object changes, the reference object being any object in the collision region;
and determining a probability or grade of the likelihood that an object in the region to be detected collides with the collision surface, and performing collision detection on objects whose probability or grade of likelihood meets a collision determination condition.
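
To make the claim 1 flow concrete, the following is a minimal Python sketch of the filtering pipeline: regions associated with the collision region are excluded up front, and only objects in the remaining regions whose likelihood passes a threshold are handed to full collision detection. The names (Region, is_associated, the likelihood callback), the adjacency-only association test, and the 0.5 threshold are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    rid: int = 0
    objects: list = field(default_factory=list)
    neighbours: set = field(default_factory=set)  # rids of adjacent regions

def is_associated(region, collision_region):
    # Association is reduced to adjacency here; the follow-relationship
    # test of claim 3 is omitted for brevity.
    return region.rid in collision_region.neighbours

def select_candidates(regions, collision_region, likelihood, threshold=0.5):
    """Keep only objects in non-associated regions whose likelihood of
    colliding with the collision surface passes the threshold."""
    candidates = []
    for region in regions:
        if region is collision_region or is_associated(region, collision_region):
            continue  # associated regions are exempt from detection
        candidates.extend(o for o in region.objects if likelihood(o) >= threshold)
    return candidates
```
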
2. The method of claim 1, wherein determining the region to be detected comprises:
determining the regional spatial relationship between each region corresponding to the display data and the collision region;
if the regional spatial relationship is an adjacent relationship, determining that the region corresponding to that regional spatial relationship has an association relationship with the collision region;
and determining, among the regions corresponding to the display data, the regions other than those having an association relationship with the collision region as the region to be detected.
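
A toy illustration of the adjacency test behind claim 2: each region is modelled as an axis-aligned rectangle (x0, y0, x1, y1), and two regions are treated as adjacent when they touch or overlap. The rectangle representation and the touch-or-overlap rule are assumptions for the sketch.

```python
def adjacent(a, b):
    # Axis-aligned rectangles touch or overlap when their intervals
    # intersect on both axes.
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def regions_to_detect(regions, collision_region):
    # Adjacent regions count as associated; all remaining regions become
    # the regions to be detected.
    return [r for r in regions
            if r != collision_region and not adjacent(r, collision_region)]
```
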
3. The method of claim 1, wherein determining the region to be detected comprises:
determining the positional relationship between an object to be evaluated in each region, other than the collision region, corresponding to the display data and a reference object in the collision region;
if the positional relationship is that the position of the object to be evaluated changes as the position of the reference object changes, determining that the region to which the object to be evaluated belongs has an association relationship with the collision region;
and determining, among the regions corresponding to the display data, the regions other than those having an association relationship with the collision region as the region to be detected.
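
The follow-relationship of claim 3 can be pictured with a scene-graph model in which a child node's world position is derived from its parent's transform. The Node hierarchy below is a hypothetical data model; the patent does not prescribe how the positional dependency is represented.

```python
class Node:
    def __init__(self, local_pos, parent=None):
        self.local_pos = local_pos  # (x, y) offset relative to the parent
        self.parent = parent

    def world_pos(self):
        x, y = self.local_pos
        if self.parent is not None:
            px, py = self.parent.world_pos()
            return (px + x, py + y)
        return (x, y)

def follows(evaluated, reference):
    # If the reference is an ancestor of the evaluated object, the
    # evaluated object's world position changes whenever the reference moves.
    node = evaluated.parent
    while node is not None:
        if node is reference:
            return True
        node = node.parent
    return False
```
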
4. The method of claim 1, wherein determining the likelihood that an object in the region to be detected collides with the collision surface comprises:
determining the likelihood of a collision between the object in the region to be detected and the collision surface according to the spatial relationship between the spatial characteristics of the object in the region to be detected and the spatial characteristics of the collision surface.
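
Claim 4 leaves the concrete spatial characteristics open. One plausible choice, sketched below, is the signed distance from an object's bounding sphere to the collision plane, bucketed into coarse likelihood grades; the distance cut-offs are invented for illustration.

```python
def plane_distance(center, normal, plane_point):
    # Signed distance from the sphere center to the plane (normal assumed
    # to be unit length).
    return sum(n * (c - p) for n, c, p in zip(normal, center, plane_point))

def likelihood_grade(center, radius, normal, plane_point):
    # Clearance between the bounding sphere and the collision plane.
    d = abs(plane_distance(center, normal, plane_point)) - radius
    if d <= 0:
        return "high"    # already touching or penetrating the plane
    if d < 1.0:
        return "medium"  # within one unit of the surface
    return "low"
```
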
5. The method according to any one of claims 1 to 4, wherein, prior to determining the collision surface at which the collision object collides and the collision region to which the collision object belongs, the method further comprises:
dividing the display interface into different regions based on a quadtree;
and mapping the objects included in the display data into the display interface, and determining the region to which an object's mapped position in the display interface belongs as the region to which that object belongs.
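
A minimal sketch of the quadtree division in claim 5: the display interface is recursively split into four quadrants down to a fixed depth, and an object's mapped position resolves to the leaf quadrant that contains it. The depth limit and the half-open quadrant bounds are assumptions.

```python
class QuadTree:
    def __init__(self, x, y, w, h, depth=0, max_depth=3):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.children = []
        if depth < max_depth:
            hw, hh = w / 2, h / 2
            self.children = [
                QuadTree(x,      y,      hw, hh, depth + 1, max_depth),
                QuadTree(x + hw, y,      hw, hh, depth + 1, max_depth),
                QuadTree(x,      y + hh, hw, hh, depth + 1, max_depth),
                QuadTree(x + hw, y + hh, hw, hh, depth + 1, max_depth),
            ]

    def region_of(self, px, py):
        # Descend into the child quadrant containing (px, py); a node with
        # no matching child is the leaf region for that position.
        for child in self.children:
            if child.x <= px < child.x + child.w and child.y <= py < child.y + child.h:
                return child.region_of(px, py)
        return self
```
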
6. The method of claim 5, further comprising:
screening out objects whose collision priorities meet a region determination condition according to the collision priorities of different objects in the display data;
correspondingly, mapping the objects in the display data whose collision priorities meet the region determination condition into the display interface, and determining the region to which the mapped position of each such object in the display interface belongs as the region to which that object belongs.
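
A hypothetical reading of the claim 6 screening step: each object carries a collision priority, and only objects at or above a threshold are mapped into regions at all, so objects that never collide stay out of the spatial index. The priority values and the threshold below are invented for illustration.

```python
def screen_by_priority(objects, min_priority=1):
    # Objects below the threshold (e.g. a skybox) are never region-mapped.
    return [o for o in objects if o["priority"] >= min_priority]

scene = [
    {"name": "player", "priority": 3},
    {"name": "crate",  "priority": 1},
    {"name": "skybox", "priority": 0},  # decorative; exempt from detection
]
print([o["name"] for o in screen_by_priority(scene)])  # ['player', 'crate']
```
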
7. The method of claim 1, wherein, prior to determining the collision surface at which the collision object collides and the collision region to which the collision object belongs, the method further comprises:
determining a moving object moving toward a scene plane object in the display data, the scene plane object being a planar object in the display data corresponding to the actual environment;
determining, based on the movement trajectory of the moving object, the probability that the moving object collides with the planar object within a set time period;
and when the probability is greater than a set probability threshold, monitoring the moving object and the planar object to determine whether they become the collision objects.
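
The gating of claim 7 can be sketched as follows: a trajectory-based estimator yields a probability, and only pairs above the threshold are registered for monitoring. Here predict_probability is a stand-in for whatever estimator is used; one ray-based option is sketched under claim 8.

```python
def maybe_monitor(mover, plane, predict_probability, threshold=0.7,
                  monitored=None):
    """Register (mover, plane) for collision monitoring only when the
    trajectory-based probability exceeds the set threshold."""
    if monitored is None:
        monitored = []
    if predict_probability(mover, plane) > threshold:
        monitored.append((mover, plane))  # pair is now watched for a collision
    return monitored
```
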
8. The method of claim 7, wherein determining, based on the movement trajectory of the moving object, the probability that the moving object collides with the planar object within a set time period comprises:
extracting rays from the moving object based on the movement trajectory of the moving object;
and determining, based on the extracted rays and the moving speed of the moving object, the probability that the moving object collides with the planar object within the set time period.
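
One plausible reading of claim 8, sketched below: cast a ray from the moving object along its direction of motion, intersect it with the plane, and convert the distance and the moving speed into a time-to-hit that is compared against the set time window. The single-ray, constant-speed simplification and the 0/1 "probability" are assumptions; a fuller estimator might blend several rays extracted along the trajectory.

```python
def time_to_plane(pos, direction, speed, plane_point, plane_normal):
    # Time until a ray from pos along a unit direction reaches the plane;
    # None if the object moves away from or parallel to the plane.
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-9 or speed <= 0:
        return None
    dist = sum((p - q) * n for p, q, n in zip(plane_point, pos, plane_normal)) / denom
    return dist / speed if dist >= 0 else None

def hit_probability(pos, direction, speed, plane_point, plane_normal, window=1.0):
    # Degenerate probability: 1.0 when the extrapolated ray reaches the
    # plane within the time window, else 0.0.
    t = time_to_plane(pos, direction, speed, plane_point, plane_normal)
    return 1.0 if t is not None and t <= window else 0.0
```
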
9. The method of claim 7, wherein, prior to determining the moving object moving toward the scene plane object in the display data, the method further comprises:
determining a normal vector of the scene plane object;
and smoothing the scene plane object according to the normal vector.
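
The smoothing of claim 9 can be pictured as normal averaging over a mesh: each vertex normal is averaged with its neighbours' normals and renormalized, so a noisily reconstructed scene plane ends up with near-uniform normals. The adjacency-list mesh layout and the iteration count are assumptions.

```python
def smooth_normals(normals, neighbours, iterations=1):
    """normals: list of (x, y, z) tuples; neighbours: list of index lists,
    one per vertex, giving the adjacent vertices in the mesh."""
    def normalize(v):
        m = sum(c * c for c in v) ** 0.5
        return tuple(c / m for c in v) if m else v

    for _ in range(iterations):
        smoothed = []
        for i, n in enumerate(normals):
            acc = list(n)
            for j in neighbours[i]:  # accumulate neighbouring normals
                acc = [a + b for a, b in zip(acc, normals[j])]
            smoothed.append(normalize(acc))
        normals = smoothed
    return normals
```
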
10. An object processing apparatus, comprising: a processor and a memory configured to store a computer program capable of running on the processor, wherein the processor is configured to perform the object processing method according to any one of claims 1 to 9 when executing the computer program.
CN201910420908.0A 2019-05-20 2019-05-20 Object processing method and device Active CN110262729B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910420908.0A CN110262729B (en) 2019-05-20 2019-05-20 Object processing method and device

Publications (2)

Publication Number Publication Date
CN110262729A (en) 2019-09-20
CN110262729B (en) 2021-11-16

Family

ID=67914871

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910420908.0A Active CN110262729B (en) 2019-05-20 2019-05-20 Object processing method and device

Country Status (1)

Country Link
CN (1) CN110262729B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112827175B (en) * 2021-02-26 2022-07-29 Tencent Technology (Shenzhen) Co., Ltd. Collision frame determination method and device and computer readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509317A (en) * 2011-09-27 2012-06-20 Beijing Pixel Software Technology Co., Ltd. Implementation method of real-time collision detection system
CN104941180A (en) * 2014-03-31 2015-09-30 Beijing Changyou Tianxia Network Technology Co., Ltd. Collision detecting method and device for 2D games
CN105469406A (en) * 2015-11-30 2016-04-06 Northeastern University Bounding box and space partitioning-based virtual object collision detection method
CN107145227A (en) * 2017-04-20 2017-09-08 Tencent Technology (Shenzhen) Co., Ltd. Interaction method and device for virtual reality scene
CN108714303A (en) * 2018-05-16 2018-10-30 Shenzhen Tencent Network Information Technology Co., Ltd. Collision detection method, device and computer readable storage medium
CN108885492A (en) * 2016-03-16 2018-11-23 Microsoft Technology Licensing, LLC Virtual objects path clustering

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100251185A1 (en) * 2009-03-31 2010-09-30 Codemasters Software Company Ltd. Virtual object appearance control

Also Published As

Publication number Publication date
CN110262729A (en) 2019-09-20

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant